
Azure Compute

Virtual Machines

  • Redeploying a VM will migrate it to another Azure host (physical hardware).
    • A new VM instance is created.
    • Data on temporary disk will be lost in the process.
  • Moving a VM to a different region or availability zone is a separate operation (e.g. using Azure Resource Mover or Azure Site Recovery); a redeploy only moves the VM to a new host in the same region.
  • A reapply operation re-runs the VM’s provisioning, which can help resolve provisioning failures or other issues that may have occurred during the VM’s creation or updates.
  • The VM continues to run—it's not rebooted.
  • In a VM ARM template the image reference also includes a publisher (e.g. Canonical, MicrosoftWindowsServer); the offer attribute is the general product line (e.g. UbuntuServer, WindowsServer) and the SKU attribute is the specific OS version (e.g. 18.04-LTS, 2019-Datacenter). The CLI sketch below uses the same values in URN form.
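  • A minimal Azure CLI sketch of these operations (rg-demo and the VM names are placeholders, and exact flags can vary by CLI version):

      # Move the VM to a new Azure host; data on the temporary disk is lost
      az vm redeploy --resource-group rg-demo --name vm-demo

      # Re-run provisioning without rebooting the running VM
      az vm reapply --resource-group rg-demo --name vm-demo

      # The --image URN is publisher:offer:sku:version
      az vm create --resource-group rg-demo --name vm-demo2 \
        --image Canonical:UbuntuServer:18.04-LTS:latest \
        --admin-username azureuser --generate-ssh-keys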

Just-In-Time (JIT) VM access

  • When JIT access is enabled, NSG (and, where relevant, Azure Firewall) rules are added to block inbound access to specific management ports (e.g. 3389, 22).
  • When access is needed a user makes a request to access a VM. If the request is approved, Defender for Cloud configures the NSGs and Azure Firewall to allow inbound traffic to the selected ports from the relevant IP address (or range), for the amount of time that was specified.
  • JIT does not support VMs protected by Azure Firewalls controlled by Azure Firewall Manager. The Azure Firewall must be configured with Rules (Classic) and cannot use Firewall policies.
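  • Defender for Cloud manages these rules itself, but the effect of an approved request is similar to the following NSG rule (all names and the source IP are illustrative placeholders):

      # Allow RDP from the requester's IP for the approved time window;
      # Defender for Cloud reverts the rule when the window expires
      az network nsg rule create --resource-group rg-demo --nsg-name nsg-demo \
        --name AllowJitRdp --priority 100 --direction Inbound --access Allow \
        --protocol Tcp --destination-port-ranges 3389 \
        --source-address-prefixes 203.0.113.10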

Bastion

  • Downgrading a SKU is not supported. To downgrade, you must delete and recreate Azure Bastion.
  • In addition to its web-based client for RDP/SSH access, Bastion supports connecting with native/local RDP/SSH clients.
    • This requires the Standard SKU.
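    • A rough sketch of connecting through Bastion with a native client (requires the bastion CLI extension; names and resource IDs are placeholders):

        # RDP to a Windows VM through Bastion using the local client
        az network bastion rdp --name bastion-demo --resource-group rg-demo \
          --target-resource-id <windows-vm-resource-id>

        # SSH to a Linux VM through Bastion using the local client
        az network bastion ssh --name bastion-demo --resource-group rg-demo \
          --target-resource-id <linux-vm-resource-id> \
          --auth-type ssh-key --username azureuser --ssh-key ~/.ssh/id_rsa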

Availability Sets

  • A VM cannot be added to an availability set after the VM has been created. The availability set must be assigned when the VM is created; otherwise the VM has to be deleted and recreated.
  • Using Availability Sets puts VMs in separate fault domains, but not in different Availability Zones. Availability Sets provide better latency than Availability Zones, but the latter provide better reliability.
  • A Fault Domain is infrastructure that shares common power and network switches. It is a single point-of-failure.
  • Each availability set can be configured with up to 3 fault domains and 20 update domains. These can't be changed once the availability set has been created.
    • Fault domains define the group of virtual machines that share a common power source and network switch.
    • Update domains indicate groups of virtual machines and underlying physical hardware that can be rebooted at the same time.
  • If the VM you wish to resize is part of an availability set and the new size is not available on the hardware cluster currently hosting it, you must stop (deallocate) all VMs in the availability set before changing the size of any VM in the set, as in the sketch below.
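  • A minimal Azure CLI sketch of both points (resource names and the target size are placeholders):

      # Fault/update domain counts are fixed when the availability set is created
      az vm availability-set create --resource-group rg-demo --name avset-demo \
        --platform-fault-domain-count 3 --platform-update-domain-count 20

      # Resize flow when the new size needs a different cluster:
      # deallocate every VM in the set, resize, then start them again
      az vm deallocate --resource-group rg-demo --name vm-demo
      az vm resize --resource-group rg-demo --name vm-demo --size Standard_D2s_v3
      az vm start --resource-group rg-demo --name vm-demo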

Scale Sets

  • The orchestration mode is defined when you create the scale set and cannot be changed or updated later.
  • Uniform - Optimized for large-scale stateless workloads with identical instances.
  • Flexible - Achieve high availability at scale with identical or multiple virtual machine types.
    • Flexible orchestration offers high availability guarantees (up to 1000 VMs) by spreading VMs across fault domains in a region or within an Availability Zone. This enables you to scale out your application while maintaining fault domain isolation that is essential to run quorum-based or stateful workloads.
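    • A hedged sketch of creating a Flexible scale set with the CLI (names, image, and counts are placeholders):

        # Orchestration mode is chosen at creation time and cannot be changed later
        az vmss create --resource-group rg-demo --name vmss-demo \
          --orchestration-mode Flexible --platform-fault-domain-count 1 \
          --image Canonical:UbuntuServer:18.04-LTS:latest --instance-count 3 \
          --admin-username azureuser --generate-ssh-keys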

App Service

  • App Service plans that have no apps associated with them still incur charges because they continue to reserve the configured VM instances.
    • Consequently, you should try to minimize the number of App Service Plans that are used.
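    • A quick CLI sketch of consolidating apps onto one plan and removing an empty plan (all names are placeholders):

        # Reuse a single plan for several apps instead of one plan per app
        az appservice plan create --resource-group rg-demo --name plan-demo --sku S1
        az webapp create --resource-group rg-demo --plan plan-demo --name app-demo-1
        az webapp create --resource-group rg-demo --plan plan-demo --name app-demo-2

        # Delete plans that no longer host any apps to stop the charges
        az appservice plan delete --resource-group rg-demo --name old-empty-plan --yes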

Azure Containers

  • In Azure there is a continuum of container related services:
    • Container Instances/Container Groups
    • Azure Container Apps
    • Azure Kubernetes Service
  • These services trade ease of use against capability and control, so you can use the one that best matches the needs of your use case.
  • A container group is a collection of containers that get scheduled on the same host machine. The containers in a container group share a lifecycle, resources, local network, and storage volumes. It's similar in concept to a pod in Kubernetes.
  • Multi-container groups cannot be created from the portal; they are deployed with the CLI (using a YAML file) or with an ARM template.
  • Multi-container groups currently support only Linux containers. For Windows containers, Azure Container Instances only supports deployment of a single container instance.
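  • A minimal CLI sketch (resource names, DNS label, and the YAML file name are placeholders; the sample image is Microsoft's aci-helloworld):

      # Single-container instance exposed on port 80
      az container create --resource-group rg-demo --name aci-demo \
        --image mcr.microsoft.com/azuredocs/aci-helloworld --ports 80 \
        --dns-name-label aci-demo-label

      # Multi-container (Linux-only) groups are described in a YAML file instead
      az container create --resource-group rg-demo --file container-group.yaml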

Azure Container Registry

  • There are three tiers for Azure Container Registry:
    • Basic - sufficient for low-usage scenarios, such as development and testing
    • Standard - good for most production scenarios
    • Premium - provides higher throughput and geo-redundancy.
      • Private Endpoint network connectivity is only supported on this tier.
      • Customer Managed Keys are only supported on this SKU
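  • A short CLI sketch (the registry name is a placeholder and must be globally unique):

      # Premium is required for Private Endpoints and customer-managed keys
      az acr create --resource-group rg-demo --name acrdemo12345 --sku Premium

      # Copy an image into the registry without a local Docker daemon
      az acr import --name acrdemo12345 \
        --source mcr.microsoft.com/hello-world:latest --image hello-world:latest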

Azure Kubernetes Service (AKS)

I have discovered that the current AZ-104 exam does not include questions about AKS. But, for now, I'm leaving this info here.
  • Microsoft Entra authentication is provided to AKS clusters with OpenID Connect. OpenID Connect is an identity layer built on top of the OAuth 2.0 protocol.

  • AKS uses two main networking models: overlay network and flat network.
    • The main differences between the models are how pod IP addresses are assigned and how traffic leaves the cluster.
  • kubenet - nodes have an IP on a subnet and pods have an IP on a logically separate IP space. NAT is used to allow pods to communicate with resources on the network. This is the default and simplest network model.
  • Azure CNI (traditional) - assigns a VNet IP address to every pod. It assigns this IP address from a pre-reserved set of IPs on every node or a separate subnet reserved for pods.
    • This approach requires IP address planning and could lead to address exhaustion, which introduces difficulties scaling your clusters as your application demands grow
  • Azure CNI Overlay - With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network (VNet) subnet.
    • Pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster use an Overlay network. Network Address Translation (NAT) uses the node's IP address to reach resources outside the cluster. This is the recommended CNI plugin for most scenarios.
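    • A hedged sketch of creating a cluster with Azure CNI Overlay and Microsoft Entra integration (names and the pod CIDR are placeholders; flags can vary by CLI version):

        # Overlay networking: pods get IPs from a private CIDR separate from the VNet
        az aks create --resource-group rg-demo --name aks-demo \
          --network-plugin azure --network-plugin-mode overlay \
          --pod-cidr 192.168.0.0/16 \
          --enable-aad --enable-azure-rbac \
          --generate-ssh-keys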