
Microsoft
464+ free practice questions with answers verified by AI
Microsoft Azure Fundamentals
Powered by AI
Every Microsoft AZ-900 answer is cross-checked by 3 leading AI models to ensure maximum accuracy. Get detailed explanations for each option and in-depth analysis of every question.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You plan to deploy several Azure virtual machines. You need to ensure that the services running on the virtual machines are available if a single data center fails. Solution: You deploy the virtual machines to two or more scale sets. Does this meet the goal?
Yes is incorrect because it assumes that using multiple scale sets automatically provides datacenter-level redundancy. VM Scale Sets primarily address scaling and uniform management, and their placement across datacenters is not guaranteed unless Availability Zones are explicitly used. Two scale sets can still be created in the same zone/datacenter, meaning a single datacenter outage could take down all instances. To meet the goal, you must design for zonal redundancy (or equivalent) rather than relying on multiple scale sets alone.
No is correct because deploying VMs to two or more scale sets does not, by itself, ensure the VMs are distributed across separate datacenters. Without explicitly configuring Availability Zones (or a zone-redundant architecture), multiple scale sets can be deployed into the same zone or underlying datacenter. The requirement is resiliency to a single datacenter failure, which typically maps to multi-zone deployment with a zone-redundant load-balancing front end. Therefore, the proposed solution is insufficient to guarantee the stated availability goal.
Core concept: This question tests high availability design for Azure virtual machines across datacenters within a region, specifically whether the proposed deployment approach provides resiliency to a single datacenter failure.

Why the answer is correct: Deploying VMs to two or more Virtual Machine Scale Sets (VMSS) does not inherently guarantee placement across multiple datacenters. A VMSS, by default, can place instances within a single availability zone or even within a single datacenter depending on configuration, and multiple scale sets can still end up in the same zone/datacenter. To ensure availability when a single datacenter fails, you must use Availability Zones (zonal or zone-redundant architecture) or, in non-zonal regions, Availability Sets (which protect against rack-level failures, not full datacenter failures). Therefore, the solution as stated does not meet the goal.

Key features / configurations:
- Availability Zones: Deploy VMs/VMSS across multiple zones (e.g., zones 1, 2, 3) to survive a datacenter (zone) outage.
- VM Scale Sets zonal deployment: Pin a scale set to a specific zone; use multiple scale sets across different zones, or use zone-redundant load balancing.
- Load balancing: Use Standard Load Balancer/Application Gateway with a zone-redundant frontend to distribute traffic across zonal backends.
- Availability Sets: Provide fault/update domain separation within a datacenter; not sufficient for a datacenter outage.

Common misconceptions:
- Assuming “multiple scale sets” automatically means “multiple datacenters.” Placement is not guaranteed unless you explicitly use zones.
- Confusing Availability Sets (intra-datacenter resiliency) with Availability Zones (inter-datacenter resiliency).
- Believing VMSS alone provides datacenter-level HA without zonal configuration and a zone-redundant traffic entry point.

Exam tips:
- If the requirement says “single datacenter fails,” think Availability Zones.
- VMSS improves scalability and instance-level resiliency, but you must configure zones to get datacenter-level resiliency.
- Availability Sets protect against host/rack maintenance and failures, not full datacenter outages.
- Ensure the ingress component (Load Balancer/App Gateway) is also zone-redundant when designing zonal HA.
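The core point, that counting scale sets tells you nothing about surviving a zone outage, can be sketched as a toy availability check. The zone assignments below are hypothetical examples, not anything Azure guarantees or reports in this form:

```python
# Toy model: a deployment survives a single datacenter (zone) outage only if
# its instances span more than one zone. Zone labels here are made up for
# illustration; real placement depends on explicit zonal configuration.

def survives_zone_outage(instance_zones):
    """True if at least one instance remains when any single zone fails."""
    return len(set(instance_zones)) >= 2

# Two scale sets that both happened to land in zone 1: one outage kills all.
same_zone = ["1", "1", "1", "1"]        # scale set A + scale set B, same zone
# Scale sets explicitly pinned across Availability Zones 1, 2, and 3:
zonal = ["1", "1", "2", "2", "3", "3"]

print(survives_zone_outage(same_zone))  # False
print(survives_zone_outage(zonal))      # True
```

This is why the proposed solution fails: "two or more scale sets" only changes how many management containers you have, not which zones the instances occupy.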
Want to practice every question anywhere?
Download Cloud Pass for free: practice exams, progress tracking, and more.
HOTSPOT - For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:
By creating additional resource groups in an Azure subscription, additional costs are incurred.
Correct answer: No (B). A resource group is a logical container used to organize Azure resources for lifecycle management, RBAC, policy assignment, and cost management reporting. Creating additional resource groups does not, by itself, deploy any billable resources or consume metered capacity. Therefore, Azure does not charge simply for having more resource groups. Costs occur when you create or use resources inside the resource groups (for example, virtual machines, managed disks, storage accounts, VPN gateways, databases) or when you generate billable activity (such as outbound data transfer, transactions, or log ingestion). Resource groups are part of Azure Resource Manager governance/management features and are not priced as a standalone item. Why “Yes” is wrong: it confuses organizational constructs with billable services. You might see costs associated with resources placed into those groups, but the act of creating the groups themselves does not incur additional charges.
By copying several gigabits of data to Azure from an on-premises network over a VPN, additional data transfer costs are incurred.
Correct answer: No (B). Copying data from an on-premises network to Azure is inbound data transfer to Azure (ingress). In Azure’s general bandwidth pricing model, inbound data transfer is typically free of charge. Therefore, copying several gigabits of data into Azure over a VPN does not usually incur additional data transfer (bandwidth) charges from Azure for the transfer itself. Important nuance for exam readiness: while ingress bandwidth is generally free, the overall solution might still have costs if you are using a billable connectivity component (for example, an Azure VPN Gateway is billed per hour and may have other charges). However, the question specifically asks about “additional data transfer costs” incurred by copying data into Azure, which points to bandwidth directionality rather than gateway runtime. Why “Yes” is wrong: it assumes all network transfer is billed. For AZ-900, remember the common rule: inbound to Azure is generally free; outbound from Azure is generally charged.
By copying several GB of data from Azure to an on-premises network over a VPN, additional data transfer costs are incurred.
Correct answer: Yes (A). Copying data from Azure to an on-premises network is outbound data transfer from Azure (egress). Azure commonly charges for outbound bandwidth, and those charges increase with the amount of data transferred. Therefore, copying several GB of data from Azure back to on-premises over a VPN generally incurs additional data transfer costs. This is a frequent AZ-900 exam concept: data egress is a typical cost driver, and architects should design with egress in mind (for example, keep workloads and dependent services in the same region, minimize cross-boundary transfers, and use caching/CDN patterns where appropriate). This aligns with the Cost Optimization pillar of the Azure Well-Architected Framework. Why “No” is wrong: it incorrectly applies the “inbound is free” rule to outbound traffic. While some scenarios may have special pricing (or free allowances in specific services), the default principle tested here is that outbound data transfer from Azure is billed.
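The directionality rule behind the last two statements can be expressed as a minimal sketch. The per-GB rate is a made-up placeholder to illustrate the billing shape, not a real Azure price:

```python
# AZ-900 rule of thumb: inbound data transfer to Azure (ingress) is generally
# free; outbound from Azure (egress) is metered per GB. The rate below is a
# hypothetical illustration only, not actual Azure pricing.

EGRESS_RATE_PER_GB = 0.05  # placeholder rate, for illustration

def transfer_cost(direction, gb):
    """Estimate bandwidth cost for a transfer of `gb` gigabytes."""
    if direction == "inbound":    # on-premises -> Azure
        return 0.0
    if direction == "outbound":   # Azure -> on-premises
        return gb * EGRESS_RATE_PER_GB
    raise ValueError("direction must be 'inbound' or 'outbound'")

print(transfer_cost("inbound", 500))   # 0.0  (copying data into Azure)
print(transfer_cost("outbound", 500))  # 25.0 (copying data out of Azure)
```

Note that this models only bandwidth charges; as the explanation above says, the connectivity component itself (for example, a VPN gateway billed per hour) can still add cost in either direction.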
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You plan to deploy several Azure virtual machines. You need to ensure that the services running on the virtual machines are available if a single data center fails. Solution: You deploy the virtual machines to two or more regions. Does this meet the goal?
Yes is incorrect because regions are larger geographic constructs that contain one or more datacenters, so deploying across regions addresses a broader failure domain than the one described. Although a multi-region design can certainly improve overall availability, it is not the Azure feature typically used to satisfy a requirement about a single datacenter failing. For AZ-900 questions, the expected match for datacenter-level fault tolerance is Availability Zones rather than multiple regions.
No. Deploying virtual machines to two or more regions is aimed at protecting against a regional outage, not specifically the failure of a single datacenter. The Azure feature designed for datacenter-level resiliency is Availability Zones, which place resources in separate physical locations within the same region. Because the requirement is narrowly focused on surviving a single datacenter failure, multi-region deployment does not best meet the stated goal in this exam context.
Core Concept: This question tests understanding of Azure regions, availability zones, and datacenter-level fault tolerance. In AZ-900, a single Azure region contains one or more datacenters, and availability zones are specifically designed to protect workloads from the failure of a single datacenter within a region.

Why the Answer is Correct: Deploying virtual machines to two or more regions does not directly target the requirement of surviving a single datacenter failure. A single datacenter failure is more appropriately addressed by using Availability Zones or, in some cases, Availability Sets within the same region. Regions are geographically separate and are typically used for broader disaster recovery and business continuity scenarios, not specifically for single-datacenter resilience.

Key Features / What to Know:
- Availability Zones provide physically separate locations within an Azure region, each with independent power, cooling, and networking.
- Availability Sets distribute VMs across fault domains and update domains within a datacenter environment, helping reduce localized hardware failure impact.
- Regions are separate geographic areas and are mainly used for regional disaster recovery, compliance, and latency considerations.
- Multi-region deployments can improve resilience, but they are not the standard answer when the requirement is specifically a single datacenter failure.

Common Misconceptions: A common mistake is assuming that a more resilient or broader architecture automatically best matches the requirement. While multiple regions can provide higher-level disaster recovery, the question asks specifically about a single datacenter failure, which points to Availability Zones. Another misconception is treating regions and datacenters as interchangeable; they are not the same scope of failure.

Exam Tips:
- If the requirement mentions a single datacenter failure, think Availability Zones first.
- If the requirement mentions an entire region outage or disaster recovery, think paired regions or multi-region deployment.
- In AZ-900, always match the Azure service to the exact failure scope described in the question.
What is required to use Azure Cost Management?
A Dev/Test subscription is a specific offer intended to reduce costs for development and testing workloads (often via Visual Studio subscriptions). It is not a prerequisite to use Azure Cost Management. Cost Management can analyze costs for many subscription types, including Dev/Test, but you do not need Dev/Test to access the service.
Software Assurance is a licensing benefit associated with Microsoft volume licensing (e.g., Windows Server/SQL Server benefits, Azure Hybrid Benefit eligibility). It does not grant access to Azure Cost Management. Cost Management is tied to Azure billing and scopes (billing account/subscription), not to Software Assurance entitlements.
An Enterprise Agreement (EA) is a large-organization purchasing agreement and is not required to use Azure Cost Management. While EA customers commonly use Cost Management at scale (often with advanced chargeback/showback needs), the tooling is available for non-EA customers as well. EA is a purchasing model, not a functional prerequisite.
A pay-as-you-go subscription is the most straightforward requirement among the choices because it represents having an Azure subscription with billable usage and a billing relationship. Azure Cost Management relies on billing and usage data; a pay-as-you-go subscription provides that baseline. Many other subscription/billing types also work, but pay-as-you-go is the best answer here.
Core Concept: Azure Cost Management (often referred to as Cost Management + Billing) is an Azure governance and financial management capability used to monitor, allocate, and optimize cloud spend. It supports cost analysis, budgets, alerts, cost allocation (tags/management groups/subscriptions), and recommendations to improve cost efficiency, aligning strongly with the Azure Well-Architected Framework Cost Optimization pillar.

Why the Answer is Correct: To use Azure Cost Management in the context of AZ-900 fundamentals, you need an Azure subscription that generates billable usage and is supported by Cost Management. A pay-as-you-go subscription is the baseline, commonly referenced subscription type that enables billing and therefore cost tracking and analysis. Cost Management is not restricted to special licensing programs like Software Assurance or Enterprise Agreement; it is available broadly for Azure billing accounts and subscriptions. In exam terms, “pay-as-you-go subscription” is the most universally correct requirement among the options.

Key Features / What You Can Do: With Cost Management you can view cost by scope (management group, subscription, resource group), filter/group by tags, create budgets and alerts, export cost data, and use recommendations to reduce spend (e.g., right-sizing, reserved instances/savings plans where applicable). Access is controlled via Azure RBAC roles such as Cost Management Reader/Contributor or Billing Reader, depending on scope.

Common Misconceptions: Learners often assume you need an Enterprise Agreement (EA) because Cost Management historically had strong EA integration. Others confuse Software Assurance (a licensing benefit) with cost tooling access. Dev/Test subscriptions are a pricing offer for development workloads, not a prerequisite for cost analysis.

Exam Tips: For AZ-900, remember: Cost Management is a governance tool available with Azure subscriptions and billing. If asked what is “required,” pick the option that represents having a standard Azure subscription with billing (pay-as-you-go). Also watch for wording about permissions: sometimes the real requirement is appropriate RBAC/billing access, but that is not offered in this question.
You plan to migrate a web application to Azure. The web application is accessed by external users. You need to recommend a cloud deployment solution to minimize the amount of administrative effort used to manage the web application. What should you include in the recommendation?
SaaS provides a complete, ready-to-use application managed by the provider (for example, Microsoft 365). It minimizes administration the most, but you typically cannot deploy your own custom web application code as-is; you would be adopting a vendor’s application instead of hosting yours. For a migration of an existing custom web app, SaaS is usually not the correct fit unless you are replacing the app entirely.
PaaS is the best fit for migrating and hosting your own web application while minimizing administrative effort. Services like Azure App Service let you deploy code without managing servers, OS patching, or much of the runtime maintenance. You still control the application and configuration, but Microsoft manages the underlying platform, enabling easier scaling, high availability options, and integrated monitoring and security features.
IaaS (virtual machines) gives you the most control and is common for lift-and-shift migrations, but it requires the most administration. You are responsible for the OS, patching, web server/runtime configuration, scaling setup, backups, and ongoing maintenance. Because the requirement is to minimize administrative effort, IaaS is generally the least suitable option among the main service models.
DaaS (Database as a Service) refers to managed database offerings (for example, Azure SQL Database, Azure Cosmos DB) where the provider manages database infrastructure and many maintenance tasks. While DaaS can reduce database administration, it does not address hosting the web application itself. You would still need a compute/hosting model (PaaS or IaaS) for the web tier.
Core concept: This question tests the cloud service models (IaaS, PaaS, SaaS) and how they affect operational responsibility. In AZ-900, “minimize administrative effort” typically means choosing the model where the cloud provider manages most of the underlying platform while still allowing you to deploy your own application.

Why the answer is correct: Platform as a Service (PaaS) is designed for hosting applications without managing servers, operating systems, or much of the runtime patching. For a web application accessed by external users, a PaaS offering such as Azure App Service (Web Apps) is a common recommendation. With App Service, Microsoft manages the infrastructure, OS updates, and many platform patches, and provides built-in capabilities (scaling, SSL binding, deployment slots), which significantly reduces administrative overhead compared to running VMs.

Key features and best practices: PaaS web hosting in Azure typically includes automated patching of the underlying OS, built-in load balancing, autoscale, monitoring integration (Azure Monitor/Application Insights), and simplified CI/CD integration (GitHub Actions/Azure DevOps). From an Azure Well-Architected Framework perspective, PaaS improves Operational Excellence (less toil, standardized deployments), Reliability (managed platform with HA options), and Security (managed patching, integration with Entra ID, managed certificates, private endpoints where applicable).

Common misconceptions: SaaS can sound like the lowest admin effort, but SaaS means you consume a complete application provided by a vendor (e.g., Microsoft 365, Dynamics 365). In this scenario you are migrating “a web application” you own, so you need a hosting platform rather than replacing it with a vendor’s finished product. IaaS (VMs) is often chosen for lift-and-shift, but it requires the most administration (OS patching, web server configuration, scaling, backups). “DaaS” is not the right model for hosting a web app; it refers to managed database services.

Exam tips: When the question says “minimize administrative effort” for hosting your own app, think PaaS. When it says “no code/consume a complete application,” think SaaS. When it says “maximum control/custom OS,” think IaaS. Also watch for wording like “web app hosting,” which strongly maps to Azure App Service (PaaS).
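The phrase-to-model matching in those exam tips can be captured as a small lookup, purely as a study aid. The keyword table below is a simplification written for revision, not any official Microsoft mapping:

```python
# Study aid: map AZ-900 requirement phrasing to the service model the exam
# expects. The keywords are a hypothetical simplification for revision.

KEYWORD_TO_MODEL = {
    "minimize administrative effort": "PaaS",  # host your own app; provider runs the platform
    "consume a complete application": "SaaS",  # adopt a vendor's finished product
    "maximum control": "IaaS",                 # you manage OS, patching, runtime
}

def recommend_model(requirement):
    """Return the service model whose keyword appears in the requirement text."""
    text = requirement.lower()
    for keyword, model in KEYWORD_TO_MODEL.items():
        if keyword in text:
            return model
    return "unclear: re-read the question for the responsibility split"

print(recommend_model("Minimize administrative effort for a custom web app"))  # PaaS
```

As with any mnemonic, it only narrows the choices; the deciding factor on the real exam is who owns the application code and who patches the platform underneath it.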
Which two types of customers are eligible to use Azure Government to develop a cloud solution? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
Incorrect. A Canadian government contractor is not automatically eligible for Azure Government. Azure Government is a U.S. sovereign cloud intended for U.S. government entities and approved U.S. government contractors. Canadian public sector organizations typically use commercial Azure regions in Canada (or other arrangements), but they do not qualify for Azure Government based solely on being a government contractor.
Incorrect. A European government contractor is not eligible for Azure Government by default. Azure Government is restricted to U.S. government entities and validated U.S. government contractors. European contractors may use commercial Azure regions in Europe or other sovereign solutions, but they cannot use Azure Government unless they meet specific U.S. eligibility requirements (which this option does not imply).
Correct. A United States government entity (federal, state, local, or tribal) is a primary intended customer for Azure Government. The platform is designed to meet U.S. public sector compliance requirements and provides isolation from commercial Azure, supporting regulated workloads and governance needs that U.S. government agencies commonly have.
Correct. A United States government contractor can be eligible for Azure Government, provided they complete Microsoft’s eligibility validation and are supporting U.S. government workloads. This is a key audience for Azure Government because many regulated solutions are built and operated by contractors on behalf of U.S. government agencies.
Incorrect. A European government entity is not eligible for Azure Government. The service is a U.S. sovereign cloud environment with restricted access for U.S. public sector customers and approved contractors. European government entities generally use commercial Azure in European regions or other sovereign offerings, but not Azure Government.
Core concept: Azure Government is a sovereign cloud environment designed for U.S. public sector workloads. It is physically isolated from the commercial Azure cloud, operated by screened U.S. persons, and built to meet U.S. government compliance requirements (for example, FedRAMP High, DoD IL levels for certain services/regions, CJIS support in specific scenarios). The exam is testing who is eligible to use this environment.

Why the answer is correct: Eligible customers for Azure Government include (1) U.S. federal, state, local, and tribal government entities and (2) U.S. government contractors that meet eligibility requirements and can validate their relationship to U.S. government workloads. Therefore, a United States government entity (C) and a United States government contractor (D) are the two correct choices.

Key features / important details: Azure Government uses separate datacenters, a separate network, and separate identity endpoints (for example, *.usgovcloudapi.net) to support regulatory and contractual requirements. Access is not “open sign-up” like commercial Azure; customers must go through an eligibility validation process. From an Azure Well-Architected Framework perspective, this supports Security and Compliance requirements (data residency, personnel screening, and regulatory attestations) and helps meet governance needs for public sector workloads.

Common misconceptions: A frequent trap is assuming “any government” or “any contractor” qualifies. Azure Government is specifically for the U.S. government and its approved ecosystem. Canadian or European entities/contractors do not qualify for Azure Government simply because they are governmental; they would typically use commercial Azure in-region, or other sovereign offerings where available (for example, certain national clouds/sovereign solutions), but not Azure Government.

Exam tips: For AZ-900, remember the three common cloud environments: Public (commercial Azure), Sovereign (Azure Government), and specialized clouds. If the question says “Azure Government,” think “U.S. public sector eligibility + isolated environment + compliance-driven access.” If the option is non-U.S. (European/Canadian), it is almost always incorrect for Azure Government eligibility questions.
You have an Azure web app. You need to manage the settings of the web app from an iPhone. What are two Azure management tools that you can use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
Azure CLI is a command-line tool for managing Azure resources, including App Service. However, it typically requires a local installation and a supported OS/shell environment. An iPhone is not a standard platform for installing and running Azure CLI natively, so it’s not considered a complete solution for managing web app settings directly from the phone in an exam context.
The Azure portal is a web-based management interface accessible through a browser, including on an iPhone. It provides full UI-based management of an Azure web app (App Service), such as application settings, configuration, scaling, deployment slots, and monitoring. Because it requires no local installation and works via mobile browser access, it is a complete solution.
Azure Cloud Shell is a browser-accessible shell environment hosted by Azure. From an iPhone, you can open Cloud Shell in the Azure portal and run Azure CLI or Azure PowerShell commands to manage the web app’s settings. It avoids local installation and provides authenticated access plus persistent storage, making it a complete mobile-friendly management tool.
Windows PowerShell is a scripting and automation environment commonly used to manage Azure (often via the Az PowerShell module). However, it assumes a Windows (or at least a compatible PowerShell runtime) environment. An iPhone does not natively provide a standard Windows PowerShell execution environment, so it’s not a complete solution for managing settings from the phone.
Azure Storage Explorer is a client application designed to manage Azure Storage resources (blobs, files, queues, and tables). It is not intended for configuring or managing Azure App Service web app settings. Additionally, it’s a desktop tool rather than a mobile-first management option, making it unsuitable for this requirement.
Core concept: This question tests Azure management tools and how you can administer Azure resources (an Azure App Service Web App) from different devices. In AZ-900, you should recognize the primary management planes: the Azure portal (web UI), Azure Cloud Shell (browser-based shell), and command-line tools (Azure CLI/PowerShell) that typically require a suitable execution environment.

Why the answer is correct: From an iPhone, the most practical and fully supported ways to manage a web app’s settings are:
1) The Azure portal (B): It’s a web-based interface accessible from a mobile browser. You can view and modify App Service configuration such as application settings, connection strings, deployment slots, scaling, and monitoring.
2) Azure Cloud Shell (C): Cloud Shell runs in the browser and provides an authenticated shell environment hosted by Azure. Because it’s browser-based, you can use it from an iPhone without installing local tooling. From Cloud Shell you can run Azure CLI or Azure PowerShell commands to manage the web app.

Key features and best practices:
- The Azure portal provides guided experiences, validation, and resource blades for App Service configuration. It aligns with Azure Well-Architected operational excellence by simplifying day-2 operations (configuration, diagnostics, access control via RBAC).
- Azure Cloud Shell provides a managed environment (with Azure CLI and PowerShell available) and persistent storage via an Azure Files share, enabling repeatable operational tasks and scripts without local setup.

Common misconceptions:
- Azure CLI (A) and Windows PowerShell (D) are management tools, but they generally require installation and a compatible local runtime. An iPhone is not a typical environment for installing and running these tools natively, so they are not considered complete solutions in this context.
- Azure Storage Explorer (E) manages storage accounts (blobs, files, queues, tables) and is not used to manage App Service web app settings.

Exam tips: For “manage from a phone/mobile device” questions, prioritize browser-based tools: the Azure portal and Cloud Shell. If the question implies “no local installation,” Cloud Shell is often the best fit. Also, map tools to resource types: Storage Explorer is for storage, not App Service configuration.
You have an on-premises network that contains several servers. You plan to migrate all the servers to Azure. You need to recommend a solution to ensure that some of the servers are available if a single Azure data center goes offline for an extended period. What should you include in the recommendation?
Fault tolerance is the ability of a system to keep running when a failure occurs (server, rack, or datacenter). For an extended datacenter outage, you design redundancy and failover using Availability Zones (separate datacenters within a region) and/or cross-region disaster recovery. This directly matches the requirement that “some of the servers are available” even if one datacenter is down.
Elasticity refers to automatically adding or removing resources to match demand (for example, autoscaling during peak usage and scaling in when demand drops). While elasticity improves cost efficiency and performance under variable load, it does not inherently protect against a datacenter outage unless combined with a fault-tolerant multi-zone or multi-region design.
Scalability is the ability to increase capacity to handle growth, either vertically (bigger VM) or horizontally (more instances). Like elasticity, scalability focuses on capacity and performance, not resiliency. A scalable system can still be taken down if all instances are in a single datacenter and that datacenter becomes unavailable.
Low latency means minimizing network delay so users get faster responses, often achieved by choosing closer regions, using CDNs, or optimizing routing. Latency is a performance characteristic, not an availability strategy. A low-latency deployment can still experience downtime if it lacks redundancy across datacenters or zones.
Core concept: This question tests the cloud concept of resiliency, specifically fault tolerance (and closely related high availability). Fault tolerance is the ability of a system to continue operating when a component fails. In Azure terms, the “component” can be a server, rack, or an entire datacenter (availability zone).

Why the answer is correct: If a single Azure datacenter goes offline for an extended period, you need workloads to keep running from another isolated location. That requirement maps directly to fault tolerance: designing the solution so that failure of one datacenter does not cause an outage. In practice, you achieve this by deploying across fault domains and, more importantly for datacenter-level failures, across Availability Zones (zone-redundant architecture) or across paired regions (disaster recovery). The question is phrased at the concept level, and “fault tolerance” is the correct term among the options.

Key features / configurations (what you would include in a recommendation):
- Use Availability Zones for zonal redundancy: place VMs in a zone-redundant configuration using VM Scale Sets, or use Availability Zones with load balancers.
- Use zone-redundant services where available (e.g., zone-redundant storage, Azure SQL zone redundancy in supported tiers/regions).
- Consider region-to-region DR for extended outages or regional disasters: Azure Site Recovery, database geo-replication, and traffic routing via Azure Front Door or Traffic Manager.

These align with the Azure Well-Architected Framework Reliability pillar: design for redundancy, failover, and recovery.

Common misconceptions: Elasticity and scalability are about handling changing demand (performance/capacity), not surviving a datacenter outage. Low latency is about response time and proximity, not resiliency.

Exam tips: When you see “datacenter goes offline,” think Availability Zones (datacenter-level isolation). When you see “region goes offline,” think paired regions and disaster recovery. If the question asks for the overarching concept, choose fault tolerance/high availability rather than performance-related terms.
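The intuition that spreading across zones survives a datacenter loss can be put into rough numbers. This is an illustrative sketch only: the 1% zone-failure probability is a made-up figure (not an Azure statistic), and it assumes zones fail independently, which real outages do not strictly obey.

```python
# Illustrative model only: p_zone (1%) is a hypothetical figure, and zone
# failures are assumed independent, which is an approximation.
def single_zone_unavailability(p_zone: float) -> float:
    """Every instance in one zone/datacenter: service is down when that zone is."""
    return p_zone

def zone_redundant_unavailability(p_zone: float, zones: int = 3) -> float:
    """Instances spread across zones: service is down only if all zones fail."""
    return p_zone ** zones

p = 0.01  # hypothetical chance that a given zone is down
print(single_zone_unavailability(p))        # 0.01 -> down 1% of the time
print(zone_redundant_unavailability(p, 3))  # ~1e-06 -> down ~0.0001% of the time
```

Note that two scale sets pinned to the same zone behave like the first function; only explicit zonal spreading gets you the second.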
What are two characteristics of the public cloud? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
Dedicated hardware is not a standard characteristic of the public cloud. Public cloud typically uses shared physical infrastructure with logical isolation (multi-tenancy). While Azure offers dedicated options (e.g., Azure Dedicated Host, isolated SKUs) for compliance or licensing needs, these are specialized services rather than the defining model for public cloud.
Unsecured connections are not a characteristic of public cloud. Public cloud services are designed to be secured through strong identity (Microsoft Entra ID), encryption in transit/at rest, network controls (NSGs, firewalls), and private connectivity options (VPN/ExpressRoute, Private Link). Security is a shared responsibility, not an inherent weakness of public cloud.
Limited storage is the opposite of a typical public cloud characteristic. Public cloud platforms provide scalable storage that can grow on demand (e.g., Azure Blob Storage, managed disks, Azure Files) with various performance and redundancy tiers. While quotas and service limits exist, the model is generally elastic and expandable rather than inherently limited.
Metered pricing is a key public cloud characteristic: you pay for what you use based on measured consumption (CPU time, GB-month storage, transactions, bandwidth, etc.). Azure supports consumption-based billing and also offers commitment-based discounts (Reserved Instances, Savings Plans), but usage is still measured and billed accordingly—matching the “measured service” concept.
Self-service management is a defining public cloud trait: customers can provision, configure, and deprovision resources on demand without provider intervention. In Azure, this is enabled through the portal, APIs, ARM/Bicep, Azure CLI/PowerShell, and automation tools. This supports rapid provisioning, agility, and operational consistency through Infrastructure as Code.
Core Concept: This question tests foundational public cloud characteristics commonly emphasized in AZ-900: consumption-based (pay-as-you-go) billing and on-demand self-service. Public cloud refers to cloud services delivered over the internet by a third-party provider (for example, Microsoft Azure) using shared infrastructure with logical isolation.

Why the Answer is Correct: D (metered pricing) is a defining public cloud trait: customers are billed based on measured usage (compute time, storage consumed, requests, egress bandwidth, etc.). This aligns with the NIST cloud model’s “measured service” and is central to Azure’s consumption model (with some services also offering reserved capacity/commitments, but still grounded in usage measurement). E (self-service management) is also core: customers can provision and manage resources on demand without requiring human interaction with the provider. In Azure, this is done via the Azure portal, ARM/Bicep templates, Azure CLI/PowerShell, and APIs—supporting rapid provisioning and elasticity.

Key Features / Best Practices: Public cloud typically provides rapid provisioning, scalability/elasticity, global reach, and a shared responsibility model. From an Azure Well-Architected Framework perspective, metered pricing supports Cost Optimization (pay only for what you use, right-size, autoscale, shut down dev/test), while self-service management supports Operational Excellence (automation, IaC, repeatable deployments, policy-driven governance). Azure also offers tools like Cost Management + Billing, budgets, and tagging to manage consumption.

Common Misconceptions: “A. dedicated hardware” can exist in public cloud (e.g., Azure Dedicated Host), but it is not a general characteristic of the public cloud; it’s a specialized option. “B. unsecured connections” is incorrect because public cloud can be secured using encryption, private endpoints, VPN/ExpressRoute, NSGs, firewalls, and identity controls. “C. limited storage” is incorrect because public cloud is known for scalable, effectively on-demand storage capacity.

Exam Tips: For AZ-900, map public cloud to NIST characteristics: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. When you see “metered pricing” and “self-service,” they are strong indicators of public cloud fundamentals.
Your company hosts an accounting application named App1 that is used by all the customers of the company. App1 has low usage during the first three weeks of each month and very high usage during the last week of each month. Which benefit of Azure Cloud Services supports cost management for this type of usage pattern?
High availability is about keeping an application running despite failures by using redundancy (multiple instances, zones, regions) and failover. While it improves resiliency and supports uptime SLAs, it typically increases cost because you run extra capacity. It does not specifically address the need to reduce resources during low-demand periods, so it’s not the best cost-management benefit for this usage pattern.
High latency means slower response times between users and the application, usually caused by distance, network congestion, or inefficient architecture. It is not a benefit of cloud services and does not support cost management. In exam terms, latency is something you try to minimize using regions close to users, CDNs, caching, and optimized networking—not something you select as an advantage.
Elasticity is the ability to automatically or quickly scale resources out/in (or up/down) to match demand. For an app with low usage most of the month and a predictable spike at month-end, elasticity enables adding capacity only when needed and removing it afterward. This aligns directly with cost optimization because you avoid paying for peak resources all month and can use autoscale rules or schedules.
Load balancing distributes incoming traffic across multiple instances to improve performance and reliability. It helps handle high traffic, but it doesn’t inherently reduce costs because you still must provision (or scale) the instances behind the load balancer. Load balancing is often used together with elasticity (autoscaling + load balancing), but by itself it doesn’t address scaling down during low usage.
Core Concept: This question tests the cloud concept of elasticity (and its close partner, scalability) as a cost-management benefit. In Azure, elasticity means you can automatically or quickly add/remove compute resources to match demand, then pay only for what you use.

Why the Answer is Correct: App1 has a predictable usage pattern: low demand for ~3 weeks and a spike in the last week. Elasticity enables scaling out (adding instances/compute) during the high-usage period and scaling in (removing instances/compute) during the low-usage period. This directly supports cost management because you avoid paying for peak capacity all month. In Azure, this is commonly implemented with autoscaling rules (for example, based on CPU, queue length, requests, or schedules) so capacity increases during the last week and decreases afterward.

Key Features / Best Practices: Elasticity is delivered through services and features such as Virtual Machine Scale Sets autoscale, App Service autoscale, AKS cluster autoscaler, and serverless options (Azure Functions) that scale dynamically. From an Azure Well-Architected Framework Cost Optimization perspective, elasticity is a primary lever: right-size resources, scale based on demand, and use automation to prevent overprovisioning. For predictable spikes (like “last week of the month”), schedule-based autoscale is often ideal, sometimes combined with metric-based rules for unexpected surges.

Common Misconceptions: High availability and load balancing are often associated with “handling high traffic,” but they don’t inherently reduce cost. High availability focuses on resiliency and uptime (often increasing cost due to redundancy). Load balancing distributes traffic across instances, but you still need enough instances provisioned; it doesn’t automatically reduce capacity when demand drops. “High latency” is not a benefit; it’s a performance problem.

Exam Tips: For AZ-900, map keywords to concepts: “variable demand,” “spikes,” “pay only for what you use,” and “scale up/down” point to elasticity. If the question emphasizes uptime/SLAs, think high availability. If it emphasizes distributing traffic across multiple servers, think load balancing. If it emphasizes cost savings from matching capacity to demand, choose elasticity.
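To make the cost argument concrete, here is a back-of-the-envelope sketch under invented assumptions: a flat $0.10 per instance-hour (not a real Azure price) and a 30-day month split into three quiet weeks and one busy month-end stretch.

```python
RATE = 0.10          # hypothetical $/instance-hour, not a real Azure price
HOURS_PER_DAY = 24

def monthly_cost(periods):
    """periods: iterable of (days, instance_count) tuples for the month."""
    return sum(days * HOURS_PER_DAY * count * RATE for days, count in periods)

# Fixed provisioning: keep peak capacity (10 instances) for all 30 days.
fixed = monthly_cost([(30, 10)])
# Elastic: 2 instances for the quiet 21 days, 10 for the busy final 9 days.
elastic = monthly_cost([(21, 2), (9, 10)])

print(fixed)    # ~720.0
print(elastic)  # ~316.8, roughly 56% cheaper for the same peak capacity
```

The instance counts here are arbitrary; the point is that the saving comes entirely from scaling in during the low-demand weeks, which is exactly what autoscale rules or schedules automate.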
You have an on-premises network that contains 100 servers. You need to recommend a solution that provides additional resources to your users. The solution must minimize capital and operational expenditure costs. What should you include in the recommendation?
A complete migration to the public cloud can reduce CapEx by eliminating on-prem hardware purchases and can lower some OpEx through managed services. However, “complete migration” is not always the most cost-minimizing recommendation when an organization already has substantial on-premises investment and only needs extra capacity. Migration projects also introduce transition costs, time, and potential refactoring.
An additional data center increases both CapEx and OpEx. You must purchase land/space, servers, networking, and storage, and you must pay ongoing costs for power, cooling, physical security, maintenance, and staffing. This directly conflicts with the requirement to minimize capital and operational expenditure and does not provide the elasticity benefits of cloud.
A private cloud provides cloud-like management and virtualization but is still owned and operated by the organization (or dedicated hosting). It typically requires significant upfront investment in hardware and ongoing operational management, so it does not minimize CapEx/OpEx compared to using public cloud resources on demand. It also lacks the same elasticity and pay-as-you-go economics.
A hybrid cloud lets you keep existing on-premises servers while using Azure for additional capacity when needed. This supports pay-as-you-go scaling (reducing CapEx) and offloads some operational responsibilities to the cloud provider (reducing OpEx). It’s a common approach for cloud bursting, dev/test, backup, and disaster recovery while maintaining on-premises workloads.
Core concept: This question tests cloud deployment models and cost optimization. The key idea is using cloud elasticity (scale up/down on demand) to add capacity without buying and operating more on-premises hardware.

Why the answer is correct: A hybrid cloud combines on-premises infrastructure with public cloud resources. With 100 existing on-prem servers, the organization can “burst” into Azure when additional compute/storage is needed, rather than purchasing new servers (capital expense) or building/expanding a data center. This minimizes CapEx by avoiding upfront hardware purchases and minimizes OpEx by reducing ongoing costs for power, cooling, physical security, and hardware lifecycle management. In practice, you keep steady-state workloads on-premises and use Azure for variable or peak demand, disaster recovery, dev/test, or new services.

Key features and best practices: Hybrid is commonly enabled through connectivity (VPN Gateway or ExpressRoute), identity integration (Microsoft Entra ID with hybrid identity), and management/governance (Azure Arc, Azure Policy). For “additional resources,” typical patterns include cloud bursting with virtual machines/scale sets, adding storage (Azure Storage), or using PaaS services to offload operational burden. From the Azure Well-Architected Framework cost optimization pillar, hybrid supports right-sizing and pay-as-you-go consumption while maintaining existing investments.

Common misconceptions: A complete migration to the public cloud can also reduce CapEx, but it is not always the lowest-cost or fastest path when you already have significant on-premises assets and may have constraints (latency, data residency, legacy apps). A private cloud sounds “cloud-like,” but it still requires buying and operating hardware—often higher CapEx/OpEx. An additional data center is the most expensive option.

Exam tips: When the question mentions an existing on-prem environment and the need for “additional resources” with minimal CapEx/OpEx, think “hybrid cloud” and “cloud bursting.” If the question instead says “eliminate data center” or “move everything,” then public cloud migration becomes more likely.
Your company plans to request an architectural review of an Azure environment from Microsoft. The company currently has a Basic support plan. You need to recommend a new support plan for the company. The solution must minimize costs. Which support plan should you recommend?
Premier support can provide architectural guidance and extensive proactive services, so it would satisfy the technical requirement. However, it is a high-end enterprise support offering intended for organizations needing broad, ongoing support management and proactive engagement. Because the question specifically asks to minimize costs, Premier is more expensive than necessary. Professional Direct provides the needed architectural advisory capability at a lower cost.
Developer support is intended for trial, development, and non-production use cases rather than enterprise architectural review needs. It offers limited support scope and does not include the advisory architecture services implied by a formal architectural review request. Although it is cheaper, it does not meet the requirement. Cost minimization only applies after the functional requirement is satisfied.
Professional Direct is the correct choice because it includes advisory support and architecture guidance capabilities that go beyond standard break-fix technical support. An architectural review from Microsoft implies access to experts who can assess design decisions and provide recommendations, which aligns with Professional Direct benefits. It is also less expensive than Premier, so it best meets the requirement to minimize costs while still enabling the requested service. Among the listed options, it is the lowest tier that fits the need for architecture-focused engagement.
Standard support provides technical support for production workloads, including faster response and 24/7 access for certain severities, but it is primarily a reactive support plan. It does not include the advisory or architecture review services associated with Professional Direct. The question is specifically about requesting an architectural review from Microsoft, which requires more than standard technical support. Therefore, Standard is insufficient even though it costs less.
Core concept: This question tests knowledge of Azure support plans and which plan includes advisory support such as architecture guidance or architectural reviews from Microsoft. In AZ-900, Basic covers only billing/subscription management, while higher tiers add technical support and, at the Professional Direct level, advisory services.

Why correct: If a company wants Microsoft to perform or assist with an architectural review, it needs a plan that includes advisory support beyond break-fix technical assistance. Among the listed options, Professional Direct is the lowest-cost plan that provides architecture support/advisory capabilities from Microsoft. Therefore, it satisfies the requirement while still minimizing cost relative to Premier.

Key features: Professional Direct includes business-critical support, faster response times than Standard, and advisory services such as architecture guidance and support from Microsoft experts. Standard is primarily reactive technical support for production workloads, not a proactive architecture review offering. Premier is a higher-end enterprise support model with broader proactive services, but it is more expensive than necessary here.

Common misconceptions: A common mistake is assuming Standard is enough because it supports production workloads and offers 24/7 technical support. However, production support is not the same as architectural review or advisory engagement. Another misconception is choosing Premier because it certainly includes such services, but the question explicitly asks to minimize costs.

Exam tips: On AZ-900, distinguish between reactive technical support and proactive/advisory support. Basic is billing only, Developer is for non-production, Standard is production technical support, and Professional Direct adds advisory capabilities. When the requirement mentions architecture reviews, advisory services, or proactive guidance, Professional Direct is typically the minimum suitable choice among standard support plans.
HOTSPOT - For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:
If you have Azure resources deployed to every region, you can implement availability zones in all the regions.
No. Availability Zones are not available in all Azure regions. Even if you deploy resources to every Azure region, you can only use Availability Zones in regions that are “zone-enabled.” Additionally, zone support can vary by service within a region (for example, a region may support zonal VMs but a specific PaaS offering might not be zone-redundant there yet). For AZ-900, the key point is that zones are a regional feature and are not universally supported. The correct approach is to check the official Azure “regions and availability zones” documentation (and service-specific documentation) to confirm whether a region supports zones before designing for zonal resiliency. Therefore, the statement that you can implement availability zones in all regions is false.
Only virtual machines that run Windows Server can be created in availability zones.
No. Availability Zones are not limited to Windows Server virtual machines. Azure supports creating zonal virtual machines running both Windows and Linux, and the zonal placement is an infrastructure attribute (zone 1/2/3) rather than an operating system feature. In practice, you can deploy Windows Server, various Linux distributions, and many marketplace images into a specific zone, assuming the VM size and the region support zonal deployment. The incorrect option reflects a common misconception that certain resiliency features are OS-specific; in Azure, high availability constructs like zones apply broadly across compute and other services. For exam purposes: OS choice does not determine whether a VM can be deployed into an Availability Zone.
Availability zones are used to replicate data and applications to multiple regions.
No. Availability Zones are designed for high availability within a single region by distributing resources across multiple physically separate datacenters in that region. Replicating data and applications to multiple regions is a multi-region disaster recovery (DR) strategy, not an Availability Zone capability. Cross-region replication is typically provided by services and patterns such as Azure Storage geo-redundant options (GRS/GZRS), Azure SQL active geo-replication, Azure Site Recovery, or multi-region application routing using Azure Front Door or Traffic Manager. Availability Zones help protect against a datacenter/zone failure; multi-region replication helps protect against a regional outage. Since the statement claims zones replicate to multiple regions, it is incorrect.
HOTSPOT - How should you calculate the monthly uptime percentage? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:
First selection
The numerator of the monthly uptime percentage formula must represent the time the service was actually available during the month. That is calculated as total possible time minus the time it was unavailable. Therefore, the correct first selection is (Maximum Available Minutes – Downtime in Minutes). This produces the “available minutes” for the month.

Why the others are wrong:
- Downtime in Minutes alone (A) is not uptime; it’s the opposite measure. Using downtime as the numerator would incorrectly increase the percentage as downtime increases.
- Maximum Available Minutes (B) alone would imply 100% uptime regardless of outages, because it ignores downtime entirely.

In SLA terms, you always subtract downtime from the total measurement window to get actual uptime, then divide by the total measurement window.
Second selection
To compute maximum available minutes for a month, you typically build it from days down to minutes. Although 60 is the conversion factor from hours to minutes, the common monthly shortcut works in whole days. Given the options (60, 1,440, Maximum Available Minutes), the correct second selection is 1,440 because it represents the number of minutes in a day (24 × 60 = 1,440). In many uptime calculations, you’ll see monthly minutes computed as (number of days in month × 1,440).

Why the others are wrong:
- 60 (A) is minutes per hour; using it would require an additional step (hours per day) before reaching a monthly total, whereas 1,440 converts days to minutes directly.
- Maximum Available Minutes (C) is the final computed value, not the conversion factor used to calculate it.
Third selection
Uptime percentage is expressed as a percentage, so after calculating the uptime ratio (available minutes divided by maximum available minutes), you multiply by 100. Therefore, the correct third selection is 100.

Why the others are wrong:
- 99.99 (B) is a specific SLA target (often referred to as “four nines”), not the multiplier used to convert a ratio into a percentage. You might compare your computed result to 99.99%, but you don’t multiply by 99.99.
- 1.440 (C) appears to be a misformatted version of 1,440 (minutes per day). That value is used earlier to compute total minutes in the month, not as the final percentage multiplier.

So the final structure is: ((Maximum Available Minutes – Downtime in Minutes) / Maximum Available Minutes) × 100.
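Putting the three selections together, the whole calculation fits in a few lines. The 30-day month and the 2-hour outage below are example figures chosen for illustration:

```python
def monthly_uptime_percentage(max_available_minutes: float,
                              downtime_minutes: float) -> float:
    # ((Maximum Available Minutes - Downtime in Minutes)
    #   / Maximum Available Minutes) * 100
    return (max_available_minutes - downtime_minutes) / max_available_minutes * 100

# A 30-day month: 30 days x 1,440 minutes/day = 43,200 maximum available minutes.
max_minutes = 30 * 1440

print(monthly_uptime_percentage(max_minutes, 0))                  # 100.0 (no downtime)
print(round(monthly_uptime_percentage(max_minutes, 120), 4))      # 99.7222 (2h down)
```

Note how the 1,440 minutes/day shortcut feeds the denominator, and the final ×100 turns the ratio into a percentage.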
HOTSPOT - For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:
The Service Level Agreement (SLA) guaranteed uptime for paid Azure services is at least 99.9 percent.
No. It is not true that the SLA guaranteed uptime for paid Azure services is at least 99.9% across the board. SLAs vary by service and configuration. Some services have SLAs below 99.9% in certain tiers or scenarios, and some services may have no SLA at all (especially preview features). For example, a single-instance Virtual Machine has historically had a lower SLA than a VM deployed in an Availability Set or across Availability Zones. Conversely, some services offer higher SLAs (e.g., 99.95%, 99.99%) when deployed with redundancy features. The exam expects you to understand that “paid” does not automatically mean “>= 99.9% SLA”; you must check the specific service’s SLA documentation and the required architecture to qualify for that SLA.
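A quick way to internalize why these tiers differ is to convert each SLA into a monthly downtime budget. The sketch below assumes a simplified 30-day month; the percentages are common published tiers, but always confirm against each service’s own SLA page.

```python
def allowed_downtime_minutes(sla_percent: float, days_in_month: int = 30) -> float:
    """Downtime budget per month at a given SLA (simplified 30-day month)."""
    total_minutes = days_in_month * 1440  # 1,440 minutes per day
    return total_minutes * (1 - sla_percent / 100)

for sla in (99.9, 99.95, 99.99):
    print(f"{sla}% -> ~{allowed_downtime_minutes(sla):.2f} minutes/month")
# 99.9%  -> ~43.20 minutes/month
# 99.95% -> ~21.60 minutes/month
# 99.99% -> ~4.32 minutes/month
```

The jump from roughly 43 minutes to roughly 4 minutes of tolerated downtime is why higher SLAs typically require redundant architectures (availability sets, zones) rather than a single instance.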
Companies can increase the Service Level Agreement (SLA) guaranteed uptime by adding Azure resources to multiple regions.
Yes. Deploying resources across multiple regions can increase overall solution availability (and can be part of meeting higher availability targets), because it reduces the impact of a single regional outage. In practice, you use multi-region architectures such as active-active or active-passive, combined with services like Azure Traffic Manager, Azure Front Door, or load balancing plus replicated data stores (e.g., geo-redundant options). While each individual service still has its own SLA, the composite application uptime can be improved by eliminating regional single points of failure, which is a core Reliability principle in the Azure Well-Architected Framework. The key idea: multi-region redundancy improves resiliency and can raise the effective end-to-end uptime compared to a single-region deployment.
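The “multiple regions raise effective uptime” claim is easiest to see with a simplified availability model. Two assumptions are baked in and labeled here: regions are treated as failing independently (an approximation), and the app is assumed to serve from whichever region is up (active-active behind a global front end); a real composite SLA would also factor in the front-end and data services.

```python
# Simplified model: independent region failures; the workload is up as long
# as at least one region is up (active-active behind a global front end).
def multi_region_availability(region_availability: float, regions: int = 2) -> float:
    downtime_fraction = 1 - region_availability
    return 1 - downtime_fraction ** regions

one_region = 0.999  # a single region at 99.9%
two_regions = multi_region_availability(one_region, regions=2)
print(two_regions)  # ~0.999999, i.e., roughly 99.9999% in this model
```

Even under this idealized model, the improvement comes from the architecture (redundancy plus failover routing), not from any change to the individual services’ published SLAs.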
Companies can increase the Service Level Agreement (SLA) guaranteed uptime by purchasing multiple subscriptions.
No. Purchasing multiple subscriptions does not increase the SLA guaranteed uptime. A subscription is primarily a billing, quota, and management boundary (used for cost management, access control scoping, and governance). It does not automatically create redundancy, failover capability, or higher-availability deployment patterns for a service. If you deploy the same single-instance workload in two different subscriptions but still in the same region and without proper redundancy/failover design, you have not meaningfully improved availability. To increase uptime, you must add resilient architecture elements (multiple instances, Availability Zones, multi-region deployments, replicated data, automated failover), not simply split resources across subscriptions. Subscriptions can help with organizational separation and limits management, but they are not an SLA improvement mechanism.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. Your Azure environment contains multiple Azure virtual machines. You need to ensure that a virtual machine named VM1 is accessible from the Internet over HTTP. Solution: You modify a DDoS protection plan. Does this meet the goal?
Answering "Yes" is incorrect because DDoS Protection does not control whether HTTP traffic is permitted to reach a VM. Even with DDoS Standard enabled, if VM1 lacks a public IP (or a public load balancer/app gateway front end) or if an NSG blocks TCP/80, HTTP access from the Internet will fail. DDoS protection is additive security for already-public resources; it does not publish services or open ports. You still need explicit networking configuration to expose VM1 over HTTP.
A DDoS protection plan is designed to mitigate DDoS attacks against public IP resources, not to configure inbound connectivity. Changing the plan will not assign a public IP to VM1, create a load balancer rule, or add an NSG rule to allow TCP port 80. Therefore, VM1 will not become accessible over HTTP simply by modifying DDoS protection settings. To meet the goal, you must configure a public endpoint and allow inbound TCP/80 via NSG (and/or load balancer/application gateway rules).
Core concept: This question tests how to expose an Azure VM to the Internet over HTTP and which Azure services/configurations actually control inbound HTTP reachability (public IP, NSG rules, load balancer/NAT, and optionally Azure Firewall/WAF).

Why the answer is correct: Modifying an Azure DDoS Protection plan does not make a VM reachable over HTTP. DDoS Protection (Standard) is a network protection service that mitigates volumetric and protocol attacks against public IP resources, but it does not create or change inbound allow rules, does not assign a public IP, and does not publish port 80 to the Internet. To make VM1 accessible over HTTP, you must ensure VM1 has a public endpoint (e.g., a public IP directly on the NIC, or a public Load Balancer with an inbound NAT rule / load-balancing rule) and that network security rules allow TCP/80.

Key features / configurations:
- Public exposure: Public IP on VM NIC or Azure Load Balancer (public) front end.
- Traffic allowance: NSG inbound rule allowing TCP 80 from Internet (or specific source ranges) to VM1.
- Optional: Application Gateway/WAF for HTTP(S) layer protection; Azure Firewall for centralized filtering.
- DDoS Standard: Enabled at the VNet level; protects public IP resources in that VNet but does not open ports.

Common misconceptions:
- Assuming DDoS Protection “enables” Internet access or opens ports; it only mitigates attacks.
- Confusing security services (DDoS/WAF) with connectivity configuration (public IP/NSG/LB rules).
- Believing that enabling a protection plan automatically publishes services; publishing requires explicit inbound configuration.

Exam tips:
- DDoS Protection Standard mitigates attacks; it does not change NSG rules or create public endpoints.
- For Internet HTTP access to a VM, think: public IP (or public LB/App Gateway) + NSG allow TCP/80.
- Always separate “reachability” (routing/endpoints) from “protection” (DDoS/WAF/firewall).
This question requires that you evaluate the underlined text to determine if it is correct.
Resource groups provide organizations with the ability to manage the compliance of Azure resources across multiple subscriptions.
Instructions: Review the underlined text. If it makes the statement correct, select No change is needed. If the statement is incorrect, select the answer choice that makes the statement correct.
No change is needed is incorrect because resource groups do not provide compliance management across multiple subscriptions. A resource group is scoped to a single subscription and is mainly for organizing resources, applying RBAC at that scope, and managing lifecycle operations like deployment and deletion. While you can tag and control access at the resource-group level, that is not the same as defining and enforcing compliance rules. Cross-subscription compliance requires a governance service such as Azure Policy applied at an appropriate scope.
Management groups are used to organize subscriptions into a hierarchy and provide a scope for applying governance controls like Azure Policy and RBAC across multiple subscriptions. However, management groups themselves do not define or enforce compliance rules; they are a container/scope. To actually manage compliance, you must assign Azure Policy (or initiatives) at the management group scope. Therefore, replacing the text with “Management groups” would be incomplete and technically inaccurate for compliance enforcement.
Azure Policy is the Azure governance service used to create, assign, and manage policies that enforce or audit resource configurations for compliance. Policies can be assigned at broad scopes (management group or subscription) to cover multiple subscriptions and their resources consistently. It provides compliance reporting and can prevent non-compliant deployments using effects like Deny, or remediate using Modify/DeployIfNotExists. This directly matches the requirement to manage compliance across Azure resources spanning multiple subscriptions.
Azure App Service plans are a compute and pricing construct for hosting web apps, APIs, and functions, defining region, SKU, and scaling characteristics. They have nothing to do with governance, compliance evaluation, or enforcing configuration standards across resources or subscriptions. App Service plans apply only to App Service workloads and do not provide policy-based compliance controls. The scenario is about compliance management, which is addressed by Azure Policy, not hosting plans.
Core concept: This question tests your understanding of Azure governance and compliance tooling—specifically which Azure construct is used to manage and enforce compliance across resources and potentially across multiple subscriptions. Why the answer is correct: Resource groups are primarily a logical container for organizing and managing Azure resources (lifecycle, RBAC scoping, tagging) within a single subscription. They do not, by themselves, provide compliance enforcement across multiple subscriptions. Azure Policy is the governance service designed to define rules (policy definitions) and enforce/assess compliance (policy assignments) across scopes including management groups, subscriptions, and resource groups. Therefore, replacing “Resource groups” with “Azure policies” makes the statement correct. Key features / configurations: - Azure Policy definitions: JSON rules that describe allowed/denied configurations (e.g., allowed locations, required tags, allowed SKUs). - Policy assignments and scope: Assign policies at management group, subscription, resource group, or resource level to evaluate/enforce compliance. - Effects: Deny, Audit, Append, Modify, DeployIfNotExists (enforcement vs. reporting vs. remediation). - Initiatives (policy sets): Group multiple policies to manage compliance frameworks at scale. - Compliance reporting and remediation tasks: View compliance state and remediate non-compliant resources (often with managed identity for Modify/DeployIfNotExists). Common misconceptions: - Confusing resource groups (organization/lifecycle boundary) with governance/compliance enforcement (Azure Policy). - Assuming management groups “provide compliance” directly; they provide hierarchy and scope for applying governance tools, but the compliance rules come from Azure Policy. - Mixing up Azure Policy with RBAC: RBAC controls who can do what; Policy controls what configurations are allowed. 
Exam tips:
- Azure Policy = enforce/assess configuration compliance (Deny/Audit/Modify/DeployIfNotExists).
- Management groups = organize subscriptions and provide a scope to apply Policy/RBAC across many subscriptions.
- Resource groups = organize resources within a subscription; useful for RBAC scoping and lifecycle management, not cross-subscription compliance.
- If the question says “compliance,” think Azure Policy (and sometimes Blueprints/Defender for Cloud), not resource groups.
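To make the "JSON rules" point above concrete, here is a minimal sketch of a policy rule in the documented if/then shape used by Azure Policy definitions. The function name and the location list are illustrative assumptions, not a prescribed baseline.

```python
import json

def allowed_locations_rule(locations):
    """Build a policyRule that denies resources created outside `locations`.

    Follows Azure Policy's if/then structure: when the resource's location is
    NOT in the allowed list, the effect is "deny".
    """
    return {
        "if": {
            "not": {
                "field": "location",
                "in": locations,
            }
        },
        "then": {"effect": "deny"},
    }

rule = allowed_locations_rule(["eastus", "westeurope"])
print(json.dumps(rule, indent=2))
```

Assigned at a management-group scope, a rule like this would be evaluated against every subscription underneath it, which is exactly the cross-subscription enforcement resource groups cannot provide.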
You have an Azure environment that contains multiple Azure virtual machines. You plan to implement a solution that enables the client computers on your on-premises network to communicate to the Azure virtual machines. You need to recommend which Azure resources must be created for the planned solution. Which two Azure resources should you include in the recommendation? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
A virtual network gateway is the Azure-side VPN endpoint used for site-to-site VPN (or VNet-to-VNet) connectivity. It enables encrypted IPsec/IKE tunnels and routing between on-premises networks and Azure VNets so on-premises clients can reach Azure VM private IPs. It must be deployed into a dedicated GatewaySubnet and requires an appropriate gateway SKU and a public IP for VPN scenarios.
A load balancer distributes network traffic across multiple VMs (Layer 4) to improve availability and scale for inbound or internal traffic. It does not create a private connection from on-premises to Azure; it only balances traffic once it is already in Azure (or coming from the internet/VNet). Therefore it is not a required resource for enabling on-premises client communication to Azure VMs.
An application gateway is a Layer 7 (HTTP/HTTPS) load balancer with features like SSL termination, path-based routing, and WAF. It is used to publish and protect web applications, not to establish hybrid network connectivity. Even if you wanted to expose a web app to on-premises users, it would not replace the need for a VPN/ExpressRoute gateway for private network communication.
A virtual network provides the private IP address space and subnets where Azure VMs reside. While a VNet is foundational for hosting VMs, the question focuses on enabling on-premises clients to communicate with those VMs. The specific hybrid connectivity requirements from the options are the Virtual Network Gateway and the GatewaySubnet; the VNet is assumed to already exist because the environment contains VMs.
A gateway subnet (named exactly GatewaySubnet) is a required, dedicated subnet within a VNet that hosts the virtual network gateway resources. Azure requires this subnet to deploy a VPN gateway, and it should not contain other workloads. Proper sizing is important for future scalability and features (for example, active-active gateways or additional gateway-related services).
Core concept: This question is testing connectivity from an on-premises network to Azure virtual machines. In Azure, the standard way to enable private network communication between on-premises clients and Azure VNets is a VPN connection (site-to-site VPN) or ExpressRoute. For AZ-900, the expected building blocks for a site-to-site VPN are a Virtual Network Gateway and its required GatewaySubnet.

Why the answer is correct: To allow on-premises client computers to communicate with VMs in Azure over a private, encrypted tunnel, you deploy a Virtual Network Gateway (VPN gateway) in the Azure virtual network. The gateway provides the VPN endpoint in Azure and handles IPsec/IKE negotiation and routing between the on-premises network and the Azure VNet. A Virtual Network Gateway must be deployed into a dedicated subnet named GatewaySubnet; without it, the gateway cannot be created. Therefore, the two required Azure resources from the list are (A) a virtual network gateway and (E) a gateway subnet.

Key features / configurations / best practices:
- A GatewaySubnet is a special subnet reserved for gateway resources. Best practice is to size it appropriately (often /27 or larger) to allow future growth (additional gateway instances, active-active, or other gateway-related features).
- The Virtual Network Gateway is created with a VPN type (route-based is common) and is associated with a public IP. You then create a connection to the on-premises VPN device (a Local Network Gateway is also typically required, but it is not an option here).
- From an Azure Well-Architected Framework perspective, this supports Security (encrypted traffic), Reliability (gateway SKUs and active-active options), and Operational Excellence (a standardized connectivity pattern).

Common misconceptions: Many learners pick “virtual network” because VMs live in a VNet; however, the question asks which resources must be created for on-premises communication, and the gateway components are the critical requirements.
Load Balancer and Application Gateway are for distributing inbound traffic to services/VMs, not for establishing private hybrid connectivity.

Exam tips:
- For hybrid connectivity questions: a site-to-site VPN typically requires a Virtual Network Gateway + GatewaySubnet (and usually a Local Network Gateway + VPN connection).
- If the question emphasizes private connectivity rather than web traffic distribution, think “gateway,” not “load balancer/application gateway.”
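The "/27 or larger" sizing guidance can be sanity-checked with a short sketch. It uses the fact that Azure reserves 5 IP addresses in every subnet (network and broadcast addresses plus 3 Azure-reserved addresses); the example CIDR ranges are arbitrary.

```python
import ipaddress

# Azure reserves 5 addresses per subnet: network, broadcast, and 3 internal.
AZURE_RESERVED = 5

def usable_ips(cidr):
    """Return how many addresses remain for gateway instances in a subnet."""
    net = ipaddress.ip_network(cidr)
    return max(net.num_addresses - AZURE_RESERVED, 0)

# A /29 GatewaySubnet leaves very little headroom; a /27 leaves room for
# active-active instances and future gateway-related features.
for cidr in ["10.0.255.0/29", "10.0.255.0/27"]:
    print(cidr, "->", usable_ips(cidr), "usable addresses")
```

This is why the minimum technically allowed size can still be a poor choice: growth features such as active-active gateways consume additional addresses from the same subnet.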
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. Your Azure environment contains multiple Azure virtual machines. You need to ensure that a virtual machine named VM1 is accessible from the Internet over HTTP. Solution: You modify an Azure firewall. Does this meet the goal?
Answering "Yes" assumes that Azure Firewall is the required and sufficient control to expose VM1 over HTTP. However, Azure Firewall is not automatically in the inbound path for Internet-to-VM traffic and does not publish services without explicit DNAT configuration and routing. Without those specific configurations (and typically corresponding NSG allowances), modifying the firewall would not make VM1 accessible over TCP/80 from the Internet.
Azure Firewall changes alone do not guarantee that VM1 becomes reachable from the Internet over HTTP. To publish HTTP via Azure Firewall, you must configure a public IP on the firewall, create a DNAT rule mapping TCP/80 to VM1’s private IP, and ensure routing sends inbound traffic through the firewall. In many deployments, inbound Internet access is instead enabled via a public IP (or load balancer) and NSG rules, so simply modifying the firewall does not meet the stated goal.
Core concept: This question tests how to publish an Azure VM to the Internet over HTTP (TCP/80) and which Azure networking/security components are appropriate for enabling inbound Internet access.

Why the answer is correct: Modifying Azure Firewall does not inherently make a specific VM (VM1) reachable from the Internet over HTTP unless the firewall is actually in the inbound traffic path and is configured with the required public IP, routing (UDRs), and DNAT/network rules to forward TCP/80 to VM1. In many standard VM deployments, inbound Internet access is controlled by the VM’s public IP (or a load balancer/app gateway) and Network Security Group (NSG) rules on the subnet/NIC. Azure Firewall is primarily used for centralized filtering and egress control, and it is not automatically used for inbound publishing of a VM. Therefore, “modify an Azure firewall” by itself does not meet the goal.

Key features / configurations:
- Typical inbound HTTP exposure for a VM: Public IP on the VM NIC or a Public Load Balancer + an NSG inbound rule allowing TCP/80.
- Azure Firewall inbound publishing requires: Azure Firewall with a Public IP + a DNAT rule (TCP/80 -> VM1 private IP:80) + correct routing so traffic traverses the firewall.
- NSGs still commonly apply at the subnet/NIC level to allow the forwarded traffic.

Common misconceptions:
- Assuming Azure Firewall automatically protects/publishes all VMs in a VNet without explicit routing and DNAT configuration.
- Confusing Azure Firewall (centralized filtering) with services designed for inbound publishing, like Public Load Balancer or Application Gateway.
- Forgetting that inbound access to a VM typically requires both a public endpoint and an allow rule (NSG) for the port.

Exam tips:
- For “VM accessible from the Internet on port X,” think: Public IP or public load balancer + NSG inbound allow.
- Azure Firewall can publish inbound traffic only with DNAT and correct routing; it’s not automatic.
- If the question doesn’t mention DNAT/public IP/routing through the firewall, “modify Azure Firewall” is usually not sufficient.
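To illustrate why an explicit DNAT rule is the missing piece, here is a sketch of the parameters such a rule must carry. The rule name, field names, and IP addresses are hypothetical (loosely modeled on the ARM NAT-rule shape); the point is that every field must be set deliberately—no generic "firewall modification" supplies them.

```python
def dnat_rule(firewall_public_ip, vm_private_ip, port=80):
    """Sketch of a DNAT rule publishing a VM over HTTP through a firewall.

    Inbound traffic hitting the firewall's public IP on `port` is translated
    to the VM's private IP on the same port.
    """
    return {
        "name": "publish-vm1-http",                   # illustrative name
        "ruleType": "NatRule",
        "ipProtocols": ["TCP"],
        "sourceAddresses": ["*"],                      # any Internet client
        "destinationAddresses": [firewall_public_ip],  # firewall public IP
        "destinationPorts": [str(port)],
        "translatedAddress": vm_private_ip,            # VM1 private IP
        "translatedPort": str(port),
    }

# Hypothetical addresses: a documentation-range public IP and a VNet private IP.
rule = dnat_rule("203.0.113.10", "10.0.1.4")
print(rule["destinationPorts"], "->", rule["translatedAddress"])
```

Even with this rule in place, a UDR must route inbound traffic through the firewall and an NSG must allow TCP/80 to the VM—which is why "modify an Azure firewall" alone fails the scenario.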
HOTSPOT - For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:
All Azure services in private preview must be accessed by using a separate Azure portal.
No. Private preview does not mean you must use a separate Azure portal. Most private preview features are accessed through the same Azure portal and management plane (Azure Resource Manager), but require explicit enablement—such as being allowlisted by Microsoft, registering a feature flag, using a specific API version, or deploying via CLI/PowerShell/ARM templates. The defining characteristic of private preview is restricted availability (invite-only) and limited support, not a different portal. A separate portal is not a standard requirement for preview services and is not how Microsoft generally delivers previews. On the exam, treat “separate portal” as a distractor; focus instead on access restrictions and the lack of production guarantees.
Azure services in public preview can be used in production environments.
No. Public preview services/features are generally not recommended for production environments. While Microsoft may allow you to deploy and test them, public preview is intended for evaluation, feedback, and non-critical workloads. Because the service can change, may have limited support, and typically has no SLA, using it for production workloads conflicts with reliability best practices. For AZ-900, the safe rule is: production workloads should use GA services unless Microsoft explicitly states otherwise for a specific preview. In Well-Architected terms, running production on preview increases operational and reliability risk due to potential breaking changes and lack of uptime commitments.
Azure services in public preview are subject to a Service Level Agreement (SLA).
No. Azure services in public preview are typically not covered by an SLA. SLAs are generally associated with GA services and specify Microsoft’s uptime commitments when the service is deployed according to the SLA requirements. Preview offerings (public preview and private preview) commonly exclude SLA guarantees and may also have limited support. This is a frequent AZ-900 exam point: “Preview = no SLA.” Even if a preview feature is available broadly, Microsoft does not usually provide the same contractual uptime commitments until the service reaches GA.
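Since the contrast here is that GA services carry uptime commitments while previews do not, a quick sketch shows what an SLA percentage actually permits in downtime. The 30-day month is a simplifying assumption for illustration.

```python
# Convert an SLA uptime percentage into allowed downtime per 30-day month.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes, assuming a 30-day month

def allowed_downtime_minutes(sla_percent):
    """Minutes of downtime per month permitted by an uptime percentage."""
    return (1 - sla_percent / 100) * MINUTES_PER_MONTH

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime -> {allowed_downtime_minutes(sla):.1f} min/month")
```

A preview service makes no such commitment at all, which is the contractual gap the exam statement is probing.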