
What are two characteristics of the public cloud? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
Dedicated hardware is not a standard characteristic of the public cloud. Public cloud typically uses shared physical infrastructure with logical isolation (multi-tenancy). While Azure offers dedicated options (e.g., Azure Dedicated Host, isolated SKUs) for compliance or licensing needs, these are specialized services rather than the defining model for public cloud.
Unsecured connections are not a characteristic of public cloud. Public cloud services are designed to be secured through strong identity (Microsoft Entra ID), encryption in transit/at rest, network controls (NSGs, firewalls), and private connectivity options (VPN/ExpressRoute, Private Link). Security is a shared responsibility, not an inherent weakness of public cloud.
Limited storage is the opposite of a typical public cloud characteristic. Public cloud platforms provide scalable storage that can grow on demand (e.g., Azure Blob Storage, managed disks, Azure Files) with various performance and redundancy tiers. While quotas and service limits exist, the model is generally elastic and expandable rather than inherently limited.
Metered pricing is a key public cloud characteristic: you pay for what you use based on measured consumption (CPU time, GB-month storage, transactions, bandwidth, etc.). Azure supports consumption-based billing and also offers commitment-based discounts (Reserved Instances, Savings Plans), but usage is still measured and billed accordingly—matching the “measured service” concept.
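The "pay for what you use" arithmetic behind metered pricing can be sketched in a few lines. This is an illustrative model only — the rates below are hypothetical placeholders, not actual Azure prices:

```python
# Hypothetical pay-as-you-go bill. All rates are illustrative assumptions,
# NOT actual Azure prices.
VM_RATE_PER_HOUR = 0.10            # $/hour for a small VM (assumed)
STORAGE_RATE_PER_GB_MONTH = 0.02   # $/GB-month of blob storage (assumed)
EGRESS_RATE_PER_GB = 0.05          # $/GB of outbound bandwidth (assumed)

def monthly_bill(vm_hours: float, storage_gb: float, egress_gb: float) -> float:
    """Metered billing: each dimension is measured and charged independently."""
    return (vm_hours * VM_RATE_PER_HOUR
            + storage_gb * STORAGE_RATE_PER_GB_MONTH
            + egress_gb * EGRESS_RATE_PER_GB)

# A VM that runs only 200 hours costs far less than one left running all
# month (~730 hours) — the essence of consumption-based billing.
print(monthly_bill(200, 50, 10))  # 200*0.10 + 50*0.02 + 10*0.05
print(monthly_bill(730, 50, 10))  # 730*0.10 + 50*0.02 + 10*0.05
```

Shutting down dev/test resources outside business hours directly reduces the `vm_hours` term, which is why it appears as a standard cost-optimization practice.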
Self-service management is a defining public cloud trait: customers can provision, configure, and deprovision resources on demand without provider intervention. In Azure, this is enabled through the portal, APIs, ARM/Bicep, Azure CLI/PowerShell, and automation tools. This supports rapid provisioning, agility, and operational consistency through Infrastructure as Code.
Core Concept: This question tests foundational public cloud characteristics commonly emphasized in AZ-900: consumption-based (pay-as-you-go) billing and on-demand self-service. Public cloud refers to cloud services delivered over the internet by a third-party provider (for example, Microsoft Azure) using shared infrastructure with logical isolation.

Why the Answer is Correct: D (metered pricing) is a defining public cloud trait: customers are billed based on measured usage (compute time, storage consumed, requests, egress bandwidth, etc.). This aligns with the NIST cloud model’s “measured service” and is central to Azure’s consumption model (with some services also offering reserved capacity/commitments, but still grounded in usage measurement). E (self-service management) is also core: customers can provision and manage resources on demand without requiring human interaction with the provider. In Azure, this is done via the Azure portal, ARM/Bicep templates, Azure CLI/PowerShell, and APIs—supporting rapid provisioning and elasticity.

Key Features / Best Practices: Public cloud typically provides rapid provisioning, scalability/elasticity, global reach, and a shared responsibility model. From an Azure Well-Architected Framework perspective, metered pricing supports Cost Optimization (pay only for what you use, right-size, autoscale, shut down dev/test), while self-service management supports Operational Excellence (automation, IaC, repeatable deployments, policy-driven governance). Azure also offers tools like Cost Management + Billing, budgets, and tagging to manage consumption.

Common Misconceptions: “A. dedicated hardware” can exist in public cloud (e.g., Azure Dedicated Host), but it is not a general characteristic of the public cloud; it’s a specialized option. “B. unsecured connections” is incorrect because public cloud can be secured using encryption, private endpoints, VPN/ExpressRoute, NSGs, firewalls, and identity controls. “C. limited storage” is incorrect because public cloud is known for scalable, effectively on-demand storage capacity.

Exam Tips: For AZ-900, map public cloud to NIST characteristics: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. When you see “metered pricing” and “self-service,” they are strong indicators of public cloud fundamentals.
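The on-demand self-service trait described above can be sketched with the Azure CLI. All resource names below (`demo-rg`, `demostorage123`) are hypothetical; the point is that the customer provisions and deprovisions directly, with no request to the provider:

```shell
# Self-service lifecycle: create, use, and tear down resources on demand.
# Names are illustrative placeholders.
az group create --name demo-rg --location eastus

az storage account create \
  --name demostorage123 \
  --resource-group demo-rg \
  --sku Standard_LRS

# Deprovision when no longer needed — billing stops with the resources.
az group delete --name demo-rg --yes
```

The same commands can be captured in ARM/Bicep templates for the repeatable, Infrastructure-as-Code deployments the summary mentions.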
You have an on-premises network that contains 100 servers. You need to recommend a solution that provides additional resources to your users. The solution must minimize capital and operational expenditure costs. What should you include in the recommendation?
A complete migration to the public cloud can reduce CapEx by eliminating on-prem hardware purchases and can lower some OpEx through managed services. However, “complete migration” is not always the most cost-minimizing recommendation when an organization already has substantial on-premises investment and only needs extra capacity. Migration projects also introduce transition costs, time, and potential refactoring.
An additional data center increases both CapEx and OpEx. You must purchase land/space, servers, networking, and storage, and you must pay ongoing costs for power, cooling, physical security, maintenance, and staffing. This directly conflicts with the requirement to minimize capital and operational expenditure and does not provide the elasticity benefits of cloud.
A private cloud provides cloud-like management and virtualization but is still owned and operated by the organization (or dedicated hosting). It typically requires significant upfront investment in hardware and ongoing operational management, so it does not minimize CapEx/OpEx compared to using public cloud resources on demand. It also lacks the same elasticity and pay-as-you-go economics.
A hybrid cloud lets you keep existing on-premises servers while using Azure for additional capacity when needed. This supports pay-as-you-go scaling (reducing CapEx) and offloads some operational responsibilities to the cloud provider (reducing OpEx). It’s a common approach for cloud bursting, dev/test, backup, and disaster recovery while maintaining on-premises workloads.
Core concept: This question tests cloud deployment models and cost optimization. The key idea is using cloud elasticity (scale up/down on demand) to add capacity without buying and operating more on-premises hardware.

Why the answer is correct: A hybrid cloud combines on-premises infrastructure with public cloud resources. With 100 existing on-prem servers, the organization can “burst” into Azure when additional compute/storage is needed, rather than purchasing new servers (capital expense) or building/expanding a data center. This minimizes CapEx by avoiding upfront hardware purchases and minimizes OpEx by reducing ongoing costs for power, cooling, physical security, and hardware lifecycle management. In practice, you keep steady-state workloads on-premises and use Azure for variable or peak demand, disaster recovery, dev/test, or new services.

Key features and best practices: Hybrid is commonly enabled through connectivity (VPN Gateway or ExpressRoute), identity integration (Microsoft Entra ID with hybrid identity), and management/governance (Azure Arc, Azure Policy). For “additional resources,” typical patterns include cloud bursting with virtual machines/scale sets, adding storage (Azure Storage), or using PaaS services to offload operational burden. From the Azure Well-Architected Framework cost optimization pillar, hybrid supports right-sizing and pay-as-you-go consumption while maintaining existing investments.

Common misconceptions: A complete migration to the public cloud can also reduce CapEx, but it is not always the lowest-cost or fastest path when you already have significant on-premises assets and may have constraints (latency, data residency, legacy apps). A private cloud sounds “cloud-like,” but it still requires buying and operating hardware—often higher CapEx/OpEx. An additional data center is the most expensive option.

Exam tips: When the question mentions an existing on-prem environment and the need for “additional resources” with minimal CapEx/OpEx, think “hybrid cloud” and “cloud bursting.” If the question instead says “eliminate data center” or “move everything,” then public cloud migration becomes more likely.
Your company plans to request an architectural review of an Azure environment from Microsoft. The company currently has a Basic support plan. You need to recommend a new support plan for the company. The solution must minimize costs. Which support plan should you recommend?
Premier support can provide architectural guidance and extensive proactive services, so it would satisfy the technical requirement. However, it is a high-end enterprise support offering intended for organizations needing broad, ongoing support management and proactive engagement. Because the question specifically asks to minimize costs, Premier is more expensive than necessary. Professional Direct provides the needed architectural advisory capability at a lower cost.
Developer support is intended for trial, development, and non-production use cases rather than enterprise architectural review needs. It offers limited support scope and does not include the advisory architecture services implied by a formal architectural review request. Although it is cheaper, it does not meet the requirement. Cost minimization only applies after the functional requirement is satisfied.
Professional Direct is the correct choice because it includes advisory support and architecture guidance capabilities that go beyond standard break-fix technical support. An architectural review from Microsoft implies access to experts who can assess design decisions and provide recommendations, which aligns with Professional Direct benefits. It is also less expensive than Premier, so it best meets the requirement to minimize costs while still enabling the requested service. Among the listed options, it is the lowest tier that fits the need for architecture-focused engagement.
Standard support provides technical support for production workloads, including faster response and 24/7 access for certain severities, but it is primarily a reactive support plan. It does not include the advisory or architecture review services associated with Professional Direct. The question is specifically about requesting an architectural review from Microsoft, which requires more than standard technical support. Therefore, Standard is insufficient even though it costs less.
Core concept: This question tests knowledge of Azure support plans and which plan includes advisory support such as architecture guidance or architectural reviews from Microsoft. In AZ-900, Basic covers only billing/subscription management, while higher tiers add technical support and, at the Professional Direct level, advisory services.

Why correct: If a company wants Microsoft to perform or assist with an architectural review, it needs a plan that includes advisory support beyond break-fix technical assistance. Among the listed options, Professional Direct is the lowest-cost plan that provides architecture support/advisory capabilities from Microsoft. Therefore, it satisfies the requirement while still minimizing cost relative to Premier.

Key features: Professional Direct includes business-critical support, faster response times than Standard, and advisory services such as architecture guidance and support from Microsoft experts. Standard is primarily reactive technical support for production workloads, not a proactive architecture review offering. Premier is a higher-end enterprise support model with broader proactive services, but it is more expensive than necessary here.

Common misconceptions: A common mistake is assuming Standard is enough because it supports production workloads and offers 24/7 technical support. However, production support is not the same as architectural review or advisory engagement. Another misconception is choosing Premier because it certainly includes such services, but the question explicitly asks to minimize costs.

Exam tips: On AZ-900, distinguish between reactive technical support and proactive/advisory support. Basic is billing only, Developer is for non-production, Standard is production technical support, and Professional Direct adds advisory capabilities. When the requirement mentions architecture reviews, advisory services, or proactive guidance, Professional Direct is typically the minimum suitable choice among standard support plans.
HOTSPOT - For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:
If you have Azure resources deployed to every region, you can implement availability zones in all the regions.
No. Availability Zones are not available in all Azure regions. Even if you deploy resources to every Azure region, you can only use Availability Zones in regions that are “zone-enabled.” Additionally, zone support can vary by service within a region (for example, a region may support zonal VMs but a specific PaaS offering might not be zone-redundant there yet). For AZ-900, the key point is that zones are a regional feature and are not universally supported. The correct approach is to check the official Azure “regions and availability zones” documentation (and service-specific documentation) to confirm whether a region supports zones before designing for zonal resiliency. Therefore, the statement that you can implement availability zones in all regions is false.
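The recommended check — confirming zone support for a region and service before designing for zonal resiliency — can be done from the Azure CLI. The region and VM size below are illustrative:

```shell
# List SKU availability for a region; the "Zones" column shows which
# availability zones (if any) support the given VM size there.
# Region and size are example values.
az vm list-skus --location eastus --size Standard_D2s_v3 --output table

# A region without availability zones returns an empty Zones column,
# signaling that zonal deployment is not an option there.
```

This reflects the two variables the explanation calls out: whether the region is zone-enabled at all, and whether the specific SKU is offered in those zones.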
Only virtual machines that run Windows Server can be created in availability zones.
No. Availability Zones are not limited to Windows Server virtual machines. Azure supports creating zonal virtual machines running both Windows and Linux, and the zonal placement is an infrastructure attribute (zone 1/2/3) rather than an operating system feature. In practice, you can deploy Windows Server, various Linux distributions, and many marketplace images into a specific zone, assuming the VM size and the region support zonal deployment. The statement reflects a common misconception that certain resiliency features are OS-specific; in Azure, high availability constructs like zones apply broadly across compute and other services. For exam purposes: OS choice does not determine whether a VM can be deployed into an Availability Zone.
Availability zones are used to replicate data and applications to multiple regions.
No. Availability Zones are designed for high availability within a single region by distributing resources across multiple physically separate datacenters in that region. Replicating data and applications to multiple regions is a multi-region disaster recovery (DR) strategy, not an Availability Zone capability. Cross-region replication is typically provided by services and patterns such as Azure Storage geo-redundant options (GRS/GZRS), Azure SQL active geo-replication, Azure Site Recovery, or multi-region application routing using Azure Front Door or Traffic Manager. Availability Zones help protect against a datacenter/zone failure; multi-region replication helps protect against a regional outage. Since the statement claims zones replicate to multiple regions, it is incorrect.
This question requires that you evaluate the underlined text to determine if it is correct.
Azure Key Vault is used to store secrets for Azure Active Directory (Azure AD) user accounts.
Instructions: Review the underlined text. If it makes the statement correct, select No change is needed. If the statement is incorrect, select the answer choice that makes the statement correct.
The statement is incorrect because Azure Key Vault is not used to store secrets for Azure AD user accounts (for example, user passwords). User authentication credentials are managed by Azure AD/Entra ID and are not intended to be stored and retrieved as Key Vault secrets. Key Vault is primarily for application/service secrets, keys, and certificates, not end-user credential storage. Leaving the text unchanged preserves the incorrect implication about user account secrets.
Even for Azure AD administrative accounts, Key Vault is not the system of record for their credentials (passwords/MFA methods). Administrative accounts are still user identities whose authentication data is managed within Azure AD/Entra ID, not stored as retrievable secrets in Key Vault. While admins may manage Key Vault, that does not mean their account secrets belong there. The option does not correct the fundamental mismatch between Key Vault’s purpose and user account credential storage.
Azure Key Vault is not intended as a general repository for Personally Identifiable Information (PII). Although Key Vault can store small secret values, PII is typically stored in databases or storage services with appropriate encryption, access controls, and data governance features, while Key Vault manages cryptographic material and application secrets. Using Key Vault for PII at scale is not its primary design and does not match the statement’s context about “secrets for Azure AD user accounts.” This option shifts the topic but does not accurately describe Key Vault’s core use case.
Azure Key Vault is designed to securely store and manage secrets that applications and services use, such as API keys, connection strings, and certificates. In Azure, “server applications” commonly need credentials like service principal client secrets or certificates to authenticate to resources, and Key Vault is a best-practice location to store and rotate those secrets. This aligns with Key Vault’s lifecycle management features (versioning, expiration, rotation) and controlled access via Entra ID (Azure AD). Therefore, replacing the statement with “server applications” makes it accurate.
Core concept: This question tests your understanding of what Azure Key Vault is designed to store (secrets, keys, certificates) and what types of identities or data it is typically used with, versus what Azure AD user accounts represent.

Why the answer is correct: Azure Key Vault is commonly used to store and manage secrets used by applications and services (for example, connection strings, API keys, client secrets, and certificates) rather than “secrets for Azure AD user accounts.” Azure AD user accounts authenticate via credentials managed by Azure AD (passwords, MFA methods, etc.), and those user credentials are not stored and retrieved from Key Vault as application secrets. In contrast, server-side applications frequently need non-human credentials (like service principal client secrets or certificates) to authenticate to Azure resources, and Key Vault is the recommended secure store for those application secrets.

Key features / configurations:
- Secrets, Keys, Certificates objects in Azure Key Vault (and Managed HSM for HSM-backed keys)
- Integration patterns: applications retrieve secrets at runtime via Key Vault REST API/SDK
- Authentication/authorization to Key Vault via Microsoft Entra ID (Azure AD) and access control via RBAC or Key Vault access policies
- Common use cases: storing service principal client secrets, TLS certificates, database connection strings, API tokens

Common misconceptions:
- Assuming Key Vault stores Azure AD user passwords or user credential material; user credentials are managed by the identity provider (Entra ID/Azure AD), not stored as retrievable secrets in Key Vault.
- Confusing “Azure AD secrets” (app registrations/service principals) with “Azure AD user account secrets.” Key Vault is appropriate for app/service credentials, not end-user passwords.
- Thinking Key Vault is a general-purpose data store for sensitive data like PII; it is optimized for secret/key/cert lifecycle management, not arbitrary sensitive records.

Exam tips:
- Key Vault stores: secrets/keys/certificates for apps and services—not user passwords.
- For app-to-Azure authentication, store client secrets/certs in Key Vault and use Entra ID to authorize access.
- PII belongs in appropriate data stores with encryption and access controls; Key Vault is for managing cryptographic material and secrets.
- Watch wording: “user accounts” vs “applications/service principals” is often the deciding factor.
HOTSPOT - Which cloud deployment solution is used for Azure virtual machines and Azure SQL databases? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:
Azure virtual machines:
Azure virtual machines are Infrastructure as a Service (IaaS). In IaaS, Azure provides the physical servers, storage, networking, and hypervisor, but you are responsible for managing the guest operating system and everything above it. That includes OS configuration, patching, antivirus/endpoint protection, installed middleware, and your applications. You also control VM sizing, disks, and network settings (NSGs, routing, etc.). Why not PaaS? PaaS abstracts away the OS and much of the platform management (for example, Azure App Service), which is not the case for VMs—you still administer the OS. Why not SaaS? SaaS is a finished application consumed by users (like email or CRM). A VM is not a finished application; it is a compute building block you deploy and manage, which aligns directly with IaaS in the shared responsibility model.
Azure SQL databases:
Azure SQL Database is Platform as a Service (PaaS). Microsoft manages the underlying infrastructure and the database platform components, including the OS, SQL engine patching, automated backups, and built-in high availability (depending on service tier and configuration). You primarily manage the database itself: schema design, data, queries, performance tuning at the logical level, and security settings (logins/users, firewall rules, auditing, etc.). Why not IaaS? If you ran SQL Server on an Azure VM, that would be IaaS because you would manage the OS and SQL Server installation/patching. Azure SQL Database removes that responsibility. Why not SaaS? SaaS would be a complete end-user application. Azure SQL Database is a database platform service used by applications, not an end-user application itself, so it fits PaaS.
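The PaaS responsibility split is visible in how little you specify at provisioning time. A hedged Azure CLI sketch — server name, database name, admin credentials, and tier below are all hypothetical placeholders:

```shell
# Azure SQL Database (PaaS): you create a logical server and a database.
# There is no OS to patch and no SQL engine to install — Microsoft manages
# that layer. All names and the password placeholder are illustrative.
az sql server create \
  --name demo-sqlserver \
  --resource-group demo-rg \
  --location eastus \
  --admin-user demoadmin \
  --admin-password '<strong-password-here>'

az sql db create \
  --name demo-db \
  --server demo-sqlserver \
  --resource-group demo-rg \
  --service-objective S0   # service tier choice replaces hardware sizing
```

Contrast this with IaaS, where `az vm create` would hand you a machine on which you still install, patch, and back up SQL Server yourself.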
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure subscription named Subscription1. You sign in to the Azure portal and create a resource group named RG1. From Azure documentation, you have the following command that creates a virtual machine named VM1:

`az vm create --resource-group RG1 --name VM1 --image UbuntuLTS --generate-ssh-keys`

You need to create VM1 in Subscription1 by using the command. Solution: From the Azure portal, launch Azure Cloud Shell and select PowerShell. Run the command in Cloud Shell. Does this meet the goal?
The solution meets the goal because Azure Cloud Shell provides access to the Azure CLI in both Bash and PowerShell sessions. The command shown, `az vm create --resource-group RG1 --name VM1 --image UbuntuLTS --generate-ssh-keys`, is a valid Azure CLI command and can be executed from Cloud Shell PowerShell without needing to convert it to an Az PowerShell cmdlet. Since the user is signed in to the Azure portal and Cloud Shell uses that authenticated context, the command can create VM1 in the subscription context available to the session. Therefore, launching Cloud Shell, selecting PowerShell, and running the command is sufficient to meet the requirement.
Answering "No" incorrectly assumes that Azure CLI commands cannot be run from the PowerShell version of Azure Cloud Shell. In reality, Cloud Shell PowerShell includes the Azure CLI alongside the Az PowerShell module, so `az` commands are supported there as well. While Bash is commonly used in examples, it is not a requirement for Azure CLI execution in Cloud Shell. Therefore, saying the solution does not meet the goal is technically inaccurate.
Core concept: This question tests whether Azure Cloud Shell in PowerShell can run an Azure CLI command to create a virtual machine in an Azure subscription. Azure Cloud Shell includes both Bash and PowerShell environments, and the Azure CLI is available in either shell. Therefore, selecting PowerShell does not prevent the `az vm create` command from working. A common misconception is that Azure CLI commands only work in Bash and that PowerShell requires Az module cmdlets instead; in Cloud Shell, both toolsets are available. Exam tip: distinguish between shell preference and tool availability—Cloud Shell PowerShell can still run Azure CLI commands such as `az vm create`.
Which Azure service should you use to store certificates?
Azure Security Center, now called Microsoft Defender for Cloud, is a security posture management and workload protection service. It helps identify security risks, provide recommendations, and monitor threats across Azure and hybrid environments. It does not serve as a repository for storing certificates, secrets, or private keys, so it is not the correct answer here.
An Azure Storage account can store any file type (including certificate files like .pfx or .cer) as blobs, but it is not designed for secure certificate lifecycle management. While you can apply encryption and access controls, it lacks certificate-specific features such as controlled retrieval patterns for apps, rotation/renewal workflows, and dedicated secret auditing. For exam purposes, Storage is not the correct service for certificate storage.
Azure Key Vault is the correct service for storing and managing certificates, secrets, and cryptographic keys. It provides secure storage, fine-grained access control (RBAC/access policies), auditing via logs, and integration with many Azure services and managed identities. Key Vault supports certificate import and management, and helps implement best practices like least privilege and centralized secret management.
Azure Information Protection (AIP) focuses on classifying, labeling, and protecting documents and emails using rights management and encryption policies. It helps prevent data leakage and enforce access to information, but it is not a service for storing certificates or private keys. AIP is about protecting content, not managing cryptographic assets for applications and infrastructure.
Core concept: This question tests knowledge of Azure’s dedicated service for securely storing sensitive cryptographic material. In Azure, certificates, secrets, and keys are stored in Azure Key Vault, which is designed to protect these assets and control access to them.

Why correct: Azure Key Vault is the Azure service specifically built to store and manage certificates, secrets, and cryptographic keys. It provides secure storage, access control, and auditing, and it integrates with Azure applications and services so certificates do not need to be embedded in code or configuration files.

Key features: Key Vault supports storing imported certificates and managing certificate objects, including associated keys and secrets. It offers access control through Azure RBAC or vault access policies, logging through Azure Monitor, and security features such as soft delete and purge protection. It is the standard Azure answer whenever an exam question asks where to securely store certificates, passwords, or keys.

Common misconceptions: A Storage account can hold certificate files, but it is not the dedicated Azure service for secure certificate management. Microsoft Defender for Cloud helps monitor and improve security posture, but it does not store certificates. Azure Information Protection protects documents and emails with classification and rights management, not certificate storage.

Exam tips: For AZ-900, associate certificates, secrets, and encryption keys with Azure Key Vault. If the question asks where to securely store sensitive application secrets or certificates in Azure, Key Vault is the expected answer.
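The certificate workflow can be sketched with the Azure CLI. Vault name, certificate name, and file path below are hypothetical:

```shell
# Create a vault, import an existing certificate, and inspect its metadata.
# All names are illustrative placeholders.
az keyvault create --name demo-kv --resource-group demo-rg --location eastus

az keyvault certificate import \
  --vault-name demo-kv \
  --name demo-cert \
  --file ./cert.pfx

# Lifecycle management: query the expiry so rotation can be planned/automated.
az keyvault certificate show \
  --vault-name demo-kv \
  --name demo-cert \
  --query attributes.expires
```

Applications then retrieve the certificate at runtime under RBAC or access-policy control, instead of bundling the `.pfx` into code or configuration.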
Your company plans to deploy an Artificial Intelligence (AI) solution in Azure. What should the company use to build, test, and deploy predictive analytics solutions?
Azure Logic Apps is a low-code workflow automation service used to integrate apps, data, and services with connectors and triggers. It can orchestrate steps around an AI solution (e.g., call an ML endpoint, route results, send notifications), but it does not provide capabilities to build, train, test, and deploy predictive models. It’s primarily for process automation and integration, not machine learning development.
Azure Machine Learning Designer is a visual, drag-and-drop tool within Azure Machine Learning for creating end-to-end machine learning pipelines. It supports preparing data, training models, evaluating/testing performance, and deploying models for inference. This directly matches the requirement to build, test, and deploy predictive analytics solutions, especially in an exam context where “Designer” implies a no-code/low-code ML workflow.
Azure Batch is a service for running large-scale parallel and high-performance computing (HPC) workloads. While it can be used to execute compute-intensive tasks (including parts of data processing or model training scripts), it does not provide the managed ML lifecycle features such as experiment tracking, model registry, evaluation modules, or straightforward deployment to real-time endpoints. It’s compute orchestration, not an ML platform.
Azure Cosmos DB is a globally distributed NoSQL database for low-latency data storage and replication. It can store application data or even features used by ML systems, but it is not used to build, test, or deploy predictive analytics models. In AI architectures, Cosmos DB is typically a data layer component, whereas Azure Machine Learning is the model development and deployment layer.
Core concept: This question tests recognition of the Azure services used to build, test, and deploy predictive analytics (machine learning) solutions. In Azure, the primary managed service for the end-to-end machine learning lifecycle (data prep, training, evaluation, and deployment) is Azure Machine Learning, and its no-code/low-code interface is Azure Machine Learning Designer.

Why the answer is correct: Azure Machine Learning Designer provides a visual drag-and-drop environment for creating machine learning pipelines for predictive analytics. It supports building models (e.g., classification/regression), testing and evaluating them with built-in metrics, and operationalizing them by deploying real-time endpoints (online inference) or batch inference pipelines. This aligns directly with “build, test, and deploy predictive analytics solutions,” which is the typical ML workflow.

Key features and best practices: Designer includes prebuilt modules for data ingestion, feature engineering, algorithm selection, training, cross-validation, and scoring. It integrates with Azure Machine Learning workspaces for experiment tracking, the model registry, and MLOps capabilities (often via GitHub/Azure DevOps). For deployment, Azure ML can host managed online endpoints and supports scaling, authentication, and monitoring, all of which matter for reliability and security (Azure Well-Architected Framework: Reliability, Security, Operational Excellence). It also supports responsible AI tooling and governance patterns through workspace-based access control (RBAC) and integration with Azure Monitor/Log Analytics.

Common misconceptions: Some options are “automation” or “compute” services that can play an indirect role in AI solutions, but they are not purpose-built for ML model development and deployment. For example, Azure Batch can run large-scale jobs, but it doesn’t provide ML pipeline design, model management, or endpoint deployment. Logic Apps orchestrates workflows; it doesn’t train models. Cosmos DB stores data, not models.

Exam tips: For AZ-900, map keywords to services: “predictive analytics,” “train model,” “deploy model,” and “ML lifecycle” typically point to Azure Machine Learning (Designer/Studio). If the question emphasizes no-code visual building, “Designer” is the strongest match. If it emphasizes custom code and notebooks, the answer may be “Azure Machine Learning” more generally, but Designer remains the correct option here.
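The build/test/deploy lifecycle that Azure Machine Learning Designer manages visually can be illustrated with a minimal, framework-free sketch. The data, the trivial threshold "model," and all function names below are hypothetical illustrations of the workflow stages, not the Azure ML API:

```python
# Minimal sketch of the ML lifecycle (build -> test -> deploy) that a
# service like Azure Machine Learning Designer automates visually.
# Hypothetical toy data and a trivial threshold "model" for illustration only.

def evaluate(threshold, samples):
    """Test step: accuracy of a threshold model on (feature, label) pairs."""
    correct = sum(1 for x, y in samples if (x >= threshold) == (y == 1))
    return correct / len(samples)

def train(samples):
    """Build step: pick the threshold that best separates the labels."""
    best_threshold, best_acc = 0.0, 0.0
    for candidate, _ in samples:
        acc = evaluate(candidate, samples)
        if acc > best_acc:
            best_threshold, best_acc = candidate, acc
    return best_threshold

def make_endpoint(threshold):
    """Deploy step: expose the trained model as a scoring function
    (conceptually, a real-time inference endpoint)."""
    return lambda x: 1 if x >= threshold else 0

training_data = [(0.1, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]
model = train(training_data)
score = make_endpoint(model)
print(evaluate(model, training_data))  # 1.0 on this separable toy set
print(score(0.7))                      # 1
```

In Azure ML Designer the same stages appear as pipeline modules (train model, score model, evaluate model) and a deployment target, rather than hand-written functions.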
You plan to deploy several Azure virtual machines. You need to control the ports that devices on the Internet can use to access the virtual machines. What should you use?
A network security group (NSG) is the correct choice because it filters network traffic to and from Azure resources. You create inbound rules that allow only specific ports (e.g., 80/443) from specific sources (e.g., the Internet or a trusted IP range) and deny everything else. NSGs can be applied at the subnet or NIC level; they are stateful, evaluate rules in priority order, and deny inbound Internet traffic by default.
An Azure AD role is used for Azure role-based access control (RBAC), which governs what actions an identity can perform on Azure resources (management plane), such as starting/stopping a VM or modifying settings. It does not control network connectivity or which TCP/UDP ports are open to the Internet. Port control is a networking function handled by NSGs, firewalls, or similar network security services.
An Azure AD group is a collection of users/devices used to assign permissions via RBAC or to manage access to applications and resources. While groups can simplify authorization (who can manage or access something), they do not enforce network-layer rules like opening or closing ports to a VM. Network access control for VM ports is implemented with NSGs (and optionally Azure Firewall/WAF).
Azure Key Vault is designed to securely store and manage secrets, encryption keys, and certificates, supporting scenarios like TLS certificate management and application secret storage. Although Key Vault can be protected with access policies/RBAC and network controls for the vault itself, it does not control inbound/outbound ports to virtual machines. VM port exposure to the Internet is controlled by NSGs and related network security controls.
Core concept: This question tests Azure network traffic filtering for virtual machines. In Azure, controlling which inbound (and outbound) ports are allowed to reach VMs is primarily done with network security groups (NSGs), which act as stateful packet filters at the network layer.

Why the answer is correct: To control the ports that devices on the Internet can use to access Azure virtual machines, you use an NSG to define inbound security rules (for example, allow TCP 443 from the Internet, deny TCP 3389 from the Internet, allow SSH only from a specific IP range). NSGs can be associated with a subnet (affecting all resources in that subnet) or directly with a VM’s network interface (NIC) for more granular control. This aligns with the Azure Well-Architected Framework Security pillar: enforce least-privilege network access and reduce exposure by allowing only required ports and sources.

Key features and best practices: NSGs contain prioritized rules (lower number = higher priority) and include default rules (such as allowing VNet traffic and denying inbound traffic from the Internet). They are stateful, meaning return traffic is automatically allowed. You can restrict by source/destination IP, port, and protocol, and use service tags (e.g., Internet, VirtualNetwork) and application security groups (ASGs) to simplify rule management at scale. In real deployments, you often combine NSGs with Azure Firewall or a WAF for advanced inspection, but NSGs are the fundamental control for VM port access.

Common misconceptions: Azure AD roles and groups control identity and permissions to Azure resources (management plane), not network ports (data plane). Azure Key Vault secures secrets, keys, and certificates; it does not filter network traffic to VMs.

Exam tips: If the question mentions “control ports,” “allow/deny inbound/outbound traffic,” “VM access,” or “subnet/NIC rules,” think NSG. If it mentions “who can administer resources,” think Azure RBAC/Azure AD. If it mentions “store secrets/certificates,” think Key Vault.
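The NSG evaluation model described above (prioritized rules, first match wins, lower number = higher priority, default deny for inbound Internet traffic) can be sketched as follows. The rule shapes are deliberately simplified and hypothetical; real NSGs also match protocol, source ports, address prefixes, and service tags:

```python
# Simplified sketch of NSG inbound rule evaluation: rules are checked in
# priority order (lower number first) and the first matching rule decides.
# Hypothetical rule shape; real NSGs match protocol, prefixes, tags, etc.

def evaluate_inbound(rules, source, dest_port):
    # rules: list of dicts with priority, source, port ("*" = any), access
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["source"] in (source, "*") and rule["port"] in (dest_port, "*"):
            return rule["access"]
    # No custom rule matched: mirrors the default DenyAllInbound behavior
    # for traffic arriving from the Internet.
    return "Deny"

rules = [
    {"priority": 100, "source": "Internet",    "port": 443,  "access": "Allow"},
    {"priority": 200, "source": "Internet",    "port": 3389, "access": "Deny"},
    {"priority": 300, "source": "10.0.0.0/24", "port": 22,   "access": "Allow"},
]

print(evaluate_inbound(rules, "Internet", 443))   # Allow (HTTPS permitted)
print(evaluate_inbound(rules, "Internet", 3389))  # Deny (RDP blocked)
print(evaluate_inbound(rules, "Internet", 8080))  # Deny (default deny)
```

This is why rule priority matters on the exam: a low-priority (high-number) Allow rule never fires if a higher-priority Deny rule already matched the same traffic.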