
Microsoft
584+ free practice questions with AI-verified answers
Microsoft Azure Administrator
AI-powered
Every Microsoft AZ-104 answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed explanations for every option and in-depth question analyses.
You have an Azure subscription that contains a policy-based virtual network gateway named GW1 and a virtual network named VNet1. You need to ensure that you can configure a point-to-site connection from an on-premises computer to VNet1. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
Incorrect. Service endpoints are used to provide optimized private access from a VNet to Azure PaaS services such as Storage or SQL Database. They do not create VPN connectivity and have no role in enabling point-to-site access from an on-premises computer. This option is unrelated to virtual network gateway type or P2S requirements.
Incorrect. Resetting a virtual network gateway can help recover from transient operational issues, but it does not alter the gateway architecture or supported features. A reset will not convert a policy-based gateway into a route-based gateway and therefore will not make P2S possible. The problem here is a design limitation, not a temporary fault.
Correct. Point-to-site VPN in Azure requires a route-based virtual network gateway because P2S relies on dynamic routing capabilities that policy-based gateways do not provide. Creating a route-based gateway is therefore a mandatory step before any P2S configuration can be applied. This is the supported gateway type for client VPN connections from individual on-premises devices into an Azure virtual network.
Incorrect. Adding a connection to GW1 is not the required step for enabling point-to-site on a policy-based gateway. P2S is configured directly on a supported route-based virtual network gateway using client address pools and authentication settings, not by simply adding a connection object to an unsupported gateway. Since GW1 cannot support P2S at all, this action would not solve the problem.
Correct. GW1 is currently policy-based, and Azure does not allow changing an existing policy-based gateway to route-based directly. To deploy the required route-based gateway, you must first delete the unsupported existing gateway. This is a standard replacement scenario in Azure networking and is often tested as a prerequisite for enabling P2S.
Incorrect. Azure virtual networks use private IP address spaces, and adding public IP address space to a VNet is neither required nor appropriate for point-to-site VPN. P2S requires a VPN client address pool and a public IP resource associated with the gateway deployment, not public address ranges assigned to the VNet itself. This option reflects a misunderstanding of Azure VNet addressing.
Core concept: Azure point-to-site (P2S) VPN connections are supported only on route-based virtual network gateways. A policy-based gateway cannot be used for P2S, and Azure does not support converting a policy-based gateway to route-based in place.

Why correct: Because GW1 is policy-based, it must be deleted and replaced with a new route-based virtual network gateway before P2S can be configured.

Key features: Route-based gateways support P2S, site-to-site, and VNet-to-VNet scenarios, while policy-based gateways are more limited and do not support P2S.

Common misconceptions: Creating a connection object is not what enables P2S on an unsupported gateway type, and service endpoints or VNet address changes are unrelated to VPN gateway capabilities.

Exam tips: When a question mentions P2S and a policy-based gateway, immediately think: delete and recreate the gateway as route-based.
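The delete-and-recreate flow can be sketched with Azure CLI. The resource group name (RG1), replacement gateway name (GW2), and public IP name are illustrative assumptions, not from the question.

```shell
# Delete the unsupported policy-based gateway (names are illustrative).
az network vnet-gateway delete --name GW1 --resource-group RG1

# Create a public IP for the replacement gateway.
az network public-ip create --name GW2-pip --resource-group RG1 --sku Standard

# Create a route-based gateway in VNet1; --vpn-type RouteBased is the setting
# that makes point-to-site possible.
az network vnet-gateway create --name GW2 --resource-group RG1 \
  --vnet VNet1 --public-ip-address GW2-pip \
  --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1
```

Note that gateway creation can take 30 minutes or more, and the VNet must contain a GatewaySubnet before the create command succeeds.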
Want to practice all the questions on the go?
Download Cloud Pass for free – with practice tests, progress tracking, and more.
You have an on-premises server that contains a folder named D:\Folder1. You need to copy the contents of D:\Folder1 to the public container in an Azure Storage account named contosodata. Which command should you run?
This is only a URL to the container endpoint, not a command. While the URL format is correct for a container in the contosodata storage account, it does not perform any action by itself. In practice, you would use this URL as the destination parameter in a tool like AzCopy (often with a SAS token appended) or in SDK/CLI operations.
`azcopy sync` is used to synchronize a source and destination so they match, which is different from a straightforward copy requirement. Additionally, `--snapshot` is not the appropriate flag for uploading a local folder to a container; snapshots apply to blobs. For simply copying folder contents to Blob Storage, `azcopy copy ... --recursive` is the expected command.
This is the correct command. `azcopy copy` supports uploading from a local folder to an Azure Blob container. The `--recursive` flag is required to include all files and subdirectories under D:\Folder1. This aligns with common AZ-104 expectations: use AzCopy for bulk transfers from on-premises to Azure Storage and use recursive copy for directories.
`az storage blob copy start-batch` is an Azure CLI command intended for starting server-side copy operations between blobs/containers (typically source and destination are in Azure and referenced by URLs). It is not designed to upload content from a local Windows path like D:\Folder1 directly into Blob Storage. For local-to-blob uploads, AzCopy is the appropriate tool.
Core concept: This question tests how to upload data from an on-premises file system to an Azure Storage account blob container using the correct tool and syntax. For AZ-104, the expected approach is to use AzCopy for high-performance data transfer to Azure Blob Storage.

Why the answer is correct: To copy the contents of a local folder (D:\Folder1) into a blob container (public) in the storage account contosodata, you use the AzCopy v10 command `azcopy copy` with the destination container URL and the `--recursive` flag. `--recursive` is required to traverse the directory and upload all files and subfolders. The command in option C correctly specifies a local source path and a blob container destination URL, and it includes `--recursive`, which is the key requirement when copying a directory.

Key features / best practices: AzCopy is optimized for throughput, supports parallelism, and is the recommended tool for bulk uploads to Blob Storage. In real deployments, you typically authenticate using Azure AD (`azcopy login`) or a SAS token appended to the destination URL (common in automation). From an Azure Well-Architected Framework perspective (Performance Efficiency and Reliability), AzCopy is preferred over ad-hoc methods because it is resilient, restartable, and designed for large transfers.

Common misconceptions: Many candidates confuse "copy" vs "sync." `azcopy sync` is for mirroring and can delete destination files depending on flags; it's not the simplest or safest choice when the requirement is just "copy the contents." Another trap is using Azure CLI blob copy commands, which generally perform server-side copies between blobs/URLs, not uploads from local disk.

Exam tips: When the source is on-prem/local and the target is Blob Storage, think AzCopy. If the source is a folder, look for `--recursive`. If you see `az storage blob copy start-batch`, remember it's typically for copying existing blobs, not uploading local files. Also note that a plain container URL alone is not a command, and authentication (AAD/SAS) is assumed unless explicitly asked.
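A minimal sketch of the correct command, assuming a SAS token has been generated for the contosodata account (the token value is a placeholder):

```shell
# Upload the folder contents to the public container; --recursive traverses
# all files and subfolders under D:\Folder1.
# Append a SAS token to the URL, or run `azcopy login` first for Azure AD auth.
azcopy copy "D:\Folder1" "https://contosodata.blob.core.windows.net/public?<SAS-token>" --recursive
```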
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure subscription that contains 10 virtual networks. The virtual networks are hosted in separate resource groups. Another administrator plans to create several network security groups (NSGs) in the subscription. You need to ensure that when an NSG is created, it automatically blocks TCP port 8080 between the virtual networks. Solution: You assign a built-in policy definition to the subscription. Does this meet the goal?
Yes is incorrect because assigning just any built-in policy definition does not guarantee that NSGs will automatically block TCP port 8080 between the virtual networks. The requirement is too specific to assume a matching built-in policy exists. To enforce this consistently, you would typically need a custom Azure Policy definition that checks for or deploys the required NSG rule. Without that specificity, the proposed solution is insufficient.
No is correct because the solution only says to assign a built-in policy definition, and there is no indication that a built-in policy exists to automatically create or enforce an NSG rule that blocks TCP 8080 between virtual networks. Azure Policy can govern NSGs, but highly specific traffic rules like this generally require a custom policy definition. The subscription scope is appropriate for all resource groups, but the built-in-policy limitation means the stated solution does not reliably meet the requirement. Therefore, the goal is not met by the proposed solution as written.
Core concept: This question tests Azure Policy enforcement for network security groups. Azure Policy can evaluate and enforce resource configuration at creation or update time, but the key detail is whether a built-in policy definition exists that specifically ensures NSGs block TCP port 8080 between virtual networks.

Why correct: The proposed solution does not meet the goal because simply assigning a built-in policy definition is not sufficient unless there is a built-in policy that enforces the exact required NSG rule. In this scenario, the requirement is very specific: whenever an NSG is created, it must automatically block TCP 8080 traffic between the virtual networks. That level of custom rule enforcement typically requires a custom Azure Policy definition, often using DeployIfNotExists or Deny logic, rather than relying on a generic built-in policy.

Key features: Azure Policy can be assigned at the subscription scope, which is appropriate because the virtual networks and NSGs are spread across separate resource groups. Policy can evaluate NSGs during deployment and can deny noncompliant resources or deploy required settings, depending on the policy effect. However, built-in policies cover common governance scenarios and do not always match highly specific network rule requirements such as a custom deny rule for TCP 8080 between VNets.

Common misconceptions: A common mistake is assuming that any built-in policy can automatically add or enforce detailed NSG rules. Built-in policies are limited to the definitions Microsoft provides, and many specific NSG rule requirements need a custom policy. Another misconception is that assigning a policy alone guarantees remediation; the policy definition and effect must explicitly support the desired enforcement behavior.

Exam tips: For AZ-104, when a question asks whether a built-in policy can enforce a very specific configuration, be skeptical unless the requirement matches a known built-in policy exactly. If the requirement involves custom ports, directions, address prefixes, or traffic patterns, a custom Azure Policy is usually needed. Also remember that subscription-level assignment is useful for cross-resource-group governance, but scope alone does not make the policy capable of enforcing the exact rule.
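The custom-policy approach can be sketched with Azure CLI. The definition name, rules file, and subscription ID are placeholders, and the rules file itself (the Deny/DeployIfNotExists logic targeting Microsoft.Network/networkSecurityGroups) is assumed to be authored separately:

```shell
# Create a custom policy definition from a local rules file containing the
# NSG-rule logic for blocking TCP 8080 (file content is an assumption).
az policy definition create --name deny-nsg-missing-8080-block \
  --mode All --rules nsg-8080-rules.json

# Assign it at subscription scope so NSGs in every resource group are covered.
az policy assignment create --name enforce-8080-block \
  --policy deny-nsg-missing-8080-block \
  --scope /subscriptions/<subscription-id>
```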
You recently created a new Azure subscription that contains a user named Admin1.
Admin1 attempts to deploy an Azure Marketplace resource by using an Azure Resource Manager template. Admin1 deploys the template by using Azure PowerShell and receives the following error message: "User failed validation to purchase resources. Error message: Legal terms have not been accepted for this item on this subscription. To accept legal terms, please go to the Azure portal (http://go.microsoft.com/fwlink/?LinkId=534873) and configure programmatic deployment for the Marketplace item or create it there for the first time."
You need to ensure that Admin1 can deploy the Marketplace resource successfully.
What should you do?
Set-AzApiManagementSubscription manages subscriptions within Azure API Management (APIM) for API consumers. It is unrelated to Azure Marketplace purchases or legal terms acceptance. This cmdlet won’t affect ARM template deployments of Marketplace resources and does not address the subscription-level Marketplace terms requirement indicated by the error.
Registering the Microsoft.Marketplace resource provider can be required for some Marketplace-related operations, but the error message is specifically about unaccepted legal terms, not provider registration. If provider registration were the issue, you would typically see errors about an unregistered namespace/provider. Registering the provider alone will not accept publisher terms.
Set-AzMarketplaceTerms is used to accept the legal terms for a specific Marketplace offer/plan in a subscription, enabling programmatic deployments (ARM/PowerShell/CLI). This directly resolves the stated validation failure. Typically you run Get-AzMarketplaceTerms to view the terms and then Set-AzMarketplaceTerms -Accept to record acceptance for that subscription.
Assigning the Billing administrator role changes who can manage billing, invoices, and some purchase-related settings, but it does not automatically accept Marketplace legal terms for a specific offer/plan. The deployment failure is due to missing acceptance, not lack of billing permissions. Even with billing rights, you still must accept the terms explicitly.
Core concept: This question tests Azure Marketplace legal terms acceptance for programmatic deployments (ARM/Bicep/PowerShell/CLI). Many Marketplace offers require accepting publisher legal terms on a per-subscription basis before you can deploy them via automation. This is a governance and deployment prerequisite, not a compute or RBAC issue.

Why the answer is correct: The error explicitly states that "Legal terms have not been accepted for this item on this subscription" and instructs you to "configure programmatic deployment for the Marketplace item or create it there for the first time." In Azure PowerShell, the correct way to accept Marketplace terms is to retrieve the terms (Get-AzMarketplaceTerms) and then accept them using Set-AzMarketplaceTerms (typically with -Accept). Once accepted, ARM template deployments that reference that Marketplace offer can proceed successfully.

Key features / how it works: Marketplace offers (VM images, managed applications, some SaaS) can require legal acceptance. Acceptance is stored at the subscription level for a specific publisher/offer/plan combination. For automation, you must accept terms programmatically (PowerShell/CLI/REST) or deploy once through the portal (which prompts for acceptance). This aligns with Azure Well-Architected Framework governance principles: ensure prerequisites and compliance requirements are met before automated deployments.

Common misconceptions: A common mistake is assuming the issue is a missing resource provider registration (Microsoft.Marketplace) or insufficient RBAC/billing permissions. While provider registration can block deployments, the error message would indicate provider registration issues, not legal terms. Similarly, assigning Billing administrator doesn't automatically accept legal terms; it only changes who can manage billing.

Exam tips: When you see "Legal terms have not been accepted" for Marketplace deployments, think "accept Marketplace terms" (Set-AzMarketplaceTerms / az vm image terms accept). Also remember acceptance is per subscription and per plan; changing the plan in the template may require accepting terms again.

Reference: Azure Marketplace programmatic deployment and terms acceptance (Get/Set-AzMarketplaceTerms) in Microsoft documentation.
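For VM image offers, the acceptance step can be sketched with Azure CLI; the publisher, offer, and plan values are placeholders for the Marketplace item being deployed:

```shell
# Inspect the current terms state for the offer/plan on this subscription.
az vm image terms show --publisher <publisher> --offer <offer> --plan <plan>

# Accept the terms so programmatic (ARM/PowerShell/CLI) deployments can proceed.
az vm image terms accept --publisher <publisher> --offer <offer> --plan <plan>
```

The PowerShell equivalent mentioned in the explanation is piping Get-AzMarketplaceTerms into Set-AzMarketplaceTerms -Accept.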
You have an Azure Active Directory (Azure AD) tenant that contains 5,000 user accounts. You create a new user account named AdminUser1. You need to assign the User administrator administrative role to AdminUser1. What should you do from the user account properties?
Incorrect. Assigning a license from the Licenses blade enables access to services and may unlock features (e.g., Entra ID P1/P2 capabilities), but it does not grant Azure AD administrative permissions. Administrative privileges are controlled by directory role assignments (or PIM eligibility/activation), not by product licensing alone.
Correct. From the user account properties, the Directory role (often shown as Assigned roles) blade is where you add or modify Azure AD directory role assignments for that user. Selecting User administrator here grants AdminUser1 the built-in administrative permissions associated with managing users and groups in the tenant.
Incorrect. Inviting or adding the user to a group does not inherently assign an Azure AD administrative role. Group membership only grants admin permissions if the group is specifically configured as a role-assignable group and then assigned the User administrator role (a different workflow than described).
Core concept: This question tests Azure AD (Microsoft Entra ID) role-based access control for identity administration. Administrative permissions in Azure AD are granted through directory roles (Entra built-in roles such as User administrator), not through licenses or group invitations (unless using role-assignable groups, which is a different workflow).

Why the answer is correct: To assign the User administrator role to AdminUser1 from the user account properties in the Azure portal, you use the user's Directory role (or Assigned roles) blade and add/modify the directory role assignment. This directly grants AdminUser1 the permissions associated with the User administrator role across the tenant (subject to any scoped administrative units, if used). With 5,000 users, the tenant size doesn't change the method; role assignment is still done via directory roles.

Key features / best practices:
- Azure AD roles provide least-privilege administrative access. User administrator can manage users and groups but is less privileged than Global administrator.
- Follow Azure Well-Architected Framework security principles: least privilege, separation of duties, and just-in-time access. In production, consider using Privileged Identity Management (PIM) to make the role eligible and require activation with approval/MFA.
- Role assignments can be done at the user object level (as in this question) or via role-assignable groups (if enabled) to simplify administration.

Common misconceptions:
- Licenses enable product features (e.g., M365, Entra ID P1/P2 capabilities) but do not grant admin permissions by themselves.
- Adding a user to a group only grants permissions if that group is assigned a role (role-assignable group) or used in an access policy; simply "inviting to a group" doesn't assign an Azure AD admin role.

Exam tips:
- If the question says "assign an administrative role," think "Directory role / Assigned roles," not licenses.
- Distinguish Azure RBAC roles (for Azure resources) from Azure AD directory roles (for tenant identity administration). User administrator is a directory role.
- For modern exam scenarios, remember PIM is the recommended approach, but the portal blade for direct assignment remains Directory role/Assigned roles.
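Outside the portal, the same assignment can be sketched against Microsoft Graph via `az rest`. The role definition ID shown is the commonly documented User Administrator role template ID, and the user object ID is a placeholder; verify both in your tenant before running:

```shell
# Assign the User administrator directory role to a user via Microsoft Graph.
# fe930be7-5e62-47db-91af-98c3a49a38b1 is the User Administrator template ID
# (verify against your tenant's role definitions).
az rest --method POST \
  --url https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments \
  --body '{
    "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
    "roleDefinitionId": "fe930be7-5e62-47db-91af-98c3a49a38b1",
    "principalId": "<AdminUser1-object-id>",
    "directoryScopeId": "/"
  }'
```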
You have an Azure Active Directory (Azure AD) tenant named contoso.onmicrosoft.com that contains 100 user accounts. You purchase 10 Azure AD Premium P2 licenses for the tenant. You need to ensure that 10 users can use all the Azure AD Premium features. What should you do?
Correct. Azure AD Premium P2 is a per-user license, so the 10 users must have the P2 license assigned to them to use all Premium features. Assigning the license from the Licenses blade is the direct and standard way to provide entitlement. Once assigned, those users can use P2 capabilities such as Identity Protection and Privileged Identity Management, subject to configuration. This matches the requirement exactly because it ensures only the selected 10 users receive the Premium feature rights.
Incorrect. Simply adding or inviting users to a group does not grant Azure AD Premium P2 features by itself. Group membership only helps if group-based licensing is configured and the P2 license is assigned to that group, which is not stated in this option. The wording says to invite the users to a group, not to assign licenses through the group. Therefore, this action alone would not ensure the 10 users can use all Premium features.
Incorrect. Adding an enterprise application is used for application integration, single sign-on, and service principal management. It has nothing to do with assigning Azure AD Premium P2 licenses to users. Users do not gain Premium directory capabilities merely because an enterprise application exists in the tenant. This option does not address the licensing requirement in the question.
Incorrect. Directory roles determine administrative permissions within Azure AD, such as User Administrator or Global Administrator. They do not provide license entitlement for Azure AD Premium P2 features. A user can hold an admin role and still lack access rights to Premium features if no P2 license is assigned. Therefore, modifying the directory role would not satisfy the requirement.
Core concept: This question tests Azure AD (Microsoft Entra ID) licensing. Azure AD Premium features are enabled for users only when the appropriate Premium license is assigned to those users, either directly or through group-based licensing.

Why correct: Because the tenant has purchased 10 Azure AD Premium P2 licenses and needs 10 users to use all Premium features, the required action is to assign those licenses to the 10 target users. Without license assignment, users in the tenant cannot legally or technically be considered entitled to use Premium P2 capabilities such as Identity Protection, Privileged Identity Management, and access reviews.

Key features:
- Azure AD Premium P2 is a per-user license.
- Licenses can be assigned directly from Azure AD Licenses or indirectly by group-based licensing.
- Only users with the assigned P2 license should use P2-only features.
- Administrative roles and group membership alone do not grant Premium feature entitlement.

Common misconceptions:
- Adding users to groups does not by itself grant Premium features unless a license is assigned to that group.
- Directory roles control permissions, not licensing.
- Enterprise applications provide app integration and SSO configuration, not user license entitlement.

Exam tips: For AZ-104, when a question asks how to enable Azure AD Premium features for specific users, think first about license assignment. If an option explicitly says to assign a license, that is usually the correct answer unless the question specifically mentions group-based licensing.
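Programmatically, license assignment goes through the Microsoft Graph assignLicense action; a hedged sketch via `az rest`, with the user ID and skuId as placeholders you would look up first:

```shell
# Find the Azure AD Premium P2 skuId available in the tenant.
az rest --method GET --url https://graph.microsoft.com/v1.0/subscribedSkus \
  --query "value[].{sku:skuPartNumber, id:skuId}"

# Assign that license to one of the 10 target users (repeat per user, or use
# group-based licensing). The user must have usageLocation set beforehand.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/users/<user-id>/assignLicense" \
  --body '{"addLicenses":[{"skuId":"<p2-skuId>","disabledPlans":[]}],"removeLicenses":[]}'
```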
Your on-premises network contains an SMB share named Share1. You have an Azure subscription that contains the following resources: ✑ A web app named webapp1 ✑ A virtual network named VNET1 You need to ensure that webapp1 can connect to Share1. What should you deploy?
Azure Application Gateway is a Layer 7 (HTTP/HTTPS) load balancer and reverse proxy, often used with Web Application Firewall (WAF). It helps publish and protect web endpoints and route web traffic to backends. It does not provide general network connectivity to on-premises resources over SMB, nor does it create a VPN tunnel. Therefore it won’t enable webapp1 to access an on-prem SMB share.
Azure AD Application Proxy is designed to provide secure remote access to on-premises web applications (HTTP/HTTPS) by placing a connector on-prem and using Azure AD for pre-authentication. It is not intended for file share access or SMB protocol forwarding. Since Share1 is an SMB share, Application Proxy cannot provide the required network path or protocol support for webapp1 to connect to it.
An Azure Virtual Network Gateway enables hybrid connectivity such as Site-to-Site VPN between an Azure VNet (VNET1) and an on-premises network. This is the correct building block to extend your network so Azure workloads can reach on-prem IPs, including an SMB share like Share1. Combined with App Service VNet Integration, webapp1 can route traffic into VNET1 and across the VPN tunnel to access Share1.
Core concept: This question tests hybrid connectivity from an Azure App Service (webapp1) to an on-premises SMB file share. App Service is a PaaS offering that doesn't sit directly on your on-prem network, so you must provide a secure network path from Azure to on-premises. In AZ-104, this typically maps to site-to-site VPN (or ExpressRoute) connectivity using a Virtual Network Gateway.

Why the answer is correct: To allow webapp1 to reach an on-prem SMB share (Share1), you need network-level connectivity between Azure (VNET1) and the on-prem network. Deploying an Azure Virtual Network Gateway enables a site-to-site VPN connection from VNET1 to your on-premises VPN device. Then, webapp1 can use VNet Integration to route outbound traffic into VNET1 and across the VPN tunnel to reach Share1 over SMB (TCP 445), assuming routing, DNS, and firewall rules allow it.

Key features / configuration notes:
- Deploy a Virtual Network Gateway in VNET1 and configure a Local Network Gateway representing the on-prem address spaces.
- Establish a site-to-site IPsec/IKE VPN to your on-prem VPN device.
- Configure App Service VNet Integration for webapp1 (regional VNet Integration) so the app can send traffic into VNET1.
- Ensure name resolution for Share1 (private DNS, custom DNS servers, or Azure DNS Private Resolver) and allow SMB (445) through on-prem firewalls.
- From an Azure Well-Architected Framework perspective, this improves Security (private connectivity), Reliability (a stable tunnel with the proper SKU), and Operational Excellence (centralized network control).

Common misconceptions:
- Application Gateway is for HTTP/HTTPS load balancing and WAF, not for enabling SMB connectivity to on-prem.
- Azure AD Application Proxy publishes internal web apps to external users via HTTP/HTTPS, not SMB shares.

Exam tips: When you see "Azure resource needs to connect to the on-prem network," think "VPN Gateway/ExpressRoute." When you see "publish an internal web app externally," think "AAD App Proxy." When you see "HTTP(S) reverse proxy/WAF," think "Application Gateway."
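The configuration notes above can be sketched with Azure CLI, assuming the gateway in VNET1 already exists; the resource group (RG1), gateway name (VNET1-GW), subnet name, on-prem IP, and address prefix are illustrative:

```shell
# Local network gateway representing the on-prem VPN device and address space.
az network local-gateway create -g RG1 -n OnPremLGW \
  --gateway-ip-address <onprem-vpn-public-ip> --local-address-prefixes 10.10.0.0/16

# Site-to-site IPsec connection from the VNET1 gateway to on-premises.
az network vpn-connection create -g RG1 -n S2S-Conn \
  --vnet-gateway1 VNET1-GW --local-gateway2 OnPremLGW --shared-key <pre-shared-key>

# Route webapp1's outbound traffic into VNET1 via regional VNet Integration.
az webapp vnet-integration add -g RG1 -n webapp1 --vnet VNET1 --subnet <integration-subnet>
```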
You have an Azure web app named webapp1. Users report that they often experience HTTP 500 errors when they connect to webapp1. You need to provide the developers of webapp1 with real-time access to the connection errors. The solution must provide all the connection error details. What should you do first?
Incorrect. Web server logging records HTTP request and response information such as URLs, methods, status codes, and timing data, but it generally does not include the full internal exception details that caused an HTTP 500 error. That makes it useful for confirming that failures occurred, but less useful for developers who need the root cause. Since the question emphasizes providing all the error details to developers, application logging is the better first action.
Incorrect. An Azure Monitor workbook is only a visualization and reporting tool. It depends on logs or metrics already being collected and does not itself enable diagnostic capture for the web app. Creating a workbook first would not provide any new real-time error details unless the appropriate logging had already been turned on.
Incorrect. Service Health alerts are designed to notify administrators about Azure platform outages, maintenance events, or service advisories. They do not capture application-specific HTTP 500 failures or provide request-by-request diagnostic details from a web app. This option does not help developers investigate the internal cause of intermittent server errors.
Correct. Application Logging captures diagnostic messages generated by the web app itself, including unhandled exceptions, framework errors, and trace output that explain why HTTP 500 responses are occurring. These logs can be accessed in near real time through log streaming, which directly satisfies the requirement to give developers immediate visibility into the failures. Because the question asks for all the connection error details relevant to troubleshooting the app failure, application-level diagnostics are the most useful first step.
Core concept: This question is about choosing the right Azure App Service diagnostic feature to help developers troubleshoot intermittent HTTP 500 errors in real time. HTTP 500 indicates a server-side application failure, so the most useful first step is to capture application-generated diagnostic output rather than only HTTP access records. Application Logging in Azure App Service records app-level errors, exceptions, and trace messages that reveal why the request failed.

Why correct: Turning on Application Logging gives developers near real-time visibility into the actual runtime failures causing the 500 responses. These logs can be streamed live and typically include exception messages, stack traces, and framework/application diagnostics, which are the details developers need to fix the issue. Since the requirement is to provide developers with real-time access to the error details, enabling Application Logging is the most direct and appropriate first action.

Key features: Application Logging supports live log streaming and can write to the file system for immediate troubleshooting. It captures messages emitted by the application framework or code, which is where most HTTP 500 root causes are exposed. This makes it more useful than request-only logging when diagnosing internal server errors.

Common misconceptions: Web server logging records requests and response codes, but it usually does not contain the full exception details behind a 500 error. Workbooks only visualize existing telemetry and do not start data collection. Service Health alerts are for Azure platform incidents, not application-specific failures.

Exam tips: For App Service troubleshooting, distinguish between request-level logs and application-level logs. If the problem is an HTTP 500 and developers need the underlying error details, prefer Application Logging first. If the requirement were specifically about request history, client IPs, or raw HTTP access records, web server logging would be more appropriate.
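A minimal Azure CLI sketch of enabling application logging and streaming it live; the resource group name is a placeholder:

```shell
# Turn on application logging to the file system (quickest path for
# live troubleshooting; filesystem logging auto-disables after a period).
az webapp log config --resource-group <rg> --name webapp1 \
  --application-logging filesystem --level error

# Stream the application logs in near real time for the developers.
az webapp log tail --resource-group <rg> --name webapp1
```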
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure subscription named Subscription1. Subscription1 contains a resource group named RG1. RG1 contains resources that were deployed by using templates. You need to view the date and time when the resources were created in RG1. Solution: From the Subscriptions blade, you select the subscription, and then click Programmatic deployment. Does this meet the goal?
Yes is incorrect because Programmatic deployment from the Subscriptions blade is not the appropriate, direct method to view the date/time resources were created in a specific resource group. Creation timing for template-based deployments is best obtained from the resource group’s deployment history (RG1 -> Deployments) or from the Activity log filtered to RG1.
No is correct because the Programmatic deployment blade under a subscription does not display the date and time when resources in RG1 were created. That blade is intended to help you deploy resources through templates, PowerShell, CLI, or SDKs, not to review historical deployment timestamps. To determine when template-deployed resources were created, you should look at RG1's Deployments blade or review the Activity log for create operations. Therefore, the proposed navigation path does not meet the stated goal.
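For illustration, the deployment history for RG1 (including creation timestamps) can be listed with the Azure CLI, which is exactly the data the Programmatic deployment blade does not show. A minimal sketch, assuming the CLI is signed in to Subscription1:

```shell
# List the template deployments recorded against RG1, with the
# timestamp of each deployment and its provisioning state.
az deployment group list --resource-group RG1 \
  --query "[].{name:name, timestamp:properties.timestamp, state:properties.provisioningState}" \
  --output table
```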
Core concept: This question tests Azure VM maintenance controls and how to proactively respond to platform maintenance. Azure may notify you that a VM is scheduled for maintenance (for example, host OS updates or hardware servicing). The relevant capabilities are found under VM Maintenance/Updates and include options like “Redeploy” (move to a new host) and, in some cases, “Self-service maintenance” controls.

Why the answer is correct: Selecting “One-time update” from the VM1 Updates blade does not meet the goal of moving the VM to a different host immediately. “One-time update” is associated with applying updates/patches (guest OS updates or update management actions) and does not force a host change. To move a VM to a different host immediately, the typical action is to Redeploy the VM (which stops/deallocates and starts it on a new node) or to use maintenance controls specifically designed for platform maintenance events. Therefore, the proposed solution does not satisfy the requirement.

Key features and best practices:
- “Redeploy” is the common operational action to force a VM to move to a new Azure host. It results in downtime and a new host assignment, while preserving disks and configuration.
- For higher availability, use Availability Sets or Availability Zones so that platform maintenance affects only a subset of instances and you can fail over at the application layer.
- The Azure Well-Architected Framework (Reliability) recommends designing for planned maintenance via redundancy (zones/sets) rather than relying on reactive host moves.

Common misconceptions: It is easy to confuse “Updates” (patching/Update Management) with “Maintenance” (platform host maintenance). Even though both relate to “maintenance,” only redeploy/maintenance controls affect host placement. Applying a one-time update may reduce guest OS vulnerability but will not change the underlying host.

Exam tips: For AZ-104, remember: “Redeploy” = move VM to a new host. “Restart” does not guarantee a host change. “Updates/One-time update” relates to patching, not host migration. If the question explicitly says “move to a different host immediately,” think Redeploy (or zone/availability design if asked for prevention).
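A minimal sketch of the redeploy action with the Azure CLI. The resource group name is a placeholder; note that this restarts the VM while it moves to a new host:

```shell
# Redeploy VM1 to a new Azure host: the VM is shut down, moved to a
# new node, and powered back on. Disks and configuration are preserved.
az vm redeploy --resource-group RG1 --name VM1
```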
You have an Azure subscription named Subscription1 and an on-premises deployment of Microsoft System Center Service Manager. Subscription1 contains a virtual machine named VM1. You need to ensure that an alert is set in Service Manager when the amount of available memory on VM1 is below 10 percent. What should you do first?
Create an automation runbook is not the first step for integrating Azure alerts with Service Manager. Runbooks (Azure Automation) can be used to execute remediation or custom integrations, but they require you to build and maintain the ticket-creation logic yourself (often via APIs). The question asks what to do first to ensure an alert is set in Service Manager; the standard prerequisite is the ITSM Connector integration.
Deploy a function app could be used to implement a custom webhook endpoint that receives Azure Monitor alerts and then calls Service Manager APIs. However, this is a custom solution and not the expected first step in an AZ-104 context. Microsoft provides the IT Service Management Connector specifically to integrate Azure Monitor with ITSM tools like Service Manager, making a Function App unnecessary for the baseline requirement.
Deploy the IT Service Management Connector (ITSM) is the correct first step because it establishes the integration between Azure Monitor and System Center Service Manager. Once ITSMC is configured, you can create an Azure Monitor alert rule for VM1 memory and use an action group to create a corresponding incident/alert in Service Manager. Without ITSMC, Azure Monitor cannot natively open Service Manager work items.
Create a notification is insufficient because a notification (email/SMS/push) does not create an alert/incident record inside Service Manager. Notifications are delivered to people or endpoints, but Service Manager requires an integration path to create work items. In Azure Monitor, that integration is typically implemented via an action group connected to the ITSM Connector, not by a simple notification alone.
Core concept: This question tests integrating Azure monitoring/alerting with an ITSM tool (System Center Service Manager) so that Azure alerts create incidents/alerts in Service Manager. In Azure, the typical flow is: collect metrics (Azure Monitor), create an alert rule (metric alert), and route the alert to an ITSM system via an action group using the IT Service Management Connector (ITSMC).

Why the answer is correct: The first prerequisite to get an alert “set in Service Manager” from an Azure resource is establishing the integration between Azure Monitor and Service Manager. That integration is provided by the IT Service Management Connector (ITSM). Without ITSMC configured, Azure Monitor action groups cannot create work items (incidents/alerts) in Service Manager. After ITSMC is deployed and configured, you would then create a metric alert on VM1 for “Available memory” (or the appropriate memory metric via VM insights/AMA/Log Analytics if needed) with a threshold of <10%, and attach an action group that targets the ITSM connector.

Key features / best practices:
- Azure Monitor metric alerts evaluate platform metrics on a schedule and trigger action groups.
- Action groups can integrate with ITSM via ITSMC to create incidents in Service Manager.
- From an Azure Well-Architected Framework perspective (Reliability/Operational Excellence), centralizing alert-to-ticket automation reduces MTTR and ensures consistent incident management.
- Ensure the VM is emitting the required memory telemetry (often via Azure Monitor Agent + VM insights/Log Analytics for guest memory signals), then alert and route through ITSM.

Common misconceptions:
- Thinking you must start with an automation runbook or Function App: those can create custom ticketing workflows, but they are not the standard/required first step for Service Manager integration.
- Thinking a “notification” alone is enough: notifications (email/SMS/webhook) don’t automatically create Service Manager work items without the connector.

Exam tips: When the requirement explicitly says “set an alert in Service Manager” (an on-prem ITSM tool), look for the Azure Monitor-to-ITSM integration component. In Microsoft exam scenarios, the ITSM Connector is the canonical first step before configuring alert rules and action groups.
You sign up for Azure Active Directory (Azure AD) Premium P2. You need to add a user named user@contoso.com as an administrator on all the computers that will be joined to the Azure AD domain. What should you configure in Azure AD?
Correct. Device settings in the Devices blade include the Azure AD configuration for additional local administrators on Azure AD joined devices. This tenant-level setting is specifically designed to control which users are automatically placed in the local Administrators group on Azure AD-joined Windows devices. Because the requirement applies to all computers that will be joined to Azure AD, a device-wide configuration is needed rather than a per-user or per-group administrative setting. This makes the Devices blade the correct administrative location for the task.
Incorrect. Providers in the MFA Server blade are used to configure multi-factor authentication integration and legacy MFA Server-related settings. Those settings affect how users authenticate, but they do not control local Windows administrator membership on Azure AD-joined devices. The question is about device administration after join, not authentication policy. Therefore, MFA Server settings are unrelated to the requested configuration.
Incorrect. User settings in the Users blade manage user-related options and profile-level configurations within Azure AD. They do not provide the tenant-wide control that determines who is added to the local Administrators group on Azure AD-joined Windows devices. Although the target of the configuration is a user account, the scope of the requirement is all joined computers, which makes this a device configuration problem. For that reason, the Users blade is not the correct place to configure it.
Incorrect. General settings in the Groups blade are intended for managing group behavior, such as naming conventions, expiration, and self-service group features. They do not define which users receive local administrator rights on Azure AD-joined devices. The question asks for a built-in Azure AD device administration setting that applies across all joined computers, which is found under Devices rather than Groups. As a result, this option does not satisfy the requirement.
Core concept: This question tests Azure AD device administration for Azure AD-joined devices. When Windows devices are joined to Azure AD, Azure AD controls which users are automatically added to the local Administrators group on those devices. This is configured at the tenant level in Azure AD device settings.

Why the answer is correct: To make a specific user (user@contoso.com) an administrator on all Azure AD-joined computers, you configure the setting that determines “Additional local administrators on Azure AD joined devices.” This setting is found under Azure AD > Devices > Device settings. It allows you to specify users and/or groups that will be granted local admin rights on every Azure AD-joined device (in addition to the device owner and global administrators, depending on configuration). This is the intended control plane for local admin assignment across Azure AD-joined endpoints.

Key features and best practices:
- Use groups rather than individual users where possible (e.g., a “Device Local Admins” security group) to simplify lifecycle management and align with least privilege.
- This aligns with Azure Well-Architected Framework security principles: minimize standing privilege and centralize identity-based access control.
- In real environments, consider using Privileged Identity Management (PIM) (available with Azure AD Premium P2) to make local admin membership eligible/just-in-time via group assignment, reducing persistent admin exposure.

Common misconceptions:
- Many assume this is a per-user setting (Users blade) or a group setting (Groups blade). However, local admin on Azure AD-joined devices is governed by device join/device settings, not user profile settings.
- MFA Server settings are unrelated; they control MFA provider configuration, not device local admin rights.

Exam tips:
- For Azure AD-joined Windows devices, remember: “Who becomes local admin by default?” is controlled in Azure AD device settings.
- If the question mentions “all computers that will be joined,” think tenant-wide device configuration (Devices blade), not per-object settings.
- P2 often hints at PIM, but the direct configuration asked here is still in Devices > Device settings.
You have a deployment template named Template1 that is used to deploy 10 Azure web apps. You need to identify what to deploy before you deploy Template1. The solution must minimize Azure costs. What should you identify?
Five Application Gateways are not required to deploy web apps. Application Gateway is a Layer 7 load balancer/WAF used for advanced routing, TLS termination, and protection. It adds significant cost and complexity and is only justified for specific requirements (WAF, path-based routing, private access via ILB, etc.). It is not a prerequisite for App Service deployment.
One App Service plan is the required foundational resource to host the 10 web apps and is the most cost-effective option when apps can share the same region/OS and scaling boundary. App Service pricing is primarily per plan (SKU and instance count), so multiple apps on one plan typically do not multiply compute costs, making this the best cost-minimizing prerequisite.
Ten App Service plans would allow isolation and independent scaling per app, but it is usually the most expensive approach because each plan provisions and bills its own compute resources. This is only appropriate when apps require different SKUs, regions, OS types, or strict isolation. For cost minimization, it is generally incorrect.
Azure Traffic Manager provides DNS-based global routing and failover across endpoints/regions. It is not required to deploy web apps and does not replace the need for an App Service plan. It’s used when you have multi-region deployments or need performance/failover routing, which is beyond the prerequisite for deploying 10 web apps.
One Application Gateway can front-end one or more web apps for Layer 7 routing and WAF, but it is optional and not required before deploying App Service apps. It also introduces additional cost. Unless the scenario explicitly requires WAF, private ingress, or advanced routing, identifying/deploying an Application Gateway is not the correct prerequisite.
Core concept: Azure Web Apps (App Service apps) must run in an App Service plan, which defines the underlying compute resources (region, OS, pricing tier, scale units). The web app itself is a logical container for your code and configuration, but it cannot be created without an App Service plan.

Why the answer is correct: To deploy 10 Azure web apps using an ARM/Bicep template, you must ensure an App Service plan exists (or is created by the template). If the question asks what to identify/deploy before deploying Template1, and the goal is to minimize Azure costs, the best approach is to use a single App Service plan and place all 10 web apps into that plan (assuming they share the same region and OS requirements). Multiple apps can share the same plan and therefore share the same compute instances, which is typically far cheaper than provisioning separate plans.

Key features and best practices: An App Service plan is the billing and scaling boundary for App Service. Costs are primarily driven by the plan’s SKU (Free/Shared/Basic/Standard/Premium/Isolated) and instance count, not by the number of apps. Hosting multiple low-to-moderate traffic apps on one appropriately sized plan is a common cost-optimization pattern aligned with the Azure Well-Architected Framework (Cost Optimization pillar). You can still scale out/in the plan to meet aggregate demand.

Common misconceptions: Load-balancing services like Azure Application Gateway or Traffic Manager are not prerequisites to deploy web apps. They are optional components for routing, WAF, TLS offload, multi-region failover, etc. Also, many assume each web app needs its own plan; that’s only necessary when apps require isolation, different SKUs, different regions, different OS (Windows vs Linux), or independent scaling.

Exam tips: For AZ-104, remember: a web app requires an App Service plan. To minimize cost, consolidate apps into fewer plans when requirements allow. Separate plans increase cost because each plan provisions its own compute. Gateways/Traffic Manager are architecture choices, not deployment prerequisites for basic web app creation.
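The cost-minimizing layout above can be sketched with the Azure CLI. Names, SKU, and app count are illustrative placeholders:

```shell
# One plan is the billing and scaling boundary; all ten apps share it,
# so compute cost does not multiply per app.
az appservice plan create --name plan1 --resource-group RG1 --sku S1

# Create the ten web apps inside the single shared plan.
for i in $(seq 1 10); do
  az webapp create --name "contoso-app-$i" --resource-group RG1 --plan plan1
done
```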
You have an Azure subscription that contains a virtual machine named VM1. VM1 hosts a line-of-business application that is available 24 hours a day. VM1 has one network interface and one managed disk. VM1 uses the D4s v3 size. You plan to make the following changes to VM1: ✑ Change the size to D8s v3. ✑ Add a 500-GB managed disk. ✑ Add the Puppet Agent extension. ✑ Enable Desired State Configuration Management. Which change will cause downtime for VM1?
Enabling Desired State Configuration (DSC) management (commonly via Azure Automation State Configuration/DSC) is a configuration management capability. It applies configuration to the guest OS using an agent/extension and pull server model. This is typically performed while the VM is running and does not inherently require stopping/deallocating the VM. While DSC may restart services (or even the OS) depending on the configuration you apply, enabling the management feature itself is not expected to cause downtime on the exam.
Adding (attaching) a 500-GB managed disk to an existing Azure VM is generally an online operation. You can attach a new managed data disk while the VM is running; the downtime is not required by the platform. After attachment, you still need to initialize/partition/format the disk inside the guest OS, which can also be done online. This is a common AZ-104 point: data disk attach is typically hot-add.
Changing the VM size from D4s v3 to D8s v3 is a resize operation that commonly requires the VM to be stopped (deallocated) and restarted so Azure can reallocate compute resources on a host that supports the new SKU. This results in downtime for the workload running on VM1. Even within the same series, capacity constraints can force a move to different hardware, making downtime the expected outcome.
Adding the Puppet Agent extension is performed through the Azure VM Agent as a VM extension deployment. VM extensions are designed to be installed and updated while the VM is running and typically do not require a reboot or deallocation. Although the extension may install software and could restart services depending on how it’s configured, the act of adding the extension itself is not considered a downtime-causing platform operation in AZ-104.
Core concept: This question tests Azure VM lifecycle operations and which actions require a VM restart/deallocation (downtime) versus actions that are “hot” changes. In AZ-104, you must know which compute changes are online and which require the VM to stop.

Why the answer is correct: Changing the VM size from D4s v3 to D8s v3 typically requires the VM to be stopped (deallocated) and then started again so Azure can move the VM to hardware that supports the new vCPU/memory allocation. Even when resizing within the same VM family, Azure often must reallocate the VM to a different host, which causes downtime. For a 24x7 line-of-business workload, this is a key availability consideration; the Azure Well-Architected Framework (Reliability pillar) recommends designing for redundancy (e.g., multiple instances behind a load balancer, Availability Zones/sets) so planned maintenance like resizing doesn’t impact availability.

Key features and best practices:
- VM resize is a compute host allocation change; it commonly triggers a stop/deallocate operation.
- If the target size is not available on the current cluster/host, Azure must move the VM, guaranteeing downtime.
- To avoid downtime, scale out (multiple VMs) rather than scale up, or use VM Scale Sets / availability constructs.

Common misconceptions:
- “Same series resize is always online”: not true; Azure may still need to reallocate.
- “Extensions cause downtime”: extensions are installed by the Azure VM Agent and generally do not require a reboot (though some extensions or their configurations might). The exam expects you to treat extension installation as non-downtime unless explicitly stated.

Exam tips:
- Memorize which operations require deallocation: resizing the VM SKU is a classic one.
- Disk attach/detach and most extension installs are typically online.
- For 24/7 apps, mention HA patterns (Availability Zones/sets, load balancing) to handle planned downtime operations.

References to review:
- Azure VM resizing behavior and deallocation requirements (Azure Virtual Machines documentation)
- Azure Well-Architected Framework: Reliability
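For contrast, both operations can be sketched with the Azure CLI (flag names can vary slightly by CLI version; the disk name is a placeholder, while the VM name and sizes match the scenario):

```shell
# Resize: typically requires stop/deallocate and restart, i.e., downtime.
az vm resize --resource-group RG1 --name VM1 --size Standard_D8s_v3

# Disk attach: generally an online (hot-add) operation. The new disk
# still needs to be initialized/partitioned inside the guest OS afterwards.
az vm disk attach --resource-group RG1 --vm-name VM1 \
  --name datadisk1 --new --size-gb 500
```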
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure subscription that contains 10 virtual networks. The virtual networks are hosted in separate resource groups. Another administrator plans to create several network security groups (NSGs) in the subscription. You need to ensure that when an NSG is created, it automatically blocks TCP port 8080 between the virtual networks. Solution: From the Resource providers blade, you unregister the Microsoft.ClassicNetwork provider. Does this meet the goal?
Yes is incorrect because unregistering Microsoft.ClassicNetwork only affects the legacy classic networking resource provider and does not enforce security rules on newly created NSGs. It will not create a deny rule for TCP port 8080, nor will it control traffic between ARM-based virtual networks. The requirement is about automatic configuration of NSGs at creation time, which requires governance or deployment tooling. As a result, answering Yes would incorrectly assume provider registration can enforce NSG behavior.
No is correct because unregistering the Microsoft.ClassicNetwork provider does not configure NSGs or inject deny rules into them. NSGs in current Azure environments are ARM resources under Microsoft.Network, so the classic provider is unrelated to the requirement. To automatically block TCP port 8080 between virtual networks, you would need a mechanism that evaluates or deploys NSG rules, such as Azure Policy or automation. Therefore, the proposed action does not meet the stated goal.
You need to deploy an Azure virtual machine scale set that contains five instances as quickly as possible. What should you do?
Incorrect. Deploying five standalone virtual machines does not create a virtual machine scale set at all, so it fails the core requirement of the question. Adjusting Availability Zones on each VM only adds more configuration work and does nothing to provide scale set orchestration, centralized management, or simplified scaling behavior. This approach is slower operationally because each VM must be created and managed individually. It also lacks the consistency and automation benefits that VMSS is specifically designed to provide.
Incorrect. Deploying five separate virtual machines and modifying their size settings still does not result in a virtual machine scale set. VM size affects compute capacity, not deployment model, so changing size has no bearing on whether the instances are managed as a scale set or how quickly the overall solution can be deployed. This option introduces unnecessary per-VM administration and misses the requirement for a single scalable resource. It is therefore both technically mismatched and less efficient than using VMSS.
Correct. Deploying one virtual machine scale set in VM (virtual machines) orchestration mode lets Azure provision the required five instances through a single resource definition instead of requiring five separate VM deployments. This orchestration mode maps to the flexible VMSS model, which is the newer and more broadly recommended deployment approach for many Azure scenarios. It supports efficient provisioning and centralized management while still meeting the requirement to deploy a scale set quickly. Because the question asks for the fastest way to deploy a VM scale set with five instances, this is the best fit among the available options.
Incorrect. ScaleSetVM orchestration mode refers to the classic uniform VMSS model, which is not the best answer here given the available choices and current Azure orchestration guidance. While it does create a scale set, the exam distinction typically favors VM orchestration mode for faster and more flexible deployment of multiple VM instances under one scale set resource. Uniform mode is more restrictive because instances are treated more identically and follow the classic scale set model. Since the question asks for the quickest deployment approach and includes VM orchestration mode as an option, D is not the best answer.
Core concept: This question tests Azure Virtual Machine Scale Sets (VMSS) orchestration modes. Azure supports two orchestration modes for scale sets: Uniform and Flexible. In some exam wording, these appear as ScaleSetVM orchestration mode and VM (virtual machines) orchestration mode respectively. Flexible/VM orchestration mode is designed to provide faster and more versatile deployment of VM instances, especially when you want to create and manage standard Azure VMs under a scale set umbrella.

Why correct: To deploy five instances as quickly as possible, you should create one virtual machine scale set using VM (virtual machines) orchestration mode. This mode supports rapid provisioning of standard Azure VMs and is the recommended choice in newer Azure guidance for many general-purpose VMSS deployments. It allows you to deploy and manage multiple VMs through a single scale set resource while benefiting from simpler instance handling and broader feature compatibility.

Key features:
- VM orchestration mode corresponds to the more flexible VMSS model and supports standard IaaS VM behaviors.
- It is optimized for scenarios where you want scale set management without the stricter constraints of classic uniform orchestration.
- A single VMSS deployment is still much faster and easier to manage than deploying five separate virtual machines manually.

Common misconceptions:
- ScaleSetVM orchestration mode is often assumed to be the default best answer because it sounds like the native scale set option, but Azure exams increasingly align with the newer flexible orchestration model.
- Deploying five standalone VMs does not satisfy the requirement to deploy a virtual machine scale set.
- Changing VM size or availability zone settings does not address the need for fast, centralized deployment of five instances.

Exam tips:
- On AZ-104, when asked about the fastest way to deploy multiple VM instances in a scale set, prefer a single VMSS over separate VMs.
- Be alert to orchestration mode terminology: VM often maps to Flexible, while ScaleSetVM maps to Uniform.
- If the question emphasizes speed and modern VMSS deployment patterns, VM orchestration mode is typically the better answer.
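A minimal sketch of the single-resource deployment with the Azure CLI. The scale set name, image alias, and credentials are placeholders:

```shell
# One scale set resource, Flexible orchestration, five instances,
# instead of five separate VM deployments.
az vmss create --resource-group RG1 --name vmss1 \
  --orchestration-mode Flexible \
  --image Ubuntu2204 \
  --instance-count 5 \
  --admin-username azureuser --generate-ssh-keys
```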
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure subscription that contains 10 virtual networks. The virtual networks are hosted in separate resource groups. Another administrator plans to create several network security groups (NSGs) in the subscription. You need to ensure that when an NSG is created, it automatically blocks TCP port 8080 between the virtual networks. Solution: You create a resource lock, and then you assign the lock to the subscription. Does this meet the goal?
Yes is incorrect because resource locks are not a network security enforcement feature. They do not inspect NSG contents, deploy deny rules, or ensure that traffic on TCP port 8080 is blocked across VNets. Even if a lock is inherited by resources in the subscription, it only restricts management operations like deletion or modification. The requirement is about automatic rule enforcement on NSGs, which locks cannot provide.
No is correct because a resource lock does not create, modify, or validate NSG security rules. Applying a lock at the subscription scope only affects whether resources can be deleted or changed through the management plane, depending on the lock type. It cannot automatically block TCP port 8080 between virtual networks when new NSGs are created. To meet the requirement, you would need a governance mechanism such as Azure Policy to enforce the presence of the required deny rule.
Core concept: This question tests Azure governance and network security enforcement. To automatically ensure that newly created network security groups block TCP port 8080 between virtual networks, you need a policy-based control such as Azure Policy that can audit or deny noncompliant NSG configurations, or deploy required rules. A resource lock only prevents modification or deletion of resources and does not configure security rules.
Why correct: The proposed solution does not meet the goal because assigning a resource lock at the subscription level does not automatically add deny rules to NSGs. Locks control management-plane actions such as delete or update protection, but they do not inspect or enforce NSG rule content. Therefore, they cannot ensure that TCP 8080 is blocked between VNets whenever an NSG is created.
Key features:
- Resource locks support CanNotDelete and ReadOnly behaviors for protecting resources from accidental changes.
- Azure Policy can evaluate NSG resources and enforce required inbound or outbound rules across a subscription.
- NSGs control traffic using security rules, including source, destination, protocol, and port matching.
- Subscription-scope governance tools are appropriate when you need automatic enforcement across many resource groups.
Common misconceptions: A common mistake is assuming that any subscription-level control can enforce network behavior. Locks protect resources from management changes, but they do not deploy or validate configuration details like NSG rules. Another misconception is that NSGs automatically understand 'between virtual networks' without explicitly defined source and destination address ranges or service tags.
Exam tips: For AZ-104, remember that resource locks protect resources, RBAC controls access, and Azure Policy enforces configuration compliance. If the requirement says 'automatically ensure' or 'when a resource is created,' think Azure Policy rather than locks. Even when Azure Policy governs the deployment, the actual traffic filtering still happens through NSG rules.
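To make the Azure Policy approach concrete, the following is a minimal sketch of a definition that denies creating an NSG unless it already contains at least one rule denying TCP 8080. All names and the subscription scope are placeholders, and the rule is deliberately simplified: it checks only the destination port and ignores the source/destination address scoping that a real "between virtual networks" requirement would need.

```shell
# Sketch (assumption): requires Azure CLI and an authenticated subscription.
# Placeholder names throughout; address scoping between VNets is omitted for brevity.
cat > nsg-8080-rule.json <<'EOF'
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Network/networkSecurityGroups" },
      {
        "count": {
          "field": "Microsoft.Network/networkSecurityGroups/securityRules[*]",
          "where": {
            "allOf": [
              { "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].access", "equals": "Deny" },
              { "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].protocol", "equals": "Tcp" },
              { "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].destinationPortRange", "equals": "8080" }
            ]
          }
        },
        "equals": 0
      }
    ]
  },
  "then": { "effect": "deny" }
}
EOF
az policy definition create --name deny-nsg-missing-8080-deny --mode All \
  --rules @nsg-8080-rule.json
az policy assignment create --name enforce-nsg-8080 \
  --policy deny-nsg-missing-8080-deny --scope "/subscriptions/<subscription-id>"
```

Assigning the definition at the subscription scope is what gives the "automatic, on creation" behavior a resource lock cannot provide.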
Your company has three offices. The offices are located in Miami, Los Angeles, and New York. Each office contains a datacenter. You have an Azure subscription that contains resources in the East US and West US Azure regions. Each region contains a virtual network. The virtual networks are peered. You need to connect the datacenters to the subscription. The solution must minimize network latency between the datacenters. What should you create?
Incorrect. Azure Application Gateway is a Layer 7 (HTTP/HTTPS) load balancer and can provide WAF capabilities, but it does not connect on-premises networks to Azure VNets. The On-premises data gateway is used for securely accessing on-premises data sources from Microsoft cloud services (Power BI, Power Apps, Logic Apps), not for site-to-site network connectivity or latency-optimized WAN design.
Correct. Create one Virtual WAN and three Virtual Hubs, one in a region close to each datacenter (for example, East US for the Miami and New York sites and West US for Los Angeles, with an additional nearby region if a dedicated third hub is needed). Each datacenter connects to the nearest hub using S2S VPN or ExpressRoute, minimizing latency. Virtual WAN provides managed routing and inter-hub connectivity over Microsoft's backbone and simplifies multi-site connectivity at scale.
Incorrect. The typical and recommended architecture is one Virtual WAN with multiple Virtual Hubs. Creating three separate Virtual WANs fragments management and routing domains and does not inherently minimize latency. It can also complicate interconnectivity between sites and Azure networks because you lose the centralized, global transit benefits that a single vWAN provides.
Incorrect. On-premises data gateways are not networking components; they enable application-level data connectivity for specific Microsoft services. Azure Application Gateway is for web traffic management and does not provide site-to-site VPN/ExpressRoute connectivity. This option does not address connecting datacenters to VNets nor optimizing network latency between geographically distributed sites.
Core concept: This question tests Azure Virtual WAN (vWAN) and Virtual Hubs as a scalable, Microsoft-managed hub-and-spoke connectivity service for connecting multiple branch/datacenter sites to Azure with optimized routing and lower latency.
Why the answer is correct: You have three on-premises datacenters (Miami, Los Angeles, New York) and Azure resources in East US and West US, each with a VNet (already peered). To minimize latency, each datacenter should connect to the closest Azure entry point/region. Azure Virtual WAN provides a global transit architecture: you deploy one Virtual WAN resource and then deploy multiple Virtual Hubs in different Azure regions. Each on-premises site connects (via Site-to-Site VPN or ExpressRoute) to the nearest virtual hub, reducing round-trip time and avoiding hairpinning through a single gateway or region. The hubs then provide managed inter-hub connectivity over the Microsoft backbone, and you connect your VNets to the hubs.
Key features / best practices:
- One Virtual WAN per architecture, multiple regional Virtual Hubs.
- Each hub can host VPN/ER gateways and route traffic between on-premises sites and VNets.
- Built-in any-to-any connectivity and optimized routing across hubs.
- Aligns with the Azure Well-Architected Framework (Performance Efficiency and Reliability): regional hubs reduce latency; managed routing improves operational consistency.
- Practical note: while your VNets are currently peered, in vWAN designs you typically connect VNets to the hub(s) instead of relying on VNet peering for global transit.
Common misconceptions:
- Application Gateway is for HTTP(S) load balancing/WAF, not for private WAN connectivity.
- On-premises data gateway is for Power BI/Power Platform data access, not network connectivity.
- Creating multiple Virtual WANs is usually unnecessary and complicates routing and management; the standard pattern is one vWAN with multiple hubs.
Exam tips: When you see “multiple branches/datacenters” + “minimize latency” + “connect to Azure regions,” think Virtual WAN with multiple regional virtual hubs. Remember: Virtual WAN is the container; Virtual Hubs are regional and host the gateways.
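The one-vWAN, multiple-regional-hubs pattern described above can be sketched with Azure CLI. This is a hedged example, not a complete deployment: resource group, names, and address prefixes are placeholders, and it requires an authenticated subscription. Site connections (S2S VPN or ExpressRoute) would be added to each hub afterward.

```shell
# Sketch (assumption): placeholder names and prefixes; requires Azure CLI
# and an authenticated Azure subscription.
# One Virtual WAN as the global container.
az network vwan create --resource-group RG1 --name VWan1 --location eastus

# Regional virtual hubs near each datacenter's closest Azure region.
az network vhub create --resource-group RG1 --vwan VWan1 --name Hub-EastUS \
  --location eastus --address-prefix 10.100.0.0/24
az network vhub create --resource-group RG1 --vwan VWan1 --name Hub-WestUS \
  --location westus --address-prefix 10.101.0.0/24

# Each datacenter then connects (S2S VPN or ExpressRoute) to its nearest hub,
# and the existing VNets are connected to the hubs for global transit.
```

The vWAN resource itself carries no regional traffic; the hubs are the regional entry points, which is why one vWAN with several hubs minimizes latency while keeping a single routing and management domain.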
You have an Azure subscription named Subscription1 that contains two Azure virtual networks named VNet1 and VNet2. VNet1 contains a VPN gateway named VPNGW1 that uses static routing. There is a site-to-site VPN connection between your on-premises network and VNet1. On a computer named Client1 that runs Windows 10, you configure a point-to-site VPN connection to VNet1. You configure virtual network peering between VNet1 and VNet2. You verify that you can connect to VNet2 from the on-premises network. Client1 is unable to connect to VNet2. You need to ensure that you can connect Client1 to VNet2. What should you do?
Downloading and reinstalling the VPN client configuration package updates the route information that the point-to-site client receives. After VNet peering is configured, the client may still only have routes for VNet1 until the package is refreshed, so traffic to VNet2 is not sent through the VPN tunnel. Because on-premises can already reach VNet2, the peering and gateway path are working at the Azure side. The remaining issue is the client-side route set, which is corrected by reinstalling the updated VPN package.
Allow gateway transit is used on the VNet that contains the gateway so that a peered VNet can use that gateway as a remote gateway. That setting is relevant for gateway sharing between VNets, especially for site-to-site or ExpressRoute scenarios, but it does not by itself update the route table on an already installed P2S client. In this question, the client can connect to VNet1 and the peering already allows Azure-side connectivity, so the problem is not solved solely by enabling gateway transit. The client still needs refreshed route information to know that VNet2 should be reached over the VPN tunnel.
Selecting Allow gateway transit on VNet2 is incorrect because VNet2 does not contain the VPN gateway. In Azure peering, the VNet with the gateway exposes it using Allow gateway transit, while the peered VNet consumes it using Use remote gateways. Even if gateway-sharing settings were relevant, this specific option is applied on the wrong side. Therefore it would not resolve Client1's inability to reach VNet2.
BGP is used for dynamic route exchange with compatible VPN devices and gateways, but the gateway in the scenario uses static routing. More importantly, the issue described is not a lack of dynamic route exchange between Azure and on-premises, since on-premises already reaches VNet2 successfully. The failure is specific to the point-to-site client not having updated reachability information for the peered VNet. Enabling BGP would not be the required or direct fix here.
Core concept: Point-to-site VPN clients use a client configuration package that contains the routes pushed to the client for reachable address spaces. When you add or change connectivity, such as virtual network peering, the P2S client may need an updated VPN package so the new peered VNet prefixes are included on the client.
Why correct: Client1 can already connect to VNet1, and on-premises can reach VNet2, which shows the peering itself is functioning. The missing piece is that the P2S client does not yet have the route information for VNet2, so reinstalling the VPN client package updates the client-side routes.
Key features: P2S route distribution is client-profile based, peering can extend reachability, and route updates often require regenerating and downloading the VPN client package.
Common misconceptions: Many confuse gateway transit settings, which are used for gateway sharing between VNets, with the separate requirement to refresh P2S client routes after topology changes.
Exam tips: If a P2S client cannot reach newly peered address spaces but the VNets themselves can communicate, think first about updating the downloaded VPN client package.
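Regenerating the P2S client package can be done from Azure CLI. This is a sketch under assumptions: the resource group name RG1 is a placeholder, the gateway name follows the scenario, and the command requires an authenticated subscription.

```shell
# Sketch (assumption): placeholder resource group; requires Azure CLI
# and an authenticated Azure subscription.
# Regenerate the point-to-site VPN client package so the client picks up
# routes for newly peered address spaces.
az network vnet-gateway vpn-client generate \
  --resource-group RG1 \
  --name VPNGW1 \
  --processor-architecture Amd64
# The command returns a download URL; reinstalling the downloaded package
# on Client1 refreshes the client-side route set.
```

After reinstalling the package, the Windows 10 client's VPN profile includes the VNet2 prefixes, so traffic to VNet2 is sent through the tunnel.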
You have two Azure virtual networks named VNet1 and VNet2. VNet1 contains an Azure virtual machine named VM1. VNet2 contains an Azure virtual machine named VM2. VM1 hosts a frontend application that connects to VM2 to retrieve data. Users report that the frontend application is slower than usual. You need to view the average round-trip time (RTT) of the packets from VM1 to VM2. Which Azure Network Watcher feature should you use?
IP flow verify validates whether a specific traffic flow (source/destination IP, port, protocol) is allowed or denied by NSG rules at a VM’s NIC or subnet. It’s a security/routing permission check, not a performance tool. It does not measure latency, round-trip time, or provide time-series performance metrics, so it won’t help quantify “slower than usual” behavior.
Connection troubleshoot is an on-demand diagnostic that checks connectivity between a source and destination and helps identify where a failure occurs (NSG, UDR, DNS, firewall, etc.). While it can provide some latency-related details during a single test, it is not intended for continuous monitoring or reporting average RTT over time, which is what the question asks for.
Connection monitor continuously monitors connectivity and performance between endpoints and reports metrics such as reachability and average round-trip time (RTT). It supports ongoing probing, historical trending, and alerting via Azure Monitor. This makes it the correct feature to view average RTT from VM1 to VM2 when users report degraded application performance.
NSG flow logs record information about IP traffic flowing through NSGs (allowed/denied flows, 5-tuple, counts/bytes) and are used for traffic analytics, auditing, and security investigations. They do not capture end-to-end latency or RTT. You might use them to confirm traffic is flowing, but they won’t provide the average RTT metric required.
Core concept: This question tests Azure Network Watcher monitoring capabilities for measuring network performance between two endpoints. Specifically, it asks for the feature that can show average round-trip time (RTT) for traffic from VM1 to VM2.
Why the answer is correct: Connection monitor (a capability within Network Watcher) is designed for continuous monitoring of connectivity and performance between a source and destination. It can measure latency/RTT over time, track packet loss, and provide historical trends. Because users report the application is slower than usual, you need performance telemetry (average RTT), not just a one-time connectivity check. Connection monitor provides the average RTT metric and time-series views, making it the best fit.
Key features / how it works:
- Uses Network Watcher agents/VM extensions (or Azure-managed probing, depending on the scenario) to run periodic tests between endpoints.
- Supports monitoring between Azure VMs, VNets, and on-premises endpoints, via different protocols and ports.
- Produces metrics such as reachability, latency (RTT), and packet loss, with visualization and alerting integration through Azure Monitor.
- Aligns with the Azure Well-Architected Framework (Reliability and Performance Efficiency) by enabling proactive detection of latency regressions and baselining.
Common misconceptions:
- "Connection troubleshoot" sounds similar, but it is primarily an on-demand diagnostic to determine whether a connection can be established and where it fails; it is not intended for ongoing average RTT trending.
- NSG flow logs provide traffic metadata (5-tuple, allow/deny, bytes/packets) but do not directly provide RTT/latency.
- IP flow verify only checks whether a specific flow would be allowed or denied by NSGs and does not measure performance.
Exam tips: When the question asks for latency/RTT, packet loss, or performance trends between endpoints, think Connection monitor. When it asks "is traffic allowed?" think IP flow verify. When it asks "why can't I connect right now?" think Connection troubleshoot. When it asks "what traffic is flowing/being denied?" think NSG flow logs.
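A connection monitor between the two VMs can be sketched with Azure CLI. This is a hedged example: the resource IDs, names, and port are placeholders, it requires an authenticated subscription, Network Watcher enabled in the region, and the Network Watcher agent extension on the source VM.

```shell
# Sketch (assumption): placeholder names and resource IDs; requires Azure CLI,
# an authenticated subscription, and the Network Watcher VM extension on VM1.
az network watcher connection-monitor create \
  --connection-monitor-name VM1toVM2 \
  --location eastus \
  --endpoint-source-name VM1 \
  --endpoint-source-resource-id "/subscriptions/<sub-id>/resourceGroups/RG1/providers/Microsoft.Compute/virtualMachines/VM1" \
  --endpoint-dest-name VM2 \
  --endpoint-dest-resource-id "/subscriptions/<sub-id>/resourceGroups/RG2/providers/Microsoft.Compute/virtualMachines/VM2" \
  --test-config-name tcpTest --protocol Tcp --tcp-port 443
# Average round-trip time then appears in the connection monitor's metrics
# and time-series views in Azure Monitor.
```

The periodic TCP test is what produces the average RTT trend the question asks for, which a one-off diagnostic such as Connection troubleshoot cannot provide.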
You have an Azure subscription that contains a virtual network named VNet1. VNet1 contains four subnets named Gateway, Perimeter, NVA, and Production. The NVA subnet contains two network virtual appliances (NVAs) that will perform network traffic inspection between the Perimeter subnet and the Production subnet. You need to implement an Azure load balancer for the NVAs. The solution must meet the following requirements: ✑ The NVAs must run in an active-active configuration that uses automatic failover. ✑ The load balancer must load balance traffic to two services on the Production subnet. The services have different IP addresses. Which three actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
Basic Load Balancer is not the right choice for a modern NVA architecture that requires active-active behavior and automatic failover. It is a legacy SKU with fewer capabilities and is not the recommended option for production network virtual appliance deployments. Standard Load Balancer is the expected answer in Azure certification scenarios unless the question explicitly constrains you to Basic. Therefore, choosing Basic would not align with best practice or the required feature set.
Standard Load Balancer is the correct SKU for an NVA deployment that requires high availability and automatic failover. It supports HA Ports, health probes, and the production-grade capabilities expected in modern Azure network architectures. Basic Load Balancer is legacy and lacks the feature set typically required for this type of design. For AZ-104, NVA and active-active requirements strongly indicate the Standard SKU.
Correct. HA Ports is required because the NVAs must inspect traffic broadly rather than only on a small set of defined ports, and the two Production services with different IP addresses call for two load-balancing rules. Azure Load Balancer permits only a single HA Ports rule per backend pool when Floating IP is disabled, so with two HA Ports rules sharing the NVA backend pool, Floating IP must be enabled on each rule. This option therefore matches both the inspection requirement and the platform's documented constraints.
Incorrect. Although HA Ports is the right rule type, this option disables Floating IP. With Floating IP disabled, only one HA Ports rule can be attached to a given backend pool, which is not enough for the two Production services with different IP addresses. Because two HA Ports rules are needed here, each must have Floating IP enabled, so the 'Floating IP disabled' wording makes the overall option wrong. On the exam, this is a subtle but important distinction.
A frontend IP configuration, a backend pool, and a health probe are the essential components needed for this design. The frontend IP provides the virtual address that receives the traffic, the backend pool contains the two NVA instances, and the health probe detects appliance failure for automatic failover. This directly supports the active-active requirement because both NVAs can receive traffic while unhealthy instances are removed from rotation. One backend pool is sufficient because the load balancer is balancing across the NVAs, not across the Production services.
Two backend pools are unnecessary because the backend pool should contain the two NVAs that are performing inspection, not the two destination services in the Production subnet. The services with different IP addresses are downstream targets beyond the appliances, so they do not each require their own load balancer backend pool in this design. What is needed is one backend pool for the NVAs and separate load-balancing rules to handle the different service destinations. Using two backend pools would misrepresent the role of the load balancer in this architecture.
Core concept: This question is about designing an Azure Standard Load Balancer in front of two NVAs so they can operate in an active-active configuration with automatic failover while inspecting traffic between subnets. In Azure, this is a classic internal load balancer plus health probe design, where the NVAs are placed in a single backend pool and traffic is distributed across them. The correct rule type for this scenario is HA Ports, which allows the appliances to process all flows without creating many individual port rules.
Why correct: A Standard Load Balancer is required for modern, production-grade NVA deployments and supports HA Ports. You also need the standard load balancer building blocks: a frontend IP configuration, one backend pool containing both NVAs, and a health probe to detect failure and remove an unhealthy appliance automatically. Because the Production subnet hosts two services with different IP addresses, you need two load-balancing rules, and multiple HA Ports rules on the same backend pool require Floating IP to be enabled on each rule.
Key features: Standard Load Balancer supports health probes, backend pools, multiple frontend IPs, and HA Ports for all-port load balancing. HA Ports is specifically intended for scenarios such as NVAs, where the appliance must inspect many protocols and ports. The health probe provides the automatic failover behavior required by the question.
Common misconceptions: A frequent mistake is disabling Floating IP on HA Ports rules in this design. A single HA Ports rule can run without Floating IP, but as soon as multiple HA Ports rules share a backend pool, Floating IP must be enabled on each rule. Another misconception is creating multiple backend pools for the destination services, even though the backend pool should contain the NVAs, not the final application endpoints.
Exam tips: When you see Azure NVAs in active-active mode with automatic failover, think Standard Load Balancer plus health probe plus HA Ports. If the load balancer is distributing traffic to the NVAs rather than directly to the application servers, the backend pool is the NVA instances. For multiple HA Ports rules in this Azure NVA pattern, remember that Floating IP must be enabled.
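The building blocks above can be sketched with Azure CLI. This is a hedged example under assumptions: all names, the probe port, and the subnet layout are placeholders, and it requires an authenticated subscription. It shows one frontend IP and one HA Ports rule; the second service would get its own frontend IP and a second HA Ports rule against the same NVA backend pool.

```shell
# Sketch (assumption): placeholder names; requires Azure CLI and an
# authenticated Azure subscription.
# Internal Standard Load Balancer with the core building blocks.
az network lb create --resource-group RG1 --name NvaLB --sku Standard \
  --vnet-name VNet1 --subnet NVA \
  --frontend-ip-name FeService1 --backend-pool-name NvaPool

# Health probe: unhealthy NVAs are removed from rotation (automatic failover).
az network lb probe create --resource-group RG1 --lb-name NvaLB \
  --name NvaProbe --protocol Tcp --port 22

# HA Ports rule: protocol All with frontend/backend port 0 balances all flows.
az network lb rule create --resource-group RG1 --lb-name NvaLB \
  --name HaRuleService1 --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name FeService1 --backend-pool-name NvaPool \
  --probe-name NvaProbe --floating-ip true
# Floating IP is enabled because multiple HA Ports rules sharing one backend
# pool require it; add a second frontend IP and rule for the second service.
```

Protocol All with port 0 is what marks a rule as HA Ports; the backend pool holds the NVA NICs, not the downstream Production services.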