Microsoft AZ-500

Practice Test #2

Simulate the real exam experience with 50 questions and a 100-minute time limit. Practice with AI-verified answers and detailed explanations.

50 Questions · 100 Minutes · 700/1000 Passing Score
View Practice Questions

Powered by AI

Answers and Explanations Verified by 3 AIs

Each answer is verified by 3 leading AI models to ensure maximum accuracy. Get detailed explanations for each option and in-depth question analysis.

GPT Pro
Claude Opus
Gemini Pro
Per-option explanations
In-depth question analysis
Accuracy through 3-model consensus

Practice Questions

Question 1

HOTSPOT - You have an Azure subscription named Sub1. You create a virtual network that contains one subnet. On the subnet, you provision the virtual machines shown in the following table.

Name | Network interface | Application security group assignment | IP address
VM1 | NIC1 | AppGroup12 | 10.0.0.10
VM2 | NIC2 | AppGroup12 | 10.0.0.11
VM3 | NIC3 | AppGroup3 | 10.0.0.100
VM4 | NIC4 | AppGroup4 | 10.0.0.200

Currently, you have not provisioned any network security groups (NSGs). You need to implement network security to meet the following requirements:
✑ Allow traffic to VM4 from VM3 only.
✑ Allow traffic from the Internet to VM1 and VM2 only.
✑ Minimize the number of NSGs and network security rules.
How many NSGs and network security rules should you create? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

NSGs: ______

Create 1 NSG. Since all VMs are in the same subnet, you can associate a single NSG to the subnet and control traffic to all NICs/VMs in that subnet. This minimizes NSG count and is a common exam pattern: use one subnet-level NSG unless you need different policies per subnet or you must apply different NSGs to different NICs. Here, the requirements can be expressed with ASG-based rules inside one NSG (target AppGroup12 for Internet access and AppGroup4 for VM4 restrictions). Creating 2–4 NSGs would not reduce the number of rules needed and would increase management overhead, violating the “minimize the number of NSGs” requirement.

Part 2:

Network security rules: ______

Create 3 security rules (inbound) in the single NSG. 1) Allow Internet -> AppGroup12 (VM1 and VM2) on required ports (ports aren’t specified, so conceptually “traffic”). This satisfies “Allow traffic from the Internet to VM1 and VM2 only” because other VMs remain blocked by default DenyAllInBound. 2) Allow AppGroup3 (VM3) -> AppGroup4 (VM4). This permits VM3 to reach VM4. 3) Deny VirtualNetwork -> AppGroup4. This is required because the default AllowVnetInBound would otherwise allow VM1/VM2 (and any other VNet source) to reach VM4. Place rule (2) at a higher priority (lower number) than rule (3) so VM3 remains allowed while all other VNet sources are denied. With only 2 rules, you cannot both allow VM3 and block all other VNet sources due to the default allow.
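The three-rule design above can be sketched as a first-match evaluation in priority order. This is an illustrative simulation, not an Azure API: rule priorities, the ASG names, and the `evaluate` helper are all hypothetical, and the two default rules stand in for Azure's built-in AllowVnetInBound and DenyAllInBound.

```python
# Each rule: (priority, source, destination, action). Lower priority number wins,
# and the first matching rule decides the flow — mirroring NSG semantics.
RULES = [
    (100, "Internet",       "AppGroup12", "Allow"),  # Internet -> VM1/VM2
    (200, "AppGroup3",      "AppGroup4",  "Allow"),  # VM3 -> VM4
    (300, "VirtualNetwork", "AppGroup4",  "Deny"),   # block every other VNet source to VM4
]

# Stand-ins for the default rules evaluated after all custom rules.
DEFAULTS = [
    (65000, "VirtualNetwork", "VirtualNetwork", "Allow"),  # AllowVnetInBound
    (65500, "Any",            "Any",            "Deny"),   # DenyAllInBound
]

def evaluate(source_groups, dest_groups):
    """Return the action of the first rule matching the flow, in priority order."""
    for _, src, dst, action in sorted(RULES + DEFAULTS):
        if (src == "Any" or src in source_groups) and (dst == "Any" or dst in dest_groups):
            return action
    return "Deny"

# VM3 (in AppGroup3, inside the VNet) reaches VM4 via rule 200:
print(evaluate({"AppGroup3", "VirtualNetwork"}, {"AppGroup4", "VirtualNetwork"}))   # Allow
# VM1 (AppGroup12, inside the VNet) is caught by the deny at priority 300:
print(evaluate({"AppGroup12", "VirtualNetwork"}, {"AppGroup4", "VirtualNetwork"}))  # Deny
# Internet traffic reaches VM1/VM2 via rule 100, but not VM4:
print(evaluate({"Internet"}, {"AppGroup12", "VirtualNetwork"}))                     # Allow
print(evaluate({"Internet"}, {"AppGroup4", "VirtualNetwork"}))                      # Deny
```

Removing the priority-300 deny would let VM1/VM2 reach VM4 through the default AllowVnetInBound, which is exactly why two rules are not enough.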

Question 2

Your company has an Azure subscription named Sub1 that is associated to an Azure Active Directory (Azure AD) tenant named contoso.com. The company develops an application named App1. App1 is registered in Azure AD. You need to ensure that App1 can access secrets in Azure Key Vault on behalf of the application users. What should you configure?

Application permissions are intended for app-only access where no signed-in user is involved, such as background services or automation jobs. That does not match the requirement to access secrets on behalf of application users, because the resulting token would represent only the application identity. In addition, application permissions generally require admin consent, so the “without admin consent” part is also incorrect. This option fails on both the permission type and the consent requirement.

Although delegated permissions are the right general category for acting on behalf of users, Azure Key Vault delegated permissions are not user-consentable in the normal sense. They are admin-restricted permissions and require administrator approval in the tenant before the app can use them. Therefore, saying delegated permission without admin consent is incomplete and technically incorrect for Key Vault. This makes the current answer wrong even though it correctly identified the delegated model.

Delegated permissions are the correct permission type because App1 must access Azure Key Vault in the context of signed-in users rather than as a standalone daemon. The wording “on behalf of the application users” directly indicates a user-delegated access model, where the token represents both the user and the application. For Azure Key Vault, these delegated permissions are admin-restricted, so tenant administrator consent is required before users can use the app in this way. This makes a delegated permission that requires admin consent the only option that satisfies both the user-context requirement and the consent model for Key Vault.

Application permissions that require admin consent are used when an app accesses resources as itself, without any signed-in user context. That is appropriate for daemon apps, scheduled tasks, or service principals, but not for a user-driven scenario. Because the question explicitly says “on behalf of the application users,” the app must not use app-only permissions. Even though admin consent is commonly required for application permissions, the permission type itself is wrong here.

Question Analysis

Core concept: This question tests the difference between delegated and application permissions in Microsoft Entra ID (Azure AD), and when admin consent is required for Azure Key Vault access. The phrase “on behalf of the application users” means the app must use delegated permissions because a signed-in user is present and the app is acting in that user’s context.

Why correct: Azure Key Vault supports delegated access for user-based scenarios, but those delegated permissions are admin-restricted and require administrator consent. Therefore, the correct configuration is a delegated permission that requires admin consent. This allows App1 to request tokens representing both the user and the application when accessing Key Vault.

Key features:
- Delegated permissions are used when a user is signed in and the app acts on behalf of that user.
- Application permissions are used for daemon or service-to-service scenarios where no user is present.
- Azure Key Vault delegated permissions require admin consent in Entra ID.
- Access to secrets still must be authorized in Key Vault through access policies or Azure RBAC.

Common misconceptions:
- “On behalf of users” does not mean users can always self-consent; some delegated permissions are admin-restricted.
- Application permissions are not appropriate when the requirement explicitly includes user context.
- Azure AD consent alone does not grant secret access; Key Vault authorization must also be configured.

Exam tips:
- If the question says “on behalf of a user,” start with delegated permissions.
- Then check whether the target API’s delegated permissions are admin-restricted.
- For Azure Key Vault, delegated permissions require admin consent, so choose a delegated permission with admin consent rather than an app-only permission.
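For context, requesting a delegated permission in App1's app registration manifest looks roughly like the fragment below. The GUID values are placeholders (the real Key Vault resource app ID and its user_impersonation scope ID must be looked up in the tenant); the teaching point is that `"type": "Scope"` marks a delegated permission, whereas `"Role"` would mark an application permission. Admin consent still has to be granted in the tenant afterward.

```json
{
  "requiredResourceAccess": [
    {
      "resourceAppId": "<Key-Vault-resource-app-id>",
      "resourceAccess": [
        {
          "id": "<user_impersonation-delegated-scope-id>",
          "type": "Scope"
        }
      ]
    }
  ]
}
```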

Question 3

You have an Azure subscription that contains a virtual machine named VM1. You create an Azure key vault that has the following configurations: ✑ Name: Vault5 ✑ Region: West US ✑ Resource group: RG1 You need to use Vault5 to enable Azure Disk Encryption on VM1. The solution must support backing up VM1 by using Azure Backup. Which key vault settings should you configure?

Access policies control who can access Key Vault objects, but they are not the specific Key Vault setting that ADE fundamentally uses to store the VM encryption material. The question asks which vault setting should be configured to enable ADE with backup support, and the encryption dependency is on secrets rather than on the permission model itself. While permissions are necessary operationally, they are not the best answer from the listed choices. In exam questions like this, Microsoft often distinguishes between the object type used by ADE and the authorization mechanism used to reach it.

Secrets are the core Key Vault object used by Azure Disk Encryption to store the BitLocker Encryption Key for the VM. ADE writes and retrieves this encryption material from Key Vault during enablement and subsequent operations. Azure Backup supports ADE-enabled virtual machines when the encryption configuration uses the supported secret-based integration with Key Vault. Because the question asks which key vault setting should be configured, Secrets is the best match among the available options.

Keys are optional in Azure Disk Encryption and are only used when you choose to implement a Key Encryption Key to wrap the BitLocker Encryption Key. ADE can be enabled successfully without configuring a KEK at all. Since the question does not state that customer-managed key wrapping is required, Keys is not the mandatory setting. Therefore, this option is too specific and not universally required for ADE with Azure Backup.

Locks are Azure resource management controls that prevent accidental deletion or modification of resources. They do not participate in the encryption workflow and have no role in storing or retrieving disk encryption material. Applying a lock to the vault would not enable Azure Disk Encryption on the VM. Locks also do not affect Azure Backup compatibility for ADE-protected virtual machines.

Question Analysis

Core concept: Azure Disk Encryption (ADE) integrates with Azure Key Vault by storing the BitLocker Encryption Key (BEK) as a secret. To enable ADE on a VM and maintain compatibility with Azure Backup, the key vault must support storing and retrieving the secrets used by the encryption extension.

Why correct: ADE for Azure VMs relies on Key Vault secrets to hold the BEK. Azure Backup supports ADE-protected VMs when the encryption material is managed through the supported Key Vault secret mechanism. Therefore, among the listed settings, Secrets is the required configuration area.

Key features: ADE stores the BEK as a secret in Key Vault and may optionally use a key encryption key (KEK) in more advanced scenarios. Azure Backup can back up ADE-enabled VMs as long as the encryption setup follows supported patterns. The vault does not need resource locks for this purpose, and keys are optional rather than mandatory.

Common misconceptions: Access policies are important for permissions, but they are not the primary vault setting being asked for in this option set. Keys are only needed when using a KEK, which is optional for ADE. Locks are unrelated to encryption functionality.

Exam tips: When a question asks what Key Vault component ADE uses, think Secrets first, because the BEK is stored as a secret. If the question instead asks about permissions or authorization, then access policies or RBAC would be the focus. Distinguish between the object type used by ADE and the permission model that allows access to it.

Question 4

HOTSPOT - You have an Azure subscription. The subscription contains Azure virtual machines that run Windows Server 2016. You need to implement a policy to ensure that each virtual machine has a custom antimalware virtual machine extension installed. How should you complete the policy? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

"effect": "______"

The correct effect is DeployIfNotExists because the requirement is to ensure every VM has a specific antimalware VM extension installed. DeployIfNotExists evaluates compliance and, when the required related resource/configuration is missing (the VM extension), it can automatically deploy it via an embedded ARM template. This is the standard pattern for enforcing VM extensions, diagnostic settings, and other “should be configured” requirements. Why not Deny? Deny would only prevent creation or update operations that don’t meet the condition; it does not remediate existing VMs that are already deployed without the extension, and it can also disrupt legitimate VM operations if not carefully scoped. Why not Append? Append (and its modern replacement Modify) is used to add or alter properties on the resource being created/updated, but it cannot reliably create a separate child resource like Microsoft.Compute/virtualMachines/extensions. Therefore, DeployIfNotExists is the only option that both detects absence and installs the extension to reach compliance.

Part 2:

"parameters": { "______": {

In a DeployIfNotExists policy, the policy rule's details section includes an existenceCondition to determine whether the related resource already exists and is compliant. For a VM extension scenario, this condition checks whether the required antimalware extension is present on the virtual machine. Template is used later inside the deployment definition to describe what to deploy for remediation, and resources is only a section within an ARM template, not the policy field being asked for here.
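Putting the two answers together, a minimal DeployIfNotExists policy rule for this scenario has roughly the shape below. This is a sketch, not the exam's exact policy: the parameter names (`publisher`, `type`) are assumed, the role-definition GUID shown is the commonly cited Virtual Machine Contributor ID and should be verified, and the remediation ARM template body is intentionally left empty.

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Compute/virtualMachines"
  },
  "then": {
    "effect": "DeployIfNotExists",
    "details": {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "existenceCondition": {
        "allOf": [
          {
            "field": "Microsoft.Compute/virtualMachines/extensions/publisher",
            "equals": "[parameters('publisher')]"
          },
          {
            "field": "Microsoft.Compute/virtualMachines/extensions/type",
            "equals": "[parameters('type')]"
          }
        ]
      },
      "roleDefinitionIds": [
        "/providers/Microsoft.Authorization/roleDefinitions/9980e02c-c2be-4d73-94e8-173b1dc7cf3c"
      ],
      "deployment": {
        "properties": {
          "mode": "incremental",
          "template": {}
        }
      }
    }
  }
}
```

The `existenceCondition` is what the remediation check evaluates: if no extension on the VM matches the publisher/type pair, the embedded deployment runs to install it.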

Question 5

HOTSPOT - You are evaluating the effect of the application security groups on the network communication between the virtual machines in Sub2. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

From VM1, you can successfully ping the private IP address of VM4.

No. Ping uses ICMP, and NSG rules only allow traffic that matches an explicit allow (or the default “AllowVNetInBound” if no higher-priority deny/limit exists). In typical ASG-based designs for a web tier, a custom inbound rule on VM4 (or its subnet) allows TCP/80 (and possibly TCP/443) from a specific source ASG (for example, VM1/VM2 in an “App” ASG) to the destination ASG containing VM4 (“Web”). If the inbound rules are restricted to TCP 80/443, ICMP is not included, so VM1 cannot successfully ping VM4. Also, if there is any higher-priority deny rule for intra-VNet traffic except web ports, that deny will override the default AllowVNetInBound. Therefore, even though VM1 may be permitted to reach the web service, ICMP echo requests will be blocked and ping will fail.

Part 2:

From VM2, you can successfully ping the private IP address of VM4.

No. The same reasoning as Part 1 applies: ICMP is not TCP/UDP and is commonly not permitted when NSG rules are written to allow only web traffic to the web server ASG. Even if VM2 is in the same subnet/VNet as VM4, a custom inbound rule set that only allows TCP 80 (and/or 443) to VM4’s ASG will not match ICMP, so the traffic will fall through to a deny (either an explicit deny rule or the default DenyAllInBound after the allow rules are evaluated). Exam tip: don’t assume “same VNet means ping works”; NSGs can block ICMP, and ASG-based segmentation often intentionally blocks non-required protocols to reduce lateral movement risk.

Part 3:

From VM1, you can connect to the web server on VM4.

Yes. Connecting to the web server on VM4 implies TCP port 80 (HTTP) is allowed from VM1 to VM4. With ASGs, this is typically implemented as an inbound NSG rule on VM4’s NIC/subnet that allows Source = ASG containing VM1 (for example, “AppServers” or “Mgmt”) and Destination = ASG containing VM4 (“WebServers”), Service = TCP/80, Action = Allow, with a priority higher than any deny rules. Because NSG processing requires both outbound from VM1 and inbound to VM4 to allow the flow, and outbound is usually permitted by default (AllowVNetOutBound) unless explicitly denied, the decisive control is the inbound allow to VM4 on TCP/80. Therefore VM1 can establish an HTTP connection even though ping (ICMP) is blocked.
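The contrast across the three parts — ping blocked, HTTP allowed — comes down to first-match evaluation by protocol and port. A minimal sketch under the assumptions above (the rule set and `inbound_decision` helper are illustrative, not an Azure API):

```python
# Inbound rules on VM4's side, as (priority, protocol, port, action).
# "Any"/None act as wildcards; lower priority number is evaluated first.
RULES = [
    (100, "TCP", 80,   "Allow"),  # web traffic to the WebServers ASG
    (200, "Any", None, "Deny"),   # everything else, including ICMP
]

def inbound_decision(protocol, port=None):
    """Return the action of the first rule matching the protocol/port."""
    for _, proto, p, action in sorted(RULES):
        if proto in ("Any", protocol) and p in (None, port):
            return action
    return "Deny"

print(inbound_decision("TCP", 80))  # Allow -> the HTTP connection succeeds
print(inbound_decision("ICMP"))     # Deny  -> ping fails
```

Because ICMP has no port at all, it can never match a rule written for TCP/80, so it falls through to the deny — the crux of Parts 1 and 2 versus Part 3.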

Want to practice every question anywhere?

Download Cloud Pass — includes practice tests, progress tracking, and more.

Question 6

HOTSPOT - You have the Azure Information Protection labels as shown in the following table.

(diagram not shown)

You have the Azure Information Protection policies as shown in the following table.

Name | Applies to | Use label | Set the default label
Global | Not applicable | None | None
Policy1 | User1 | Label1 | None
Policy2 | User1 | Label2 | None
You need to identify how Azure Information Protection will label files.
What should you identify? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Part 1:

If User1 creates a Microsoft Word file that includes the text “Black and White”, the file will be assigned: ______

User1 receives both Policy1 and Policy2, so both Label1 and Label2 are available for evaluation. In the Word document text "Black and White", Label1 matches because it looks for "White" with case sensitivity on, and that exact casing is present. Label2 also matches because it looks for "Black" with case sensitivity off, so "Black" is detected as well. Therefore, for this hotspot the correct identification is Label1 and Label2; the other options are incorrect because both conditions are satisfied, and "No label" is clearly wrong for a supported Office file type.

Part 2:

If User1 creates a Microsoft Notepad file that includes the text “Black or white”, the file will be assigned: ______

The file is created in Microsoft Notepad, which produces a plain text (.txt) file. AIP automatic labeling based on content inspection is designed primarily for supported Office file types (Word, Excel, PowerPoint) and certain other supported formats. Plain text files created by Notepad are not automatically labeled by scanning their content in the same way. Even though the text “Black or white” contains both target words, automatic labeling won’t trigger for this Notepad file type. Additionally, even if content were evaluated, Label1 requires case-sensitive match for “White” (capital W). The text contains “white” in lowercase, so Label1 would not match. Label2 is case-insensitive and would match “Black”, but again the key blocker is that Notepad files aren’t auto-labeled via content conditions. Therefore, the file will be assigned no label. Why others are wrong: Label1 only is wrong due to case sensitivity (“white” != “White”) and file type. Label2 only is wrong because auto-labeling doesn’t apply to Notepad files. Label1 and Label2 is also impossible because only one label can be applied and auto-labeling won’t occur here.
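The case-sensitivity logic behind both parts can be sketched as below. Note this models only the condition matching; the separate blocker for Part 2 — that Notepad/.txt files are not auto-labeled at all — is not represented. The label/condition definitions mirror the question's tables; the `matching_labels` helper is illustrative, not the AIP engine.

```python
# Label conditions as described in the question:
# Label1 looks for "White" with case sensitivity ON,
# Label2 looks for "Black" with case sensitivity OFF.
LABELS = {
    "Label1": {"phrase": "White", "case_sensitive": True},
    "Label2": {"phrase": "Black", "case_sensitive": False},
}

def matching_labels(text):
    """Return the labels whose phrase condition matches the text."""
    matched = []
    for name, cond in LABELS.items():
        haystack = text if cond["case_sensitive"] else text.lower()
        needle = cond["phrase"] if cond["case_sensitive"] else cond["phrase"].lower()
        if needle in haystack:
            matched.append(name)
    return matched

print(matching_labels("Black and White"))  # ['Label1', 'Label2'] -> Part 1
print(matching_labels("Black or white"))   # ['Label2'] -- lowercase "white" fails Label1
```

So even ignoring the file-type restriction, "Black or white" could only ever trigger Label2, never Label1.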

Question 7

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have a hybrid configuration of Azure Active Directory (Azure AD). You have an Azure HDInsight cluster on a virtual network. You plan to allow users to authenticate to the cluster by using their on-premises Active Directory credentials. You need to configure the environment to support the planned authentication. Solution: You deploy an Azure AD Application Proxy. Does this meet the goal?

Yes is incorrect because deploying Azure AD Application Proxy would not enable the HDInsight cluster to authenticate against on-premises Active Directory in the way required. The cluster needs access to domain services, not just a reverse proxy for web applications. This option reflects a common confusion between application publishing and providing AD DS-backed identity services for Azure-hosted workloads.

No is correct because Azure AD Application Proxy does not provide the domain services that HDInsight needs to authenticate users with on-premises Active Directory credentials. Application Proxy is used to expose on-premises web applications through Azure AD preauthentication or pass-through access, but it does not supply LDAP, Kerberos, or domain-join functionality. For HDInsight in a hybrid setup, you typically need Azure AD DS or equivalent AD DS availability in the virtual network so the cluster can use traditional directory-based authentication.

Question Analysis

Core concept: This question tests how to enable Enterprise Security Package (ESP) or domain-based authentication for Azure HDInsight in a hybrid identity scenario. HDInsight users authenticating with on-premises Active Directory credentials require directory services such as Active Directory Domain Services available to the cluster, typically through Azure AD DS synchronized from on-premises AD or a domain controller presence reachable from the VNet.

Why the answer is correct: The proposed solution does not meet the goal because Azure AD Application Proxy is designed to publish on-premises web applications for remote access through Azure AD. It does not provide LDAP, Kerberos, domain join, or managed domain services needed for HDInsight cluster authentication with on-premises AD credentials. To support this scenario, you would typically configure Azure AD DS integrated with the hybrid Azure AD environment and join HDInsight to that managed domain.

Key features / best practices:
- HDInsight domain-based authentication relies on domain services, not just Azure AD application access.
- Azure AD DS can provide managed Kerberos/LDAP/domain join capabilities in Azure for workloads in a virtual network.
- In hybrid identity scenarios, synchronize identities from on-premises AD to Azure AD, then enable Azure AD DS for Azure-hosted services that require traditional AD features.
- Ensure network connectivity, DNS configuration, and proper OU/service account setup when integrating HDInsight with domain services.

Common misconceptions:
- Confusing Azure AD Application Proxy with a general-purpose hybrid identity bridge. It only publishes web apps and does not extend AD DS protocols into Azure workloads.
- Assuming Azure AD alone is sufficient for services that require classic domain capabilities such as Kerberos or LDAP.
- Believing any Azure AD-related service can enable on-premises credential authentication for infrastructure workloads.

Exam tips: When a question mentions using on-premises Active Directory credentials for Azure services inside a VNet, think about AD DS requirements such as domain join, LDAP, Kerberos, or Azure AD DS. If an option mentions Application Proxy, remember it is for publishing web applications, not for providing domain services to VMs or clusters.

Question 8

You onboard Azure Sentinel. You connect Azure Sentinel to Azure Security Center. You need to automate the mitigation of incidents in Azure Sentinel. The solution must minimize administrative effort. What should you create?

An alert rule (analytics rule) in Microsoft Sentinel is primarily for detection: it queries data, creates alerts, and can generate incidents. While it can trigger automation indirectly, the rule itself does not implement mitigation steps. If the requirement is to automate response/mitigation actions, you still need a playbook (Logic App) typically executed via an automation rule.

A playbook is the native Microsoft Sentinel automation mechanism built on Azure Logic Apps. It can be triggered by Sentinel incidents/alerts and execute mitigation actions (disable user, isolate device via Defender, block IP, open tickets, notify teams). It is low-code with many built-in connectors, which minimizes administrative effort compared to building and maintaining custom code or separate automation tooling.

A Function App can automate mitigation through custom code, but it increases administrative effort: you must write, secure, deploy, monitor, and maintain the code and integrations (APIs, authentication, retries, error handling). In Sentinel, Function Apps are not the primary built-in incident response automation approach; they are better suited for specialized custom processing when Logic Apps connectors are insufficient.

A runbook (Azure Automation) can automate operational tasks and remediation, but it is not the primary, most streamlined Sentinel incident automation method. Integrating runbooks with Sentinel typically requires additional wiring (webhooks/Logic Apps) and more operational overhead (modules, hybrid workers, credential management). For minimizing effort and using Sentinel-native automation, playbooks are preferred.

Question Analysis

Core concept: This question tests Microsoft Sentinel (formerly Azure Sentinel) incident response automation. In Sentinel, automation is implemented through playbooks, which are Azure Logic Apps workflows triggered by Sentinel incidents or alerts. When Sentinel is connected to Microsoft Defender for Cloud (formerly Azure Security Center), Defender alerts can create Sentinel incidents, and playbooks can automatically remediate or contain threats.

Why the answer is correct: To automate mitigation of incidents while minimizing administrative effort, you should create a playbook. Playbooks provide low-code/no-code orchestration with built-in Sentinel connectors and security connectors (Microsoft Defender, Microsoft Entra ID, Microsoft 365, ServiceNow, Teams, etc.). You can attach playbooks to automation rules so they run automatically when incidents are created or updated or when alerts fire. This reduces manual triage and response steps and centralizes response actions.

Key features / configurations:
- Playbooks are Logic Apps with the “Microsoft Sentinel” trigger (incident or alert) and actions to update incidents, add comments, change severity, or run containment actions.
- Use automation rules to automatically run a playbook based on conditions (severity, tactics, analytics rule name, entity type), aligning with Azure Well-Architected Framework operational excellence (repeatable operations) and security (consistent response).
- Managed identity and least privilege: grant the playbook identity only the permissions needed (e.g., Sentinel Responder/Contributor, Defender actions, Entra ID roles) to reduce blast radius.

Common misconceptions:
- Alert rules (analytics rules) detect and create alerts/incidents but do not perform mitigation actions by themselves.
- Runbooks (Azure Automation) and Function Apps can automate tasks, but they require more custom development/integration and typically more operational overhead than Logic Apps playbooks with native Sentinel triggers and connectors.

Exam tips: For Sentinel automation questions, remember: detection = analytics/alert rules; response automation = playbooks (Logic Apps), often invoked via automation rules. If the question emphasizes “minimize administrative effort,” prefer built-in, low-code Sentinel automation (playbooks) over custom code (functions) or separate automation platforms (runbooks).

Question 9

HOTSPOT - You are evaluating the security of the network communication between the virtual machines in Sub2. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

From VM1, you can successfully ping the public IP address of VM2.

No. Pinging a VM’s public IP from another VM is treated as traffic to the Internet-facing endpoint. For the ping to succeed, the destination VM (VM2) must have an inbound rule allowing ICMP from the source (or from the Internet/AzureLoadBalancer depending on the path). In most secure configurations (and commonly in AZ-500 questions), NSGs do not include an allow rule for ICMP inbound, so the default NSG behavior (implicit deny) blocks it. Even if VM1 can send the outbound ICMP, the return traffic will not be permitted unless the inbound path is explicitly allowed. Also, many enterprise designs force outbound through a firewall/NVA and deny ICMP to public endpoints. Therefore, VM1 cannot successfully ping VM2’s public IP. Why “Yes” is wrong: it assumes public IPs are reachable by default. In Azure, inbound to a VM via public IP is not open unless NSG rules explicitly allow it.

Part 2:

From VM1, you can successfully ping the private IP address of VM3.

Yes. Pinging VM3’s private IP from VM1 is private east-west traffic. If VM1 and VM3 are in the same VNet (or in VNets with peering that allows traffic), Azure system routes provide connectivity automatically. In many AZ-500 scenarios, private VM-to-VM traffic inside a subnet/VNet is allowed unless an NSG explicitly denies it. NSGs are stateful, so if outbound ICMP is allowed from VM1 to VM3, the return traffic is automatically allowed. Why “No” would be correct only in specific cases: if there is an NSG rule denying ICMP (or denying all intra-VNet traffic), or a UDR that forces traffic through a firewall/NVA that blocks ICMP, or if there is no private routing relationship (different VNets without peering/VPN/ER). Absent those explicit blocks, private IP ping succeeds.

Part 3:

From VM1, you can successfully ping the private IP address of VM5.

No. VM5’s private IP is typically used in these questions to represent a different network segment where routing exists but security controls block east-west traffic (for example, a different subnet protected by an NSG deny rule, or traffic forced through Azure Firewall with an application/network rule set that does not permit ICMP). Even if VNets are peered, NSGs on either subnet/NIC can deny the traffic, and a UDR can steer traffic to a firewall/NVA that blocks it. Because ping uses ICMP, it is frequently not explicitly allowed in hardened environments. Why “Yes” is wrong: it assumes private IP reachability implies permission. In Azure, reachability (routes) and permission (NSG/Firewall policy) are separate; private connectivity can still be denied by NSG rules or centralized inspection/segmentation controls.

Question 10

You have 10 virtual machines on a single subnet that has a single network security group (NSG). You need to log the network traffic to an Azure Storage account. What should you do?

Network Performance Monitor (NPM) is an older Log Analytics solution aimed at monitoring network performance, latency, and connectivity (often via agents) and visualizing network paths. It is not the primary feature for logging NSG-evaluated traffic flows to an Azure Storage account. For AZ-500, distinguish performance monitoring from security flow logging; NPM doesn’t satisfy the explicit Storage-based NSG traffic logging requirement.

A Log Analytics workspace is a data store for Azure Monitor logs and enables KQL queries, alerts, and workbooks. However, creating a workspace alone does not capture NSG traffic. You would still need to enable NSG flow logs (and optionally Traffic Analytics) to send data to Log Analytics. Since the requirement is specifically to log traffic to an Azure Storage account, a workspace is not sufficient or necessary as the primary action.

Enabling diagnostic logging for an NSG can be confused with flow logging, but diagnostic settings typically target control-plane/resource logs and metrics rather than detailed per-flow network traffic decisions. The exam expects NSG flow logs for recording allowed/denied flows. If you need actual traffic flow records written to Storage, diagnostic logging is not the correct mechanism compared to NSG flow logs.

NSG flow logs (via Azure Network Watcher) record information about IP traffic flowing through an NSG, including whether traffic was allowed or denied, plus 5-tuple details and timestamps. Flow logs are stored in an Azure Storage account by design (and can optionally be analyzed with Traffic Analytics/Log Analytics). With one subnet and one NSG protecting 10 VMs, enabling flow logs on that NSG meets the requirement directly.
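For a feel of what lands in the storage account, a version 2 flow log record has roughly the shape below. All field values here are fabricated for illustration; the exact schema should be checked against the Network Watcher documentation. Each comma-separated flow tuple carries the timestamp, 5-tuple (source/destination IP and port plus protocol, T for TCP), direction (I/O), decision (A for allowed, D for denied), and flow state:

```json
{
  "time": "2025-01-01T00:00:00.000Z",
  "category": "NetworkSecurityGroupFlowEvent",
  "resourceId": "/SUBSCRIPTIONS/<sub-id>/RESOURCEGROUPS/RG1/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/NSG1",
  "properties": {
    "Version": 2,
    "flows": [
      {
        "rule": "DefaultRule_AllowVnetInBound",
        "flows": [
          {
            "mac": "000D3AF87856",
            "flowTuples": [
              "1735689600,10.0.0.4,10.0.0.5,44931,443,T,I,A,B,,,,"
            ]
          }
        ]
      }
    ]
  }
}
```

The per-rule grouping is what makes flow logs useful for auditing: you can see exactly which NSG rule allowed or denied each flow.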

Question Analysis

Core concept: This question tests Azure network traffic logging at the subnet/NSG layer. In Azure, network security groups (NSGs) enforce L3/L4 allow/deny rules. To log actual IP traffic decisions (accepted/denied flows) for resources associated with an NSG, you use NSG flow logs, a feature of Azure Network Watcher. Flow logs are written to an Azure Storage account (and optionally sent to Log Analytics/Traffic Analytics).

Why the answer is correct: Because all 10 VMs are on a single subnet protected by a single NSG, enabling NSG flow logs on that NSG captures the network flow records for traffic evaluated by the NSG rules. NSG flow logs are specifically designed to log network traffic metadata (5-tuple, direction, decision, timestamps, counters) and store it in an Azure Storage account, meeting the requirement directly.

Key features / configuration notes:
- Prerequisites: Network Watcher enabled in the region, and an existing storage account (often in the same region for cost/performance).
- Scope: You enable flow logs per NSG; they cover the NICs/subnets associated with that NSG.
- Versions: Flow log version 2 provides richer fields and better analytics compatibility.
- Retention/cost: Storage costs accrue; configure retention and lifecycle management. Consider sending to Log Analytics for querying, but Storage is the required sink in this question.
- Well-Architected alignment: Supports Security (visibility/auditing), Operational Excellence (monitoring), and Cost Optimization (retention controls).

Common misconceptions:
- “Diagnostic logging for the NSG” can sound correct, but NSG diagnostic logs are not the same as flow logs and historically do not provide the per-flow traffic records required for network traffic logging to Storage.
- Creating a Log Analytics workspace is useful for querying and alerting, but it doesn’t itself capture NSG traffic; you still need flow logs (or other data sources), and the question explicitly requires logging to a Storage account.
- Network Performance Monitor focuses on performance monitoring and dependency mapping, not authoritative NSG traffic decision logging to Storage.

Exam tips: When you see “log network traffic” + “NSG/subnet” + “Storage account,” think: Network Watcher -> NSG flow logs. If the requirement is “analyze/query,” add Log Analytics/Traffic Analytics, but the foundational control remains NSG flow logs.

Other Practice Tests

Practice Test #1

50 Questions · 100 min · Passing score 700/1000

Practice Test #3

50 Questions · 100 min · Passing score 700/1000

Practice Test #4

50 Questions · 100 min · Passing score 700/1000

Practice Test #5

50 Questions · 100 min · Passing score 700/1000
← View All Microsoft AZ-500 Questions

Start Practicing Now

Download Cloud Pass and start practicing all the Microsoft AZ-500 questions.


© Copyright 2026 Cloud Pass, All rights reserved.
