
You create a new Azure subscription. You need to ensure that you can create custom alert rules in Azure Security Center. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
Azure AD Identity Protection is an identity-focused service that detects risky sign-ins and risky users in Microsoft Entra ID. It is not a prerequisite for configuring custom alert rules in Azure Security Center. Although identity signals may complement a broader security strategy, onboarding Identity Protection does not enable Security Center custom alerts. Therefore, it is unrelated to the required setup in this question.
An Azure Storage account is required in the classic Azure Security Center custom alert workflow to hold collected security data used for analysis. Security Center can use this stored telemetry as part of its detection pipeline for generating alerts. Without the storage account, the service lacks the necessary backing store referenced in this exam scenario. This makes the storage account a prerequisite for creating custom alert rules in the older Security Center model.
Azure Advisor provides best-practice recommendations for cost, reliability, performance, operational excellence, and security. Implementing those recommendations can improve an environment, but it does not activate or configure custom alert rule functionality in Azure Security Center. Advisor is a recommendation engine, not the platform used to define Security Center custom alerts. As a result, it is not part of the required solution.
A Log Analytics workspace is commonly associated with Azure Monitor, Microsoft Sentinel, and many Defender for Cloud data collection scenarios. However, in the classic Azure Security Center custom alert rule context tested by this question, it is not the specific prerequisite being asked for. The exam expects the combination of Standard tier enablement and a storage account instead. Therefore, selecting a Log Analytics workspace here reflects a different monitoring architecture than the one targeted by the question.
The Standard pricing tier of Azure Security Center unlocks advanced security capabilities, including threat detection and custom alerting features. The Free tier is limited primarily to security posture assessment and recommendations, not advanced configurable detections. Because the question asks specifically about custom alert rules, enabling Standard is necessary. This is a common AZ-500 pattern whenever advanced Security Center functionality is required.
Core concept: This question is about the prerequisites for creating custom alert rules in Azure Security Center (now Microsoft Defender for Cloud) in the classic AZ-500 context. Custom alerts in Security Center relied on collected security data and the advanced capabilities available only in the Standard tier. A storage account is needed to store the security event data used for these detections, and the subscription must be upgraded to Standard to unlock custom alert functionality.

Why correct: Creating a storage account provides the location for storing collected security events and telemetry that Security Center can analyze for custom alerting scenarios. Upgrading Security Center to the Standard tier enables advanced threat detection and custom alert rule capabilities that are not available in the Free tier.

Key features:
- Security Center Standard adds advanced threat protection, richer detections, and configurable alerting.
- Storage accounts can be used as part of the data collection pipeline for security events in older Security Center workflows.
- The Free tier focuses mainly on security posture and recommendations rather than advanced alert customization.

Common misconceptions:
- A Log Analytics workspace is important for many Azure Monitor and Sentinel scenarios, but it is not the required prerequisite for this specific Security Center custom alert rule question as commonly tested.
- Azure AD Identity Protection and Azure Advisor are separate services and do not enable Security Center custom alert creation.

Exam tips: For older Azure Security Center exam questions, distinguish between Azure Monitor/Sentinel analytics rules and Security Center custom alerts. If the question asks specifically about Security Center custom alert rules, look for Standard tier enablement and the supporting data storage requirement rather than assuming Log Analytics is always mandatory.
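The two actions can be sketched with Azure CLI. This is a hedged sketch: the names MyRG, mysecstorage, and eastus are placeholders, and current CLI builds expose the old Standard tier as per-resource-type Defender plans (`az security pricing`).

```shell
# 1. Upgrade the subscription's Security Center pricing tier to Standard
#    (shown here for the VirtualMachines plan; repeat per resource type as needed)
az security pricing create --name VirtualMachines --tier Standard

# 2. Create the storage account that holds collected security event data
#    used by the classic custom alert workflow
az storage account create \
  --name mysecstorage \
  --resource-group MyRG \
  --location eastus \
  --sku Standard_LRS
```

Both commands require an authenticated session (`az login`) with sufficient permissions on the subscription.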
Want to practice all questions on the go?
Download Cloud Pass for free — includes practice tests, progress tracking & more.


HOTSPOT - You have an Azure subscription that contains the virtual machines shown in the following table.

Name | Resource group | Status
VM1  | RG1            | Stopped (Deallocated)
VM2  | RG2            | Stopped (Deallocated)

You create the Azure policies shown in the following table.
You create the resource locks shown in the following table.
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:
Not allowed resource types: Resource type: virtualMachines, Scope: RG1
Yes. A policy assignment scoped to RG1 that specifies a Not allowed resource types list including Microsoft.Compute/virtualMachines (often shown as “virtualMachines”) will deny create operations for that resource type within RG1. Azure Policy with a Deny effect is evaluated by Azure Resource Manager when a request is made to create or update a resource. Because the scope is RG1, the restriction applies to resources in RG1 only. This does not retroactively delete or disable existing VMs; it prevents future create/update requests that would violate the policy. Also, it does not automatically block runtime actions unless those actions are implemented as ARM writes and are explicitly denied by policy conditions (most “not allowed resource types” policies focus on resource creation). Therefore, the statement that virtualMachines are not allowed at scope RG1 is true.
Allowed resource types: Resource type: virtualMachines, Scope: RG2
Yes. A policy assignment scoped to RG2 that specifies Allowed resource types including Microsoft.Compute/virtualMachines means only the listed types are permitted for create/update operations in RG2. If virtualMachines is in the allowed list, then creating a VM is permitted by that policy (assuming no other policy assignments deny it). In Azure Policy, “Allowed resource types” is typically implemented with a Deny effect for any resource type not in the list. So the presence of virtualMachines in the allowed list explicitly permits that type at that scope. This is a common governance control to restrict what can be deployed in a resource group. Therefore, the statement that virtualMachines are allowed at scope RG2 is true.
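The two policy assignments can be sketched with Azure CLI. Built-in policy definitions are identified by GUID, so this hedged sketch looks them up by display name; the parameter names (`listOfResourceTypesNotAllowed`, `listOfResourceTypesAllowed`) are those of the built-in definitions and should be verified in your tenant.

```shell
# Deny virtual machines in RG1 via the built-in "Not allowed resource types"
DENY_DEF=$(az policy definition list \
  --query "[?displayName=='Not allowed resource types'].name" -o tsv)
az policy assignment create \
  --name deny-vms-rg1 \
  --policy "$DENY_DEF" \
  --resource-group RG1 \
  --params '{"listOfResourceTypesNotAllowed":{"value":["Microsoft.Compute/virtualMachines"]}}'

# Permit only virtual machines in RG2 via the built-in "Allowed resource types"
ALLOW_DEF=$(az policy definition list \
  --query "[?displayName=='Allowed resource types'].name" -o tsv)
az policy assignment create \
  --name allow-vms-rg2 \
  --policy "$ALLOW_DEF" \
  --resource-group RG2 \
  --params '{"listOfResourceTypesAllowed":{"value":["Microsoft.Compute/virtualMachines"]}}'
```

Note that "Allowed resource types" is implemented as a Deny on anything not in the list, which matches the evaluation behavior described above.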
Lock1 is Read-only and created on VM1.
Yes. The statement is about the lock configuration: Lock1 is Read-only and created on VM1. A Read-only lock applied directly to a VM resource means the VM resource cannot be modified through ARM operations. This includes operations that change state or configuration (for example, start/stop/restart, resizing, updating extensions), because these are executed via ARM and treated as write operations. In exam questions, when a lock is described as being created “on VM1,” it means the lock scope is the VM resource itself (not inherited from the resource group). That scope is narrower than an RG-level lock, but it is still sufficient to block modifications to VM1. Therefore, the statement describing Lock1 is true.
Lock2 is Read-only and created on RG2.
Yes. The statement is about the lock configuration: Lock2 is Read-only and created on RG2. A Read-only lock at the resource group scope applies to the resource group and all resources within it (inheritance). That means any resource in RG2 becomes effectively read-only from an ARM perspective: you can’t create, delete, or modify resources in that RG while the lock is in place. This is a strong protection mechanism used to prevent accidental changes to critical environments. In the context of the question, it also means that even if Azure Policy would allow a VM type in RG2, the Read-only lock would still block creation or modification operations. Therefore, the statement that Lock2 is Read-only and created on RG2 is true.
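Both locks from the scenario can be sketched with `az lock create`; the resource-group placement (VM1 in RG1, lock on RG2) follows the question's tables.

```shell
# Lock1: Read-only lock scoped to the VM1 resource itself
az lock create \
  --name Lock1 \
  --lock-type ReadOnly \
  --resource-group RG1 \
  --resource-name VM1 \
  --resource-type Microsoft.Compute/virtualMachines

# Lock2: Read-only lock at the RG2 scope, inherited by every resource in RG2
az lock create \
  --name Lock2 \
  --lock-type ReadOnly \
  --resource-group RG2
```

Omitting `--resource-name`/`--resource-type` scopes the lock to the resource group, which is what produces the inheritance behavior discussed above.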
You can start VM1.
No. You cannot start VM1 because Lock1 is a Read-only lock applied to VM1. Starting a VM is an ARM control-plane action (Microsoft.Compute/virtualMachines/start/action) and is treated as a write/modify operation against the VM resource. Read-only locks block write operations, so the start request will be denied by Azure Resource Manager. Even though VM1 is currently Stopped (Deallocated), the deallocated state does not bypass locks. Locks are evaluated at the time of the management operation. Also, Azure Policy about “not allowed resource types” in RG1 primarily affects creation of new resources; it is the Read-only lock that directly prevents the start operation. Therefore, the statement “You can start VM1” is false.
You can start VM2.
No. You cannot start VM2 because Lock2 is a Read-only lock applied at the RG2 scope. Resource group locks are inherited by all resources in the group, including VM2. As with VM1, starting a VM is a management operation executed via ARM and is considered a modification of the VM resource state. A Read-only lock at the resource group level blocks any write operations on resources in that group, including start/stop/restart and configuration changes. Azure Policy in RG2 allowing virtualMachines does not override the lock; locks are enforced independently by ARM and will still deny the operation. Therefore, the statement “You can start VM2” is false.
You can create a virtual machine in RG2.
No. Even though the Azure Policy in RG2 allows the virtualMachines resource type, the Read-only lock (Lock2) on RG2 prevents creating any new resources in that resource group. Creating a VM is an ARM create operation (a write) and is blocked by a Read-only lock at the resource group scope. This is a key exam point: Azure Policy determines whether a request is compliant and can be accepted, but a lock can still block the operation even if it is compliant. In other words, “allowed by policy” does not mean “possible” when a Read-only lock exists. To create a VM in RG2, you would need to remove or change the lock (for example, remove Read-only or use a different protection strategy). Therefore, the statement “You can create a virtual machine in RG2” is false.
From Azure Security Center, you create a custom alert rule. You need to configure which users will receive an email message when the alert is triggered. What should you do?
Incorrect. Azure Monitor action groups are used to define notification targets for Azure Monitor alerts, such as metric alerts, log alerts, and activity log alerts. The question is specifically about Azure Security Center custom alert notifications, which are configured through Security Center's own policy-based email notification settings. Choosing an action group confuses Azure Monitor alerting with Defender for Cloud alert notification configuration.
Correct. In Azure Security Center, the recipients for alert email notifications are configured in the Security policy settings for the subscription. This is where you define whether subscription owners are notified and where you add additional email addresses for security alert notifications. Because the question asks specifically about Azure Security Center and who receives an email when the alert is triggered, modifying the subscription's Security policy settings is the appropriate action.
Incorrect. The Security Reader role in Azure AD or Azure RBAC determines who can view security-related information, such as alerts and recommendations. It does not control who receives email notifications when a Security Center alert is triggered. Email delivery settings are configured separately in Security Center policy settings.
Incorrect. Modifying the alert rule affects the rule definition itself, such as what condition generates the alert. It does not determine the list of users who receive email notifications for Security Center alerts. Recipient configuration is handled through the subscription's Security policy notification settings, not directly inside the alert rule.
Core concept: In Azure Security Center (now Microsoft Defender for Cloud), email notifications for security alerts are configured in the subscription's Security policy settings. These settings let you specify which users or email addresses should receive notifications when security alerts are generated.

Why correct: To control who receives email messages for Security Center alerts, you modify the email notification configuration under the Security policy for the Azure subscription. This is the built-in mechanism Security Center uses for alert notification recipients, rather than Azure Monitor action groups.

Key features:
- Security policy settings include email notification options for security alerts.
- You can notify subscription owners and specify additional email recipients.
- These settings apply at the subscription level and are part of Defender for Cloud's security configuration.
- RBAC roles affect access to alerts, but not email delivery configuration.

Common misconceptions:
- Azure Monitor action groups are used for Azure Monitor alerts, but Security Center alert email recipients in this context are configured through Security policy settings.
- Editing an alert rule does not define the recipient list for Security Center email notifications.
- Assigning users to Security Reader only grants visibility into alerts and recommendations; it does not subscribe them to emails.

Exam tips: For AZ-500, when a question asks about who receives email notifications from Azure Security Center/Defender for Cloud alerts, think about Security policy email notification settings at the subscription level. Reserve Azure Monitor action groups for Azure Monitor alerting scenarios unless the question explicitly references Azure Monitor alerts.
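The subscription-level recipients can also be set from the CLI as a security contact. This is a hedged sketch: the email address is a placeholder, and the parameter names for `az security contact create` have changed across CLI versions, so verify against your installed version.

```shell
# Configure who receives security alert emails for the subscription
az security contact create \
  --name default1 \
  --email secops@contoso.com \
  --alert-notifications on \
  --alerts-admins on
```

`--alerts-admins on` corresponds to the "notify subscription owners" option in the portal's Security policy settings.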
HOTSPOT - You have an Azure subscription named Sub1 that is associated to an Azure Active Directory (Azure AD) tenant named contoso.com. You plan to implement an application that will consist of the resources shown in the following table.

Name             | Type                    | Description
CosmosDBAccount1 | Azure Cosmos DB account | A Cosmos DB account containing a database named CosmosDB1 that serves as the back-end tier of the application
WebApp1          | Azure web app           | A web app configured to serve as the middle tier of the application

Users will authenticate by using their Azure AD user account and access the Cosmos DB account by using resource tokens. You need to identify which tasks will be implemented in CosmosDB1 and WebApp1. Which task should you identify for each resource? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:
CosmosDB1: ______
CosmosDB1 (the database in the Cosmos DB account) is where Cosmos DB security principals for resource tokens live: Cosmos DB “users” and “permissions” are created in the database context. A resource token is effectively derived from a permission (e.g., read on a container, or read/write on specific partition scope) that is associated with a Cosmos DB database user. Therefore, the task that belongs to CosmosDB1 is creating database users (and their permissions), which is the prerequisite for issuing resource tokens. Why not A or B? Cosmos DB does not authenticate Azure AD users for the purpose of issuing resource tokens, and it does not generate tokens based on Azure AD sign-in. Azure AD authentication happens at the application tier, and Cosmos DB resource tokens are generated by calling Cosmos DB APIs with privileged credentials, not by Cosmos DB “authenticating” the Azure AD user directly.
WebApp1: ______
WebApp1 is the trusted middle tier that authenticates users with Azure AD and, after authorizing them, uses privileged access to Cosmos DB to create/read permissions and generate a resource token for the client. In the Cosmos DB resource token pattern, the application server issues the token; Cosmos DB stores the users and permissions that the token is based on. Option B is incomplete because relaying implies the token was generated elsewhere, but the middle tier is the component that actually generates it. Option C is less accurate because creating database users is a Cosmos DB database task, even though WebApp1 may invoke the APIs to do so.
You are configuring and securing a network environment. You deploy an Azure virtual machine named VM1 that is configured to analyze network traffic. You need to ensure that all network traffic is routed through VM1. What should you configure?
System routes are the default routes Azure automatically provides (VNet local, internet, virtual network gateway, etc.). They determine baseline connectivity but are not typically configurable to force traffic through a specific VM as a next hop. You can view effective routes, but you generally cannot edit system routes to steer traffic through VM1. For traffic inspection scenarios, you override system routes using UDRs.
A network security group (NSG) is a stateful packet filter used to allow or deny inbound/outbound traffic based on source/destination, ports, and protocol. NSGs do not perform routing and cannot force traffic to traverse a particular VM. While NSGs are important for securing VM1 and subnets, they won’t ensure that all traffic is routed through VM1.
A user-defined route (UDR) in a route table is the correct way to steer traffic through VM1. You create a route (often 0.0.0.0/0 or specific prefixes) with next hop type “Virtual appliance” and set VM1’s private IP as the next hop, then associate the route table to the relevant subnet(s). Combined with IP forwarding on VM1, this forces traffic to traverse VM1 for inspection.
Core concept: This question tests Azure routing control within a virtual network (VNet). To force traffic inspection through a network virtual appliance (NVA) such as VM1 (used to analyze traffic), you must influence the effective routes used by NICs/subnets. In Azure, this is done with route tables and user-defined routes (UDRs), often called "forced tunneling" or "traffic steering."

Why the answer is correct: A user-defined route lets you override Azure's default system routes and direct traffic to a specific next hop, such as a Virtual appliance (the IP of VM1). By associating a route table to the relevant subnet(s), you can ensure that traffic destined for other subnets, on-premises, or the internet is routed via VM1. This is the standard pattern for inserting an inspection VM/firewall into the data path. Without a UDR, Azure will typically route traffic directly using system routes (e.g., VNet local routing) and bypass VM1.

Key features / configuration details:
- Create an Azure route table, add routes (e.g., 0.0.0.0/0 or specific prefixes) with next hop type "Virtual appliance" and next hop IP = VM1's private IP.
- Associate the route table to the subnet(s) whose traffic must be inspected.
- Ensure VM1 can forward traffic: enable IP forwarding on VM1's NIC and configure OS-level forwarding/NVA software.
- Consider asymmetric routing: ensure return paths also traverse VM1 (often requires UDRs on multiple subnets).
- Aligns with the Azure Well-Architected Framework (Security + Reliability): centralized inspection, consistent policy enforcement, and predictable routing.

Common misconceptions: NSGs control allow/deny at L4 (5-tuple) but do not change the path traffic takes; they cannot "route" traffic through VM1. System routes exist but are managed by Azure and generally cannot be edited to force a next hop via an appliance.

Exam tips: If the requirement is "force/ensure traffic goes through an NVA/VM," think "UDR + route table + next hop = Virtual appliance." If the requirement is "permit/deny traffic," think NSG. If the requirement is "Azure default routing," think system routes (not configurable for this purpose).
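The configuration steps above can be sketched with Azure CLI, assuming placeholder names (MyRG, VNet1, Subnet1, VM1VMNic) and that VM1's private IP is 10.0.1.4.

```shell
# Create a route table and a default route whose next hop is VM1
az network route-table create --name rt-inspect --resource-group MyRG --location eastus

az network route-table route create \
  --name default-via-vm1 \
  --route-table-name rt-inspect \
  --resource-group MyRG \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4

# Associate the route table with the subnet whose traffic must be inspected
az network vnet subnet update \
  --name Subnet1 --vnet-name VNet1 --resource-group MyRG \
  --route-table rt-inspect

# VM1 must also be allowed to forward packets it did not originate
az network nic update --name VM1VMNic --resource-group MyRG --ip-forwarding true
```

OS-level forwarding inside VM1 (and the NVA software itself) still has to be configured separately; Azure's IP forwarding flag only permits the NIC to pass traffic through.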
You need to ensure that User2 can implement PIM. What should you do first?
Assigning User2 the Global administrator role is the correct first step because implementing PIM requires elevated directory permissions. A user cannot configure PIM, assign eligible roles, or manage privileged access settings without an appropriate administrative role. Although Privileged Role Administrator would be the least-privilege choice in practice, it is not offered here, so Global Administrator is the best available answer. This reflects a common exam pattern where the broader built-in role is selected when the precise role is absent.
Configuring authentication methods for contoso.com is not the first step to ensure User2 can implement PIM. This is a tenant-wide authentication configuration task and does not grant User2 the permissions required to administer PIM. Even if authentication methods are configured, User2 still cannot implement PIM without an appropriate admin role. Therefore, this option addresses authentication readiness rather than administrative authorization.
Configuring the identity secure score for contoso.com does not enable or authorize PIM implementation. Secure Score is an assessment and recommendation tool that helps measure identity security posture, but it does not provide permissions or activate PIM capabilities. User2 would still lack the necessary administrative rights to implement PIM. This makes Secure Score unrelated to the immediate prerequisite in the question.
Enabling multi-factor authentication for User2 is a security best practice and may be required later for activating privileged roles through PIM. However, MFA alone does not give User2 the administrative permissions needed to implement or manage PIM in the tenant. The question asks what should be done first to ensure User2 can implement PIM, and that requires assigning an appropriate admin role before considering activation controls like MFA. Therefore, MFA is important but not the initial prerequisite here.
Core concept: This question tests the prerequisites for implementing Microsoft Entra Privileged Identity Management (PIM). To implement or manage PIM, a user must first have sufficient directory privileges, typically Global Administrator or Privileged Role Administrator. MFA is commonly required for activating privileged roles in PIM, but it is not the first prerequisite for being able to implement PIM.

Why correct: Assigning User2 the Global administrator role is the correct first step because only users with the necessary administrative permissions can configure and manage PIM in the tenant. Without an appropriate admin role, User2 cannot access or implement PIM settings regardless of MFA status. In many exam questions, Global Administrator is used as the expected answer when the more granular Privileged Role Administrator option is not provided.

Key features:
- PIM enables just-in-time access, approval workflows, time-bound role activation, access reviews, and auditing for privileged roles.
- Administrative setup of PIM requires elevated directory permissions, while MFA is typically enforced during role activation after PIM is already configured.
- The exam often distinguishes between prerequisites to administer PIM and requirements to activate roles through PIM.

Common misconceptions:
- Assuming MFA is always the first requirement because PIM often requires MFA during activation. However, MFA does not grant the permissions needed to configure PIM.
- Believing that Secure Score or authentication method configuration enables PIM directly; these are related security features but not the core prerequisite for implementation.

Exam tips: When a question asks who can implement or manage PIM, think first about administrative roles such as Global Administrator or Privileged Role Administrator. If the least-privilege role is not listed, choose the broader role that definitely has the required permissions. Reserve MFA-related answers for questions about activating eligible roles or satisfying PIM policy requirements after setup.
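For illustration, the directory role assignment itself can be scripted against Microsoft Graph. This hedged sketch uses `az rest`; the GUID is the well-known Global Administrator role template ID, and `<User2-object-id>` is a placeholder you would replace with the user's object ID.

```shell
# Assign the Global Administrator directory role to User2 via Microsoft Graph
az rest --method post \
  --url "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments" \
  --body '{
    "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
    "roleDefinitionId": "62e90394-69f5-4237-9190-012177145e10",
    "principalId": "<User2-object-id>",
    "directoryScopeId": "/"
  }'
```

In a PIM-managed tenant, the better practice is to make the assignment eligible rather than permanently active, which is done through PIM's role assignment experience instead of a direct active assignment.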
You have a web app named WebApp1. You create a web application firewall (WAF) policy named WAF1. You need to protect WebApp1 by using WAF1. What should you do first?
Correct. Azure Front Door is a WAF-capable global Layer 7 entry point. To protect WebApp1 with WAF1, you must deploy Front Door first (if not already present), then associate WAF1 with the Front Door endpoint/domain and configure WebApp1 as the origin. This ensures all inbound HTTP/S traffic is inspected and filtered before reaching the app.
Incorrect. App Service (WebApp1) does not support enabling Azure WAF by installing an extension. WAF in Azure is enforced by a reverse proxy service (Azure Front Door or Application Gateway WAF) that inspects HTTP/S requests. Extensions may add app functionality or agents, but they do not provide platform WAF request inspection and rule enforcement.
Incorrect. Azure Firewall is a network firewall primarily for L3/L4 filtering and some application-level controls (e.g., FQDN filtering), but it is not a web application firewall that applies OWASP rules, request body inspection, and typical WAF protections. It also doesn’t directly “attach” a WAF policy like WAF1 to protect a web app’s HTTP traffic.
Core concept: A WAF policy (WAF1) is not a standalone security control; it must be associated with a supported WAF-enabled reverse-proxy service that sits in front of the web app. In Azure, WAF policies can be attached to services such as Azure Front Door (AFD) or Application Gateway (WAF v2). The question asks what you should do first to protect WebApp1 using WAF1.

Why the answer is correct: To use WAF1 to protect a web app, you must first deploy a compatible entry point that can enforce the WAF policy. Among the options, Azure Front Door is the only WAF-capable service listed. Once Front Door is deployed, you can associate WAF1 with the Front Door profile/endpoint (or specific domains/routes, depending on SKU) and route traffic to WebApp1 as the backend. This aligns with the security architecture principle of placing a Layer 7 protection control at the edge before traffic reaches the application.

Key features / configuration points:
- Azure Front Door Standard/Premium supports WAF policies, managed rule sets (e.g., OWASP CRS), custom rules, rate limiting, bot protection (SKU dependent), and TLS termination.
- You configure WebApp1 as an origin (backend) and ensure DNS points to Front Door so inbound HTTP/S traffic flows through the WAF.
- Best practice (Azure Well-Architected Framework, Security pillar): centralize edge protection, enable logging/monitoring (WAF logs to Log Analytics), and start in Detection mode before Prevention to reduce false positives.

Common misconceptions:
- Thinking a WAF policy alone "protects" the app. A policy must be bound to a WAF-enabled service.
- Confusing Azure Firewall with WAF. Azure Firewall is primarily L3/L4 with some L7 FQDN filtering, not an OWASP-style web application firewall for HTTP request inspection.
- Assuming an App Service "extension" can provide Azure WAF. App Service doesn't use WAF extensions for platform WAF enforcement.

Exam tips: When you see "WAF policy" in Azure, immediately ask: "Where will I attach it?" The correct next step is to deploy (or identify) a WAF-capable gateway/edge service (Front Door or Application Gateway). If those aren't present, deploying one is typically the first action before associating the policy and updating DNS/traffic flow.
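A minimal sketch of the Front Door Standard/Premium path with Azure CLI, using placeholder names (fd1, webapp1-edge, MyRG) and abbreviated resource IDs; `az afd` command availability depends on your CLI version, and WAF1 must be a Front Door tier WAF policy.

```shell
# Deploy the Front Door profile and an endpoint in front of WebApp1
az afd profile create --profile-name fd1 --resource-group MyRG \
  --sku Premium_AzureFrontDoor

az afd endpoint create --endpoint-name webapp1-edge --profile-name fd1 \
  --resource-group MyRG --enabled-state Enabled

# Associate WAF1 with the endpoint's domain via a security policy
# (both bracketed values are full ARM resource IDs, abbreviated here)
az afd security-policy create --security-policy-name waf-assoc \
  --profile-name fd1 --resource-group MyRG \
  --domains "<endpoint-resource-id>" \
  --waf-policy "<WAF1-resource-id>"
```

You would then add WebApp1 as an origin and point DNS at the Front Door endpoint so that all inbound traffic traverses the WAF.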
You have an Azure subscription that contains virtual machines. You enable just in time (JIT) VM access to all the virtual machines. You need to connect to a virtual machine by using Remote Desktop. What should you do first?
Incorrect. Activating the Security administrator role in Azure AD PIM may grant permissions to manage security settings, but it is not the required first operational step to connect via RDP when JIT is enabled. Even with the right role, you still must request JIT access to open port 3389 temporarily. PIM is only relevant if you currently lack sufficient RBAC permissions.
Incorrect. Activating the Owner role via PIM could provide broad permissions, but it is not inherently required to initiate an RDP session. The key blocker with JIT is that RDP is closed until you request access. Also, PIM role activation depends on how roles are assigned; the question does not state you need elevation, only that JIT is enabled.
Correct. With JIT enabled, inbound RDP is denied by default. The first step is to request JIT access from the VM’s Connect experience (or Defender for Cloud JIT page). This action creates a temporary NSG rule allowing your source IP to connect to port 3389 for a limited time, after which the rule is removed automatically.
Incorrect. The Network Watcher Agent (or related VM extensions) is used for network diagnostics (e.g., packet capture, connection troubleshoot) and is not required for JIT. JIT is implemented through network security rules at the control plane (NSG/Azure Firewall). Installing an agent will not open RDP nor satisfy the JIT access workflow.
Core concept: Just-in-time (JIT) VM access (Microsoft Defender for Cloud) reduces attack surface by keeping management ports (RDP 3389/SSH 22) closed by default and only opening them temporarily, on demand, for approved users and source IPs. It works by creating time-bound NSG (or Azure Firewall) rules.

Why the answer is correct: After JIT is enabled for a VM, inbound RDP is blocked unless you explicitly request access. Therefore, the first step to connect via Remote Desktop is to request JIT access for that VM from the Azure portal (VM > Connect > Request access, or via the Defender for Cloud JIT page). Once approved, Defender for Cloud updates the NSG to allow your public IP to reach port 3389 for the requested duration. Only after that should you initiate the RDP client connection.

Key features / best practices:
- Time-bound access: you specify ports, source IP range (ideally a single public IP), and duration.
- Least privilege: only users with appropriate permissions (typically Security Admin/Contributor/Owner on the VM or its resource group, depending on configuration) can request access.
- Auditing: requests and rule changes are logged (Activity Log), supporting governance and incident investigation.
- Azure Well-Architected (Security pillar): JIT is a control to minimize exposure and reduce brute-force attempts on management ports.

Common misconceptions:
- PIM role activation is not inherently the "first" step unless the user currently lacks permissions. The question asks what you should do first to connect after JIT is enabled; operationally, you must request access to open the port.
- Installing agents/extensions is not required for JIT; JIT is enforced at the network control plane (NSG/Firewall), not by a VM agent.

Exam tips:
- If you see "JIT enabled" and "need to RDP/SSH," think: request access to open the port temporarily.
- Remember JIT is part of Defender for Cloud and is primarily a secure-networking control because it manipulates inbound network rules.
- Distinguish between identity prerequisites (RBAC/PIM) and the JIT workflow action (Request access).
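The access request can also be made programmatically. This hedged sketch calls the Microsoft.Security `initiate` REST operation through `az rest`; the policy name `default` and all bracketed values are placeholders, and the end time must fall within the policy's maximum duration.

```shell
SUB=<subscription-id>; RG=<resource-group>; LOC=<vm-location>; VM=<vm-name>
MYIP=$(curl -s https://ifconfig.me)   # the source IP to allow on port 3389

# Build the request body for the JIT "initiate" operation
BODY=$(cat <<EOF
{
  "virtualMachines": [{
    "id": "/subscriptions/$SUB/resourceGroups/$RG/providers/Microsoft.Compute/virtualMachines/$VM",
    "ports": [{
      "number": 3389,
      "allowedSourceAddressPrefix": "$MYIP",
      "endTimeUtc": "<end-time-utc>"
    }]
  }]
}
EOF
)

az rest --method post \
  --url "https://management.azure.com/subscriptions/$SUB/resourceGroups/$RG/providers/Microsoft.Security/locations/$LOC/jitNetworkAccessPolicies/default/initiate?api-version=2020-01-01" \
  --body "$BODY"
```

On success, Defender for Cloud adds the temporary NSG allow rule, and you can then launch the RDP client against the VM.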
You have an Azure subscription that contains a user named Admin1 and a virtual machine named VM1. VM1 runs Windows Server 2019 and was deployed by using an Azure Resource Manager template. VM1 is the member of a backend pool of a public Azure Basic Load Balancer. Admin1 reports that VM1 is listed as Unsupported on the Just in time VM access blade of Azure Security Center. You need to ensure that Admin1 can enable just in time (JIT) VM access for VM1. What should you do?
Creating and configuring a network security group is the correct action because JIT VM access depends on Defender for Cloud being able to create and remove inbound rules for management ports. If VM1 lacks an NSG on its NIC or subnet, the JIT blade can show the VM as Unsupported because there is no supported rule-enforcement mechanism available. After associating and configuring the NSG, Defender for Cloud can lock down RDP or SSH by default and open access only on approved JIT requests. This directly addresses the technical prerequisite needed for Admin1 to enable JIT on VM1.
Adding an additional public IP address does not solve the JIT prerequisite problem. JIT is intended to reduce exposure of management ports, not create more internet-facing endpoints for the VM. Even with another public IP, Defender for Cloud would still need an NSG or equivalent supported control to enforce temporary access rules. This option increases attack surface and does not address why the VM is marked Unsupported.
Replacing a Basic Load Balancer with a Standard Load Balancer is not the required step to enable JIT in this scenario. The essential dependency for JIT is the presence of an NSG on the VM's NIC or subnet so Defender for Cloud can manage inbound access rules. Although Standard Load Balancer is the newer SKU and often recommended for production workloads, the question asks what must be done so JIT can be enabled, and that is to provide the NSG-based enforcement mechanism. Changing the load balancer alone would not guarantee JIT support if no NSG exists.
Assigning Azure Active Directory Premium Plan 1 to Admin1 is unrelated to whether a VM is supported for JIT VM access. JIT is a Microsoft Defender for Cloud feature that depends on VM networking configuration and Defender for Cloud capabilities, not on Azure AD P1 licensing for the requesting user. Identity licensing may affect other security features such as Conditional Access, but it does not change the VM's Unsupported status in the JIT blade. Therefore this action would not enable JIT for VM1.
Core concept: Just-in-time (JIT) VM access in Microsoft Defender for Cloud works by controlling inbound management access such as RDP or SSH through network security rules. For Azure virtual machines, Defender for Cloud typically needs a network security group (NSG) associated with the VM's NIC or subnet so it can add temporary allow rules and keep management ports closed by default.

Why the answer is correct: VM1 is shown as Unsupported in the JIT blade because Defender for Cloud cannot enforce JIT without a supported traffic-filtering control. The required action is to create and configure an NSG for the VM's subnet or NIC so JIT can manage inbound access to the management ports. Once the NSG is in place, Admin1 can enable JIT for VM1.

Key features: JIT reduces attack surface by denying persistent inbound access and opening ports only for approved users, source IPs, and time windows. It relies on NSG rule manipulation for most Azure VM scenarios, though Azure Firewall can also be used in some architectures. The VM does not need an additional public IP for JIT, and user licensing in Azure AD is not the gating factor for VM support status.

Common misconceptions: A common mistake is assuming the load balancer SKU is the blocker whenever a load balancer is mentioned. While Standard Load Balancer is generally preferred for modern deployments, JIT support is fundamentally tied to Defender for Cloud being able to manage NSG rules. Another misconception is that adding more public exposure helps JIT; in reality, JIT is about restricting exposure, not increasing it.

Exam tips: For AZ-500, when a VM is listed as Unsupported for JIT, first check whether an NSG exists on the NIC or subnet and whether Defender for Cloud can manage the relevant ports. Focus on the control plane JIT uses to enforce access, which is usually NSG rules. If the question asks what must be done to enable JIT, the safest exam answer is often to ensure an NSG is present and properly associated.
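The NSG dependency above can be illustrated with the two kinds of security rules JIT manages: a standing high-number (low-precedence) deny on the management port, and a temporary low-number (high-precedence) allow inserted only for an approved request. The property names below mirror the ARM `networkSecurityGroups` securityRules schema; the rule names are illustrative, and this is a sketch of the pattern rather than Defender for Cloud's exact rule output.

```python
# Sketch: the rule pattern JIT enforces inside an NSG. Without an NSG on
# the NIC or subnet, there is nowhere to place these rules, which is why
# the VM shows as Unsupported.

def jit_nsg_rules(approved_source_ip: str) -> list:
    """Return [temporary allow, baseline deny] for RDP (TCP 3389)."""
    baseline_deny = {
        "name": "JITDenyRDP",          # illustrative name
        "properties": {
            "priority": 4096,          # lowest precedence: blocks RDP by default
            "direction": "Inbound",
            "access": "Deny",
            "protocol": "Tcp",
            "sourceAddressPrefix": "*",
            "sourcePortRange": "*",
            "destinationAddressPrefix": "*",
            "destinationPortRange": "3389",
        },
    }
    temporary_allow = {
        "name": "JITAllowRDP",         # illustrative name
        "properties": {
            "priority": 100,           # lower number = evaluated before the deny
            "direction": "Inbound",
            "access": "Allow",
            "protocol": "Tcp",
            "sourceAddressPrefix": approved_source_ip,
            "sourcePortRange": "*",
            "destinationAddressPrefix": "*",
            "destinationPortRange": "3389",
        },
    }
    return [temporary_allow, baseline_deny]
```

Because NSG rules are evaluated in priority order (lower number first), the temporary allow wins while it exists; when JIT deletes it at the end of the access window, the baseline deny takes effect again.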
You are collecting events from Azure virtual machines to an Azure Log Analytics workspace. You plan to create alerts based on the collected events. You need to identify which Azure services can be used to create the alerts. Which two services should you identify? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
Azure Monitor is the primary service for creating alerts from Log Analytics data. With VM events stored in a Log Analytics workspace, you can create Scheduled Query Rules (log alerts) using KQL, define thresholds and evaluation frequency, and trigger Action Groups for notifications or automation. This is the standard approach for alerting on collected logs and metrics in Azure.
Azure Security Center (now Microsoft Defender for Cloud) focuses on security posture management, recommendations, and threat protection alerts from Defender plans. While it can surface security alerts, it is not the general-purpose service you use to create custom alerts directly from arbitrary VM event data in a Log Analytics workspace in the same way as Azure Monitor or Sentinel analytics rules.
Azure Analysis Services is a BI/semantic modeling service used to build tabular models for analytics (similar to SSAS). It does not provide monitoring or alerting capabilities for VM events or Log Analytics workspaces. It is unrelated to security event collection and alert creation in Azure Monitor Logs.
Microsoft Sentinel is a SIEM/SOAR built on Log Analytics. It can create alerts via Analytics rules that run KQL queries against the connected Log Analytics workspace, generate incidents, and support investigation and automated response through playbooks. For security-focused alerting and incident management based on collected events, Sentinel is a complete solution.
Azure Advisor provides best-practice recommendations for cost, reliability, security, operational excellence, and performance. It does not create alerts based on Log Analytics event data. Advisor recommendations can be exported or acted upon, but it is not an alerting engine for VM event logs.
Core concept: Events from Azure VMs are being collected into a Log Analytics workspace (Azure Monitor Logs). Alerts can be created either directly from Azure Monitor (log query alerts) or via Microsoft Sentinel analytics rules; both use the Log Analytics workspace as the data store.

Why the answer is correct: Azure Monitor is the native platform for metrics and logs alerting. When VM events (for example, Windows Event Logs or Syslog) are ingested into a Log Analytics workspace, you can create log alerts (scheduled query rules) that run KQL queries against the workspace and trigger action groups. Microsoft Sentinel is a SIEM/SOAR solution built on top of Log Analytics. Sentinel uses the same workspace data and provides analytics rules (scheduled query analytics) to generate incidents and alerts from KQL queries, plus automation playbooks.

Key features and best practices:
- Azure Monitor alerts: Use log search alert rules (scheduled query rules) with KQL; set evaluation frequency, lookback period, and thresholds; and attach Action Groups (email/SMS/ITSM/webhook/Logic Apps). This aligns with the Azure Well-Architected operational excellence pillar (monitoring and alerting) and the reliability pillar (proactive detection).
- Microsoft Sentinel: Create analytics rules (scheduled or near-real-time), map entities, tune rule thresholds, and generate incidents for SOC workflows. Use automation rules and playbooks (Logic Apps) for response, supporting security posture and operational excellence.
- Design tip: Keep security operations alerts (investigations, incidents, response) in Sentinel; keep platform/ops alerts (availability, performance, basic log thresholds) in Azure Monitor. Consider workspace retention and cost controls.

Common misconceptions:
- Microsoft Defender for Cloud (formerly Security Center) can generate security alerts and recommendations, but the question is specifically about creating alerts based on collected events in Log Analytics; the direct, complete solutions are Azure Monitor and Sentinel.
- Azure Advisor provides recommendations, not event-driven alerting.
- Azure Analysis Services is unrelated to monitoring/alerting.

Exam tips: If the data source is a Log Analytics workspace, think “Azure Monitor log alerts” for general alerting and “Microsoft Sentinel analytics rules” for SIEM-style detections and incidents. Remember that Sentinel is an Azure Monitor Logs-based solution and requires a Log Analytics workspace connection.
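The scheduled-query-rule mechanics described above can be sketched as a small payload builder: a KQL query over the `Event` table, an evaluation frequency, a lookback window, a threshold, and an attached action group. The KQL example is plausible for Windows events collected to a workspace, and the property names loosely follow the `Microsoft.Insights/scheduledQueryRules` resource shape; both are assumptions to check against current documentation rather than an exact API contract.

```python
# Sketch (assumed resource shape): the core pieces of an Azure Monitor
# log alert (scheduled query rule) over a Log Analytics workspace.

# Example KQL: fire when any computer logs more than 10 error events
# in the lookback window.
KQL_QUERY = """
Event
| where EventLevelName == "Error"
| summarize ErrorCount = count() by Computer
| where ErrorCount > 10
"""

def build_log_alert(workspace_id: str, action_group_id: str) -> dict:
    """Return a scheduled-query-rule definition targeting one workspace."""
    return {
        "location": "eastus",                 # illustrative region
        "properties": {
            "severity": 2,
            "enabled": True,
            "scopes": [workspace_id],         # the Log Analytics workspace
            "evaluationFrequency": "PT5M",    # run the query every 5 minutes
            "windowSize": "PT15M",            # over a 15-minute lookback
            "criteria": {
                "allOf": [
                    {
                        "query": KQL_QUERY,
                        "timeAggregation": "Count",
                        "operator": "GreaterThan",
                        "threshold": 0,       # alert if any row is returned
                    }
                ]
            },
            "actions": {"actionGroups": [action_group_id]},
        },
    }
```

A Sentinel analytics rule follows the same basic shape (KQL query, schedule, threshold) against the same workspace, which is why both services count as complete solutions for this question.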