
You are designing a large Azure environment that will contain many subscriptions. You plan to use Azure Policy as part of a governance solution. To which three scopes can you assign Azure Policy definitions? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
Incorrect. Azure AD administrative units are used to delegate administrative control over subsets of Azure AD objects (users, groups, devices). Azure Policy is an Azure Resource Manager governance feature and does not assign to Azure AD administrative units. This option is a common distractor mixing identity governance with resource governance.
Incorrect. An Azure AD tenant is an identity boundary, not an ARM resource scope. Azure Policy assignments are made within the Azure resource hierarchy (management groups, subscriptions, resource groups, resources). While many Azure services integrate with Azure AD, Azure Policy does not assign at the tenant level as an Azure AD construct.
Correct. Subscription is a primary Azure Policy assignment scope. Assigning a policy at the subscription level applies to all resource groups and resources within that subscription (unless excluded). This is commonly used to enforce standards for a single subscription, such as allowed regions, required tags, or security configurations.
Incorrect. “Compute resources” is not a valid Azure Policy assignment scope. Azure Policy can evaluate and enforce rules on compute resource types (VMs, VMSS, AKS, etc.) via policy conditions, but the assignment scope is still management group, subscription, resource group (or individual resource), not a generic compute category.
Correct. Resource group is a valid Azure Policy assignment scope. This is useful when different workloads within the same subscription require different governance rules (e.g., stricter policies for production RGs). Policies assigned at the resource group scope apply to resources within that resource group.
Correct. Management group is a key scope for large environments with many subscriptions. Assigning policies at the management group level enables centralized governance and consistent enforcement across multiple subscriptions. This is a best-practice approach for enterprise-scale landing zones and aligns with the Governance pillar of the Azure Well-Architected Framework.
Core concept: Azure Policy is an Azure Resource Manager (ARM) governance service used to enforce standards and assess compliance for Azure resources. Policy definitions (rules) are assigned at a scope within the Azure resource hierarchy so they can be inherited by child scopes.

Why the answer is correct: Azure Policy definitions can be assigned at three primary governance scopes in the ARM hierarchy: management groups, subscriptions, and resource groups. Assigning at a higher scope (management group) enables consistent governance across many subscriptions, which is a common AZ-305 design scenario for large enterprises. Assigning at subscription scope targets a single subscription and all its resource groups and resources. Assigning at resource group scope targets a specific workload boundary.

Key features, configurations, and best practices:
- Scope inheritance: A policy assignment at a management group applies to all subscriptions (and their resource groups/resources) beneath it, unless excluded.
- Exclusions: You can exclude specific child scopes from a policy assignment to support exceptions (e.g., sandbox subscriptions).
- Initiatives: Group multiple policy definitions into an initiative and assign once at the desired scope for easier governance.
- Well-Architected alignment: Policy supports the Governance pillar by enforcing tagging, allowed locations/SKUs, security baselines, and resource configuration standards at scale.

Common misconceptions:
- Azure AD tenant and administrative units are identity governance scopes, not ARM resource scopes. Azure Policy evaluates ARM resources, not Azure AD objects.
- “Compute resources” sounds like a resource-level scope, but Azure Policy assignment scopes are not resource type categories. While you can target specific resource types using policy rules and conditions, the assignment itself is made at management group, subscription, or resource group scope (and also individual resource scope, though that option is not listed here).

Exam tips: Remember the ARM hierarchy for governance: Management group > Subscription > Resource group > Resource. For questions asking “to which scopes can you assign Azure Policy,” pick the ARM scopes. If Azure AD scopes appear as distractors, they are typically incorrect for Azure Policy assignments.
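The inheritance and exclusion behavior described above can be sketched as a small model: a policy assigned at a scope covers that scope and everything beneath it, unless a child subtree is excluded. The slash-separated scope names below are illustrative only; real ARM scope IDs are shaped differently (e.g., `/subscriptions/<id>/resourceGroups/<name>`).

```python
# Toy model of Azure Policy scope inheritance (illustrative only).

def policy_applies(assignment_scope, target_scope, exclusions=None):
    """True if a policy assigned at assignment_scope covers target_scope."""
    exclusions = exclusions or []
    # Inheritance: the target is the assignment scope itself or sits beneath it.
    if target_scope != assignment_scope and not target_scope.startswith(assignment_scope + "/"):
        return False
    # Exclusions carve a child subtree out of the assignment.
    for excluded in exclusions:
        if target_scope == excluded or target_scope.startswith(excluded + "/"):
            return False
    return True

mg = "mg-contoso"               # management group
sub = mg + "/sub-prod"          # subscription under the management group
rg = sub + "/rg-app"            # resource group under the subscription
sandbox = mg + "/sub-sandbox"   # sandbox subscription to be excluded

print(policy_applies(mg, rg))                             # True: inherited two levels down
print(policy_applies(mg, sandbox, exclusions=[sandbox]))  # False: excluded subtree
```

This mirrors the exam logic: an assignment at the management group reaches every subscription and resource group below it, while an exclusion removes an entire child subtree.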
You are designing an application that will be hosted in Azure. The application will host video files that range from 50 MB to 12 GB. The application will use certificate-based authentication and will be available to users on the internet. You need to recommend a storage option for the video files. The solution must provide the fastest read performance and must minimize storage costs. What should you recommend?
Azure Files provides managed file shares over SMB/NFS and is best for lift-and-shift file server scenarios, shared application configuration, or user home directories. It is not typically the fastest or most cost-effective option for serving large video files to internet users at scale: internet delivery usually requires additional components, and Azure Files lacks object storage's tight integration with CDN caching and access tiering for cost optimization.
Azure Data Lake Storage Gen2 is essentially Blob Storage with a hierarchical namespace and POSIX-like ACLs, optimized for big data analytics (Spark/Hadoop) and data engineering. While it can store large files, its primary value is analytics and filesystem semantics rather than lowest-cost, highest-performance internet content delivery. For a video hosting app focused on fast reads and minimal cost, standard Blob Storage is the more direct fit.
Azure Blob Storage is designed for unstructured data such as video and supports very large objects with high throughput. It offers Hot/Cool/Archive tiers and lifecycle management to minimize storage costs while maintaining performance for frequently accessed content. For fastest read performance to internet users, Blob integrates with Azure CDN/Front Door for edge caching. Secure access can be implemented via short-lived SAS tokens issued after certificate-based authentication.
Azure SQL Database is a relational database service intended for structured data and transactional workloads. Storing 50 MB to 12 GB video files in a database (as BLOBs) is inefficient and expensive, complicates scaling, and typically results in worse read performance and higher costs than object storage. The correct pattern is to store metadata in SQL (if needed) and store the actual video content in Blob Storage.
Core concept: This question tests choosing the right Azure storage service for large unstructured objects (video files) with internet access, strong authentication, high read performance, and low cost. For AZ-305, this maps to selecting the appropriate data platform and access pattern (object storage vs. file shares vs. database).

Why the answer is correct: Azure Blob Storage is the primary Azure service for storing and serving large binary objects (50 MB to 12 GB) efficiently. It provides the best cost/performance fit for internet-facing content delivery scenarios because it is optimized for high-throughput reads, supports tiering to minimize cost, and integrates cleanly with secure access mechanisms. For “fastest read performance,” Blob Storage can be paired with Azure CDN or Front Door for edge caching and acceleration, which is the typical architecture for global video delivery. For “minimize storage costs,” Blob supports Hot/Cool/Archive access tiers and lifecycle management rules to automatically move older, less-accessed videos to cheaper tiers.

Key features and best practices:
- Performance: Blob Storage supports high throughput and large object sizes; Premium Block Blob can further increase performance for demanding workloads, while Standard is usually the most cost-effective.
- Cost optimization: Use lifecycle policies to transition blobs from Hot to Cool/Archive based on last access/modified time; consider reserved capacity for predictable storage volumes.
- Secure internet access: Use Azure AD integration where applicable, SAS tokens, stored access policies, and/or private endpoints (for internal access). For certificate-based authentication, a common pattern is to authenticate users or the app via certificates to an identity provider or API, then issue short-lived SAS tokens for blob reads.
- Well-Architected alignment: Cost Optimization (tiering, lifecycle), Performance Efficiency (CDN/Front Door caching), Security (least privilege via SAS/Azure AD, encryption at rest).

Common misconceptions: Azure Files can look attractive because it’s “file storage,” but it’s SMB/NFS-oriented and typically not the best or cheapest option for large-scale internet content distribution. ADLS Gen2 is built on Blob but is optimized for analytics and hierarchical namespace scenarios, not primarily for serving public video content. SQL Database is not suitable for large video binaries.

Exam tips: For large media files served to internet users, default to Azure Blob Storage (often with CDN or Front Door). Use access tiers and lifecycle management for cost control, and use SAS/Azure AD patterns for secure access rather than exposing storage keys.
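The Hot/Cool/Archive tiering idea can be sketched as a lifecycle rule: the longer a blob goes unread, the cheaper the tier it moves to. The thresholds below (30 and 90 days) are illustrative assumptions, not Azure defaults.

```python
def choose_tier(days_since_last_access, cool_after=30, archive_after=90):
    """Pick a blob access tier from days since last access (toy lifecycle rule)."""
    if days_since_last_access >= archive_after:
        return "Archive"   # cheapest storage, but rehydration is needed before reads
    if days_since_last_access >= cool_after:
        return "Cool"      # cheaper storage, higher per-read cost
    return "Hot"           # priciest storage, cheapest frequent reads

# Hypothetical video library with days since each file was last watched:
videos = {"intro.mp4": 3, "q1-review.mp4": 45, "2019-keynote.mp4": 400}
for name, days in videos.items():
    print(name, "->", choose_tier(days))
```

An actual Azure lifecycle management policy expresses the same thresholds declaratively (tier-to-cool / tier-to-archive rules on a storage account) rather than in application code.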
You have an Azure subscription that contains an Azure Blob Storage account named store1. You have an on-premises file server named Server1 that runs Windows Server 2016. Server1 stores 500 GB of company files. You need to store a copy of the company files from Server1 in store1. Which two possible Azure services achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
An Azure Logic Apps integration account is used for enterprise integration scenarios (B2B/EDI agreements, schemas, maps, certificates) and supports Logic Apps workflows. It does not provide a direct, purpose-built mechanism to bulk copy an on-premises Windows file server’s files into Azure Blob Storage. While Logic Apps can orchestrate some transfers, the integration account itself is not a complete file-copy solution for this scenario.
Azure Import/Export is designed to transfer large amounts of data to/from Azure Storage by shipping physical drives to an Azure datacenter. You copy the 500 GB from Server1 to encrypted disks, create an Import job, and Azure imports the data into the target storage account (store1). This is a complete solution especially when network bandwidth is constrained or you want an offline bulk seeding approach.
Azure Data Factory can copy data from on-premises sources to Azure Storage using a Self-hosted Integration Runtime installed in the on-premises environment. With the file system connector, ADF can read files from Server1 (or a share it hosts) and write them to Azure Blob Storage in store1. It supports scheduling, monitoring, retries, and repeatable pipelines, making it a complete online transfer solution.
The Azure Analysis Services On-premises data gateway is intended to provide secure connectivity for semantic models and reporting tools (e.g., Power BI) to access on-premises data sources. It is not a data movement service for copying file shares into Azure Blob Storage. It enables query connectivity rather than bulk file ingestion, so it does not meet the requirement to store a copy of the files in store1.
An Azure Batch account is used to run large-scale parallel and high-performance computing workloads by scheduling jobs across pools of compute nodes. It is not a data transfer or migration service. While Batch jobs can process data once it is in Azure, Batch does not provide a straightforward, managed mechanism to copy an on-premises file server’s data into Azure Blob Storage as a complete solution.
Core concept: This question tests how to copy on-premises file data into Azure Blob Storage. The key is selecting Azure services that can ingest/move data from an on-premises Windows file server into a storage account (store1). In AZ-305, this maps to designing data movement/ingestion patterns for storage solutions.

Why the answers are correct:
- Azure Import/Export is a complete offline transfer solution. You copy the 500 GB from Server1 to encrypted disks, ship them to an Azure datacenter, and Microsoft imports the data directly into the target storage account (store1). This is appropriate when bandwidth is limited, transfer windows are tight, or you want a predictable bulk copy process.
- Azure Data Factory is a complete online data integration service. Using a Self-hosted Integration Runtime installed on Server1 (or another on-premises machine with access to the file share), ADF can copy files from an on-premises file system (SMB/file system connector) into Azure Blob Storage. This supports scheduled or one-time copies, incremental patterns, monitoring, and retry logic.

Key features / best practices:
- Import/Export: supports BitLocker encryption, a chain-of-custody shipping workflow, and direct ingestion into Blob. Good for bulk seeding; aligns with Well-Architected reliability (controlled process) and cost optimization (avoiding large transfers over the WAN).
- Data Factory: supports orchestration, monitoring, alerting, and repeatable pipelines. Use managed identities/service principals for Azure-side auth and least-privilege access to the storage account. Consider network security (private endpoints, firewall rules) and the throughput limits of your on-premises link.

Common misconceptions:
- A Logic Apps integration account is for B2B/EDI artifacts, not file migration.
- The Analysis Services on-premises data gateway is for semantic model connectivity (Power BI/Analysis Services), not bulk file copy.
- Azure Batch is for parallel compute jobs, not data transfer tooling.

Exam tips: When the goal is “copy on-premises files to Blob,” think: (1) online pipeline tools (Azure Data Factory with a self-hosted IR) or (2) offline bulk transfer (Import/Export, or Data Box, though not listed here). Match the service to constraints like bandwidth, repeatability, and operational monitoring.
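The online-vs-offline trade-off can be made concrete with simple arithmetic: estimate how long pushing 500 GB over a given link would take, then compare against the turnaround of shipping drives. The 80% link-efficiency factor is an assumption to account for protocol overhead and contention.

```python
def transfer_hours(size_gb, link_mbps, efficiency=0.8):
    """Estimated hours to push size_gb over a link_mbps line.

    efficiency discounts protocol overhead and contention (assumed 80%).
    """
    bits = size_gb * 8 * 1000**3                      # decimal GB -> bits
    seconds = bits / (link_mbps * 1000**2 * efficiency)
    return seconds / 3600

# 500 GB at a few plausible on-premises uplink speeds:
for mbps in (10, 100, 1000):
    print(f"{mbps:>5} Mbps -> {transfer_hours(500, mbps):6.1f} h")
```

At 10 Mbps the copy takes on the order of days, which is where an offline Import/Export (or Data Box) workflow starts to win; at 1 Gbps an online Data Factory pipeline finishes in under two hours.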
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements: ✑ Provide access to the full .NET framework. ✑ Provide redundancy if an Azure region fails. ✑ Grant administrators access to the operating system to install custom application dependencies. Solution: You deploy an Azure virtual machine scale set that uses autoscaling. Does this meet the goal?
Yes is incorrect because the proposed solution is incomplete for the stated disaster recovery requirement. Azure VM Scale Sets do provide the needed guest OS control and can run full .NET Framework applications on Windows virtual machines, which makes them a strong fit for custom dependency installation. The problem is that a single VMSS deployment with autoscaling remains a regional resource pattern unless explicitly duplicated in another region. Without a second regional deployment and a failover mechanism such as Azure Front Door or Traffic Manager, the application would not remain available during a regional failure.
No is correct because Azure Policy that enforces resource group location governs only the resource group’s metadata location, not the location of resources deployed into it. To meet the requirement, you must apply Azure Policy to restrict allowed locations for the relevant resource types (e.g., Microsoft.Web/serverfarms, Microsoft.Web/sites, Microsoft.Sql/servers) or use an initiative combining these policies.
Core concept: This question tests Azure governance controls for regional compliance using Azure Policy. The requirement is to deploy App Service instances only to specific Azure regions and to ensure the resources for the App Service instances reside in the same region.

Why the answer is correct: Recommending an Azure Policy initiative that enforces the location of resource groups does NOT meet the goal. Resource group location is metadata for the resource group container and does not control or guarantee the locations of resources deployed into that resource group. You can create a resource group in one region (e.g., West Europe) and deploy resources into another region (e.g., North Europe). Therefore, enforcing resource group location enforces neither the App Service (or App Service plan) region nor the co-location of related resources.

Key features / correct approach: To meet the regulatory requirement, enforce allowed locations at the resource level using built-in Azure Policy definitions such as:
- “Allowed locations” (at subscription or management group scope) to restrict which regions resources can be created in.
- Resource-type-specific location policies (e.g., for Microsoft.Web/serverfarms and Microsoft.Web/sites) if you need more granular control.
Additionally, to ensure App Service resources are in the same region, remember that an App Service app’s region is determined by its App Service plan. Enforce the plan’s location and/or deny creation of apps that do not match the plan’s region (often handled operationally and via policy targeting the plan and app types). For Azure SQL Database, enforce its allowed locations separately (Microsoft.Sql/servers). This aligns with Azure Well-Architected Framework governance and compliance practices (policy as code, preventative controls).

Common misconceptions: A frequent misunderstanding is assuming resource group location dictates resource location. It does not. Another is thinking a single policy on resource groups will automatically co-locate dependent resources; co-location requires policies applied to the actual resource types (and sometimes deployment templates/initiatives that coordinate multiple resources).

Exam tips: For AZ-305, distinguish between governance at the container level (resource groups) and enforcement at the resource provider/type level. When a question says “deploy only to specific regions,” think “Allowed locations” (deny effect) at subscription/management group scope, not resource group location. Also remember that an App Service app’s location follows its App Service plan’s region.
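The “Allowed locations” deny behavior can be sketched as a miniature policy evaluator: any resource request whose location is outside the allowed list is denied, regardless of which resource group it targets. This is only the evaluation idea, not the real Azure Policy rule grammar, and the region list is a hypothetical example.

```python
# Simplified sketch of an "allowed locations" deny policy.
ALLOWED_LOCATIONS = {"westeurope", "northeurope"}   # hypothetical compliance regions

def evaluate(resource):
    """Return 'deny' for resources outside the allowed regions, else 'allow'."""
    if resource["location"].lower() not in ALLOWED_LOCATIONS:
        return "deny"
    return "allow"

requests = [
    {"type": "Microsoft.Web/serverfarms", "location": "westeurope"},
    {"type": "Microsoft.Sql/servers", "location": "eastus"},
]
for r in requests:
    print(r["type"], r["location"], "->", evaluate(r))
```

Note that the check keys on the resource's own location, which is exactly why enforcing only the resource group's location misses non-compliant deployments.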
Your company has 300 virtual machines hosted in a VMware environment. The virtual machines vary in size and have various utilization levels. You plan to move all the virtual machines to Azure. You need to recommend how many and what size Azure virtual machines will be required to move the current workloads to Azure. The solution must minimize administrative effort. What should you use to make the recommendation?
Azure Pricing Calculator is used to estimate Azure costs once you already know the target architecture (VM series, size, disks, region, licensing, etc.). It does not discover on-prem VMware VMs or analyze utilization to recommend right-sized Azure VM SKUs. It’s helpful after you have sizing outputs (often from Azure Migrate) but doesn’t minimize administrative effort for inventory and sizing.
Azure Advisor provides best-practice recommendations (cost, security, reliability, operational excellence, performance) for resources that are already deployed in Azure. It can suggest resizing Azure VMs based on observed utilization, but it cannot assess an on-prem VMware environment directly to recommend initial Azure VM sizes for migration planning.
Azure Migrate is designed for migrating to Azure and includes discovery and assessment for VMware environments. Using the Azure Migrate appliance, it collects VM configuration and performance data and generates right-sizing recommendations (how many VMs and what sizes) and cost estimates. This automation minimizes administrative effort and aligns with cost optimization and operational excellence best practices.
Azure Cost Management helps monitor, allocate, and optimize Azure spending using billing and usage data (budgets, alerts, cost analysis, chargeback/showback). It is primarily for governance and ongoing cost control after workloads are in Azure. It does not perform on-prem VMware discovery or generate pre-migration VM sizing recommendations.
Core concept: This question tests right-sizing and migration planning from VMware to Azure with minimal administrative effort. The key capability is automated discovery, assessment, and sizing recommendations based on actual performance/utilization data.

Why the answer is correct: Azure Migrate is the purpose-built service for moving on-premises VMware VMs to Azure. Its assessment tools (via the Azure Migrate appliance) discover the 300 VMs, collect configuration and performance metrics (CPU, memory, disk, network), and then generate Azure VM size recommendations. It can also produce dependency insights and cost estimates. This directly meets the requirement to recommend “how many and what size” Azure VMs are required, while minimizing admin effort through automated data collection and analysis rather than manual inventorying.

Key features and best practices: Azure Migrate supports agentless discovery for VMware, continuous performance-based sizing, and assessment settings such as target region, VM series restrictions, reserved instance/Azure Savings Plan assumptions, and disk type selection. It can recommend right-sized SKUs (e.g., D/E series) and identify over- and under-utilized VMs. From an Azure Well-Architected Framework perspective, this supports Cost Optimization (right-sizing) and Operational Excellence (repeatable assessments, reduced manual work). It also helps with Reliability planning by highlighting dependencies and readiness.

Common misconceptions: The Pricing Calculator can estimate costs, but it requires you to already know the target VM sizes and quantities; it does not discover or right-size from VMware utilization. Azure Advisor provides optimization recommendations for resources already running in Azure, not for on-premises VMware estates. Azure Cost Management focuses on analyzing and governing Azure spend after resources exist (or via billing data), not on pre-migration sizing from VMware telemetry.

Exam tips: When the question involves migrating from VMware/Hyper-V/physical servers and asks for discovery, assessment, right-sizing, and migration planning, the default answer is usually Azure Migrate. Use the Pricing Calculator only when sizes are already known and you are purely estimating costs. Use Advisor/Cost Management for post-deployment optimization and governance in Azure.
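The performance-based sizing step that Azure Migrate automates can be illustrated with a toy version: given a VM's observed peaks plus headroom, pick the smallest SKU from a catalog that covers both CPU and memory. The catalog entries and the 30% headroom factor are simplified assumptions, not authoritative SKU specs.

```python
# Hypothetical, simplified SKU catalog, ordered smallest-first: (name, vCPUs, memory GiB)
SKUS = [("D2s_v5", 2, 8), ("D4s_v5", 4, 16), ("D8s_v5", 8, 32), ("E8s_v5", 8, 64)]

def right_size(peak_vcpus, peak_mem_gib, headroom=1.3):
    """Smallest catalog SKU covering observed peaks plus ~30% headroom."""
    need_cpu = peak_vcpus * headroom
    need_mem = peak_mem_gib * headroom
    for name, cpu, mem in SKUS:
        if cpu >= need_cpu and mem >= need_mem:
            return name
    return None  # no fit in this catalog: consider a larger series

print(right_size(1.5, 6))    # lightly utilized VM lands on the smallest SKU
print(right_size(5.0, 40))   # memory-heavy VM needs the E-series entry
```

Running this over all 300 VMs' utilization data is essentially what an assessment report does at scale, alongside readiness checks and cost estimates.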
DRAG DROP - Your company has an existing web app that runs on Azure virtual machines. You need to ensure that the app is protected from SQL injection attempts and uses a layer-7 load balancer. The solution must minimize disruptions to the code of the app. What should you recommend? To answer, drag the appropriate services to the correct targets. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place:
Azure service: ______
The required Azure service is Azure Application Gateway because it is Azure’s native layer-7 load balancer (reverse proxy) for HTTP/HTTPS. It supports application-layer routing and can front-end a pool of Azure VMs (or VMSS) with health probes and autoscaling (v2 SKU). This meets the requirement for a layer-7 load balancer with minimal disruption to the existing VM-hosted app (no code changes; you change ingress to go through the gateway). Why others are wrong: Azure Load Balancer is layer-4 only and cannot inspect HTTP payloads or provide WAF protections. Azure Traffic Manager is DNS-based global distribution/failover and is not an L7 proxy/WAF. Web Application Firewall (WAF) is not a standalone load balancer service in this option set; in Azure it is a capability attached to Application Gateway (or Front Door). SSL offloading and URL-based content routing are features, not the primary service to deploy.
Feature: ______
The needed feature is Web Application Firewall (WAF) because it provides protection against common web vulnerabilities, including SQL injection, by inspecting HTTP/S requests at layer 7 and applying managed rules (OWASP Core Rule Set) and optional custom rules. This approach minimizes disruptions to the application code because the filtering and blocking occur at the edge (the gateway) rather than requiring changes to input validation logic inside the app. Why others are wrong: Azure Application Gateway is the service, not the specific security feature asked here. SSL offloading improves performance and certificate management but does not block SQL injection. URL-based content routing enables path-based routing to different backends, not threat protection. Azure Load Balancer and Traffic Manager do not provide WAF capabilities. Therefore, WAF is the correct feature to meet the SQL injection protection requirement.
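What a WAF does at layer 7 can be illustrated with a toy matcher: inspect the request before it reaches the backend and block it if it matches an injection signature. Real managed rule sets (the OWASP Core Rule Set) use hundreds of rules, decoding passes, and anomaly scoring; this sketch only shows the placement of the control (at the gateway, not in the app code) and is not a usable filter.

```python
import re

# Toy injection signatures -- a real WAF uses the OWASP Core Rule Set,
# not a short regex list like this.
SIGNATURES = [
    re.compile(r"('|%27)\s*or\s*'?1'?\s*=\s*'?1", re.IGNORECASE),  # classic tautology
    re.compile(r";\s*drop\s+table", re.IGNORECASE),                # stacked query
    re.compile(r"union\s+select", re.IGNORECASE),                  # UNION-based probe
]

def inspect(query_string):
    """Return 'block' if any signature matches, else 'forward' to the backend."""
    return "block" if any(s.search(query_string) for s in SIGNATURES) else "forward"

print(inspect("id=42"))              # clean request passes through
print(inspect("id=42' OR '1'='1"))   # injection attempt is blocked at the edge
```

Because the filtering happens in front of the VMs, the application code needs no changes, which is exactly the "minimize disruptions" requirement in the question.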
HOTSPOT - You have an on-premises database that you plan to migrate to Azure. You need to design the database architecture to meet the following requirements: ✑ Support scaling up and down. ✑ Support geo-redundant backups. ✑ Support a database of up to 75 TB. ✑ Be optimized for online transaction processing (OLTP). What should you include in the design? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:
Service: ______
Azure SQL Database is the best fit because it provides a fully managed PaaS OLTP database with built-in automated backups and the ability to scale compute up and down with minimal operational overhead. Most importantly, Azure SQL Database offers the Hyperscale tier, which supports very large databases (up to around 100 TB), meeting the 75 TB requirement.

Why others are wrong:
- Azure SQL Managed Instance is PaaS and OLTP-friendly, but its storage limits are typically much lower than 75 TB, so it won’t meet the size requirement.
- Azure Synapse Analytics is optimized for analytics/OLAP (data warehousing), not OLTP.
- SQL Server on Azure Virtual Machines can support large databases and OLTP, but scaling and backups (including a geo-redundant strategy) are largely customer-managed and do not align as well with the requirement to “support scaling up and down” in a PaaS sense for exam scenarios.
Service tier: ______
Hyperscale is the correct service tier because it is the Azure SQL Database tier designed for very large OLTP databases and rapid scalability. It uses a distributed storage architecture with separate compute and storage, enabling much larger database sizes than General Purpose or Business Critical and supporting scale operations with less impact. It also supports automated backups and can use geo-redundant backup storage to meet the geo-redundant backups requirement.

Why others are wrong:
- Basic/Standard/Premium are legacy DTU tiers and do not support anywhere near 75 TB.
- General Purpose and Business Critical (vCore tiers) have significantly lower maximum database sizes than required.
- Business Critical focuses on low latency and high IOPS with local SSD, but it still doesn’t meet the 75 TB size requirement.
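The tier-elimination reasoning above can be written as a small decision function. The size ceilings used here are approximations (roughly 4 TB for General Purpose/Business Critical single databases, roughly 100 TB for Hyperscale); check current Azure documentation for exact limits before relying on them.

```python
def pick_tier(max_db_tb, oltp=True):
    """Toy tier picker using approximate published single-database size ceilings."""
    if not oltp:
        return "Azure Synapse Analytics (OLAP)"   # analytics, not transactional
    if max_db_tb <= 4:
        return "General Purpose or Business Critical"
    if max_db_tb <= 100:
        return "Hyperscale"                       # distributed storage architecture
    return "shard across multiple databases"

print(pick_tier(0.5))   # small OLTP DB fits the standard vCore tiers
print(pick_tier(75))    # the 75 TB requirement forces Hyperscale
```

Applied to the question's 75 TB OLTP requirement, every option except Hyperscale is eliminated by either the size ceiling or the workload type.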
What should you include in the identity management strategy to support the planned changes?
Deploying corp.fabrikam.com domain controllers to Azure can improve authentication for Azure-hosted corp workloads and provide resiliency if on-prem connectivity fails. However, if the planned changes focus on new R&D projects and rd.fabrikam.com is a separate forest, this does not directly support the R&D authentication boundary. It may also add cost/complexity without solving the primary requirement.
Moving all corp.fabrikam.com domain controllers to Azure is generally not recommended unless explicitly required. It increases migration risk, complicates disaster recovery, and can negatively impact on-premises authentication if connectivity to Azure is interrupted. Exam scenarios typically prefer a hybrid approach (some DCs on-prem, some in Azure) for resilience and to avoid a single dependency.
Deploying a new Azure AD tenant for new R&D projects is a common trap. Azure AD (Microsoft Entra ID) is an identity provider for modern auth (OAuth/SAML/OpenID Connect) and does not provide AD DS capabilities like Kerberos/LDAP/Group Policy. A new tenant also creates governance overhead (cross-tenant collaboration, licensing, conditional access duplication) unless strict tenant isolation is required.
Deploying domain controllers for the rd.fabrikam.com forest to Azure virtual networks best supports Azure-hosted R&D workloads that need AD DS authentication. It keeps authentication within the correct forest boundary, reduces latency and WAN dependency for Kerberos/LDAP, and improves reliability by allowing logons and directory lookups even during on-prem connectivity issues. This is the typical AZ-305 design pattern for hybrid identity with separate forests.
Core concept: This question tests hybrid identity design when extending on-premises Active Directory Domain Services (AD DS) to Azure. In AZ-305, a common requirement is ensuring authentication/authorization remains available with low latency for workloads moved to Azure, while maintaining proper forest/domain boundaries and minimizing risk. This aligns with the Azure Well-Architected Framework pillars of Reliability (resilient identity), Security (least privilege and boundary control), and Operational Excellence (clear identity architecture).

Why the answer is correct: Deploying domain controllers for the rd.fabrikam.com forest to Azure virtual networks (Option D) supports planned changes where new R&D workloads/projects are expected to run in Azure and rely on the R&D forest for authentication. Placing at least two domain controllers in Azure (in separate Availability Zones or Availability Sets where supported) provides local authentication, reduces dependency on WAN/VPN/ExpressRoute for every Kerberos/LDAP request, and improves resiliency if on-premises connectivity is degraded. It also preserves the separation between corp and R&D by keeping authentication within the correct forest boundary.

Key features and best practices:
- Deploy a minimum of two DCs per domain/forest in Azure for high availability; use separate fault/update domains (Availability Sets) or Zones.
- Use Azure DNS carefully: either host AD-integrated DNS on the DCs and point VNets to those DC IPs, or integrate with Azure DNS Private Resolver if needed for hybrid name resolution.
- Ensure secure connectivity (ExpressRoute preferred for predictable latency; VPN acceptable) and configure AD Sites and Services to optimize replication and logon traffic.
- Harden the DCs (NSGs, JIT/JEA where applicable, restricted management, a backup strategy) and monitor them with Azure Monitor/Log Analytics.

Common misconceptions: Option A (corp DCs in Azure) can be valid if corp workloads move to Azure, but it doesn’t directly address R&D identity needs if R&D is a separate forest. Option B (move all corp DCs) is risky and rarely required; it can introduce outages and complicate recovery. Option C (new Azure AD tenant) is typically incorrect because Azure AD is not a drop-in replacement for AD DS (Kerberos/LDAP/Group Policy), and creating a separate tenant increases governance and collaboration complexity unless there is a strong isolation requirement.

Exam tips:
- If workloads require traditional AD DS (Kerberos/LDAP/GPO), think “deploy DCs close to the workloads,” not “create a new Azure AD tenant.”
- Respect forest/domain boundaries: deploy DCs for the domain/forest that will authenticate the workloads.
- Avoid “move all DCs” answers unless the scenario explicitly mandates full cloud-only AD DS hosting and addresses connectivity, DR, and operational impacts.
You need to design a highly available Azure SQL database that meets the following requirements: ✑ Failover between replicas of the database must occur without any data loss. ✑ The database must remain available in the event of a zone outage. ✑ Costs must be minimized. Which deployment option should you use?
Azure SQL Managed Instance Business Critical is the best fit because it uses a local availability replica architecture based on Always On availability groups with synchronous replication. That synchronous commit model supports automatic failover between replicas with an RPO of zero, which satisfies the requirement for no data loss during failover. In supported regions, Business Critical can also be configured as zone-redundant so replicas are distributed across Availability Zones, allowing the database to remain available during a zone outage. Although it is more expensive than General Purpose, it is the least costly option in this list that still meets both the zero-data-loss and zone-outage requirements.
Azure SQL Database Premium provides strong performance and high availability, but the exam-preferred answer for strict zero-data-loss failover between replicas plus zone-outage resilience is the Business Critical architecture. Premium is a tier of the older DTU-based purchasing model and is not the clearest match when compared directly with Managed Instance Business Critical, which is explicitly designed around synchronous replicas and automatic failover. In questions that emphasize replica-based failover with no data loss and zone resilience, Business Critical is typically the best-fit choice. Therefore, Premium is not the best answer among the listed options.
Azure SQL Database Basic is intended for low-cost, low-throughput workloads and does not meet stringent high availability requirements such as zero-data-loss failover between replicas. It lacks the advanced replica architecture associated with Business Critical service tiers and is not the right choice for surviving a zone outage with strong availability guarantees. While it minimizes cost, it fails the reliability requirements in the scenario. Therefore, it cannot be selected.
Azure SQL Managed Instance General Purpose is optimized for cost efficiency and uses remote premium storage rather than the local SSD-based replica architecture used by Business Critical. Its high availability model does not provide the same synchronous multi-replica failover characteristics expected for zero data loss between replicas. It is also not the best option for the strongest zone-outage resilience requirement in this scenario. Although cheaper than Business Critical, it does not satisfy all stated requirements.
Core concept: This question tests Azure SQL high availability and resiliency choices, specifically how to achieve zero data loss (synchronous replication) and zone-outage resilience while minimizing cost. In Azure SQL, these requirements map to HA architectures that use multiple replicas and automatic failover.
Why the answer is correct: Azure SQL Managed Instance (MI) Business Critical uses an Always On availability group architecture with multiple replicas and synchronous replication within the region. This enables automatic failover with an RPO of 0 (no data loss) because transactions are committed only after being hardened on synchronous replicas. Business Critical also supports zone redundancy (in supported regions), so replicas can be distributed across Availability Zones, keeping the database available during a zone outage. Among the listed options, it is the only one that clearly aligns with "failover between replicas without data loss" and "available during a zone outage."
Key features / configurations:
- Synchronous replication across multiple replicas (RPO 0) with automatic failover.
- Built-in HA; no need to manage clustering/availability groups yourself.
- Zone redundancy option (where available) to survive a full zone failure.
- Aligns with the Azure Well-Architected Framework Reliability pillar: redundancy, fault isolation (zones), and automated failover.
Common misconceptions:
- "Standard" or "Premium" Azure SQL Database tiers can provide HA, but the question explicitly emphasizes failover between replicas with no data loss and zone-outage resilience. Those requirements strongly imply a synchronous multi-replica architecture with zone distribution; not all tiers/offerings guarantee this in the way Business Critical does.
- "Serverless" focuses on cost optimization via auto-pause/auto-scale for intermittent workloads, not strict HA/zone-outage requirements.
Exam tips:
- RPO 0 typically implies synchronous replication.
- Zone-outage resilience requires a zone-redundant deployment (replicas across AZs), not just local redundancy within a single datacenter.
- When options include "Business Critical," associate it with multiple replicas, synchronous commit, and the strongest in-region HA characteristics. If the question also required cross-region DR, you'd look for active geo-replication or auto-failover groups (often asynchronous, RPO > 0).
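The "RPO 0 implies synchronous replication" tip can be made concrete with a toy model. This is a deliberately simplified simulation of commit semantics, not how Azure SQL is implemented: a synchronous primary acknowledges a commit only after the replica has hardened it, so a failover to that replica loses nothing, while an asynchronous replica can lag behind acknowledged commits.

```python
# Toy model of synchronous vs. asynchronous commit (illustrative only,
# not the actual Azure SQL implementation). Synchronous commit acknowledges
# a transaction only after the replica hardens it, giving RPO 0 on failover.

class Replica:
    def __init__(self):
        self.log = []

def commit(primary_log, replica, txn, synchronous):
    primary_log.append(txn)
    if synchronous:
        replica.log.append(txn)  # hardened on the replica before the ack
    # async: acknowledge immediately; replication happens later (here: never)
    return "ack"

def failover_data_loss(primary_log, replica):
    """Acknowledged transactions missing from the replica after failover."""
    return len(primary_log) - len(replica.log)

sync_primary, sync_replica = [], Replica()
async_primary, async_replica = [], Replica()
for txn in ["T1", "T2", "T3"]:
    commit(sync_primary, sync_replica, txn, synchronous=True)
    commit(async_primary, async_replica, txn, synchronous=False)

print(failover_data_loss(sync_primary, sync_replica))    # 0 -> RPO 0
print(failover_data_loss(async_primary, async_replica))  # 3 -> acknowledged work at risk
```

This is also why cross-region options like active geo-replication, which replicate asynchronously, usually carry RPO > 0.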
HOTSPOT - You are planning an Azure Storage solution for sensitive data. The data will be accessed daily. The dataset is less than 10 GB. You need to recommend a storage solution that meets the following requirements: ✑ All the data written to storage must be retained for five years. ✑ Once the data is written, the data can only be read. Modifications and deletion must be prevented. ✑ After five years, the data can be deleted, but never modified. ✑ Data access charges must be minimized. What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:
Storage account type: ______
Correct answer: C (General purpose v2 with Hot access tier for blobs). The data is accessed daily, so transaction and data retrieval charges dominate. The Hot tier has the lowest read/access costs and is intended for frequently accessed data. With a dataset under 10 GB, the higher per-GB storage price of Hot is typically insignificant compared to the cumulative cost of daily reads in Cool or Archive. Why not B (Cool): Cool reduces storage cost but increases read and transaction costs and is optimized for infrequently accessed data (typically 30+ days). Daily access would increase charges. Why not A (Archive): Archive is designed for rarely accessed data with high retrieval latency and additional rehydration costs; it is not suitable for daily access and would significantly increase access-related costs and operational friction.
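A back-of-the-envelope comparison shows why daily access makes Hot the cheaper tier here. The per-GB prices below are ILLUSTRATIVE placeholders, not current Azure pricing; the point is the shape of the trade-off: with daily reads of the full dataset, retrieval charges dominate the small storage cost.

```python
# Rough monthly-cost sketch for the scenario above: 10 GB, read in full once
# per day. Prices are HYPOTHETICAL placeholders, not real Azure rates -- only
# the structure (storage cost vs. per-read retrieval cost) matters.

def monthly_cost(gb, reads_per_day, storage_per_gb, retrieval_per_gb):
    storage = gb * storage_per_gb                      # flat monthly storage
    retrieval = gb * reads_per_day * 30 * retrieval_per_gb  # ~30 days/month
    return storage + retrieval

# Hot: pricier storage, no per-GB retrieval fee.
# Cool: cheaper storage, but every read is charged per GB retrieved.
hot = monthly_cost(10, 1, storage_per_gb=0.02, retrieval_per_gb=0.00)
cool = monthly_cost(10, 1, storage_per_gb=0.01, retrieval_per_gb=0.01)
print(f"hot={hot:.2f} cool={cool:.2f}")
```

Even with these made-up numbers, thirty full reads per month of a Cool-tier dataset outweigh the storage savings, and Archive would add rehydration latency and cost on top.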
Configuration to prevent modifications and deletions: ______
Correct answer: B (Container access policy). To prevent modifications and deletions while enforcing a five-year retention period, you use immutable blob storage configured at the container level via an immutability policy (time-based retention). This is the Azure-native WORM capability: once blobs are written and the policy is locked, blobs cannot be modified or deleted until the retention period expires. After expiration, deletion can be permitted, satisfying the “can be deleted after five years” requirement. Why not A (Container access level): This only controls anonymous/public access (private/blob/container) and does not enforce immutability or retention. Why not C (Storage account resource lock): Resource locks prevent management-plane actions (e.g., deleting the storage account) but do not stop data-plane operations like overwriting or deleting blobs by authorized users/applications. It is not a WORM/retention control.
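The WORM semantics described above can be summarized as two rules: deletion is blocked until the retention period elapses, and modification is never allowed. The sketch below is a behavioral model only; in Azure the enforcement is the container-level immutability policy itself, not application code.

```python
# Minimal model of time-based WORM retention as described above (illustrative
# only -- real enforcement is the locked container immutability policy).
from datetime import date, timedelta

RETENTION = timedelta(days=5 * 365)  # five-year retention, as in the scenario

def can_delete(written_on, today):
    """Deletion is permitted only once the retention period has elapsed."""
    return today >= written_on + RETENTION

def can_modify(written_on, today):
    """Write-once: blobs are never modifiable, even after retention expires."""
    return False

written = date(2020, 1, 1)
print(can_delete(written, date(2023, 1, 1)))  # False: still under retention
print(can_delete(written, date(2025, 1, 1)))  # True: retention elapsed
print(can_modify(written, date(2026, 1, 1)))  # False: never modifiable
```

Note how this differs from a resource lock: the rules here govern data-plane operations on individual blobs, which is exactly what a management-plane lock cannot do.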