
HOTSPOT - For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:
By creating additional resource groups in an Azure subscription, additional costs are incurred.
Correct answer: No (B). A resource group is a logical container used to organize Azure resources for lifecycle management, RBAC, policy assignment, and cost management reporting. Creating additional resource groups does not, by itself, deploy any billable resources or consume metered capacity. Therefore, Azure does not charge simply for having more resource groups. Costs occur when you create or use resources inside the resource groups (for example, virtual machines, managed disks, storage accounts, VPN gateways, databases) or when you generate billable activity (such as outbound data transfer, transactions, or log ingestion). Resource groups are part of Azure Resource Manager governance/management features and are not priced as a standalone item. Why “Yes” is wrong: it confuses organizational constructs with billable services. You might see costs associated with resources placed into those groups, but the act of creating the groups themselves does not incur additional charges.
By copying several gigabits of data to Azure from an on-premises network over a VPN, additional data transfer costs are incurred.
Correct answer: No (B). Copying data from an on-premises network to Azure is inbound data transfer to Azure (ingress). In Azure’s general bandwidth pricing model, inbound data transfer is typically free of charge. Therefore, copying several gigabits of data into Azure over a VPN does not usually incur additional data transfer (bandwidth) charges from Azure for the transfer itself. Important nuance for exam readiness: while ingress bandwidth is generally free, the overall solution might still have costs if you are using a billable connectivity component (for example, an Azure VPN Gateway is billed per hour and may have other charges). However, the question specifically asks about “additional data transfer costs” incurred by copying data into Azure, which points to bandwidth directionality rather than gateway runtime. Why “Yes” is wrong: it assumes all network transfer is billed. For AZ-900, remember the common rule: inbound to Azure is generally free; outbound from Azure is generally charged.
By copying several GB of data from Azure to an on-premises network over a VPN, additional data transfer costs are incurred.
Correct answer: Yes (A). Copying data from Azure to an on-premises network is outbound data transfer from Azure (egress). Azure commonly charges for outbound bandwidth, and those charges increase with the amount of data transferred. Therefore, copying several GB of data from Azure back to on-premises over a VPN generally incurs additional data transfer costs. This is a frequent AZ-900 exam concept: data egress is a typical cost driver, and architects should design with egress in mind (for example, keep workloads and dependent services in the same region, minimize cross-boundary transfers, and use caching/CDN patterns where appropriate). This aligns with the Cost Optimization pillar of the Azure Well-Architected Framework. Why “No” is wrong: it incorrectly applies the “inbound is free” rule to outbound traffic. While some scenarios may have special pricing (or free allowances in specific services), the default principle tested here is that outbound data transfer from Azure is billed.
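To make the directionality rule from these statements concrete, here is a minimal Python sketch of a transfer-cost estimator. The per-GB egress rate is a hypothetical placeholder, not an actual Azure price; real bandwidth pricing varies by region, tier, and monthly volume.

```python
def transfer_cost(gb: float, direction: str, egress_rate_per_gb: float = 0.05) -> float:
    """Estimate bandwidth cost for a data transfer.

    direction: 'ingress' (into Azure) or 'egress' (out of Azure).
    Rule of thumb tested on AZ-900: inbound is generally free,
    outbound is generally billed. The rate is a made-up placeholder.
    """
    if direction == "ingress":
        return 0.0  # inbound to Azure: generally free
    elif direction == "egress":
        return gb * egress_rate_per_gb  # outbound from Azure: billed per GB
    raise ValueError("direction must be 'ingress' or 'egress'")

print(transfer_cost(100, "ingress"))  # copying data into Azure costs nothing
print(transfer_cost(100, "egress"))   # copying data out of Azure is billed
```

Note that this models only bandwidth; as the explanation above mentions, a billable component such as a VPN gateway still accrues its own hourly charges regardless of direction.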
What is required to use Azure Cost Management?
A Dev/Test subscription is a specific offer intended to reduce costs for development and testing workloads (often via Visual Studio subscriptions). It is not a prerequisite to use Azure Cost Management. Cost Management can analyze costs for many subscription types, including Dev/Test, but you do not need Dev/Test to access the service.
Software Assurance is a licensing benefit associated with Microsoft volume licensing (e.g., Windows Server/SQL Server benefits, Azure Hybrid Benefit eligibility). It does not grant access to Azure Cost Management. Cost Management is tied to Azure billing and scopes (billing account/subscription), not to Software Assurance entitlements.
An Enterprise Agreement (EA) is a large-organization purchasing agreement and is not required to use Azure Cost Management. While EA customers commonly use Cost Management at scale (often with advanced chargeback/showback needs), the tooling is available for non-EA customers as well. EA is a purchasing model, not a functional prerequisite.
A pay-as-you-go subscription is the most straightforward requirement among the choices because it represents having an Azure subscription with billable usage and a billing relationship. Azure Cost Management relies on billing and usage data; a pay-as-you-go subscription provides that baseline. Many other subscription/billing types also work, but pay-as-you-go is the best answer here.
Core Concept: Azure Cost Management (often referred to as Cost Management + Billing) is an Azure governance and financial management capability used to monitor, allocate, and optimize cloud spend. It supports cost analysis, budgets, alerts, cost allocation (tags/management groups/subscriptions), and recommendations to improve cost efficiency—aligning strongly with the Azure Well-Architected Framework Cost Optimization pillar.
Why the Answer is Correct: To use Azure Cost Management in the context of AZ-900 fundamentals, you need an Azure subscription that generates billable usage and is supported by Cost Management. A pay-as-you-go subscription is the baseline, commonly referenced subscription type that enables billing and therefore cost tracking and analysis. Cost Management is not restricted to special licensing programs like Software Assurance or Enterprise Agreement; it is available broadly for Azure billing accounts and subscriptions. In exam terms, “pay-as-you-go subscription” is the most universally correct requirement among the options.
Key Features / What You Can Do: With Cost Management you can view cost by scope (management group, subscription, resource group), filter/group by tags, create budgets and alerts, export cost data, and use recommendations to reduce spend (e.g., right-sizing, reserved instances/savings plans where applicable). Access is controlled via Azure RBAC roles such as Cost Management Reader/Contributor or Billing Reader, depending on scope.
Common Misconceptions: Learners often assume you need an Enterprise Agreement (EA) because Cost Management historically had strong EA integration. Others confuse Software Assurance (a licensing benefit) with cost tooling access. Dev/Test subscriptions are a pricing offer for development workloads, not a prerequisite for cost analysis.
Exam Tips: For AZ-900, remember: Cost Management is a governance tool available with Azure subscriptions and billing.
If asked what is “required,” pick the option that represents having a standard Azure subscription with billing (pay-as-you-go). Also watch for wording about permissions—sometimes the real requirement is appropriate RBAC/billing access, but that is not offered in this question.
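The budget-and-alert behavior described above can be sketched as simple threshold logic. The percentage thresholds below are illustrative defaults you might configure, not values mandated by Cost Management:

```python
def budget_alerts(spend: float, budget: float, thresholds=(0.5, 0.8, 1.0)):
    """Return the alert thresholds (as fractions of budget) that current
    spend has crossed. This mirrors how Cost Management budget alerts
    fire at configured percentages; the thresholds are illustrative."""
    used = spend / budget
    return [t for t in thresholds if used >= t]

# $850 spent against a $1,000 budget crosses the 50% and 80% alerts:
print(budget_alerts(850, 1000))   # -> [0.5, 0.8]
print(budget_alerts(1200, 1000))  # over budget: all three alerts fire
```

A key point for the exam: budgets and their alerts notify you about spend; they do not stop resources or block deployments by themselves.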
You have an on-premises network that contains several servers. You plan to migrate all the servers to Azure. You need to recommend a solution to ensure that some of the servers are available if a single Azure data center goes offline for an extended period. What should you include in the recommendation?
Fault tolerance is the ability of a system to keep running when a failure occurs (server, rack, or datacenter). For an extended datacenter outage, you design redundancy and failover using Availability Zones (separate datacenters within a region) and/or cross-region disaster recovery. This directly matches the requirement that “some of the servers are available” even if one datacenter is down.
Elasticity refers to automatically adding or removing resources to match demand (for example, autoscaling during peak usage and scaling in when demand drops). While elasticity improves cost efficiency and performance under variable load, it does not inherently protect against a datacenter outage unless combined with a fault-tolerant multi-zone or multi-region design.
Scalability is the ability to increase capacity to handle growth, either vertically (bigger VM) or horizontally (more instances). Like elasticity, scalability focuses on capacity and performance, not resiliency. A scalable system can still be taken down if all instances are in a single datacenter and that datacenter becomes unavailable.
Low latency means minimizing network delay so users get faster responses, often achieved by choosing closer regions, using CDNs, or optimizing routing. Latency is a performance characteristic, not an availability strategy. A low-latency deployment can still experience downtime if it lacks redundancy across datacenters or zones.
Core concept: This question tests the cloud concept of resiliency, specifically fault tolerance (and closely related high availability). Fault tolerance is the ability of a system to continue operating when a component fails. In Azure terms, the “component” can be a server, rack, or an entire datacenter (availability zone).
Why the answer is correct: If a single Azure datacenter goes offline for an extended period, you need workloads to keep running from another isolated location. That requirement maps directly to fault tolerance: designing the solution so that failure of one datacenter does not cause an outage. In practice, you achieve this by deploying across fault domains and, more importantly for datacenter-level failures, across Availability Zones (zone-redundant architecture) or across paired regions (disaster recovery). The question is phrased at the concept level, and “fault tolerance” is the correct term among the options.
Key features / configurations (what you would include in a recommendation):
- Use Availability Zones for zonal redundancy: place VMs in a zone-redundant configuration using VM Scale Sets, or use Availability Zones with load balancers.
- Use zone-redundant services where available (e.g., zone-redundant storage, Azure SQL zone redundancy in supported tiers/regions).
- Consider region-to-region DR for extended outages or regional disasters: Azure Site Recovery, database geo-replication, and traffic routing via Azure Front Door or Traffic Manager.
These align with the Azure Well-Architected Framework Reliability pillar: design for redundancy, failover, and recovery.
Common misconceptions: Elasticity and scalability are about handling changing demand (performance/capacity), not surviving a datacenter outage. Low latency is about response time and proximity, not resiliency.
Exam tips: When you see “datacenter goes offline,” think Availability Zones (datacenter-level isolation).
When you see “region goes offline,” think paired regions and disaster recovery. If the question asks for the overarching concept, choose fault tolerance/high availability rather than performance-related terms.
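As a small illustration of the zone-redundancy idea, this Python sketch spreads VM instances round-robin across zones so no single zone holds every instance. The zone labels mirror Azure's numeric zone names, but the placement logic itself is purely illustrative:

```python
def assign_zones(vm_names, zones=("1", "2", "3")):
    """Spread VMs round-robin across availability zones so the loss of
    any single zone (datacenter) leaves instances running elsewhere.
    Zone labels mimic Azure's numeric zone identifiers."""
    return {vm: zones[i % len(zones)] for i, vm in enumerate(vm_names)}

placement = assign_zones(["web-1", "web-2", "web-3", "web-4"])
print(placement)  # with 4 VMs and 3 zones, every zone hosts at least one VM
```

In a real deployment you would also put a load balancer in front of the zone-spread instances so traffic fails over automatically; distribution alone is not enough for service availability.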
HOTSPOT - How should you calculate the monthly uptime percentage? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:
First selection
The numerator of the monthly uptime percentage formula must represent the time the service was actually available during the month. That is calculated as total possible time minus the time it was unavailable. Therefore, the correct first selection is (Maximum Available Minutes – Downtime in Minutes). This produces the “available minutes” for the month.
Why the others are wrong:
- Downtime in Minutes alone (A) is not uptime; it’s the opposite measure. Using downtime as the numerator would incorrectly increase the percentage as downtime increases.
- Maximum Available Minutes (B) alone would imply 100% uptime regardless of outages, because it ignores downtime entirely.
In SLA terms, you always subtract downtime from the total measurement window to get actual uptime, then divide by the total measurement window.
Second selection
To compute maximum available minutes for a month, you typically build it from days → hours → minutes. Given the options (60, 1,440, Maximum Available Minutes), the correct second selection is 1,440 because it represents the number of minutes in a day (24 × 60 = 1,440). In many uptime calculations, you’ll see monthly minutes computed as (number of days in month × 1,440).
Why the others are wrong:
- 60 (A) is minutes per hour, but the common monthly shortcut is minutes per day (1,440). Using 60 would require an additional step (hours per day).
- Maximum Available Minutes (C) is the final computed value, not the conversion factor used to calculate it.
Third selection
Uptime percentage is expressed as a percentage, so after calculating the uptime ratio (available minutes divided by maximum available minutes), you multiply by 100. Therefore, the correct third selection is 100.
Why the others are wrong:
- 99.99 (B) is a specific SLA target (often referred to as “four nines”), not the multiplier used to convert a ratio into a percentage. You might compare your computed result to 99.99%, but you don’t multiply by 99.99.
- 1.440 (C) appears to be a misformatted version of 1,440 (minutes per day). That value is used earlier to compute total minutes in the month, not as the final percentage multiplier.
So the final structure is: ((Maximum Available Minutes – Downtime in Minutes) / Maximum Available Minutes) × 100.
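The full formula can be expressed directly in code. This Python sketch uses exactly the structure derived above (days × 1,440 minutes, minus downtime, divided by the total, times 100):

```python
def monthly_uptime_pct(days_in_month: int, downtime_minutes: float) -> float:
    """((Maximum Available Minutes - Downtime in Minutes)
        / Maximum Available Minutes) x 100"""
    max_available = days_in_month * 1440  # 1,440 minutes per day (24 x 60)
    return (max_available - downtime_minutes) / max_available * 100

# A 30-day month has 43,200 maximum available minutes.
# About 4.3 minutes of downtime sits right at the 99.99% boundary:
print(round(monthly_uptime_pct(30, 4.32), 2))  # -> 99.99
```

This also gives a useful intuition for the SLA targets discussed elsewhere in this set: "four nines" allows only a few minutes of downtime per month.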
You attempt to create several managed Microsoft SQL Server instances in an Azure environment and receive a message that you must increase your Azure subscription limits. What should you do to increase the limits?
Creating a service health alert is used to get notified about Azure service incidents, planned maintenance, or health advisories. It does not change subscription quotas or allow additional SQL Managed Instances to be created. Alerts are a monitoring/governance feature, not a mechanism to request capacity or quota changes from Microsoft.
Upgrading your support plan can provide faster response times, additional support channels, or architectural guidance, but it does not automatically increase subscription limits. Quota increases still require submitting a quota request. A higher support plan might help expedite handling, but it is not the direct action that raises the limit.
Azure Policy is used to enforce organizational standards and assess compliance (for example, restricting regions, requiring tags, or limiting allowed SKUs). It cannot override Microsoft-enforced service quotas. Even if policy allowed the deployment, the platform would still block it when the subscription quota is reached.
Creating a new support request is the correct way to increase Azure subscription quotas/limits. In the Azure portal, you submit a quota increase request under “Service and subscription limits (quotas).” Microsoft reviews and adjusts the quota for the relevant subscription, region, and resource type so you can deploy additional SQL Managed Instances.
Core concept: This question tests Azure subscription and service limits (also called quotas). Many Azure resources—including Azure SQL Managed Instance—are governed by per-subscription, per-region quotas (for example, vCore limits, instance counts, or other capacity constraints). When you hit a quota, Azure blocks additional deployments until the quota is increased.
Why the answer is correct: To increase Azure subscription limits/quotas, you typically submit a quota increase request through Azure Support by creating a new support request (often categorized as “Service and subscription limits (quotas)”). This routes the request to Microsoft to adjust the backend quota for your subscription/region. For managed services like SQL Managed Instance, quota increases are not something you can self-serve via policy or monitoring; they require a support workflow.
Key features and best practices:
- Quotas are scoped (subscription, region, resource type) and are separate from role-based access (RBAC) permissions.
- The request is made in the Azure portal: Help + support -> Create a support request -> Quota.
- Plan capacity early (Azure Well-Architected Framework: Cost Optimization and Reliability). Quota constraints can become a reliability risk if scaling is blocked during peak demand.
- Consider regional strategy: if a region is constrained, deploying to another region may be a workaround, but it impacts latency, data residency, and DR design.
Common misconceptions:
- People confuse “limits” with “policies.” Azure Policy governs compliance (allowed SKUs/locations/tags), not Microsoft-enforced service quotas.
- Upgrading a support plan may improve response times or access to support, but it does not automatically raise quotas.
- Service Health alerts notify you about outages/advisories, not quota exhaustion.
Exam tips: On AZ-900, when you see “increase subscription limits/quotas,” the expected action is “create a support request.” Remember: monitoring/alerts inform you, policy restricts you, but quota increases are handled through Azure Support requests.
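The decision the platform effectively makes (deploy if quota remains, otherwise a support request is needed) can be sketched as follows. The function and its inputs are illustrative, not an Azure API:

```python
def quota_action(current_usage: int, limit: int, requested: int) -> str:
    """Decide whether a deployment fits within the remaining quota or
    whether a 'Service and subscription limits (quotas)' support
    request is required. Purely illustrative decision logic."""
    if current_usage + requested <= limit:
        return "deploy"
    return "create quota support request"

# 3 managed instances deployed, limit of 4, asking for 1 more: fits.
print(quota_action(current_usage=3, limit=4, requested=1))
# At the limit and asking for 2 more: blocked until the quota is raised.
print(quota_action(current_usage=4, limit=4, requested=2))
```

The point the sketch reinforces: no amount of alerting, policy, or support-plan upgrades changes the `limit` value; only the quota request workflow does.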
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You plan to deploy several Azure virtual machines. You need to ensure that the services running on the virtual machines are available if a single data center fails. Solution: You deploy the virtual machines to two or more availability zones. Does this meet the goal?
Yes. Availability Zones are physically separate datacenters within an Azure region, each with independent power, cooling, and networking. Deploying the virtual machines to two or more availability zones means that if a single datacenter (zone) fails, the VMs in the remaining zone or zones keep the services running, so the proposed solution meets the stated goal.
No would be correct only if the solution did not address datacenter-level failure. However, Availability Zones are specifically designed to protect against a single datacenter (zone) outage. The main caveat is that the application must be configured to use instances in multiple zones (e.g., load balancing and resilient dependencies), but the proposed deployment approach does meet the stated goal.
Core Concept: This question tests understanding of Azure Availability Zones and how they improve resiliency for virtual machines. Availability Zones are physically separate locations within an Azure region, each with independent power, cooling, and networking. Deploying VMs across multiple zones helps protect workloads from the failure of a single datacenter.
Why the Answer is Correct: The goal is to keep services available if a single datacenter fails. Placing virtual machines in two or more Availability Zones directly addresses that requirement because each zone is isolated from the others within the same region. If one zone becomes unavailable, VMs in the other zone or zones can continue running.
Key Features / What to Know:
- Availability Zones provide high availability within a single Azure region.
- Each zone is a separate physical location with independent infrastructure.
- Zonal or zone-redundant architectures are used to tolerate datacenter-level failures.
- For full service availability, workloads typically also need load balancing and application-level resiliency.
Common Misconceptions: A common misconception is that an Availability Set protects against datacenter failure. Availability Sets protect against host and rack-level failures within a datacenter, but not the loss of an entire datacenter. Another misconception is that simply using multiple VMs is enough; they must be distributed across zones to achieve datacenter-failure tolerance.
Exam Tips:
- If the requirement mentions surviving a single datacenter failure, think Availability Zones.
- If the requirement is only about maintenance events or hardware failures within one datacenter, Availability Sets may be sufficient.
- In AZ-900, distinguish clearly between Availability Sets (same datacenter) and Availability Zones (separate datacenters within a region).
HOTSPOT - You plan to deploy a critical line-of-business application to Azure. The application will run on an Azure virtual machine. You need to recommend a deployment solution for the application. The solution must provide a guaranteed availability of 99.99 percent. What is the minimum number of virtual machines and the minimum number of availability zones you should recommend for the deployment? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:
Minimum number of virtual machines
Minimum number of virtual machines: 2. To achieve a 99.99% availability target for a VM-based application, you must avoid a single-instance deployment. With only 1 VM, any planned maintenance, host failure, OS crash, or VM-level issue causes downtime for the entire application, so you cannot meet a 99.99% guarantee. With 2 VMs running the same workload (typically in an active/active configuration behind a load balancer), the application can continue serving traffic if one VM becomes unavailable. This is the minimum instance count that enables redundancy at the compute layer. Why not 3? Three VMs can improve capacity and reduce risk further, but it is not the minimum required to meet 99.99% in Azure’s standard high-availability patterns. Exam questions asking for “minimum” focus on the smallest architecture that satisfies the SLA requirement, which is two instances.
Minimum number of availability zones
Minimum number of availability zones: 2. Availability Zones are physically separate datacenters within an Azure region, each with independent power, cooling, and networking. To claim zone-level resiliency (and meet the common exam expectation for a 99.99% guarantee for critical workloads), you must deploy across at least two zones so that a zone outage does not take down the application. Using only 1 zone provides no zone redundancy; it is effectively a single failure boundary at the datacenter level. Why not 3? Three zones can further increase resiliency and is often recommended for higher fault tolerance, but it is not the minimum. Two zones is the smallest number that provides protection against the loss of a single zone while still allowing the application to run on the remaining zone.
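A back-of-the-envelope way to see why redundancy raises availability is the independent-failure model below. The per-instance availability figure is an assumed illustration, not an Azure SLA number, and real SLAs are contractual commitments rather than probabilities:

```python
def composite_availability(per_instance: float, instances: int) -> float:
    """Availability of n redundant instances under an independent-failure
    assumption: the service is down only when every instance is down.
    A simplification for intuition, not how Azure SLAs are defined."""
    return 1 - (1 - per_instance) ** instances

# Two instances in separate zones, each assumed ~99% available on its own:
print(round(composite_availability(0.99, 1), 4))  # one VM alone: 0.99
print(round(composite_availability(0.99, 2), 4))  # two redundant VMs: 0.9999
```

The intuition matches the answer: one instance cannot reach "four nines," but two redundant instances in separate failure boundaries can, which is why the minimums are 2 VMs and 2 zones.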
This question requires that you evaluate the underlined text to determine if it is correct.
Azure Germany can be used by legal residents of Germany only.
Instructions: Review the underlined text. If it makes the statement correct, select No change is needed. If the statement is incorrect, select the answer choice that makes the statement correct.
The original statement is incorrect because Azure Germany was not defined as a service for legal residents of Germany only. The eligibility focus in the exam material was on enterprises, not on an individual's residency status. Legal residency is therefore the wrong criterion. Leaving the statement unchanged would preserve that error.
This option correctly replaces the inaccurate phrase about legal residents of Germany with the exam-expected eligibility criterion for Azure Germany. In classic AZ-900 content, Azure Germany was presented as a sovereign cloud offering for enterprises registered in Germany rather than for individuals based on nationality or residency. That makes this answer the best fit because it reflects organizational eligibility instead of personal residency. It also avoids introducing unsupported requirements such as purchasing through a local partner.
This option is incorrect because Azure Germany access was not based on where the customer purchased Azure licenses. Buying through a partner in Germany is a procurement detail, not the defining eligibility rule for the sovereign cloud. The exam objective did not require a Germany-based reseller or partner as the condition for use. Therefore this answer adds a restriction that is not the core rule.
This option sounds plausible if the question were about ordinary Azure regions in Germany, but it is not the best answer for Azure Germany as a sovereign cloud offering. Azure Germany was not simply available to any user or enterprise worldwide based only on a desire for German data residency. The exam distinction was that Azure Germany had a narrower eligibility model tied to German-registered enterprises. As a result, this option overgeneralizes access and is not the correct replacement.
Core concept: This question tests knowledge of Azure Germany, which was a special sovereign cloud offering distinct from standard Azure public regions. The key point is that Azure Germany was not limited to legal residents of Germany, but it also was not simply available to any customer anywhere just because they preferred German data residency.
Why correct: The correct replacement is that Azure Germany can be used by only enterprises that are registered in Germany. This aligns with the older AZ-900 exam objective language around Azure Germany eligibility, which focused on German-registered organizations rather than individual legal residents or procurement channel requirements.
Key features:
- Azure Germany was a separate cloud environment designed to address German data residency and compliance expectations.
- It was intended for business and organizational use cases, not restricted by personal citizenship or legal residency.
- Eligibility was tied to enterprise registration in Germany in the exam context, not to buying through a German partner.
Common misconceptions:
- Confusing Azure Germany with ordinary Azure regions in Germany, which are selected for workload placement.
- Assuming any customer can use Azure Germany merely because they want data stored in Germany.
- Believing access depended on purchasing through a Germany-based licensing partner.
Exam tips:
- Watch for older AZ-900 questions that distinguish Azure Germany from standard Azure public regions.
- If the wording mentions Azure Germany specifically, think sovereign cloud eligibility rather than normal regional deployment.
- Eliminate answers based on citizenship, residency, or partner channel unless the service explicitly requires them.
You have a virtual machine named VM1 that runs Windows Server 2016. VM1 is in the East US Azure region. Which Azure service should you use from the Azure portal to view service failure notifications that can affect the availability of VM1?
Azure Service Fabric is a distributed systems platform for building and running microservices and containerized applications. It helps with application reliability and orchestration but does not provide Azure platform service failure notifications for a VM in a region. It’s not the tool used to view Azure service incidents or planned maintenance events affecting VM1.
Azure Monitor is the correct choice because it is the central monitoring service in Azure and includes Azure Service Health in the portal experience. Service Health provides service issue notifications, planned maintenance, and health advisories that can affect resources in a given region (East US) and subscription. You can also configure alerts from these events to notify administrators proactively.
The Azure virtual machines service (or the VM resource blade) is where you manage VM1 (start/stop, sizing, disks, networking) and view VM-level metrics and logs. While you might see some VM status information, it is not the primary portal location for Azure-wide service failure notifications and planned maintenance affecting the region or underlying platform.
Azure Advisor provides best-practice recommendations across cost, security, reliability, performance, and operational excellence. It can suggest actions to improve resiliency (e.g., use availability zones), but it is not designed to be the main interface for viewing real-time service failure notifications or planned maintenance events impacting VM1 in East US.
Core concept: This question tests how to monitor Azure platform health events that can impact a resource’s availability. In Azure, service failure notifications (planned maintenance, unplanned outages, and health advisories) are surfaced through Azure Service Health, which is accessed and acted on through Azure Monitor in the Azure portal.
Why the answer is correct: From the Azure portal, Azure Monitor is the hub for monitoring and alerting. Within Azure Monitor, you can access Service Health to view service issues and planned maintenance affecting your subscriptions and regions (for example, East US) and to see how those events may impact resources like VM1. Azure Monitor also lets you create alerts (email/SMS/webhook/ITSM) based on Service Health events so you can be notified proactively when an incident or maintenance event could affect VM availability.
Key features and best practices: Azure Monitor + Service Health provides:
- Service issues: unplanned outages or degradations in an Azure service.
- Planned maintenance: scheduled updates that may require VM reboots or cause brief interruptions.
- Health advisories: important notices (e.g., security or performance guidance).
Best practice aligned with the Azure Well-Architected Framework (Reliability): configure Service Health alerts for critical regions and subscriptions, integrate with incident management, and design for resiliency (availability zones, region pairs, backups) so a single-region event has reduced impact.
Common misconceptions: Many learners confuse “Azure Advisor” with outage notifications. Advisor provides recommendations (cost, security, reliability, operational excellence) but it is not the primary place to view live platform incidents. Others choose “Azure virtual machines” because VM blades show some status info, but they don’t provide comprehensive platform-wide service failure notifications. “Service Fabric” is an application platform and unrelated to Azure platform health notifications.
Exam tips: For AZ-900, remember: platform incident/maintenance notifications = Azure Service Health (found under Azure Monitor). If the question mentions “notifications,” “outages,” “planned maintenance,” or “region/service issues,” think Service Health/Monitor rather than Advisor or the resource blade.
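Conceptually, Service Health scopes platform events down to the ones that can affect your resources. This Python sketch mimics that filtering with an illustrative event shape; it is not the actual Service Health API:

```python
def events_affecting(events, region, services):
    """Keep only the health events whose region and service overlap with
    a given resource -- the same scoping Service Health applies when you
    filter by subscription, region, and service. Event shape is made up."""
    return [
        e for e in events
        if region in e["regions"] and e["service"] in services
    ]

events = [
    {"type": "Service issue", "service": "Virtual Machines", "regions": ["East US"]},
    {"type": "Planned maintenance", "service": "Virtual Machines", "regions": ["West Europe"]},
    {"type": "Health advisory", "service": "Azure SQL", "regions": ["East US"]},
]
hits = events_affecting(events, "East US", {"Virtual Machines"})
print(len(hits))  # -> 1: only the East US VM service issue is relevant to VM1
```

For VM1 in East US, a West Europe maintenance event and an East US Azure SQL advisory are both filtered out, which is exactly why Service Health is more useful than a raw incident feed.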
Which Azure service provides a set of version control tools to manage code?
Azure Repos is the Azure DevOps service for source control. It provides Git repositories and TFVC, enabling teams to manage code with branching, merging, pull requests, code reviews, and access controls. It integrates tightly with Azure Pipelines for CI/CD and supports governance via branch policies (e.g., required reviewers and build validation), making it the correct choice for version control tools.
Azure DevTest Labs is used to create, manage, and control development/test environments (often using preconfigured VM images) with policies to reduce cost and improve consistency. It helps teams quickly provision lab resources and enforce quotas and schedules, but it does not provide version control capabilities such as repositories, branching, or pull requests.
Azure Storage provides scalable storage services (Blob, File, Queue, Table) for unstructured and structured data. While you can store code files in Blob Storage, it does not provide version control features like commit history, branching/merging, pull requests, or code review workflows. It is a storage platform, not a source control system.
Azure Cosmos DB is a globally distributed, multi-model NoSQL database designed for application data with low latency and high availability. It supports APIs like NoSQL, MongoDB, Cassandra, Gremlin, and Table. It is not intended for managing source code or providing version control workflows, so it does not meet the requirement in the question.
Core concept: This question tests recognition of Azure DevOps services, specifically the component that provides version control (source control) tools for managing code. In Azure, version control is typically delivered through Azure DevOps, which includes Azure Repos, Azure Pipelines, Azure Boards, Azure Test Plans, and Azure Artifacts.
Why the answer is correct: Azure Repos is the Azure DevOps service that provides a set of version control tools to manage code. It supports both Git repositories (distributed version control) and Team Foundation Version Control (TFVC, centralized version control). This aligns directly with the requirement: “a set of version control tools to manage code.” In real-world DevOps workflows, Azure Repos is where teams store source code, manage branches, perform pull requests, enforce policies (like required reviewers), and integrate with CI/CD pipelines.
Key features and best practices: Azure Repos offers Git/TFVC hosting, branch policies, pull requests with code reviews, repository permissions, and integration with Azure Pipelines for automated builds and deployments. Best practices include using Git with a clear branching strategy (e.g., trunk-based development or GitFlow depending on team needs), enforcing branch policies (build validation, minimum reviewers), and using pull requests to improve code quality and security. These practices support Azure Well-Architected Framework principles such as Operational Excellence (repeatable processes, automation) and Security (controlled changes, least privilege).
Common misconceptions: Azure DevTest Labs sounds developer-focused, but it is for creating and managing dev/test environments (VMs, lab policies, cost controls), not source control. Azure Storage can store files/blobs, but it does not provide version control workflows like branching, merging, and pull requests. Azure Cosmos DB is a globally distributed NoSQL database for application data, not code management.
Exam tips: For AZ-900, map “version control/source control” to Azure DevOps and specifically Azure Repos. Map “CI/CD” to Azure Pipelines, “work tracking” to Azure Boards, and “packages” to Azure Artifacts. If the question mentions Git repos, pull requests, or branch policies, the answer is almost always Azure Repos.
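Branch policies in Azure Repos act as gates on merging a pull request. This Python sketch models two common gates, minimum reviewers and build validation; the thresholds and the function itself are illustrative, not the Azure DevOps API:

```python
def pr_can_merge(approvals: int, build_passed: bool,
                 min_reviewers: int = 2, require_build: bool = True) -> bool:
    """Evaluate a pull request against branch-policy style gates
    (minimum number of reviewers, build validation). The defaults
    are illustrative values a team might configure, not fixed rules."""
    if approvals < min_reviewers:
        return False  # not enough reviewer approvals yet
    if require_build and not build_passed:
        return False  # build validation policy blocks the merge
    return True

print(pr_can_merge(approvals=2, build_passed=True))   # -> True
print(pr_can_merge(approvals=1, build_passed=True))   # -> False (needs 2 reviewers)
```

This is the behavior the exam language points at: policies make the protected branch accept changes only through reviewed, validated pull requests.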