Microsoft AZ-104

Practice Test #3

Simulate the real exam experience with 50 questions and a 100-minute time limit. Practice with AI-verified answers and detailed explanations.

50 Questions · 100 Minutes · 700/1000 Passing Score

Powered by AI

Answers and Explanations Verified by 3 AIs

Each answer is verified by 3 leading AI models to ensure maximum accuracy. Get detailed explanations for each answer choice and an in-depth analysis of every question.

GPT Pro
Claude Opus
Gemini Pro
Explanations for each answer choice
In-depth question analysis
Accuracy through 3-model consensus

Practice Questions

Question 1

You have an Azure subscription that contains a virtual machine named VM1. VM1 hosts a line-of-business application that is available 24 hours a day. VM1 has one network interface and one managed disk. VM1 uses the D4s v3 size. You plan to make the following changes to VM1:

- Change the size to D8s v3.
- Add a 500-GB managed disk.
- Add the Puppet Agent extension.
- Enable Desired State Configuration Management.

Which change will cause downtime for VM1?

Enabling Desired State Configuration (DSC) management (commonly via Azure Automation State Configuration/DSC) is a configuration management capability. It applies configuration to the guest OS using an agent/extension and pull server model. This is typically performed while the VM is running and does not inherently require stopping/deallocating the VM. While DSC may restart services (or even the OS) depending on the configuration you apply, enabling the management feature itself is not expected to cause downtime on the exam.

Adding (attaching) a 500-GB managed disk to an existing Azure VM is generally an online operation. You can attach a new managed data disk while the VM is running; the platform does not require downtime. After attachment, you still need to initialize/partition/format the disk inside the guest OS, which can also be done online. This is a common AZ-104 point: data disk attach is typically hot-add.

Changing the VM size from D4s v3 to D8s v3 is a resize operation that commonly requires the VM to be stopped (deallocated) and restarted so Azure can reallocate compute resources on a host that supports the new SKU. This results in downtime for the workload running on VM1. Even within the same series, capacity constraints can force a move to different hardware, making downtime the expected outcome.

Adding the Puppet Agent extension is performed through the Azure VM Agent as a VM extension deployment. VM extensions are designed to be installed and updated while the VM is running and typically do not require a reboot or deallocation. Although the extension may install software and could restart services depending on how it’s configured, the act of adding the extension itself is not considered a downtime-causing platform operation in AZ-104.

Question Analysis

Core concept: This question tests Azure VM lifecycle operations and which actions require a VM restart/deallocation (downtime) versus actions that are "hot" changes. In AZ-104, you must know which compute changes are online and which require the VM to stop.

Why the answer is correct: Changing the VM size from D4s v3 to D8s v3 typically requires the VM to be stopped (deallocated) and then started again so Azure can move the VM to hardware that supports the new vCPU/memory allocation. Even when resizing within the same VM family, Azure often must reallocate the VM to a different host, which causes downtime. For a 24x7 line-of-business workload, this is a key availability consideration; the Azure Well-Architected Framework (Reliability pillar) recommends designing for redundancy (e.g., multiple instances behind a load balancer, Availability Zones/sets) so planned maintenance like resizing doesn't impact availability.

Key features and best practices:
- VM resize is a compute host allocation change; it commonly triggers a stop/deallocate operation.
- If the target size is not available on the current cluster/host, Azure must move the VM, guaranteeing downtime.
- To avoid downtime, scale out (multiple VMs) rather than scale up, or use VM Scale Sets / availability constructs.

Common misconceptions:
- "Same series resize is always online": not true; Azure may still need to reallocate.
- "Extensions cause downtime": extensions are installed by the Azure VM Agent and generally do not require a reboot (though some extensions or their configurations might). The exam expects you to treat extension installation as non-downtime unless explicitly stated.

Exam tips:
- Memorize which operations require deallocation: resizing the VM SKU is a classic one.
- Disk attach/detach and most extension installs are typically online.
- For 24/7 apps, mention HA patterns (Availability Zones/sets, load balancing) to handle planned downtime operations.

References to review:
- Azure VM resizing behavior and deallocation requirements (Azure Virtual Machines documentation)
- Azure Well-Architected Framework: Reliability
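To make the downtime-causing change concrete: a resize is a single property update on the VM resource. Below is a minimal sketch of the request body for the Virtual Machines - Update (PATCH) REST operation, with placeholder URI segments; even though the request itself is small, Azure still restarts VM1 if the new size requires moving it to different hardware:

PATCH https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/VM1?api-version=2023-03-01

{
    "properties": {
        "hardwareProfile": {
            "vmSize": "Standard_D8s_v3"
        }
    }
}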

Question 2

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure Active Directory (Azure AD) tenant named Adatum and an Azure subscription named Subscription1. Adatum contains a group named Developers. Subscription1 contains a resource group named Dev. You need to provide the Developers group with the ability to create Azure logic apps in the Dev resource group. Solution: On Subscription1, you assign the DevTest Labs User role to the Developers group. Does this meet the goal?

Yes is incorrect because the proposed role is not designed for general Azure resource deployment. DevTest Labs User is intended for interacting with resources in Azure DevTest Labs, such as using lab VMs, rather than creating arbitrary services like Logic Apps. Even when assigned at the Subscription1 scope, it does not confer Contributor-like capabilities. To meet the goal, a role such as Contributor on the Dev resource group would be appropriate.

No is correct because the DevTest Labs User role does not include the permissions required to create Azure Logic Apps. Creating a Logic App requires write access to Microsoft.Logic/workflows resources, which is not part of that built-in role. Although the role is assigned at the subscription scope, scope only determines where permissions apply, not what actions are allowed. Therefore, the Developers group still would not be able to create Logic Apps in the Dev resource group.

Question Analysis

Core concept: This question tests Azure RBAC role assignments at the correct scope. To allow a group to create Azure Logic Apps in a specific resource group, the assigned role must include write permissions for Logic App resources, such as Microsoft.Logic/workflows/*, at the Dev resource group scope or higher.

Why correct: The DevTest Labs User role does not grant general resource creation permissions for Azure services like Logic Apps. It is intended for users of Azure DevTest Labs to connect to, start, restart, and use virtual machines within a lab environment. Therefore, assigning this role on Subscription1 would not allow the Developers group to create Logic Apps in the Dev resource group.

Key features:
- Logic Apps are Azure resources under the Microsoft.Logic provider and require resource write permissions.
- Built-in roles such as Contributor at the resource group scope would allow creating Logic Apps.
- RBAC scope matters: assigning a suitable role at the Dev resource group is sufficient and follows least privilege better than assigning at the subscription level.

Common misconceptions: A common mistake is assuming any role with the word "User" or a development-related name grants broad deployment rights. DevTest Labs User is narrowly focused on lab usage, not general Azure resource deployment. Another misconception is that assigning a role at the subscription scope compensates for insufficient permissions in the role definition; scope expands reach, but not capabilities.

Exam tips: For AZ-104, remember to evaluate both the role definition and the assignment scope. If the task is to create or manage most resource types in a resource group, Contributor is often the expected built-in role unless the question specifies a more limited custom role. Roles related to DevTest Labs, monitoring, or support typically do not grant broad create permissions for unrelated services.
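As an illustrative sketch only (the group object ID is a placeholder and the guid() assignment name is arbitrary), the assignment the analysis recommends could be deployed to the Dev resource group as an ARM resource; b24988ac-6180-42a0-ab88-20f7382dd24c is the well-known built-in Contributor role definition ID:

{
    "type": "Microsoft.Authorization/roleAssignments",
    "apiVersion": "2022-04-01",
    "name": "[guid(resourceGroup().id, 'Developers', 'Contributor')]",
    "properties": {
        "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'b24988ac-6180-42a0-ab88-20f7382dd24c')]",
        "principalId": "<Developers-group-object-id>",
        "principalType": "Group"
    }
}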

Question 3

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure virtual machine named VM1 that runs Windows Server 2016. You need to create an alert in Azure when more than two error events are logged to the System event log on VM1 within an hour. Solution: You create an Azure Log Analytics workspace and configure the data settings. You install the Microsoft Monitoring Agent on VM1. You create an alert in Azure Monitor and specify the Log Analytics workspace as the source. Does this meet the goal?

This solution correctly establishes a pipeline to collect Windows System event log entries from VM1 into Azure Monitor Logs by using a Log Analytics workspace and the Microsoft Monitoring Agent. With the System log data ingested, Azure Monitor can create a scheduled query (log) alert that counts Error-level events over the last hour and triggers when the count is greater than two. The workspace being specified as the alert source is exactly how log-based alerts are evaluated. Therefore, it satisfies the requirement to alert based on the number of System error events within a one-hour window.

Answering "No" would imply the proposed approach cannot generate the required alert, but it can because Azure Monitor log alerts are designed for this exact scenario. Windows Event Logs are not available as native Azure metrics, so the correct method is to ingest them into Log Analytics and query them with KQL. Installing the agent and configuring event log collection ensures the data is present for the alert rule to evaluate. As a result, rejecting this solution is incorrect because it does meet the stated goal.

Question Analysis

Core concept: This question tests how to collect Windows Event Logs from an Azure VM into Azure Monitor Logs (Log Analytics) and create a log-based alert that triggers when a threshold of specific events (System log errors) occurs within a time window.

Why the answer is correct: Creating a Log Analytics workspace and configuring data collection for Windows Event Logs enables ingestion of the VM's System event log entries into Azure Monitor Logs. Installing the Microsoft Monitoring Agent (MMA) on a Windows Server 2016 VM is a supported way (especially for classic/legacy setups) to send event log data to the workspace. Once the data is in the workspace, an Azure Monitor log alert rule can run a KQL query that counts "Error" level events in the System log over the last hour and triggers when the count exceeds 2. Therefore, using the workspace as the alert source meets the requirement.

Key features / configurations:
- Log Analytics workspace (Azure Monitor Logs) as the central store for collected logs.
- Data settings / data sources configured to collect Windows Event Logs (System log, Error level).
- Microsoft Monitoring Agent (MMA) installed and connected to the workspace to forward event logs.
- Azure Monitor log alert rule (scheduled query rule) using KQL, e.g., filtering EventLog == "System" and Level == "Error", summarizing count over 1h, and threshold > 2.
- Alert evaluation frequency (e.g., every 5 minutes) and time window (1 hour) configured in the alert rule.

Common misconceptions:
- Assuming metric alerts can directly count Windows Event Log entries; event logs require log-based alerts via Log Analytics.
- Confusing Activity Log alerts (control-plane events) with guest OS event logs (data-plane/VM OS logs).
- Forgetting that the VM must actually send event logs to a workspace (agent + data collection configuration) before a log alert can query them.

Exam tips:
- Use Log Analytics + log alerts for guest OS logs (Windows Event Logs, syslog).
- Activity Log alerts are for Azure resource operations, not VM System/Application event logs.
- Ensure the collection pipeline exists: agent (or AMA/DCR) + workspace + query-based alert.
- Threshold/time-window requirements map naturally to KQL summarize/count over a defined timespan.
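A minimal sketch of such a rule as an ARM resource (Microsoft.Insights/scheduledQueryRules), assuming the legacy MMA-populated Event table, a placeholder workspace resource ID, and illustrative name/location values:

{
    "type": "Microsoft.Insights/scheduledQueryRules",
    "apiVersion": "2021-08-01",
    "name": "VM1-System-Errors",
    "location": "eastus",
    "properties": {
        "severity": 2,
        "enabled": true,
        "scopes": [ "<log-analytics-workspace-resource-id>" ],
        "evaluationFrequency": "PT5M",
        "windowSize": "PT1H",
        "criteria": {
            "allOf": [
                {
                    "query": "Event | where Computer == 'VM1' and EventLog == 'System' and EventLevelName == 'Error'",
                    "timeAggregation": "Count",
                    "operator": "GreaterThan",
                    "threshold": 2
                }
            ]
        }
    }
}

The rule counts matching rows over the 1-hour window (windowSize) every 5 minutes (evaluationFrequency) and fires when the count is greater than 2, which matches the question's requirement.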

Question 4

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure virtual machine named VM1 that runs Windows Server 2016. You need to create an alert in Azure when more than two error events are logged to the System event log on VM1 within an hour. Solution: You create an Azure Log Analytics workspace and configure the data settings. You add the Microsoft Monitoring Agent VM extension to VM1. You create an alert in Azure Monitor and specify the Log Analytics workspace as the source. Does this meet the goal?

This solution correctly onboards VM1 to Azure Monitor Logs by creating a Log Analytics workspace and installing the Microsoft Monitoring Agent VM extension. Configuring the workspace data settings to collect Windows System event logs ensures that error events are ingested and available for querying. An Azure Monitor log alert can then run a KQL query that counts System log error events over a 1-hour window and triggers when the count exceeds two. Therefore, it satisfies the requirement to alert when more than two System error events occur within an hour.

Answering "No" would imply the proposed approach cannot generate the required alert, but it can when properly configured. Azure Monitor log alerts are specifically designed to evaluate log data in a Log Analytics workspace using KQL, including counting Windows Event Log entries over a time window. With the MMA extension installed and System error events collected, the alert condition (more than two errors in one hour) is straightforward to implement. The only way it would fail is if event collection or the query/time window were misconfigured, which is not indicated in the solution.

Question Analysis

Core concept: This question tests how to collect Windows Event Logs from an Azure VM into Azure Monitor Logs (Log Analytics) and create a log-based alert that triggers when a threshold of specific events (System log errors) occurs within a time window.

Why the answer is correct: To alert on Windows Event Log entries (like System log "Error" events), Azure must first ingest those events into a queryable store. Creating a Log Analytics workspace and configuring data collection for Windows Event Logs enables ingestion of the System log. Installing the Microsoft Monitoring Agent (MMA) VM extension on VM1 connects the VM to the workspace and sends the configured event logs. Once the data is in the workspace, an Azure Monitor log alert rule can run a KQL query that counts error events over the last hour and triggers when the count exceeds 2, meeting the requirement.

Key features / configurations:
- Log Analytics workspace (Azure Monitor Logs) as the destination for collected event data.
- Data settings / data sources: Windows Event Logs (System) with level = Error (or specific Event IDs if needed).
- Microsoft Monitoring Agent (MMA) VM extension to onboard the VM and send logs to the workspace.
- Azure Monitor log alert rule (scheduled query rule) using KQL, e.g., count of System errors in the last 60 minutes > 2.
- Alert evaluation frequency and time window aligned to "within an hour."

Common misconceptions:
- Assuming metric alerts can directly read Windows Event Logs; they cannot without log ingestion.
- Thinking Azure Activity Log alerts apply to guest OS events; the Activity Log is for Azure control-plane operations, not Windows System logs.
- Forgetting that the VM must be connected to Log Analytics (via agent/extension) and that the specific event log (System) must be enabled in data collection.

Exam tips:
- Use Log Analytics + agent/extension when the requirement involves guest OS logs (Windows Event Logs, syslog).
- For "X events within Y time," think "Azure Monitor log alert (scheduled query) with a count() over a time range."
- Ensure the correct log channel (System/Application/Security) is configured for collection.
- Distinguish control-plane logs (Activity Log) from data-plane/guest logs (Log Analytics).
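Since this variant calls out the MMA VM extension specifically, here is a minimal sketch of that extension as an ARM resource; the workspace ID/key values are placeholders and the location is illustrative:

{
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "apiVersion": "2021-11-01",
    "name": "VM1/MicrosoftMonitoringAgent",
    "location": "eastus",
    "properties": {
        "publisher": "Microsoft.EnterpriseCloud.Monitoring",
        "type": "MicrosoftMonitoringAgent",
        "typeHandlerVersion": "1.0",
        "autoUpgradeMinorVersion": true,
        "settings": { "workspaceId": "<workspace-id>" },
        "protectedSettings": { "workspaceKey": "<workspace-key>" }
    }
}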

Question 5

HOTSPOT - You plan to deploy five virtual machines to a virtual network subnet. Each virtual machine will have a public IP address and a private IP address. Each virtual machine requires the same inbound and outbound security rules. What is the minimum number of network interfaces and network security groups that you require? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

Minimum number of network interfaces: ______

Minimum number of network interfaces = 5. In Azure, each VM must have at least one NIC to connect to a subnet and receive a private IP address. A public IP address is not a separate NIC; it is a resource associated with an IP configuration on a NIC (or with a load balancer). Since each of the five VMs requires both a private IP and a public IP, the simplest and minimum configuration is one NIC per VM, with one private IP configuration and an associated public IP.

Why the other options are wrong:
- 10, 15, or 20 NICs would imply multiple NICs per VM. Multiple NICs are optional and used for scenarios like multi-homing, traffic separation, or NVA designs, but they are not required here. The question asks for the minimum, and nothing indicates additional NICs are needed.

Part 2:

Minimum number of network security groups: ______

Minimum number of network security groups = 1. An NSG can be associated with a subnet and/or with individual NICs. When all VMs in the same subnet require the same inbound and outbound security rules, you can apply a single NSG to the subnet, as shown in the sketch below. This enforces a consistent rule set for every NIC/VM in that subnet and is the minimum number of NSGs needed. It also reduces administrative overhead and the risk of inconsistent rules across VMs.

Why the other options are wrong:
- 2 NSGs is unnecessary unless you need different rule sets (e.g., separate tiers) or exceptions.
- 5 NSGs (one per VM/NIC) would work but is not minimal and increases management complexity.
- 10 NSGs is even more excessive and not justified by the requirements.

Note: You could also attach the single NSG to each NIC instead of the subnet, but it would still be one NSG total.
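A minimal ARM sketch of the single-NSG pattern, using hypothetical vnet1/subnet1 and nsg1 names; every NIC placed in the subnet, and therefore all five VMs, inherits the same rule set from the one NSG:

{
    "type": "Microsoft.Network/virtualNetworks/subnets",
    "apiVersion": "2023-04-01",
    "name": "vnet1/subnet1",
    "properties": {
        "addressPrefix": "10.0.0.0/24",
        "networkSecurityGroup": {
            "id": "[resourceId('Microsoft.Network/networkSecurityGroups', 'nsg1')]"
        }
    }
}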


Question 6

You have an Azure subscription that includes data in the following locations:

Name | Type
container1 | Blob container
share1 | Azure Files share
DB1 | SQL database
Table1 | Azure Table

You plan to export data by using an Azure Import/Export job named Export1. You need to identify the data that can be exported by using Export1. Which data should you identify?

DB1 is an Azure SQL Database (PaaS). Azure Import/Export does not export SQL databases because it operates at the Azure Storage layer (copying storage objects to/from disks). SQL Database exports are typically done using bacpac (Export Data-tier Application), Azure Database Migration Service, replication, or other SQL-native backup/migration approaches.

container1 is a Blob container in Azure Blob Storage. Azure Import/Export export jobs support exporting data from Azure Blob storage to customer-provided disk drives shipped to an Azure datacenter. This matches the service’s intended use: offline transfer of large blob datasets when network transfer is too slow or impractical.

share1 is an Azure Files share. Although it is part of Azure Storage, Import/Export export jobs do not support exporting Azure Files shares. To move Azure Files data, you would typically use AzCopy/robocopy over SMB, Azure File Sync, or Data Box family services depending on scale and offline requirements.

Table1 is an Azure Table (Table storage). Import/Export does not export Table storage entities because it is not a blob/object export mechanism for that service. Table data is usually exported by writing a copy process using Azure Storage SDKs, Azure Data Factory, or other ETL/data movement tools.

Question Analysis

Core concept: Azure Import/Export is a service for transferring large amounts of data to or from Azure Storage by shipping physical disk drives to an Azure datacenter. In AZ-104, it's tested as part of storage operations and data movement options.

Why the answer is correct: An export job (Export1) supports exporting data from Azure Blob storage (blob containers) to customer-provided drives. Therefore, data in a blob container (container1) can be exported using an Import/Export export job. The service is designed around Azure Storage accounts and specifically supports Blob storage for export scenarios.

Key features and best practices:
- Export jobs are used when network transfer is impractical (very large datasets, limited bandwidth, or strict transfer windows).
- You select the storage account and the blob data to export; Azure copies the selected blobs to the shipped drives.
- Plan for operational considerations: job creation, drive preparation, shipping logistics, and chain-of-custody. Also consider encryption and access controls (least privilege) aligned with Azure Well-Architected Framework security principles.
- For many scenarios, online transfer tools (AzCopy, Azure Data Box, storage account replication) may be preferable, but Import/Export remains relevant for certain offline workflows.

Common misconceptions:
- Azure Files shares (SMB/NFS) are often assumed to be exportable because they are "storage," but Import/Export export jobs do not export Azure Files shares.
- Azure SQL Database and Azure Table storage are data services; they are not exported via Import/Export jobs. They require service-specific export/migration methods (e.g., SQL bacpac/backup/replication, Table data copy via tools/SDK).

Exam tips:
- Memorize the mapping: Import/Export is for Azure Storage data movement using physical drives; for export, think "Blob."
- If you see Azure Files, SQL Database, or Table storage in options, expect they are not supported by Import/Export export jobs and require other migration/export approaches.
- Domain alignment: this is squarely in "Implement and Manage Storage."
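For orientation only, an export job is created against the Import/Export resource provider (Microsoft.ImportExport/jobs) with a body along the following general lines. Treat the field names as a loose sketch to be checked against the current API reference; all values below are placeholders, and the return-address object is truncated:

{
    "location": "eastus",
    "properties": {
        "jobType": "Export",
        "storageAccountId": "<storage-account-resource-id>",
        "returnAddress": { "recipientName": "<name>", "email": "<email>" },
        "export": {
            "blobList": {
                "blobPathPrefix": [ "/container1/" ]
            }
        }
    }
}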

Question 7

DRAG DROP - You have an on-premises file server named Server1 that runs Windows Server 2016. You have an Azure subscription that contains an Azure file share. You deploy an Azure File Sync Storage Sync Service, and you create a sync group. You need to synchronize files from Server1 to Azure. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place:

Part 1:

Select the correct answer(s) in the image below.

question-image

The answer is A because the correct sequence of actions for Azure File Sync is well defined:

1) Install the Azure File Sync agent on Server1.
2) Register Server1 with the Storage Sync Service.
3) Add a server endpoint in the sync group (select Server1 and the local path to sync).

Why this is correct: The agent is required to communicate with the Storage Sync Service and perform sync operations. Registration is required before the server can be selected when creating a server endpoint. The server endpoint is what actually links a specific folder on Server1 to the sync group (and therefore to the Azure file share/cloud endpoint).

Why the others are wrong: The Azure on-premises data gateway is unrelated to file sync. A Recovery Services vault is for Azure Backup/ASR, not Azure File Sync. DFS Replication is not required and is a different replication technology.
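Step 3 corresponds to a serverEndpoints sub-resource under the sync group (New-AzStorageSyncServerEndpoint is the usual PowerShell route). A minimal ARM sketch, assuming hypothetical sync1/group1 names, a placeholder registered-server resource ID, and an example local path:

{
    "type": "Microsoft.StorageSync/storageSyncServices/syncGroups/serverEndpoints",
    "apiVersion": "2020-09-01",
    "name": "sync1/group1/server1-data",
    "properties": {
        "serverResourceId": "<registered-server-resource-id>",
        "serverLocalPath": "D:\\Data",
        "cloudTiering": "off"
    }
}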

Question 8

You have an Azure subscription named Subscription1. You have 5 TB of data that you need to transfer to Subscription1. You plan to use an Azure Import/Export job. What can you use as the destination of the imported data?

Azure Import/Export does not import data directly into a virtual machine’s OS disk, data disk, or file system. The service writes the imported data into an Azure Storage account (Blob or Files) as the destination. To get data onto a VM, you would first import into Azure Storage and then copy/mount it from there (for example, mount an Azure file share or download from Blob).

Azure Cosmos DB is a database service and is not a supported target for Azure Import/Export jobs. Import/Export cannot write documents/records directly into Cosmos DB; it only uploads files into Azure Storage (Blob or Files). To load data into Cosmos DB, you would typically import into Blob/Files first and then use a data ingestion process (e.g., Data Factory, SDK, or bulk import tools).

Azure Import/Export supports importing data into Azure Storage, specifically into Blob storage containers or Azure Files shares. Azure File Storage (Azure Files) is a native storage destination within an Azure Storage account and is explicitly supported for import jobs. This matches the scenario of transferring 5 TB of data using shipped drives and having Microsoft upload the data into the chosen Azure Files share.

The Azure File Sync Storage Sync Service is a synchronization/orchestration service used to sync Windows Server file shares with Azure Files (cloud endpoint). It is not a storage destination where Import/Export can place data. Import/Export can target Azure Files directly, and then File Sync could optionally be configured afterward to synchronize that Azure file share with on-prem servers.

Question Analysis

Core concept: This question tests what Azure Import/Export supports as a target (destination) for imported data and which Azure storage services are compatible with the Import/Export service.

Why the answer is correct: Azure Import/Export is designed to move large amounts of data by shipping physical drives to an Azure datacenter, where Microsoft uploads the data into an Azure Storage account. For import jobs, the supported destinations are Azure Blob storage and Azure Files (file shares) within an Azure Storage account. Therefore, Azure File Storage is a valid destination for the imported data.

Key features / configurations:
- Azure Import/Export imports data into an Azure Storage account (not directly into compute or databases).
- Supported import targets: Azure Blob storage and Azure Files (Azure File Storage shares).
- Typical workflow: prepare disks with the WAImportExport tool, create an import job, ship disks, and the data is copied into the specified storage account/container or file share.

Common misconceptions:
- Assuming Import/Export can import directly into a VM's disk or file system; it cannot, because data lands in Azure Storage first.
- Assuming it can populate PaaS databases (e.g., Azure Cosmos DB) directly; Import/Export does not write to database services.
- Confusing Azure File Sync (a synchronization service) with Azure Files (the storage destination). File Sync uses Azure Files as cloud endpoints but is not itself an import destination.

Exam tips:
- Remember: Import/Export targets Azure Storage only (Blob or Files).
- If the option is a compute resource (VM) or a database (Cosmos DB), it's not a direct Import/Export destination.
- Azure File Sync is for syncing on-premises servers with Azure Files; it's not where Import/Export writes data.

Question 9

You have an Azure Resource Manager template named Template1 that is used to deploy an Azure virtual machine. Template1 contains the following text:

"location": {
    "type": "String",
    "defaultValue": "eastus",
    "allowedValues": [
        "canadacentral",
        "eastus",
        "westeurope",
        "westus" ]
}

The variables section in Template1 contains the following text:

"location": "westeurope"

The resources section in Template1 contains the following text:

"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2018-10-01",
"name": "[variables('vmName')]",
"location": "westeurope",

You need to deploy the virtual machine to the West US location by using Template1. What should you do?

Correct. The VM resource’s "location" is explicitly hard-coded to "westeurope" in the resources section. ARM will deploy the VM to whatever is specified on that property. Changing it to "westus" (or ideally to an expression referencing a parameter) is the only way, among the options, to ensure the VM is deployed to West US.

Incorrect. Selecting West US during deployment only supplies values to parameters (if the template uses them) and may set the resource group location, but it does not override a resource property that is hard-coded. Since the VM resource has "location": "westeurope", it will still deploy to West Europe regardless of what you select in the portal.

Incorrect. Although the variables section defines "location": "westeurope", the VM resource does not reference [variables('location')]; it uses a literal string "westeurope". Therefore, changing the variable to "westus" has no effect on the VM’s deployment region unless the resource location property is changed to use that variable.

Question Analysis

Core concept: This question tests how ARM templates determine a resource's deployment region. In ARM, the effective location for a resource is taken from the resource's "location" property (or from an expression used there), not from a parameter or variable unless the resource explicitly references it.

Why the answer is correct: In Template1, the virtual machine resource explicitly sets:

"location": "westeurope"

Because this is a hard-coded literal value, it overrides any parameter named location and any variable named location. To deploy the VM to West US, you must change the resource's location value to "westus" (or better, change it to an expression like [parameters('location')] and then pass westus at deployment time). Given the options, only modifying the resources section ensures the VM is created in West US.

Key features and best practices:
- Parameters define inputs that can be supplied at deployment time; allowedValues restrict valid inputs. However, parameters only matter if they are referenced.
- Variables are template-time conveniences; they also only matter if referenced.
- Best practice for reusable templates is to set the resource location to "[parameters('location')]" and optionally default it to resourceGroup().location or a chosen default. This aligns with Azure Well-Architected Framework operational excellence (repeatable deployments) and reliability (consistent configuration across environments).

Common misconceptions:
- Selecting a region during deployment (portal/CLI) only affects parameter values you provide and the resource group location; it does not override a hard-coded resource "location" property.
- Changing the variable "location" won't help because the VM resource is not using [variables('location')].

Exam tips: Always trace where a parameter/variable is actually used. If a resource property is a literal (e.g., "westeurope"), deployment-time choices won't override it. Look for expressions like [parameters('location')] or [variables('location')] to know what can be controlled at deployment time.
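A sketch of the recommended fix, wiring the template's existing location parameter into the resource so the region becomes a deployment-time choice (only the location line changes from Template1):

{
    "type": "Microsoft.Compute/virtualMachines",
    "apiVersion": "2018-10-01",
    "name": "[variables('vmName')]",
    "location": "[parameters('location')]"
}

Deploying Template1 and passing westus for the location parameter (which the allowedValues list permits) would then place VM1 in West US.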

Question 10

HOTSPOT - You have a hybrid deployment of Azure Active Directory (Azure AD) that contains the users shown in the following table.

Name | Type | Source
User1 | Member | Azure AD
User2 | Member | Windows Server Active Directory
User3 | Guest | Microsoft account

You need to modify the JobTitle and UsageLocation attributes for the users. For which users can you modify the attributes from Azure AD? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

JobTitle: ______

JobTitle is a user profile attribute that is typically mastered by the identity source.

- User1 (Member, Source: Azure AD): Cloud-only user, so Azure AD is the source of authority. You can edit JobTitle directly in Azure AD.
- User2 (Member, Source: Windows Server Active Directory): Synced via Azure AD Connect. For synced users, profile attributes such as JobTitle are mastered on-premises and are read-only in Azure AD. To change JobTitle, you must update it in on-premises AD and let it sync.
- User3 (Guest, Source: Microsoft account): Guest users are external identities. Many profile attributes (including JobTitle) are not reliably editable from your tenant because the identity is mastered externally; Azure AD does not treat them like fully managed member accounts for profile fields.

Therefore, only User1 can have JobTitle modified from Azure AD.

Part 2:

UsageLocation: ______

UsageLocation is a tenant-specific attribute used for licensing and service availability (for example, some Microsoft 365/Azure AD licensing scenarios require UsageLocation to be set). Microsoft allows administrators to set UsageLocation in Azure AD even when the user is synced from on-premises.

- User1 (cloud-only): Editable in Azure AD.
- User2 (synced from Windows Server AD): Although many profile attributes are read-only, UsageLocation is commonly editable in Azure AD because it's needed for cloud licensing and isn't always sourced from on-premises AD.
- User3 (guest): Guests typically cannot be assigned the same licenses as members in the same way, and UsageLocation is generally not a meaningful/manageable attribute for guest accounts in the tenant.

Thus, you can modify UsageLocation from Azure AD for User1 and User2 only.
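For example, for a cloud-mastered user such as User1, both attributes can be set with a single Microsoft Graph request, PATCH https://graph.microsoft.com/v1.0/users/<user-id>, with a body like the sketch below (the jobTitle value is illustrative; usageLocation takes a two-letter country code). For a synced user such as User2, the jobTitle update would typically be rejected as an on-premises-mastered property, while usageLocation would still apply:

{
    "jobTitle": "Developer",
    "usageLocation": "US"
}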

Other Practice Tests

Practice Test #1

50 Questions · 100 min · Passing score 700/1000

Practice Test #2

50 Questions · 100 min · Passing score 700/1000

Practice Test #4

50 Questions · 100 min · Passing score 700/1000

Practice Test #5

50 Questions · 100 min · Passing score 700/1000

Practice Test #6

50 Questions · 100 min · Passing score 700/1000

Practice Test #7

50 Questions · 100 min · Passing score 700/1000

Practice Test #8

50 Questions · 100 min · Passing score 700/1000

Practice Test #9

50 Questions · 100 min · Passing score 700/1000