Microsoft AZ-204

Practice Test #2

Simulate the real exam experience with 50 questions and a 100-minute time limit. Practice with AI-verified answers and detailed explanations.

50 Questions · 100 Minutes · 700/1000 Passing Score

Powered by AI

Answers and explanations triple-verified by AI

Each answer is cross-checked by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and an in-depth analysis of every question.

GPT Pro
Claude Opus
Gemini Pro
Per-option explanations
In-depth question analysis
Accuracy through 3-model consensus

Practice Questions

Question 1

You are developing an application that uses Azure Blob storage. The application must read the transaction logs of all the changes that occur to the blobs and the blob metadata in the storage account for auditing purposes. The changes must be in the order in which they occurred, include only create, update, delete, and copy operations and be retained for compliance reasons. You need to process the transaction logs asynchronously. What should you do?

Event Grid + Azure Functions is good for asynchronous processing and near-real-time reactions to blob events. However, it is not an authoritative transaction log for compliance: delivery is at-least-once, events can be duplicated, and ordering is not guaranteed across all events. Retention is also not inherent; you would need to persist events yourself, and you could still miss events if misconfigured.

Enabling Blob Storage Change Feed provides an immutable, append-only log of blob and blob metadata changes, including create, update, delete, and copy operations. It is designed for auditing and compliance scenarios and supports asynchronous processing by reading the feed files. It also preserves the sequence of changes as recorded in the feed, making it the best match for ordered transaction log requirements.

Storage Analytics logging (classic) captures request-level logs but is a legacy approach and not optimized for the requirement of an ordered, concise change log specifically for blob and metadata changes. It can be noisy (many operation types), requires parsing, and is not the recommended modern solution for change tracking/auditing compared to Blob Change Feed and Azure Monitor diagnostic settings.

The Azure Monitor HTTP Data Collector API is used to send custom logs into Log Analytics. It does not provide native access to storage transaction history. You would still need a source of truth for blob operations, and scanning request bodies is not a supported or reliable way to reconstruct ordered create/update/delete/copy changes for blobs and metadata.

Question Analysis

Core concept: This question tests Azure Blob Storage Change Feed, which provides an immutable, ordered log of changes to blobs and blob metadata. It is designed for auditing, compliance, and downstream processing scenarios where you need a durable record of create/update/delete/copy operations.

Why the answer is correct: The requirement is to read transaction logs of all changes to blobs and blob metadata, in the order they occurred, limited to create, update, delete, and copy, retained for compliance, and processed asynchronously. Blob Change Feed is purpose-built for this: it records changes as append-only log files stored in the storage account, preserves ordering within the feed, and includes exactly the relevant change types (create, update, delete, and copy). Because the feed is stored in Blob Storage, you can process it asynchronously using batch jobs, Functions, Databricks/Synapse, or custom workers, and you can retain it according to compliance needs using storage lifecycle management and immutability policies.

Key features / configuration notes:
- Enable Change Feed at the storage account level.
- Consume the feed via SDKs/APIs that read change feed segments and events.
- Retention/compliance: use lifecycle management to retain for the required duration, and consider immutable blob policies (WORM) if regulatory requirements demand tamper resistance.
- Aligns with the Azure Well-Architected Framework (Reliability and Security): durable, replayable log; decoupled asynchronous processing.

Common misconceptions: Event Grid is excellent for near-real-time notifications, but it is not a compliance-grade, complete, ordered transaction log. Events are delivered at-least-once, may arrive out of order, and are not intended as an authoritative audit trail. Storage Analytics logs are legacy and not as targeted for ordered change tracking of blob metadata changes. The Azure Monitor HTTP Data Collector API is for custom log ingestion, not for extracting authoritative storage transaction history.

Exam tips: If you see "ordered log of blob changes," "auditing/compliance," and "create/update/delete/copy," think "Blob Change Feed." If you see "react to events" or "trigger serverless processing," think "Event Grid," but not for strict ordered audit logs.
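The downstream filtering step can be sketched in a few lines. The event records below are hypothetical stand-ins whose shape and type names only loosely mirror real change feed events; real code would read feed segments with the `azure-storage-blob-changefeed` SDK rather than hold events in a list.

```python
# Sketch: filter a change-feed-like stream to the four tracked change types,
# preserving the order in which the feed recorded them.
# Event-type names here are illustrative, not the service's exact schema.
TRACKED = {"BlobCreated", "BlobPropertiesUpdated", "BlobDeleted", "BlobCopied"}

def audit_trail(events):
    """Keep only tracked change types, preserving feed order."""
    return [(e["eventTime"], e["eventType"], e["subject"])
            for e in events if e["eventType"] in TRACKED]

feed = [  # hypothetical feed segment, already in recorded order
    {"eventTime": "2024-01-01T10:00:00Z", "eventType": "BlobCreated",
     "subject": "/container/forms/a.pdf"},
    {"eventTime": "2024-01-01T10:05:00Z", "eventType": "BlobTierChanged",
     "subject": "/container/forms/a.pdf"},
    {"eventTime": "2024-01-01T10:07:00Z", "eventType": "BlobDeleted",
     "subject": "/container/forms/a.pdf"},
]

trail = audit_trail(feed)
print(trail)  # the tier-change event is filtered out; order is preserved
```

Because the feed itself is durable, a worker like this can run on any schedule (batch, Functions timer, Spark job) and replay from any point, which is what makes the asynchronous-processing requirement easy to satisfy.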

Question 2

HOTSPOT - A company is developing a Java web app. The web app code is hosted in a GitHub repository located at https://github.com/Contoso/webapp. The web app must be evaluated before it is moved to production. You must deploy the initial code release to a deployment slot named staging. You need to create the web app and deploy the code. How should you complete the commands? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

az ______ create --location centralus --name $resourcegroupname

Correct: A (group). The command `az group create --location centralus --name $resourcegroupname` creates an Azure Resource Group, which is the logical container for the App Service plan, web app, and deployment slot. Resource groups are required to organize and manage lifecycle, RBAC, and billing boundaries. Why others are wrong: - B (webapp) creates an App Service Web App, not a resource group. - C (appservice plan) creates the hosting plan, but it must be placed in an existing resource group. - D (webapp deployment slot) creates a slot under an existing web app. - E (webapp deployment source) configures source control deployment for an existing web app/slot. In Azure CLI, `az group` is always used for resource group operations, and `--location` is required because the resource group has a region metadata value.

Part 2:

az ______ create --name $webappname --resource-group $resourcegroupname --sku S3

Correct: C (`appservice plan`). The command `az appservice plan create --name $webappname --resource-group $resourcegroupname --sku S3` creates the App Service plan that will host the web app. An App Service plan must exist before the web app can be created, and a Standard-tier SKU supports deployment slots. The other options are incorrect because they create different resource types. `group` creates a resource group, `webapp` creates the app itself, `webapp deployment slot` creates a slot only after the app exists, and `webapp deployment source` configures source control deployment rather than hosting capacity.

Part 3:

az ______ create --name $webappname --resource-group $resourcegroupname --plan $webappname

Correct: B (webapp). The command `az webapp create --name $webappname --resource-group $resourcegroupname --plan $webappname` creates the App Service web app and associates it with the App Service plan created earlier. The `--plan` parameter binds the web app to the compute resources and pricing tier. Why others are wrong: - A (group) is unrelated; resource group already exists. - C (appservice plan) would create/modify a plan, but here we are creating the web app that runs in the plan. - D (webapp deployment slot) requires an existing web app; it does not create the main app. - E (webapp deployment source) configures deployment, not the web app itself. Real-world note: For Java, you often also set runtime stack (e.g., `--runtime "JAVA|11-java11"`) or use `az webapp config set`, but this question focuses on the minimal commands for app + slot + GitHub deployment.

Part 4:

az ______ create --name $webappname --resource-group $resourcegroupname --slot staging

Correct: D (`webapp deployment slot`). The command `az webapp deployment slot create --name $webappname --resource-group $resourcegroupname --slot staging` creates a deployment slot named `staging` for the existing web app. This is required because the initial code release must be deployed to the staging slot rather than directly to production. The other options do not create a slot. `group` creates a resource group, `appservice plan` creates the hosting plan, `webapp` creates the main app, and `webapp deployment source` only configures repository-based deployment after the slot already exists.

Part 5:

az ______ config --name $webappname --resource-group $resourcegroupname --slot staging --repo-url $gitrepo --branch master --manual-integration

Correct: E (webapp deployment source). The command `az webapp deployment source config --name $webappname --resource-group $resourcegroupname --slot staging --repo-url $gitrepo --branch master --manual-integration` configures GitHub-based deployment for the staging slot specifically. The presence of `deployment source config`, `--repo-url`, and `--branch` indicates source control integration. Why others are wrong: - A (group), C (appservice plan), and D (webapp deployment slot) do not manage repository integration. - B (webapp) manages the app resource but not the deployment source configuration subcommand. Key exam detail: `--slot staging` ensures code is deployed to the staging slot rather than production. `--manual-integration` means Azure won’t automatically pull on every commit via webhook; instead, you manually sync (useful for controlled evaluation). In production CI/CD, you’d more commonly use GitHub Actions or Azure DevOps pipelines, but this CLI approach is valid and frequently tested.
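Assembled in order, Parts 1 through 5 form a complete provisioning-and-deployment script. The commands are exactly those selected above; the variable values are hypothetical placeholders, since the question defines only the variable names (this is a sketch for reading, not run here, and it requires an Azure subscription):

```shell
#!/bin/bash
# Hypothetical values; the question only defines the variable names.
resourcegroupname="ContosoRG"
webappname="contoso-webapp-demo"
gitrepo="https://github.com/Contoso/webapp"

az group create --location centralus --name $resourcegroupname
az appservice plan create --name $webappname --resource-group $resourcegroupname --sku S3
az webapp create --name $webappname --resource-group $resourcegroupname --plan $webappname
az webapp deployment slot create --name $webappname --resource-group $resourcegroupname --slot staging
az webapp deployment source config --name $webappname --resource-group $resourcegroupname \
    --slot staging --repo-url $gitrepo --branch master --manual-integration
```

Note the dependency order: the resource group must exist before the plan, the plan before the app, and the app before its slot; the deployment-source step targets the slot last.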

Question 3

Your company is developing an Azure API. You need to implement authentication for the Azure API. You have the following requirements:

✑ All API calls must be secure.
✑ Callers to the API must not send credentials to the API.

Which authentication mechanism should you use?

Part 1:

Select the correct answer(s). (The original answer choices appear in a question image that is not reproduced here.)

The correct mechanism follows directly from the requirements. To keep API calls secure while ensuring callers do not send credentials to the API, use OAuth 2.0/OpenID Connect with Microsoft Entra ID (Azure AD) issuing access tokens (JWTs). The caller authenticates to Entra ID, obtains an access token, and then calls the API with the token (Bearer). The API validates the token and authorizes based on scopes/roles. Why the alternatives are wrong: API keys, shared access signatures, and basic authentication all require the caller to send a secret/credential to the API, violating the requirement. Mutual TLS/client certificates also involve presenting a credential to the API endpoint. Entra ID token-based auth avoids credential handling in the API, supports short-lived tokens, and enables centralized policy controls (conditional access, MFA, app roles/scopes), aligning with exam best practices.
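The flow can be sketched by showing what each party actually receives: the credential exchange happens entirely against Entra ID's token endpoint, and the API only ever sees the resulting bearer token. All IDs and URIs below are hypothetical placeholders; a real client would use a library such as MSAL rather than build the request by hand.

```python
from urllib.parse import urlencode

# Hypothetical tenant ID -- a placeholder, not a real value.
tenant_id = "00000000-0000-0000-0000-000000000000"
token_endpoint = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"

# Client-credentials request body sent to Entra ID (NOT to the protected API).
# The secret never travels to the API itself.
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": "api-caller-app-id",              # hypothetical
    "client_secret": "<secret-kept-off-the-api-path>",
    "scope": "api://contoso-api/.default",         # hypothetical app ID URI
})

# The protected API only ever receives the issued token:
auth_header = {"Authorization": "Bearer <access-token-from-entra-id>"}
print(token_endpoint)
print(body.split("&")[0])
```

The point of the sketch: the only thing crossing the wire to the API is the short-lived token in the `Authorization` header, which the API validates cryptographically without ever handling the caller's credentials.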

Question 4

HOTSPOT - You are building a website to access project data related to teams within your organization. The website does not allow anonymous access. Authentication is performed using an Azure Active Directory (Azure AD) app named internal. The website has the following authentication requirements: ✑ Azure AD users must be able to login to the website. ✑ Personalization of the website must be based on membership in Active Directory groups. You need to configure the application's manifest to meet the authentication requirements. How should you configure the manifest? To answer, select the appropriate configuration in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

"optionalClaims" ______

optionalClaims is not used to turn on Azure AD group membership claims. It is meant for requesting extra token claims such as specific user attributes in ID, access, or SAML tokens. Since the requirement is specifically to personalize the site based on AD groups, the relevant manifest setting is groupMembershipClaims. Therefore, optionalClaims should not be set to "All" for this requirement.

Part 2:

"groupMembershipClaims" ______

groupMembershipClaims controls whether and which group memberships are emitted in the token, and it uses string values such as "None", "SecurityGroup", "DirectoryRole", or "All". For a website that personalizes based on Active Directory groups, setting this property to "All" ensures the necessary group claims are included. A value of true is not the correct manifest format for this property. optionalClaims is unrelated to enabling group claims, so it is not the correct mechanism for this requirement.
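In the application manifest, the two properties discussed above would appear as a fragment like the following (all other manifest properties omitted; `null` is the default for `optionalClaims`, and `"All"` emits security group, distribution list, and directory role memberships):

```json
{
  "groupMembershipClaims": "All",
  "optionalClaims": null
}
```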

Question 5

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You develop an HTTP triggered Azure Function app to process Azure Storage blob data. The app is triggered using an output binding on the blob. The app continues to time out after four minutes. The app must process the blob data. You need to ensure the app does not time out and processes the blob data. Solution: Pass the HTTP trigger payload into an Azure Service Bus queue to be processed by a queue trigger function and return an immediate HTTP success response. Does the solution meet the goal?

Yes. Offloading the payload to an Azure Service Bus queue and returning an immediate HTTP success response is a standard way to keep an HTTP-triggered function within its time limits. The HTTP function does only lightweight work (enqueue and acknowledge), while a separate queue-triggered function processes the blob data in the background, where no HTTP response deadline applies. Service Bus adds durable delivery, retries, and dead-lettering, so the blob data is still processed even if the worker transiently fails.

No is incorrect because the proposed solution does meet the goal of preventing the HTTP-triggered function from timing out. By offloading the work to Service Bus, the function can acknowledge the request quickly and let a separate queue-triggered function process the blob data in the background. This removes the long-running processing from the HTTP execution path, which is exactly what is needed here. Unless the question explicitly required synchronous completion within the HTTP response, this asynchronous design is appropriate.

Question Analysis

Core concept: This scenario tests the asynchronous request-reply pattern for Azure Functions: an HTTP-triggered function that performs long-running work will time out, so the work is offloaded to a message queue and processed by a separate queue-triggered function.

Why the answer is correct: HTTP-triggered functions are constrained both by the Functions host timeout (five minutes by default on the Consumption plan, configurable via functionTimeout in host.json up to ten minutes) and by the platform's HTTP response limit of roughly 230 seconds, after which the client receives a timeout regardless of host settings. Passing the payload to an Azure Service Bus queue and returning an immediate HTTP success response removes the long-running blob processing from the HTTP path entirely. The queue-triggered function then processes the blob data in the background, where the HTTP response limit does not apply, and Service Bus provides durable, at-least-once delivery with retry and dead-letter support if processing fails.

Key features / best practices:
- Return an immediate acknowledgement (for example, 202 Accepted) from the HTTP trigger, and expose a status endpoint if the caller needs to track progress.
- Make the queue-triggered processor idempotent, since Service Bus delivery is at-least-once.
- Other variants of this question series offer a Storage queue or Durable Functions (async HTTP API pattern), which can also satisfy the requirement.

Common misconceptions: Raising functionTimeout alone does not meet the goal, because the HTTP response limit still applies; only removing the work from the request path does.

Exam tips: When a question pairs "HTTP trigger" with "times out," look for answers that decouple the work via a queue (Service Bus or Storage) or Durable Functions and return an immediate response.
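The offload pattern can be simulated locally: below, an in-memory queue and a worker thread stand in for the Service Bus queue and the queue-triggered function. Both stand-ins are assumptions for illustration; real code would use the Service Bus output binding and a queue trigger.

```python
import queue
import threading

work_queue = queue.Queue()   # stands in for the Service Bus queue
processed = []

def http_trigger(payload):
    """HTTP-triggered function: enqueue the payload and return success immediately."""
    work_queue.put(payload)
    return 202               # immediate acknowledgement; no long-running work here

def queue_trigger_worker():
    """Queue-triggered function: does the long-running blob processing."""
    while True:
        item = work_queue.get()
        if item is None:     # sentinel to stop the demo worker
            break
        processed.append(f"processed:{item}")
        work_queue.task_done()

worker = threading.Thread(target=queue_trigger_worker)
worker.start()
status = http_trigger("blob-1")   # returns at once, long before processing is done
work_queue.put(None)
worker.join()
print(status, processed)
```

The HTTP handler returns in microseconds regardless of how long the worker takes, which is exactly why the four-minute timeout disappears from the request path.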


Question 6

HOTSPOT - You are creating a CLI script that creates an Azure web app and related services in Azure App Service. The web app uses the following variables:

Variable name | Value
$gitrepo | https://github.com/Contos/webapp
$webappname | Webapp1103

You need to automatically deploy code from GitHub to the newly created web app. How should you complete the script? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

az group create --location westeurope --name myResourceGroup

______ --name $webappname --resource-group myResourceGroup --sku FREE

The blank ends with "--name $webappname --resource-group myResourceGroup --sku FREE", which matches the syntax for creating an App Service plan: "az appservice plan create --name <planName> --resource-group <rg> --sku <tier>". The plan is required before creating the web app because the web app must run in a plan (unless using certain specialized offerings). Why others are wrong: - A (az webapp) would be used to create the web app, but the presence of "--sku FREE" indicates a plan SKU, not a web app property. - C (az webapp deployment) is for deployment-related operations, not plan creation. - D (az group delete) deletes a resource group and is unrelated to provisioning.

Part 2:

______ --repo-url $gitrepo --branch master --manual-integration

This fragment supplies the repository settings: "--repo-url", "--branch", and "--manual-integration" are arguments of "az webapp deployment source config", the command assembled in Parts 3 and 4, which configures source control deployment for the web app. A "--plan" parameter is only used with "az webapp create" to bind the app to an App Service plan and is not valid in the deployment source configuration step, and a local "git clone $gitrepo" plays no role here, since App Service pulls from the repository itself.

Part 3:

______ source config --name $webappname --resource-group myResourceGroup

Azure CLI uses the command "az webapp deployment source config" to configure GitHub or other source control deployment for an App Service web app. Because the prompt already includes "source config", the missing prefix must be "az webapp deployment". Option A, "az webapp", is incomplete and would not form a valid command in this syntax. The remaining options are for plan creation or resource group deletion and do not apply.

Part 4:

______ --repo-url $gitrepo --branch master --manual-integration

After "az webapp deployment source config", you provide the repository settings. The correct continuation is "--repo-url $gitrepo --branch master --manual-integration". This configures the web app to pull from the specified GitHub repository and branch. "--manual-integration" indicates App Service won’t automatically create/manage the GitHub webhook integration (often used when you’re not authenticating to GitHub in the CLI session), but it still sets the deployment source. Why the other option is wrong: - B (--plan $webappname) is only relevant when creating the web app (binding it to an App Service plan). It is not a valid parameter for "deployment source config".

Question 7

HOTSPOT - You are developing a solution that uses the Azure Storage Client library for .NET. You have the following code: (Line numbers are included for reference only.)

CloudBlockBlob src = null;
try
{
    src = container.ListBlobs().OfType<CloudBlockBlob>().FirstOrDefault();
    var id = await src.AcquireLeaseAsync(null);
    var dst = container.GetBlockBlobReference(src.Name);
    string cpid = await dst.StartCopyAsync(src);
    await dst.FetchAttributesAsync();
    return id;
}
catch (Exception e)
{
    throw;
}
finally
{
    if (src != null)
        await src.FetchAttributesAsync();
    if (src.Properties.LeaseState != LeaseState.Available)
        await src.BreakLeaseAsync(new TimeSpan(0));
}

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

The code creates an infinite lease

Yes. The code calls: - line 06: var id = await src.AcquireLeaseAsync(null); In the Azure Storage .NET client library, AcquireLeaseAsync takes a nullable TimeSpan? for the lease duration. For blobs, a null duration requests an infinite lease (as opposed to a fixed-duration lease of 15–60 seconds). An infinite lease remains in effect until it is explicitly released (ReleaseLeaseAsync with the lease ID) or broken (BreakLeaseAsync). Why “No” is wrong: a finite lease would require a specific duration value within the allowed range. Since the code passes null, it is explicitly requesting the infinite lease behavior.

Part 2:

The code at line 06 always creates a new blob

No. The code at line 06 does not create a new blob. Line 06 is AcquireLeaseAsync on the existing source blob (src). That operation only changes the lease state of the existing blob; it does not create any blob. Even if the intent of the question was “line 07” (GetBlockBlobReference), that also does not create a blob in the service—it only creates a local object that points to a blob name. The destination blob may be created later by StartCopyAsync (line 08) if it doesn’t already exist. Therefore, the statement “always creates a new blob” is incorrect: leasing never creates a blob, and even the destination reference creation is not a server-side create operation.

Part 3:

The finally block releases the lease

No. The finally block does not “release” the lease; it attempts to break it. Specifically: - It checks src.Properties.LeaseState != LeaseState.Available - Then calls BreakLeaseAsync(TimeSpan.Zero) Release vs Break: - ReleaseLeaseAsync is the normal way to release a lease and requires the lease ID returned by AcquireLeaseAsync. - BreakLeaseAsync forcibly ends the lease without needing the lease ID, transitioning the lease through Broken and eventually Available. Also, there’s a logic issue: src.Properties.LeaseState is only populated after FetchAttributesAsync, but the code checks LeaseState outside the src != null block. If src were null, this would throw. Assuming src is not null, the operation is still a break, not a release, so the statement is false.

Question 8
(Select 2)

You are preparing to deploy a website to an Azure Web App from a GitHub repository. The website includes static content generated by a script. You plan to use the Azure Web App continuous deployment feature. You need to run the static generation script before the website starts serving traffic. What are two possible ways to achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

Incorrect. WEBSITE_RUN_FROM_PACKAGE is an App Service application setting that tells the app to run from a mounted ZIP/package (or a URL). It is not configured in host.json (host.json is for Azure Functions), and it does not execute a static generation tool. This option mixes unrelated concepts (Functions configuration + App Service setting) and won’t run any script.

Correct. A PreBuild (or similar) MSBuild target in the Web App’s .csproj can execute a script/command before the build/publish output is produced. With continuous deployment, the build happens during deployment, so the generated static content becomes part of the deployed artifact. This ensures content exists before the app starts serving traffic.

Incorrect. App Service does not use a special /run folder with run.cmd for deployment customization in the way described. While Windows App Service can run startup commands in some scenarios, that is runtime startup behavior and is not the standard mechanism for customizing GitHub continuous deployment. The recognized approach for Kudu customization is .deployment + deploy.cmd.

Correct. A .deployment file in the repository root can point Kudu to a custom deployment script (commonly deploy.cmd). In that script you can run the static generation step first and then perform the normal deployment steps. This integrates directly with App Service continuous deployment and ensures the generated content is ready before traffic is served.

Question Analysis

Core concept: This question tests Azure App Service (Web Apps) continuous deployment from GitHub and how to run build-time steps (static site generation) before the app serves traffic. In App Service, the typical build engine is Kudu/Oryx, and you can customize the deployment pipeline using MSBuild targets (for .NET apps) or Kudu custom deployment scripts via a .deployment file.

Why the answers are correct: B is correct because adding a PreBuild target in the project's .csproj integrates the static generation script into the build process. With continuous deployment, App Service builds the app during deployment; PreBuild runs before the compilation/publish output is produced. That ensures the generated static assets are included in the deployed artifact, so when the site starts, the content is already present. D is correct because a .deployment file at the repo root can instruct Kudu to run a custom deployment script (e.g., deploy.cmd). In that script you can run the static generation step first, then proceed with the normal deployment steps. This is a standard App Service technique to customize CI/CD behavior when using built-in continuous deployment.

Key features / best practices:
- Kudu custom deployment: .deployment + deploy.cmd lets you control pre/post steps, environment variables, and tooling installation.
- MSBuild targets: PreBuild/BeforeBuild/AfterBuild targets are reliable for .NET-based Web Apps and keep build logic versioned with the code.
- From an Azure Well-Architected perspective, doing generation at build/deploy time improves reliability and performance (no runtime generation delays) and supports repeatable deployments.

Common misconceptions:
- Confusing runtime startup scripts with deployment-time build steps. The requirement is "before serving traffic," which is best met by generating content during deployment/build, not at first request.
- Misusing app settings like WEBSITE_RUN_FROM_PACKAGE; it controls package mounting behavior, not running tools.

Exam tips: For AZ-204, remember: App Service built-in CI/CD uses Kudu. To customize deployment, use .deployment/deploy.cmd. For .NET builds, MSBuild targets in .csproj are a clean way to run pre-build tasks so assets are included in the published output.
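As a minimal sketch of the Kudu customization route, the repository root would contain a .deployment file pointing at the custom script:

```
[config]
command = deploy.cmd
```

and the script itself would run the generation step before the normal deployment work (the generation script name below is hypothetical):

```
:: deploy.cmd -- run the static generation step first (generate-static.cmd
:: is a hypothetical script name), then continue with the usual Kudu
:: deployment steps for the app.
call generate-static.cmd
```

Because Kudu runs this during deployment, the generated content is in place before the app ever serves a request.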

Question 9

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You are developing a medical records document management website. The website is used to store scanned copies of patient intake forms. If the stored intake forms are downloaded from storage by a third party, the contents of the forms must not be compromised. You need to store the intake forms according to the requirements. Solution:

  1. Create an Azure Key Vault key named skey.
  2. Encrypt the intake forms using the public key portion of skey.
  3. Store the encrypted data in Azure Blob storage. Does the solution meet the goal?

Yes is incorrect because the proposed design misuses the Key Vault key for direct file encryption. Public-key encryption has payload size limitations and is computationally inefficient for large blobs such as scanned medical forms. Although the intent of protecting downloaded files is valid, the specific implementation described is not the correct Azure approach for storing encrypted documents at scale.

No. The solution does not appropriately meet the requirement because it proposes encrypting the intake forms directly with the public key portion of an Azure Key Vault key. Azure Key Vault asymmetric keys such as RSA keys are intended for encrypting small amounts of data or wrapping symmetric keys, not for bulk encryption of full scanned documents. For document storage, the proper design is to use symmetric encryption for the file contents and protect that symmetric key with Key Vault, which is the standard envelope encryption pattern.

Question Analysis

Core concept: This question is about protecting sensitive documents at rest so that if blobs are downloaded by an unauthorized third party, the document contents remain unreadable. In Azure, this is typically achieved with encryption, but Azure Key Vault asymmetric keys are not intended for directly encrypting large files such as scanned intake forms.

Why the answer is correct: The proposed solution encrypts the intake forms using the public key portion of a Key Vault key and then stores the encrypted data in Blob Storage, but asymmetric encryption is only suitable for small payloads and key-wrapping scenarios, not full document encryption.

Key features: The correct pattern is envelope encryption, where a symmetric key encrypts the document and an asymmetric key in Key Vault protects the symmetric key.

Common misconceptions: Many candidates assume any Key Vault key can be used to encrypt arbitrary files directly, but RSA key operations have size limits and are not designed for bulk data encryption.

Exam tips: When a question involves encrypting documents or blobs, prefer client-side encryption or envelope encryption rather than direct asymmetric encryption of the entire file.
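The envelope pattern can be illustrated structurally. The XOR "cipher" below is a toy stand-in and is NOT real encryption; in practice the document would be encrypted with AES (e.g., via the `cryptography` package or client-side encryption in the storage SDK) and the data-encryption key wrapped by Key Vault's wrap-key operation, so the private key material never leaves the vault.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a symmetric cipher -- NOT real encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# 1. Generate a random data-encryption key (DEK) per document.
dek = secrets.token_bytes(32)

# 2. Encrypt the (large) document with the symmetric DEK.
document = b"scanned patient intake form bytes..."
ciphertext = xor(document, dek)

# 3. Wrap only the small DEK with the key-encryption key (KEK).
#    Here the KEK is another toy local key; real code would call Key Vault's
#    wrap-key operation on an RSA key, which is fine at this small size.
kek = secrets.token_bytes(32)
wrapped_dek = xor(dek, kek)

# Store ciphertext + wrapped_dek in Blob Storage. To read: unwrap, then decrypt.
recovered = xor(ciphertext, xor(wrapped_dek, kek))
print(recovered == document)
```

The shape is the point: the asymmetric key only ever touches the 32-byte DEK, never the document, which is why direct public-key encryption of the whole form fails the size and efficiency constraints the explanation describes.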

Question 10

HOTSPOT - You need to configure API Management for authentication. Which policy values should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

Policy ______

Correct: D. Validate JWT. The validate-jwt policy is the dedicated APIM mechanism to authenticate/authorize callers using OAuth 2.0/OpenID Connect access tokens (JWTs). It validates the token signature (trust), token lifetime (exp/nbf), and intended recipient (aud), and can enforce issuer (iss) and required claims/scopes/roles. This is the expected answer when the prompt is about configuring APIM for authentication. Why others are wrong: A. Check HTTP header only verifies that a header exists or matches a value; it does not cryptographically validate a token or prove identity. B. Restrict caller IPs is a network control (useful for narrowing exposure) but does not authenticate a user/app and fails for roaming clients/NAT. C. Limit call rate by key is throttling/abuse protection; it can use subscription keys but that is not robust authentication and is often considered an additional control rather than primary auth.

Part 2:

Policy section ______

Correct: A. Inbound. Authentication must be enforced before the request is sent to the backend service. The inbound policy section is executed as the request enters APIM, making it the correct place to validate credentials/tokens and reject unauthorized requests with 401/403 immediately. This reduces backend load, improves security posture, and is consistent with best practices (fail fast at the gateway). Why not outbound: B. Outbound policies run after the backend has processed the request and APIM is returning a response to the client. At that point, the backend may already have executed business logic and accessed data, which defeats the purpose of authentication enforcement. Outbound is intended for response transformations, adding/removing response headers, response caching directives, and similar post-processing tasks.
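Putting both selections together, the APIM policy document would look roughly like this. The OpenID configuration URL and audience value are hypothetical placeholders; only the `validate-jwt` element and its placement in the `inbound` section are the point of the question.

```xml
<policies>
    <inbound>
        <base />
        <!-- Reject unauthenticated calls before the backend is ever reached -->
        <validate-jwt header-name="Authorization"
                      failed-validation-httpcode="401"
                      failed-validation-error-message="Unauthorized">
            <openid-config url="https://login.microsoftonline.com/{tenant-id}/v2.0/.well-known/openid-configuration" />
            <audiences>
                <audience>api://contoso-api</audience> <!-- hypothetical -->
            </audiences>
        </validate-jwt>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
```

Placing the policy in `inbound` means an invalid or missing token produces a 401 at the gateway, and the backend never spends cycles on the request.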
