Microsoft AZ-204

Practice Test #5

Simulate the real exam with 50 questions and a 100-minute time limit. Study with AI-verified answers and detailed explanations.

50 questions · 100 minutes · passing score: 700/1000

AI-Powered

Triple AI-Verified Answers & Explanations

Every answer is cross-validated by three leading AI models to ensure the highest accuracy. Detailed per-option explanations and in-depth question analysis are provided.

GPT Pro
Claude Opus
Gemini Pro
Per-option explanations
In-depth question analysis
Three-model consensus accuracy

Practice Questions

Question 1
(Choose two)

You are developing a solution that will use Azure messaging services. You need to ensure that the solution uses a publish-subscribe model and eliminates the need for constant polling. What are two possible ways to achieve the goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

Service Bus supports publish-subscribe via Topics and Subscriptions. A message sent to a Topic can be copied to multiple Subscriptions, enabling fan-out. It provides durable enterprise messaging features such as dead-letter queues, duplicate detection, sessions for ordered processing, and transactions. With Azure Functions/Logic Apps triggers, consumers can process messages without implementing constant polling logic.
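
As a rough illustration of that fan-out, the sketch below publishes once to a topic and receives from one of its subscriptions using the azure-servicebus Python SDK. The topic and subscription names are placeholders, and in practice the receive side would usually be a Service Bus-triggered Azure Function rather than a polling loop:

```python
# A minimal pub-sub sketch using the azure-servicebus SDK (pip install azure-servicebus).
# Topic/subscription names and the connection string are placeholders.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

conn_str = "<service-bus-connection-string>"

with ServiceBusClient.from_connection_string(conn_str) as client:
    # Publisher: send a single message to the topic; every subscription gets its own copy.
    with client.get_topic_sender(topic_name="orders") as sender:
        sender.send_messages(ServiceBusMessage("order created"))

    # One of potentially many independent subscribers.
    with client.get_subscription_receiver(topic_name="orders", subscription_name="billing") as receiver:
        for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            print(str(msg))
            receiver.complete_message(msg)
```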

Event Hubs is optimized for high-throughput event streaming (telemetry ingestion) and uses partitions and consumer groups. While multiple consumer groups can read the same stream, it is not the typical pub-sub broker pattern used for business messaging, and consumption is generally pull-based reading from partitions. It’s best for streaming pipelines (Stream Analytics, Spark) rather than push-based pub-sub notifications.

Event Grid is a fully managed event routing service designed for event-driven architectures. It uses a publish-subscribe model where publishers emit events and Event Grid pushes them to subscribers (webhooks, Azure Functions, Logic Apps, etc.) with retries and optional dead-lettering. This push delivery model eliminates the need for consumers to poll for changes and is ideal for reactive integrations.
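
For the publish side, a minimal sketch of sending a custom event to an Event Grid topic with the azure-eventgrid Python SDK is shown below; delivery to each subscriber (webhook, Function, Logic App) is then pushed by Event Grid itself. The endpoint, key, and event fields are assumptions:

```python
# A minimal sketch of publishing a custom event to an Event Grid topic
# (pip install azure-eventgrid); Event Grid pushes it to every subscriber.
from azure.core.credentials import AzureKeyCredential
from azure.eventgrid import EventGridPublisherClient, EventGridEvent

client = EventGridPublisherClient(
    "<topic-endpoint>",            # e.g. https://mytopic.<region>-1.eventgrid.azure.net/api/events
    AzureKeyCredential("<topic-access-key>"),
)
client.send(EventGridEvent(
    subject="devices/sensor-42",
    event_type="Contoso.Sensors.ReadingAvailable",
    data={"value": 21.5},
    data_version="1.0",
))
```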

Queue (such as Azure Storage Queues) implements point-to-point messaging with competing consumers: each message is typically processed by a single consumer. It does not provide a native publish-subscribe fan-out model. While queues reduce tight coupling and can reduce polling when used with triggers, they don’t meet the pub-sub requirement where multiple subscribers receive the same message.

Question Analysis

Core Concept: This question tests Azure messaging patterns, specifically the publish-subscribe (pub-sub) model and event-driven delivery that avoids constant polling. In Azure, pub-sub means publishers send messages/events to a broker, and multiple independent subscribers receive them via subscriptions/handlers.

Why the Answer is Correct: Azure Service Bus (A) supports pub-sub through Topics and Subscriptions. A publisher sends a message to a Topic, and each Subscription receives its own copy, enabling fan-out to multiple consumers. Consumers can receive messages using push-like mechanisms (e.g., Service Bus-triggered Azure Functions) rather than polling in application code. Azure Event Grid (C) is a native event routing service built for reactive architectures. It delivers events to subscribers (webhooks, Azure Functions, Logic Apps, Service Bus, etc.) with push delivery and retry, which eliminates the need for consumers to poll for changes.

Key Features / Best Practices:
- Service Bus Topics/Subscriptions: durable messaging, at-least-once delivery, dead-lettering, sessions (ordering), duplicate detection, filters/actions on subscriptions, and transactions. Use when you need enterprise messaging guarantees and decoupling between services.
- Event Grid: push-based event distribution, built-in integration with many Azure sources (Storage, Resource Groups, etc.), filtering, advanced routing, and managed retries with dead-lettering to Storage. Use for event notification and reactive workflows.
From an Azure Well-Architected Framework perspective, both improve Reliability and Performance Efficiency by decoupling producers/consumers and avoiding wasteful polling.

Common Misconceptions:
- Event Hubs (B) is often mistaken for pub-sub. It is primarily for high-throughput telemetry/stream ingestion with partitioned consumer groups; it's more "streaming" than classic pub-sub messaging and typically requires consumers to read from partitions (often perceived as polling/reading) rather than push delivery.
- Queue (D) (e.g., Storage Queue) is point-to-point (competing consumers) rather than pub-sub; one message is processed by one consumer, not fanned out to multiple subscribers.

Exam Tips:
- If the question says "publish-subscribe" and "multiple subscribers," think Service Bus Topics or Event Grid.
- If it emphasizes "event notifications" and "push to handlers," Event Grid is the best fit.
- If it emphasizes "enterprise messaging, workflows, ordering, dead-lettering, transactions," Service Bus is the best fit.
- If it emphasizes "telemetry, logs, millions of events/sec, streaming analytics," think Event Hubs.

Question 2

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You develop a software as a service (SaaS) offering to manage photographs. Users upload photos to a web service which then stores the photos in Azure Storage Blob storage. The storage account type is General-purpose V2. When photos are uploaded, they must be processed to produce and save a mobile-friendly version of the image. The process to produce a mobile-friendly version of the image must start in less than one minute. You need to design the process that starts the photo processing. Solution: Convert the Azure Storage account to a BlockBlobStorage storage account. Does the solution meet the goal?

Answering "Yes" is incorrect because converting the storage account type does not create an event-driven pipeline or guarantee that processing begins within one minute. While Premium BlockBlobStorage can improve blob read/write performance, it does not provide a mechanism to detect uploads and invoke processing logic. The missing component is an event/trigger service (such as Event Grid) and a compute target (such as Azure Functions) to start the processing. As a result, the goal of starting processing quickly is not satisfied by this change alone.

Changing from a GPv2 storage account to a BlockBlobStorage account does not implement any trigger to start image processing after an upload. The requirement is about initiating processing within one minute, which is achieved by wiring blob creation events to compute (for example, Event Grid triggering an Azure Function). BlockBlobStorage mainly changes the performance tier and characteristics for blob workloads, not the eventing or orchestration behavior. Therefore, the solution does not meet the goal because it does not address how processing is started.

Question Analysis

Core concept: This question tests how to trigger near-real-time processing when blobs are uploaded to Azure Storage, and whether changing the storage account type affects event-driven processing latency.

Why the answer is correct: Converting a General-purpose v2 (GPv2) storage account to a BlockBlobStorage (Premium) account does not, by itself, create or accelerate an event trigger to start image processing within one minute. The requirement is about initiating a processing workflow quickly after an upload, which is typically achieved using eventing (Azure Event Grid) or messaging (Storage queues/Service Bus) plus compute (Azure Functions/WebJobs). Storage account performance tier/type may improve throughput/latency for blob operations, but it does not provide an automatic "start processing" mechanism.

Key features / configurations:
- Azure Event Grid + BlobCreated events to trigger Azure Functions (near real-time, typically seconds)
- Azure Functions Blob trigger (polling-based; can have delays depending on plan/runtime and is not as deterministic as Event Grid)
- Storage queues or Service Bus to decouple upload from processing and ensure reliable processing
- GPv2 supports Event Grid integration; no need to change account type for eventing

Common misconceptions:
- Assuming a Premium/BlockBlobStorage account type automatically triggers workflows or reduces trigger latency.
- Confusing storage performance characteristics (IOPS/throughput) with event notification/trigger mechanisms.
- Believing that changing account type is required to use Event Grid; GPv2 already supports it.

Exam tips:
- Use Event Grid for fast, event-driven blob processing initiation (BlobCreated → Function/Logic App).
- Storage account type changes affect performance/cost, not workflow triggering.
- For "start within X time" requirements, prefer push-based events (Event Grid) over polling triggers when possible.
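
A minimal sketch of that wiring follows, assuming the Azure Functions Python v2 programming model and an Event Grid subscription on the storage account that routes Microsoft.Storage.BlobCreated events to the function. The function name and the processing step are illustrative:

```python
# Event Grid-triggered Azure Function (Python v2 programming model); an Event Grid
# subscription on the storage account routes Microsoft.Storage.BlobCreated events here.
import logging
import azure.functions as func

app = func.FunctionApp()

@app.event_grid_trigger(arg_name="event")
def on_photo_uploaded(event: func.EventGridEvent) -> None:
    blob_url = event.get_json().get("url")   # BlobCreated event data includes the blob URL
    logging.info("New photo uploaded: %s", blob_url)
    # ...download the blob, generate the mobile-friendly rendition, and save it...
```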

Question 3

You need to secure the Shipping Logic App. What should you use?

Azure App Service Environment (ASE) provides an isolated and dedicated environment for hosting Azure App Service resources (Web Apps, API Apps, and some Function App scenarios) inside a VNet. However, it is not the standard mechanism to secure or host Azure Logic Apps. Choosing ASE is a common mistake when candidates equate “secure hosting” with “ASE,” but Logic Apps use ISE for VNet injection.

Integration Service Environment (ISE) is the correct choice because it is the dedicated, single-tenant Logic Apps runtime deployed into your VNet. It enables private network access, tighter inbound/outbound control, and secure connectivity to VNet and on-prem resources. For exam wording like “secure the Logic App” or “run Logic Apps in a VNet,” ISE is the canonical solution.

A VNet service endpoint extends a VNet’s identity to supported Azure PaaS services (such as Azure Storage or Azure SQL) so traffic stays on the Azure backbone and access can be restricted to that VNet. It does not secure the Logic App endpoint itself or place the Logic Apps runtime into a VNet. It’s useful for securing dependencies, not the Logic App hosting plane.

Azure AD B2B integration is used to collaborate with external identities (guest users) and manage access to applications using Azure AD. It addresses authentication/authorization for users, not network isolation or securing the Logic App runtime. While identity controls are important, B2B does not provide the private VNet deployment and traffic control typically implied by “secure the Shipping Logic App.”

Question Analysis

Core concept: This question tests how to secure an Azure Logic App by isolating it from the public internet and enabling private network access. For Logic Apps (especially Logic Apps (Consumption)), the primary way to run workflows in a dedicated, network-isolated environment is the Integration Service Environment (ISE).

Why the answer is correct: An Integration Service Environment (ISE) is a dedicated, single-tenant deployment of the Logic Apps runtime that is injected into your Azure virtual network (VNet). This allows the Shipping Logic App to be accessed privately and to reach resources in the VNet (or connected networks) without traversing the public internet. In exam scenarios, "secure the Logic App" commonly implies network isolation, private endpoints/VNet integration, and controlling inbound/outbound traffic; ISE is the Logic Apps-specific solution designed for that.

Key features / best practices:
- Network isolation: ISE deploys into your VNet, enabling private IP addressing and tighter control of traffic paths.
- Enterprise connectivity: works well with VNet-connected resources (SQL, SAP, on-premises via VPN/ExpressRoute) and supports integration account artifacts.
- Governance and compliance: single-tenant isolation helps meet stricter compliance requirements and aligns with the Azure Well-Architected Framework security pillar (network segmentation, least exposure).
- Predictable performance: dedicated capacity (priced differently than Consumption) can be important for mission-critical shipping workflows.

Common misconceptions:
- App Service Environment (ASE) is for hosting App Service apps (web apps, APIs, functions in some cases), not the Logic Apps runtime. It won't "move" a Logic App into a private environment.
- VNet service endpoints secure access from a VNet to certain Azure PaaS services (e.g., Storage, SQL) but do not place Logic Apps into a VNet or secure the Logic App's inbound endpoint.
- Azure AD B2B is identity collaboration for external users; it doesn't provide network isolation for Logic Apps.

Exam tips:
- If the question is specifically about securing/isolating a Logic App with VNet-level control, think ISE (Logic Apps-specific).
- If the question is about securing access to a downstream service (Storage/SQL) from a VNet, think service endpoints or private endpoints.
- Always map the service to the correct isolation construct: ASE = App Service; ISE = Logic Apps.

Question 4

DRAG DROP - Contoso, Ltd. provides an API to customers by using Azure API Management (APIM). The API authorizes users with a JWT token. You must implement response caching for the APIM gateway. The caching mechanism must detect the user ID of the client that accesses data for a given location and cache the response for that user ID. You need to add the following policies to the policies file: ✑ a set-variable policy to store the detected user identity ✑ a cache-lookup-value policy ✑ a cache-store-value policy ✑ a find-and-replace policy to update the response body with the user profile information To which policy section should you add the policies? To answer, drag the appropriate sections to the correct policies. Each section may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place:

Part 1:

Policy: Set-variable, Policy section: ______

Set-variable should be placed in the Inbound section because you need to derive and persist request-scoped data (the detected user identity) before any caching decision is made. Inbound is where APIM has access to the incoming request headers (including Authorization: Bearer <JWT>) and can parse JWT claims (for example, sub, oid, upn) to compute a stable user identifier. Storing the user ID in a variable during inbound allows subsequent inbound policies (like cache-lookup-value) to build a cache key such as "{location}:{userId}". If you waited until outbound, you would already have called the backend, defeating the purpose of using the cache to avoid backend calls. Outbound is primarily for response transformations and post-processing, not for request-derived routing/caching decisions.

Part 2:

Policy: Cache-lookup-value, Policy section: ______

Cache-lookup-value belongs in the Inbound section because the goal is to check the cache before forwarding the request to the backend. When cache-lookup-value finds an entry, APIM can immediately return the cached payload and skip the backend call, which is the main performance benefit of gateway caching. Placing cache lookup in outbound would be too late: the backend would already have been invoked, so you would not reduce backend load or latency. Inbound lookup also ensures the cache key can be constructed from request context (location parameter/path + user ID variable extracted from JWT). This is also important for security: per-user caching must be enforced before any response is generated to prevent mixing responses between users.
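
The following is a plain-Python illustration of that idea, not APIM policy code (APIM expresses it with policy expressions): it decodes the JWT payload from the Authorization header and builds a per-user, per-location cache key. The claim name (sub) and the key format are assumptions:

```python
# Plain-Python illustration only: derive a stable user ID from the JWT payload and
# build a per-user, per-location cache key.
import base64
import json

def cache_key_from_request(authorization_header: str, location: str) -> str:
    token = authorization_header.removeprefix("Bearer ").strip()
    payload_b64 = token.split(".")[1]                      # JWT = header.payload.signature
    payload_b64 += "=" * (-len(payload_b64) % 4)           # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    user_id = claims["sub"]                                # assumed claim carrying the user identity
    return f"{location}:{user_id}"                         # e.g. "london:8f14e45f-..."
```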

Part 3:

Policy: Cache-store-value, Policy section: ______

Cache-store-value should be placed in the Outbound section because you typically want to cache the response after the backend has produced it (and after any outbound transformations that should be reflected in the cached content). Outbound is the correct stage to capture the final response body/status/headers that will be returned to the client. If you stored in inbound, you would not yet have the backend response to cache. Also, caching after outbound modifications ensures consistency: the cached representation matches what clients receive (for example, after injecting user profile information or normalizing fields). This approach supports the Performance Efficiency pillar by reducing repeated backend calls for the same user/location combination.

Part 4:

Policy: Find-and-replace, Policy section: ______

Find-and-replace is a response body transformation policy, so it should be applied in the Outbound section. The requirement explicitly says to update the response body with the user profile information, which is only available/meaningful once a response exists. Inbound policies operate on the request before it is sent to the backend; they are not the right place to modify the response body. In outbound, APIM can read and rewrite the response payload (for example, replacing a placeholder token with user-specific profile data). Doing this in outbound also ensures that what you cache (via cache-store-value) can include the transformed content, so subsequent requests for the same user/location can be served entirely from cache without re-running backend logic.

Question 5

HOTSPOT - You are working for Contoso, Ltd. You define an API Policy object by using the following XML markup:


  ("bodySize"))
 
 
 
 
 

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

The XML segment belongs in the <inbound> section of the policy.

Yes. Policies that inspect the incoming request (such as checking request body size) and policies that rewrite or normalize the request URL (such as retaining a particular API version segment) are applied before the request is sent to the backend. That is exactly what the <inbound> section is for. In APIM, <inbound> is the correct place for request validation, header/query/path manipulation, and routing decisions. By contrast, <outbound> is used to transform responses coming back from the backend, and <backend> is used to configure how APIM forwards the request (e.g., set-backend-service), not typically to validate request payload size. <on-error> only runs after an error has occurred. Therefore, an XML segment that evaluates request body size and/or rewrites the request path belongs in <inbound>.

Part 2:

If the body size is >256k, an error will occur.

No. A body size greater than 256 KB does not automatically cause an error unless the policy explicitly enforces that limit (for example, using validate-content with max-size, or a choose/when that returns an error response when the size exceeds a threshold). APIM does have practical limits and considerations when reading/buffering request bodies, but simply referencing a variable like "bodySize" (as implied by the snippet) does not itself throw an error at 256 KB. Errors occur when you attempt to read a body beyond configured limits, when buffering is required but not possible, or when a validation policy is configured to reject the request. For the exam, distinguish between (a) a computed value/variable and (b) an enforcement policy. Without an explicit reject/return-response/validate-content rule tied to 256 KB, exceeding 256 KB alone is not guaranteed to produce an error.

Part 3:

If the request is http://contoso.com/api/9.2/, the policy will retain the higher version.

Yes. With a request like http://contoso.com/api/9.2/, a policy designed to compare versions and retain the higher version would keep 9.2 when compared against a lower version (for example, 9.1 or 9.0). This is a common APIM pattern: extract the version segment from the path, compare it to a configured/expected version, and rewrite/route accordingly. In APIM policy expressions, you must be careful about how versions are compared. If the policy uses numeric/semantic parsing (e.g., converting to a Version type or splitting into major/minor integers), then 9.2 correctly evaluates as higher than 9.1. If it used naive string comparison, results can be incorrect in some cases (e.g., "9.10" vs "9.2"). However, the statement specifically says the policy will retain the higher version for 9.2, which is consistent with the intended behavior of such a policy. Thus, given the scenario, the policy retains the higher version, and 9.2 would be kept.
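
The pitfall called out above can be illustrated outside APIM with a few lines of Python comparing numeric parsing against naive string comparison:

```python
# Version-comparison pitfall: numeric parsing vs. naive string comparison.
def higher_version(a: str, b: str) -> str:
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return a if parse(a) >= parse(b) else b

assert higher_version("9.2", "9.1") == "9.2"     # numeric comparison keeps 9.2
assert higher_version("9.10", "9.2") == "9.10"   # 9.10 is really the higher version
assert max("9.10", "9.2") == "9.2"               # naive string comparison gets this wrong
```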


Question 6
(Choose two)

You are creating a hazard notification system that has a single signaling server which triggers audio and visual alarms to start and stop. You implement Azure Service Bus to publish alarms. Each alarm controller uses Azure Service Bus to receive alarm signals as part of a transaction. Alarm events must be recorded for audit purposes. Each transaction record must include information about the alarm type that was activated. You need to implement a reply trail auditing solution. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Correct. ReplyToSessionId is designed for request/reply patterns when using Service Bus sessions. By setting ReplyToSessionId to the original hazard message’s SessionId, any reply/audit/ack message can be routed back to the correct session (logical conversation). This supports ordered, grouped processing and makes it easier to build an auditable transaction trail tied to the same session context.

Incorrect. DeliveryCount is a system-managed property that indicates how many times Service Bus has delivered the message (including redeliveries after abandon/lock expiry). It is not meant to be assigned by the application, and it is not a stable identifier for correlation or auditing. Also, mapping MessageId into DeliveryCount is semantically wrong and not supported.

Incorrect. SequenceNumber is assigned by Service Bus when the message is enqueued and is read-only from the sender’s perspective. You cannot set SequenceNumber to SessionId. Additionally, SequenceNumber is queue/topic-specific and not a portable business correlation identifier across entities, making it a poor choice for an audit/reply trail design.

Correct. CorrelationId is the standard property for linking related messages across a workflow. Setting CorrelationId on downstream messages (such as audit events) to the original hazard message’s MessageId provides a stable, application-defined transaction identifier. This enables reliable audit queries and end-to-end tracing across components and message entities.

Incorrect. DeliveryCount is system-controlled and reflects delivery attempts, not business correlation. Even if you could map SequenceNumber to DeliveryCount (you should not), it would not create a meaningful audit trail. DeliveryCount changes with retries and does not uniquely identify the original transaction or alarm type activation.

Incorrect. SequenceNumber is assigned by Service Bus at enqueue time and cannot be set by the sender to the MessageId. Even conceptually, MessageId is an application identifier, while SequenceNumber is an entity-specific ordering identifier. Using SequenceNumber for auditing across multiple entities/components is brittle and not the intended pattern.

Question Analysis

Core concept: This question tests Azure Service Bus messaging patterns for end-to-end traceability (audit/reply trail) in transactional processing. In Service Bus, you typically correlate related messages (command, processing, reply/audit event) using message metadata rather than trying to overwrite system-managed properties. Two key properties are CorrelationId (for tracing a business operation across messages) and ReplyToSessionId (for routing replies back to the correct session when using sessions).

Why the answer is correct: You have a single signaling server publishing alarm commands and multiple alarm controllers receiving them "as part of a transaction" (common with PeekLock + complete/abandon and/or sessions for ordered, grouped processing). You also must record alarm events for audit, and each transaction record must include the alarm type. A robust reply/audit trail requires (1) a stable identifier that ties the audit record to the original alarm command and (2) a deterministic way to route any reply/audit message to the correct logical conversation when sessions are used.
- Action D sets the outgoing message's CorrelationId to the original hazard message's MessageId. MessageId is application-defined and stable, making it ideal as the root identifier for a transaction. By copying it into CorrelationId on subsequent messages (audit events, acknowledgements, stop/start confirmations), you can query logs and message traces and reliably link all records to the initiating alarm.
- Action A sets ReplyToSessionId to the original message's SessionId. When sessions are used, SessionId groups related messages and enforces ordered processing. ReplyToSessionId is specifically intended for request/reply patterns with sessions: the responder can send a reply that targets the requester's session, enabling correct routing and correlation at scale.

Key features / best practices: Use MessageId/CorrelationId for distributed tracing; use SessionId/ReplyToSessionId for session-based request/reply. Keep audit data (like alarm type) in the message body or custom application properties; don't misuse system properties.

Common misconceptions: SequenceNumber and DeliveryCount are system-managed runtime properties and are not meant to be assigned by applications. They change per delivery/lock renewal and are not stable identifiers for auditing.

Exam tips: For Service Bus, MessageId is app-set; CorrelationId is for tracing; SessionId enables sessions; ReplyToSessionId supports session-based replies; SequenceNumber and DeliveryCount are read-only/system-controlled and not used as business correlation keys.
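
A minimal sketch of setting those two properties with the azure-servicebus Python SDK follows; the queue names, session ID, and alarm payload are assumptions:

```python
# Sketch: receive an alarm from a session-enabled queue, then emit an audit message
# whose CorrelationId and ReplyToSessionId tie it back to the original alarm.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

conn_str = "<service-bus-connection-string>"

with ServiceBusClient.from_connection_string(conn_str) as client:
    with client.get_queue_receiver(queue_name="alarms", session_id="controller-1") as receiver:
        for alarm in receiver.receive_messages(max_message_count=1, max_wait_time=5):
            audit = ServiceBusMessage(
                body="alarm activated",
                application_properties={"alarmType": "audio"},   # alarm type for the audit record
                correlation_id=alarm.message_id,                 # link the audit record to the original alarm
                reply_to_session_id=alarm.session_id,            # route any reply to the originating session
            )
            with client.get_queue_sender(queue_name="alarm-audit") as sender:
                sender.send_messages(audit)
            receiver.complete_message(alarm)
```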

Question 7

DRAG DROP - You are maintaining an existing application that uses an Azure Blob GPv1 Premium storage account. Data older than three months is rarely used. Data newer than three months must be available immediately. Data older than a year must be saved but does not need to be available immediately. You need to configure the account to support a lifecycle management rule that moves blob data to archive storage for data not modified in the last year. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place:

Part 1:

Select the correct answer(s) in the image below.

[Question image not captured in this export.]

Lifecycle management for moving blobs based on age requires a GPv2 storage account. A GPv1 account must first be upgraded to GPv2 before lifecycle rules can be used. After the upgrade, set the account's default access tier to Cool so data older than three months remains immediately available at lower cost, while lifecycle management can later move blobs older than one year to Archive. Creating a new Standard GPv2 account and copying data is not required because GPv1 accounts can be upgraded directly to GPv2.
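
Once the account is GPv2, the lifecycle rule described above could look like the following sketch, expressed here as the JSON-shaped policy you might apply via the portal, CLI, or management SDK. The rule name and 365-day cutoff are illustrative:

```python
# Lifecycle rule sketch: after upgrading to GPv2, move block blobs not modified for a
# year to the Archive tier.
lifecycle_policy = {
    "rules": [
        {
            "enabled": True,
            "name": "archive-after-one-year",
            "type": "Lifecycle",
            "definition": {
                "filters": {"blobTypes": ["blockBlob"]},
                "actions": {
                    "baseBlob": {
                        "tierToArchive": {"daysAfterModificationGreaterThan": 365}
                    }
                },
            },
        }
    ]
}
```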

Question 8

HOTSPOT - You need to configure Azure Service Bus to Event Grid integration. Which Azure Service Bus settings should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

Tier ______

Correct answer: Standard. Azure Service Bus to Event Grid integration is supported for Standard and Premium namespaces, but not for Basic. The Basic tier is intentionally limited (for example, fewer features and integration capabilities) and is not eligible for Event Grid events. Standard is the minimum tier that supports the Event Grid integration feature set commonly tested in AZ-204. Why not Premium? Premium also supports Event Grid integration, but the question asks which settings you should use; unless there is a requirement for dedicated resources, predictable latency, higher throughput, or VNet/private endpoints, Premium is not required. For exam questions, choose the lowest-cost tier that meets the technical requirement. Why not Basic? Basic lacks the required integration capability, so you cannot configure Service Bus events to be emitted to Event Grid from a Basic namespace.

Part 2:

RBAC role ______

Correct answer: Contributor. Configuring Azure Service Bus to Event Grid integration involves creating or managing Event Grid subscriptions and possibly updating resource configuration. These are management-plane operations governed by Azure RBAC on ARM resources. The Contributor role can create and manage resources (including Event Grid subscriptions) but cannot grant access to others. Why not Owner? Owner would work because it includes Contributor plus the ability to assign roles, but it is more privilege than necessary. Exams typically expect least privilege, so Contributor is preferred when role assignment is not required. Why not Azure Service Bus Data Owner/Data Receiver? These are data-plane roles used for messaging operations (send/receive/manage entities via the Service Bus data plane). They do not provide permissions to create Event Grid subscriptions or manage ARM-level integration settings, so they are insufficient for configuring the integration.

Question 9

You are developing an Azure function that connects to an Azure SQL Database instance. The function is triggered by an Azure Storage queue. You receive reports of numerous System.InvalidOperationExceptions with the following message: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached. You need to prevent the exception. What should you do?

Correct. Decreasing batchSize in host.json reduces how many queue messages are fetched and processed concurrently per Functions host instance. That directly lowers the number of simultaneous SQL connection attempts, preventing exhaustion of the client connection pool and avoiding InvalidOperationException timeouts. This is an application-level throttling/back-pressure control aligned with load leveling best practices.

Incorrect. Converting to Event Hubs changes the trigger source but does not inherently prevent SQL connection pool exhaustion. Event Hubs processing can be highly parallel (partitions, prefetch, multiple concurrent events), which can still overwhelm SQL connections unless you explicitly control concurrency and connection usage. It’s a redesign, not a targeted fix for pooling.

Incorrect. Moving to Premium plan increases available compute and can scale out more aggressively, potentially increasing concurrent executions and making pool exhaustion more likely unless concurrency is limited. Premium can help if you also tune concurrency and/or increase SQL capacity, but by itself it does not address the root cause of too many concurrent SQL connections.

Incorrect. There is no supported function.json binding type value of queueScaling for Azure Storage queue triggers. Scaling behavior is managed by the Functions runtime and configured primarily via host.json and the hosting plan. Changing the binding type is not how you control queue-trigger scaling or concurrency.

Question Analysis

Core concept: This question tests Azure Functions queue-trigger concurrency and its downstream impact on Azure SQL Database connection pooling. The exception indicates the .NET SQL client could not obtain a connection from the pool before the timeout because too many concurrent executions attempted to open SQL connections, exhausting the pool (default max pool size is commonly 100 per connection string).

Why A is correct: For Azure Storage queue triggers, the runtime can pull and process messages in batches and can run multiple function invocations concurrently. If each invocation opens a SQL connection (or holds it longer than necessary), high concurrency can quickly exceed the available pooled connections, causing "Timeout expired… max pool size was reached." Reducing host.json batchSize lowers the number of messages fetched and processed concurrently per host instance, reducing simultaneous SQL connection demand and preventing pool exhaustion. This is a classic back-pressure/throttling fix: limit ingestion concurrency to match downstream capacity.

Key features / best practices:
- host.json queue settings (batchSize, and often newBatchThreshold / maxDequeueCount) control how aggressively the Functions host drains the queue.
- Use efficient connection management: open late/close early, dispose connections, and prefer async I/O. (But the question asks what to do to prevent the exception; throttling concurrency is the direct control here.)
- Align with the Azure Well-Architected Framework (Performance Efficiency + Reliability): apply throttling and load leveling so the database isn't overwhelmed.

Common misconceptions:
- Scaling up the plan (Premium) can increase concurrency and may worsen the problem unless you also control concurrency or increase SQL capacity/pool sizing.
- Switching triggers (Event Hubs) doesn't inherently solve connection pool exhaustion; it changes ingestion semantics but can still drive high parallelism.
- "queueScaling" is not a valid function.json type for Storage queue triggers; scaling is controlled by the host/runtime, not by changing the binding type.

Exam tips: When you see SQL connection pool exhaustion in serverless, think: (1) reduce concurrency at the trigger/host level, (2) ensure connections are disposed, (3) consider increasing pool size or using managed identity/connection resiliency, and (4) scale the database appropriately. For queue-triggered Functions, host.json batch/concurrency settings are the primary lever tested on AZ-204.
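
A sketch of both levers follows, assuming the Azure Functions Python v2 programming model and pyodbc for the SQL connection; the host.json values shown in the comment, the queue name, and the table name are illustrative:

```python
# Queue-triggered function (Python v2 model) illustrating the two levers discussed above.
#
# host.json sketch limiting per-instance queue concurrency (values are illustrative):
# {
#   "version": "2.0",
#   "extensions": { "queues": { "batchSize": 4, "newBatchThreshold": 2 } }
# }
import os
import logging
import pyodbc                      # assumption: the function reaches Azure SQL via pyodbc
import azure.functions as func

app = func.FunctionApp()

@app.queue_trigger(arg_name="msg", queue_name="work-items", connection="AzureWebJobsStorage")
def record_item(msg: func.QueueMessage) -> None:
    # Open the connection as late as possible and close it as early as possible,
    # so each invocation holds a pooled connection only briefly.
    conn = pyodbc.connect(os.environ["SQL_CONNECTION_STRING"])
    try:
        cursor = conn.cursor()
        cursor.execute("INSERT INTO WorkItems (Payload) VALUES (?)", msg.get_body().decode())
        conn.commit()
    finally:
        conn.close()
    logging.info("Processed queue message %s", msg.id)
```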

Question 10

HOTSPOT - You are configuring a development environment for your team. You deploy the latest Visual Studio image from the Azure Marketplace to your Azure subscription. The development environment requires several software development kits (SDKs) and third-party components to support application development across the organization. You install and customize the deployed virtual machine (VM) for your development team. The customized VM must be saved to allow provisioning of a new team member development environment. You need to save the customized VM for future provisioning. Which tools or services should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

Generalize the VM. ______

Correct: B. Visual Studio command prompt. To generalize a Windows VM before capturing it as an image, you run Sysprep (typically %WINDIR%\System32\Sysprep\Sysprep.exe with /generalize /oobe /shutdown). On a Visual Studio Marketplace image, you execute this from within the VM using local OS tooling; the "Visual Studio command prompt" option represents running the required generalization command locally on the VM.

Why the others are wrong:
- A. Azure PowerShell can automate the capture steps (stop/deallocate, create image), but it does not itself generalize the OS; Sysprep must be run inside the guest.
- C. Azure Migrate is for assessing and migrating servers to Azure, not for preparing and capturing a reusable development image.
- D. Azure Backup creates recovery points for restore, not a generalized image suitable for provisioning new VMs.

Part 2:

Store images. ______

Correct: A. Azure Blob Storage. VM images (especially unmanaged, VHD-based artifacts) are stored as page blobs in Azure Blob Storage. Even when you use managed images or Azure Compute Gallery, the underlying VHDs are still backed by Azure Storage, and Blob Storage is the appropriate service for storing image VHDs.

Why the others are wrong:
- B. Azure Data Lake Storage is optimized for big data analytics workloads and hierarchical namespaces; it is not the standard target for VM image VHDs.
- C. Azure File Storage provides SMB/NFS file shares, not page blobs for OS disk/image VHD storage.
- D. Azure Table Storage is a NoSQL key/attribute store and cannot store VM image binaries.
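
A sketch of automating the capture with the azure-mgmt-compute Python SDK follows, assuming Sysprep has already been run inside the guest; the subscription ID and resource names are placeholders:

```python
# Capture-automation sketch with azure-mgmt-compute, assuming Sysprep
# (%WINDIR%\System32\Sysprep\Sysprep.exe /generalize /oobe /shutdown) was already
# run inside the guest OS. Resource names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"
rg, vm_name, image_name = "dev-rg", "dev-vm", "dev-team-image"

compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

compute.virtual_machines.begin_deallocate(rg, vm_name).result()   # stop and deallocate the VM
compute.virtual_machines.generalize(rg, vm_name)                   # mark the VM as generalized in Azure
vm = compute.virtual_machines.get(rg, vm_name)
compute.images.begin_create_or_update(
    rg,
    image_name,
    {"location": vm.location, "source_virtual_machine": {"id": vm.id}},
).result()
```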

