
Simulate the actual exam with 50 questions and a 100-minute time limit. Study with AI-verified answers and detailed explanations.
AI-powered
All answers are cross-verified across three leading AI models to ensure maximum accuracy. Detailed per-option explanations and in-depth question analysis are provided.
You successfully run the following HTTP request. POST https://management.azure.com/subscriptions/18c51a87-3a69-47a8-aedc-a54745f708a1/resourceGroups/RG1/providers/Microsoft.CognitiveServices/accounts/contoso1/regenerateKey?api-version=2017-04-18 Body: {"keyName": "Key2"} What is the result of the request?
Incorrect. The regenerateKey ARM action regenerates a Cognitive Services account key on the resource itself; it does not automatically create or update a secret in Azure Key Vault. Key Vault integration is a best practice you implement separately (e.g., storing the returned key as a secret, using automation). The request shown targets management.azure.com and the CognitiveServices provider, not Key Vault APIs.
Incorrect. “Query keys” are associated with Azure AI Search (formerly Azure Cognitive Search) and are managed via the Search service APIs, not via Microsoft.CognitiveServices/accounts/regenerateKey. Cognitive Services accounts use subscription keys (Key1/Key2) for authentication to the service endpoint. The request body explicitly references Key2, which is an account key, not a query key.
Incorrect. Rotating both keys would require two separate calls: one with keyName=Key1 and another with keyName=Key2 (or an explicit operation that rotates both, which this API does not do). The regenerateKey action regenerates only the specified key. Since the request specifies Key2, only the secondary key changes; Key1 remains valid and unchanged.
Correct. The request invokes the Cognitive Services account action regenerateKey with {"keyName":"Key2"}. Cognitive Services accounts have two keys (Key1 and Key2). Regenerating Key2 resets the secondary subscription key, invalidating the old Key2 and returning a new Key2 value. This supports standard key rotation practices while keeping Key1 unchanged for continuity.
Core concept: This question tests management-plane key rotation for an Azure Cognitive Services (Azure AI services) account using the Azure Resource Manager (ARM) REST API. Cognitive Services accounts expose two access keys (Key1 and Key2) used by client applications to authenticate to the data-plane endpoint. The ARM action regenerateKey is specifically for rotating one of those account keys.

Why the answer is correct: The request calls POST .../Microsoft.CognitiveServices/accounts/contoso1/regenerateKey?api-version=2017-04-18 with body {"keyName":"Key2"}. In Cognitive Services, Key1 is commonly treated as the primary key and Key2 as the secondary key. The regenerateKey action regenerates (resets) the specified key only. Because keyName is Key2, the service invalidates the existing Key2 and returns a newly generated Key2 value. Key1 remains unchanged. Therefore, the result is that the secondary subscription key was reset.

Key features / best practices: Having two keys enables safe key rotation with minimal downtime: update applications to use the non-rotated key, regenerate the other key, then switch and repeat. This aligns with Azure Well-Architected Framework security guidance (credential rotation, least privilege, and secret hygiene). In production, store the keys in Azure Key Vault and reference them from apps (using managed identities where possible), but the regenerateKey call itself does not create or store secrets in Key Vault.

Common misconceptions: Some confuse regenerateKey with rotating both keys, but it targets only the named key. Others confuse Cognitive Services account keys with "query keys" (a concept associated with Azure AI Search), which are different resources and APIs.

Exam tips: Recognize management-plane vs. data-plane operations: management.azure.com with a provider action like regenerateKey is an ARM operation affecting the resource configuration/credentials. Also remember the two-key rotation pattern: specify Key1 or Key2; only that key changes. If a question mentions Key Vault, that is typically a separate step (storing/reading secrets), not an automatic outcome of key regeneration.
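The "only the named key changes" behavior can be sketched with a minimal simulation (pure Python, no Azure calls; the AccountKeys class and its regenerate_key helper are illustrative stand-ins, not part of any Azure SDK):

```python
import secrets

class AccountKeys:
    """Toy model of a Cognitive Services account's two subscription keys."""
    def __init__(self):
        self.keys = {"Key1": secrets.token_hex(16), "Key2": secrets.token_hex(16)}

    def regenerate_key(self, key_name):
        # Mirrors the ARM regenerateKey action: only the named key slot is reset.
        if key_name not in self.keys:
            raise ValueError("keyName must be 'Key1' or 'Key2'")
        self.keys[key_name] = secrets.token_hex(16)
        return self.keys

account = AccountKeys()
old_key1, old_key2 = account.keys["Key1"], account.keys["Key2"]
account.regenerate_key("Key2")  # request body: {"keyName": "Key2"}

print(account.keys["Key1"] == old_key1)  # True  - Key1 untouched
print(account.keys["Key2"] == old_key2)  # False - Key2 reset
```

The real operation is a POST to the management plane that returns both key values in its response body; the point the sketch makes is that regenerating Key2 leaves Key1 valid for clients still using it.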
You deploy a web app that is used as a management portal for indexing in Azure Cognitive Search. The app is configured to use the primary admin key. During a security review, you discover unauthorized changes to the search index. You suspect that the primary access key is compromised. You need to prevent unauthorized access to the index management endpoint. The solution must minimize downtime. What should you do next?
Incorrect. Regenerating the primary admin key first would immediately invalidate the credential the app is currently using. That means the management portal could lose access until its configuration is updated to the secondary key, which does not minimize downtime. Although the final state would rotate both keys, the sequence is operationally wrong for a live app currently bound to the compromised primary key.
Incorrect. A query key cannot call index management endpoints in Azure Cognitive Search. Since the web app is a management portal for indexing, switching it to a query key would remove the permissions it needs to function. Regenerating the admin keys is useful, but this option breaks required administrative capabilities.
Correct. The application is currently using the primary admin key, so regenerating the secondary key first creates a fresh alternate credential without affecting the running app. After updating the app to use that new secondary key, you can safely regenerate the compromised primary key and immediately invalidate unauthorized access. This sequence preserves management functionality while minimizing downtime and follows the standard dual-key rotation pattern.
Incorrect. Query keys are only for read-only search requests and have no effect on admin-level index management access. Adding or deleting query keys does nothing to invalidate a compromised admin key that can modify indexes. This option also fails because the management portal cannot use a query key for administrative operations.
Core concept: Azure Cognitive Search provides two admin keys specifically so you can rotate credentials without interrupting applications that require management access. Because admin keys allow full control over indexes, indexers, and other search resources, a suspected compromise requires immediate rotation. The safest low-downtime pattern is to regenerate the key not currently in use, move the application to that newly generated key, and then regenerate the compromised key.

Why correct: The app is currently using the primary admin key, and that key is suspected to be compromised. If you regenerate the primary key first, the app will immediately lose access until its configuration is updated, which increases downtime risk. By regenerating the secondary key first, you create a fresh valid admin credential, switch the app to it, and then regenerate the compromised primary key to invalidate the attacker's access.

Key features:
- Admin keys are required for index management operations; query keys are read-only.
- Azure Cognitive Search exposes both a primary and secondary admin key to support seamless rotation.
- Regenerating a key immediately invalidates the previous value for that slot.
- Rotating the unused key first is the standard approach when minimizing service interruption.

Common misconceptions: A common mistake is to regenerate the compromised key first because it seems most urgent, but doing so can break the application if it is still using that key. Another misconception is that query keys can be substituted for admin keys; they cannot perform management operations. It is also unnecessary to regenerate both keys immediately unless completing a full rotation cycle is explicitly required.

Exam tips:
- If a key is compromised and the app is currently using it, do not regenerate it first unless downtime is acceptable.
- For Azure services with primary/secondary keys, the exam often expects: rotate the unused key, switch clients, then rotate the old key.
- Remember that query keys are only for search queries, not for administrative changes.
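The rotate-unused-first sequence can be walked through as a small simulation (pure Python; SearchService and its methods are hypothetical stand-ins for the search service's key store, not the Azure SDK):

```python
import secrets

class SearchService:
    """Toy model of an Azure Cognitive Search service with two admin keys."""
    def __init__(self):
        self.admin_keys = {"primary": secrets.token_hex(8),
                           "secondary": secrets.token_hex(8)}

    def regenerate(self, slot):
        # Only the named slot is reset; the other key stays valid.
        self.admin_keys[slot] = secrets.token_hex(8)

    def authorize(self, key):
        return key in self.admin_keys.values()

svc = SearchService()
app_key = svc.admin_keys["primary"]   # app currently uses the (compromised) primary key
attacker_key = app_key                # attacker holds a copy of it

svc.regenerate("secondary")           # 1. rotate the key NOT in use - app unaffected
assert svc.authorize(app_key)
app_key = svc.admin_keys["secondary"] # 2. point the app at the fresh secondary key
svc.regenerate("primary")             # 3. now invalidate the compromised primary key

print(svc.authorize(app_key))       # True  - app retained access throughout
print(svc.authorize(attacker_key))  # False - attacker locked out
```

Note that at no step does the app hold a key that has just been invalidated, which is why this ordering minimizes downtime.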
You have the following C# method for creating Azure Cognitive Services resources programmatically.
static void create_resource(CognitiveServicesManagementClient client, string resource_name, string kind, string account_tier, string location)
{
    CognitiveServicesAccount parameters =
        new CognitiveServicesAccount(null, null, kind, location, resource_name,
            new CognitiveServicesAccountProperties(), new Sku(account_tier));
    var result = client.Accounts.Create(resource_group_name, resource_name, parameters);
}
You need to call the method to create a free Azure resource in the West US Azure region. The resource will be used to generate captions of images automatically. Which code should you use?
Correct. "ComputerVision" is the appropriate kind for prebuilt image analysis features including caption generation. "F0" is the free SKU, satisfying the requirement for a free resource. "westus" specifies the West US Azure region. This combination provisions a free Computer Vision resource suitable for generating image captions (within free-tier limits).
Incorrect. "CustomVision.Prediction" is used to host and run predictions for models you trained with Custom Vision (classification/object detection). It does not provide the prebuilt captioning capability that comes with Computer Vision/Image Analysis. Even though "F0" is free and "westus" is valid, the service kind is wrong for automatic caption generation.
Incorrect. "ComputerVision" is the correct service kind for captioning, but "S0" is the paid Standard tier, not the free tier. The question explicitly requires creating a free Azure resource, so using S0 violates the requirement even though it would work functionally for captioning.
Incorrect. This option is wrong on two dimensions: it uses "CustomVision.Prediction" (not the right service for prebuilt captioning) and it uses "S0" (paid tier) instead of the required free tier. Custom Vision prediction resources are intended for custom model inference, not generating captions from images.
Core concept: This question tests how to provision the correct Azure AI (Cognitive Services) account type ("kind") and SKU for an image captioning scenario. Image captioning is a Computer Vision capability (Image Analysis) and is provisioned under the ComputerVision kind in Azure AI services.

Why the answer is correct: To create a free resource in West US that can generate captions, you must select: 1) kind = "ComputerVision" (the service that supports image analysis features such as caption generation), 2) SKU = "F0" (the free tier), and 3) location = "westus". Option A matches all three requirements. In the method signature, the parameters map directly: resource_name="res1", kind="ComputerVision", account_tier="F0", location="westus".

Key features and best practices:
- Computer Vision (Image Analysis) provides captioning (natural-language description) and other features such as tags, objects, OCR, and smart cropping. For AI-102, remember captioning is not a Custom Vision (training) feature.
- SKU selection matters: F0 is the free tier (typically limited transactions per minute/day and often restricted to one free account per subscription per region/service). S0 is a paid standard tier.
- Region must support the service and SKU. West US is a valid region for Computer Vision, but in real deployments you should confirm regional availability and quota limits.
- From an Azure Well-Architected Framework perspective: choose the correct service for the workload (performance efficiency), use the least-cost tier for dev/test (cost optimization), and plan for scaling/upgrading to S0 in production.

Common misconceptions:
- Confusing Custom Vision with Computer Vision: Custom Vision is for building and hosting custom image classification/object detection models (training and prediction resources). It does not provide the prebuilt captioning capability.
- Picking S0 because it is "more capable": the question explicitly requires a free resource, so F0 is required.

Exam tips:
- Memorize the mapping: captioning/describing images => ComputerVision kind (Image Analysis). CustomVision.* kinds are for custom model training/prediction.
- Watch for SKU keywords: F0 = Free, S0 = Standard (paid). Also ensure the region string matches Azure location naming (e.g., "westus").
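The parameter mapping can be made concrete by constructing the body that the ARM PUT for a Microsoft.CognitiveServices/accounts resource carries (pure Python; build_account_payload is an illustrative helper, not an SDK function, though the kind/sku/location fields match the documented resource schema):

```python
def build_account_payload(kind, sku_name, location):
    # Shape of the ARM request body for a Cognitive Services account resource.
    return {
        "kind": kind,                 # service type, e.g. "ComputerVision"
        "sku": {"name": sku_name},    # "F0" = free tier, "S0" = standard (paid)
        "location": location,         # Azure region string, e.g. "westus"
        "properties": {},
    }

# Free Computer Vision resource in West US, suitable for prebuilt image captioning
payload = build_account_payload(kind="ComputerVision", sku_name="F0", location="westus")
print(payload["kind"], payload["sku"]["name"], payload["location"])
# ComputerVision F0 westus
```

Swapping in "CustomVision.Prediction" or "S0" produces a syntactically valid payload, which is exactly why the exam checks that you match the kind and SKU to the stated requirements.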
You are developing a new sales system that will process the video and text from a public-facing website. You plan to notify users that their data has been processed by the sales system. Which responsible AI principle does this help meet?
Correct. Transparency is about being open and clear with users regarding when and how AI systems process their data and influence outcomes. By notifying users that their video and text have been processed, you are providing an explicit disclosure of AI-driven or automated processing. This reduces "black box" surprise and helps users understand the system's behavior and data handling. In responsible AI frameworks, such user notifications are a direct mechanism to meet transparency expectations in public-facing applications.
Incorrect. Fairness focuses on ensuring the system does not produce biased or discriminatory results across different user groups (for example, across gender, age, or ethnicity). Simply notifying users that processing occurred does not address whether the model's outputs are equitable or whether bias has been measured and mitigated. Fairness would require steps like representative training data, bias evaluation, and mitigation techniques. Therefore, a notification alone does not satisfy the fairness principle.
Incorrect. Inclusiveness is about designing AI systems that are accessible and usable by people with diverse abilities, languages, and backgrounds. A notification that data has been processed does not ensure the system supports accessibility needs (e.g., screen readers), multilingual users, or varied user contexts. Inclusiveness would involve UX and feature design choices that broaden participation and reduce barriers. Thus, inclusiveness is not the primary principle addressed by the notification.
Incorrect. Reliability and safety concern whether the system performs consistently, securely, and safely under expected and unexpected conditions. Notifying users about processing does not improve model robustness, error handling, security controls, or safety guardrails. Reliability and safety would be demonstrated through testing, monitoring, fallback behaviors, and protections against harmful outputs or failures. Therefore, this option does not match the scenario's action of user notification.
Core concept: This question tests your understanding of Responsible AI principles, specifically which principle is addressed by informing users that their data (video/text) has been processed by an AI-enabled system.

Why the answer is correct: Notifying users that their data has been processed is an example of being open about how the system works and how user data is used. This aligns with the transparency principle, which emphasizes clear communication to users about AI involvement, data usage, and system behavior. In public-facing scenarios, transparency often includes disclosures (e.g., "AI is used," "your content is analyzed," "processing has occurred") so users are not surprised by automated processing.

Key features / configurations:
- User-facing disclosures/notifications that AI is used to process content
- Clear explanations of what data is processed (video, text), when it is processed, and for what purpose
- Auditability/traceability artifacts (logs, processing status) to support accurate user notifications
- Consent and privacy notices often accompany transparency (though they are distinct concepts)

Common misconceptions:
- Confusing transparency with fairness: fairness is about avoiding biased outcomes across groups, not about notifying users.
- Confusing transparency with reliability and safety: reliability/safety focuses on robust, secure, and safe operation, not user disclosure.
- Confusing transparency with inclusiveness: inclusiveness is about accessibility and ensuring the system works well for diverse users, not about communicating processing.

Exam tips:
- Transparency = disclose AI use, explain decisions/processing, make system behavior understandable.
- Fairness = mitigate bias and ensure equitable outcomes across demographics.
- Inclusiveness = design for accessibility and diverse user needs.
- Reliability and safety = robustness, security, fail-safes, and safe operation under expected conditions.
You are building an Azure WebJob that will create knowledge bases from an array of URLs. You instantiate a QnAMakerClient object that has the relevant API keys and assign the object to a variable named client. You need to develop a method to create the knowledge bases. Which two actions should you include in the method? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
Incorrect for this scenario. FileDTO objects are used when you create a KB from file content (for example, uploading documents). The question states the KBs are created from an array of URLs, so you don’t need to represent WebJob data as FileDTO. Even if files were involved, you would still need CreateKbDTO and CreateAsync; FileDTO alone is not sufficient.
Correct. client.Knowledgebase.CreateAsync is the SDK call that initiates creation of a knowledge base in QnA Maker. KB creation is an asynchronous management operation, so CreateAsync returns an operation you typically poll until it succeeds or fails. This is a required action because without calling CreateAsync, no KB is created regardless of how you model the input URLs.
Incorrect for the stated requirement. QnADTO is used when you supply explicit question/answer pairs (curated QnA) as the KB content. The question specifies creating KBs from an array of URLs, which are represented as URL sources in the create request. You could optionally add QnADTO entries, but it is not required to meet the URL-based creation requirement.
Correct. CreateKbDTO is the request object that defines the knowledge base to be created, including its name and its sources (such as URLs). To create KBs from URLs, you must build a CreateKbDTO that includes those URLs in the appropriate field(s). This DTO is then passed to Knowledgebase.CreateAsync to start the creation operation.
Core concept: This question tests how to programmatically create a QnA Maker knowledge base using the QnA Maker SDK (QnAMakerClient). In the classic QnA Maker workflow, you submit a "create KB" request that describes the KB name and its sources (URLs, files, or QnA pairs). The SDK models this request with a CreateKbDTO object, and you submit it via the Knowledgebase.CreateAsync operation.

Why the answer is correct: To create a knowledge base from an array of URLs, your method must (1) construct the request payload that describes the KB and its content sources, and (2) call the API that starts KB creation. In the SDK, the payload is CreateKbDTO (which can include a list of URLs as sources), and the API call is client.Knowledgebase.CreateAsync. CreateAsync returns an operation that you typically poll until completion, after which you can publish the KB.

Key features / best practices:
- CreateKbDTO is the central request model for KB creation; it can include URLs, files, and/or QnA pairs.
- Knowledgebase.CreateAsync initiates an asynchronous operation; production code should poll the operation status and handle failures/timeouts.
- After creation, you generally call PublishAsync to make the KB available to runtime queries.
- From an Azure Well-Architected perspective, implement retry/backoff for transient failures, log operation IDs for troubleshooting, and consider throttling limits of the QnA Maker management endpoint.

Common misconceptions: It is easy to confuse "creating a KB from URLs" with "uploading files" or "manually providing QnA pairs." While those are valid inputs, the question explicitly states URLs, and the required steps are still the same: build a CreateKbDTO and call CreateAsync.

Exam tips: Remember the pattern for QnA Maker management operations: build the appropriate DTO (CreateKbDTO, UpdateKbOperationDTO, etc.), call the corresponding Knowledgebase method (CreateAsync/UpdateAsync), then poll the operation and publish. If you see CreateKbDTO + Knowledgebase.CreateAsync together, that is the canonical create-KB flow.
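The create request described above reduces to a payload with a KB name and a list of URL sources; a minimal sketch of that shape (a plain Python dict standing in for CreateKbDTO, with illustrative URLs — not the actual SDK type):

```python
def build_create_kb_request(name, urls):
    # Mirrors the CreateKbDTO fields that matter for URL-based creation:
    # a display name plus the list of URL sources to crawl for QnA pairs.
    return {"name": name, "urls": list(urls)}

# Hypothetical source URLs for illustration
request = build_create_kb_request(
    "product-faq-kb",
    ["https://contoso.example/faq1", "https://contoso.example/faq2"],
)
# In the C# SDK this request object is passed to client.Knowledgebase.CreateAsync,
# which returns a long-running operation you poll until it completes.
print(request["name"], len(request["urls"]))
# product-faq-kb 2
```

The two required "actions" in the question map onto the two halves of this sketch: building the request object, and handing it to the create call.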
You are developing a solution to generate a word cloud based on the reviews of a company's products. Which Text Analytics REST API endpoint should you use?
keyPhrases is the Key Phrase Extraction endpoint in Azure AI Language/Text Analytics. It returns the most relevant phrases from each review (for example, “battery life”, “poor packaging”), which can be aggregated and weighted to generate a word cloud. This is the most direct feature for summarizing common topics in unstructured review text.
sentiment analyzes opinion polarity and returns scores/labels (positive, neutral, negative) and optionally sentence-level sentiment. While useful for understanding how customers feel, it does not provide the words or phrases needed to render a word cloud. You might combine sentiment with key phrases, but sentiment alone is insufficient.
languages detects the language of input text. This can be a helpful preprocessing step to route documents to the right models or provide language hints, improving downstream extraction quality. However, language detection does not extract keywords or phrases, so it cannot directly support building a word cloud.
entities/recognition/general performs named entity recognition (NER) to identify entities like organizations, people, locations, and other categories. This can help if your word cloud is specifically entity-focused, but typical review word clouds need descriptive key phrases (features, issues, attributes) that are better captured by key phrase extraction.
Core Concept: This question tests Azure AI Language (formerly Text Analytics) capabilities for extracting meaningful terms from unstructured text. A word cloud is typically built from the most frequent and/or most relevant phrases in a corpus (for example, product reviews). The API feature that directly supports this is Key Phrase Extraction.

Why the Answer is Correct: To generate a word cloud from reviews, you want to identify the main topics and terms customers mention (for example, "battery life", "customer service", "easy setup"). The Text Analytics REST API endpoint for this is keyPhrases (Key Phrase Extraction). It returns a set of key phrases per document, already filtered for relevance, which is ideal input for word-cloud weighting (count occurrences across reviews, optionally weight by relevance or review rating).

Key Features / Best Practices: Key Phrase Extraction works best when you:
- Provide the correct language hint (or detect language first) to improve accuracy.
- Preprocess text (remove boilerplate, deduplicate near-identical reviews) to avoid skewing the cloud.
- Aggregate results across documents (count phrases, normalize casing, optionally merge synonyms).
From an Azure Well-Architected Framework perspective, consider performance efficiency (batch documents per request within service limits), cost optimization (avoid unnecessary calls such as sentiment if not needed), and reliability (retry on transient failures, handle throttling).

Common Misconceptions: Sentiment analysis is often associated with reviews, but it outputs polarity scores (positive/neutral/negative) rather than the terms to display in a word cloud. Entity recognition can extract named entities (brands, people, locations) but may miss the descriptive phrases that make word clouds useful (such as "fast shipping"). Language detection is helpful as a preprocessing step, but it does not produce words/phrases for the cloud.

Exam Tips: For AI-102, map the desired output to the correct Language feature:
- "What are the main terms/topics?" -> keyPhrases
- "How do people feel?" -> sentiment
- "What language is it?" -> languages
- "What named entities are mentioned?" -> entities/recognition
Also remember endpoint naming may vary by API version, but the capability name (Key Phrase Extraction) remains consistent in Azure AI Language documentation.
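The aggregation step (keyPhrases output to word-cloud weights) is a simple frequency count; a sketch using hard-coded sample phrases in place of a real keyPhrases response:

```python
from collections import Counter

# Sample key phrases as the keyPhrases endpoint might return per review
# (illustrative data, not real API output)
reviews_key_phrases = [
    ["battery life", "fast shipping"],
    ["battery life", "easy setup"],
    ["poor packaging", "battery life"],
]

# Normalize casing and count phrase occurrences across all reviews
weights = Counter(
    phrase.lower() for phrases in reviews_key_phrases for phrase in phrases
)

# Word-cloud renderers typically take (term, weight) pairs
print(weights.most_common(2))
# [('battery life', 3), ('fast shipping', 1)]
```

In a real pipeline the inner lists would come from the keyPhrases documents array, and you might additionally merge synonyms or weight by review rating before rendering.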
You build a bot by using the Microsoft Bot Framework SDK and the Azure Bot Service. You plan to deploy the bot to Azure. You register the bot by using the Bot Channels Registration service. Which two values are required to complete the deployment? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
botId is not the standard required value for Bot Channels Registration deployment. In Bot Framework and Azure Bot Service, the key identity used by channels is the Azure AD application (Microsoft App) identity. Some tooling may display a bot resource ID, but it is not the credential pair required for authenticating requests to your bot endpoint.
tenantId identifies the Azure AD tenant where an app is registered. While it can matter for multi-tenant vs single-tenant app scenarios, Bot Channels Registration does not require tenantId to complete deployment. The bot’s runtime authentication with channels relies on the appId and a credential (secret/cert), not explicitly on tenantId.
appId (Microsoft App ID / client ID) is required because it uniquely identifies the Azure AD application used by the bot. Channels use this identity when issuing tokens, and the bot uses it to validate incoming requests. In deployment settings, this is commonly provided as MicrosoftAppId and must match the app registration used for the bot.
objectId is the directory object identifier for the app registration’s service principal or application object. It is useful for Graph API operations and some Azure AD management tasks, but it is not required to configure Bot Channels Registration for a bot’s authentication. The bot framework expects the appId and a credential, not the objectId.
appSecret (Microsoft App Password / client secret) is required as the credential paired with the appId. It enables the bot to authenticate as the Azure AD application and is used in token acquisition/validation flows. For production, store it securely (e.g., Key Vault) and rotate it regularly to meet security best practices.
Core concept: This question tests how Azure Bot Service (specifically Bot Channels Registration) authenticates a bot built with the Microsoft Bot Framework SDK. When you register a bot, Azure needs an Azure AD application identity so channels (Teams, Web Chat, Direct Line, etc.) can securely call your bot endpoint. That identity is represented by an Application (client) ID and a client secret (or certificate).

Why the answer is correct: To complete deployment/registration with Bot Channels Registration, you must provide the Microsoft App ID (client ID) and a credential (commonly the Microsoft App Password, i.e., the app secret). The Bot Framework uses these values to validate incoming requests (JWT tokens) and to obtain tokens when the bot calls other services. Without the appId, the bot has no identity; without the appSecret, the bot cannot authenticate as that identity.

Key features / configuration and best practices:
- The appId corresponds to an Azure AD app registration (or a managed-identity-like bot identity created during registration).
- The appSecret is a client secret for that app registration; in production, store it in Azure Key Vault and reference it via App Service/Function configuration settings.
- Prefer certificate credentials over secrets when possible for stronger security and easier rotation.
- Ensure the bot's messaging endpoint is reachable over HTTPS and that the App Service/Function configuration includes MicrosoftAppId and MicrosoftAppPassword (or equivalent settings).
These align with Azure Well-Architected Framework security principles: least privilege, secure secret storage, and credential rotation.

Common misconceptions:
- botId is often confused with appId; however, Bot Channels Registration uses the Azure AD app identity, not a separate "botId" value, for authentication.
- tenantId and objectId are Azure AD identifiers but are not required to configure the bot's channel registration for runtime authentication.

Exam tips: For Bot Framework/Azure Bot Service questions, look for "Microsoft App ID" and "Microsoft App Password/secret" as the required pair for channel registration and bot authentication. Tenant/object IDs are typically used for directory-scoped operations, not for basic bot channel deployment.
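In practice the two required values surface as app settings on the hosting service under the conventional Bot Framework keys MicrosoftAppId and MicrosoftAppPassword; a small validation sketch (pure Python; the validate_bot_settings helper and the placeholder values are illustrative, not part of the Bot Framework SDK):

```python
def validate_bot_settings(settings):
    # Bot Framework adapters read these two settings to authenticate with channels.
    required = ("MicrosoftAppId", "MicrosoftAppPassword")
    missing = [name for name in required if not settings.get(name)]
    if missing:
        raise ValueError(f"Missing bot credentials: {missing}")
    return True

settings = {
    "MicrosoftAppId": "00000000-0000-0000-0000-000000000000",  # appId (client ID)
    "MicrosoftAppPassword": "<secret-from-key-vault>",         # appSecret (placeholder)
}
print(validate_bot_settings(settings))  # True
```

Note that tenantId, objectId, and botId are absent from the required pair, which is the distinction the question is probing.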
You are developing a method that uses the Computer Vision client library. The method will perform optical character recognition (OCR) in images. The method has the following code.
public static async Task ReadFileUrl(ComputerVisionClient client, string urlFile)
{
    const int numberOfCharsInOperationId = 36;
    var txtHeaders = await client.ReadAsync(urlFile, language: "en");
    string opLocation = txtHeaders.OperationLocation;
    string operationId = opLocation.Substring(opLocation.Length - numberOfCharsInOperationId);
    ReadOperationResult results;
    results = await client.GetReadResultAsync(Guid.Parse(operationId));
    var textUrlFileResults = results.AnalyzeResult.ReadResults;
    foreach (ReadResult page in textUrlFileResults)
    {
        foreach (Line line in page.Lines)
        {
            Console.WriteLine(line.Text);
        }
    }
}
During testing, you discover that the call to the GetReadResultAsync method occurs before the read operation is complete. You need to prevent the GetReadResultAsync method from proceeding until the read operation is complete. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
Incorrect. GetReadResultAsync requires the operation identifier to retrieve the OCR operation status and results. In the .NET SDK, this is typically passed as a Guid (hence Guid.Parse). Removing the parameter would not compile and does not address the asynchronous nature of the Read operation. The timing issue is solved by polling, not by changing how the ID is passed.
Correct. The GetReadResultAsync response includes a Status field (commonly "notStarted", "running", "succeeded", "failed"). You must check this status and only process AnalyzeResult.ReadResults after it becomes "succeeded". This is the key guard that prevents your code from attempting to read lines before the OCR operation has finished.
Incorrect. The object returned from ReadAsync primarily provides the Operation-Location header used to poll for results. The completion status is not reliably obtained from the initial response headers; instead, you must call GetReadResultAsync and inspect the returned ReadOperationResult.Status. Checking txtHeaders.Status is not the correct mechanism for determining when OCR processing has finished.
Correct. Because the operation is asynchronous, you must poll GetReadResultAsync until completion. Wrapping the GetReadResultAsync call in a loop with a delay prevents hammering the service (rate limits/throttling) and gives the backend OCR job time to finish. Combine this with status checking and a timeout for a robust, exam-appropriate solution.
Core concept: Azure AI Vision Read (OCR) is an asynchronous operation. The initial call (ReadAsync) returns immediately with an Operation-Location header that points to a server-side job. You must poll the service until the job reaches a terminal state before consuming results.

Why the answer is correct: To prevent GetReadResultAsync from proceeding before completion, you implement polling. Polling requires (1) repeatedly calling GetReadResultAsync and (2) checking the returned status until it is "succeeded" (or a terminal failure state). Therefore:
- Add code to verify the Status value of the read results (B). The Read API returns a status such as "notStarted", "running", "succeeded", or "failed". You should only iterate AnalyzeResult.ReadResults after the status is "succeeded".
- Wrap the call to GetReadResultAsync within a loop that contains a delay (D). Without a delay, you risk tight-looping (wasting CPU, hitting rate limits) and still not guaranteeing completion.

Key features / best practices:
- Use a polling loop with a sleep/backoff (for example, 1–2 seconds) and a maximum timeout to avoid infinite waits.
- Handle terminal failure states ("failed") and raise/log the error.
- This aligns with Azure Well-Architected Framework reliability and performance efficiency: avoid unnecessary calls, handle transient states, and implement timeouts.

Common misconceptions: Some assume the Operation-Location header itself changes status (it does not); it is just a URL to query. Others think removing the operation ID parameter would force synchronous behavior; the Read API is inherently asynchronous.

Exam tips: For AI-102, remember: Vision Read/OCR uses an async pattern (submit -> Operation-Location -> poll for the result). Any question about "operation not complete" typically expects "poll status in a loop with a delay until succeeded/failed."
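The two corrective actions combine into a poll-with-delay loop; a runnable sketch with a stub client standing in for the Computer Vision client (the stub, its method name, and its canned statuses imitate the Read API's "running"/"succeeded" states and are not the real SDK):

```python
import time

class StubReadClient:
    """Stand-in for the Vision client: reports 'running' twice, then 'succeeded'."""
    def __init__(self):
        self._polls = 0

    def get_read_result(self, operation_id):
        self._polls += 1
        status = "succeeded" if self._polls >= 3 else "running"
        return {"status": status,
                "analyzeResult": {"readResults": [{"lines": ["hello"]}]}}

def wait_for_read_result(client, operation_id, delay=0.01, timeout=5.0):
    # Action 1: loop with a delay between polls (avoid tight-looping/throttling).
    # Action 2: check the status field; only return once it is terminal.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = client.get_read_result(operation_id)
        if result["status"] in ("succeeded", "failed"):
            return result
        time.sleep(delay)
    raise TimeoutError("Read operation did not complete in time")

result = wait_for_read_result(StubReadClient(), "op-id")
print(result["status"])  # succeeded
```

The C# fix follows the same shape: a while loop around GetReadResultAsync with Task.Delay, exiting only when results.Status is a terminal OperationStatusCodes value.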
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure Cognitive Search service. During the past 12 months, query volume steadily increased. You discover that some search query requests to the Cognitive Search service are being throttled. You need to reduce the likelihood that search query requests are throttled. Solution: You migrate to a Cognitive Search service that uses a higher tier. Does this meet the goal?
Yes. A higher Azure Cognitive Search tier provides greater resource capacity and higher service limits, which directly reduces the chance of requests being throttled under growing query volume. Throttling (often HTTP 429) occurs when the service reaches tier/scale-unit limits for throughput or resource utilization. By migrating to a higher tier, you increase available compute and may unlock higher maximum replicas/partitions, improving query handling capacity and lowering throttling likelihood.
Answering "No" is incorrect because moving to a higher tier is a recognized mitigation for throttling caused by insufficient capacity. Higher tiers generally offer increased performance characteristics and higher limits, which helps the service absorb increased query load. While scaling out replicas is often the most direct fix for query throughput, the tier can be the limiting factor both for how far you can scale and for the baseline capacity available. Therefore, upgrading the tier can meet the goal of reducing the likelihood of throttling.
Core concept: This question tests how Azure Cognitive Search handles capacity limits and throttling, and which scaling actions reduce throttling for query (read) workloads.
Why the answer is correct: Azure Cognitive Search enforces request rate and resource limits based on the service tier and the number of replicas/partitions available. Throttling typically occurs when the service is at or near its capacity for queries (for example, too many requests per second, too much CPU/memory pressure, or too many concurrent operations). Migrating to a higher tier increases the available resources and raises service limits, which reduces the likelihood of hitting throttling thresholds under increased query volume. Therefore, moving to a higher tier is a valid way to reduce throttling risk.
Key features / configurations:
- Service tier determines overall capacity and service limits (e.g., throughput, storage, and performance characteristics).
- Replicas scale query throughput and availability (more replicas = more query capacity).
- Partitions scale index/storage and ingestion capacity (more partitions = more index capacity; they can also affect performance depending on design).
- Throttling is commonly surfaced as HTTP 429 (Too Many Requests) and can be mitigated by scaling up (tier) and/or scaling out (replicas).
Common misconceptions:
- Assuming partitions primarily fix query throttling: partitions mainly address index size and ingestion; replicas are the primary lever for query throughput.
- Thinking throttling is only a client-side issue: while retries and backoff help, sustained throttling usually indicates insufficient service capacity.
- Believing tier changes are irrelevant if replicas can be increased: some tiers cap the maximum replicas/partitions and have lower performance limits, so tier can be the gating factor.
Exam tips:
- If the problem is query throttling, think: increase replicas and/or move to a higher tier.
- If the problem is indexing/storage limits, think: increase partitions and/or move to a higher tier.
- Watch for HTTP 429 in scenarios; it often signals capacity throttling.
- Tier upgrades can raise the maximum allowed replicas/partitions and provide more headroom for throughput.
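As a client-side complement to scaling, the retry-with-backoff mitigation mentioned above can be sketched as follows. `send_query` is a hypothetical stand-in that simulates HTTP 429 responses; real code would inspect the status of an actual Azure Cognitive Search SDK or REST call instead.

```python
import time

def send_query(attempt_counter, fail_until):
    """Hypothetical stand-in for a search query call (assumption for
    illustration). Returns 429 for the first `fail_until` attempts, then 200,
    simulating a service that is briefly throttling."""
    attempt_counter[0] += 1
    return 429 if attempt_counter[0] <= fail_until else 200

def query_with_backoff(max_retries=5, base_delay=0.01, fail_until=2):
    """Retry on HTTP 429 with exponential backoff (base_delay * 2**attempt)."""
    counter = [0]
    for attempt in range(max_retries):
        status = send_query(counter, fail_until)
        if status != 429:
            return status, counter[0]
        time.sleep(base_delay * (2 ** attempt))  # back off before retrying
    return 429, counter[0]  # still throttled after all retries

status, attempts = query_with_backoff()
print(status, attempts)  # 200 3
```

Note that backoff only smooths transient spikes; as the explanation says, sustained 429s indicate the service needs more capacity (more replicas or a higher tier).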
You plan to perform predictive maintenance. You collect IoT sensor data from 100 industrial machines for a year. Each machine has 50 different sensors that generate data at one-minute intervals. In total, you have 5,000 time series datasets. You need to identify unusual values in each time series to help predict machinery failures. Which Azure service should you use?
Anomaly Detector is the best match because the requirement is to identify unusual values in time-series sensor data collected from industrial machines. Predictive maintenance commonly depends on finding abnormal telemetry patterns such as spikes, dips, or shifts in behavior over time. Among the options, this is the only service intended for anomaly detection rather than document processing, search, or image analysis. In the context of this exam question, it is the expected Azure service for analyzing sensor-based time series.
Cognitive Search is used for knowledge mining: indexing, enriching, and querying text and documents (including PDFs, Office files, and structured data). While it can store and search records, it does not provide native time-series anomaly detection algorithms for sensor telemetry. It would be useful for searching maintenance logs or manuals, not for detecting unusual numeric values in time series.
Form Recognizer (Azure AI Document Intelligence) extracts text, key-value pairs, tables, and structured fields from documents like invoices, receipts, and forms. It is not intended for analyzing numeric telemetry streams or detecting anomalies in time-series sensor data. It could help digitize maintenance reports, but it won’t identify outliers in minute-by-minute sensor readings.
Custom Vision is a computer vision service for training image classification or object detection models using labeled images. It is appropriate for visual inspection scenarios (e.g., detecting defects in photos), not for analyzing time-series sensor measurements. Since the problem is purely time-series anomaly detection across many sensors, Custom Vision does not fit the requirement.
Core concept: This question is about choosing the Azure service intended for detecting anomalies in time-series sensor telemetry. Predictive maintenance scenarios commonly analyze streams of sensor readings to find spikes, dips, and other unusual patterns that may indicate impending equipment failure.
Why the answer is correct: Among the listed options, Anomaly Detector is the service specifically associated with anomaly detection in time-series data. The scenario describes 5,000 separate sensor series collected at one-minute intervals, and the requirement is to identify unusual values in each series rather than search documents, extract form fields, or classify images. In exam terms, this maps directly to the Azure service for time-series anomaly detection.
Key features:
- Designed for numeric time-series analysis rather than text, documents, or images.
- Used to detect outliers and abnormal patterns in telemetry streams.
- Fits predictive maintenance scenarios where sensor behavior deviates from normal operating ranges.
- Supports analyzing each sensor stream independently to surface unusual readings.
Common misconceptions: Cognitive Search is for indexing and querying content, not anomaly scoring. Form Recognizer is for extracting information from documents. Custom Vision is for image classification and object detection. None of those services is intended for numeric time-series anomaly detection.
Exam tips: When a question mentions time series, telemetry, unusual values, outliers, or predictive maintenance, the expected exam answer is Anomaly Detector if it is available in the options. Do not confuse anomaly detection in sensor data with document AI, search, or computer vision tasks.
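To make "unusual values in a time series" concrete, here is a minimal z-score outlier check. This is an illustrative sketch only, not the Anomaly Detector service itself, which uses more sophisticated models and is called through its own SDK/REST endpoints; the sketch merely shows the kind of result such a service surfaces for one sensor stream.

```python
def zscore_anomalies(series, threshold=3.0):
    """Flag indexes of points more than `threshold` standard deviations from
    the series mean. A simplified stand-in for time-series anomaly scoring."""
    n = len(series)
    mean = sum(series) / n
    variance = sum((x - mean) ** 2 for x in series) / n
    std = variance ** 0.5
    if std == 0:
        return []  # a perfectly flat series has no outliers
    return [i for i, x in enumerate(series) if abs(x - mean) / std > threshold]

# Steady sensor readings with one spike at index 10
readings = [20.0] * 10 + [95.0] + [20.0] * 10
print(zscore_anomalies(readings))  # [10]
```

In the exam scenario, each of the 5,000 sensor series would be analyzed independently in this fashion, with anomalous readings feeding the predictive-maintenance workflow.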