
A developer is building a serverless image-processing API on AWS Lambda behind Amazon API Gateway. The function must read two adjustable runtime parameters: maximum parallel transformations (initially 300) and maximum images processed per minute (initially 900). Operations may update these values weekly, and each update must be rolled out automatically with a canary release (for example, 10% over 20 minutes), allow fast rollback, and apply across all function instances without redeploying the function code or causing downtime. Which solution meets these requirements?
Lambda environment variables are static per function configuration and updating them requires a function configuration update (often paired with publishing a new version/alias update). CodeDeploy AllAtOnce is the opposite of a canary and increases blast radius. This also couples config changes to Lambda deployments and does not provide a controlled 10% over 20 minutes rollout for configuration-only changes.
Using CDK/CloudFormation to manage parameters and redeploy on each change ties operational parameter updates to infrastructure deployments. While CodeDeployDefault.LambdaCanary10Percent15Minutes provides canary for Lambda version traffic shifting, it still requires publishing new versions and updating aliases, which violates the requirement to apply changes without redeploying function code and adds unnecessary deployment complexity for simple runtime limits.
AWS AppConfig is designed for dynamic application configuration with controlled deployments. Storing the limits as hosted configuration and using the AppConfig Lambda extension allows the function to read updated values at runtime (with local caching) without redeploying code or changing environment variables. AppConfig deployment strategies (e.g., Canary10Percent20Minutes) provide progressive rollout and easy rollback by redeploying a previous configuration version.
Polling S3 every 5 minutes and programmatically updating Lambda environment variables introduces delay, complexity, and failure modes (missed updates, race conditions). Updating environment variables triggers function configuration changes and typically requires publishing new versions/CodeDeploy actions, which is effectively a redeployment path. It also does not provide a native, reliable canary rollout of configuration changes across invocations comparable to AppConfig deployment strategies.
Core Concept: This question tests dynamic configuration management for serverless workloads and safe rollout/rollback of configuration changes without redeploying code. The key services are AWS AppConfig (part of AWS Systems Manager) and the AWS AppConfig Lambda extension.

Why the Answer is Correct: The requirement is to update two runtime parameters weekly, roll them out automatically with a canary (10% over 20 minutes), support fast rollback, and have the change apply across all Lambda instances without redeploying code or causing downtime. AWS AppConfig is purpose-built for this: you store the limits as hosted configuration, then deploy configuration changes using a deployment strategy such as Canary10Percent20Minutes. The AppConfig Lambda extension allows the function to fetch configuration at runtime (locally cached by the extension), so updates propagate without changing Lambda code packages, publishing new versions, or updating environment variables.

Key AWS Features:
1) AWS AppConfig hosted configuration: central, versioned configuration store.
2) Deployment strategies: built-in canary/linear strategies with bake time and automatic progression.
3) Rollback: you can quickly redeploy a prior known-good configuration version (or stop a deployment) without touching Lambda deployments.
4) Lambda extension caching: reduces latency and throttling risk by caching configuration locally per execution environment while still refreshing on a defined interval.
5) Separation of config from code: aligns with AWS Well-Architected best practices (operational excellence and reliability) by enabling controlled changes and reducing blast radius.

Common Misconceptions: Many assume Lambda environment variables are the right place for runtime parameters. However, changing environment variables requires updating the function configuration, which triggers a new function version/config update and does not provide native canary rollout of “just config” across invocations without a deployment. Similarly, using CDK/CloudFormation redeployments couples config changes to infrastructure deployments, increasing risk and operational overhead.

Exam Tips: When you see “runtime parameters,” “frequent updates,” “no redeploy,” and “canary/rollback,” think AWS AppConfig (or Parameter Store/Secrets Manager for simpler cases). If the question explicitly requires progressive rollout (10%/20 minutes) and fast rollback, AppConfig deployment strategies are the strongest match, especially with Lambda via the AppConfig extension.
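As a concrete illustration of the extension-based retrieval described above, here is a minimal Python sketch. The localhost:2772 endpoint is the AppConfig Lambda extension's documented default, while the application, environment, and profile names are hypothetical placeholders:

```python
import json
import urllib.request

def appconfig_path(app: str, env: str, profile: str) -> str:
    # Path served by the AppConfig Lambda extension's local HTTP endpoint.
    return f"/applications/{app}/environments/{env}/configurations/{profile}"

def fetch_limits(app="image-api", env="prod", profile="runtime-limits") -> dict:
    # Inside Lambda, the extension serves a locally cached copy of the
    # deployed configuration, refreshed on its own polling interval.
    url = "http://localhost:2772" + appconfig_path(app, env, profile)
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

# Example handler usage (values re-read per invocation, served from cache):
# limits = fetch_limits(); max_parallel = limits["maxParallelTransformations"]
```

Because the extension caches locally, reading the limits on every invocation stays cheap while still picking up canary-deployed changes within the refresh interval.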
A logistics company uses AWS CodeDeploy to perform in-place deployments of version 2.3.7 to 120 Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer, with MinimumHealthyHosts set to 95% and an overall deployment timeout of 15 minutes. During a rollout that aborted at 6 minutes, the CodeDeploy console showed the error 'HEALTH_CONSTRAINTS: The overall deployment failed because too many individual instances failed or too few healthy instances were available.' Which two issues could explain this failure?
Correct. The CodeDeploy agent must be installed and running on every target EC2 instance for in-place deployments. If the agent is stopped or missing on multiple instances, those instances cannot run lifecycle events or report success/failure. CodeDeploy will mark them as failed or time out waiting for status, quickly reducing the number of healthy instances and triggering HEALTH_CONSTRAINTS when the minimum healthy hosts threshold is breached.
Incorrect. The CloudWatch agent is used for publishing OS/application metrics and logs to Amazon CloudWatch. CodeDeploy does not require the CloudWatch agent to perform deployments or evaluate deployment success. While CloudWatch can help with troubleshooting (e.g., viewing logs/metrics during a rollout), its absence would not directly cause CodeDeploy to fail with HEALTH_CONSTRAINTS.
Incorrect. If the developer’s IAM user lacked codedeploy:CreateDeployment, the deployment would fail to start and would be denied immediately by IAM. The scenario describes a rollout that began and then aborted at 6 minutes due to health constraints, which indicates instance-level deployment execution/health issues rather than a caller authorization problem.
Correct. EC2 instances need an IAM instance profile with permissions to retrieve the application revision (commonly from Amazon S3, possibly with KMS decrypt) and to interact with CodeDeploy as required. If these permissions are missing, instances will fail early during DownloadBundle or related steps. Multiple instance failures reduce the healthy host count below 95%, causing CodeDeploy to abort with HEALTH_CONSTRAINTS.
Incorrect. There is no universal requirement to enable a special “CodeDeploy health checks” feature for every deployment group. CodeDeploy health behavior is driven by deployment configurations (minimum healthy hosts, batch size) and optional integrations like load balancers and alarms. HEALTH_CONSTRAINTS is typically caused by real instance failures or insufficient healthy capacity, not by a missing mandatory feature toggle.
Core Concept: This question tests AWS CodeDeploy in-place deployments with health constraints. In an Auto Scaling group behind an Application Load Balancer (ALB), CodeDeploy must keep a minimum percentage of instances healthy while it deploys. The error HEALTH_CONSTRAINTS indicates CodeDeploy stopped because it could not maintain the configured minimum healthy hosts or too many instances reported deployment failures.

Why the Answer is Correct: With 120 instances and MinimumHealthyHosts set to 95%, CodeDeploy must keep at least 114 instances healthy during the deployment. That means only up to 6 instances can be simultaneously unhealthy/out of service/failed (depending on how health is evaluated for the deployment configuration). If the CodeDeploy agent is not running on a meaningful number of instances (A), those instances cannot execute lifecycle hooks (e.g., DownloadBundle, Install, AfterInstall) and will quickly be marked as failed or never transition to success, reducing the healthy count below the threshold. Similarly, if the EC2 instances’ IAM instance profile lacks required permissions (D)—commonly s3:GetObject for the revision bundle, KMS decrypt if encrypted, and permissions to communicate status—instances will fail early when trying to fetch the application revision or report progress. Multiple early failures can rapidly violate the 95% healthy constraint, causing an abort within minutes (as observed at 6 minutes), well before the 15-minute overall timeout.

Key AWS Features: CodeDeploy relies on the CodeDeploy agent on each instance and on an instance profile with appropriate permissions. Health constraints are enforced via deployment configuration (e.g., minimum healthy hosts) and, when using load balancers, by coordinating deregistration/registration and observing instance health. In-place deployments are especially sensitive because instances are updated in batches; if too many in a batch fail, the minimum healthy threshold is breached.

Common Misconceptions: CloudWatch agent (B) is unrelated to CodeDeploy’s ability to deploy. IAM permission to create the deployment (C) would prevent starting the deployment at all, not cause a mid-deployment HEALTH_CONSTRAINTS failure. There is no mandatory “dedicated CodeDeploy health checks” feature required for every group (E); health constraints are configured through deployment configs and (optionally) load balancer integration.

Exam Tips: When you see HEALTH_CONSTRAINTS, think: “too many instance-level failures or not enough healthy capacity to continue.” Immediately check CodeDeploy agent health, instance profile permissions, and minimum healthy host percentage vs. fleet size. High minimum healthy percentages (like 95%) leave very little room for failures in large fleets.
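The healthy-host arithmetic above can be checked in a few lines (assuming fractional instance counts round up, which is moot here since 120 × 95% is exactly 114):

```python
import math

def min_healthy_hosts(fleet_size: int, percent: int) -> int:
    # MinimumHealthyHosts FLEET_PERCENT converted to an instance count.
    return math.ceil(fleet_size * percent / 100)

fleet = 120
required = min_healthy_hosts(fleet, 95)   # 114 instances must stay healthy
allowed_failures = fleet - required       # only 6 may be unhealthy at once
```

Six simultaneous failures out of 120 is a very tight margin, which is why a handful of instances with a stopped agent or a broken instance profile aborts the rollout within minutes.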
A media analytics startup needs to publish a public REST API with 7 endpoints in Amazon API Gateway within 2 weeks so partner teams can integrate now, even though its backend microservices will not be ready until the next sprint. The API must return static JSON payloads (<=2 KB) that produce HTTP 200 or 404 based on request parameters, with no backend integration and no compute or VPC costs. What solution should the developers implement?
Correct. API Gateway REST API with MOCK integration can return responses without invoking any backend. By defining method responses for 200 and 404 and using integration responses with mapping templates, developers can generate static JSON bodies and choose the status code based on request parameters. Deploying to a stage provides a stable endpoint for partners, meeting the 2-week timeline and avoiding compute/VPC costs.
Incorrect. Lambda can certainly return mocked JSON and varied HTTP status codes, but it introduces compute cost (Lambda invocations), operational overhead (code packaging, IAM, logging), and potentially VPC configuration later. The requirement explicitly states “no backend integration and no compute/VPC costs,” which MOCK integration satisfies more directly than Lambda proxy integration.
Incorrect. EC2 behind an ALB adds unnecessary infrastructure, cost, patching/maintenance, scaling considerations, and longer setup time. It violates the requirement for no compute costs and is not aligned with the goal of quickly publishing static responses with minimal operational burden. API Gateway can do this natively without standing up servers.
Incorrect. HTTP_PROXY requires a real upstream endpoint; pointing to an external mock still creates a backend dependency and potential latency/availability/security issues. Also, “attach an AWS Lambda layer to the API” is not a valid API Gateway construct—Lambda layers attach to Lambda functions, not directly to API Gateway. This option mixes incompatible concepts and doesn’t meet the no-backend requirement.
Core Concept: This question tests Amazon API Gateway’s ability to publish an API without any backend by using MOCK integrations and mapping templates to generate static responses. It also touches on cost/operational constraints (no compute/VPC) and rapid delivery for partner integration.

Why the Answer is Correct: Option A uses API Gateway REST API resources/methods with integration type MOCK, which allows API Gateway to return responses without calling any backend service. By configuring method responses (declaring 200 and 404) and integration responses (mapping templates that produce the JSON body and select the status code), the team can deliver 7 endpoints quickly and deterministically. This meets all constraints: static JSON payloads <=2 KB, conditional 200 vs 404 based on request parameters, no backend integration, and no compute or VPC costs.

Key AWS Features:
- REST API (not HTTP API) supports MOCK integration.
- Method Request can define required/optional query/path parameters.
- Integration Response selection can be driven by mapping templates (Velocity Template Language) and/or selection patterns to route to different status codes.
- Method Response must explicitly declare each status code and response model/headers.
- Stages (e.g., beta) and deployments provide a stable URL for partners; stage variables can help manage versions.

Common Misconceptions: Many assume Lambda is the simplest way to mock, but Lambda introduces compute cost, operational overhead, and potentially VPC considerations (if later attached). Others think HTTP proxying to an external mock is “no backend,” but it is still a backend dependency and adds network reliability/security concerns. EC2/ALB is clearly overkill and violates the no-compute-cost requirement.

Exam Tips: When you see “no backend,” “static responses,” and “API Gateway,” think MOCK integration (REST API) plus mapping templates and explicit method/integration responses. Also remember that API Gateway requires you to define method responses for each status code you intend to return; otherwise, you’ll get unexpected default behavior during testing.
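A rough sketch of the control-plane parameters involved, expressed as the dictionaries one might pass to boto3's apigateway put_integration and put_integration_response calls. The 'id' parameter check and the 4xx selection pattern are illustrative assumptions, not a prescribed design:

```python
# MOCK integration: the request mapping template alone decides the status
# code; no backend is ever invoked.
mock_integration = {
    "type": "MOCK",
    "requestTemplates": {
        "application/json": (
            "#if($input.params('id') == \"unknown\")\n"
            '{"statusCode": 404}\n'
            "#else\n"
            '{"statusCode": 200}\n'
            "#end"
        )
    },
}

# Integration responses: the selection pattern matches the statusCode the
# template produced, and each mapped status must also be declared as a
# method response on the method.
mock_responses = [
    {"statusCode": "200",
     "responseTemplates": {"application/json": '{"status": "ok"}'}},
    {"statusCode": "404", "selectionPattern": "4\\d{2}",
     "responseTemplates": {"application/json": '{"error": "not found"}'}},
]
```

With these in place, deploying the API to a stage yields a stable partner-facing URL that returns deterministic 200/404 JSON with no compute behind it.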
A sports analytics startup exposes a prediction API through Amazon API Gateway that invokes an AWS Lambda function. The team maintains two Lambda function versions: version 18 for PROD and version 17 for DEV, with aliases prod -> v18 and dev -> v17. Currently there is a single API Gateway stage named prod that routes to the prod alias. The company needs both PROD and DEV Lambda versions to be simultaneously and distinctly reachable under the same API using separate stages, with 100% of traffic in each stage going to its matching alias and without duplicating the API. What should the company do?
Incorrect. Lambda authorizers control authentication/authorization decisions, not which backend Lambda alias the API method integration invokes. Creating separate authorizers per alias does not route the main integration traffic to different Lambda versions. Stage variables can be used with authorizers, but that would only affect auth, not the core requirement of routing each stage’s requests to its matching Lambda alias.
Incorrect. Gateway responses in API Gateway customize responses for errors (e.g., 4XX/5XX) and can add headers or templates. They do not influence integration target selection or Lambda alias routing. Creating gateway responses for different aliases is not a valid concept; gateway responses are not tied to Lambda aliases and won’t make PROD and DEV versions separately reachable.
Incorrect. API Gateway does not provide “environment variables” in the way Lambda does. The per-stage configuration mechanism is stage variables. While you can use Lambda environment variables inside the function, that would not help API Gateway choose between aliases; the requirement is to route to distinct Lambda versions/aliases per stage, not to change behavior within one alias.
Correct. Create a new API Gateway stage (dev) and use stage variables (e.g., lambdaAlias=prod in prod stage, lambdaAlias=dev in dev stage). Reference the stage variable in the Lambda integration ARN so each stage invokes the qualified Lambda alias ARN. This keeps one API definition, provides distinct stage URLs (/prod and /dev), and ensures 100% of each stage’s traffic goes to the intended alias.
Core Concept: This question tests API Gateway stage-based deployments and how to route each stage to a different AWS Lambda alias/version without duplicating the API. The key mechanism is API Gateway stage variables combined with a Lambda integration URI/ARN that can reference those variables.

Why the Answer is Correct: To make both PROD and DEV simultaneously reachable under the same API, you create a second API Gateway stage (dev) and configure each stage to invoke a different Lambda alias (prod -> v18, dev -> v17). API Gateway stage variables are designed for exactly this: per-stage configuration (e.g., backend endpoint, Lambda alias, or other parameters) while keeping a single API definition. By defining a stage variable like lambdaAlias in each stage and referencing it in the integration ARN, each stage deterministically routes 100% of its traffic to the intended alias.

Key AWS Features:
- Lambda versions and aliases: versions are immutable; aliases are stable pointers to versions and are the recommended way to reference “prod” vs “dev” in integrations.
- API Gateway stages: separate deployment environments (e.g., /prod and /dev) under the same REST API.
- Stage variables: key/value pairs available at runtime for a stage; commonly used to parameterize integration endpoints.
- Lambda integration ARN format supports alias qualification (function ARN ending with :alias). By parameterizing the alias portion with a stage variable, you avoid duplicating methods/resources.

Common Misconceptions: Options mentioning “environment variables in API Gateway” are incorrect because API Gateway does not have environment variables in the Lambda sense; stage variables are the correct construct. Gateway responses and Lambda authorizers are unrelated to selecting which Lambda alias the backend integration invokes; they address error handling and request authorization, not backend routing.

Exam Tips: When you see “same API, multiple stages, different backends,” think “API Gateway stage variables.” When you see “Lambda versions/aliases,” remember that API Gateway can invoke a qualified Lambda ARN (alias/version). Combine these to achieve clean multi-environment deployments without cloning APIs or using separate custom domains unless explicitly required.
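The parameterized integration URI described above might be built like this; the region, account ID, and function name are placeholders, and the ${stageVariables.lambdaAlias} token is resolved by API Gateway per stage at request time:

```python
def lambda_integration_uri(region: str, account_id: str, function_name: str) -> str:
    # Qualified function ARN: API Gateway substitutes the stage variable
    # (lambdaAlias=prod in the prod stage, lambdaAlias=dev in the dev stage).
    function_arn = (
        f"arn:aws:lambda:{region}:{account_id}:function:{function_name}"
        ":${stageVariables.lambdaAlias}"
    )
    return (
        f"arn:aws:apigateway:{region}:lambda:path/2015-03-31/functions/"
        f"{function_arn}/invocations"
    )

uri = lambda_integration_uri("us-east-1", "111122223333", "prediction")
```

One operational detail worth remembering: each alias the stages can resolve to also needs a resource-based permission allowing API Gateway to invoke it.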
A media startup is building a microservice that pulls raw audio segments from a third-party transcription API and compiles them into a single MP3 file and a separate JSON transcript archive. Each resulting file can be between 2 MB and 25 MB in size. The service must encrypt these files at rest on disk by using AWS Key Management Service (AWS KMS) with a symmetric customer managed key and must decrypt a file when a user requests to download it. The retrieval and formatting code is complete. The developer now needs to use the GenerateDataKey API to encrypt the files so that they can be decrypted later. Which solution will meet these requirements?
Correct. This is the standard KMS envelope encryption workflow. Use the plaintext data key returned by GenerateDataKey to encrypt the file locally with a symmetric algorithm (e.g., AES-GCM), then discard the plaintext key. Store the encrypted data key (CiphertextBlob) on disk (often alongside the ciphertext) so you can later call KMS Decrypt to retrieve the plaintext key and decrypt the file.
Incorrect. Storing the plaintext data key on disk is a security anti-pattern and violates the intent of using KMS to protect key material. Also, the encrypted data key (CiphertextBlob) is not used directly for data encryption; it must be decrypted by KMS first to obtain the plaintext data key. This option reverses the correct roles of the two outputs.
Incorrect. While it correctly stores the encrypted data key, it incorrectly uses KMS Encrypt to encrypt the file. KMS Encrypt is designed for small payloads and is not appropriate for encrypting multi-megabyte files (2–25 MB). The correct approach is to encrypt the file locally using the plaintext data key and only use KMS to wrap/unwrap the data key.
Incorrect. It stores the plaintext data key (insecure) and also attempts to use the encrypted data key with KMS Encrypt to encrypt the file, which is conceptually wrong. The encrypted data key is meant to be stored and later decrypted with KMS Decrypt; it is not an encryption key for KMS Encrypt, nor should plaintext keys be persisted.
Core Concept: This question tests envelope encryption with AWS KMS using the GenerateDataKey API. KMS customer managed keys (CMKs) are not used to encrypt large payloads directly. Instead, KMS generates a one-time symmetric data key for local encryption, and KMS protects (wraps) that data key for later decryption.

Why the Answer is Correct: With GenerateDataKey, KMS returns two versions of the same data key: a plaintext key (for immediate use) and an encrypted key (CiphertextBlob) encrypted under the specified symmetric CMK. The correct pattern is to use the plaintext data key in your application with a symmetric algorithm (e.g., AES-GCM) to encrypt the 2–25 MB file locally, then discard the plaintext key from memory. You store the encrypted data key (CiphertextBlob) alongside the encrypted file (e.g., as metadata or a separate header). When a user downloads the file, you call KMS Decrypt on the stored CiphertextBlob to recover the plaintext data key and decrypt the file.

Key AWS Features:
- GenerateDataKey for envelope encryption (returns Plaintext + CiphertextBlob).
- Symmetric customer managed KMS key for wrapping/unwrapping data keys.
- KMS Decrypt to recover the data key later.
- Best practice: never persist plaintext keys; store only the encrypted data key and ciphertext.

Common Misconceptions: A frequent trap is thinking you should store the plaintext data key for later decryption (insecure and defeats KMS’s purpose). Another is attempting to use KMS Encrypt to encrypt the file itself; KMS Encrypt is intended for small data (KB-scale) and is inefficient/costly for multi-megabyte objects.

Exam Tips: If you see “GenerateDataKey” and “encrypt files at rest,” think envelope encryption: encrypt data locally with the plaintext data key, store the encrypted data key (CiphertextBlob), and use KMS Decrypt when needed. Also remember: KMS Encrypt/Decrypt are for small blobs; large files should be encrypted client-side with data keys.
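To make the envelope workflow concrete, here is a self-contained sketch in which a stub class stands in for KMS and a hash-based XOR keystream stands in for AES-GCM. In real code you would call boto3's generate_data_key/decrypt and a vetted cipher library, so treat every primitive below as a placeholder for the flow, not as usable cryptography:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Placeholder cipher: SHA-256 counter keystream XOR'd with the data.
    # Production code would use AES-GCM; this only illustrates the flow.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

class StubKMS:
    """Stands in for AWS KMS GenerateDataKey/Decrypt (real code uses boto3)."""
    def __init__(self):
        self._master = secrets.token_bytes(32)  # the CMK never leaves "KMS"
    def generate_data_key(self):
        plaintext = secrets.token_bytes(32)
        blob = keystream_xor(self._master, plaintext)  # wrapped data key
        return plaintext, blob
    def decrypt(self, blob):
        return keystream_xor(self._master, blob)

kms = StubKMS()

# Encrypt: get a data key, encrypt locally, persist ONLY blob + ciphertext.
original = b"compiled mp3 bytes ..."
plaintext_key, encrypted_key_blob = kms.generate_data_key()
ciphertext = keystream_xor(plaintext_key, original)
del plaintext_key  # discard the plaintext key after use

# Decrypt on download: unwrap the stored blob, then decrypt the file.
recovered_key = kms.decrypt(encrypted_key_blob)
restored = keystream_xor(recovered_key, ciphertext)
```

Note how only the ciphertext and the wrapped key (CiphertextBlob) are ever persisted; the plaintext key exists in memory just long enough to encrypt or decrypt.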
A ride-sharing company operates a real-time pricing service on Amazon EC2 instances in an Auto Scaling group across three Availability Zones behind an Application Load Balancer. Each instance must retrieve multiple application secrets (for example, a third-party billing API key and a database password) during boot and export them as environment variables. The secrets must be encrypted at rest and rotated every 30 days. Which solution will meet these requirements with the least development effort?
Storing secrets in S3 (even encrypted with a CMK) is not a best-practice secrets solution. You must build secure retrieval, parsing, and rotation semantics yourself. “S3 Object Lambda to rotate” is not a rotation mechanism; it transforms objects on retrieval and does not manage secret lifecycle, versioning, or coordinated updates with external systems. This increases development/ops effort and risk compared to Secrets Manager.
Systems Manager Parameter Store SecureString provides encryption at rest (including with KMS), and user data can fetch parameters at boot. However, rotation is not a native managed capability; you must implement scheduled Lambda rotation, update external systems (DB password/API key), handle versioning, and ensure safe cutover/rollback. Using the default KMS key also does not meet a CMK requirement if one is implied by policy.
Base64 encoding is not encryption and does not satisfy “encrypted at rest.” Embedding secrets in configuration files (or AMIs) is insecure, hard to rotate across an Auto Scaling fleet, and increases blast radius if an instance or artifact is compromised. A custom monthly script for rotation is high development and operational effort and is error-prone compared to managed secret rotation services.
Secrets Manager is designed for storing application secrets, encrypting them at rest with a customer managed KMS key, and rotating them automatically on a schedule (30 days). EC2 instances in an Auto Scaling group can use an instance profile to retrieve multiple secrets at boot and export them as environment variables. This minimizes custom code to only boot-time retrieval and (if needed) a one-time rotation Lambda for non-AWS services.
Core Concept: This question tests AWS-managed secret storage and rotation for applications running on EC2 Auto Scaling, focusing on encryption at rest, automated rotation, and minimizing development effort. The primary service is AWS Secrets Manager (purpose-built for secrets) with AWS KMS for encryption.

Why the Answer is Correct: Option D best meets all requirements with the least development effort: store multiple secrets centrally, encrypt them at rest with a customer managed KMS key (CMK), and enable built-in automatic rotation on a 30-day schedule. EC2 instances can retrieve secrets during boot (via user data) using the AWS SDK/CLI and export them as environment variables. This aligns with ephemeral, horizontally scaled instances where secrets should not be baked into AMIs or configs.

Key AWS Features:
1) Secrets Manager encryption at rest using KMS CMKs, plus fine-grained access control via IAM policies and KMS key policies.
2) Automatic rotation: Secrets Manager integrates with Lambda rotation functions and supports rotation schedules (e.g., every 30 days). For supported engines (like RDS), AWS provides templates; for third-party API keys, you implement rotation logic once in Lambda.
3) Retrieval patterns: instances use an IAM role (instance profile) to call GetSecretValue at boot; caching can be added later (Secrets Manager caching library) but is not required.
4) Operational benefits: versioning/staging labels (AWSCURRENT/AWSPREVIOUS), auditability via CloudTrail, and centralized management.

Common Misconceptions: Parameter Store SecureString can store encrypted values, but rotation is not a native managed feature; you must build and maintain rotation workflows and handle versioning/rollback yourself. S3 is not a secrets manager and adds unnecessary complexity and risk (distribution, access patterns, rotation semantics). Base64 encoding is not encryption and fails security requirements.

Exam Tips: When you see “secrets,” “rotation,” and “least development effort,” default to Secrets Manager over Parameter Store unless rotation is explicitly not needed or cost constraints dominate. Also note the distinction between “default KMS key” vs “customer managed KMS key” requirements—CMK requirements typically point to Secrets Manager (or Parameter Store with CMK), but rotation pushes strongly toward Secrets Manager.

References: AWS Secrets Manager User Guide (rotation and encryption), AWS Well-Architected Framework Security Pillar (manage secrets centrally, automate rotation, least privilege).
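A minimal sketch of the boot-time retrieval step described above, with a fake client standing in for the boto3 Secrets Manager client so the sketch runs without AWS access; the secret ID and key names are hypothetical:

```python
import json
import os

def export_secrets(client, secret_ids):
    # client: in real use, a boto3 Secrets Manager client authorized via the
    # instance profile; any object exposing get_secret_value(SecretId=...) works.
    for sid in secret_ids:
        value = client.get_secret_value(SecretId=sid)["SecretString"]
        for key, val in json.loads(value).items():
            os.environ[key] = val  # export as environment variables

class FakeSecrets:
    """Stand-in for the real client so this sketch is runnable anywhere."""
    def get_secret_value(self, SecretId):
        return {"SecretString": json.dumps({"BILLING_API_KEY": "demo"})}

export_secrets(FakeSecrets(), ["prod/pricing/billing"])
```

In user data this would run before the application starts, so every instance the Auto Scaling group launches picks up the current AWSCURRENT secret version at boot.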
A food-delivery startup runs four microservices (Orders, Payments, Courier, and Notifications) on 24 Amazon EC2 instances in two private subnets. During peak traffic (~1,500 requests per minute), the team cannot correlate log lines to a single customer checkout and needs end-to-end request tracing across services to analyze message flow and debug intermittent 502 errors. Which combination of steps should the developer take to enable per-transaction tracing across the microservices? (Choose two.)
Correct. The X-Ray daemon (or agent) must run close to the application (commonly on each EC2 instance) to receive segments/subsegments over UDP port 2000 from the SDK. The daemon then sends data to the X-Ray service over HTTPS (443). In private subnets, ensure outbound 443 via NAT or a VPC endpoint for X-Ray APIs. This is a standard required component for EC2-based tracing.
Incorrect. You do not connect to a “global AWS X-Ray daemon” over TCP 2000. Port 2000 is used locally (typically UDP) between your application and the daemon you run. While interface VPC endpoints can be used for private connectivity to the X-Ray service APIs, that connectivity is over HTTPS (443) to regional endpoints, not TCP 2000 to a managed daemon.
Incorrect. CloudWatch Logs cannot be configured to push raw application logs directly into X-Ray to create distributed traces. X-Ray traces are generated by the SDK/agent capturing request context and timing data. You can correlate logs with trace IDs (e.g., by logging the X-Ray trace ID and using CloudWatch Logs Insights), but logs ingestion is not a substitute for X-Ray instrumentation.
Correct. The AWS X-Ray SDK must be added to each microservice to create segments and propagate trace context across service boundaries (incoming requests, outgoing HTTP calls, AWS SDK calls, messaging). Without SDK instrumentation, X-Ray cannot build an end-to-end trace for a single checkout transaction, and you will not get a service map showing dependencies and latency/error hotspots.
Incorrect. CloudWatch metric streams export metrics (typically to Kinesis Data Firehose destinations) for near-real-time analytics, not per-request tracing. Metrics are aggregated and cannot correlate individual customer checkouts across microservices. For intermittent 502 debugging and message flow analysis, distributed tracing (X-Ray) and structured logging with trace IDs are the appropriate tools, not metric streaming.
Core Concept - The question is testing distributed tracing for microservices using AWS X-Ray. X-Ray provides end-to-end visibility by propagating trace context across service boundaries and collecting segments/subsegments to build a service map and per-request timeline. Why the Answer is Correct - To get per-transaction tracing across four EC2-hosted microservices, you need (1) application instrumentation to create/propagate trace IDs and record downstream calls, and (2) a local X-Ray daemon/agent to receive trace data and forward it to the X-Ray service. Option D adds the AWS X-Ray SDK to each microservice and instruments inbound/outbound requests so a single checkout request can be correlated across Orders, Payments, Courier, and Notifications. Option A installs/runs the X-Ray daemon on each EC2 instance and ensures the standard local UDP 2000 path from app to daemon plus outbound HTTPS 443 from daemon to the X-Ray service, which is required for trace submission. Key AWS Features - X-Ray SDK supports automatic instrumentation for common frameworks (HTTP servers/clients, AWS SDK calls) and manual subsegments for custom logic. The X-Ray daemon listens on UDP 2000 locally and batches/sends trace data over TLS to the regional X-Ray API endpoints on 443. In private subnets, outbound connectivity can be via NAT gateway/instance, or via AWS PrivateLink (interface VPC endpoint) for X-Ray APIs (not “a global daemon”). IAM instance roles must allow xray:PutTraceSegments and xray:PutTelemetryRecords. Common Misconceptions - CloudWatch Logs does not “push logs into X-Ray” to create traces; logs and traces are complementary but distinct telemetry types. Metric streams provide near-real-time metrics export, not request-level correlation. Also, the daemon is not a managed global endpoint you connect to on TCP 2000; UDP 2000 is local between app and daemon. 
Exam Tips - For X-Ray on EC2/ECS: think “SDK + daemon.” If the question mentions correlating a single request across microservices, you must propagate trace headers and instrument outbound calls. If instances are in private subnets, remember the daemon needs a path to X-Ray service endpoints (NAT or interface endpoint) and the instance role permissions.
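The trace-context propagation that the SDK handles for you can be illustrated with a minimal sketch: the X-Ray trace ID format (version, epoch seconds in hex, 96 random bits) and the X-Amzn-Trace-Id header that carries it to downstream services. The helper names below are illustrative, not part of the X-Ray SDK:

```python
import os
import time

def new_trace_id() -> str:
    """Build an X-Ray trace ID: '1-' + 8 hex chars of epoch seconds + 24 hex random chars."""
    return "1-{:08x}-{}".format(int(time.time()), os.urandom(12).hex())

def trace_header(trace_id: str, parent_segment_id: str, sampled: bool = True) -> str:
    """Render the X-Amzn-Trace-Id header an instrumented client sends on outbound calls."""
    return "Root={};Parent={};Sampled={}".format(
        trace_id, parent_segment_id, "1" if sampled else "0")

# Orders would attach this header when calling Payments, so both services'
# segments share one Root and can be joined into a single checkout trace.
tid = new_trace_id()
print(trace_header(tid, os.urandom(8).hex()))
```

Because every hop forwards the same Root value, the X-Ray console can stitch the four services' segments into one timeline for a single checkout.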
A travel booking platform places cancellation events into an Amazon Simple Queue Service (Amazon SQS) standard queue and, for compliance, must wait exactly 15 minutes (900 seconds) after each message arrives before invoking an AWS Lambda function to process the cancellation, even during peak loads of 2,000 messages per minute; which solution is the most operationally efficient?
Step Functions can implement a 900-second Wait, but starting an execution per message at 2,000 messages/min creates very high execution churn and cost/limits considerations. Using EventBridge to invoke every 5 minutes also does not guarantee “exactly 15 minutes after each message arrives,” because messages could sit up to ~5 minutes before being picked up for orchestration. This is more complex than necessary compared to native SQS delivery delay.
Custom polling plus republishing messages with a future timestamp is operationally heavy and error-prone. You must manage polling fleets/concurrency, handle clock/time comparisons, and deal with duplicates and message attribute logic. It also increases SQS traffic (receive + send again) and complicates failure handling. This violates the “most operationally efficient” goal when SQS already supports delivery delay natively.
Correct. Setting DelaySeconds to 900 delays the initial visibility of each message by exactly 15 minutes, so Lambda (via SQS event source mapping) cannot receive and process messages until the delay expires. This is a native, managed capability requiring minimal code and operations, and it scales with SQS/Lambda under peak throughput. It directly matches the compliance requirement of waiting after message arrival.
Visibility Timeout does not delay initial processing; it only hides a message after it has been received. With a Lambda event source mapping, Lambda would receive messages immediately and invoke the function immediately, violating the requirement to wait exactly 15 minutes after arrival. A 900-second visibility timeout is useful to allow long processing without duplicates, but it is not a scheduling/delay mechanism.
Core Concept: This question tests how to implement a fixed, per-message delay before processing SQS messages with AWS Lambda in an operationally efficient way. The key concepts are SQS delivery delay (DelaySeconds), Lambda event source mappings for SQS, and the difference between delaying message availability and controlling in-flight processing with visibility timeouts.
Why the Answer is Correct: Setting the SQS queue’s DelaySeconds to 900 (15 minutes) ensures that every newly sent message is not visible or receivable by consumers until exactly 900 seconds after it is enqueued. When you then configure a Lambda event source mapping for the queue, Lambda polls and receives messages only once they become visible—so the Lambda invocation naturally occurs only after the required 15-minute compliance delay. This scales automatically to peak loads (2,000 messages/min) because SQS and Lambda polling scale horizontally; you can further tune batch size and concurrency, but the delay behavior remains consistent per message.
Key AWS Features:
- SQS DelaySeconds (queue-level default delay) delays message delivery/visibility for newly sent messages.
- The Lambda event source mapping for SQS handles polling, batching, and scaling without custom schedulers.
- Standard queues support high throughput; ordering is not guaranteed, but the requirement is a time-based delay, not ordering.
Common Misconceptions: A frequent trap is confusing DelaySeconds with Visibility Timeout. Visibility Timeout controls how long a message stays hidden after it has been received (in flight) to prevent duplicate processing; it does not delay the initial receive. Another misconception is that you need Step Functions or custom “requeue until time” logic to implement a delay; SQS already provides a native delivery delay that is simpler and more operationally efficient.
Exam Tips:
- If the requirement is “wait before any consumer can receive the message,” think SQS DelaySeconds (or per-message delay).
- If the requirement is “prevent other consumers from seeing a message while it’s being processed,” think Visibility Timeout.
- For operational efficiency, prefer managed integrations (Lambda event source mapping) over custom polling, schedulers, or state machines unless you need complex orchestration.
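The delivery delay is a single queue attribute. A minimal sketch of the configuration (the queue name is illustrative, and the boto3 call is shown but not executed, since it requires AWS credentials):

```python
# Queue-level delivery delay: every newly sent message stays invisible for
# DelaySeconds before any consumer -- including the Lambda event source
# mapping's pollers -- can receive it.
DELAY_SECONDS = 900  # 15-minute compliance wait

queue_attributes = {
    "DelaySeconds": str(DELAY_SECONDS),  # SQS attribute values are strings
    "VisibilityTimeout": "120",          # in-flight protection only; does NOT
                                         # delay the first delivery
}

# With credentials configured, the queue would be created with:
#   import boto3
#   sqs = boto3.client("sqs")
#   sqs.create_queue(QueueName="cancellation-events", Attributes=queue_attributes)

def visible_at(enqueued_at: float, delay_seconds: int) -> float:
    """Earliest time a message can be received, given the queue's delivery delay."""
    return enqueued_at + delay_seconds

print(visible_at(0.0, DELAY_SECONDS))
```

Note that the delay applies per message from its own enqueue time, which is exactly the “wait 15 minutes after each message arrives” semantics the question asks for.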
A startup is launching a telemedicine API behind an Amazon CloudFront distribution with an Application Load Balancer origin; each viewer request contains 24 JSON fields including 8 sensitive fields (such as national ID, insurance number, and partial payment details) that must be encrypted on every transaction, and only the billing microservice behind the ALB should be able to decrypt those specific fields while other services must not, so which approach best meets these requirements while scoping decryption to a single component?
Incorrect. Although a Lambda@Edge function can invoke AWS services such as KMS, building custom per-field encryption at the edge is not the best solution for this requirement. It adds unnecessary complexity for parsing JSON, selecting fields, handling encryption logic, and managing performance on every request. CloudFront Field-Level Encryption is the native feature designed specifically to encrypt selected request fields and is therefore the better architectural choice for this scenario.
Incorrect. AWS WAF helps protect against common web exploits and can inspect/allow/block requests, but it does not encrypt request fields. Using a Lambda function with self-managed keys for encryption/decryption adds key management overhead, rotation challenges, and higher risk compared to managed services. It also doesn’t inherently ensure only one microservice can decrypt unless you build a full custom key distribution system.
Correct. CloudFront Field-Level Encryption encrypts only the specified sensitive JSON fields using an RSA public key before the request reaches the ALB. Storing the private key in Secrets Manager and restricting access to only the billing microservice’s IAM role enforces least privilege so only that component can decrypt. This meets the requirement to encrypt every transaction and scope decryption to a single service.
Incorrect. Requiring HTTPS and using signed URLs/cookies improves transport security and access control to CloudFront content, but it does not encrypt specific JSON fields within the request payload. Other services behind the ALB would still see the sensitive fields in plaintext once the request is decrypted at the edge/ALB. This fails the “only billing can decrypt” requirement.
Core Concept: This question tests Amazon CloudFront Field-Level Encryption (FLE) and least-privilege decryption design. FLE is specifically intended to encrypt only selected sensitive fields in viewer requests at the CloudFront edge using an RSA public key, so those fields remain unreadable until an authorized origin-side component decrypts them with the matching private key.
Why the Answer is Correct: Option C is the best fit because CloudFront FLE natively supports encrypting specific request fields, such as the 8 sensitive JSON fields, before forwarding the request to the ALB origin. Intermediate components and other backend services can therefore process the request without being able to read the protected values. By storing the corresponding private key securely and granting retrieval access only to the billing microservice’s IAM role, decryption is scoped to exactly one component, satisfying the least-privilege requirement.
Key AWS Features:
- CloudFront Field-Level Encryption: encrypts selected request fields at the edge using an uploaded RSA public key.
- Public/private key separation: CloudFront uses only the public key for encryption; only the holder of the private key can decrypt.
- Secrets Manager + IAM: secure storage and tightly controlled retrieval of the private key for the billing microservice only.
- Defense in depth: TLS still protects transport, while FLE protects specific application-layer fields end to end through internal hops.
Common Misconceptions: A is tempting because custom encryption with Lambda@Edge and KMS sounds flexible, but it is unnecessary and operationally complex compared to native CloudFront FLE. B is incorrect because AWS WAF is for filtering and threat protection, not payload encryption. D addresses transport security and access control, but not selective field encryption or limiting decryption to one backend service.
Exam Tips: When a question mentions CloudFront, encrypting only certain request fields, and allowing only one backend component to decrypt them, the strongest signal is CloudFront Field-Level Encryption. Prefer the managed, purpose-built service over custom Lambda-based cryptography unless the question explicitly requires custom logic unsupported by FLE.
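The field-scoping idea — encrypt only the sensitive fields and leave the rest readable — can be sketched in a few lines. Here base64 is only a stand-in for the RSA encryption CloudFront performs with the configured public key, and the field names are invented, not taken from the question:

```python
import base64
import json

# Invented field names; CloudFront FLE would match fields configured in a
# field-level encryption profile attached to the distribution.
SENSITIVE_FIELDS = {"national_id", "insurance_number", "card_last4"}

def encrypt_field(value: str) -> str:
    """Placeholder for RSA encryption with the CloudFront public key."""
    return "ENC:" + base64.b64encode(value.encode()).decode()

def protect_payload(payload: dict) -> dict:
    """Encrypt only the sensitive fields; all other fields pass through readable."""
    return {k: encrypt_field(v) if k in SENSITIVE_FIELDS else v
            for k, v in payload.items()}

request = {"national_id": "A1234567", "patient_name": "Jane", "card_last4": "4242"}
print(json.dumps(protect_payload(request)))
```

Only a service holding the private key (here, the billing microservice, retrieving it from Secrets Manager) could reverse the transformation; every other hop sees opaque ciphertext in those fields but plaintext everywhere else.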
A media analytics company must centrally manage and secure database connection secrets for an Amazon Redshift RA3 4xl cluster in us-east-1, an Amazon RDS for MySQL 8.0 Multi-AZ DB instance, and a 4-node Amazon DocumentDB (MongoDB compatibility) cluster, and the security team requires all database credentials to be encrypted at rest with AWS KMS and automatically rotated every 30 days with no manual steps while minimizing custom code and supporting future engines; which solution meets these requirements most securely?
IAM database authentication can reduce password usage for some engines (notably Amazon RDS for MySQL and Amazon Redshift), but it is not a universal solution across all the listed databases. Amazon DocumentDB authenticates with native MongoDB-style users and passwords rather than IAM database authentication, so this approach would not centrally manage or rotate those credentials. Also, token generation shifts operational burden to clients and does not meet the “rotate every 30 days” requirement for stored DB users.
Systems Manager Parameter Store SecureString supports KMS encryption at rest, but it does not provide a built-in, end-to-end automatic rotation mechanism for database credentials comparable to Secrets Manager. You would need to build and operate custom automation (e.g., Lambda/Step Functions) to rotate the actual database users/passwords and update parameters. That violates “no manual steps” and “minimizing custom code,” and it is less purpose-built for DB credential rotation.
Storing credentials in S3 (even with SSE-KMS and blocked public access) is not a secure or operationally appropriate secrets management pattern. S3 does not provide native secret rotation, version staging for rotation workflows, or tight integration with database credential updates. KMS key rotation only rotates the encryption key material, not the database password itself. This option fails the automatic 30-day credential rotation requirement and increases risk of accidental exposure.
AWS Secrets Manager is designed to centrally store and manage secrets encrypted with AWS KMS and to rotate them on a schedule (e.g., every 30 days). Using SecretsManagerRotationTemplate Lambda blueprints minimizes custom code while implementing the standard rotation lifecycle (create pending secret, set in DB, test, promote). It supports RDS for MySQL and can be used for Redshift and DocumentDB credential rotation patterns, and it is the most secure, AWS-recommended approach for scalable, future engine support.
Core Concept: This question tests centralized secrets management, encryption at rest with AWS KMS, and automated credential rotation across multiple database engines. The AWS-native service designed for this is AWS Secrets Manager, which stores secrets encrypted with KMS and supports scheduled rotation via AWS Lambda.
Why the Answer is Correct: Option D uses AWS Secrets Manager to store credentials for Amazon Redshift, Amazon RDS for MySQL, and Amazon DocumentDB, and to rotate them automatically every 30 days with no manual steps. Secrets Manager is purpose-built for database credentials: it integrates with KMS for encryption at rest, provides fine-grained access control via IAM and resource policies, supports cross-account and central-governance patterns, and offers rotation scheduling. Using the SecretsManagerRotationTemplate minimizes custom code because AWS provides rotation blueprints (Lambda templates) that implement the standard rotation steps (create new secret, set in DB, test, finalize).
Key AWS Features:
- KMS encryption at rest: secrets are encrypted using an AWS KMS key (AWS managed or customer managed), meeting the “encrypted at rest with KMS” requirement.
- Automatic rotation: native rotation schedules (e.g., every 30 days) trigger a Lambda rotation function.
- Engine support and future-proofing: Secrets Manager supports many engines and patterns; rotation templates and solutions exist for common databases and can be extended when adding future engines.
- Operational security: versioning and staging labels (AWSCURRENT/AWSPENDING), auditability via CloudTrail, and least-privilege IAM access.
Common Misconceptions:
- Confusing “store encrypted” with “rotate automatically”: many services can encrypt values, but only Secrets Manager is designed to rotate database credentials end to end.
- Assuming IAM auth replaces passwords everywhere: IAM database authentication is not universally supported across all the listed engines (notably DocumentDB), and it does not satisfy “rotate credentials” for engines that still require stored usernames and passwords.
Exam Tips: When you see “database credentials,” “automatic rotation,” “KMS encryption,” and “minimize custom code,” default to AWS Secrets Manager with rotation. Parameter Store SecureString is for configuration/secret storage but lacks native rotation workflows for DB credentials. S3 is not a secrets manager. Always verify engine compatibility for IAM database authentication and rotation templates.
Study period: 1 month
I passed with a score of 793! I solved at least 30 questions a day. It's great that I could work through them whenever I had a spare moment while out and about, haha.
Study period: 2 months
The app's questions were very similar to the exam. And the explanations helped me understand why the wrong answers were wrong.
Study period: 1 month
Thank you very much, these questions are wonderful !!!
Study period: 2 months
I passed a month ago and am only now writing this review. The question structure was similar to the actual exam.
Study period: 2 months
I just passed the exam, and I can confidently say that this app was instrumental in helping me thoroughly review the exam material.