
Simulate the real exam with 65 questions and a 130-minute time limit. Study with AI-verified answers and detailed explanations.
AI-Powered
All answers are cross-validated across three leading AI models to ensure the highest accuracy. Detailed per-option explanations and in-depth question analysis are provided.
A company runs a payment-authorization microservice on Amazon ECS with AWS Fargate, and each authorization must finish in under 2 seconds. If any single authorization takes 2 seconds or longer, the development team must be notified. How can a developer implement the timing and notification with the least operational overhead?
Correct. Publishing each authorization’s latency as a CloudWatch custom metric is the lowest-overhead way to capture the timing data in a managed AWS service. To satisfy the requirement that any single authorization taking 2 seconds or longer triggers notification, the CloudWatch alarm should evaluate the Maximum statistic for the period against a 2000 ms threshold. SNS is the standard managed notification target for CloudWatch alarms, so this design minimizes custom code and operational burden while still providing timely alerts.
Incorrect. SQS plus a custom consumer can detect >2000 ms values, but it introduces operational overhead: you must build, deploy, scale, and monitor the consumer, handle failures, retries, and poison messages. This is not the least-ops solution when CloudWatch alarms can natively evaluate thresholds and trigger SNS notifications.
Incorrect. An alarm on the average over 1 minute can miss single slow authorizations because the average may remain below 2000 ms even if one request exceeds 2 seconds. It also uses SES, which is not the standard service for CloudWatch alarm notifications (SNS is). This fails the “any single authorization” requirement.
Incorrect. Kinesis is unnecessary and adds operational complexity and cost for this use case. Also, CloudWatch alarms do not evaluate “any stream value” inside Kinesis records directly; you would still need a consumer (Lambda/KCL) to extract values and publish a metric. That defeats the least operational overhead requirement compared to direct CloudWatch custom metrics.
Core Concept: This question tests how to implement low-overhead latency monitoring and alerting for an ECS on Fargate microservice. The best AWS-native approach is to publish authorization latency as a CloudWatch custom metric and use a CloudWatch alarm to notify through Amazon SNS.

Why the Answer is Correct: The requirement is to notify the team if any single authorization takes 2 seconds or longer. By publishing each authorization’s processing time as a custom metric datapoint, the developer can configure a CloudWatch alarm on the Maximum statistic with a threshold of 2000 ms so that if any datapoint in the evaluation period is 2000 ms or higher, the alarm triggers. SNS is the standard managed notification target for CloudWatch alarms, which keeps operational overhead low.

Key AWS Features:
- CloudWatch custom metrics: publish per-authorization latency values in milliseconds, optionally with dimensions such as service name or environment.
- CloudWatch alarms: configure a short period and evaluate the Maximum statistic so a single slow authorization in that period causes a breach; high-resolution metrics can improve responsiveness.
- Amazon SNS: provides managed fan-out notifications to email, SMS, HTTPS endpoints, or incident-management integrations.

Common Misconceptions: A common mistake is assuming an average latency alarm will catch individual slow requests; averages can hide outliers. Another misconception is that services like SQS or Kinesis are needed for simple threshold detection, but they add unnecessary consumers and operational complexity. Also, CloudWatch alarms work on metric statistics over a period, so using the right statistic, such as Maximum, is important.

Exam Tips: When the requirement is 'alert if any single request exceeds a threshold' with minimal operational overhead, think custom CloudWatch metric + alarm on Maximum + SNS.
Avoid solutions that require building queue or stream consumers unless the problem explicitly requires custom processing, replay, or downstream analytics. Pay close attention to wording like 'any single event' versus 'average over time.'
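The metric-publishing side of this pattern can be sketched as follows. This is a minimal illustration, not code from the question: the namespace `PaymentAuth`, metric name `AuthorizationLatency`, and dimension values are hypothetical.

```python
import time

NAMESPACE = "PaymentAuth"            # hypothetical namespace
METRIC_NAME = "AuthorizationLatency"

def latency_datum(latency_ms, service="payment-authorizer"):
    """Build one PutMetricData datum for a single authorization's latency."""
    return {
        "MetricName": METRIC_NAME,
        "Dimensions": [{"Name": "Service", "Value": service}],
        "Value": latency_ms,
        "Unit": "Milliseconds",
        "StorageResolution": 1,      # high-resolution metric for faster alarming
    }

def publish_latency(latency_ms):
    """Publish the datum; requires cloudwatch:PutMetricData permission."""
    import boto3  # imported here so the sketch stays importable without boto3
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_data(
        Namespace=NAMESPACE,
        MetricData=[latency_datum(latency_ms)],
    )

def timed_authorization(authorize_fn, *args):
    """Wrap the real authorization call and record its wall-clock latency."""
    start = time.monotonic()
    result = authorize_fn(*args)
    publish_latency((time.monotonic() - start) * 1000.0)
    return result
```

The matching alarm would then use Statistic=Maximum, Threshold=2000, ComparisonOperator=GreaterThanOrEqualToThreshold, and the SNS topic ARN as an alarm action, so a single slow datapoint in the period breaches the alarm.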
A logistics company receives telematics events from 8,000 delivery vans at a peak of 1,200 events per second, averaging 1.5 KB per event. Three EC2-based processing applications (anomaly detection, billing, and live dashboards) must consume the same event stream concurrently in near real time with sub-second latency. Each processor must be able to pause during deployments or crashes and then resume from its last checkpoint without data loss. The team plans to add two more processors next month and wants to avoid duplicating the data pipeline for each consumer. Which AWS service should the developer use to ingest and fan out the stream to meet these requirements?
Amazon SQS is a message queue optimized for decoupling producers and consumers with at-least-once delivery, but it uses a competing-consumer model: once a message is consumed and deleted, other consumers cannot read it. You could duplicate messages to multiple queues or use SNS-to-SQS fanout, but that explicitly duplicates the pipeline per consumer and complicates replay/checkpointing semantics compared to a true stream.
Amazon Kinesis Data Firehose is a fully managed delivery service that buffers and delivers streaming data to destinations like S3, Redshift, and OpenSearch. It is not designed for multiple EC2-based applications to concurrently consume the same stream with sub-second latency and independent checkpoint/replay. Firehose focuses on delivery with buffering (seconds to minutes) and transformation, not multi-consumer real-time processing.
Amazon EventBridge is an event bus for routing events to targets with filtering and SaaS integration. While it supports multiple targets, it is not a streaming service with shard-based throughput, ordered records, and consumer-controlled checkpoints. EventBridge also does not provide the same stream replay model (per-consumer resume from sequence checkpoints) required for crash recovery and near-real-time stream processing at scale.
Amazon Kinesis Data Streams is purpose-built for real-time streaming ingestion and processing with multiple concurrent consumers. It provides low-latency reads, durable retention (24 hours to 365 days), and supports independent consumer progress via sequence numbers and checkpointing (commonly using KCL + DynamoDB). Enhanced Fan-Out can give each consumer dedicated throughput, making it ideal for adding more processors without duplicating ingestion.
Core Concept: This question tests real-time streaming ingestion with multiple concurrent consumers, low latency, and independent replay/checkpointing without duplicating pipelines. The AWS service designed for this is Amazon Kinesis Data Streams (KDS), which supports durable stream storage and multiple consumer applications reading the same data.

Why the Answer is Correct: Kinesis Data Streams ingests events at high throughput (1,200 events/sec at ~1.5 KB is ~1.8 MB/s) and provides sub-second read latency for near-real-time processing. Crucially, multiple processing applications can consume the same stream concurrently. Each application can maintain its own checkpoint (typically via KCL leases/checkpointing in DynamoDB or custom checkpointing), so if a processor pauses during deployment or crashes, it can resume from the last processed sequence number without data loss (within the stream retention window). Adding new processors next month is straightforward: they attach as additional consumers to the same stream rather than duplicating ingestion.

Key AWS Features:
- Multiple consumers: standard polling consumers or Enhanced Fan-Out (EFO) for dedicated throughput per consumer and low latency.
- Replayability: configurable retention (24 hours up to 365 days) enables reprocessing and recovery.
- Ordering and durability: records are ordered per shard and replicated across multiple AZs.
- Scaling: increase shard count (or use on-demand mode) to handle throughput growth.
- Checkpointing: the Kinesis Client Library (KCL) commonly stores checkpoints in DynamoDB, enabling resume-after-failure semantics.

Common Misconceptions: SQS is often chosen for decoupling, but it is a queue (competing consumers) rather than a stream with independent replays for multiple apps. Firehose is for delivery to destinations (S3/Redshift/OpenSearch) and is not intended for multiple near-real-time consumers with replay. EventBridge is an event bus for routing/integration; it does not provide stream-style retention with per-consumer checkpoints and sub-second streaming semantics.

Exam Tips: When you see “multiple applications must read the same events,” “near real time/sub-second,” and “resume from last checkpoint,” think Kinesis Data Streams (or Kafka/MSK). If the requirement includes “fan-out to many consumers without duplicating pipelines,” remember KDS + Enhanced Fan-Out and per-consumer checkpointing as the canonical AWS-native answer.
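The producer side of this design can be sketched as below. The stream name `telematics-events` and the event shape are illustrative assumptions; the key point is that partitioning by van ID keeps each van's events ordered within one shard while all consumers read the same stream.

```python
import json

STREAM_NAME = "telematics-events"  # hypothetical stream name

def build_put_record(event: dict) -> dict:
    """Build PutRecord parameters for one telematics event.
    Using the van ID as partition key preserves per-van ordering."""
    return {
        "StreamName": STREAM_NAME,
        "Data": json.dumps(event).encode("utf-8"),
        "PartitionKey": str(event["van_id"]),
    }

def send_event(event: dict) -> None:
    """Publish a single event; requires kinesis:PutRecord permission."""
    import boto3  # imported here so the sketch stays importable without boto3
    kinesis = boto3.client("kinesis")
    kinesis.put_record(**build_put_record(event))
```

Each of the three (soon five) processors then attaches as an independent consumer, either polling via GetRecords/KCL or via an Enhanced Fan-Out subscription, and checkpoints its own sequence number so it can resume after a deployment or crash.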
A team is launching a single-page application served through Amazon CloudFront with an Amazon S3 origin for production. The SPA calls an Amazon API Gateway REST API that invokes an AWS Lambda function which connects to an Amazon Aurora MySQL DB cluster. Production traffic uses a Lambda alias named prod that points to a specific published function version. Company policy requires rotating the database credentials every 14 days, and all previously published Lambda versions that the alias might reference must always use the latest credentials without republishing code. Which solution meets these requirements?
Correct. Secrets Manager is designed for storing and rotating database credentials. Managed rotation for Aurora MySQL can rotate every 14 days and update both the secret and the DB password. Having Lambda retrieve the secret at runtime ensures any published Lambda version that the prod alias points to will always use the latest credentials without republishing or changing the alias/version.
Incorrect. Embedding credentials in the deployment package/container image couples secrets to a specific Lambda version. Rotation would require rebuilding and redeploying the function and potentially updating the prod alias, which violates the requirement that previously published versions must use the latest credentials without republishing code. This is also a security anti-pattern (secrets in artifacts).
Incorrect. Lambda environment variables are part of the function configuration captured in a published version. Because versions are immutable, updating environment variables updates only the unpublished $LATEST configuration; older published versions that an alias might reference will still have the old environment variable values and therefore old credentials, breaking the requirement.
Incorrect. Systems Manager Parameter Store SecureString can store encrypted values, but it does not provide the same built-in, managed database credential rotation workflow as Secrets Manager for Aurora/RDS. You would need to implement custom rotation logic (including updating the DB password) and handle coordination/rollback. The option’s claim to “enable rotation there” is not the standard managed approach expected for this requirement.
Core Concept: This question tests secure secret storage and rotation for database credentials while using AWS Lambda versioning/aliases. The key constraint is that the prod alias can be moved among previously published Lambda versions, yet every version must always use the latest database credentials without republishing.

Why the Answer is Correct: AWS Secrets Manager is purpose-built for storing database credentials and rotating them on a schedule. If the Lambda function retrieves the secret value at runtime (rather than baking it into code, configuration, or a specific version), then any published Lambda version, old or new, will always fetch the current credentials. This satisfies the requirement that credential rotation happens every 14 days and that previously published versions continue to work even if the prod alias is shifted to them.

Key AWS Features: Secrets Manager supports managed rotation for Amazon Aurora MySQL credentials using an AWS-provided rotation Lambda blueprint. Rotation updates the secret and the database user password in a coordinated way. Lambda can read the secret via the Secrets Manager API (GetSecretValue) using an IAM policy scoped to the specific secret ARN. Best practice is to cache the secret in memory for a short TTL to reduce latency and API calls, but still ensure refresh after rotation. This aligns with AWS Well-Architected Security pillar guidance: store secrets centrally, rotate automatically, and avoid long-lived static credentials.

Common Misconceptions: It may seem that updating Lambda environment variables (Option C) would update all versions, but environment variables are part of the Lambda version configuration. Published versions are immutable; updating environment variables affects only $LATEST (and any newly published version), not old versions that the alias might reference. Similarly, embedding credentials in code (Option B) forces redeployments and violates the “no republishing code” requirement.
Exam Tips: When you see “rotate credentials” plus “must work across Lambda versions/aliases without redeploy,” think Secrets Manager + runtime retrieval. Also remember: Lambda versions are immutable snapshots of code and configuration (including environment variables). For database credentials, Secrets Manager is typically preferred over Parameter Store when rotation is explicitly required and especially for RDS/Aurora integrations.
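The runtime-retrieval-with-short-TTL pattern described above can be sketched like this. The secret name `prod/aurora/app-user` and the 5-minute TTL are illustrative assumptions; the fetch function is injected so the cache itself is testable without AWS access.

```python
import json
import time

CACHE_TTL_SECONDS = 300  # illustrative: refresh at most every 5 minutes

class SecretCache:
    """Tiny in-memory TTL cache so every invocation in a warm Lambda
    container does not call Secrets Manager, while still picking up a
    rotated secret shortly after rotation."""
    def __init__(self, fetch, ttl=CACHE_TTL_SECONDS):
        self._fetch = fetch
        self._ttl = ttl
        self._value = None
        self._expires = 0.0

    def get(self):
        now = time.monotonic()
        if self._value is None or now >= self._expires:
            self._value = self._fetch()
            self._expires = now + self._ttl
        return self._value

def fetch_db_secret(secret_id="prod/aurora/app-user"):  # hypothetical name
    """Fetch the current credentials; requires secretsmanager:GetSecretValue
    on this secret's ARN in the execution role."""
    import boto3  # imported here so the sketch stays importable without boto3
    client = boto3.client("secretsmanager")
    resp = client.get_secret_value(SecretId=secret_id)
    return json.loads(resp["SecretString"])  # e.g. {"username": ..., "password": ...}
```

Because every published version runs this same runtime lookup, a 14-day managed rotation in Secrets Manager takes effect for all versions the prod alias might reference, with no code republishing.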
A development team built a serverless photo‑inspection portal where the web UI triggers processing of 600–1,800 MB satellite image bundles using an AWS Step Functions workflow with AWS Lambda, and the UI is backed by a REST API on Amazon API Gateway that invokes Lambda. Start-processing requests frequently hit timeouts due to API Gateway’s 29‑second limit because jobs take 4–10 minutes, but the UI must receive an immediate response so it can show a “processing started” message, and the backend invoked by the API must send an email when processing completes. How should the developer configure the API to meet these requirements?
Correct. Setting X-Amz-Invocation-Type to Event makes API Gateway invoke Lambda asynchronously (fire-and-forget). API Gateway can return immediately to the UI so it can display “processing started,” while Lambda starts the Step Functions execution and exits quickly. The workflow can send an email upon completion (e.g., via SNS/SES or a final Lambda task), avoiding the 29-second API Gateway limit.
Incorrect. Maximum event age is a configuration for asynchronous Lambda invocations that controls how long Lambda will keep retrying an event before discarding it (and optionally sending it to a DLQ/on-failure destination). It does not change an API Gateway synchronous invocation into an asynchronous one, so it does not solve the API Gateway 29-second timeout problem.
Incorrect. You cannot increase the API Gateway REST API integration timeout beyond 29 seconds. Lambda can run up to 15 minutes, but API Gateway will still time out waiting for a response. This option reflects a common misconception that API Gateway timeout is adjustable like Lambda’s timeout; it is not, so long-running work must be decoupled.
Incorrect. X-Amz-Target is used for certain AWS service APIs (notably some JSON/Query protocols) to specify an operation target, and it is not the correct mechanism to control Lambda invocation type from API Gateway. The correct header for Lambda async invocation is X-Amz-Invocation-Type with value Event. Using X-Amz-Target here would not reliably invoke Lambda asynchronously.
Core Concept: This question tests how to design an API Gateway + Lambda integration for long-running, serverless backends. The key constraint is API Gateway’s hard integration timeout (29 seconds for REST APIs), which is incompatible with synchronous request/response patterns when the backend work takes minutes. The correct pattern is to decouple the client request from the long-running processing by invoking Lambda asynchronously (fire-and-forget) and returning immediately.

Why the Answer is Correct: Option A configures API Gateway to invoke Lambda with the InvocationType set to Event (asynchronous invocation). With async invocation, API Gateway can return an immediate 202-style response (or a custom success payload) without waiting for the Step Functions workflow to complete. The Lambda function can then start the Step Functions state machine execution (StartExecution) and exit quickly. When the workflow finishes, the backend can send an email (commonly via Amazon SNS or Amazon SES) from a final Step Functions task state or a completion Lambda.

Key AWS Features:
- API Gateway REST API timeout is not configurable beyond 29 seconds; therefore, you must avoid synchronous waiting.
- Lambda supports synchronous (RequestResponse) and asynchronous (Event) invocation. API Gateway can pass the X-Amz-Invocation-Type header to control this.
- Step Functions is designed for multi-minute workflows; use it for orchestration and trigger notifications on completion.
- Reliable completion notification: Step Functions can publish to SNS/SES directly via service integrations or invoke a Lambda to send email.

Common Misconceptions: Many assume you can “increase API Gateway timeout” to match Lambda (you cannot). Others confuse Lambda’s async retry controls (Maximum event age) with making an invocation asynchronous; those settings only apply after you are already invoking asynchronously.
Exam Tips: When you see “API Gateway timeout” plus “processing takes minutes,” the expected solution is asynchronous invocation or an async workflow pattern (return immediately with a job ID, use callbacks/webhooks, polling, or notifications). Remember: REST API Gateway timeout is fixed; design around it using decoupling (async Lambda, SQS, EventBridge, Step Functions) and completion events (SNS/SES/EventBridge).
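The Lambda side of the asynchronous pattern above can be sketched as follows. The state machine ARN and event shape are hypothetical, and the Step Functions client is injectable purely so the sketch is testable; note that with Event-type invocation the handler's return value is discarded, since API Gateway has already sent its immediate response to the UI.

```python
import json

# Hypothetical state machine ARN for the image-bundle workflow
STATE_MACHINE_ARN = (
    "arn:aws:states:us-east-1:123456789012:stateMachine:ImageBundleWorkflow"
)

def handler(event, context, sfn_client=None):
    """Invoked asynchronously by API Gateway (X-Amz-Invocation-Type: Event).
    Starts the 4-10 minute Step Functions workflow and exits quickly; the
    workflow's final task sends the completion email (e.g., via SNS/SES)."""
    if sfn_client is None:  # real client inside Lambda; stubbed in tests
        import boto3
        sfn_client = boto3.client("stepfunctions")
    execution = sfn_client.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps({"bundle": event.get("bundle")}),
    )
    # Returned for logging only; an async caller never sees this value.
    return {"status": "processing started",
            "executionArn": execution["executionArn"]}
```

API Gateway itself is configured with a non-proxy AWS integration that sets the static header X-Amz-Invocation-Type: Event and returns a mapped 200/202 response body such as {"message": "processing started"}.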
A serverless job uses an AWS Lambda function triggered by an Amazon EventBridge schedule to call a third-party currency rates API every 15 minutes, and the function must include an x-api-key while ensuring the key is encrypted at rest. Which solution should the developer implement?
Correct. Lambda environment variables can be encrypted at rest using AWS KMS. Choosing a customer managed key (CMK) provides tighter control (key policy, rotation, auditing via CloudTrail) than the default AWS managed key. At runtime, Lambda decrypts the variable, and the function can add the x-api-key header. Ensure the execution role has kms:Decrypt and the CMK policy allows it.
Incorrect. A scheduled EventBridge-triggered Lambda is designed for unattended execution; prompting an operator on first run is not feasible and would break automation. It also does not define a secure, durable, encrypted-at-rest storage mechanism for the key. Certification questions typically reject any approach requiring manual intervention for recurring serverless jobs.
Incorrect. Hardcoding the API key in source code or packaging it in the deployment artifact is a security anti-pattern. It increases the risk of exposure through repositories, CI/CD logs, artifact stores, or reverse engineering. It also makes rotation difficult and violates least privilege and secure secret management best practices expected on AWS exams.
Incorrect. HTTPS ensures encryption in transit, not encryption at rest. Running Lambda in a VPC does not automatically protect or encrypt a secret; it mainly controls network egress/ingress and can introduce operational overhead (NAT gateway for internet access, VPC endpoints). This option does not meet the explicit requirement to keep the key encrypted at rest.
Core Concept: This question tests secure secret handling for serverless workloads, specifically how AWS Lambda stores and protects sensitive configuration data (an API key) and how AWS Key Management Service (AWS KMS) provides encryption at rest.

Why the Answer is Correct: Storing the x-api-key as a Lambda environment variable and encrypting it with a customer managed KMS key (CMK) satisfies the requirement that the key is encrypted at rest. Lambda encrypts environment variables at rest and decrypts them at invocation time. By specifying a CMK instead of the default AWS managed key, the developer gains stronger control over key policy, rotation, auditing, and access boundaries. This is a common exam pattern: “secret must be encrypted at rest” + “Lambda” typically points to environment variable encryption with KMS (or Secrets Manager/Parameter Store, but those are not offered here).

Key AWS Features:
- Lambda environment variables: used to store configuration separate from code.
- Encryption helpers: Lambda integrates with AWS KMS to encrypt environment variables at rest.
- Customer managed KMS key: enables granular key policies, CloudTrail auditing of decrypt usage, and optional automatic rotation.
- IAM permissions: the Lambda execution role must have kms:Decrypt permission for the CMK (and the key policy must allow it).

Common Misconceptions:
- “HTTPS-only” protects data in transit, not at rest. VPC placement does not inherently encrypt secrets and can add complexity (NAT, endpoints).
- Embedding secrets in code is a major security anti-pattern and complicates rotation.
- Prompting an operator is incompatible with unattended scheduled execution and does not ensure secure storage.

Exam Tips: When you see “must be encrypted at rest” for configuration/credentials in Lambda, look for KMS-backed storage (environment variables, or external secret stores).
If the option explicitly mentions a CMK for Lambda environment variables, it is usually the best fit because it directly addresses encryption-at-rest and access control requirements. Also remember to consider IAM + KMS key policy alignment for successful decryption at runtime.
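The encryption-helpers variant of this pattern can be sketched as below, assuming the ciphertext is stored base64-encoded in an environment variable whose name (API_KEY_CIPHERTEXT) is illustrative. Without encryption helpers, Lambda decrypts the variable before the handler runs and the code would simply read the plaintext from os.environ.

```python
import base64
import os

def decrypt_api_key(kms_client=None):
    """Decrypt the KMS-encrypted x-api-key once, at cold start.
    The execution role needs kms:Decrypt on the CMK, and the key
    policy must allow that role."""
    ciphertext = base64.b64decode(os.environ["API_KEY_CIPHERTEXT"])
    if kms_client is None:  # real client inside Lambda; stubbed in tests
        import boto3
        kms_client = boto3.client("kms")
    resp = kms_client.decrypt(CiphertextBlob=ciphertext)
    return resp["Plaintext"].decode("utf-8")

def build_headers(api_key):
    """Request headers for the third-party currency rates API call."""
    return {"x-api-key": api_key, "Accept": "application/json"}
```

The scheduled handler would call decrypt_api_key() at module scope so warm invocations reuse the plaintext, then attach build_headers(...) to each HTTPS request.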
A fintech startup runs a Node.js microservice on AWS Lambda that calls a third-party payment gateway's REST API using a client_id and client_secret. The credentials are currently stored as plaintext environment variables in the Lambda function, and the security team mandates automatic rotation every 90 days with no code redeployments. The developer must secure the API credentials and enforce quarterly rotation while allowing the function to fetch credentials at runtime using IAM. Which solution provides the most secure approach?
KMS can encrypt/decrypt data and supports automatic KMS key rotation, but rotating a KMS key does not rotate the payment gateway client_id/client_secret. You would still be using the same credentials unless you build a separate rotation mechanism. Also, storing secrets in an encrypted file and decrypting in Lambda adds operational complexity and increases the risk of mishandling plaintext in the runtime environment.
STS provides temporary AWS credentials (AccessKeyId/SecretAccessKey/SessionToken) to call AWS services, not to authenticate to a third-party payment gateway. Retrieving STS credentials every 15 minutes does not address rotating the external API’s client_id/client_secret. This option confuses AWS identity federation/assume-role patterns with third-party secret management requirements.
Secrets Manager is designed to store and manage sensitive credentials, encrypt them with KMS, and enable automatic rotation on a schedule (e.g., every 90 days). The Lambda function can use its execution role to call Secrets Manager at runtime (least-privilege IAM) and always retrieve the current secret value without redeployment. CloudTrail auditing and optional resource policies further strengthen security.
AWS Systems Manager Parameter Store SecureString can store encrypted values and allow IAM-based runtime retrieval, but it is not the purpose-built service for managing rotating application secrets. Parameter Store does not offer the same native, managed automatic rotation workflow for third-party credentials that AWS Secrets Manager provides. To meet a strict 90-day rotation requirement, you would need to build and maintain custom automation to update both the stored value and the third-party payment gateway credentials. Because the question asks for the most secure approach with automatic rotation and no redeployments, Secrets Manager is the better and more direct choice.
Core Concept: This question tests secure storage and automated rotation of third-party application secrets for AWS Lambda, using IAM-based runtime access. The primary service is AWS Secrets Manager, which is purpose-built for managing credentials, rotation, and controlled retrieval.

Why the Answer is Correct: Option C is the most secure and operationally correct approach because Secrets Manager can store the payment gateway client_id/client_secret, encrypt them at rest with KMS, and (critically) perform automatic rotation on a 90-day schedule without requiring Lambda code redeployments. The Lambda function retrieves the secret at runtime via the Secrets Manager API using an IAM role policy (least privilege). Rotation updates the stored secret value centrally; the function simply reads the latest version each invocation (or with short-lived caching), meeting the “no redeployments” requirement.

Key AWS Features: Secrets Manager supports built-in rotation scheduling and rotation workflows (typically via a rotation Lambda). Even for third-party APIs, you can implement a custom rotation Lambda that generates a new secret and updates the third-party system if it supports credential regeneration. Secrets are encrypted with KMS, access is controlled via IAM policies and optional resource policies, and retrieval can be audited with AWS CloudTrail. This aligns with AWS Well-Architected Security Pillar practices: protect data, implement strong identity foundations, and automate security best practices.

Common Misconceptions: KMS key rotation (Option A) rotates the encryption key material, not the underlying plaintext secret value; it does not satisfy “rotate credentials every 90 days.” STS (Option B) issues AWS credentials for AWS APIs, not third-party payment gateways. Parameter Store SecureString (Option D) encrypts values and supports access control, but it does not provide a native, managed automatic rotation workflow comparable to Secrets Manager; rotation would be manual or require custom automation without the same first-class rotation features.

Exam Tips: When you see “automatic rotation,” “secrets,” and “no redeployments,” default to Secrets Manager. Distinguish between rotating encryption keys (KMS) versus rotating the secret itself. Also remember STS is for AWS temporary credentials, not external API secrets. Prefer IAM role-based runtime retrieval over environment variables for sensitive data.
A media startup mandates that all Amazon S3 buckets be provisioned exclusively through AWS CloudFormation stacks. A developer has already created an Amazon Simple Notification Service (Amazon SNS) topic and subscribed security-ops@example.com to it. The security team now requires an immediate notification (within 60 seconds) whenever a CreateBucket action occurs that was not initiated by CloudFormation. Which approach will satisfy this requirement?
Incorrect. Although Lambda can process events quickly, this option relies on an EventBridge schedule every 15 minutes, which violates the “within 60 seconds” requirement. Additionally, polling CloudTrail (especially log files delivered to S3) can introduce extra latency and complexity. The better pattern is to have EventBridge react to CloudTrail events in near real time rather than periodic scans.
Incorrect. Running an ECS Fargate task on a 15-minute schedule is also polling-based and cannot meet the 60-second notification requirement. It adds unnecessary operational overhead (task definitions, networking, IAM, cost) for a problem that EventBridge can solve natively with an event pattern and an SNS target.
Incorrect. An EC2 instance with a cron job is the most operationally heavy approach (patching, scaling, availability, credentials management) and still only runs every 15 minutes, failing the 60-second requirement. Like the other polling options, it depends on CloudTrail log availability and parsing, which is not designed for immediate detection.
Correct. An EventBridge rule can match CloudTrail CreateBucket management events and filter out CloudFormation-initiated calls using event pattern logic (for example, excluding detail.userIdentity.invokedBy values associated with CloudFormation). EventBridge can then target the existing SNS topic directly, providing near-real-time notifications (typically seconds) and meeting the 60-second requirement with minimal operational overhead.
Core Concept: This question tests near-real-time detection of unauthorized API activity using AWS CloudTrail integrated with Amazon EventBridge (formerly CloudWatch Events). The goal is to alert within 60 seconds when an Amazon S3 CreateBucket API call occurs outside the approved provisioning path (AWS CloudFormation).

Why the Answer is Correct: Option D is correct because EventBridge can directly match CloudTrail management events for s3.amazonaws.com CreateBucket and route matching events to an Amazon SNS topic immediately (typically seconds, well within 60 seconds). By filtering on fields such as detail.eventSource = "s3.amazonaws.com" and detail.eventName = "CreateBucket", and excluding CloudFormation-originated calls (for example, using an "anything-but" match on detail.userIdentity.invokedBy or other identity attributes that indicate CloudFormation), the rule triggers only when the bucket creation was not initiated by CloudFormation. This is event-driven, low-latency, and requires no polling infrastructure.

Key AWS Features:
1) CloudTrail management events: CreateBucket is a management event and can be delivered to EventBridge when CloudTrail is enabled.
2) EventBridge event patterns: support advanced matching including "anything-but" to exclude known sources.
3) SNS as a target: EventBridge can publish directly to SNS, leveraging the existing topic/subscription to email.
4) Security best practice: centralized detection/alerting without servers aligns with the AWS Well-Architected Security and Operational Excellence pillars.

Common Misconceptions: Polling CloudTrail logs (Lambda/ECS/EC2 on a schedule) seems straightforward, but it introduces detection delays (15 minutes in the options) and operational overhead. Also, parsing delivered log files in S3 is not guaranteed to meet a 60-second SLA due to delivery latency and batching. EventBridge is designed for near-real-time reaction to API calls.
Exam Tips: When you see “notify within X seconds” for API activity, prefer event-driven patterns: CloudTrail + EventBridge rule + SNS/Lambda target. Avoid scheduled polling unless the requirement explicitly allows minutes of delay. Also remember that CloudFormation-originated actions can be identified via userIdentity fields (invokedBy/arn/session context), enabling exclusion filters in EventBridge event patterns.
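A sketch of the rule described above, built as a Python dict and registered with boto3, might look like the following. The rule name is hypothetical, and one caveat is worth hedging: CloudTrail events for calls made directly by a user may omit userIdentity.invokedBy entirely, in which case the pattern needs an additional matcher (e.g., an $or with {"exists": false}), so verify the exact field shape against your own CloudTrail events before relying on it.

```python
import json

def create_bucket_alert_pattern():
    """EventBridge pattern matching CloudTrail CreateBucket calls whose
    invokedBy is anything but the CloudFormation service principal."""
    return {
        "source": ["aws.s3"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["s3.amazonaws.com"],
            "eventName": ["CreateBucket"],
            "userIdentity": {
                # Excludes CloudFormation-initiated calls; see caveat above
                # about events where invokedBy is absent.
                "invokedBy": [{"anything-but": ["cloudformation.amazonaws.com"]}],
            },
        },
    }

def deploy_rule(topic_arn):
    """Create the rule and attach the existing SNS topic as its target."""
    import boto3  # imported here so the sketch stays importable without boto3
    events = boto3.client("events")
    events.put_rule(
        Name="s3-createbucket-outside-cfn",  # hypothetical rule name
        EventPattern=json.dumps(create_bucket_alert_pattern()),
    )
    events.put_targets(
        Rule="s3-createbucket-outside-cfn",
        Targets=[{"Id": "security-ops-sns", "Arn": topic_arn}],
    )
```

The SNS topic's resource policy must also allow events.amazonaws.com to publish, which the console configures automatically when you add the target there.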
A fintech startup runs a serverless API layer in Amazon API Gateway with AWS Lambda proxy integrations for 12 internal microservice APIs. The team wants to expose a lightweight HTML API Directory page (about 6 KB) that lists links to these APIs and is served directly by API Gateway without invoking any Lambda. A developer has created a new /directory resource and a GET method using a mock integration, and the response must return HTTP 200 with Content-Type text/html. What should the developer do next to meet these requirements?
Incorrect. It places the HTML in the integration response template (good), but it sets the integration response Content-Type to application/json, which violates the requirement to return text/html. Also, setting “Content-Type of text/html” in the integration request is not what controls the client response; response headers must be configured in the method/integration response.
Incorrect. It puts the HTML in the integration request template, but the request mapping template is used to create the payload sent to (or generated for) the integration, not the body returned to the client. While it mentions text/html in the integration response, it does not correctly provide the mock statusCode selection pattern and misplaces the HTML content.
Correct. For a mock integration, the integration request mapping template commonly returns a small JSON object like {"statusCode": 200} so API Gateway can match an integration response. Then the integration response mapping template returns the static HTML body and sets Content-Type to text/html, meeting the requirement of HTTP 200 and HTML without invoking Lambda.
Incorrect. It puts the HTML in the integration request template and sets the integration response Content-Type to application/json, which conflicts with the requirement. Additionally, the request template Content-Type does not determine the outgoing response Content-Type; the response must be configured in the method/integration response settings.
Core Concept: This question tests Amazon API Gateway REST API "Mock" integration behavior and how to use mapping templates to return a static response without invoking a backend (Lambda/HTTP). With a mock integration, API Gateway generates the integration response based on a request mapping template, then transforms it into the method response using integration response settings.

Why the Answer is Correct: For a mock integration, API Gateway needs a request mapping template that produces a payload containing a status code (commonly {"statusCode": 200}) so that API Gateway can select the correct integration response. The integration response mapping template is where you place the static body you want returned to the client. Because the requirement is to return HTML with HTTP 200 and Content-Type: text/html, you configure the integration response to map to a 200 method response and set the response header Content-Type to text/html, while the body mapping template contains the HTML directory page. Option C matches this pattern: the request template sets {"statusCode": 200}, and the response template returns the HTML with text/html.

Key AWS Features:
- Mock integration: API Gateway returns a response without calling any backend.
- Mapping templates (Velocity Template Language): the integration request template generates a synthetic payload (including statusCode) used for response selection; the integration response template defines the response body (the HTML content).
- Method/integration response headers: the method response must declare the Content-Type header, and the integration response must map or override it to text/html.

Common Misconceptions: A frequent mistake is putting the HTML in the integration request template. The request template is not what the client receives; it is what API Gateway uses internally to drive integration response selection.
Another misconception is thinking the request template's "Content-Type" controls the outgoing response; response headers are controlled in the method/integration response configuration.

Exam Tips:
- For REST APIs with a mock integration, the request template often returns {"statusCode": 200}.
- The static payload belongs in the integration response mapping template.
- Always align the Method Response (declares status code/headers) with the Integration Response (maps status code/headers/body). If the question emphasizes Content-Type, ensure it is set on the response path, not the request path.
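The two mapping-template pieces described above can be sketched as the values you would enter in the API Gateway console (or pass to `put_integration` / `put_integration_response` in boto3). The HTML links and titles are placeholders, not part of the original question.

```python
import json

# Integration request mapping template (content type application/json):
# a mock integration evaluates this to obtain the statusCode that selects
# the matching integration response.
request_template = json.dumps({"statusCode": 200})

# Static HTML body returned to the client (placeholder directory content).
html_body = (
    "<!DOCTYPE html><html><head><title>API Directory</title></head>"
    "<body><h1>Internal APIs</h1><ul>"
    '<li><a href="/orders">Orders API</a></li>'
    '<li><a href="/payments">Payments API</a></li>'
    "</ul></body></html>"
)

# Integration response configuration: maps to the 200 method response,
# overrides the Content-Type header (single-quoted literal per API Gateway
# mapping syntax), and returns the HTML body as the text/html template.
integration_response = {
    "statusCode": "200",
    "responseParameters": {"method.response.header.Content-Type": "'text/html'"},
    "responseTemplates": {"text/html": html_body},
}
```

Remember that the 200 status code and the Content-Type header must also be declared on the method response so the integration response has something to map onto.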
A developer is testing an event-driven order processing system on AWS that uses Amazon EventBridge rules to trigger AWS Step Functions state machines, which invoke multiple AWS Lambda functions. During a 20-minute load test at 1,500 events per minute, intermittent 5xx errors occur, but the developer cannot quickly determine which component is failing. To pinpoint the errors, the developer must be able to search and correlate logs across all components (EventBridge, Step Functions, and Lambda) within the most recent 24 hours with minimal setup and ongoing maintenance. What should the developer do to meet these requirements with the LEAST operational overhead?
ELB health checks are irrelevant because this is an event-driven serverless workflow (no load balancer target health to check). Publishing logs as custom metrics via PutMetricData is not a log search solution and adds significant custom work and cost. Athena can query logs in S3, but this option does not provide a clear, minimal-overhead path to centralize EventBridge/Step Functions/Lambda logs into S3 in the first place.
Route 53 health checks monitor endpoints, which does not help isolate failures inside EventBridge/Step Functions/Lambda. CloudTrail captures control-plane API calls (e.g., StartExecution) but not the detailed application/runtime logs needed to pinpoint intermittent 5xx errors in Lambda code or state transitions. Querying CloudTrail with Athena can help audit who called what, but it is not the right tool for correlating execution failures across components.
Streaming logs through Kinesis Data Firehose to S3 and indexing in Amazon OpenSearch Service can enable powerful search, but it introduces substantial setup and ongoing operations: delivery streams, buffering, index management, scaling, retention, and cost tuning. This violates the “least operational overhead” requirement. It is appropriate for long-term centralized logging/search at large scale, not for quick troubleshooting with minimal maintenance.
This is the lowest-ops, most direct approach. Enable Step Functions execution logging to CloudWatch Logs (ALL level, optionally include execution data) and rely on Lambda’s native CloudWatch Logs integration. Use structured JSON logs to make correlation fields queryable. Then use CloudWatch Logs Insights to search across multiple log groups for the last 24 hours and correlate errors by orderId/eventId/executionArn, quickly identifying the failing state or function.
Core Concept: The question tests centralized observability and log analytics for serverless/event-driven architectures with the least operational overhead. The key services are Amazon CloudWatch Logs (the native log sink for Lambda and Step Functions) and CloudWatch Logs Insights (ad-hoc search, filtering, aggregation, and correlation across multiple log groups within a time window).

Why the Answer is Correct: The requirement is to quickly pinpoint intermittent 5xx errors during a load test by searching and correlating logs across EventBridge, Step Functions, and Lambda for the last 24 hours, with minimal setup and maintenance. Enabling Step Functions execution logging to CloudWatch Logs at the ALL level (optionally including execution data) provides detailed state-transition and error context. Lambda already writes to CloudWatch Logs by default; using structured JSON logging improves queryability and correlation. CloudWatch Logs Insights can query multiple log groups at once, filter by request/order IDs, error codes, or execution ARNs, and rapidly identify which Lambda invocation or Step Functions state failed.

Key AWS Features: Step Functions supports CloudWatch Logs integration with a configurable log level (ERROR/ALL) and optional inclusion of execution data (useful for debugging, but consider sensitive data). Lambda logs go to per-function log groups; adding correlation identifiers (e.g., orderId, eventId, executionArn) in JSON enables efficient Logs Insights queries. CloudWatch Logs Insights supports cross-log-group queries, time-bounded searches (e.g., the last 24 hours), and aggregations (count of errors by function or state). EventBridge does not log every event by default, but you can capture failures via rule metrics and DLQs; in practice, correlating Step Functions starts with downstream failures is usually sufficient to isolate the failing component.
Common Misconceptions: Teams often jump to OpenSearch or an ELK-style pipeline for "search," but that adds ingestion, indexing, scaling, and cost management. CloudTrail is also frequently mistaken for an application troubleshooting tool; it records control-plane API activity, not detailed per-event processing failures.

Exam Tips: For least-ops log search across Lambda/Step Functions, default to CloudWatch Logs + Logs Insights. Choose managed, native integrations over building pipelines (Firehose/OpenSearch) unless there is a stated long-term analytics/search requirement at scale. Also remember that Step Functions execution logging is opt-in and is a common missing piece when debugging distributed workflows.
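A cross-log-group Logs Insights search of the kind described above might be sketched as follows. The log-group names, the `orderId`/`level`/`statusCode` fields, and the sample order ID are assumptions about the team's structured-log schema, not part of the original question.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical log groups for the workflow's components: two Lambda
# functions plus the Step Functions execution log group.
log_group_names = [
    "/aws/lambda/ValidateOrder",             # assumed Lambda log group
    "/aws/lambda/ChargePayment",             # assumed Lambda log group
    "/aws/vendedlogs/states/OrderWorkflow",  # Step Functions execution logs
]

# Logs Insights query: surface errors/5xx entries for one order so the
# failing component can be identified by its @log (source log group).
query_string = "\n".join([
    "fields @timestamp, @log, @message",
    '| filter level = "ERROR" or statusCode >= 500',
    '| filter orderId = "ord-12345"',
    "| sort @timestamp desc",
    "| limit 100",
])

# Time window: the most recent 24 hours, as epoch seconds.
end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(hours=24)
query_params = {
    "logGroupNames": log_group_names,
    "queryString": query_string,
    "startTime": int(start_time.timestamp()),
    "endTime": int(end_time.timestamp()),
}
# With boto3 you would pass these to logs_client.start_query(**query_params)
# and poll get_query_results -- omitted here to keep the sketch self-contained.
```

Because `@log` identifies the source log group of each matched line, a single query like this shows at a glance whether the 5xx errors originate in a Lambda function or in a Step Functions state transition.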
A logistics company exposes a single REST endpoint /shipments as a POST method through Amazon API Gateway that invokes an AWS Lambda function named CreateShipment. The developer has published a new immutable version 5 of the function and must run integration tests at up to 200 requests per minute for 15 minutes. The tests must not affect the production stage, which currently serves about 3,000 requests per minute. With the least operational overhead, how should the developer test the new version before using it in production?
Adding /shipments-v2 in the existing API and deploying to the production stage introduces change into the prod stage. Even if users don’t call the new path, it still modifies production configuration and can create operational risk (deployment errors, shared stage settings like throttling/usage plans, logging changes). It also doesn’t cleanly separate test traffic from prod stage metrics and governance.
A new stage (staging) provides an isolated invoke URL while reusing the same API definition. Using a stage variable in the Lambda integration URI lets staging call Lambda version 5 (or an alias pointing to it) while prod continues to call the current version. This minimizes overhead (no duplicate API) and prevents test traffic (200 rpm for 15 minutes) from impacting the prod stage handling ~3,000 rpm.
A separate REST API would isolate testing, but it has higher operational overhead: duplicating resources/methods, authorizers, usage plans, throttling, logging, deployments, and potentially custom domain/base path mappings. It also risks configuration drift between the test API and the real production API, reducing the fidelity of integration tests compared to using a separate stage of the same API.
Pointing the existing POST method integration to version 5 and deploying to the production stage directly affects production traffic. Even if the test is short, all prod requests would invoke the new version during the test window, violating the requirement that tests must not affect the production stage. This is the highest-risk approach and not aligned with safe deployment best practices.
Core Concept: This question tests safe deployment and testing patterns for Amazon API Gateway + AWS Lambda using Lambda versions/aliases and API Gateway stages/stage variables. The goal is to run integration tests against a new immutable Lambda version without impacting the production stage.

Why the Answer is Correct: Creating a separate API Gateway stage (for example, "staging") allows the developer to test the same REST API configuration (resources, methods, auth, throttling, mapping templates) with a different backend Lambda target. By configuring the Lambda integration URI to reference a stage variable (such as ${stageVariables.lambdaAlias}), the staging stage can invoke Lambda version 5 (or an alias pointing to version 5) while the prod stage continues to invoke the current production version/alias. This isolates traffic: the tests hit the staging invoke URL, so production users and metrics remain unaffected.

Key AWS Features:
1) Lambda versions are immutable snapshots; aliases provide stable pointers (e.g., "prod", "staging") that can be shifted later.
2) API Gateway stages provide separate deployment environments with distinct invoke URLs, stage variables, throttling, logging, and canary settings.
3) Stage variables can be used in integration URIs to select the Lambda alias/version per stage dynamically, minimizing operational overhead and avoiding duplicate APIs.

Common Misconceptions: It may seem easiest to update the production integration to version 5 temporarily, but that directly impacts prod. Similarly, adding a new resource in the same prod stage still requires deploying to prod and can introduce risk (and may share throttling/usage plans). Creating a whole new REST API works but adds unnecessary overhead and configuration drift.

Exam Tips: For "test without affecting production" with "least operational overhead," look for: separate stage + stage variables + Lambda alias/version. This is a standard exam pattern for safe validation before promotion, aligning with Well-Architected principles (reliability and operational excellence) by reducing blast radius and enabling controlled rollout.
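The stage-variable mechanism above can be sketched as follows. The region, account ID, and alias names are placeholders; the URI follows the standard API Gateway Lambda-integration shape, and the `resolve` helper is a local illustration of the substitution API Gateway performs at invoke time.

```python
# Integration URI with a ${stageVariables.lambdaAlias} placeholder, so each
# stage selects its own Lambda alias. Account ID and region are placeholders.
integration_uri = (
    "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/"
    "arn:aws:lambda:us-east-1:123456789012:function:"
    "CreateShipment:${stageVariables.lambdaAlias}/invocations"
)

def resolve(uri, stage_variables):
    """Local illustration of API Gateway's stage-variable substitution --
    not an AWS API, just string replacement for demonstration."""
    for name, value in stage_variables.items():
        uri = uri.replace("${stageVariables.%s}" % name, value)
    return uri

# prod keeps invoking the current alias; staging targets version 5's alias.
prod_uri = resolve(integration_uri, {"lambdaAlias": "live"})
staging_uri = resolve(integration_uri, {"lambdaAlias": "v5-test"})
```

One operational detail worth remembering: each alias referenced this way needs its own resource-based permission allowing API Gateway to invoke it (e.g., via `lambda add-permission`), since permissions on the unqualified function do not automatically cover aliases.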