
Simulate the real exam with 65 questions and a 130-minute time limit. Study with AI-verified answers and detailed explanations.
AI-Powered
Every answer is cross-checked against three leading AI models to ensure the highest accuracy, with detailed per-choice explanations and in-depth question analysis.
A media streaming company stores its microservice code in AWS CodeCommit, and compliance requires that 100% of unit tests pass and detailed test reports be retained for 60 days and viewable by auditors; for approximately 15 commits per day to the main and develop branches, the team needs each commit to automatically build the service in a Linux build environment, run unit tests, and produce a navigable JUnit-style report without creating a custom UI, while supporting at least 8 concurrent builds and keeping the reports centrally accessible; which solution will meet these requirements?
Incorrect. AWS CodeDeploy is designed for deploying applications to EC2, Lambda, or ECS and coordinating deployment hooks, not for CI unit testing and rich test report visualization. Publishing results as CloudWatch metrics does not create a navigable JUnit-style report for auditors, and it would require additional tooling/dashboards to interpret detailed per-test-case results. It also couples unit testing to deployments, which is not a best practice for fast feedback CI.
Incorrect. Amazon CodeWhisperer is an AI coding assistant, not a CI system. It does not orchestrate builds on commits, run unit tests at scale, or provide standardized test report visualization. Storing raw outputs in S3 alone does not meet the requirement for a navigable JUnit-style report without building a UI; you would still need a viewer or reporting layer to interpret and browse results.
Correct. AWS CodeBuild can be triggered automatically from CodeCommit on each commit to main/develop, run builds and unit tests in a managed Linux environment, and ingest JUnit XML into CodeBuild Report Groups for a built-in, navigable console experience. Reports and artifacts can be exported to S3 for centralized access and governed with a 60-day lifecycle policy. CodeBuild supports scaling to at least 8 concurrent builds via managed concurrency and quota configuration.
Incorrect. Lambda is not appropriate for compiling/building typical microservices and running full unit test suites due to execution time limits, dependency/tooling complexity, and ephemeral storage constraints. Storing results in a Lambda layer is not a durable, queryable, auditor-friendly retention mechanism and does not provide a navigable JUnit report UI. This approach would effectively require building and maintaining custom CI/reporting plumbing.
Core Concept: This question tests CI build-and-test automation with AWS developer tools, specifically AWS CodeBuild’s native test reporting and scalable, managed build execution integrated with AWS CodeCommit.

Why the Answer is Correct: Option C directly satisfies all requirements: trigger on each commit to main/develop, build in a Linux environment, run unit tests, generate a navigable JUnit-style report without building a custom UI, retain reports for 60 days, and support at least 8 concurrent builds. CodeBuild integrates with CodeCommit via webhooks (or EventBridge) to start builds automatically per commit. CodeBuild Report Groups provide first-class test report ingestion and visualization in the AWS console for common formats like JUnit XML, giving auditors a built-in UI to browse test suites, cases, pass/fail status, and trends.

Key AWS Features:
1) CodeBuild managed Linux environments: choose standard images (e.g., aws/codebuild/standard) and define commands in buildspec.yml.
2) Test reporting (Report Groups): configure the reports section in buildspec.yml to collect JUnit XML; results are viewable in CodeBuild without custom tooling.
3) Central retention and auditor access: export artifacts (including raw JUnit XML and any HTML) to Amazon S3; apply an S3 Lifecycle rule to expire objects after 60 days. Use IAM policies (and optionally S3 Object Lock/WORM if required) for auditor read-only access.
4) Concurrency: CodeBuild supports parallel builds; ensure account build concurrency is set to at least 8 (request a quota increase if needed) and select appropriate compute types.

Common Misconceptions: Teams often assume CodeDeploy is for testing; it is primarily for deployment orchestration and does not provide rich, navigable unit test reporting. Others try to assemble Lambda-based build systems, but Lambda is not suited for full builds (runtime limits, tooling, storage) and would require custom report hosting.
Exam Tips: When you see “JUnit-style report,” “viewable without custom UI,” and “retain for X days,” think CodeBuild Report Groups + S3 lifecycle. For “on every commit,” look for CodeCommit webhook/EventBridge triggers. For “N concurrent builds,” remember CodeBuild concurrency quotas and scaling are managed, not self-hosted.
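As a sketch of the reporting setup described above, a buildspec.yml that collects JUnit XML into a CodeBuild report group might look like the following. The report group name, test command, and report paths are illustrative assumptions (here, a Maven/Surefire project), not details from the scenario:

```yaml
version: 0.2
phases:
  build:
    commands:
      # Hypothetical build-and-test command for a JVM microservice.
      - mvn test
reports:
  unit-test-reports:                      # report group name (assumption)
    files:
      - "target/surefire-reports/*.xml"   # JUnit XML emitted by the tests
    file-format: JUNITXML
artifacts:
  files:
    - "target/surefire-reports/**/*"      # raw reports exported for S3 retention
```

With the artifacts also written to an S3 bucket, a lifecycle rule expiring objects after 60 days would cover the retention requirement.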
A site reliability engineer using a Windows laptop must run a local PowerShell script to provision an application stack in the ap-northeast-2 Region within 10 minutes, with command-line-only access to a new AWS account that allows programmatic access but no console sign-in; what must the engineer set up on the laptop to perform the deployment?
Incorrect. The AWS CLI does not authenticate with an IAM username and password. Those credentials are used for AWS Management Console sign-in (interactive web login), which the prompt explicitly says is not allowed. For API calls from the CLI, AWS requires request signing with access keys or temporary STS credentials. Even if an IAM user exists, the CLI needs access key material, not the console password.
Incorrect. SSH keys are used to authenticate to resources like EC2 instances (SSH/RDP-related workflows), some Git operations (e.g., CodeCommit via SSH), or to establish secure shell sessions. They do not provide AWS API authentication for the AWS CLI. The AWS CLI requires IAM-based credentials (access keys or STS tokens) to sign API requests, not an SSH keypair.
Correct. Installing the AWS CLI and configuring it with an IAM user access key ID and secret access key enables programmatic access to AWS APIs from the Windows laptop. This matches the requirement of command-line-only access and no console sign-in. The engineer can also set the default region to ap-northeast-2 (or pass `--region`) to ensure the PowerShell provisioning script deploys into the correct Region quickly.
Incorrect. X.509 certificates are not the standard authentication mechanism for AWS CLI usage. Modern AWS API access is typically done with IAM access keys or temporary credentials from AWS STS (often via roles). While certificates appear in some legacy or specialized contexts, they are not what you configure for typical CLI-based provisioning from a laptop, and this option does not align with current best practices or common exam expectations.
Core Concept: This question tests AWS programmatic access from a local machine using command-line tools. Specifically, it focuses on how the AWS CLI authenticates to a new AWS account when console sign-in is not allowed.

Why the Answer is Correct: Because the engineer has “command-line-only access” and the account “allows programmatic access but no console sign-in,” the correct setup is the AWS CLI configured with IAM access keys (access key ID and secret access key). The AWS CLI uses SigV4 request signing with these credentials (or with temporary credentials from STS), not a username/password. With a Windows laptop running a local PowerShell script, installing the AWS CLI and configuring credentials (typically via `aws configure` or environment variables) is the standard and fastest way to enable the script to call AWS APIs in ap-northeast-2 within the 10-minute constraint.

Key AWS Features:
- AWS CLI credential sources: shared credentials file (~/.aws/credentials), config file (~/.aws/config), environment variables, or credential_process.
- IAM access keys: long-term programmatic credentials for an IAM user. Best practice is to prefer temporary credentials via IAM roles/STS, but the prompt implies a new account and immediate programmatic access.
- Region targeting: configure the default region (ap-northeast-2) in the CLI config or pass `--region ap-northeast-2` to commands.

Common Misconceptions: Many assume an “IAM username and password” can be used for CLI login. That is for AWS Management Console sign-in and is not used by the AWS CLI for API authentication. Others confuse SSH keys (used for EC2 instance login, CodeCommit SSH, etc.) with AWS API authentication. X.509 certificates were historically used with older mechanisms (e.g., some legacy EC2 API tools) and are not the standard for AWS CLI authentication.
Exam Tips:
- If a question says “programmatic access,” think “access key ID + secret access key” or “STS temporary credentials via role.”
- If console sign-in is disallowed, username/password is irrelevant.
- SSH keys authenticate to servers/repositories, not to AWS API endpoints.
- Always note region requirements: set a default region or specify `--region` to avoid deploying to the wrong Region.

(Reference: AWS CLI configuration and credential types in the AWS IAM/AWS CLI documentation; the AWS Well-Architected Security Pillar emphasizes least privilege and avoiding long-term credentials when possible.)
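The setup the correct answer describes boils down to two small files that `aws configure` writes under the user's home directory. The key values below are AWS's documentation placeholders, not real credentials:

```ini
# ~/.aws/credentials (written by `aws configure`; values are placeholders)
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[default]
region = ap-northeast-2
output = json
```

Once these exist, every AWS CLI call from the PowerShell script is signed with SigV4 automatically; passing `--region ap-northeast-2` on each command works equally well if no default region is set.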
A startup operates a two-node Amazon Redshift cluster across 2 Availability Zones for analytics and needs to create an AWS CloudFormation custom resource backed by an AWS Lambda function (256 MB memory, 45-second timeout) to apply additional database configuration during stack creation; the Lambda function must connect to the Redshift cluster using the cluster’s admin user credentials (username set to dw_admin) that are also provisioned at deployment—what is the MOST secure way to provide these credentials to the Lambda function while supplying them to the Redshift cluster?
Incorrect. NoEcho helps prevent the password from being displayed in the CloudFormation console output, but the password is still provided directly to Lambda via an environment variable. Environment variables can be exposed through configuration access, debugging, or accidental logging, and they are not a best-practice secret store. This approach also lacks rotation and centralized secret governance.
Incorrect. This is better than A because Lambda retrieves the value at runtime and only stores a parameter name in the environment. However, the option does not specify using SSM Parameter Store SecureString with KMS, which is critical for secure storage. Even if SecureString were used, Secrets Manager is typically the preferred service for database credentials due to native rotation and purpose-built secret lifecycle management.
Incorrect. Encrypting the parameter value with the KMS encrypt command does not solve the core problem: Lambda still needs the plaintext to connect, so you would need to manage decryption, key permissions, and ciphertext handling. This becomes a custom secret-management scheme and often results in secrets being stored or passed around in less controlled ways (e.g., environment variables), increasing operational and security risk.
Correct. This option creates a managed secret in Secrets Manager and uses a CloudFormation dynamic reference to supply the secret value to Redshift at deployment without embedding plaintext in the template. Lambda receives only the secret identifier in an environment variable and retrieves the secret at runtime using an IAM role with secretsmanager:GetSecretValue. This minimizes exposure, supports auditing and rotation, and aligns with AWS best practices for credential handling.
Core Concept: This question tests secure secret distribution during infrastructure provisioning with AWS CloudFormation, and secure runtime retrieval by AWS Lambda. It focuses on avoiding plaintext credentials in templates, parameters, logs, and environment variables, and on using managed secret services with IAM least privilege.

Why the Answer is Correct: Option D is the most secure because it uses AWS Secrets Manager as the system of record for the Redshift admin password and uses CloudFormation dynamic references to inject the secret value into the Redshift cluster’s MasterUserPassword at deployment time without hardcoding or directly exposing the secret in the template. The Lambda function is given permission (secretsmanager:GetSecretValue) to retrieve the secret at runtime, while only the secret identifier (name/ARN) is stored in the Lambda environment variable. This minimizes secret exposure, supports rotation, and aligns with best practices for credential management.

Key AWS Features / Best Practices:
- CloudFormation dynamic references (e.g., {{resolve:secretsmanager:...}}) allow secure retrieval at deploy time; CloudFormation does not store the plaintext secret in the template.
- Secrets Manager provides encryption at rest with KMS, fine-grained IAM access control, auditing via CloudTrail, and built-in rotation workflows.
- Lambda should fetch secrets at runtime (and ideally cache them briefly) rather than embedding them in environment variables.
- Least privilege: the Lambda execution role needs only secretsmanager:GetSecretValue for that specific secret.

Common Misconceptions:
- NoEcho on a CloudFormation parameter reduces console display, but the secret can still leak through other paths (stack events, logs, environment variables, CI/CD parameter handling). It is not equivalent to a managed secret store.
- Parameter Store can store SecureString values, but the option as written does not specify SecureString or a dynamic reference pattern; Secrets Manager is generally preferred for database credentials and rotation.

Exam Tips: When you see “MOST secure” and “credentials for Lambda + database provisioning,” prefer Secrets Manager + dynamic references + runtime retrieval with IAM. Avoid passing secrets via Lambda environment variables or plaintext parameters, even with NoEcho. Look for solutions that support rotation and minimize secret exposure across deployment artifacts.
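A minimal CloudFormation sketch of the pattern option D describes. The logical IDs, node type, and omitted Lambda properties are illustrative assumptions; the dw_admin username, two-node cluster, and dynamic-reference mechanism come from the scenario:

```yaml
Resources:
  DwAdminSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      GenerateSecretString:
        SecretStringTemplate: '{"username": "dw_admin"}'
        GenerateStringKey: password
        ExcludeCharacters: '"@/\'

  AnalyticsCluster:
    Type: AWS::Redshift::Cluster
    Properties:
      ClusterType: multi-node
      NumberOfNodes: 2
      NodeType: ra3.xlplus            # illustrative node type
      DBName: analytics
      MasterUsername: dw_admin
      # Dynamic reference: resolved at deploy time, never stored in the template.
      MasterUserPassword: !Sub '{{resolve:secretsmanager:${DwAdminSecret}:SecretString:password}}'

  ConfigFunction:
    Type: AWS::Lambda::Function
    Properties:
      # ... code, handler, runtime, role, 256 MB memory, 45 s timeout ...
      Environment:
        Variables:
          SECRET_ARN: !Ref DwAdminSecret   # identifier only; value fetched at runtime
```

The Lambda handler then calls secretsmanager:GetSecretValue with SECRET_ARN, so the plaintext password never appears in the template, parameters, or environment variables.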
A company runs a nationwide smart-thermostat platform where device firmware sends telemetry and commands through an existing Amazon API Gateway REST API that invokes multiple AWS Lambda functions for validation and routing. The engineering team has built updated Lambda code with new rule-processing features and needs to test exactly 5% of incoming device requests (about 12,000 requests per hour) for 48 hours without impacting the remaining traffic or changing client behavior, and they want the approach with the least ongoing operational work and minimal API changes; which solution meets these requirements with the least operational effort?
Correct. Publish new Lambda versions and create a weighted alias (5% new, 95% old). Update API Gateway to invoke the alias (qualified ARN). This keeps client behavior unchanged, requires minimal API change (integration ARN only), and provides an exact, stable traffic split for 48 hours with low operational overhead (adjust weights to end the test or promote).
Incorrect. API Gateway stage canaries can shift a percentage of requests to a canary deployment, but creating a new REST API adds unnecessary operational work and configuration duplication. Stage canaries are better suited when testing API Gateway configuration changes. For testing only Lambda code, Lambda weighted aliases are simpler and require fewer moving parts.
Incorrect. CodeDeploy canary deployments for Lambda are designed for progressive shifting toward the new version (e.g., 10% for N minutes, then 100%) with alarms and automated rollback. The requirement is to test exactly 5% for 48 hours, not to gradually roll forward. Introducing CodeDeploy increases operational complexity versus a simple alias weight.
Incorrect. Like B, it requires creating and maintaining a separate REST API and configuring non-proxy integrations and mapping templates, which is higher operational effort and more API change than needed. Non-proxy integrations add complexity and risk of mismatched mappings. The requirement is minimal API changes and least ongoing work; Lambda weighted aliases achieve this directly.
Core Concept: This question tests safe production testing/traffic shifting for AWS Lambda behind an existing Amazon API Gateway REST API, using Lambda versions and aliases (a common deployment pattern) to do controlled canary-style routing without changing clients.

Why the Answer is Correct: Option A uses Lambda function versioning plus a weighted alias to split invocations (5% to the new version, 95% to the current version). API Gateway continues to receive the same device requests and simply invokes the alias ARN instead of the unqualified function ARN. This meets the requirement to test exactly 5% of incoming requests for 48 hours with minimal operational effort: you set the weights, monitor, then revert or promote by adjusting alias weights. No client behavior changes are required, and the API surface remains essentially unchanged (only the integration target ARN changes).

Key AWS Features and Best Practices:
- Lambda versions are immutable snapshots of code/config.
- Lambda aliases are stable pointers to versions and support weighted routing between two versions for gradual deployments/canaries.
- The API Gateway REST API Lambda integration can invoke a specific version/alias by using the qualified ARN.
- Operationally, this is lightweight: no additional API stages, no new APIs, and no external deployment orchestrator required for a fixed 48-hour experiment.

Common Misconceptions: API Gateway stage canaries (options B/D) are often confused with backend canaries. Stage canaries primarily shift traffic between stage deployments/configuration (e.g., different integrations, mappings, authorizers) and typically require maintaining a canary deployment and potentially duplicating configuration. Creating a new REST API is explicitly more change and more operational overhead than necessary. CodeDeploy canary deployments (option C) are excellent for automated progressive rollouts, but they are designed to shift traffic over time toward 100% and manage alarms/rollback. The requirement is to hold at exactly 5% for 48 hours, which is simpler with alias weights and does not require introducing CodeDeploy.

Exam Tips:
- For Lambda behind API Gateway, the lowest-effort traffic split is usually “Lambda version + weighted alias,” then point the integration to the alias.
- Use an API Gateway canary when you need to test API Gateway configuration changes (mappings, authorizers, stage variables), not just new Lambda code.
- Use CodeDeploy when you want automated rollout/rollback with alarms and a progression plan, not when you need a fixed-percentage experiment for a set duration.
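In CloudFormation terms, the weighted alias from option A could be sketched as follows. The logical IDs, function name, and version numbers are hypothetical; the 5%/95% split and alias mechanism come from the scenario:

```yaml
RoutingAlias:
  Type: AWS::Lambda::Alias
  Properties:
    FunctionName: !Ref RuleProcessorFunction   # hypothetical function
    Name: live
    FunctionVersion: "41"        # current stable version receives the remaining 95%
    RoutingConfig:
      AdditionalVersionWeights:
        - FunctionVersion: "42"  # new rule-processing version
          FunctionWeight: 0.05   # exactly 5% of invocations
```

The API Gateway integration then targets the alias's qualified ARN; ending the experiment is a single weight change (0 to revert, or promote by updating FunctionVersion and removing the additional weight).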
A ride-sharing company is building an internal console that aggregates trip and payment data from 25 partner services into one dashboard. The team has automated an OAuth onboarding workflow that retrieves API client credentials (client_id and client_secret) from each partner. During stack updates, an AWS CloudFormation custom resource invokes an AWS Lambda function that fetches or refreshes those credentials. The developer needs a solution to persist the retrieved credentials with minimal operational overhead while keeping them secure at rest and in transit. Which solution will meet these requirements in the most secure way?
Incorrect. Secrets Manager GenerateSecretString is designed to generate a new secret, not to safely ingest externally retrieved credentials from a custom resource response. Also, attempting to “reference” credentials returned by a custom resource risks exposing them in CloudFormation events/state because custom resource responses and stack updates can be logged. A more secure pattern is to have Lambda write directly to Secrets Manager (PutSecretValue), not via template properties.
Correct. The Lambda function that retrieves/refreshes partner OAuth credentials can store them using SSM Parameter Store with Type=SecureString via ssm:PutParameter. SecureString encrypts the value at rest with AWS KMS and uses TLS in transit. This avoids placing secrets in the CloudFormation template or stack properties, reduces leakage risk, and has minimal operational overhead while supporting updates/rotation via Overwrite=true.
Incorrect. While CloudFormation supports creating SSM parameters and NoEcho can mask values in some displays, this option still requires passing the secret value through CloudFormation resource properties. That increases the risk of exposure through stack events, change sets, drift/diagnostics, or custom resource responses. Additionally, NoEcho is not an encryption mechanism; the secure storage property comes from SecureString, which this option does not specify.
Incorrect. NoEcho is a CloudFormation property used for parameters/outputs masking; it is not something you can “set” on an SSM parameter via the PutParameter API. Even if masking were possible, it would not provide encryption at rest. The correct API-level control for secure storage in Parameter Store is setting Type=SecureString (and optionally specifying a KMS KeyId).
Core Concept: This question tests secure secret storage for dynamically retrieved credentials during CloudFormation updates. The key services are AWS Systems Manager Parameter Store (SecureString) and AWS Secrets Manager, plus CloudFormation custom resources and how to avoid leaking secrets in templates, events, or logs.

Why the Answer is Correct: Option B is best because the custom resource Lambda already fetches/refreshes the OAuth client_id/client_secret, and it can immediately persist them using ssm:PutParameter with Type=SecureString. SecureString encrypts the value at rest using AWS KMS and uses TLS in transit via the AWS SDK. This approach avoids embedding secrets into the CloudFormation template or stack state, minimizing operational overhead while keeping secrets protected.

Key AWS Features:
- Parameter Store SecureString: encrypted at rest with KMS (AWS managed key by default, or a customer managed key via KeyId), and decrypted only for authorized principals.
- IAM least privilege: restrict Lambda to ssm:PutParameter (and optionally ssm:GetParameter) on specific parameter ARNs and kms:Encrypt/Decrypt for the chosen key.
- Versioning/overwrite: PutParameter supports Overwrite=true to rotate/refresh values cleanly.
- Avoiding CloudFormation exposure: CloudFormation can record resource properties and custom resource responses in stack events; keeping the secret write inside Lambda reduces the chance of accidental disclosure.

Common Misconceptions: Many assume “NoEcho” makes any secret safe in CloudFormation. NoEcho only masks values in certain outputs/console views; it does not provide encryption at rest for the stored value unless the backing service does (and it doesn’t apply to SDK calls). Another misconception is that Secrets Manager is always the “most secure”; it is excellent, but CloudFormation cannot safely “set” a secret value from a custom resource return without risking exposure, and GenerateSecretString is for generating secrets, not ingesting externally retrieved credentials.

Exam Tips:
- If credentials are fetched dynamically (custom resource/Lambda), store them directly from code into a secret store (Parameter Store SecureString or Secrets Manager via PutSecretValue) rather than passing them through CloudFormation properties.
- NoEcho is about masking display, not encryption.
- Choose SecureString when you want low operational overhead and KMS-backed encryption; choose Secrets Manager when you need built-in rotation workflows, cross-account replication, and richer secret lifecycle features.
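A sketch of the Lambda-side write in option B. The parameter naming scheme, the partner_id variable, and the helper names are assumptions for illustration; only the SecureString/Overwrite semantics come from the explanation above:

```python
import json


def secure_parameter_kwargs(name, value, kms_key_id=None):
    """Build the ssm:PutParameter arguments for an encrypted SecureString."""
    kwargs = {
        "Name": name,
        "Value": value,
        "Type": "SecureString",  # encrypted at rest with AWS KMS
        "Overwrite": True,       # lets the custom resource refresh on stack updates
    }
    if kms_key_id:               # optional customer managed key
        kwargs["KmsKeyId"] = kms_key_id
    return kwargs


def store_partner_credentials(partner_id, client_id, client_secret):
    """Persist externally retrieved OAuth credentials straight from the
    custom resource Lambda, so they never pass through template properties."""
    import boto3  # imported lazily; the helper above stays testable offline

    payload = json.dumps({"client_id": client_id, "client_secret": client_secret})
    boto3.client("ssm").put_parameter(
        **secure_parameter_kwargs(f"/partners/{partner_id}/oauth", payload)
    )
```

The Lambda execution role would then need ssm:PutParameter on the /partners/* parameter ARNs plus kms:Encrypt for the chosen key, and nothing secret ever appears in the CloudFormation stack events.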
A fintech startup runs a payment-processing AWS Lambda function triggered by Amazon EventBridge; during a 30-minute flash sale in us-west-2, Amazon CloudWatch shows the Throttles metric spiking to 900 per minute while the account-level concurrent executions quota is 1,000 and another critical Lambda has 800 reserved concurrency, so the team wants the MOST operationally efficient actions to reduce throttling for the payment function without re-architecting the workload (Choose two).
Incorrect. Moving the workload to ECS/Fargate can avoid Lambda concurrency limits, but it is a re-architecture (new deployment model, scaling policies, networking, observability, and potentially different failure modes). The question explicitly asks to reduce throttling without re-architecting and to choose the most operationally efficient actions. Therefore, this is not an appropriate answer for an exam scenario focused on quick Lambda operational controls.
Incorrect. EventBridge’s maximum event age and retry policy affect how long EventBridge will keep retrying delivery to Lambda when invocations fail or are throttled. Increasing the window can reduce the chance of event loss during prolonged throttling, but it does not reduce throttling itself; it can extend the backlog and delay processing. It’s a durability/latency tradeoff, not a capacity fix.
Correct. Setting reserved concurrency for the payment Lambda (e.g., 300) guarantees it can use up to that many concurrent executions regardless of other functions’ demand. This directly addresses the scenario where another function’s 800 reserved concurrency leaves too little unreserved capacity, causing throttles. It is operationally simple, fast to implement, and aligns with best practices for protecting critical workloads from noisy neighbors.
Incorrect. IAM permissions such as lambda:GetFunctionConcurrency only allow an identity to read concurrency settings via the Lambda API. They do not change runtime scaling behavior, do not increase available concurrency, and do not reduce throttling. The function’s execution role is also not the right place for permissions used by operators or CI/CD systems to query account/function settings.
Correct. Requesting a regional Lambda concurrency quota increase via AWS Service Quotas increases the total concurrency available in the Region (e.g., from 1,000 to 2,500). If flash-sale demand legitimately requires more total concurrent executions across functions, this is the correct capacity lever. Combined with reserved concurrency for the payment function, it reduces throttling both by raising the ceiling and by ensuring the payment function has guaranteed capacity.
Core Concept: This question tests AWS Lambda concurrency controls and throttling behavior, especially the interaction between account-level (regional) concurrency quotas and function-level reserved concurrency. It also touches on event source retry behavior (EventBridge) and operationally efficient mitigations.

Why the Answer is Correct: Throttles spiking while the regional quota is 1,000 and another function has 800 reserved concurrency strongly implies the payment function is competing for the remaining unreserved pool (about 200 concurrent executions). During the flash sale, demand exceeds that pool, so Lambda throttles invocations. The most operationally efficient fixes that do not re-architect are: (C) allocate reserved concurrency to the payment function so it has a guaranteed slice of concurrency even when other functions consume capacity, and (E) request a regional concurrency quota increase so the total available concurrency can meet peak demand.

Key AWS Features: Reserved concurrency guarantees a function can scale up to that number and also caps it (protecting downstream systems). Importantly, reserved concurrency is carved out of the regional quota, preventing noisy-neighbor effects from other functions. Service Quotas lets you request increases to the regional Lambda concurrency limit, which directly reduces throttling when overall demand exceeds the account quota. EventBridge will retry throttled Lambda invocations, but retries do not eliminate throttling; they only defer processing.

Common Misconceptions: Increasing the retry window/event age (B) can reduce data loss but does not reduce throttling; it may increase backlog and prolong recovery. Permissions changes (D) do not affect concurrency. Rebuilding on ECS (A) is a re-architecture and not the most operationally efficient immediate action.
Exam Tips: When you see throttles with high traffic, first determine whether the bottleneck is function reserved concurrency, regional concurrency quota, or an event source concurrency limit. If another function has large reserved concurrency, remember it reduces the shared pool for all other functions. The fastest levers are (1) reserved concurrency to guarantee capacity for critical functions and (2) requesting quota increases when the account-level ceiling is the limiting factor.
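The arithmetic behind this explanation can be made concrete with a small helper. The quota and reservation figures come from the scenario and the correct options; the helper name is illustrative:

```python
def unreserved_pool(regional_quota, reserved_by_function):
    """Concurrency left over, shared by all functions without reserved concurrency."""
    return regional_quota - sum(reserved_by_function.values())


# Before any change: 1,000 regional quota, the other critical function reserves
# 800, so the payment function competes for a shared pool of only 200.
pool_before = unreserved_pool(1000, {"critical-fn": 800})

# After the two correct actions: quota raised to 2,500 via Service Quotas, and
# the payment function reserves 300 slots of its own.
reserved = {"critical-fn": 800, "payment-fn": 300}
pool_after = unreserved_pool(2500, reserved)

print(pool_before)             # 200 shared slots: the source of the throttles
print(reserved["payment-fn"])  # 300 now guaranteed for payments
print(pool_after)              # 1400 still left for everything else
```

The payment function goes from contending for ~200 shared slots to a guaranteed 300, while the rest of the account gains headroom rather than losing it.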
A media company ingests video playback telemetry into an Amazon Kinesis Data Streams stream from its web player, and an AWS Lambda function that is triggered by the stream logs and processes each playback event as structured JSON. An operational review shows that some events have playbackDurationSeconds set to 0. A developer must build a dashboard that shows how many unique viewerIds are impacted each day (grouped in 1-day periods); what should the developer do to implement the dashboard?
Correct. Lambda logs can be stored in CloudWatch Logs (typically via AWSLambdaBasicExecutionRole). CloudWatch Logs Insights can filter playbackDurationSeconds = 0, compute count_distinct(viewerId), and group by daily bins (bin(1d)). A Logs Insights widget can be added directly to a CloudWatch dashboard, meeting the requirement with minimal additional infrastructure.
Incorrect. Athena queries data in S3 (and other supported sources via connectors), not CloudTrail “API logs” for application telemetry. CloudTrail records AWS API calls (e.g., PutRecord to Kinesis), not the playback event fields like viewerId or playbackDurationSeconds. Even if logs were in S3, the option’s stated data source (CloudTrail API logs) is wrong for this use case.
Incorrect. EventBridge is an event bus for routing and filtering events, not for performing aggregations like distinct viewer counts grouped into 1-day periods. Also, a CloudWatch dashboard cannot be a target of an EventBridge rule. You would need a downstream processor (Lambda/Kinesis/Firehose) to aggregate and then publish metrics or store results elsewhere.
Incorrect. Kinesis Data Streams provides built-in metrics (e.g., IncomingBytes, GetRecords.IteratorAgeMilliseconds) and you can publish custom CloudWatch metrics, but CloudWatch metrics do not natively compute distinct counts of viewerIds from raw events. An alarm is for thresholding a metric, not for producing a daily distinct-count report. You’d need custom aggregation logic before emitting metrics.
Core Concept: This question tests using Amazon CloudWatch Logs + CloudWatch Logs Insights + CloudWatch Dashboards to derive operational analytics from application logs. The key requirement is counting distinct viewerIds that meet a condition (playbackDurationSeconds = 0) and grouping by 1-day time bins.

Why the Answer is Correct: The Lambda function already “logs and processes each playback event as structured JSON.” If those JSON events (or a subset containing viewerId and playbackDurationSeconds) are written to CloudWatch Logs, CloudWatch Logs Insights can query the log group directly. Logs Insights supports filtering on JSON fields, computing distinct counts (e.g., count_distinct(viewerId)), and time-binning (e.g., bin(1d)) to produce daily aggregates. CloudWatch dashboards can embed a Logs Insights query widget to visualize the daily impacted unique viewers without building a separate data lake or metrics pipeline.

Key AWS Features:
1) CloudWatch Logs structured logging: Lambda automatically writes to CloudWatch Logs when it has permissions (AWSLambdaBasicExecutionRole). Using JSON logs enables field extraction for Insights.
2) CloudWatch Logs Insights: purpose-built for interactive log analytics, including filter expressions, stats aggregations, and time-series grouping with bin().
3) CloudWatch Dashboards: can display Logs Insights query results as widgets for near-real-time operational visibility.

Common Misconceptions: A frequent trap is assuming CloudTrail or EventBridge can be used to “query events.” CloudTrail records AWS API activity, not application telemetry. EventBridge routes events but does not provide built-in distinct-count aggregations over time windows for dashboards. Another misconception is to use CloudWatch metrics/alarms for distinct viewer counts; CloudWatch metrics are numeric time series and do not natively support distinct counting over high-cardinality IDs without custom aggregation logic.
Exam Tips: When the question mentions “logs as structured JSON” and asks for ad-hoc aggregation (distinct counts, grouping by day) plus a dashboard, think CloudWatch Logs Insights + Dashboard widget. Choose Athena only when data is in S3 (data lake) and you need SQL over stored objects. Choose CloudWatch metrics/alarms for thresholding numeric metrics, not for high-cardinality distinct analytics.
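To make the pattern concrete, here is a sketch of the kind of Logs Insights query described above, together with a small local Python simulation of what count_distinct(viewerId) by bin(1d) computes. The field names (viewerId, playbackDurationSeconds, timestamp) come from the scenario, not a confirmed schema, and the simulation is an illustration, not the Logs Insights engine.

```python
import json
from collections import defaultdict
from datetime import datetime, timezone

# Roughly equivalent CloudWatch Logs Insights query (field names are
# assumptions based on the scenario):
#
#   filter playbackDurationSeconds = 0
#   | stats count_distinct(viewerId) as impactedViewers by bin(1d)

def daily_impacted_viewers(log_lines):
    """Locally simulate the query: distinct viewerIds per UTC day
    among events whose playbackDurationSeconds is 0."""
    per_day = defaultdict(set)
    for line in log_lines:
        event = json.loads(line)
        if event.get("playbackDurationSeconds") == 0:
            day = datetime.fromtimestamp(
                event["timestamp"], tz=timezone.utc
            ).date().isoformat()
            per_day[day].add(event["viewerId"])
    return {day: len(viewers) for day, viewers in per_day.items()}

events = [  # synthetic structured-JSON playback events
    json.dumps({"timestamp": 1700000000, "viewerId": "v1", "playbackDurationSeconds": 0}),
    json.dumps({"timestamp": 1700000100, "viewerId": "v1", "playbackDurationSeconds": 0}),
    json.dumps({"timestamp": 1700000200, "viewerId": "v2", "playbackDurationSeconds": 0}),
    json.dumps({"timestamp": 1700000300, "viewerId": "v3", "playbackDurationSeconds": 42}),
]
print(daily_impacted_viewers(events))  # one day, 2 distinct impacted viewers
```

Note that the duplicate v1 event counts once and the 42-second event is filtered out, which is exactly what count_distinct with the filter clause gives you over real log data.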
A real-time ticketing web app runs on an Auto Scaling group of 6 Amazon EC2 instances behind an Application Load Balancer, keeps user cart and login state in memory per client (up to 64 KB per session) with 1,500 writes/s and 1,500 reads/s, must survive sudden instance termination or scale-in events without losing any session data, recover within 1 second, and remain highly available across multiple AZs—what should the developer do to ensure sessions are not lost if an EC2 instance fails?
Sticky sessions on an ALB can improve user experience by consistently routing a client to the same target, which can reduce the need to externalize session state. However, it does not satisfy the requirement that sessions must survive sudden instance termination or scale-in. If the pinned EC2 instance fails or is terminated, the in-memory session is lost and cannot be recovered within 1 second.
Amazon SQS Standard queues are for decoupling and asynchronous processing, not for synchronous session storage. Reading session state “back when needed” is not a natural SQS pattern (queues are not key-value stores), and reconstructing the latest session from a stream of messages is complex, error-prone, and may not meet the 1-second recovery requirement. This also introduces ordering and duplication concerns with Standard queues.
Storing session state in DynamoDB makes the application tier stateless so any EC2 instance in any AZ can serve any request. DynamoDB is multi-AZ, durable, and supports high throughput with low latency, fitting 1,500 reads/s and 1,500 writes/s. A 30-minute TTL automatically expires sessions. This directly addresses instance failure/scale-in without losing sessions and enables near-immediate recovery.
ALB deregistration delay (connection draining) helps existing in-flight requests complete when a target is being deregistered, which is useful during deployments or scale-in events that are graceful. It does not protect against sudden instance termination (crash, underlying host failure, forced termination) and does not preserve in-memory session state. Therefore it cannot guarantee sessions are not lost.
Core Concept: This question tests stateless web tier design and highly available session management for Auto Scaling groups behind an Application Load Balancer (ALB). The key principle is: do not store user session state on ephemeral EC2 instance memory when instances can be terminated or scaled in.
Why the Answer is Correct: Option C externalizes session state to a durable, multi-AZ managed datastore (Amazon DynamoDB). If any EC2 instance terminates unexpectedly, the session data remains available and another instance can immediately serve the next request by reading the session from DynamoDB. This meets the requirement to survive sudden termination/scale-in without losing session data and to recover within ~1 second (DynamoDB single-digit millisecond latency is typical). A 30-minute TTL aligns with session expiration and automatically removes stale sessions.
Key AWS Features: DynamoDB is inherently highly available across multiple AZs in a Region and is designed for very high request rates. With 1,500 writes/s and 1,500 reads/s and small items (<=64 KB), DynamoDB is a strong fit. Using TTL reduces operational burden. For performance and cost, the app can use on-demand capacity or provisioned capacity with auto scaling. If needed, DynamoDB Accelerator (DAX) can further reduce read latency, but it is not required by the prompt.
Common Misconceptions: Sticky sessions (A) keep a user pinned to one instance, but do not protect against instance failure or scale-in; when the instance disappears, the in-memory session disappears. Connection draining (D) helps in graceful deregistration, but does not handle sudden termination and does not replicate session state. SQS (B) is a messaging service, not a low-latency key-value store for per-request session reads; reconstructing state from a queue is complex and not designed for immediate read-after-write session retrieval.
Exam Tips: When you see “session state in memory” plus “Auto Scaling” plus “must not lose sessions,” the correct pattern is to make the web tier stateless and store sessions in a shared, durable store (DynamoDB, ElastiCache/Redis, or a database). If the question emphasizes multi-AZ HA and rapid recovery, choose managed services that replicate across AZs and support low-latency reads/writes.
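As a sketch of the DynamoDB session pattern above: the helper below builds a session item whose expires_at attribute carries the 30-minute TTL as an epoch-seconds timestamp, which is the format DynamoDB TTL expects. The attribute names and table name are illustrative assumptions, not a required schema, and the boto3 calls are shown only as comments.

```python
import json
import time
import uuid

SESSION_TTL_MINUTES = 30  # matches the scenario's 30-minute session TTL

def build_session_item(session_id, session_data, now=None):
    """Build a DynamoDB item for a user session. The table's TTL
    feature would be configured to watch the 'expires_at' attribute,
    which must hold an epoch-seconds number."""
    now = int(time.time()) if now is None else now
    return {
        "session_id": session_id,                      # partition key
        "data": json.dumps(session_data),              # up to 64 KB per the scenario
        "expires_at": now + SESSION_TTL_MINUTES * 60,  # DynamoDB TTL timestamp
    }

# Writing/reading would use boto3 (not executed here; table name is a placeholder):
#   table = boto3.resource("dynamodb").Table("Sessions")
#   table.put_item(Item=item)
#   table.get_item(Key={"session_id": sid})

item = build_session_item(str(uuid.uuid4()), {"cart": ["ticket-123"]}, now=1_000_000)
print(item["expires_at"])  # 1001800 (1,000,000 + 30 * 60)
```

Because every instance reads and writes the same table, any target behind the ALB can serve any request, which is what makes the tier stateless.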
An online ticketing platform is launching a new serverless notification service with 8 AWS Lambda functions, one Amazon API Gateway HTTP API, and one Amazon DynamoDB table that must be deployed to dev and prod via automation with less than 100 lines of deployment code, support a 10% canary for 5 minutes with automatic rollback, and provide the least operational overhead for building and deploying both the functions and their infrastructure—what method best meets these requirements?
Manually uploading zip files in the console is not automation, does not scale across 8 functions and multiple environments, and provides high operational overhead. It also does not provision API Gateway and DynamoDB in a repeatable way, nor does it implement a controlled canary deployment with automatic rollback. This approach is fragile and not aligned with IaC or CI/CD best practices expected on certification exams.
AWS SAM is purpose-built for serverless IaC with minimal template code and low operational overhead. A SAM template can define the Lambda functions, HTTP API, and DynamoDB table, and SAM CLI can build/package/deploy through CloudFormation for dev and prod. SAM supports Lambda traffic shifting via CodeDeploy deployment preferences (e.g., Canary10Percent5Minutes) and can enable automatic rollback using CloudWatch alarms—directly matching the canary and rollback requirements.
Using shell scripts and AWS CLI to update zip files can automate code updates, but it typically separates code deployment from infrastructure provisioning (API Gateway and DynamoDB), increasing complexity and drift risk. Implementing a 10% canary for 5 minutes with automatic rollback is also non-trivial with ad-hoc scripting compared to SAM/CodeDeploy. This option fails the “least operational overhead” and “<100 lines of deployment code” intent.
Lambda container images should be stored in Amazon ECR, not AWS CodeArtifact, making this option technically incorrect. Even if corrected to ECR, building and managing separate images for 8 functions adds overhead versus SAM zip-based builds. Container packaging alone does not provide canary deployments and automatic rollback; you would still need CodeDeploy configuration and IaC. This does not best meet the simplicity and low-code deployment requirements.
Core Concept: This question tests serverless Infrastructure as Code (IaC) and deployment strategies for AWS Lambda, specifically using AWS SAM to define and deploy both code and infrastructure with safe progressive delivery (canary) and automated rollback.
Why the Answer is Correct: Option B best meets all requirements: minimal deployment code (<100 lines), automated deployments to dev and prod, least operational overhead, and a 10% canary for 5 minutes with automatic rollback. AWS SAM lets you define the 8 Lambda functions, API Gateway HTTP API, and DynamoDB table in a concise SAM template (CloudFormation under the hood). For progressive delivery, SAM integrates with AWS CodeDeploy via the Lambda deployment preferences (e.g., Canary10Percent5Minutes) and can automatically roll back on CloudWatch alarms or failed health checks.
Key AWS Features:
- AWS SAM template: concise syntax for serverless resources (AWS::Serverless::Function, AWS::Serverless::HttpApi, DynamoDB table resources), enabling dev/prod parameterization.
- SAM CLI: sam build/package/deploy to build artifacts, upload to S3, and deploy CloudFormation stacks consistently.
- CodeDeploy for Lambda: traffic shifting configurations such as Canary10Percent5Minutes and automatic rollback when alarms trigger.
- CI/CD integration: SAM works well with CodePipeline/CodeBuild or third-party CI to promote the same template across environments.
Common Misconceptions: A and C might seem “simple” because they use zip files and CLI/scripts, but they increase operational overhead and typically split application code deployment from infrastructure provisioning, making repeatable environment creation harder and longer than an IaC approach. D might sound modern (containers), but it adds complexity (image build/push lifecycle) and incorrectly uses CodeArtifact (ECR is the correct registry for container images). Containers also don’t inherently solve canary/rollback or IaC brevity.
Exam Tips: When you see “serverless + least operational overhead + deploy infrastructure and code together + canary/rollback,” think AWS SAM (or CDK) with CodeDeploy. Also, remember: Lambda canary deployments are commonly implemented through CodeDeploy deployment preferences, and container images for Lambda belong in Amazon ECR, not CodeArtifact. References: AWS SAM Developer Guide (serverless templates and SAM CLI) and AWS CodeDeploy support for AWS Lambda traffic shifting and automatic rollback.
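A minimal SAM template sketch of the canary configuration described above. The function name, runtime, handler, and alarm name are placeholders (the referenced alarm resource would be defined elsewhere in the same template); AutoPublishAlias and DeploymentPreference are the essential pieces that enable CodeDeploy traffic shifting and automatic rollback.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  NotifyFunction:                      # one of the 8 functions; name is a placeholder
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      CodeUri: src/
      AutoPublishAlias: live           # required for CodeDeploy traffic shifting
      DeploymentPreference:
        Type: Canary10Percent5Minutes  # shift 10% of traffic, wait 5 min, then the rest
        Alarms:
          - !Ref NotifyErrorsAlarm     # placeholder alarm; triggers automatic rollback
```

A single template like this, deployed per environment with `sam deploy`, keeps the deployment code well under the 100-line requirement while covering both infrastructure and code.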
A sports analytics startup runs a latency‑sensitive leaderboard microservice on Amazon ECS using the Amazon EC2 launch type across two Availability Zones and needs to migrate the workload to Amazon ECS on AWS Fargate with near‑zero disruption; during a blue/green cutover, the developer must configure the cluster’s capacity providers so that on‑demand Fargate tasks launch by default (ensuring at least 1 task runs on on‑demand) and FARGATE_SPOT is used only for overflow, while avoiding any API calls that attempt to create AWS‑owned capacity providers; which solution achieves the migration with the least downtime?
Correct. PutClusterCapacityProviders is the right API to associate the AWS-owned FARGATE and FARGATE_SPOT capacity providers with the cluster and define the default strategy. Setting FARGATE with base=1 guarantees at least one on-demand task. Using weights (FARGATE weight >= FARGATE_SPOT) makes on-demand the default while allowing Spot to take overflow capacity, meeting the near-zero disruption migration intent.
Incorrect. CreateCapacityProvider is used to create custom capacity providers (typically for ECS on EC2 backed by an Auto Scaling group). FARGATE and FARGATE_SPOT are AWS-owned capacity providers and should not be created. The question explicitly requires avoiding API calls that attempt to create AWS-owned capacity providers, so this violates the constraint even though the strategy concept (base/secondary) is directionally right.
Incorrect. While PutClusterCapacityProviders is the correct API to associate AWS-owned providers, the strategy is wrong: setting FARGATE_SPOT as Provider 1 with base=1 guarantees at least one task runs on Spot, not on on-demand. This contradicts the requirement to ensure at least one on-demand Fargate task and to use Spot only for overflow rather than as the default placement target.
Incorrect. This option repeats two issues: it uses CreateCapacityProvider (not appropriate for AWS-owned Fargate providers) and it makes FARGATE_SPOT the primary provider, which conflicts with the requirement that on-demand tasks launch by default and that at least one task always runs on on-demand. It also increases operational risk during cutover because Spot interruptions could affect baseline availability.
Core Concept: This question tests Amazon ECS capacity providers and capacity provider strategies when migrating an ECS service from the EC2 launch type to AWS Fargate with minimal disruption. It also tests the distinction between AWS-owned capacity providers (FARGATE, FARGATE_SPOT) and customer-managed capacity providers (typically backed by Auto Scaling groups for EC2).
Why the Answer is Correct: To run tasks on Fargate, the cluster must be associated with the AWS-owned FARGATE and (optionally) FARGATE_SPOT capacity providers. The correct way to associate these with an existing cluster is to use PutClusterCapacityProviders, which sets both the list of capacity providers available to the cluster and the cluster’s default capacity provider strategy. Then, to ensure on-demand Fargate is the default and at least one task always runs on on-demand, you set a strategy where FARGATE has base=1. Any additional tasks are distributed by weight; giving FARGATE a weight >= FARGATE_SPOT ensures on-demand is preferred, while still allowing FARGATE_SPOT to be used for overflow. This supports a blue/green cutover with near-zero downtime because you can deploy the new Fargate-based service/task set while the old one remains healthy, then shift traffic.
Key AWS Features:
- PutClusterCapacityProviders: associates AWS-owned capacity providers with the cluster and sets the default strategy.
- Capacity provider strategy fields:
  - base: minimum number of tasks placed on that provider before weights apply.
  - weight: relative distribution for remaining tasks.
- AWS-owned providers: FARGATE and FARGATE_SPOT are not created by customers; they are referenced/associated.
- Blue/green on ECS is commonly implemented with CodeDeploy (the ECS deployment controller) and an ALB, but the question’s focus is ensuring capacity placement behavior during cutover.
Common Misconceptions: A frequent trap is thinking you must “create” Fargate capacity providers via CreateCapacityProvider. That API is for custom capacity providers (e.g., EC2 Auto Scaling-backed), not the AWS-owned FARGATE/FARGATE_SPOT. Another trap is setting FARGATE_SPOT with base=1, which violates the requirement that at least one task runs on on-demand.
Exam Tips:
- Remember: AWS-owned capacity providers (FARGATE/FARGATE_SPOT) are associated, not created.
- For “at least N tasks on X,” use base=N.
- For “use Spot only for overflow,” make on-demand the provider with base and/or higher weight, and keep Spot as secondary with lower weight.
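To make the base/weight semantics concrete, here is a small Python simulation of how a capacity provider strategy distributes tasks: bases are satisfied first, then the remainder is split by weight. This illustrates the documented semantics rather than the actual ECS scheduler (real tie-breaking may differ), and the boto3 association call is shown only as a comment with a placeholder cluster name.

```python
# The cluster association itself (not executed here) would look like:
#
#   ecs = boto3.client("ecs")
#   ecs.put_cluster_capacity_providers(
#       cluster="leaderboard",  # cluster name is a placeholder
#       capacityProviders=["FARGATE", "FARGATE_SPOT"],
#       defaultCapacityProviderStrategy=[
#           {"capacityProvider": "FARGATE", "base": 1, "weight": 3},
#           {"capacityProvider": "FARGATE_SPOT", "weight": 1},
#       ],
#   )

def place_tasks(total_tasks, strategy):
    """Simulate base/weight placement: each provider first receives
    its base, then remaining tasks go one at a time to the provider
    furthest below its weight share."""
    placement = {p["provider"]: 0 for p in strategy}
    extra = {p["provider"]: 0 for p in strategy}
    remaining = total_tasks
    for p in strategy:  # 1) satisfy base counts first
        take = min(p.get("base", 0), remaining)
        placement[p["provider"]] += take
        remaining -= take
    weighted = [p for p in strategy if p.get("weight", 0) > 0]
    for _ in range(remaining):  # 2) split the remainder by weight
        if not weighted:
            break
        target = min(weighted, key=lambda p: extra[p["provider"]] / p["weight"])
        extra[target["provider"]] += 1
        placement[target["provider"]] += 1
    return placement

strategy = [
    {"provider": "FARGATE", "base": 1, "weight": 3},  # on-demand is the default
    {"provider": "FARGATE_SPOT", "weight": 1},        # overflow only
]
print(place_tasks(9, strategy))  # {'FARGATE': 7, 'FARGATE_SPOT': 2}
print(place_tasks(1, strategy))  # {'FARGATE': 1, 'FARGATE_SPOT': 0}
```

With a single task, only the base applies, so it always lands on on-demand FARGATE; the eight remaining tasks in the first call split 6:2 by the 3:1 weights.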
Study period: 1 month
I passed with a score of 793! I made sure to solve at least 30 questions a day. Being able to squeeze in questions whenever I had a spare moment, even on the go, was great.
Study period: 2 months
The app's questions were very similar to the actual exam, and the explanations helped me understand why answers were wrong.
Study period: 1 month
Thank you very much, these questions are wonderful !!!
Study period: 2 months
I passed a month ago and am only now writing this review. The question structure was similar to the actual exam.
Study period: 2 months
I just passed the exam, and I can confidently say that this app was instrumental in helping me thoroughly review the exam material.