
A logistics company exposes a Regional Amazon API Gateway REST API (GET /shipments/{trackingId}) that invokes an AWS Lambda function using Lambda proxy integration; on 2024-11-03 at 14:25:33 UTC, a developer runs curl -i https://api.company.example/prod/shipments/42 and receives HTTP 502, API Gateway execution logs show 'Method completed with status: 502', and the Lambda’s CloudWatch Logs indicate the handler completed successfully and returned the plain string 'OK' (2 bytes) with no stack trace; to resolve the error and have the API return 200, what should the developer change?
Changing from Regional to Edge-Optimized affects how clients reach API Gateway (CloudFront distribution in front) and can improve latency for global users, but it does not change the required Lambda proxy response format. Endpoint type mismatches or CloudFront issues typically show different symptoms (DNS/403/5xx at edge), not a consistent 502 caused by a malformed integration response after Lambda succeeds.
For a GET /shipments/{trackingId}, there is typically no request body. The request clearly reached API Gateway and invoked the Lambda function (as evidenced by Lambda logs). A malformed client payload would more likely cause a 400 (bad request) or mapping/template errors before invocation, not a 502 after Lambda returns. The core issue is the response format, not the request.
With Lambda proxy integration, Lambda must return a JSON object containing at least statusCode and body (string), optionally headers and isBase64Encoded. Returning a plain string like "OK" is not a valid proxy response, so API Gateway cannot translate it into an HTTP response and returns 502. Changing Lambda to return {"statusCode":200,"body":"OK"} resolves the issue and yields HTTP 200.
Authorization issues (missing/invalid credentials) are handled by API Gateway before invoking Lambda and typically result in 401 Unauthorized or 403 Forbidden, not a 502. The evidence shows Lambda was invoked and completed successfully, so the request was authorized to reach the integration. The problem occurs when API Gateway tries to interpret the Lambda output, not when authenticating the caller.
Core Concept:
This question tests Amazon API Gateway REST API + AWS Lambda proxy integration (a.k.a. Lambda proxy). With proxy integration, API Gateway expects the Lambda function to return a specific JSON structure so API Gateway can translate it into an HTTP response.

Why the Answer is Correct:
The Lambda handler "completed successfully" but returned the plain string "OK". In Lambda proxy integration, API Gateway requires a response object like {"statusCode": 200, "headers": {...}, "body": "..."} (and optionally "isBase64Encoded"). If the Lambda returns a raw string (or an otherwise malformed proxy response), API Gateway cannot parse it into an HTTP response and typically returns a 502 Bad Gateway with execution logs showing "Method completed with status: 502". This is a classic symptom: Lambda succeeded, but API Gateway failed to transform the integration response.

Key AWS Features / Configurations:
- Lambda proxy integration for REST APIs requires statusCode (integer) and body (string; JSON must be stringified), with optional headers and isBase64Encoded.
- An API Gateway 502 in this context often indicates "Malformed Lambda proxy response" (sometimes visible in execution logs).
- Correct fix: update the Lambda function to return the proxy response format, e.g. statusCode 200 and body "OK".

Common Misconceptions:
- A 502 is often assumed to be a network/endpoint-type issue (Edge vs. Regional) or a Lambda runtime crash. Here, Lambda logs show no stack trace and successful completion, pointing away from runtime failure.
- Authorization problems usually yield 401/403 from API Gateway (or 403 from IAM auth), not a 502 after invoking Lambda.
- For a GET, "request payload format" is rarely the issue; the request reached API Gateway and invoked Lambda.

Exam Tips:
When you see API Gateway + Lambda proxy integration + HTTP 502 while Lambda logs show success, immediately verify the Lambda proxy response shape. Remember: body must be a string, and missing or incorrect fields (statusCode/body) commonly cause 502. Also distinguish 401/403 (auth) from 502 (integration/response mapping/parsing).
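A minimal Python handler sketch of the fix, assuming the Python Lambda runtime (the handler name and response payload are illustrative):

```python
import json

def handler(event, context):
    # Returning a bare string such as "OK" here would trigger a 502
    # ("Malformed Lambda proxy response"). Return the proxy shape instead:
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        # body must be a string; JSON payloads must be stringified
        "body": json.dumps({"message": "OK"}),
    }
```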
A team adds OpenCV and SciPy to an existing AWS Lambda function that resizes images. After the update, the unzipped .zip deployment package is 420 MB, exceeding the 250 MB quota for .zip-based Lambda packages, and the function uses the x86_64 instruction set architecture, which must remain unchanged. The developer must deploy the function with the new dependencies without altering its business logic. Which solution will meet these requirements?
Incorrect. AWS Lambda does not support mounting arbitrary “snapshots” of dependencies as a filesystem in the way EC2 can use EBS snapshots. Lambda provides /tmp ephemeral storage and optional EFS mounting, but not snapshot mounting. This option describes a non-existent Lambda feature and would not be a valid deployment approach for oversized dependencies.
Incorrect. Although arm64 can sometimes provide better price/performance and may change dependency footprint, the question explicitly states the function uses x86_64 and that it must remain unchanged. Therefore, switching architectures violates a hard requirement. Additionally, changing architecture can introduce compatibility issues for native libraries like OpenCV/SciPy.
Incorrect. Lambda cannot attach an Amazon EBS volume. EBS is an EC2 block storage service and requires an EC2 instance to attach. For Lambda, the supported network filesystem option is Amazon EFS (and only within a VPC), but even EFS is not the best fit here compared to container images for packaging large dependencies.
Correct. Lambda container images support up to 10 GB, which easily accommodates a 420 MB dependency set. The developer can build an image (using an AWS Lambda base image), install OpenCV and SciPy, push to Amazon ECR, and update the Lambda function to use the image while keeping the same code logic and x86_64 architecture. This is the standard solution for oversized Lambda packages.
Core Concept:
This question tests AWS Lambda deployment packaging limits and the alternative packaging model: Lambda container images. It also touches on architecture constraints (x86_64) and how to include large native dependencies (OpenCV, SciPy) without changing application logic.

Why the Answer is Correct:
A .zip-based Lambda deployment package has a hard limit of 250 MB unzipped (and 50 MB zipped for direct upload). The updated function's unzipped size is 420 MB, so it cannot be deployed as a .zip. Lambda container images allow packaging the function code and all dependencies into an OCI-compatible image up to 10 GB. The developer can build an image based on an AWS-provided Lambda base image for the runtime (for example, Python), install OpenCV/SciPy and any native libraries, push the image to Amazon ECR, and point the Lambda function to that image. This meets all requirements: deploy with new dependencies, keep business logic unchanged, and keep the x86_64 architecture (Lambda supports container images on x86_64).

Key AWS Features:
- Lambda container image support (up to 10 GB), with images stored in Amazon ECR.
- AWS Lambda base images that include the Lambda Runtime Interface Client, simplifying compatibility.
- Ability to specify the function architecture (x86_64) when creating or updating the function.
- Operationally, ECR integrates with IAM and supports image scanning and lifecycle policies.

Common Misconceptions:
- Lambda layers can help with dependencies, but they do not solve this case if the combined unzipped size still exceeds limits (and layers have their own size limits). The question specifically points to exceeding the .zip quota, steering you to container images.
- "Mounting a snapshot" and attaching EBS are not supported patterns for Lambda.
- Switching to arm64 might reduce size in some cases, but the architecture must remain x86_64.
Exam Tips: When you see large ML/scientific libraries (OpenCV, SciPy, TensorFlow) and package size limit issues, immediately consider Lambda container images (or occasionally splitting into layers if within limits). Also watch for constraints like “must remain x86_64,” which eliminates architecture-change options even if they seem beneficial.
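A sketch of the container-image approach (the base image tag, package names, handler path, and file names are illustrative assumptions):

```dockerfile
# Start from an AWS-provided Lambda base image (tag is illustrative)
FROM public.ecr.aws/lambda/python:3.12

# Install the large native dependencies inside the image
# (no 250 MB unzipped limit; images can be up to 10 GB)
RUN pip install opencv-python-headless scipy

# Copy the unchanged business logic into the task root
COPY app.py ${LAMBDA_TASK_ROOT}

# Handler as module.function (name is illustrative)
CMD ["app.handler"]
```

The image would then be built for x86_64 (e.g. `docker build --platform linux/amd64`), pushed to an Amazon ECR repository, and the function updated to use the image URI, keeping the existing architecture setting.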
A fintech team uses AWS CloudFormation to deploy an EventBridge-triggered AWS Lambda function whose code is a zipped artifact stored in Amazon S3 (bucket artifacts-567890-us-east-1 with versioning enabled, key builds/processor.zip, last version ID 3Lg8xZ2), but every time they run a stack update while keeping the same S3 object key, the Lambda function code does not change; what should they do to ensure CloudFormation applies the new code during updates?
Creating a new Lambda alias does not update the function’s code package. Aliases are pointers to published versions and are used for traffic shifting, stable ARNs, and deployment strategies (e.g., blue/green with CodeDeploy). If the underlying function code is not updated (or a new version is not published), a new alias won’t cause CloudFormation to pull a new ZIP from S3.
This is correct because CloudFormation needs a template-detectable change to the Lambda Code property to trigger a code update. With S3 versioning enabled, specifying the latest S3ObjectVersion (e.g., a new version ID) ensures CloudFormation fetches that exact artifact. Alternatively, changing the S3Key (unique per build) also forces an update and is a common CI/CD best practice.
Uploading to a different bucket is unnecessary and does not inherently solve the problem unless the template is also updated to reference that new bucket/key/version. The core issue is CloudFormation not seeing a change in the Lambda Code definition. Changing buckets adds operational complexity and is not the recommended mechanism for deterministic deployments.
A code-signing configuration enforces that Lambda only deploys code signed by trusted publishers, improving supply-chain security. It does not force CloudFormation to redeploy code when the S3 key remains the same. Even with code signing, CloudFormation still requires a change in S3Key or S3ObjectVersion (or equivalent) to trigger an update.
Core Concept:
This question tests how AWS CloudFormation detects and applies updates to AWS::Lambda::Function code when the deployment package is stored in Amazon S3. CloudFormation updates resources based on changes in the template (or parameters), and for Lambda code it needs a detectable change in the Code property (S3Key and/or S3ObjectVersion) to trigger a code update.

Why the Answer is Correct:
If the team uploads a new ZIP to the same S3 key (builds/processor.zip) and then runs a stack update without changing the template, CloudFormation sees no change to the Lambda function's Code definition. With S3 versioning enabled, the correct way to force CloudFormation to fetch the new artifact is to either (1) change the S3 object key (e.g., builds/processor-<buildId>.zip) or (2) specify the new S3ObjectVersion value (e.g., the latest version ID, such as 3Lg8xZ2) in the template. Either approach changes the template's Code property, which causes CloudFormation to update the Lambda function code.

Key AWS Features:
CloudFormation supports Lambda code from S3 via Code: { S3Bucket, S3Key, S3ObjectVersion }. S3 versioning provides immutable references to specific artifacts, enabling repeatable deployments and safe rollbacks. A best practice is to use unique, content-addressed keys (or CI/CD-injected version IDs) so deployments are deterministic and auditable.

Common Misconceptions:
It is tempting to think that simply overwriting the object at the same key will be detected automatically. However, CloudFormation is template-driven; if the template does not change, it does not redeploy the code. Another misconception is that operational features like aliases or code signing will force a refresh; those address traffic shifting and integrity, not change detection.

Exam Tips:
For Lambda + CloudFormation, remember: to update code from S3, you must change S3Key or S3ObjectVersion (or use packaging tools like SAM/CDK that auto-generate unique keys). When you see "same S3 key" and "code doesn't change," the fix is almost always "update the key or version in the template."
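A minimal template fragment illustrating the fix; the bucket, key, and version ID come from the scenario, while the handler, runtime, and role are illustrative placeholders:

```yaml
ProcessorFunction:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.handler            # illustrative
    Runtime: python3.12               # illustrative
    Role: !GetAtt ProcessorRole.Arn   # illustrative
    Code:
      S3Bucket: artifacts-567890-us-east-1
      S3Key: builds/processor.zip
      # Change this value (or use a unique S3Key) on every build so
      # CloudFormation detects a Code change and redeploys the function.
      S3ObjectVersion: 3Lg8xZ2
```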
A fintech startup operates 340 AWS Lambda functions in us-west-2 that must be tested via Lambda function URLs, and the internal verification team grouped in an IAM group named QA-Reviewers needs to invoke all endpoints using the public URLs while blocking anonymous access, so as the developer, how should you configure authentication and permissions at scale to meet these requirements?
Correct. Setting each function URL to AWS_IAM forces SigV4 authentication, which blocks anonymous access even though the URL is publicly reachable. A single identity-based policy granting lambda:InvokeFunctionUrl to the QA-Reviewers group scales cleanly across 340 functions. This matches AWS best practices: centralize permissions in IAM identities and automate repetitive configuration with CLI/IaC.
Incorrect. AuthType=NONE allows anonymous invocation, directly violating the requirement to block anonymous access. Also, it suggests creating a resource-based policy and attaching it to an IAM group, which is not how resource policies work. Resource-based policies are attached to the resource (the Lambda function) and specify principals; they are not attached to IAM groups.
Incorrect. While AWS_IAM is the right auth type, creating 340 separate identity-based policies is unnecessary operational overhead and not a best-practice scalable approach. Additionally, identity-based policies do not grant access “from a group ARN” as a condition in the way implied; you attach the policy to the group (or users/roles), not reference the group as a principal.
Incorrect. AuthType=NONE permits anonymous access, failing the core security requirement. It also proposes adding a resource-based policy per function with the QA-Reviewers group ARN as principal. IAM groups are not valid principals in resource-based policies; you would need to specify roles/users/accounts. This option is both insecure and technically flawed.
Core Concept:
This question tests AWS Lambda function URL authentication and IAM authorization at scale. Function URLs can be configured with auth type NONE (public/anonymous) or AWS_IAM (SigV4-signed requests). When AWS_IAM is used, access is controlled primarily through IAM identity-based policies (and optionally resource-based permissions), preventing anonymous invocation.

Why the Answer is Correct:
The requirement is to "invoke all endpoints using the public URLs while blocking anonymous access." The correct approach is to configure each function URL with AWS_IAM so requests must be signed with AWS credentials, then grant the QA-Reviewers group permission via a single identity-based policy allowing lambda:InvokeFunctionUrl across the relevant functions. This scales well for 340 functions: one policy attached to the group, plus automated creation and updates of function URLs via CLI or script.

Key AWS Features:
- Lambda function URLs with AuthType=AWS_IAM enforce IAM authentication (SigV4); anonymous callers cannot invoke.
- The IAM identity-based policy action is lambda:InvokeFunctionUrl. You can scope Resource to function ARNs (and optionally add conditions such as aws:RequestedRegion, lambda:FunctionUrlAuthType, or tags if you use tag-based access control).
- Attaching the policy to an IAM group is a standard, scalable pattern for a team.
- Automation (CLI/script/IaC) is appropriate for bulk configuration across many functions.

Common Misconceptions:
A frequent trap is thinking "public URL" implies the endpoint must be unauthenticated. Function URLs are internet-reachable endpoints, but AWS_IAM still requires signed requests, so they are not anonymously accessible. Another misconception is trying to "attach a resource-based policy to a group" (not supported) or using a group ARN as a principal in resource policies (IAM groups are not valid principals).

Exam Tips:
- If the requirement says "block anonymous," avoid AuthType=NONE.
- For large fleets, prefer one reusable identity policy (possibly tag-scoped) over per-function policies.
- Remember: identity-based policies attach to users/groups/roles; resource-based policies attach to resources and specify principals (users/roles/accounts/services), not groups.
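An illustrative identity-based policy to attach to the QA-Reviewers group; the account ID (123456789012) is a placeholder, and scoping every function in us-west-2 plus the FunctionUrlAuthType condition are assumptions about how the team might tighten the grant:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowInvokeFunctionUrls",
      "Effect": "Allow",
      "Action": "lambda:InvokeFunctionUrl",
      "Resource": "arn:aws:lambda:us-west-2:123456789012:function:*",
      "Condition": {
        "StringEquals": { "lambda:FunctionUrlAuthType": "AWS_IAM" }
      }
    }
  ]
}
```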
A small team is prototyping an event-driven photo-tagging service on AWS using 10 AWS Lambda functions, Amazon API Gateway, and a single Amazon DynamoDB table, and they need an accelerated inner-loop workflow that can push code-only changes for one Lambda function to a dev stack within 20 seconds for QA testing without performing a full stack redeploy on every commit. The team wants to keep AWS CloudFormation as the deployment backbone and use one CLI command from their workstation to incrementally sync updated code and minor template changes (such as environment variables) to the cloud. What should the team do to meet these requirements?
Correct. AWS SAM uses CloudFormation stacks for provisioning and supports fast incremental updates with sam sync. This command is designed for inner-loop development, pushing updated Lambda code (and some small configuration changes like environment variables) to an existing dev stack without running a full CloudFormation redeploy each time. It matches the requirement for a single CLI command and sub-20-second iteration for code-only changes.
Incorrect. sam init is a project scaffolding command that generates a starter SAM application structure and sample templates. It does not deploy resources, update Lambda code, or perform incremental synchronization to an existing stack. While SAM is the right framework, the command required for rapid incremental updates is sam sync (or sam deploy for full deployments), not sam init.
Incorrect. cdk synth only synthesizes (generates) a CloudFormation template from CDK code; it does not deploy anything to AWS. Even though CDK ultimately uses CloudFormation, synth is a local build step and cannot push incremental Lambda code changes to a dev stack. The deployment command in CDK would be cdk deploy (not provided as an option).
Incorrect. cdk bootstrap provisions prerequisite resources (the CDK toolkit stack, such as an S3 bucket and roles) needed for CDK deployments in an account/region. It is a one-time or infrequent environment setup step, not an incremental deployment mechanism. It will not sync updated Lambda code or apply template tweaks to an existing application stack.
Core Concept:
This question tests rapid, incremental deployments for serverless apps while keeping AWS CloudFormation as the deployment engine. The key capability is "syncing" code and small configuration changes to an existing stack without a full CloudFormation redeploy.

Why the Answer is Correct:
AWS SAM is an extension of CloudFormation for serverless resources (AWS::Serverless::Function, API, etc.). The sam sync command is specifically designed to accelerate the inner loop by incrementally updating Lambda function code and certain configuration changes (for example, environment variables) in a deployed dev stack. This avoids the time cost of packaging and executing a full CloudFormation change set for every commit, enabling near-real-time QA validation. It also satisfies the requirements of "one CLI command" from a workstation and "CloudFormation as the deployment backbone," because SAM ultimately translates to and deploys via CloudFormation.

Key AWS Features:
- sam sync: watches or pushes local changes and performs targeted updates (commonly Lambda code updates) to the cloud.
- CloudFormation-backed deployments: SAM templates are CloudFormation-compatible; SAM uses CloudFormation stacks for provisioning.
- Serverless acceleration patterns: incremental code sync is ideal when only one of many Lambda functions changes frequently.
- Minor template changes: SAM can apply certain configuration updates quickly; larger infrastructure changes still require a full sam deploy.

Common Misconceptions:
- Confusing sam init with deployment: sam init only scaffolds a project; it does not deploy or sync changes.
- Assuming CDK synth or bootstrap deploys changes: synth only generates CloudFormation templates; bootstrap prepares an environment (toolkit stack) but does not update application resources.
- Thinking "any IaC tool" can do sub-20-second incremental updates: the exam is pointing to SAM's explicit sync workflow for serverless inner-loop speed.
Exam Tips: When you see “serverless + CloudFormation backbone + incremental code-only updates + one command + fast inner loop,” look for AWS SAM and sam sync. For CDK, the analogous deployment command would be cdk deploy (not listed). Also remember: init/synth/bootstrap are setup/build steps, not incremental deployment mechanisms.
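An illustrative inner-loop invocation (the stack name is an assumption; run from the project directory containing the SAM template):

```shell
# One-off incremental sync of code and supported config changes
sam sync --stack-name photo-tagging-dev

# Watch mode: push changes automatically as files are saved
sam sync --stack-name photo-tagging-dev --watch
```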
A fitness startup runs a daily step-count challenge on AWS; participant submissions are written to an Amazon DynamoDB table, aggregate winners are announced at 09:00 UTC the next day, and thereafter the raw submissions are no longer needed; a developer has added an expires_on attribute (Unix epoch time in seconds set to submission_time + 86,400) to each item and wants the old submissions to be removed automatically with the least development effort; which solution meets these requirements?
A scheduled Lambda can delete items, but it requires writing and maintaining code to query/scan for expired items, handle pagination, retries, and throttling. It also consumes DynamoDB read/write capacity and can become costly at scale. This is more development and operational effort than necessary when DynamoDB TTL can handle expiration natively.
Running a scheduled ECS Fargate task is even heavier operationally than Lambda: you must build and maintain a container image, task definition, IAM roles, networking, and scheduling. It still needs logic to find and delete expired items and can drive significant DynamoDB capacity usage. This does not meet the “least development effort” requirement compared to TTL.
AWS Glue is intended for ETL and data processing, not routine operational deletion of DynamoDB items. A Glue job would be complex, slower to start, and requires additional setup (job scripts, connections, scheduling). It would likely rely on scans and batch deletes, increasing cost and effort. This is not an appropriate fit for simple item expiration.
DynamoDB TTL is purpose-built for automatic item expiration. By enabling TTL on the table and selecting expires_on (epoch seconds), DynamoDB will automatically delete items after they expire without any scheduled jobs or custom deletion code. This is the lowest-effort, AWS-native solution and aligns directly with the requirement to remove old submissions automatically.
Core Concept:
The question is testing Amazon DynamoDB Time to Live (TTL), a native feature that automatically deletes items after a specified expiration timestamp, minimizing operational and development effort.

Why the Answer is Correct:
The developer already stores an expires_on attribute as a Unix epoch timestamp in seconds (submission_time + 86,400). DynamoDB TTL is designed for exactly this pattern: mark each item with an expiration time and let DynamoDB remove expired items automatically. Enabling TTL on the table and pointing it to expires_on meets the requirement to remove old submissions without building custom deletion logic, scheduling, or compute infrastructure.

Key AWS Features:
DynamoDB TTL requires:
1) A single attribute per table configured as the TTL attribute (here, expires_on).
2) Values stored as epoch time in seconds.
3) No application-side delete calls; DynamoDB's background process deletes expired items.
Important nuance: TTL deletion is not guaranteed to occur exactly at the expiration time; it typically happens within minutes to hours afterward. This still fits the scenario because winners are announced at 09:00 UTC the next day and the raw submissions are "no longer needed thereafter," not "must be deleted at exactly 09:00:00." TTL is also cost-effective because you avoid periodic scans and delete operations that consume read/write capacity.

Common Misconceptions:
Many candidates assume they must schedule a Lambda/ECS/Glue job to scan for expired items and delete them. That approach increases development effort, adds operational overhead (scheduling, retries, error handling), and can be expensive due to table scans and delete throughput. Another misconception is expecting TTL to be a precise scheduler; it is an eventual cleanup mechanism, not a real-time deletion guarantee.

Exam Tips:
When you see DynamoDB items that should "automatically expire" after a known time, look for TTL as the default best answer, especially when the item already contains an epoch timestamp attribute. Choose custom scheduled deletion (Lambda/ECS/Glue) only when you need strict deletion timing, complex selection logic, or coordinated multi-table cleanup beyond TTL's capabilities.
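The TTL attribute computation can be sketched in Python; the function and table names are illustrative, and the one-time table setting is shown only as a comment:

```python
import time

SECONDS_PER_DAY = 86_400

def build_submission_item(submission, submission_time=None):
    """Attach an expires_on attribute (epoch SECONDS) for DynamoDB TTL."""
    if submission_time is None:
        submission_time = int(time.time())
    return {
        **submission,
        "submission_time": submission_time,
        # DynamoDB TTL expects epoch time in seconds, not milliseconds
        "expires_on": submission_time + SECONDS_PER_DAY,
    }

# One-time table configuration (AWS CLI), shown for reference:
# aws dynamodb update-time-to-live --table-name StepSubmissions \
#     --time-to-live-specification "Enabled=true, AttributeName=expires_on"
```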
A sports streaming startup places Amazon CloudFront in front of an S3 origin to deliver HLS segments and static assets. A developer needs a dashboard that visualizes 4xx/5xx error rates and anomalies for each edge location with near-real-time updates (≤2 seconds delay). Which combination of steps will meet these requirements? (Choose two.)
Standard CloudFront access logs delivered to S3 are not near-real-time; delivery is typically delayed (often minutes). Running scheduled Athena queries further increases latency and is batch-oriented. This approach can produce dashboards, but it will not meet the ≤2 seconds requirement for per-edge-location 4xx/5xx visualization and anomaly detection.
CloudFront real-time logs are purpose-built for low-latency monitoring and can be delivered to Amazon Kinesis Data Streams. This meets the ingestion side of the ≤2 seconds requirement and provides the necessary fields (including edge location/POP and status codes) to compute 4xx/5xx rates per edge location in near-real time.
Consuming the Kinesis stream with AWS Lambda and indexing into Amazon OpenSearch Service enables near-real-time search/aggregation and dashboarding in OpenSearch Dashboards. This supports visualizing error rates by edge location and implementing anomaly detection/alerting patterns. It complements option B by providing the analytics and visualization layer needed for the dashboard.
The statement “stream CloudFront logs directly to Kinesis Data Firehose” is not the best fit as written. CloudFront real-time logs natively integrate with Kinesis Data Streams, and while Firehose can deliver to OpenSearch/S3, it typically buffers data (size/time), which can add latency and may exceed a strict ≤2 seconds requirement unless carefully tuned and supported by the source integration.
CloudTrail records AWS API activity for governance and auditing, not CloudFront viewer request/response logs. Sending CDN access logs to CloudTrail is not a valid pattern, and CloudTrail-based alarms/dashboards would not provide per-edge-location 4xx/5xx request error analytics with near-real-time updates.
Core Concept:
This question tests CloudFront observability for near-real-time analytics. The key services are CloudFront real-time logs (sub-second to seconds delivery), Kinesis Data Streams for low-latency ingestion, and a near-real-time analytics/dashboard layer (commonly OpenSearch) to visualize 4xx/5xx rates and anomalies per edge location.

Why the Answer is Correct:
To achieve ≤2 seconds end-to-end delay, standard CloudFront access logs to S3 are too slow (minutes-level delivery). CloudFront real-time logs are designed for near-real-time monitoring and can be delivered to Kinesis Data Streams. From there, you need to transform and index the streaming records into a datastore that supports fast aggregations and dashboards by dimensions like edge location (e.g., x-edge-location / POP). Using Lambda to consume the stream and index into Amazon OpenSearch Service enables OpenSearch Dashboards to visualize error rates and run anomaly detection/alerting with near-real-time refresh.

Key AWS Features:
- CloudFront real-time logs: configurable sampling rate; include fields such as status code, URI, and edge location; delivered to Kinesis Data Streams with very low latency.
- Kinesis Data Streams: shard-based throughput, enhanced fan-out for multiple consumers, and low-latency processing.
- Lambda stream processing: event source mapping from Kinesis, batching/windowing, retries, and DLQ patterns.
- OpenSearch Service + Dashboards: near-real-time indexing and aggregations; can build per-POP error-rate visualizations and use anomaly detection features (where enabled) or custom detectors.

Common Misconceptions:
Option A is attractive because Athena is easy for log analytics, but S3 log delivery plus scheduled queries cannot meet a 2-second SLA. Option D seems plausible because Firehose can deliver to OpenSearch, but CloudFront real-time logs integrate with Kinesis Data Streams (not "directly" with Firehose), and Firehose buffering typically introduces seconds-to-minutes of latency depending on buffer settings. Option E is incorrect because CloudTrail is for API auditing, not CDN request logs.

Exam Tips:
When you see "near-real-time (seconds)" for CloudFront request analytics, think "CloudFront real-time logs → Kinesis Data Streams → streaming processing → real-time analytics store (OpenSearch)." For "minutes to hours," think standard logs to S3 + Athena/EMR. Also verify the destination integrations: CloudFront real-time logs natively support Kinesis Data Streams (the option wording matters), while CloudTrail is not a log analytics sink for CloudFront access events.
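A sketch of the Lambda consumer side of this pipeline. CloudFront real-time log lines are tab-delimited in whatever field order was configured on the distribution; the field order below (timestamp, sc-status, x-edge-location) is an assumption, and the OpenSearch bulk-indexing call is left as a comment since it needs an endpoint and credentials:

```python
import base64

# Assumed real-time log field order (configured on the CloudFront side)
FIELDS = ["timestamp", "sc-status", "x-edge-location"]

def parse_record(kinesis_data_b64):
    """Decode one Kinesis record holding a tab-delimited real-time log line."""
    line = base64.b64decode(kinesis_data_b64).decode("utf-8").rstrip("\n")
    doc = dict(zip(FIELDS, line.split("\t")))
    status = int(doc["sc-status"])
    doc["is_error"] = 400 <= status <= 599  # 4xx/5xx flag for the dashboard
    return doc

def handler(event, context):
    # Kinesis event source mapping delivers records base64-encoded
    docs = [parse_record(r["kinesis"]["data"]) for r in event["Records"]]
    # Here you would bulk-index `docs` into OpenSearch (e.g., via the
    # opensearch-py client); endpoint and auth are omitted in this sketch.
    return {"indexed": len(docs)}
```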
A developer at Company X must publish messages to an Amazon SQS queue in a partner AWS account (account ID 444455556666) by assuming the cross-account role arn:aws:iam::444455556666:role/SQSWriter, and the company mandates multi-factor authentication (MFA) using the virtual device arn:aws:iam::111122223333:mfa/dev1 with a 6-digit token and a 60-minute session duration; which AWS Security Token Service (AWS STS) API operation should the developer call with the MFA parameters to obtain temporary credentials for this access?
AssumeRoleWithWebIdentity is used when the caller presents a web identity token from an OIDC provider (for example, Amazon Cognito, Google, or EKS IRSA). It does not match this scenario because the developer is not using an OIDC token; they are explicitly assuming a cross-account IAM role ARN and providing MFA device ARN and a 6-digit token code. For classic IAM role assumption with MFA, AssumeRole is the correct API.
GetFederationToken is used to obtain temporary credentials for a federated user, typically to access AWS resources without assuming a specific role in another account. It is not the standard mechanism for cross-account role assumption to a named role ARN in a partner account. Additionally, cross-account access patterns for a partner role like arn:aws:iam::444455556666:role/SQSWriter are designed around sts:AssumeRole, not GetFederationToken.
AssumeRoleWithSAML is used when authentication is performed via a SAML 2.0 identity provider and the caller presents a SAML assertion to obtain role credentials. The question does not mention SAML, an IdP, or a SAML assertion—only MFA with a virtual device and a token code. Therefore, this option is a mismatch for the required inputs and the described access pattern.
AssumeRole is the correct STS operation to assume an IAM role (including cross-account roles) and obtain temporary credentials. It supports MFA by including SerialNumber (the MFA device ARN) and TokenCode (the 6-digit code), and it supports setting DurationSeconds to 3600 for a 60-minute session (subject to the role’s MaxSessionDuration). This exactly matches the requirement to assume arn:aws:iam::444455556666:role/SQSWriter with MFA.
Core Concept: This question tests AWS STS role assumption for cross-account access with MFA. The developer needs temporary credentials to act as a role in a partner account (444455556666) to send messages to Amazon SQS. In AWS, cross-account access is commonly implemented by calling STS to assume a role in the target account, producing temporary security credentials (AccessKeyId/SecretAccessKey/SessionToken).

Why the Answer is Correct: The correct API is STS AssumeRole. AssumeRole is the standard operation used to assume an IAM role (including cross-account roles), and it supports MFA by supplying the MFA device ARN (SerialNumber) and the 6-digit token (TokenCode). It also supports specifying the session duration (DurationSeconds), which can be set to 3600 seconds (60 minutes) as required. The developer would call AssumeRole with RoleArn=arn:aws:iam::444455556666:role/SQSWriter, RoleSessionName, SerialNumber=arn:aws:iam::111122223333:mfa/dev1, TokenCode=<6-digit>, and DurationSeconds=3600.

Key AWS Features / Configurations:
1) The trust policy on the partner account role must allow the source principal (a user or role in 111122223333) to call sts:AssumeRole.
2) The role (SQSWriter) must have permissions such as sqs:SendMessage on the target queue.
3) MFA enforcement is typically done in the role trust policy using conditions such as "aws:MultiFactorAuthPresent": "true" (and optionally "aws:MultiFactorAuthAge"). Even then, the caller must pass SerialNumber and TokenCode to STS.
4) DurationSeconds must be within the role's MaxSessionDuration setting (default 1 hour, configurable up to 12 hours).

Common Misconceptions: People often confuse AssumeRole with federation APIs (GetFederationToken) or identity-provider-based flows (AssumeRoleWithSAML / AssumeRoleWithWebIdentity). Those are used when the caller is authenticated via SAML/OIDC or needs federated-user credentials, not when directly assuming a specific IAM role ARN with MFA.
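The AssumeRole call described above can be sketched in Python. This is a minimal sketch: the session name and the 6-digit token code are placeholder values, and the actual boto3 call is shown only as a comment because it requires live source credentials.

```python
# Sketch of the STS AssumeRole request for the scenario above.
# The session name and token code are placeholders; the boto3 call
# itself (shown commented out) needs valid source-account credentials.

def build_assume_role_params(token_code: str) -> dict:
    """Build the parameter set for sts.assume_role with MFA."""
    return {
        "RoleArn": "arn:aws:iam::444455556666:role/SQSWriter",
        "RoleSessionName": "sqs-writer-session",            # placeholder name
        "SerialNumber": "arn:aws:iam::111122223333:mfa/dev1",
        "TokenCode": token_code,                            # current 6-digit MFA code
        "DurationSeconds": 3600,                            # 60-minute session
    }

params = build_assume_role_params("123456")
# import boto3
# creds = boto3.client("sts").assume_role(**params)["Credentials"]
# creds would then contain AccessKeyId, SecretAccessKey, and SessionToken
# for use against the partner account's SQS queue.
print(params["DurationSeconds"])
```

DurationSeconds must not exceed the role's MaxSessionDuration, or STS rejects the call with a validation error.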
Exam Tips: If the question includes a RoleArn and cross-account access, default to AssumeRole. If it mentions SAML, choose AssumeRoleWithSAML; if it mentions OIDC/web identity tokens (e.g., Cognito, EKS IRSA), choose AssumeRoleWithWebIdentity. MFA parameters (SerialNumber/TokenCode) are a strong hint for AssumeRole in classic IAM role assumption scenarios.
A developer is authoring an AWS CloudFormation template for an internal API service in us-east-1 that must attach to an existing interface VPC endpoint (service name com.amazonaws.us-east-1.ssm) created by a separate CloudFormation stack named network-core, but on the first launch the application stack fails because it attempts to reference the endpoint ID from the other stack; which template coding mistakes could have caused this failure? (Choose two.)
Not necessarily. Ref returns a value for a resource or parameter in the same template (e.g., for AWS::EC2::VPCEndpoint, Ref returns the endpoint ID). However, cross-stack sharing is not achieved by Ref alone; you must import an exported value. The application stack could correctly use ImportValue and not directly use Ref at all, so this is not a required cause of failure.
Correct. To consume a value produced by another CloudFormation stack, the application template must use Fn::ImportValue (or !ImportValue) referencing the export name. Without ImportValue, the template cannot resolve the endpoint ID from the network-core stack at deploy time, leading to failures such as unresolved references, missing parameters, or invalid GetAtt/Ref attempts across stacks.
Incorrect. The Mappings section is for static key/value lookups (e.g., region-to-AMI mappings) and cannot dynamically contain a VPC endpoint ID created by another stack at runtime. Including the endpoint ID in Mappings would require hardcoding it, which defeats the purpose and is brittle. Cross-stack resource sharing should use Outputs/Export and ImportValue instead.
Correct. The producing stack (network-core) must export the VPC endpoint ID in its Outputs section using Export with a unique name. If it only outputs without exporting, or doesn’t output at all, the consuming stack cannot import it. This results in CloudFormation errors like “No export named X found” when the application stack runs.
Incorrect. CloudFormation does not support exporting values from the Mappings section. Only Outputs can be exported for cross-stack references. Mappings are evaluated within a single template and are not addressable by other stacks. Therefore, the absence of an export in Mappings is not a meaningful concept and would not be the root cause.
Core Concept: This question tests CloudFormation cross-stack references. When one stack (application) needs a resource identifier created by another stack (network-core), the supported pattern is: the producing stack exports a value in Outputs with Export, and the consuming stack imports it with Fn::ImportValue. This creates an explicit dependency and allows CloudFormation to resolve the value at deploy time.

Why the Answer is Correct: The failure on first launch is consistent with the application stack trying to reference an endpoint ID that is not available through a valid cross-stack mechanism. If the application template does not use ImportValue (Option B), it cannot legally pull an output from another stack; attempts to hardcode, use GetAtt on a non-local logical ID, or reference a parameter that isn't provided will fail. Additionally, if the network-core stack does not export the VPC endpoint ID in Outputs (Option D), there is nothing for ImportValue to import, and CloudFormation will error with an "Export not found" style message.

Key AWS Features: CloudFormation Exports are defined only in the Outputs section using Export: { Name: ... }. The consumer uses Fn::ImportValue (or !ImportValue) to retrieve it. Exports are regional (both stacks must be in us-east-1) and must have unique names per account/region. This pattern is commonly used for sharing VPC IDs, subnet IDs, security group IDs, and VPC endpoint IDs across stacks.

Common Misconceptions: Ref (Option A) is necessary to obtain the physical ID of a resource within the same template, but it does not cross stack boundaries by itself. Mappings (Options C and E) are static lookup tables and are not a mechanism for dynamically sharing resource IDs created at deployment time.

Exam Tips: For "stack A needs resource from stack B," look for Outputs+Export in stack B and ImportValue in stack A. If either side is missing, the deployment fails.
Also remember exports are regional and name-unique; mismatched export names or deploying in different regions are common real-world causes of similar errors.
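The producer/consumer pattern above can be sketched as two template fragments. The export name NetworkCore-SsmEndpointId and the logical IDs are illustrative placeholders, not values from the actual stacks:

```yaml
# --- network-core stack (producer) ---
Outputs:
  SsmEndpointId:
    Description: ID of the interface VPC endpoint for SSM
    Value: !Ref SsmInterfaceEndpoint        # Ref on AWS::EC2::VPCEndpoint returns the endpoint ID
    Export:
      Name: NetworkCore-SsmEndpointId       # must be unique per account/region

# --- application stack (consumer) ---
# Illustrative resource showing where the imported value is consumed.
Resources:
  SsmEndpointIdParam:
    Type: AWS::SSM::Parameter
    Properties:
      Name: /app/ssm-endpoint-id
      Type: String
      Value: !ImportValue NetworkCore-SsmEndpointId
```

Note that CloudFormation blocks deleting or renaming an export while any stack still imports it, which is part of why this pattern is safer than hardcoding IDs.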
A developer is building an AWS Step Functions state machine to orchestrate a subscription activation workflow for a video streaming platform. When a new subscription request arrives, the state machine must pause until the external billing system confirms payment. The billing system, upon successful charge, writes a confirmation item with partition key subscriptionId into an Amazon DynamoDB table named BillingConfirmations. The developer must complete the workflow so it resumes only after the matching confirmation item exists, while minimizing idle compute and avoiding long-running functions. Which solution will meet this requirement?
Correct. This implements an efficient Step Functions polling loop using DynamoDB GetItem via AWS SDK integration, followed by a Choice state. If the item is missing, a Wait state pauses the workflow without consuming compute, then retries. It satisfies “resume only after the matching item exists” while avoiding long-running Lambda functions and minimizing idle compute.
Incorrect. DynamoDB Streams can trigger Lambda, but Step Functions does not provide a “redrive execution” mechanism to resume a currently running (waiting) execution. Redrive is used to restart failed executions (and only for certain workflows/configurations). To resume from an external event you would need a callback task token and SendTaskSuccess/SendTaskFailure, which is not described here.
Incorrect. Stopping the current execution and starting a new one from the beginning breaks workflow correctness and idempotency unless the entire state machine is designed for safe replays. It also risks duplicate side effects (e.g., re-sending activation steps) and adds operational complexity. This does not meet the requirement to simply “resume” the paused workflow at the correct point.
Incorrect. A Lambda function that continuously polls DynamoDB is exactly the “long-running function” anti-pattern the question warns against. It consumes compute while waiting, can hit Lambda timeout limits, and is less reliable than a durable Step Functions Wait state. Step Functions should handle waiting; Lambda should not be used as a busy-wait loop.
Core Concept: This question tests AWS Step Functions orchestration patterns for waiting on an external event without consuming compute. The key concept is using Step Functions service integrations (AWS SDK integrations) plus a Wait state to implement efficient polling when you cannot use a native callback pattern.

Why the Answer is Correct: Option A uses Step Functions to call DynamoDB GetItem directly (no Lambda) and branches based on whether the confirmation item exists. If it does not exist, the workflow enters a Wait state (e.g., 5 minutes) and then checks again. During the Wait state, Step Functions does not run compute; it simply persists state, which minimizes idle compute and avoids long-running functions. This meets the requirement to resume only after the matching item exists and avoids keeping a Lambda function running.

Key AWS Features:
- Step Functions AWS SDK service integration for DynamoDB (e.g., dynamodb:GetItem) eliminates Lambda "glue code" and reduces cost/operational overhead.
- Wait state provides durable, low-cost pausing. Combined with a Choice state, it forms a standard "poll with backoff" pattern.
- DynamoDB GetItem by partition key is efficient, and strongly consistent reads can be enabled if needed (at higher RCU cost) to reduce the chance of reading stale data.
- This aligns with Well-Architected principles: cost optimization (no idle Lambda), reliability (durable state), and operational excellence (fewer moving parts).

Common Misconceptions: Many assume DynamoDB Streams can "resume" a running Step Functions execution. However, Step Functions does not support an arbitrary "resume execution" API from a stream event. The callback pattern requires an explicit task token (SendTaskSuccess/SendTaskFailure) and a worker that holds that token; the question's options do not implement that correctly. Another misconception is using a long-running Lambda poller; that wastes compute and risks timeouts.
Exam Tips: When you see “pause until external system confirms” and “minimize idle compute,” first think of Step Functions Wait states and callback patterns. If no task token/callback option is provided, the exam-friendly solution is often “Wait + re-check” using AWS SDK integrations to avoid Lambda. Also watch for invalid Step Functions operations like “redrive to resume a running execution”—redrive is for failed executions, not for waiting ones.
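The poll-with-Wait loop described above can be sketched in Amazon States Language. The table name matches the scenario; the state names, the 5-minute wait interval, and the assumption that subscriptionId arrives in the execution input are illustrative:

```json
{
  "Comment": "Sketch: poll BillingConfirmations until the item exists",
  "StartAt": "CheckConfirmation",
  "States": {
    "CheckConfirmation": {
      "Type": "Task",
      "Resource": "arn:aws:states:::aws-sdk:dynamodb:getItem",
      "Parameters": {
        "TableName": "BillingConfirmations",
        "Key": { "subscriptionId": { "S.$": "$.subscriptionId" } }
      },
      "ResultPath": "$.lookup",
      "Next": "ConfirmationExists?"
    },
    "ConfirmationExists?": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.lookup.Item", "IsPresent": true, "Next": "ActivateSubscription" }
      ],
      "Default": "WaitAndRetry"
    },
    "WaitAndRetry": { "Type": "Wait", "Seconds": 300, "Next": "CheckConfirmation" },
    "ActivateSubscription": { "Type": "Succeed" }
  }
}
```

When the item is absent, GetItem returns a response with no Item attribute, so the IsPresent check on $.lookup.Item routes the execution back through the Wait state.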