
AWS
548+ free practice questions with AI-verified answers
AI-powered
Every AWS Certified Developer - Associate (DVA-C02) answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed explanations for every option and in-depth analyses of each question.
A telemedicine startup runs a web portal where 12 partner clinics upload DICOM images (up to 5 GB each) to an Amazon S3 bucket using presigned URLs. The company must enforce encryption in transit for all uploads and downloads while ensuring that any objects containing patient identifiers are encrypted at rest with AWS KMS keys that the security team can rotate on demand. Which combination of steps will meet these requirements? (Choose two.)
Incorrect. IAM permissions boundaries are attached to IAM principals (users/roles) to limit the maximum permissions they can receive. They do not enforce HTTPS/TLS for S3 requests and are not evaluated as a condition like aws:SecureTransport. Transport security for S3 is enforced with bucket policies (or CloudFront), not permissions boundaries.
Incorrect. S3 bucket policies cannot “enable client-side encryption.” Client-side encryption is performed by the client/application (e.g., AWS Encryption SDK) before uploading, and S3 has no mechanism to enforce that via policy. A bucket policy can require server-side encryption headers (like SSE-KMS) but cannot ensure the client encrypted the payload before upload.
Correct. Because only objects containing patient identifiers require KMS-based protection, the application or upload workflow must determine which files meet that condition before they are stored in Amazon S3. The workflow can then upload those objects with AWS KMS encryption using a customer managed key, which gives the security team control over key rotation and key policy management. This is the best available option because S3 cannot inspect object contents and selectively apply KMS encryption based on the data inside the file.
Correct. A bucket policy that uses the aws:SecureTransport condition can deny any request sent over HTTP and allow only HTTPS requests. That enforces encryption in transit for both PUT and GET operations, including access performed through presigned URLs. This is the standard AWS mechanism for requiring TLS when accessing Amazon S3.
Incorrect. S3 Block Public Access prevents public ACLs and public bucket policies from granting public access. It does not enforce HTTPS-only access and does not guarantee encryption in transit. You could enable Block Public Access as a best practice, but it does not satisfy the stated transport encryption requirement by itself.
Core Concept: This question tests two separate controls for Amazon S3 uploads and downloads: enforcing encryption in transit and ensuring that sensitive objects are protected at rest with AWS KMS customer managed keys. For S3, HTTPS-only access is enforced with a bucket policy using the aws:SecureTransport condition. For at-rest protection with customer-controlled key rotation, the upload process must ensure that qualifying objects are stored using AWS KMS customer managed keys, because S3 cannot inspect object contents and automatically decide which objects contain patient identifiers.

Why the Answer is Correct: Option D is correct because an S3 bucket policy can deny any request that is not made over TLS by checking aws:SecureTransport. This applies to both uploads and downloads, including requests made with presigned URLs. Option C is the best match for the selective at-rest encryption requirement because the application workflow must identify which objects contain patient identifiers and then upload those objects using KMS-based encryption with a customer managed key.

Key AWS Features:
- S3 bucket policy with the aws:SecureTransport condition to require HTTPS.
- AWS KMS customer managed keys for encryption at rest when the security team needs control over rotation.
- Application-side logic to determine which uploads require KMS protection, since S3 does not classify object contents automatically.

Common Misconceptions:
- Bucket policies can enforce HTTPS and can require server-side encryption headers, but they cannot inspect DICOM contents to determine whether an object contains patient identifiers.
- S3 Block Public Access is unrelated to transport encryption and does not force HTTPS.
- IAM permissions boundaries do not control whether S3 requests use TLS.

Exam Tips: When a question asks for encryption in transit to S3, look for aws:SecureTransport in a bucket policy.
When only some objects require stronger at-rest protection, expect the application or upload workflow to decide which objects must use SSE-KMS with a customer managed key. If the requirement mentions key rotation under security-team control, prefer AWS KMS customer managed keys over AWS managed keys.
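As a concrete illustration, the HTTPS-only rule can be expressed as a single Deny statement in the bucket policy (the bucket name example-bucket is a placeholder; the question does not name the bucket):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "Bool": { "aws:SecureTransport": "false" }
      }
    }
  ]
}
```

Because the Deny applies to every principal and every S3 action, it also covers uploads and downloads performed through presigned URLs.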
A fintech startup serves its single-page web app (index.html, app.js, styles.css) from an Amazon S3 bucket named fintech-ui-prod behind an Amazon CloudFront distribution whose default TTL is 86,400 seconds (min TTL 0), and objects are cached due to Cache-Control: max-age=86400. After a CI/CD job deploys build 2025.08.15 by overwriting the same object keys in S3, the team confirms the new artifacts in S3, but end users still see the old UI for hours through CloudFront. How should the developer ensure the updated assets are delivered immediately via CloudFront?
S3 Object Lock is a data protection/compliance feature that prevents deletion or modification of objects for a retention period (WORM). It does not control CloudFront caching or force edge locations to re-fetch updated objects. In fact, Object Lock could hinder overwriting objects depending on retention settings, making it unrelated and potentially harmful for this deployment scenario.
An S3 lifecycle policy to delete previous versions (or noncurrent versions) affects storage management and cost, not CloudFront edge caches. Even if old versions are removed from S3, CloudFront can still serve the previously cached object until its TTL expires or an invalidation occurs. Lifecycle policies also operate on a schedule, not as an immediate post-deploy cache refresh mechanism.
A CloudFront invalidation explicitly removes cached objects from edge locations before TTL expiry. After deploying the new build to S3 (overwriting the same keys), invalidating the changed paths (specific files or /*) ensures the next viewer request triggers CloudFront to fetch the latest objects from S3. This is the correct, immediate way to refresh CloudFront when filenames/paths remain constant and TTLs are long.
Changing the CloudFront origin bucket after each deployment is an anti-pattern. It adds operational complexity, risks misconfiguration, and does not align with standard CI/CD practices. While it could force CloudFront to fetch from a different origin, it’s unnecessary and disruptive compared to invalidations or versioned asset filenames. It also doesn’t scale well and can break permissions, OAC/OAI settings, and DNS behavior.
Core Concept: This question tests Amazon CloudFront caching behavior in front of an Amazon S3 origin, specifically how TTLs and Cache-Control headers cause CloudFront edge locations to continue serving cached objects even after the origin objects are overwritten.

Why the Answer is Correct: CloudFront caches objects at edge locations based on the cache key (typically path + query strings/headers/cookies per behavior) and retains them until they expire (TTL) or are explicitly removed. Here, the default TTL is 86,400 seconds and the objects include Cache-Control: max-age=86400, so CloudFront is expected to serve the cached (old) index.html/app.js/styles.css for up to a day. Overwriting the same keys in S3 does not automatically purge CloudFront caches. Creating a CloudFront invalidation for the updated paths (e.g., /index.html, /app.js, /styles.css, or /*) forces CloudFront to evict those cached objects so the next request fetches the new versions from S3 immediately.

Key AWS Features / Best Practices: CloudFront invalidations are the standard mechanism to remove cached content before TTL expiry. A common deployment best practice for SPAs is also “versioned assets” (e.g., app.20250815.js) with long TTLs, and a shorter TTL for index.html, but given the question’s requirement for immediate delivery after overwriting keys, invalidation is the direct fix. Min TTL 0 allows CloudFront to honor lower TTLs, but the current Cache-Control still sets a 1-day max-age.

Common Misconceptions: Teams often assume that updating S3 objects automatically updates CloudFront. It does not: CloudFront is a separate cache layer. Another misconception is that deleting old S3 versions or changing bucket settings affects what CloudFront already cached; it won’t until the cache entry expires or is invalidated.
Exam Tips: When you see “users still get old content for hours” with CloudFront + long TTL/Cache-Control and “same object keys overwritten,” the exam pattern is: either (1) invalidate paths, or (2) use cache-busting versioned filenames. If the question asks for “immediately” and doesn’t mention changing filenames, choose invalidation.
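The invalidation step can be sketched in Python. The InvalidationBatch shape below matches what boto3's cloudfront.create_invalidation(DistributionId=..., InvalidationBatch=...) call expects, but the helper only builds the payload locally (no AWS calls are made), and the "deploy-" CallerReference naming is an assumption:

```python
import time


def build_invalidation_batch(paths):
    """Build the InvalidationBatch payload for cloudfront.create_invalidation.

    CallerReference must be unique per invalidation request; a timestamp
    is one common convention (an assumption here, not a requirement).
    """
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        "CallerReference": f"deploy-{int(time.time())}",
    }


# Invalidate only the files the CI/CD job overwrote (or use ["/*"]).
batch = build_invalidation_batch(["/index.html", "/app.js", "/styles.css"])
```

After the call succeeds, the next viewer request for those paths misses the edge cache and is fetched fresh from the S3 origin.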
A developer is building a serverless image-processing API on AWS Lambda behind Amazon API Gateway. The function must read two adjustable runtime parameters: maximum parallel transformations (initially 300) and maximum images processed per minute (initially 900). Operations may update these values weekly, and the updates must be rolled out automatically with a canary release (e.g., 10% over 20 minutes), allow fast rollback, and apply across all function instances without redeploying the function code or causing downtime. Which solution meets these requirements?
Lambda environment variables are static per function configuration and updating them requires a function configuration update (often paired with publishing a new version/alias update). CodeDeploy AllAtOnce is the opposite of a canary and increases blast radius. This also couples config changes to Lambda deployments and does not provide a controlled 10% over 20 minutes rollout for configuration-only changes.
Using CDK/CloudFormation to manage parameters and redeploy on each change ties operational parameter updates to infrastructure deployments. While CodeDeployDefault.LambdaCanary10Percent15Minutes provides canary for Lambda version traffic shifting, it still requires publishing new versions and updating aliases, which violates the requirement to apply changes without redeploying function code and adds unnecessary deployment complexity for simple runtime limits.
AWS AppConfig is designed for dynamic application configuration with controlled deployments. Storing the limits as hosted configuration and using the AppConfig Lambda extension allows the function to read updated values at runtime (with local caching) without redeploying code or changing environment variables. AppConfig deployment strategies (e.g., Canary10Percent20Minutes) provide progressive rollout and easy rollback by redeploying a previous configuration version.
Polling S3 every 5 minutes and programmatically updating Lambda environment variables introduces delay, complexity, and failure modes (missed updates, race conditions). Updating environment variables triggers function configuration changes and typically requires publishing new versions/CodeDeploy actions, which is effectively a redeployment path. It also does not provide a native, reliable canary rollout of configuration changes across invocations comparable to AppConfig deployment strategies.
Core Concept: This question tests dynamic configuration management for serverless workloads and safe rollout/rollback of configuration changes without redeploying code. The key services are AWS AppConfig (part of AWS Systems Manager) and the AWS AppConfig Lambda extension.

Why the Answer is Correct: The requirement is to update two runtime parameters weekly, roll them out automatically with a canary (10% over 20 minutes), support fast rollback, and have the change apply across all Lambda instances without redeploying code or causing downtime. AWS AppConfig is purpose-built for this: you store the limits as hosted configuration, then deploy configuration changes using a deployment strategy such as Canary10Percent20Minutes. The AppConfig Lambda extension allows the function to fetch configuration at runtime (locally cached by the extension), so updates propagate without changing Lambda code packages, publishing new versions, or updating environment variables.

Key AWS Features:
1) AWS AppConfig hosted configuration: central, versioned configuration store.
2) Deployment strategies: built-in canary/linear strategies with bake time and automatic progression.
3) Rollback: you can quickly redeploy a prior known-good configuration version (or stop a deployment) without touching Lambda deployments.
4) Lambda extension caching: reduces latency and throttling risk by caching configuration locally per execution environment while still refreshing on a defined interval.
5) Separation of config from code: aligns with AWS Well-Architected best practices (operational excellence and reliability) by enabling controlled changes and reducing blast radius.

Common Misconceptions: Many assume Lambda environment variables are the right place for runtime parameters.
However, changing environment variables requires updating the function configuration, which triggers a new function version/config update and does not provide native canary rollout of “just config” across invocations without a deployment. Similarly, using CDK/CloudFormation redeployments couples config changes to infrastructure deployments, increasing risk and operational overhead.

Exam Tips: When you see “runtime parameters,” “frequent updates,” “no redeploy,” and “canary/rollback,” think AWS AppConfig (or Parameter Store/Secrets Manager for simpler cases). If the question explicitly requires progressive rollout (10%/20 minutes) and fast rollback, AppConfig deployment strategies are the strongest match, especially with Lambda via the AppConfig extension.
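A minimal sketch of the function-side parsing, assuming the limits are stored as a JSON hosted configuration with the hypothetical keys max_parallel_transformations and max_images_per_minute. (The real AppConfig Lambda extension serves the configuration document over a local HTTP endpoint on port 2772; only the parsing with fallbacks to the initial values is shown here.)

```python
import json

# Initial limits from the question; used as fallbacks if a key is absent.
DEFAULTS = {"max_parallel_transformations": 300, "max_images_per_minute": 900}


def parse_limits(raw: str) -> dict:
    """Parse the JSON configuration document returned by the AppConfig
    extension, falling back to the initial limits for missing keys."""
    cfg = json.loads(raw)
    return {key: int(cfg.get(key, default)) for key, default in DEFAULTS.items()}


# Example: operations raised only the parallelism limit this week.
limits = parse_limits('{"max_parallel_transformations": 350}')
```

Because the extension caches locally per execution environment, reading the limits on each invocation stays cheap while still picking up canary-deployed changes.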
A telemedicine platform is rolling out nationwide training webinars, and as concurrent sessions surge, the DevOps team needs to be alerted if the Amazon EC2 instance running the WebRTC signaling service exceeds 85% CPU utilization for 10 consecutive minutes so that on-call engineers can scale capacity. Which solution will meet this requirement?
Correct. CloudWatch Alarms are designed to evaluate EC2 metrics like CPUUtilization against thresholds over defined time windows. Configure the alarm to require 10 consecutive minutes above 85% (e.g., 1-minute period with 10 evaluation periods, or 5-minute period with 2 evaluation periods) and set the alarm action to publish to an SNS topic. This is the standard, low-ops solution for metric-based alerting.
Incorrect. AWS CloudTrail does not track CPU utilization metrics; it logs API calls and account activity for governance, compliance, and auditing. You cannot create a meaningful “CloudTrail alarm” for CPUUtilization because the data source is wrong. For operational metrics and thresholds, CloudWatch (not CloudTrail) is the correct service.
Incorrect. A cron job running aws ssm describe-instance-information does not provide CPU utilization and is not a reliable monitoring approach. It also adds operational burden, requires instance credentials/permissions, and can fail if the instance is unhealthy (exactly when you most need monitoring). CloudWatch already collects CPU metrics without custom scripts.
Incorrect. CloudTrail logs do not contain CPUUtilization metrics, so scanning them with Lambda cannot detect sustained CPU thresholds. Even if metrics were available elsewhere, polling and custom evaluation logic every 10 minutes is unnecessary and less robust than CloudWatch Alarms, which natively handle evaluation periods, missing data behavior, and alarm state transitions.
Core Concept: This question tests Amazon CloudWatch monitoring and alarming for EC2 metrics, and using Amazon SNS for notifications. CPU utilization is a standard EC2 metric published to CloudWatch, and CloudWatch Alarms are the native mechanism to evaluate metric thresholds over time and trigger actions.

Why the Answer is Correct: Option A uses a CloudWatch alarm on the EC2 instance’s CPUUtilization metric with a threshold of 85% and a 10-minute sustained breach requirement. In CloudWatch, “10 consecutive minutes” is implemented by setting Period and EvaluationPeriods appropriately (for example, Period=60 seconds and EvaluationPeriods=10, with DatapointsToAlarm=10 to require all 10 datapoints breaching). When the alarm enters the ALARM state, it can publish to an SNS topic, which then notifies on-call engineers via email, SMS, or integrations (PagerDuty, Opsgenie, etc.). This directly meets the requirement with minimal operational overhead.

Key AWS Features:
- CloudWatch EC2 metrics: CPUUtilization is available by default (basic monitoring at 5-minute granularity; detailed monitoring at 1-minute granularity). To reliably evaluate “10 consecutive minutes,” enable detailed monitoring or use a 5-minute period with 2 evaluation periods.
- CloudWatch Alarm configuration: Threshold, Period, EvaluationPeriods, DatapointsToAlarm, and TreatMissingData (commonly set to “notBreaching” to avoid false alarms during gaps).
- SNS notifications: Decouples alerting from recipients and supports multiple subscribers and protocols.

Common Misconceptions: A frequent trap is assuming CloudTrail contains performance metrics. CloudTrail records API activity (who did what, when), not CPU utilization. Another misconception is building custom scripts or Lambda log scanning for something CloudWatch already does natively, which increases complexity and reduces reliability.

Exam Tips: When you see “alert when a metric exceeds X for Y minutes,” think CloudWatch Alarm first.
Use SNS for notifications. Remember CloudTrail is for auditing API calls, not operational metrics. Also consider metric granularity: 1-minute detailed monitoring is often required for “consecutive minutes” precision.
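The "all 10 datapoints breaching" evaluation can be mimicked locally. This is a simplified model of the alarm semantics described above (Period=60s, EvaluationPeriods=10, DatapointsToAlarm=10), not CloudWatch itself:

```python
def breaches_alarm(datapoints, threshold=85.0, datapoints_to_alarm=10):
    """Return True once `datapoints_to_alarm` consecutive 1-minute CPU
    datapoints all exceed the threshold, mimicking a CloudWatch alarm
    with Period=60, EvaluationPeriods=10, DatapointsToAlarm=10."""
    run = 0
    for value in datapoints:
        run = run + 1 if value > threshold else 0
        if run >= datapoints_to_alarm:
            return True
    return False
```

A single sub-threshold minute resets the streak, which is exactly why DatapointsToAlarm equal to EvaluationPeriods models "10 consecutive minutes" rather than "10 breaching minutes out of any window."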
A team is using AWS Serverless Application Model (AWS SAM) to deploy a video-analytics microservice that includes one AWS Lambda function (512 MB memory with reserved concurrency of 5) and one Amazon S3 bucket named media-logs-789 that stores input files under the incoming/ prefix. The function must be able to read objects from that bucket and prefix only (no write or delete) as part of the automated deployment. How should the developer configure the SAM template to grant the required read-only access?
A Lambda authorizer is used with Amazon API Gateway to control access to API methods. It does not grant the Lambda function permissions to call S3 APIs. The function’s ability to read from S3 is governed by its execution role IAM policies (and possibly the bucket policy), not by an authorizer. This option confuses request authorization for APIs with service-to-service IAM permissions.
You cannot attach an S3 bucket policy “to the Lambda function.” Bucket policies are resource-based policies attached to the S3 bucket. While a bucket policy could grant the Lambda execution role access, it’s not the cleanest or most typical approach for same-account access in SAM, and it’s easy to over-permit. The question asks how to configure the SAM template to grant read-only access, which is best done on the function role.
SQS is a queue (not a topic), and it is unrelated to granting read permissions on S3 objects. You could use S3 event notifications to send messages to SQS when objects are created, but that does not authorize the Lambda function to read the objects. IAM permissions are still required for s3:GetObject (and possibly s3:ListBucket). This option mixes event-driven ingestion with access control.
S3ReadPolicy is an AWS SAM policy template designed to grant read-only access to S3. When added under the Lambda function’s Policies property and scoped to the media-logs-789 bucket and incoming/ prefix, SAM generates the necessary IAM statements on the function’s execution role. This provides automated, least-privilege access (read-only, no write/delete) and is the intended SAM-native solution.
Core Concept: This question tests AWS SAM’s built-in policy templates for least-privilege IAM permissions on a Lambda execution role, specifically granting scoped Amazon S3 read-only access during automated deployments.

Why the Answer is Correct: In SAM, the recommended way to grant a Lambda function permissions is to attach IAM policies to the function’s execution role via the Policies property. SAM provides managed “policy templates” (e.g., S3ReadPolicy) that expand into the correct IAM statements at deploy time. By using S3ReadPolicy and scoping it to the specific bucket (media-logs-789) and prefix (incoming/), the function receives only the permissions it needs: typically s3:GetObject (and often s3:ListBucket with a prefix condition) for that bucket/prefix, with no write or delete actions. This aligns with least privilege and is fully automatable within the SAM template.

Key AWS Features:
- AWS SAM policy templates: Simplify IAM authoring and reduce mistakes compared to hand-written policies.
- Lambda execution role: The role assumed by the Lambda service to call AWS APIs (S3 in this case).
- Prefix scoping: Achieved by limiting object ARNs to arn:aws:s3:::media-logs-789/incoming/* and, when needed, restricting ListBucket to the bucket ARN with s3:prefix conditions.
- Infrastructure as Code: Permissions are versioned and deployed consistently via SAM/CloudFormation.

Common Misconceptions: A frequent confusion is thinking S3 access is granted via a “bucket policy to the Lambda function.” Bucket policies attach to the bucket resource, not to the function, and are typically used for cross-account access or to enforce constraints. Another misconception is introducing unrelated services (authorizers, SQS) to “enable reads,” but reads are controlled by IAM authorization, not eventing or API gateway authorizers.

Exam Tips: For SAM questions, look for “Policies:” on the AWS::Serverless::Function and prefer SAM policy templates (S3ReadPolicy, DynamoDBReadPolicy, etc.)
when the requirement is straightforward. Always scope permissions to the smallest resource set (bucket + prefix) and the minimum actions (read-only) to satisfy least privilege and Well-Architected Security best practices.
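A hedged SAM template sketch (the handler name and runtime are assumptions). Note that the S3ReadPolicy template is parameterized by BucketName and grants read access at the bucket level; if strict incoming/-prefix scoping is required, an inline policy statement limiting the object ARN to arn:aws:s3:::media-logs-789/incoming/* could be used instead:

```yaml
Resources:
  VideoAnalyticsFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler          # assumed handler name
      Runtime: python3.12           # assumed runtime
      MemorySize: 512
      ReservedConcurrentExecutions: 5
      Policies:
        - S3ReadPolicy:
            BucketName: media-logs-789
```

At deploy time, SAM expands S3ReadPolicy into read-only IAM statements on the function's execution role, so no write or delete actions are granted.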
A logistics company exposes a Regional Amazon API Gateway REST API (GET /shipments/{trackingId}) that invokes an AWS Lambda function using Lambda proxy integration. On 2024-11-03 at 14:25:33 UTC, a developer runs curl -i https://api.company.example/prod/shipments/42 and receives HTTP 502. API Gateway execution logs show 'Method completed with status: 502', and the Lambda’s CloudWatch Logs indicate the handler completed successfully and returned the plain string 'OK' (2 bytes) with no stack trace. To resolve the error and have the API return 200, what should the developer change?
Changing from Regional to Edge-Optimized affects how clients reach API Gateway (CloudFront distribution in front) and can improve latency for global users, but it does not change the required Lambda proxy response format. Endpoint type mismatches or CloudFront issues typically show different symptoms (DNS/403/5xx at edge), not a consistent 502 caused by a malformed integration response after Lambda succeeds.
For a GET /shipments/{trackingId}, there is typically no request body. The request clearly reached API Gateway and invoked the Lambda function (as evidenced by Lambda logs). A malformed client payload would more likely cause a 400 (bad request) or mapping/template errors before invocation, not a 502 after Lambda returns. The core issue is the response format, not the request.
With Lambda proxy integration, Lambda must return a JSON object containing at least statusCode and body (string), optionally headers and isBase64Encoded. Returning a plain string like "OK" is not a valid proxy response, so API Gateway cannot translate it into an HTTP response and returns 502. Changing Lambda to return {"statusCode":200,"body":"OK"} resolves the issue and yields HTTP 200.
Authorization issues (missing/invalid credentials) are handled by API Gateway before invoking Lambda and typically result in 401 Unauthorized or 403 Forbidden, not a 502. The evidence shows Lambda was invoked and completed successfully, so the request was authorized to reach the integration. The problem occurs when API Gateway tries to interpret the Lambda output, not when authenticating the caller.
Core Concept: This question tests Amazon API Gateway REST API + AWS Lambda proxy integration (a.k.a. Lambda proxy). With proxy integration, API Gateway expects the Lambda function to return a specific JSON structure so API Gateway can translate it into an HTTP response.

Why the Answer is Correct: The Lambda handler “completed successfully” but returned the plain string "OK". In Lambda proxy integration, API Gateway requires a response object like {"statusCode": 200, "headers": {...}, "body": "..."} (and optionally "isBase64Encoded"). If the Lambda returns a raw string (or otherwise malformed proxy response), API Gateway cannot parse it into an HTTP response and typically returns a 502 Bad Gateway with execution logs showing “Method completed with status: 502”. This is a classic symptom: Lambda succeeded, but API Gateway failed to transform the integration response.

Key AWS Features / Configurations:
- Lambda proxy integration for REST APIs requires:
  - statusCode (integer)
  - body (string; JSON must be stringified)
  - optional headers and isBase64Encoded
- API Gateway 502 in this context often indicates “Malformed Lambda proxy response” (sometimes visible in execution logs).
- Correct fix: update Lambda to return the proxy response format, e.g. statusCode 200 and body "OK".

Common Misconceptions:
- 502 is often assumed to be a network/endpoint-type issue (Edge vs Regional) or a Lambda runtime crash. Here, Lambda logs show no stack trace and successful completion, pointing away from runtime failure.
- Authorization problems usually yield 401/403 from API Gateway (or 403 from IAM auth), not a 502 after invoking Lambda.
- For a GET, “request payload format” is rarely the issue; the request reached API Gateway and invoked Lambda.

Exam Tips: When you see API Gateway + Lambda proxy integration + HTTP 502 while Lambda logs show success, immediately verify the Lambda proxy response shape.
Remember: body must be a string, and missing/incorrect fields (statusCode/body) commonly cause 502. Also distinguish 401/403 (auth) from 502 (integration/response mapping/parsing).
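The fix can be sketched as a minimal handler that returns the proxy response shape instead of a bare string:

```python
import json


def handler(event, context):
    """Lambda proxy integration requires a JSON object with at least
    statusCode and a string body; returning a bare "OK" causes a 502."""
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "text/plain"},
        "body": "OK",  # for JSON payloads, use json.dumps(...) so body stays a string
    }


response = handler({}, None)
```

With this shape in place, API Gateway can translate the integration response into HTTP 200 with the body "OK".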
A company runs a payment-authorization microservice on Amazon ECS with AWS Fargate, and each authorization must finish in under 2 seconds. If any single authorization takes 2 seconds or longer, the development team must be notified. How can a developer implement the timing and notification with the least operational overhead?
Correct. Publishing each authorization’s latency as a CloudWatch custom metric is the lowest-overhead way to capture the timing data in a managed AWS service. To satisfy the requirement that any single authorization taking 2 seconds or longer triggers notification, the CloudWatch alarm should evaluate the Maximum statistic for the period against a 2000 ms threshold. SNS is the standard managed notification target for CloudWatch alarms, so this design minimizes custom code and operational burden while still providing timely alerts.
Incorrect. SQS plus a custom consumer can detect >2000 ms values, but it introduces operational overhead: you must build, deploy, scale, and monitor the consumer, handle failures, retries, and poison messages. This is not the least-ops solution when CloudWatch alarms can natively evaluate thresholds and trigger SNS notifications.
Incorrect. An alarm on the average over 1 minute can miss single slow authorizations because the average may remain below 2000 ms even if one request exceeds 2 seconds. It also uses SES, which is not the standard service for CloudWatch alarm notifications (SNS is). This fails the “any single authorization” requirement.
Incorrect. Kinesis is unnecessary and adds operational complexity and cost for this use case. Also, CloudWatch alarms do not evaluate “any stream value” inside Kinesis records directly; you would still need a consumer (Lambda/KCL) to extract values and publish a metric. That defeats the least operational overhead requirement compared to direct CloudWatch custom metrics.
Core Concept: This question tests how to implement low-overhead latency monitoring and alerting for an ECS on Fargate microservice. The best AWS-native approach is to publish authorization latency as a CloudWatch custom metric and use a CloudWatch alarm to notify through Amazon SNS.

Why the Answer is Correct: The requirement is to notify the team if any single authorization takes 2 seconds or longer. By publishing each authorization’s processing time as a custom metric datapoint, the developer can configure a CloudWatch alarm on the Maximum statistic with a threshold of 2000 ms so that if any datapoint in the evaluation period is 2000 ms or higher, the alarm triggers. SNS is the standard managed notification target for CloudWatch alarms, which keeps operational overhead low.

Key AWS Features:
- CloudWatch custom metrics: publish per-authorization latency values in milliseconds, optionally with dimensions such as service name or environment.
- CloudWatch alarms: configure a short period and evaluate the Maximum statistic so a single slow authorization in that period causes a breach; high-resolution metrics can improve responsiveness.
- Amazon SNS: provides managed fan-out notifications to email, SMS, HTTPS endpoints, or incident-management integrations.

Common Misconceptions: A common mistake is assuming an average latency alarm will catch individual slow requests; averages can hide outliers. Another misconception is that services like SQS or Kinesis are needed for simple threshold detection, but they add unnecessary consumers and operational complexity. Also, CloudWatch alarms work on metric statistics over a period, so using the right statistic such as Maximum is important.

Exam Tips: When the requirement is 'alert if any single request exceeds a threshold' with minimal operational overhead, think custom CloudWatch metric + alarm on Maximum + SNS.
Avoid solutions that require building queue or stream consumers unless the problem explicitly requires custom processing, replay, or downstream analytics. Pay close attention to wording like 'any single event' versus 'average over time.'
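A toy illustration of why the statistic choice matters: summarizing one alarm period's latency datapoints with Maximum catches a single slow authorization that Average hides (the numbers below are made up):

```python
def statistic(values, stat):
    """Summarize one period's datapoints the way a CloudWatch alarm
    statistic would: Maximum keeps the worst value, Average smooths it."""
    return max(values) if stat == "Maximum" else sum(values) / len(values)


# Latencies (ms) for one period: exactly one authorization breached 2000 ms.
latencies = [120, 150, 2100, 180, 140]
```

Here Maximum reports 2100 ms (alarm fires at a 2000 ms threshold), while Average reports 538 ms and the breach goes unnoticed, which is why the Average-over-1-minute option fails the "any single authorization" requirement.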
A team adds OpenCV and SciPy to an existing AWS Lambda function that resizes images. After the update, the unzipped .zip deployment package is 420 MB, exceeding the 250 MB quota for .zip-based Lambda packages, and the function uses the x86_64 instruction set architecture, which must remain unchanged. The developer must deploy the function with the new dependencies without altering its business logic. Which solution will meet these requirements?
Incorrect. AWS Lambda does not support mounting arbitrary “snapshots” of dependencies as a filesystem in the way EC2 can use EBS snapshots. Lambda provides /tmp ephemeral storage and optional EFS mounting, but not snapshot mounting. This option describes a non-existent Lambda feature and would not be a valid deployment approach for oversized dependencies.
Incorrect. Although arm64 can sometimes provide better price/performance and may change dependency footprint, the question explicitly states the function uses x86_64 and that it must remain unchanged. Therefore, switching architectures violates a hard requirement. Additionally, changing architecture can introduce compatibility issues for native libraries like OpenCV/SciPy.
Incorrect. Lambda cannot attach an Amazon EBS volume. EBS is an EC2 block storage service and requires an EC2 instance to attach. For Lambda, the supported network filesystem option is Amazon EFS (and only within a VPC), but even EFS is not the best fit here compared to container images for packaging large dependencies.
Correct. Lambda container images support up to 10 GB, which easily accommodates a 420 MB dependency set. The developer can build an image (using an AWS Lambda base image), install OpenCV and SciPy, push to Amazon ECR, and update the Lambda function to use the image while keeping the same code logic and x86_64 architecture. This is the standard solution for oversized Lambda packages.
Core Concept: This question tests AWS Lambda deployment packaging limits and the alternative packaging model: Lambda container images. It also touches on architecture constraints (x86_64) and how to include large native dependencies (OpenCV, SciPy) without changing application logic.

Why the Answer is Correct: A .zip-based Lambda deployment package has a hard limit of 250 MB unzipped (and 50 MB zipped for direct upload). The updated function’s unzipped size is 420 MB, so it cannot be deployed as a .zip. Lambda container images allow packaging the function code and all dependencies into an OCI-compatible image up to 10 GB. The developer can build an image based on an AWS-provided Lambda base image for the runtime (for example, python), install OpenCV/SciPy and any native libraries, push the image to Amazon ECR, and point the Lambda function to that image. This meets all requirements: deploy with new dependencies, keep business logic unchanged, and keep the x86_64 architecture (Lambda supports container images on x86_64).

Key AWS Features:
- Lambda container image support (up to 10 GB) stored in Amazon ECR.
- AWS Lambda base images that include the Lambda Runtime Interface Client, simplifying compatibility.
- Ability to specify function architecture (x86_64) when creating/updating the function.
- Operationally, ECR integrates with IAM and supports image scanning and lifecycle policies.

Common Misconceptions:
- Lambda layers can help with dependencies, but they do not solve this case if the combined unzipped size still exceeds limits (and layers have their own size limits). The question specifically points to exceeding the .zip quota, steering you to container images.
- “Mounting a snapshot” or attaching EBS are not supported patterns for Lambda.
- Switching to arm64 might reduce size in some cases, but the architecture must remain x86_64.

Exam Tips: When you see large ML/scientific libraries (OpenCV, SciPy, TensorFlow) and package size limit issues, immediately consider Lambda container images (or occasionally splitting into layers if within limits). Also watch for constraints like “must remain x86_64,” which eliminates architecture-change options even if they seem beneficial.
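The quota reasoning and the switch to an image-based package can be sketched as follows. The function name and image URI are hypothetical, and `update_function_code` with `ImageUri` assumes the function uses (or has been recreated with) the Image package type.

```python
ZIP_UNZIPPED_LIMIT_MB = 250       # hard quota for .zip-based packages (unzipped)
CONTAINER_IMAGE_LIMIT_MB = 10240  # container images can be up to 10 GB

def packaging_choice(unzipped_mb):
    """Pick the Lambda packaging model that can hold the deployment package."""
    if unzipped_mb <= ZIP_UNZIPPED_LIMIT_MB:
        return "zip"
    if unzipped_mb <= CONTAINER_IMAGE_LIMIT_MB:
        return "container-image"
    raise ValueError("package exceeds all Lambda packaging limits")

def switch_to_image(function_name, image_uri):
    """Point an existing image-based function at a new ECR image, keeping x86_64."""
    import boto3  # lazy import so the sizing logic above is testable offline
    lam = boto3.client("lambda")
    lam.update_function_code(
        FunctionName=function_name,
        ImageUri=image_uri,        # e.g. <acct>.dkr.ecr.<region>.amazonaws.com/processor:tag
        Architectures=["x86_64"],  # keep the required instruction set
    )
```

For the 420 MB package in the question, `packaging_choice(420)` selects the container image path.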
A media streaming company stores its microservice code in AWS CodeCommit, and compliance requires that 100% of unit tests pass and detailed test reports be retained for 60 days and viewable by auditors; for approximately 15 commits per day to the main and develop branches, the team needs each commit to automatically build the service in a Linux build environment, run unit tests, and produce a navigable JUnit-style report without creating a custom UI, while supporting at least 8 concurrent builds and keeping the reports centrally accessible; which solution will meet these requirements?
Incorrect. AWS CodeDeploy is designed for deploying applications to EC2, Lambda, or ECS and coordinating deployment hooks, not for CI unit testing and rich test report visualization. Publishing results as CloudWatch metrics does not create a navigable JUnit-style report for auditors, and it would require additional tooling/dashboards to interpret detailed per-test-case results. It also couples unit testing to deployments, which is not a best practice for fast feedback CI.
Incorrect. Amazon CodeWhisperer is an AI coding assistant, not a CI system. It does not orchestrate builds on commits, run unit tests at scale, or provide standardized test report visualization. Storing raw outputs in S3 alone does not meet the requirement for a navigable JUnit-style report without building a UI; you would still need a viewer or reporting layer to interpret and browse results.
Correct. AWS CodeBuild can be triggered automatically from CodeCommit on each commit to main/develop, run builds and unit tests in a managed Linux environment, and ingest JUnit XML into CodeBuild Report Groups for a built-in, navigable console experience. Reports and artifacts can be exported to S3 for centralized access and governed with a 60-day lifecycle policy. CodeBuild supports scaling to at least 8 concurrent builds via managed concurrency and quota configuration.
Incorrect. Lambda is not appropriate for compiling/building typical microservices and running full unit test suites due to execution time limits, dependency/tooling complexity, and ephemeral storage constraints. Storing results in a Lambda layer is not a durable, queryable, auditor-friendly retention mechanism and does not provide a navigable JUnit report UI. This approach would effectively require building and maintaining custom CI/reporting plumbing.
Core Concept: This question tests CI build-and-test automation with AWS developer tools, specifically AWS CodeBuild’s native test reporting and scalable, managed build execution integrated with AWS CodeCommit.

Why the Answer is Correct: Option C directly satisfies all requirements: trigger on each commit to main/develop, build in a Linux environment, run unit tests, generate a navigable JUnit-style report without building a custom UI, retain reports for 60 days, and support at least 8 concurrent builds. CodeBuild integrates with CodeCommit via webhooks (or EventBridge) to start builds automatically per commit. CodeBuild Report Groups provide first-class test report ingestion and visualization in the AWS console for common formats like JUnit XML, giving auditors a built-in UI to browse test suites, cases, pass/fail, and trends.

Key AWS Features:
1) CodeBuild managed Linux environments: choose standard images (e.g., aws/codebuild/standard) and define commands in buildspec.yml.
2) Test reporting (Report Groups): configure the reports section in buildspec.yml to collect JUnit XML; results are viewable in CodeBuild without custom tooling.
3) Central retention and auditor access: export artifacts (including raw JUnit XML and any HTML) to Amazon S3; apply an S3 Lifecycle rule to expire objects after 60 days. Use IAM policies (and optionally S3 Object Lock/WORM if required) for auditor read-only access.
4) Concurrency: CodeBuild supports parallel builds; ensure account build concurrency is set to at least 8 (request a quota increase if needed) and select appropriate compute types.

Common Misconceptions: Teams often assume CodeDeploy is for testing; it is primarily for deployment orchestration and does not provide rich, navigable unit test reporting. Others try to assemble Lambda-based build systems, but Lambda is not suited for full builds (runtime limits, tooling, storage) and would require custom report hosting.

Exam Tips: When you see “JUnit-style report,” “viewable without custom UI,” and “retain for X days,” think CodeBuild Report Groups + S3 lifecycle. For “on every commit,” look for CodeCommit webhook/EventBridge triggers. For “N concurrent builds,” remember CodeBuild concurrency quotas and scaling are managed, not self-hosted.
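The 60-day retention step can be sketched as an S3 Lifecycle rule built in Python; the bucket name and report prefix are hypothetical, and the rule shape follows the S3 `PutBucketLifecycleConfiguration` API.

```python
def report_retention_rule(prefix="test-reports/", days=60):
    """Build one S3 Lifecycle rule that expires exported test reports after N days."""
    return {
        "ID": f"expire-{prefix.rstrip('/')}-{days}d",
        "Filter": {"Prefix": prefix},        # only objects under this prefix
        "Status": "Enabled",
        "Expiration": {"Days": days},        # compliance window from the question
    }

def apply_retention(bucket):
    import boto3  # lazy import; running this requires AWS credentials
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={"Rules": [report_retention_rule()]},
    )
```

CodeBuild's own Report Groups also have a built-in retention setting; the S3 rule covers the raw JUnit XML artifacts exported for auditors.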
An esports platform is building a real-time global leaderboard service where lookups for the most frequently queried 1% of player profiles must return in microseconds (p95 < 1 ms), the service must sustain up to 45,000 read requests per second and 1,500 write operations per second, and the cache must be updated immediately whenever a profile score is created, updated, or deleted so clients never see stale data; which solution will meet these requirements with minimal operational overhead?
DynamoDB + DAX is purpose-built for microsecond read latency at very high RPS with minimal ops. DAX is a managed, in-memory cache that is DynamoDB API-compatible and supports read-through and write-through behavior. If the application performs writes via the DAX endpoint, DAX updates/invalidates cached items so hot-profile lookups stay consistent without implementing custom invalidation logic.
RDS + ElastiCache for Redis with lazy loading can achieve low latency, but lazy loading inherently risks stale reads (cache is refreshed on demand or via TTL). Meeting “never see stale data” requires explicit write-through updates or invalidation on every write/delete, plus careful handling of race conditions. Operational overhead is higher (cache-aside logic, invalidation, potential thundering herd) than using DAX with DynamoDB.
An in-process memory cache with lazy loading is operationally risky for a global, scaled service. Each application instance has its own cache, so updates/deletes won’t propagate instantly across the fleet, causing stale data. Achieving coherency would require a distributed invalidation mechanism (pub/sub, messaging) and careful coordination, increasing complexity and still not guaranteeing microsecond p95 across all instances.
DAX only works with DynamoDB; it cannot accelerate Amazon RDS. Also, “configure a TTL on the DAX cluster” is not how DAX consistency is achieved—TTL-based expiration would still allow stale reads until expiry and does not satisfy immediate update/delete requirements. This option is architecturally invalid and mixes incompatible services.
Core Concept: This question tests ultra-low-latency read optimization and cache coherency for a high-throughput key-value workload. The key services are Amazon DynamoDB for scalable reads/writes and DynamoDB Accelerator (DAX) for microsecond in-memory caching with minimal operational overhead.

Why the Answer is Correct: Option A (DynamoDB + DAX) best fits the requirement that the hottest ~1% of profiles be returned in microseconds (p95 < 1 ms) while sustaining ~45,000 reads/sec and 1,500 writes/sec. DAX is a fully managed, DynamoDB-compatible, in-memory cache designed specifically to reduce read latency from single-digit milliseconds to microseconds for read-heavy workloads. Importantly, DAX provides write-through behavior: when the application writes to DynamoDB via the DAX endpoint, DAX updates/invalidates cached items so subsequent reads do not return stale data. This aligns with “updated immediately whenever created/updated/deleted” and “clients never see stale data,” assuming the application uses DAX for both reads and writes.

Key AWS Features:
- DAX read-through and write-through caching: reads are served from cache when possible; writes through DAX keep cache entries consistent.
- API compatibility: DAX uses the DynamoDB API, minimizing code/ops changes.
- DynamoDB scaling: on-demand or provisioned capacity with auto scaling can meet 45k RPS reads and 1.5k WPS writes; partition key design is critical to avoid hot partitions.
- Minimal ops: DAX is managed (patching, replication within a cluster), simpler than self-managed cache invalidation logic.

Common Misconceptions: Many assume Redis is always the fastest option; however, the requirement “never see stale data” breaks common lazy-loading patterns because they inherently allow stale reads until refresh/expiry. In-process caches also create per-instance inconsistency and operational complexity (invalidation across a fleet).

Exam Tips: When you see “microseconds” + “DynamoDB workload” + “minimal operational overhead,” think DAX. Also watch for wording about cache coherency: “immediately updated” points to write-through/managed invalidation rather than TTL/lazy loading. DAX is for DynamoDB only; it does not sit in front of RDS.
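The coherency distinction that decides this question can be shown with a toy in-memory model (this is an illustration of the two caching strategies, not the DAX API): write-through keeps the cache consistent on every write, while lazy loading can serve stale data until expiry or invalidation.

```python
class WriteThroughCache:
    """Toy model of write-through behavior: every write updates both the
    backing store and the cache, so reads never observe stale data."""
    def __init__(self):
        self.store = {}   # stands in for the DynamoDB table
        self.cache = {}   # stands in for the managed in-memory cache

    def write(self, key, value):
        self.store[key] = value
        self.cache[key] = value       # cache updated on the write path

    def read(self, key):
        if key in self.cache:         # cache hit
            return self.cache[key]
        value = self.store.get(key)   # miss: read through to the store
        if value is not None:
            self.cache[key] = value
        return value

class LazyLoadingCache(WriteThroughCache):
    """Same reads, but writes skip the cache: a previously cached entry
    stays stale until it expires or is explicitly invalidated."""
    def write(self, key, value):
        self.store[key] = value       # cache is NOT touched
```

A read after an update returns the new score under write-through, but the old cached score under lazy loading, which is exactly what "clients never see stale data" rules out.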
A reporting API backed by Amazon RDS for PostgreSQL serves read-only analytics on a 5-year dataset using complex SQL views over ~250 million rows, and after traffic increased from 300 to 2,000 requests per second for the same parameterized report endpoint, average query latency rose from 120 ms to 1.2 s while RDS CPU remains above 85%. The underlying tables are batch-updated once per day at 02:00 UTC, most requests (≥95%) are identical or cacheable for 24 hours, and the team cannot change the database schema or migrate to a different database engine. A developer must improve read performance and minimize management overhead without modifying the database structure; which approach best meets these requirements?
Migrating to DynamoDB violates key constraints: the team cannot change the database engine and cannot modify the database structure. Moving a 5-year relational analytics dataset with complex SQL views to DynamoDB would require significant re-modeling, query rewrites, and likely pre-aggregation. It also increases project risk and time-to-value compared to caching, and it is not “minimal management overhead” for this scenario.
ElastiCache for Redis is the best fit: it is managed, fast, and ideal for caching expensive, repetitive read results. With ≥95% of requests identical or cacheable for 24 hours and data updated once daily, a 24-hour TTL plus invalidation after the 02:00 UTC batch update yields high cache hit rates and large reductions in RDS CPU. This improves latency without schema changes or database migration.
Memcached on EC2 can cache responses, but it is self-managed: you must handle instance sizing, scaling policies, patching, failures, and cache warm-up behavior. This conflicts with the requirement to minimize management overhead. Additionally, ElastiCache provides a managed caching layer with better operational simplicity and built-in features (monitoring, maintenance) compared to running Memcached yourself on EC2.
DynamoDB Accelerator (DAX) only works in front of Amazon DynamoDB tables to accelerate DynamoDB reads. It cannot be placed in front of Amazon RDS for PostgreSQL and does not accelerate SQL queries or views. This option is a common exam trap: DAX is not a generic database cache and is not compatible with RDS engines.
Core Concept: This question tests read-scaling and latency reduction for Amazon RDS using an external cache layer. When the workload is highly repetitive (≥95% identical/cacheable) and the source data changes on a predictable schedule (daily batch), application-level caching is the most effective optimization with minimal operational overhead.

Why the Answer is Correct: Adding Amazon ElastiCache for Redis to cache the report results with a 24-hour TTL directly targets the bottleneck: repeated execution of complex SQL views over ~250M rows. With traffic rising to 2,000 RPS, the database CPU stays >85% and latency increases 10x, indicating the DB is saturated by redundant reads. Caching the computed report response (or the query result set) allows the API to serve most requests from Redis in sub-millisecond to single-digit millisecond latency, dramatically reducing RDS CPU and stabilizing performance. Because tables update once per day at 02:00 UTC, the cache can be invalidated (flush keys / versioned keys) after the batch update, ensuring correctness without schema changes.

Key AWS Features / Best Practices:
- ElastiCache for Redis is a managed in-memory data store (patching, monitoring, failover options) with low management overhead compared to self-managed caching.
- Use a deterministic cache key derived from the report parameters (the endpoint is parameterized but mostly identical).
- Set TTL to 24 hours and perform explicit invalidation after the 02:00 UTC load (or use cache versioning: include a “data_version” prefix that changes daily).
- Consider Redis cluster mode and read replicas (where applicable) to scale throughput; integrate with CloudWatch metrics (CPU, evictions, hit rate).

Common Misconceptions:
- “Just add read replicas” is not offered here and may still be expensive if every request runs the same heavy view; caching avoids repeated computation.
- DynamoDB/DAX are not drop-in accelerators for RDS/PostgreSQL queries; they require data modeling changes and do not accelerate SQL views.
- Self-managed Memcached on EC2 increases operational burden and is less aligned with the “minimize management overhead” requirement.

Exam Tips: When you see (1) read-only analytics, (2) repeated identical queries, (3) predictable refresh windows, and (4) constraints against schema/engine changes, the exam-friendly pattern is “cache the results” using a managed service (ElastiCache). Also remember DAX is only for DynamoDB, not RDS.
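The versioned-key pattern described above can be sketched as follows; the key layout and the `compute` callback are illustrative, and `r` is assumed to be a `redis.Redis`-style client with `get`/`set` methods.

```python
import hashlib
import json

def cache_key(report, params, data_version):
    """Deterministic key: same parameters + same daily data version => same key.
    Rolling data_version after the 02:00 UTC batch load effectively
    invalidates the previous day's entries without an explicit flush."""
    canon = json.dumps(params, sort_keys=True)  # order-independent parameters
    digest = hashlib.sha256(canon.encode()).hexdigest()[:16]
    return f"report:{report}:v{data_version}:{digest}"

def get_report(r, report, params, data_version, compute, ttl=86400):
    """Cache-aside read with a 24-hour TTL."""
    key = cache_key(report, params, data_version)
    hit = r.get(key)
    if hit is not None:
        return json.loads(hit)                 # served from Redis, no DB work
    result = compute(params)                   # expensive SQL view runs only on a miss
    r.set(key, json.dumps(result), ex=ttl)     # expire after 24 hours
    return result
```

Because the key embeds `data_version`, the first request after the nightly load simply misses and recomputes; yesterday's entries age out via TTL.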
A media analytics company runs a Node.js API on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer; traffic fluctuates from 1,000 requests per second at night to 12,000 requests per second during promotions, causing CPU spikes and occasional memory pressure. The engineering team must gather per-instance, 1-minute OS-level metrics (including memory utilization, swap usage, disk I/O, and file system utilization) within the next two weeks to right-size the fleet. The team also needs to monitor custom application metrics such as cacheHitRate and queueDepth emitted by the service, without making significant code changes. Which solution will meet these requirements?
Incorrect. AWS X-Ray focuses on distributed tracing (latency breakdowns, service maps, traces/segments) and can provide insights into request paths and downstream calls. It does not collect OS-level metrics such as memory utilization, swap usage, filesystem utilization, or disk I/O at 1-minute granularity. X-Ray is also not the standard mechanism for ingesting arbitrary custom metrics such as cacheHitRate and queueDepth into CloudWatch.
Correct. The Amazon CloudWatch agent can be installed on EC2 instances to publish 1-minute OS-level metrics that are not available by default (memory, swap, disk, filesystem). It also supports collecting custom application metrics via StatsD or collectd, enabling the service to emit cacheHitRate and queueDepth with minimal changes (often configuration-only if a metrics library already exists). This meets the two-week timeline and right-sizing needs.
Incorrect. Modifying the application to publish metrics via the AWS SDK could work for custom application metrics, but it requires code changes and does not inherently provide OS-level metrics like memory, swap, and filesystem utilization. You would still need an agent or additional host instrumentation to gather those OS metrics. This approach increases development effort and risk compared to using the CloudWatch agent’s built-in capabilities.
Incorrect. AWS CloudTrail records AWS API calls for governance, compliance, and auditing (who did what, when, from where). It does not capture per-instance OS performance metrics or custom application metrics. CloudTrail logs are useful for security investigations and change tracking, not for right-sizing based on CPU/memory/disk utilization or for monitoring application-level KPIs like cacheHitRate and queueDepth.
Core Concept: This question tests Amazon CloudWatch observability for EC2-based workloads: collecting OS-level metrics beyond default EC2 metrics, and ingesting custom application metrics with minimal code changes.

Why the Answer is Correct: By default, EC2 publishes basic CloudWatch metrics (CPUUtilization, NetworkIn/Out, DiskRead/Write ops for some instance types/EBS), but it does not publish memory utilization, swap usage, file system utilization, or detailed disk I/O at the OS level. The Amazon CloudWatch agent is designed to run on instances and push these additional system-level metrics to CloudWatch at a configurable interval (including 1-minute). It also supports collecting custom metrics via StatsD and collectd, which allows the Node.js service to emit metrics (e.g., cacheHitRate, queueDepth) to a local daemon/UDP endpoint with minimal or no significant code changes (often just configuration of an existing metrics library).

Key AWS Features: CloudWatch agent supports the “metrics” section for memory, swap, disk, and filesystem, and can be configured for 60-second collection. It can be deployed uniformly across an Auto Scaling group using user data, a launch template, or AWS Systems Manager. Custom metrics can be ingested through StatsD/collectd integration, enabling standardized metric namespaces, dimensions (e.g., AutoScalingGroupName, InstanceId), and CloudWatch Alarms/Dashboards for right-sizing decisions.

Common Misconceptions: X-Ray is for distributed tracing and request-level performance analysis, not OS metrics collection. CloudTrail is for API auditing, not performance telemetry. Writing custom code with the AWS SDK can publish custom metrics, but it does not solve OS-level memory/filesystem metrics without additional host instrumentation and creates unnecessary development effort and risk given the two-week timeline.

Exam Tips: When you see requirements like memory, swap, disk, and filesystem utilization on EC2, the expected answer is “CloudWatch agent” (or sometimes SSM + CloudWatch agent). For custom metrics with minimal code changes, look for StatsD/collectd support or embedded metric format patterns; avoid X-Ray/CloudTrail unless the question is explicitly about tracing or auditing.
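An agent configuration along these lines would cover the metrics in the question; it is expressed here as a Python dict mirroring the amazon-cloudwatch-agent.json schema. The measurement names follow the agent's documented conventions but should be verified against the CloudWatch agent reference before deploying.

```python
import json

AGENT_CONFIG = {
    "metrics": {
        # Tag every datapoint with fleet identity for per-instance analysis
        "append_dimensions": {
            "InstanceId": "${aws:InstanceId}",
            "AutoScalingGroupName": "${aws:AutoScalingGroupName}",
        },
        "metrics_collected": {
            "mem":  {"measurement": ["mem_used_percent"], "metrics_collection_interval": 60},
            "swap": {"measurement": ["swap_used_percent"], "metrics_collection_interval": 60},
            "disk": {"measurement": ["used_percent"], "resources": ["*"],
                     "metrics_collection_interval": 60},
            "diskio": {"measurement": ["reads", "writes", "read_bytes", "write_bytes"],
                       "metrics_collection_interval": 60},
            # StatsD listener: the Node.js app emits cacheHitRate/queueDepth to
            # localhost:8125 via any StatsD client, a config-level change only
            "statsd": {"service_address": ":8125", "metrics_collection_interval": 60},
        },
    }
}

def agent_config_json():
    """Serialize the config in the JSON form the agent expects on disk."""
    return json.dumps(AGENT_CONFIG, indent=2)
```

The same file can be pushed fleet-wide via a launch template user-data script or an SSM Run Command, which fits the two-week timeline.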
A fintech team uses AWS CloudFormation to deploy an EventBridge-triggered AWS Lambda function whose code is a zipped artifact stored in Amazon S3 (bucket artifacts-567890-us-east-1 with versioning enabled, key builds/processor.zip, last version ID 3Lg8xZ2), but every time they run a stack update while keeping the same S3 object key, the Lambda function code does not change; what should they do to ensure CloudFormation applies the new code during updates?
Creating a new Lambda alias does not update the function’s code package. Aliases are pointers to published versions and are used for traffic shifting, stable ARNs, and deployment strategies (e.g., blue/green with CodeDeploy). If the underlying function code is not updated (or a new version is not published), a new alias won’t cause CloudFormation to pull a new ZIP from S3.
This is correct because CloudFormation needs a template-detectable change to the Lambda Code property to trigger a code update. With S3 versioning enabled, specifying the latest S3ObjectVersion (e.g., a new version ID) ensures CloudFormation fetches that exact artifact. Alternatively, changing the S3Key (unique per build) also forces an update and is a common CI/CD best practice.
Uploading to a different bucket is unnecessary and does not inherently solve the problem unless the template is also updated to reference that new bucket/key/version. The core issue is CloudFormation not seeing a change in the Lambda Code definition. Changing buckets adds operational complexity and is not the recommended mechanism for deterministic deployments.
A code-signing configuration enforces that Lambda only deploys code signed by trusted publishers, improving supply-chain security. It does not force CloudFormation to redeploy code when the S3 key remains the same. Even with code signing, CloudFormation still requires a change in S3Key or S3ObjectVersion (or equivalent) to trigger an update.
Core Concept: This question tests how AWS CloudFormation detects and applies updates to AWS::Lambda::Function code when the deployment package is stored in Amazon S3. CloudFormation updates resources based on changes in the template (or parameters) and, for Lambda code, it needs a detectable change in the Code property (S3Key and/or S3ObjectVersion) to trigger a code update.

Why the Answer is Correct: If the team uploads a new ZIP to the same S3 key (builds/processor.zip) and then runs a stack update without changing the template, CloudFormation often sees no change to the Lambda function’s Code definition. With S3 versioning enabled, the correct way to force CloudFormation to fetch the new artifact is to either (1) change the S3 object key (e.g., builds/processor-<buildId>.zip) or (2) specify the new S3ObjectVersion value (e.g., 3Lg8xZ2 or the latest version ID) in the template. Either approach changes the template’s Code property, which causes CloudFormation to update the Lambda function code.

Key AWS Features: CloudFormation supports Lambda code from S3 via Code: { S3Bucket, S3Key, S3ObjectVersion }. S3 versioning provides immutable references to specific artifacts, enabling repeatable deployments and safe rollbacks. A best practice is to use unique, content-addressed keys (or CI/CD-injected version IDs) so deployments are deterministic and auditable.

Common Misconceptions: It’s tempting to think that simply overwriting the object at the same key will be detected automatically. However, CloudFormation is template-driven; if the template doesn’t change, it may not redeploy the code. Another misconception is that operational features like aliases or code signing will force a refresh; those address traffic shifting and integrity, not change detection.

Exam Tips: For Lambda + CloudFormation, remember: to update code from S3, you must change S3Key or S3ObjectVersion (or use packaging tools like SAM/CDK that auto-generate unique keys). When you see “same S3 key” and “code doesn’t change,” the fix is almost always “update the key or version in the template.”
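Both fixes can be sketched as small helpers that a CI pipeline might use when rendering the template; the helper names are illustrative, and the Code property shape follows the AWS::Lambda::Function resource.

```python
import hashlib

def code_property(bucket, key, object_version=None):
    """Build the AWS::Lambda::Function Code block; including S3ObjectVersion
    (or a per-build key) gives CloudFormation a detectable template change."""
    code = {"S3Bucket": bucket, "S3Key": key}
    if object_version:
        code["S3ObjectVersion"] = object_version
    return code

def content_addressed_key(prefix, artifact_bytes):
    """Alternative fix: derive a unique S3 key per build from the artifact
    contents, so every new build changes S3Key in the template."""
    digest = hashlib.sha256(artifact_bytes).hexdigest()[:12]
    return f"{prefix}/processor-{digest}.zip"
```

Either the version ID from the `put_object` response or the content-addressed key is injected into the template before `update_stack` runs, guaranteeing a code update.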
A logistics company receives telematics events from 8,000 delivery vans at a peak of 1,200 events per second, averaging 1.5 KB per event. Three EC2-based processing applications (anomaly detection, billing, and live dashboards) must consume the same event stream concurrently in near real time with sub-second latency. Each processor must be able to pause during deployments or crashes and then resume from its last checkpoint without data loss. The team plans to add two more processors next month and wants to avoid duplicating the data pipeline for each consumer. Which AWS service should the developer use to ingest and fan out the stream to meet these requirements?
Amazon SQS is a message queue optimized for decoupling producers and consumers with at-least-once delivery, but it uses a competing-consumer model: once a message is consumed and deleted, other consumers cannot read it. You could duplicate messages to multiple queues or use SNS-to-SQS fanout, but that explicitly duplicates the pipeline per consumer and complicates replay/checkpointing semantics compared to a true stream.
Amazon Kinesis Data Firehose is a fully managed delivery service that buffers and delivers streaming data to destinations like S3, Redshift, and OpenSearch. It is not designed for multiple EC2-based applications to concurrently consume the same stream with sub-second latency and independent checkpoint/replay. Firehose focuses on delivery with buffering (seconds to minutes) and transformation, not multi-consumer real-time processing.
Amazon EventBridge is an event bus for routing events to targets with filtering and SaaS integration. While it supports multiple targets, it is not a streaming service with shard-based throughput, ordered records, and consumer-controlled checkpoints. EventBridge also does not provide the same stream replay model (per-consumer resume from sequence checkpoints) required for crash recovery and near-real-time stream processing at scale.
Amazon Kinesis Data Streams is purpose-built for real-time streaming ingestion and processing with multiple concurrent consumers. It provides low-latency reads, durable retention (24 hours to 365 days), and supports independent consumer progress via sequence numbers and checkpointing (commonly using KCL + DynamoDB). Enhanced Fan-Out can give each consumer dedicated throughput, making it ideal for adding more processors without duplicating ingestion.
Core Concept: This question tests real-time streaming ingestion with multiple concurrent consumers, low latency, and independent replay/checkpointing without duplicating pipelines. The AWS service designed for this is Amazon Kinesis Data Streams (KDS), which supports durable stream storage and multiple consumer applications reading the same data.

Why the Answer is Correct: Kinesis Data Streams ingests events at high throughput (1,200 events/sec at ~1.5 KB is ~1.8 MB/s) and provides sub-second read latency for near-real-time processing. Crucially, multiple processing applications can consume the same stream concurrently. Each application can maintain its own checkpoint (typically via KCL leases/checkpointing in DynamoDB or custom checkpointing) so if a processor pauses during deployment or crashes, it can resume from the last processed sequence number without data loss (within the stream retention window). Adding new processors next month is straightforward: they attach as additional consumers to the same stream rather than duplicating ingestion.

Key AWS Features:
- Multiple consumers: Standard polling consumers or Enhanced Fan-Out (EFO) for dedicated throughput per consumer and low latency.
- Replayability: Configurable retention (24 hours up to 365 days) enables reprocessing and recovery.
- Ordering and durability: Records are ordered per shard and replicated across multiple AZs.
- Scaling: Increase shard count (or use on-demand mode) to handle throughput growth.
- Checkpointing: Kinesis Client Library (KCL) commonly stores checkpoints in DynamoDB, enabling resume-after-failure semantics.

Common Misconceptions: SQS is often chosen for decoupling, but it is a queue (competing consumers) rather than a stream with independent replays for multiple apps. Firehose is for delivery to destinations (S3/Redshift/OpenSearch) and is not intended for multiple near-real-time consumers with replay. EventBridge is an event bus for routing/integration; it does not provide stream-style retention with per-consumer checkpoints and sub-second streaming semantics.

Exam Tips: When you see “multiple applications must read the same events,” “near real time/sub-second,” and “resume from last checkpoint,” think Kinesis Data Streams (or Kafka/MSK). If the requirement includes “fan-out to many consumers without duplicating pipelines,” remember KDS + Enhanced Fan-Out and per-consumer checkpointing as the canonical AWS-native answer.
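The provisioning math for this scenario can be sketched as follows, assuming the standard per-shard write quotas of 1,000 records/s and 1 MB/s (on-demand mode would instead scale shards automatically).

```python
import math

SHARD_RECORDS_PER_SEC = 1000   # per-shard write limit, records/s (standard quota)
SHARD_INGEST_MB_PER_SEC = 1.0  # per-shard write limit, MB/s (standard quota)

def min_shards(records_per_sec, avg_record_kb):
    """Minimum shard count to absorb the write load. Read throughput scales
    separately: with Enhanced Fan-Out each registered consumer gets its own
    2 MB/s per shard, so adding processors does not require more shards."""
    mb_per_sec = records_per_sec * avg_record_kb / 1024
    return max(
        math.ceil(records_per_sec / SHARD_RECORDS_PER_SEC),
        math.ceil(mb_per_sec / SHARD_INGEST_MB_PER_SEC),
    )
```

For the question's peak of 1,200 events/s at 1.5 KB (~1.76 MB/s), both the record-rate and byte-rate constraints land on 2 shards, with all five eventual consumers reading the same stream.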
A logistics company uses AWS CodeDeploy to perform in-place deployments of version 2.3.7 to 120 Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer with MinimumHealthyHosts set to 95% and an overall deployment timeout of 15 minutes; during a rollout that aborted at 6 minutes, the CodeDeploy console showed the error 'HEALTH_CONSTRAINTS: The overall deployment failed because too many individual instances failed or too few healthy instances were available,' which two issues could explain this failure?
Correct. The CodeDeploy agent must be installed and running on every target EC2 instance for in-place deployments. If the agent is stopped or missing on multiple instances, those instances cannot run lifecycle events or report success/failure. CodeDeploy will mark them as failed or time out waiting for status, quickly reducing the number of healthy instances and triggering HEALTH_CONSTRAINTS when the minimum healthy hosts threshold is breached.
Incorrect. The CloudWatch agent is used for publishing OS/application metrics and logs to Amazon CloudWatch. CodeDeploy does not require the CloudWatch agent to perform deployments or evaluate deployment success. While CloudWatch can help with troubleshooting (e.g., viewing logs/metrics during a rollout), its absence would not directly cause CodeDeploy to fail with HEALTH_CONSTRAINTS.
Incorrect. If the developer’s IAM user lacked codedeploy:CreateDeployment, the deployment would fail to start and would be denied immediately by IAM. The scenario describes a rollout that began and then aborted at 6 minutes due to health constraints, which indicates instance-level deployment execution/health issues rather than a caller authorization problem.
Correct. EC2 instances need an IAM instance profile with permissions to retrieve the application revision (commonly from Amazon S3, possibly with KMS decrypt) and to interact with CodeDeploy as required. If these permissions are missing, instances will fail early during DownloadBundle or related steps. Multiple instance failures reduce the healthy host count below 95%, causing CodeDeploy to abort with HEALTH_CONSTRAINTS.
Incorrect. There is no universal requirement to enable a special “CodeDeploy health checks” feature for every deployment group. CodeDeploy health behavior is driven by deployment configurations (minimum healthy hosts, batch size) and optional integrations like load balancers and alarms. HEALTH_CONSTRAINTS is typically caused by real instance failures or insufficient healthy capacity, not by a missing mandatory feature toggle.
Core Concept: This question tests AWS CodeDeploy in-place deployments with health constraints. In an Auto Scaling group behind an Application Load Balancer (ALB), CodeDeploy must keep a minimum percentage of instances healthy while it deploys. The error HEALTH_CONSTRAINTS indicates CodeDeploy stopped because it could not maintain the configured minimum healthy hosts or too many instances reported deployment failures.

Why the Answer is Correct: With 120 instances and MinimumHealthyHosts set to 95%, CodeDeploy must keep at least 114 instances healthy during the deployment, meaning only up to 6 instances can be simultaneously unhealthy, out of service, or failed (depending on how health is evaluated for the deployment configuration). If the CodeDeploy agent is not running on a meaningful number of instances (A), those instances cannot execute lifecycle hooks (e.g., DownloadBundle, Install, AfterInstall) and will quickly be marked as failed or never transition to success, reducing the healthy count below the threshold. Similarly, if the EC2 instances’ IAM instance profile lacks required permissions (D)—commonly s3:GetObject for the revision bundle, KMS decrypt if the bundle is encrypted, and permissions to communicate status—instances will fail early when trying to fetch the application revision or report progress. Multiple early failures can rapidly violate the 95% healthy constraint, causing an abort within minutes (as observed at 6 minutes), well before the 15-minute overall timeout.

Key AWS Features: CodeDeploy relies on the CodeDeploy agent on each instance and on an instance profile with appropriate permissions. Health constraints are enforced via the deployment configuration (e.g., minimum healthy hosts) and, when using load balancers, by coordinating deregistration/registration and observing instance health. In-place deployments are especially sensitive because instances are updated in batches; if too many in a batch fail, the minimum healthy threshold is breached.

Common Misconceptions: The CloudWatch agent (B) is unrelated to CodeDeploy’s ability to deploy. A missing IAM permission to create the deployment (C) would prevent the deployment from starting at all, not cause a mid-deployment HEALTH_CONSTRAINTS failure. There is no mandatory “dedicated CodeDeploy health checks” feature required for every deployment group (E); health constraints are configured through deployment configurations and (optionally) load balancer integration.

Exam Tips: When you see HEALTH_CONSTRAINTS, think: “too many instance-level failures or not enough healthy capacity to continue.” Immediately check CodeDeploy agent health, instance profile permissions, and the minimum healthy host percentage versus fleet size. High minimum healthy percentages (like 95%) leave very little room for failures in large fleets.
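The instance-profile permissions called out in option D can be sketched as a minimal IAM policy. The bucket name, Region, account ID, and key ID below are placeholders, not values from the scenario:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DownloadRevisionBundle",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:GetObjectVersion"],
      "Resource": "arn:aws:s3:::example-codedeploy-revisions/*"
    },
    {
      "Sid": "DecryptRevisionIfKmsEncrypted",
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-EXAMPLE"
    }
  ]
}
```

If either statement is missing, the DownloadBundle lifecycle event fails on every instance that uses this profile, which is exactly the fleet-wide failure pattern that trips HEALTH_CONSTRAINTS.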
Want to practice all the questions on the go?
Download Cloud Pass for free – with practice tests, progress tracking, and more.
A fintech company runs its exchange-rate normalization microservice on Amazon ECS using AWS Fargate and emits up to 20,000 JSON log events per minute to CloudWatch Logs where each event includes service='rates-pipeline', action='normalize', and an integer field latency_ms (milliseconds); over the selected time range, which Amazon CloudWatch Logs Insights query will return the average, slowest (max), and fastest (min) normalization latency aggregated into 1-minute intervals for events where service='rates-pipeline' and action='normalize'?
Correct. It filters on both required JSON fields (service and action) and computes avg(latency_ms), min(latency_ms) as fastest, and max(latency_ms) as slowest. It groups results into 1-minute buckets using by bin(1m), matching the requested aggregation interval and metrics. Including fields @timestamp, service, action, latency_ms is fine and commonly used for clarity, though only latency_ms is required for stats.
Incorrect. It only filters on action="normalize" and does not restrict service="rates-pipeline", so it could include normalize events from other services. It also aggregates by bin(1s) (1-second intervals) rather than the required 1-minute intervals, producing far more granular and noisy results than requested. Even though the stats functions are correct, the filter and bin size do not meet the question requirements.
Incorrect. While it correctly filters on service and action and uses bin(1m), it swaps the meanings of fastest and slowest by labeling max(latency_ms) as fastest_ms and min(latency_ms) as slowest_ms. In latency measurements, the fastest request corresponds to the minimum latency, and the slowest corresponds to the maximum latency. This is a common exam trick focused on careful interpretation of min/max.
Incorrect in context. It uses parse @message "normalize completed in * ms" to extract latency_ms, which is only necessary if the logs are unstructured text. The question states each JSON log event already includes an integer field latency_ms, so parsing is unnecessary and may fail if the message format differs. It also does not filter on service and action, so it could include unrelated events that match the parsed pattern.
Core Concept: This question tests Amazon CloudWatch Logs Insights query construction: selecting fields, filtering JSON log events by key/value pairs, and using statistical aggregation functions (avg/min/max) grouped into time bins (bin()). Logs Insights automatically extracts top-level JSON fields (like service, action, latency_ms) when logs are valid JSON, enabling direct filtering and aggregation.

Why the Answer is Correct: Option A correctly:
1) References the JSON fields (service, action, latency_ms).
2) Filters to only events where service="rates-pipeline" AND action="normalize".
3) Uses stats avg(), min(), and max() over latency_ms.
4) Aggregates results into 1-minute intervals using by bin(1m).
This exactly matches the requirement: average, fastest (min), and slowest (max) normalization latency per minute over the selected time range.

Key AWS Features:
- CloudWatch Logs Insights supports powerful ad-hoc querying over CloudWatch Logs without exporting data.
- JSON field discovery lets you filter with filter service="..." and action="..." when the events are structured.
- stats functions (avg, min, max, pct, count) compute metrics from log fields.
- bin(1m) groups events into fixed 1-minute buckets aligned to timestamps, which is a common pattern for operational dashboards and troubleshooting latency spikes.
These align with AWS Well-Architected Operational Excellence principles: use structured logging and centralized analysis to detect anomalies and trends.

Common Misconceptions:
- Mixing up the min/max labels (fastest vs. slowest) is a frequent exam trap.
- Using the wrong bin size (1s instead of 1m) changes the aggregation granularity and can produce noisy results.
- Assuming you must parse @message even when JSON fields already exist; parsing is only needed when latency is embedded in unstructured text.

Exam Tips:
- If the question states the log event “includes” fields, prefer direct field references over parse.
- Verify the time aggregation requirement (bin(1m) vs. bin(5m), etc.).
- Double-check the semantic mapping: fastest = minimum latency, slowest = maximum latency.
- In Logs Insights, the grouping clause is “by bin(1m)” (optionally with additional dimensions), and stats is the right operator for aggregations.
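Assembled from the pieces above, the winning query (option A) reads like this; the result-column aliases are illustrative:

```
fields @timestamp, service, action, latency_ms
| filter service = "rates-pipeline" and action = "normalize"
| stats avg(latency_ms) as avg_latency_ms,
        min(latency_ms) as fastest_ms,
        max(latency_ms) as slowest_ms
        by bin(1m)
```

Swapping the aliases on min() and max(), or changing bin(1m) to bin(1s), reproduces the distractor options.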
A fintech startup runs a rules engine on Amazon ECS with AWS Fargate in two AWS accounts (prod and audit) processing up to 3,000 events per minute, and when an event fails validation the container must call a third-party incident REST API that requires retrieving a bearer access token at runtime, and the token must be encrypted at rest and in transit and be accessible to workloads in both accounts with the least management overhead, which solution meets these requirements?
Systems Manager Parameter Store SecureString can store sensitive values, and advanced parameters can be shared across accounts in some scenarios (for example, through AWS Resource Access Manager). However, this option specifies an AWS managed KMS key, and cross-account use of the encrypted value requires KMS key-policy permissions that cannot be granted on an AWS managed key. Because the workloads in both accounts must retrieve and decrypt the same token, a customer managed KMS key is the correct pattern. Even aside from that, Secrets Manager is the more purpose-built service for runtime secret retrieval with lower operational overhead.
DynamoDB plus KMS can securely store encrypted data, but it is not a managed secrets solution. You would need to implement your own retrieval, caching, rotation strategy, and careful IAM scoping for DynamoDB reads and KMS decrypt. This adds operational and development overhead and increases the chance of mistakes (e.g., logging plaintext, improper key policies). It meets encryption requirements but not “least management overhead.”
Secrets Manager is purpose-built for storing tokens and credentials, encrypts secrets at rest with KMS (including customer managed keys), and uses TLS for retrieval. It supports resource-based policies for cross-account access, enabling one secret to be shared with workloads in both prod and audit accounts. ECS task roles can be granted least-privilege secretsmanager:GetSecretValue (and KMS decrypt as needed), minimizing operational overhead.
S3 with SSE-KMS can encrypt objects at rest and TLS protects in transit, and bucket policies can grant cross-account access. However, S3 is not a secret store: you must manage object formats, retrieval logic, caching, and potential exposure risks (e.g., accidental broader bucket permissions). Using an AWS managed KMS key also reduces control. This is higher overhead and less aligned with best practices than Secrets Manager.
Core Concept: Choose the AWS service that is purpose-built for storing sensitive runtime credentials, supports encryption at rest and in transit, and can be shared across AWS accounts with minimal operational effort. The best fit is AWS Secrets Manager with a customer managed KMS key and a resource-based policy for cross-account access.

Why the Answer is Correct: Secrets Manager is designed specifically for secrets such as bearer tokens and API credentials. It encrypts secrets at rest with KMS, returns them over TLS in transit, and supports cross-account access through resource-based policies. Using a customer managed KMS key is important because cross-account decryption requires explicit key policy control, which AWS managed keys do not provide in the same way.

Key AWS Features: Secrets Manager provides native secret storage, fine-grained IAM permissions, optional rotation support, and direct integration patterns for ECS workloads. A resource policy on the secret can allow task roles in both the prod and audit accounts to retrieve the same secret. A customer managed KMS key allows explicit decrypt permissions for those cross-account principals.

Common Misconceptions: Parameter Store can store SecureString values and supports parameter sharing in some cases, but it is not the best answer here because the option specifies an AWS managed KMS key, which is unsuitable for the required cross-account decrypt model. DynamoDB and S3 can hold encrypted data, but they are generic storage services and require more custom code and operational handling than a managed secrets service.

Exam Tips: When the requirement mentions tokens, credentials, runtime retrieval, encryption, and least management overhead, prefer Secrets Manager unless the question clearly points to a simpler configuration store. For cross-account encrypted secret access, always verify both the resource policy and the KMS key type and policy model.
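The cross-account pattern can be sketched with a resource policy on the secret; the account ID and role name below are hypothetical:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAuditAccountTasks",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::222233334444:role/audit-ecs-task-role"
      },
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "*"
    }
  ]
}
```

This policy is attached with `aws secretsmanager put-resource-policy`. The audit account's task role also needs `kms:Decrypt` on the customer managed key, granted through the key policy; that grant is the step an AWS managed key cannot support.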
A site reliability engineer using a Windows laptop must run a local PowerShell script to provision an application stack in the ap-northeast-2 Region within 10 minutes, with command-line-only access to a new AWS account that allows programmatic access but no console sign-in; what must the engineer set up on the laptop to perform the deployment?
Incorrect. The AWS CLI does not authenticate with an IAM username and password. Those credentials are used for AWS Management Console sign-in (interactive web login), which the prompt explicitly says is not allowed. For API calls from the CLI, AWS requires request signing with access keys or temporary STS credentials. Even if an IAM user exists, the CLI needs access key material, not the console password.
Incorrect. SSH keys are used to authenticate to resources like EC2 instances (SSH/RDP-related workflows), some Git operations (e.g., CodeCommit via SSH), or to establish secure shell sessions. They do not provide AWS API authentication for the AWS CLI. The AWS CLI requires IAM-based credentials (access keys or STS tokens) to sign API requests, not an SSH keypair.
Correct. Installing the AWS CLI and configuring it with an IAM user access key ID and secret access key enables programmatic access to AWS APIs from the Windows laptop. This matches the requirement of command-line-only access and no console sign-in. The engineer can also set the default region to ap-northeast-2 (or pass `--region`) to ensure the PowerShell provisioning script deploys into the correct Region quickly.
Incorrect. X.509 certificates are not the standard authentication mechanism for AWS CLI usage. Modern AWS API access is typically done with IAM access keys or temporary credentials from AWS STS (often via roles). While certificates appear in some legacy or specialized contexts, they are not what you configure for typical CLI-based provisioning from a laptop, and this option does not align with current best practices or common exam expectations.
Core Concept: This question tests AWS programmatic access from a local machine using command-line tools. Specifically, it focuses on how the AWS CLI authenticates to a new AWS account when console sign-in is not allowed.

Why the Answer is Correct: Because the engineer has “command-line-only access” and the account “allows programmatic access but no console sign-in,” the correct setup is the AWS CLI configured with IAM access keys (access key ID and secret access key). The AWS CLI uses SigV4 request signing with these credentials (or with temporary credentials from STS), not a username/password. With a Windows laptop running a local PowerShell script, installing the AWS CLI and configuring credentials (typically via `aws configure` or environment variables) is the standard and fastest way to enable the script to call AWS APIs in ap-northeast-2 within the 10-minute constraint.

Key AWS Features:
- AWS CLI credential sources: the shared credentials file (~/.aws/credentials), the config file (~/.aws/config), environment variables, or credential_process.
- IAM access keys: long-term programmatic credentials for an IAM user. Best practice is to prefer temporary credentials via IAM roles/STS, but the prompt implies a new account and immediate programmatic access.
- Region targeting: configure the default region (ap-northeast-2) in the CLI config or pass `--region ap-northeast-2` to commands.

Common Misconceptions: Many assume an “IAM username and password” can be used for CLI login. That is for AWS Management Console sign-in and is not used by the AWS CLI for API authentication. Others confuse SSH keys (used for EC2 instance login, CodeCommit SSH, etc.) with AWS API authentication. X.509 certificates were historically used with older mechanisms (e.g., some legacy EC2 API tools) and are not the standard for AWS CLI authentication.

Exam Tips:
- If a question says “programmatic access,” think “access key ID + secret access key” or “STS temporary credentials via a role.”
- If console sign-in is disallowed, username/password is irrelevant.
- SSH keys authenticate to servers/repositories, not to AWS API endpoints.
- Always note Region requirements: set a default region or specify `--region` to avoid deploying to the wrong Region.
(Reference: AWS CLI configuration and credential types in the AWS IAM/AWS CLI documentation; the AWS Well-Architected Security Pillar emphasizes least privilege and avoiding long-term credentials when possible.)
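A minimal sketch of the setup, assuming access keys have already been issued. The key values are placeholders, and on PowerShell the equivalent of `export` is `$env:AWS_DEFAULT_REGION = "..."`:

```shell
# Placeholder credentials for illustration only; never commit real keys.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_DEFAULT_REGION="ap-northeast-2"

# Persistent alternative (writes ~/.aws/credentials and ~/.aws/config):
#   aws configure set aws_access_key_id     "$AWS_ACCESS_KEY_ID"
#   aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"
#   aws configure set region                ap-northeast-2

echo "Deploying to: $AWS_DEFAULT_REGION"   # prints: Deploying to: ap-northeast-2
```

With these variables in place, every CLI call made by the provisioning script is signed with the access keys and targets ap-northeast-2 without needing `--region` on each command.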
A telemedicine provider hosts a latency-sensitive appointment API on AWS Elastic Beanstalk (Amazon Linux 2, load balanced with Auto Scaling) and needs to deploy a new version with zero user-visible downtime while routing exactly 25% of incoming requests to the new version and 75% to the current version for a 15-minute evaluation window before promoting or rolling back; which Elastic Beanstalk deployment policy meets these requirements?
Rolling deployments update instances in batches, keeping some capacity serving traffic while others are updated. This can reduce downtime, but it does not provide an exact, controlled percentage split (25%/75%) between two versions for a fixed evaluation window. During rolling, traffic goes to a mix of updated and non-updated instances based on batch progression, not a stable canary percentage.
Traffic-splitting deployments are purpose-built for canary-style releases in Elastic Beanstalk. They run the new version alongside the old version and use the environment load balancer to route a specified percentage of requests (e.g., 25%) to the new version for a defined evaluation period (e.g., 15 minutes). After evaluation, you can promote to 100% or roll back with no user-visible downtime.
In-place deployments update the existing instances directly (all at once or in a way that can temporarily reduce capacity). This approach can cause brief interruptions or performance degradation, especially for latency-sensitive APIs, and it does not support a controlled 25%/75% traffic split between versions. It’s generally less safe than immutable or traffic-splitting for production releases.
Immutable deployments create a new set of instances with the new version, verify health, and then switch over, which greatly reduces deployment risk and can achieve near-zero downtime. However, immutable is primarily an all-or-nothing cutover model and does not natively provide an exact 25%/75% request routing split for a timed evaluation window. That requirement maps directly to traffic-splitting.
Core Concept: This question tests AWS Elastic Beanstalk deployment policies for achieving zero-downtime releases with controlled, percentage-based request shifting (a canary-style evaluation) in a load-balanced, Auto Scaling environment.

Why the Answer is Correct: Elastic Beanstalk’s traffic-splitting deployment policy is designed specifically to route a defined percentage of incoming requests to a new application version while the rest continues to go to the existing version. It supports an evaluation period (here, 15 minutes) during which you can monitor metrics (latency, 5xx errors, custom health checks) and then either promote the new version to receive 100% of traffic or roll back. This meets both requirements: (1) zero user-visible downtime (because both versions run concurrently behind the load balancer) and (2) an exact 25%/75% traffic distribution during the evaluation window.

Key AWS Features / Configurations: Traffic-splitting deployments work with load-balanced Elastic Beanstalk environments and create a parallel set of instances for the new version. The environment’s load balancer then splits traffic between the old and new instance groups according to the configured percentage. After the evaluation time, Elastic Beanstalk can complete the deployment (shift all traffic) or you can abort/roll back. This aligns with Well-Architected reliability and operational excellence principles by reducing blast radius and enabling safer releases.

Common Misconceptions: Many candidates confuse immutable deployments with traffic shifting. Immutable provides safer deployments by launching new instances and swapping them in, but it does not natively guarantee an exact 25%/75% request split for a timed evaluation window. Rolling and in-place can reduce downtime, but they update instances sequentially or directly, and do not provide precise, controlled canary traffic percentages.

Exam Tips: When you see “route X% of requests to the new version” and “evaluate for N minutes, then promote or roll back,” think canary/traffic shifting. In Elastic Beanstalk, the keyword is “traffic-splitting deployment.” If the question instead emphasizes “new instances first, then swap” without percentage routing, that points to immutable.
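In configuration terms, the policy and split are set through option settings, for example in an `.ebextensions` file. The namespaces and option names below follow the Elastic Beanstalk option reference; treat the snippet as a sketch to verify against the current documentation:

```yaml
# .ebextensions/traffic-splitting.config
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: TrafficSplitting
  aws:elasticbeanstalk:trafficsplitting:
    NewVersionPercent: 25   # 25% of requests go to the new version
    EvaluationTime: 15      # minutes before shifting the remaining traffic
```

After the evaluation time elapses with the new version healthy, Elastic Beanstalk shifts the remaining 75% of traffic to it; an unhealthy evaluation triggers rollback to the original version.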
A fintech startup has eight AWS Lambda functions (Python 3.11) that must share a 1.8-MB utilities library for request validation and logging. The developer has packaged the shared library into a single zip file and needs to deploy it once and update all eight functions across dev, staging, and prod with minimal operational overhead. Which solution will meet this requirement in the MOST operationally efficient way?
Lambda cannot directly import Python modules from an S3 object path at runtime as part of the normal import mechanism. While S3 can store deployment artifacts, the code and dependencies must be included in the function package, a container image, or a layer. Pulling code from S3 during invocation would require custom download logic, adds latency and failure points, and is not an operationally efficient or standard pattern.
Creating a separate Lambda to perform validation/logging and invoking it synchronously turns a local library call into a network call. This increases end-to-end latency, adds per-invocation cost, introduces additional throttling and error handling concerns, and complicates tracing and retries. It also creates tight runtime coupling between functions. This is generally an anti-pattern for shared utility code that should run in-process.
A Lambda layer is designed to share common libraries across multiple functions. You package the utilities into a layer, publish a version, and attach it to all eight functions. Updates are handled by publishing a new layer version and updating function configuration (ideally via IaC/CI/CD) per environment. This minimizes duplication, supports controlled rollouts/rollbacks, and is the most operationally efficient solution.
Container images can include shared libraries via a base image, but migrating all functions to container packaging is more operationally heavy than needed for a 1.8-MB Python utility library. It introduces image build pipelines, ECR management, and larger deployment artifacts. While viable, it is not the MOST operationally efficient compared to Lambda layers for simple shared dependency reuse across many functions.
Core Concept: This question tests AWS Lambda code sharing and deployment best practices. The primary concept is using AWS Lambda layers to package and centrally manage shared dependencies (libraries, runtimes, or common code) that multiple Lambda functions can consume.

Why the Answer is Correct: A Lambda layer is purpose-built for exactly this scenario: eight Python 3.11 functions must share the same 1.8-MB utilities library, and the team wants to deploy it once and update all functions with minimal operational overhead across dev, staging, and prod. By publishing a layer version containing the library and attaching it to each function, you decouple shared code from function code. Updates become a controlled operation: publish a new layer version and update the function configurations (or aliases/versions via IaC/CI/CD) to reference the new layer version. This is operationally efficient and aligns with AWS’s recommended pattern for shared libraries.

Key AWS Features:
- Lambda layers support packaging dependencies separately and mounting them at /opt (Python typically uses /opt/python).
- Versioned layers enable safe rollouts and rollbacks by selecting specific layer versions per environment.
- Layers work cleanly with AWS SAM/CloudFormation/CDK/Terraform to apply the same layer to multiple functions and environments consistently.
- Layers keep function deployment packages smaller and reduce duplication across functions.

Common Misconceptions: Some assume S3 can be used as an import path at runtime (it cannot for standard Lambda imports), or that a “shared utility Lambda” is a good reuse mechanism. Invoking another Lambda for validation/logging adds latency, cost, failure modes, and operational complexity. Container images can share a base image, but migrating all functions to images is heavier than necessary for a small shared library and increases build/publish overhead.

Exam Tips: When you see “multiple Lambda functions share the same library” and “deploy once/update many,” think “Lambda layers.” Also remember layers are versioned; environments typically pin to a specific version and update via CI/CD for controlled promotion from dev to staging to prod.
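The “deploy once” step can be sketched as follows. The package name `myutils` is hypothetical, and the final `aws lambda publish-layer-version` call (commented out) requires AWS credentials, so it is shown for reference only:

```shell
# Lambda layers for Python must place code under python/ in the zip,
# which Lambda makes available at /opt/python at runtime.
mkdir -p layer/python/myutils
printf 'def validate(event):\n    return True\n' > layer/python/myutils/__init__.py

# Zip with python/ at the archive root (stdlib zipfile; equivalent to zip -r).
cd layer
python3 -m zipfile -c ../utils-layer.zip python
cd ..

ls utils-layer.zip

# Publish once, then attach the returned LayerVersionArn to all eight
# functions via IaC/CI/CD:
#   aws lambda publish-layer-version \
#     --layer-name shared-utils \
#     --compatible-runtimes python3.11 \
#     --zip-file fileb://utils-layer.zip
```

Each function then imports the library normally (`import myutils`) because /opt/python is on the runtime's module search path.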
Study period: 1 month
I passed with a score of 793! I solved at least 30 questions a day. It's great that I could work through them whenever I had a spare moment on the go, haha.
Study period: 2 months
The app's questions were very similar to the exam, and the explanations helped me understand why the wrong answers were wrong.
Study period: 1 month
Thank you very much, these questions are wonderful !!!
Study period: 2 months
I passed a month ago and am only now writing this review. The question layout was similar to the exam.
Study period: 2 months
I just passed the exam, and I can confidently say that this app was instrumental in helping me thoroughly review the exam material.
