AWS Certified Developer - Associate (DVA-C02)

Practice Test #5

Simulate the real exam experience with 65 questions and a 130-minute time limit. Practice with AI-verified answers and detailed explanations.

65 questions · 130 minutes · Passing score 720/1000

Powered by AI

Answers and explanations verified by three AI models

Every answer is cross-checked by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and an in-depth analysis of every question.

GPT Pro
Claude Opus
Gemini Pro
Per-option explanations
In-depth question analysis
Accuracy through 3-model consensus

Practice questions

Question 1

A fintech startup runs a rules engine on Amazon ECS with AWS Fargate in two AWS accounts (prod and audit), processing up to 3,000 events per minute. When an event fails validation, the container must call a third-party incident REST API that requires retrieving a bearer access token at runtime. The token must be encrypted at rest and in transit and be accessible to workloads in both accounts with the least management overhead. Which solution meets these requirements?

Incorrect. Systems Manager Parameter Store SecureString can store sensitive values and can be shared across accounts in some scenarios by using resource policies on advanced parameters. However, this option specifies an AWS managed KMS key, and cross-account use of the encrypted value requires KMS permissions that cannot be appropriately controlled with an AWS managed key. Because the workloads in both accounts must retrieve and decrypt the same token, a customer managed KMS key is the correct pattern. Even aside from that, Secrets Manager is the more purpose-built service for runtime secret retrieval with lower operational overhead.

Incorrect. DynamoDB plus KMS can securely store encrypted data, but it is not a managed secrets solution. You would need to implement your own retrieval, caching, rotation strategy, and careful IAM scoping for DynamoDB reads and KMS decrypt. This adds operational and development overhead and increases the chance of mistakes (e.g., logging plaintext, improper key policies). It meets encryption requirements but not “least management overhead.”

Correct. Secrets Manager is purpose-built for storing tokens and credentials, encrypts secrets at rest with KMS (including customer managed keys), and uses TLS for retrieval. It supports resource-based policies for cross-account access, enabling one secret to be shared with workloads in both prod and audit accounts. ECS task roles can be granted least-privilege secretsmanager:GetSecretValue (and KMS decrypt as needed), minimizing operational overhead.

Incorrect. S3 with SSE-KMS can encrypt objects at rest, TLS protects data in transit, and bucket policies can grant cross-account access. However, S3 is not a secret store: you must manage object formats, retrieval logic, caching, and potential exposure risks (e.g., accidental broader bucket permissions). Using an AWS managed KMS key also reduces control. This is higher overhead and less aligned with best practices than Secrets Manager.

Question analysis

Core Concept: Choose the AWS service that is purpose-built for storing sensitive runtime credentials, supports encryption at rest and in transit, and can be shared across AWS accounts with minimal operational effort. The best fit is AWS Secrets Manager with a customer managed KMS key and a resource-based policy for cross-account access.

Why the Answer is Correct: Secrets Manager is designed specifically for secrets such as bearer tokens and API credentials. It encrypts secrets at rest with KMS, returns them over TLS in transit, and supports cross-account access through resource-based policies. Using a customer managed KMS key is important because cross-account decryption requires explicit key policy control, which AWS managed keys do not provide in the same way.

Key AWS Features: Secrets Manager provides native secret storage, fine-grained IAM permissions, optional rotation support, and direct integration patterns for ECS workloads. A resource policy on the secret can allow task roles in both prod and audit accounts to retrieve the same secret. A customer managed KMS key allows explicit decrypt permissions for those cross-account principals.

Common Misconceptions: Parameter Store can store SecureString values and can support parameter sharing in some cases, but it is not the best answer here because the option specifies an AWS managed KMS key, which is unsuitable for the required cross-account decrypt model. DynamoDB and S3 can hold encrypted data, but they are generic storage services and require more custom code and operational handling than a managed secrets service.

Exam Tips: When the requirement mentions tokens, credentials, runtime retrieval, encryption, and least management overhead, prefer Secrets Manager unless the question clearly points to a simpler configuration store. For cross-account encrypted secret access, always verify both the resource policy and the KMS key type and policy model.
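To make the retrieval side concrete, here is a minimal sketch, assuming a hypothetical secret ARN and region; the secret's resource policy and the customer managed KMS key policy must both grant the calling account's task role access.

```python
import boto3

# Hypothetical ARN of the shared secret in the owning account; cross-account
# callers must reference the full ARN, not the secret's short name.
SECRET_ARN = "arn:aws:secretsmanager:us-east-1:111111111111:secret:incident-api-token-AbCdEf"

client = boto3.client("secretsmanager", region_name="us-east-1")

# Retrieval happens over TLS; Secrets Manager decrypts with the customer
# managed KMS key, so the task role also needs kms:Decrypt on that key.
token = client.get_secret_value(SecretId=SECRET_ARN)["SecretString"]
```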

Question 2

A real-time food delivery platform stores courier GPS pings in an Amazon DynamoDB table, ingesting tens of millions of items per day, where each ping is a single item. The application only needs pings from the most recent 24 hours, so any item older than 24 hours should be removed. What is the MOST cost-effective way to delete items older than 24 hours?

Incorrect. A scheduled table scan is extremely expensive and inefficient at this scale because Scan reads large portions of the table repeatedly, consuming significant read capacity (or on-demand read charges) and requiring ongoing EC2 costs and operations. BatchWriteItem helps reduce API calls but does not eliminate the fundamental cost of scanning and explicit deletes. This is not the most cost-effective approach for time-based expiration.

Incorrect. Running scheduled ECS/Fargate tasks adds compute cost and operational overhead, and the approach still relies on scanning the table to find expired items. Even if it runs frequently, it will generate substantial DynamoDB read traffic and explicit deletes. Fargate is convenient but not cost-optimal for a simple, continuous cleanup job when DynamoDB TTL can handle expiration natively.

Incorrect. While querying a GSI is better than scanning the base table, it still incurs ongoing read costs on the index plus Lambda invocation costs and explicit delete write operations. Also, DynamoDB does not have a native “Date” attribute type; you’d store dates as String or Number. This design is more complex than necessary and is not as cost-effective as TTL.

Correct. DynamoDB TTL is purpose-built for expiring items automatically. Storing a Unix epoch timestamp (seconds) in a Number attribute and configuring TTL to use it enables DynamoDB to delete items asynchronously without scans, scheduled compute, or explicit delete operations. This minimizes cost and operational burden and is the standard best practice for removing items older than a retention window.

Question analysis

Core Concept: This question tests Amazon DynamoDB data lifecycle management and cost optimization, specifically the Time to Live (TTL) feature for automatic item expiration.

Why the Answer is Correct: With tens of millions of GPS pings per day and a strict requirement to keep only the last 24 hours, the most cost-effective approach is to let DynamoDB expire items automatically using TTL. You store an expiration timestamp per item (24 hours after creation) and configure TTL to reference that attribute. DynamoDB then marks items as expired and deletes them asynchronously in the background without you running scans, queries, or delete jobs. This avoids continuous read capacity consumption (from scans/queries), avoids compute costs (EC2/Fargate/Lambda), and reduces operational complexity.

Key AWS Features: DynamoDB TTL requires an attribute of type Number containing a Unix epoch timestamp in seconds. When the current time exceeds that value, the item becomes eligible for deletion. Deletions are best-effort and not immediate; DynamoDB typically removes expired items within hours, which is usually acceptable for “only need last 24 hours” retention. TTL deletions do not consume provisioned write capacity in the same way as application-driven deletes, and you can optionally use DynamoDB Streams to react to deletions if downstream cleanup is needed.

Common Misconceptions: A common trap is assuming you must actively delete old items via scheduled scans/queries. That approach is expensive at scale because scans read the whole table (or large portions) repeatedly, and even GSI-based approaches still require reads plus explicit deletes. Another misconception is using a “Date” type; DynamoDB has String, Number, Binary, Boolean, Null, List, Map, and Set types, and TTL specifically expects a Number epoch timestamp.

Exam Tips: When you see “delete items older than X” in DynamoDB, immediately consider TTL as the default answer for cost-effective expiration. Remember: TTL uses epoch seconds (Number), is asynchronous (not real-time), and is ideal for ephemeral/time-series data like GPS pings, sessions, and logs. If strict immediate deletion is required, then you’d consider application deletes, but that is rarely “MOST cost-effective.”
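As a sketch of the TTL setup, assuming a hypothetical CourierPings table, key schema, and attribute names: enable TTL once on the table, then write each ping with an epoch-seconds expiry.

```python
import time
import boto3

# One-time configuration: point TTL at a Number attribute (name is hypothetical).
boto3.client("dynamodb").update_time_to_live(
    TableName="CourierPings",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expiresAt"},
)

# Each ping carries its own expiry 24 hours out; DynamoDB later deletes it
# asynchronously, with no scans or scheduled compute.
table = boto3.resource("dynamodb").Table("CourierPings")
now = int(time.time())
table.put_item(
    Item={
        "courierId": "courier-42",     # assumed partition key
        "pingTime": now,               # assumed sort key
        "lat": "40.4168",
        "lon": "-3.7038",
        "expiresAt": now + 24 * 3600,  # Unix epoch seconds, as TTL requires
    }
)
```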

Question 3

A fintech startup is moving a legacy payment ledger to an Amazon Aurora PostgreSQL-Compatible cluster with 1 writer and 2 readers, and requires the database credentials to be stored securely and rotated automatically every 30 days without adding any application code or custom rotation logic. Which solution best meets these requirements?

Incorrect. AWS Systems Manager Parameter Store can securely store encrypted values using SecureString, but it is not the primary AWS service for managed database credential rotation. The phrase 'turn on rotation' aligns with AWS Secrets Manager, which is purpose-built for scheduled secret rotation and database integration. For Aurora PostgreSQL credentials that must rotate automatically every 30 days, Secrets Manager is the better and expected choice.

Correct. AWS Secrets Manager is specifically designed to store sensitive credentials such as database usernames and passwords and to rotate them automatically on a defined schedule. It integrates with Amazon Aurora PostgreSQL-Compatible clusters, making it the best fit for rotating database credentials every 30 days. This approach avoids embedding credentials in code and does not require the application to implement its own credential rotation logic.

Incorrect. This option fails the core database requirement because the workload is being migrated to an Amazon Aurora PostgreSQL-Compatible cluster, not Amazon DynamoDB. It also pairs that incorrect database choice with Parameter Store, which is not the best service for managed automatic database credential rotation. Therefore it is wrong on both the database platform and the secret-management requirement.

Incorrect. Although AWS Secrets Manager is the correct service for securely storing and rotating credentials, DynamoDB is not the database specified in the scenario. The question explicitly requires an Aurora PostgreSQL-Compatible cluster with one writer and two readers, which describes an Aurora architecture rather than DynamoDB. Because the database service is wrong, this option cannot be the best answer.

Question analysis

Core Concept: This question tests which AWS service securely stores database credentials and supports automatic scheduled rotation for an Amazon Aurora PostgreSQL-Compatible cluster with minimal operational effort and no application-side credential rotation logic. The correct pairing is Aurora PostgreSQL with AWS Secrets Manager.

Why the Answer is Correct: AWS Secrets Manager is the AWS service purpose-built for storing secrets such as database usernames and passwords and rotating them automatically on a schedule like every 30 days. It integrates directly with Amazon RDS and Aurora, including Aurora PostgreSQL-Compatible clusters, and supports managed rotation workflows for database credentials. This satisfies the requirement to store credentials securely and rotate them automatically without adding custom rotation handling into the application.

Key AWS Features:
1) Secure secret storage: Secrets Manager encrypts secrets with AWS KMS and controls access through IAM policies.
2) Native RDS/Aurora integration: It supports credential rotation for Aurora PostgreSQL and updates the secret value as part of the rotation workflow.
3) Scheduled rotation: You can configure automatic rotation intervals such as every 30 days.
4) Reduced application impact: Applications read the current secret from Secrets Manager rather than embedding credentials, so no application-side rotation logic is needed.

Common Misconceptions: Systems Manager Parameter Store can store encrypted values, but it is not the standard AWS service for managed database credential rotation. Another common mistake is choosing DynamoDB just because it is a database service, even though the requirement explicitly states an Aurora PostgreSQL-Compatible cluster.

Exam Tips: When a question mentions database credentials, secure storage, and automatic rotation on a schedule, AWS Secrets Manager is usually the best answer. If the database is explicitly Aurora or RDS, look for the option that keeps that database choice and pairs it with Secrets Manager rather than Parameter Store.
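On the application side, the benefit is that code only ever reads the current secret value. A minimal sketch, assuming a hypothetical secret name and the standard JSON shape Secrets Manager uses for RDS/Aurora credentials:

```python
import json
import boto3

sm = boto3.client("secretsmanager")

# Rotation (e.g., every 30 days) is configured on the secret itself, so the
# application needs no rotation logic: it just fetches the latest version.
raw = sm.get_secret_value(SecretId="prod/ledger/aurora")["SecretString"]
secret = json.loads(raw)

# RDS/Aurora secrets follow a documented JSON structure with these keys.
db_user = secret["username"]
db_pass = secret["password"]
db_host = secret["host"]
```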

Question 4

A developer is building a serverless application for drivers to reserve electric vehicle (EV) charging connectors at a city network of 12 stations (8 connectors per station). Drivers send reservation requests to an Amazon API Gateway API backed by an AWS Lambda function that acknowledges the request within 300 ms and generates a reservation ID. The application includes two additional Lambda functions: one for allocation management and one for payment processing; these two functions run in parallel and write reservation records to an Amazon DynamoDB table. Concurrent requests for the same connector and 30-minute timeslot can arrive within 10 ms of each other. The application must assign connectors according to the following: if a connector timeslot is accidentally double-reserved, the first reservation request received by the application must get the slot and only that reservation's payment must be processed; however, if the first reservation is rejected during payment (payment response can take up to 30 seconds), the second reservation must get the slot and its payment must be processed. Which solution will meet these requirements?

Correct. SNS FIFO can fan out to multiple SQS FIFO queues while preserving ordering per MessageGroupId and providing deduplication. Using a group key like connectorId+timeslot serializes competing reservations for the same slot, ensuring the first request is processed first. If the first payment fails, the next queued reservation for that same group can proceed, meeting the “first wins unless payment rejected” requirement.

Incorrect. Directly invoking allocation and then payment from the initial Lambda introduces tight coupling and reduces parallelism (the question states allocation and payment run in parallel). It also does not provide durable ordering across concurrent requests, retries, or partial failures. Two near-simultaneous requests could still interleave, and without FIFO semantics or a distributed lock/conditional write, “first received wins” is not reliably enforced.

Incorrect. SNS standard topics provide at-least-once delivery with no ordering guarantees and possible duplicates. With concurrent requests arriving within milliseconds, the second request could be delivered/processed before the first, violating the “first request received must get the slot” rule. Duplicate deliveries could also cause multiple payment attempts unless additional idempotency and coordination logic is implemented beyond what the option provides.

Incorrect. A single SQS standard queue does not guarantee ordering and can deliver messages more than once. Even if it were FIFO, using one queue for two Lambda consumers is problematic: each message would be processed by only one consumer, not both, so you would not reliably run allocation and payment in parallel for the same reservation. Fan-out requires separate queues or a pub/sub mechanism.

Question analysis

Core Concept: This question tests event ordering, concurrency control, and exactly-once/first-wins processing patterns in serverless architectures. The key AWS concepts are FIFO messaging (ordering + deduplication), fan-out to parallel consumers, and ensuring that allocation and payment actions are coordinated so that only the correct reservation is charged.

Why the Answer is Correct: Option A uses an SNS FIFO topic to fan out the reservation event to two SQS FIFO queues (one per downstream workflow). FIFO provides strict ordering within a message group and deduplication, which is critical when two requests for the same connector/timeslot arrive within milliseconds. By using a MessageGroupId derived from (stationId, connectorId, timeslot), all competing reservations for the same slot are processed in order. The first request is processed first by both allocation and payment pipelines. If payment for the first reservation fails (up to 30 seconds), the next message in the same group can then be processed, allowing the second reservation to obtain the slot and be charged.

Key AWS Features:
- SNS FIFO topic: preserves ordering per message group and supports content-based deduplication.
- SQS FIFO queues: guarantee ordered, exactly-once processing semantics (with deduplication window) per MessageGroupId.
- Fan-out pattern: separate queues decouple allocation and payment, allowing parallel scaling while maintaining per-slot ordering.
- Best practice: use a deterministic MessageGroupId for the contested resource (connector+timeslot) to serialize only what must be serialized, while allowing different connectors/timeslots to process concurrently.

Common Misconceptions: Many assume a standard SNS topic is sufficient for fan-out, but standard SNS/SQS do not guarantee ordering and can deliver duplicates, which breaks “first request wins” under race conditions. Others try to sequence Lambda invocations directly, but that reduces parallelism and still doesn’t provide durable ordering/locking across retries and failures.

Exam Tips: When you see “requests arrive within 10 ms,” “first received must win,” and “second should proceed only if first fails,” think FIFO + message group for the contested resource. For parallel downstream processing with ordering, prefer SNS FIFO -> multiple SQS FIFO queues. Also remember that DynamoDB conditional writes/transactions can enforce uniqueness, but the question’s options focus on messaging/orchestration; pick the option that provides ordering and deduplication guarantees end-to-end.
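A minimal sketch of the publish side, assuming a hypothetical FIFO topic ARN: deriving MessageGroupId from the contested resource serializes only competing reservations, while a per-reservation deduplication ID suppresses duplicate submissions.

```python
import boto3

sns = boto3.client("sns")

def publish_reservation(station_id: str, connector_id: str,
                        timeslot: str, reservation_id: str, payload: str) -> None:
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:111111111111:ev-reservations.fifo",
        Message=payload,
        # Same connector + timeslot -> same group -> strict FIFO ordering.
        MessageGroupId=f"{station_id}#{connector_id}#{timeslot}",
        # Deduplicated within SNS FIFO's 5-minute deduplication window.
        MessageDeduplicationId=reservation_id,
    )

publish_reservation("station-07", "connector-3", "2025-06-01T10:00",
                    "res-001", '{"driverId": "d-123"}')
```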

Question 5

A mobile gaming company is deploying a leaderboard service and will create an Amazon DynamoDB table using an AWS CloudFormation template; compliance requires server-side encryption at rest with an AWS owned key to avoid any KMS key administration, and the table must launch in us-east-1 with 100 write capacity units and 250 read capacity units. How should the engineer define the table to meet these requirements?

Incorrect. A customer managed KMS key requires key creation, key policy management, and potentially rotation/permissions administration. That directly violates the requirement to avoid KMS key administration. While it would satisfy encryption at rest, it is the most operationally heavy option and not aligned with “AWS owned key.”

Incorrect. alias/aws/dynamodb is an AWS managed KMS key, not an AWS owned key. Although AWS managed keys reduce some overhead (AWS handles rotation), they still involve KMS constructs in your account and are not the same as AWS owned keys. The requirement explicitly calls for AWS owned keys to avoid KMS key administration.

Incorrect. Customers cannot create or supply an ARN for an AWS owned key. AWS owned keys are fully controlled by AWS and are not visible or manageable in the customer account, so there is no ARN you can reference in SSESpecification.KMSMasterKeyId. This option is not technically feasible.

Correct. DynamoDB encrypts tables at rest by default using an AWS owned key when no specific KMS key is configured. Omitting SSESpecification in the CloudFormation template leverages the default behavior and meets the compliance requirement of AWS owned key encryption with no KMS key administration. Provisioned throughput can be set separately to 100 WCU and 250 RCU, and deploying the stack in us-east-1 satisfies the region requirement.

Question analysis

Core Concept: This question tests Amazon DynamoDB server-side encryption (SSE) configuration via AWS CloudFormation and the differences between AWS owned keys, AWS managed KMS keys, and customer managed KMS keys. It also touches on provisioned capacity settings (RCU/WCU) and region deployment (us-east-1 is determined by the stack’s region).

Why the Answer is Correct: The requirement is “server-side encryption at rest with an AWS owned key” and “avoid any KMS key administration.” DynamoDB encrypts data at rest by default. When you omit the SSESpecification in the CloudFormation DynamoDB table resource, DynamoDB uses its default encryption behavior, which is SSE enabled using an AWS owned key (AWS owned keys are not visible in your account and require no configuration, rotation, or policy management). Therefore, defining the table with default encryption settings best matches the compliance requirement and eliminates any KMS administration.

Key AWS Features:
- DynamoDB encryption at rest is enabled by default for all tables.
- AWS owned keys: fully managed by AWS, not in your account, no key policies/rotation to manage.
- AWS managed KMS keys (e.g., alias/aws/dynamodb): exist in your account, are managed by AWS but still involve KMS usage controls/auditing and can be referenced; they are not “AWS owned keys.”
- Provisioned throughput: set ProvisionedThroughput with ReadCapacityUnits=250 and WriteCapacityUnits=100 in the template. Region is controlled by deploying the CloudFormation stack in us-east-1.

Common Misconceptions: A frequent trap is assuming “AWS managed KMS key” equals “AWS owned key.” They are different: AWS managed keys are KMS keys in your account (AWS manages rotation), while AWS owned keys are not customer-accessible and require no KMS configuration. Another misconception is thinking you must explicitly set SSESpecification to enable encryption; for DynamoDB, encryption is on by default.

Exam Tips:
- If a question explicitly says “AWS owned key” and “no KMS administration,” default encryption (no KMS key specified) is usually the intended answer.
- If you see alias/aws/service, that indicates an AWS managed KMS key, not an AWS owned key.
- For region requirements in CloudFormation, remember resources are created in the region where the stack is launched, not via a property on the resource.
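Expressed as a sketch (shown as a Python dict mirroring the CloudFormation JSON; the table name and key schema are hypothetical), the key point is what is absent: no SSESpecification, so the default AWS owned key applies.

```python
# A CloudFormation resource for the table, written as a Python dict for
# illustration. Omitting "SSESpecification" leaves DynamoDB's default
# encryption at rest in place, which uses an AWS owned key and requires
# no KMS administration.
leaderboard_table = {
    "Type": "AWS::DynamoDB::Table",
    "Properties": {
        "TableName": "Leaderboard",  # hypothetical
        "AttributeDefinitions": [
            {"AttributeName": "playerId", "AttributeType": "S"}
        ],
        "KeySchema": [{"AttributeName": "playerId", "KeyType": "HASH"}],
        "ProvisionedThroughput": {
            "ReadCapacityUnits": 250,   # as required
            "WriteCapacityUnits": 100,  # as required
        },
    },
}
# The region is not a resource property: deploy the stack in us-east-1.
```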


Question 6

An IoT analytics team operates 200 edge gateways that send application logs to a single Amazon CloudWatch Logs log group named /prod/gateway via an AWS IoT Core rule. The on-call engineers must receive an email within 60 seconds whenever any log line contains the exact word "FATAL". The developer has already created an Amazon Simple Notification Service (Amazon SNS) topic and subscribed the on-call email list to it. What should the developer do next to meet the requirement?

Correct. A CloudWatch Logs metric filter on /prod/gateway counts occurrences of the exact term "FATAL" and publishes a custom metric. A CloudWatch alarm with a 1-minute period and threshold >= 1 will enter ALARM when at least one matching log event occurs in that minute, and the alarm action can notify the existing SNS topic. This is the canonical AWS approach for log-pattern alerting.

Incorrect. CloudWatch Logs Insights is designed for ad hoc analysis and scheduled queries, not the primary near-real-time alerting mechanism for every incoming log line within 60 seconds. While CloudWatch supports alarms on certain metric math and query-based metrics in some contexts, relying on Logs Insights queries for continuous detection is typically higher latency and not the standard exam answer versus metric filters.

Incorrect. The concept of a “subscription filter” exists in CloudWatch Logs, but it delivers matching log events to destinations like Amazon Kinesis Data Streams, Kinesis Data Firehose, or AWS Lambda—not directly to Amazon SNS. SNS subscriptions filter messages within SNS (filter policies) based on message attributes, but CloudWatch Logs does not publish log events to SNS via an SNS subscription filter.

Incorrect. CloudWatch alarms evaluate metrics and do not support embedding a CloudWatch Logs filter pattern directly in the alarm definition. There is no native alarm configuration that specifies a log group dimension and a log filter pattern to trigger notifications. You must first convert log matches into a metric (metric filter) or process logs via a subscription (e.g., Lambda) to emit a metric/event.

Question analysis

Core Concept: This question tests Amazon CloudWatch Logs metric filters and CloudWatch alarms for near-real-time detection of specific log patterns, with notifications delivered through Amazon SNS. CloudWatch Logs does not directly “email on matching log lines”; instead, you convert matching log events into a metric and alarm on that metric.

Why the Answer is Correct: Option A is the standard and intended AWS pattern: create a CloudWatch Logs metric filter on the /prod/gateway log group with a filter pattern that matches the exact word "FATAL". The metric filter increments a custom CloudWatch metric each time a matching log event arrives. Then you create a CloudWatch alarm with a 1-minute period and a threshold of >= 1, and configure the alarm action to publish to the existing SNS topic. With a 1-minute evaluation period, engineers can be notified within 60 seconds of the first matching event (subject to normal CloudWatch ingestion/processing latency).

Key AWS Features / Configurations:
- CloudWatch Logs Metric Filter: transforms log events into numerical metrics. Use a filter pattern that matches the exact term (e.g., "FATAL"; in practice, ensure it’s not a substring match if that matters).
- Custom Metric Namespace/Name: choose a clear namespace like GatewayLogs and metric name like FatalCount.
- CloudWatch Alarm: set Period = 60 seconds, Statistic = Sum, Threshold = 1, and Alarm action = SNS topic.
- SNS Topic: already created and subscribed, so the remaining step is wiring the alarm to SNS.

Common Misconceptions:
- Logs Insights is for interactive querying and dashboards; it’s not the primary mechanism for continuous, low-latency alerting on every incoming log line.
- SNS does not subscribe directly to CloudWatch Logs with “subscription filters” (those are CloudWatch Logs subscription filters to Kinesis/Lambda/Firehose, not SNS).
- CloudWatch alarms cannot natively include a log filter pattern; alarms evaluate metrics, not raw log streams.

Exam Tips: When you see “alert when log contains X within N seconds/minutes,” think: CloudWatch Logs metric filter -> CloudWatch metric -> CloudWatch alarm -> SNS. Remember: alarms are metric-based; log pattern matching requires metric filters (or a Lambda subscription pipeline, but that’s more complex than needed here).
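Wired end to end, the pattern takes two API calls. A minimal sketch, assuming hypothetical filter, namespace, metric, and alarm names plus an SNS topic ARN:

```python
import boto3

logs = boto3.client("logs")
cw = boto3.client("cloudwatch")

# Metric filter: each log event containing the term FATAL adds 1 to the metric.
logs.put_metric_filter(
    logGroupName="/prod/gateway",
    filterName="FatalFilter",
    filterPattern='"FATAL"',
    metricTransformations=[{
        "metricName": "FatalCount",
        "metricNamespace": "GatewayLogs",
        "metricValue": "1",
    }],
)

# Alarm: 1-minute Sum >= 1 triggers a publish to the on-call SNS topic.
cw.put_metric_alarm(
    AlarmName="GatewayFatalAlarm",
    Namespace="GatewayLogs",
    MetricName="FatalCount",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",  # no FATAL lines -> stay in OK state
    AlarmActions=["arn:aws:sns:us-east-1:111111111111:oncall-alerts"],
)
```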

Question 7

A backend team plans to migrate a legacy order service to AWS in 6 weeks, but the mobile app team needs immediate access to stable API endpoints to build and test UI flows for browsing and placing orders. The developer creates a GET method on the /orders resource of an Amazon API Gateway REST API with no backend available yet. The endpoints must return predefined HTTP status codes (200, 404, 429) and static JSON payloads that the mobile team can consume, without deploying any compute service. Which solution will meet these requirements?

Incorrect. AWS_PROXY with Lambda can easily return hardcoded JSON and status codes, but it requires provisioning and deploying a compute service (Lambda). The question explicitly forbids deploying any compute service. Also, proxy integrations reduce fine-grained control in API Gateway because the Lambda function is responsible for formatting the entire response, which is unnecessary for simple static stubs.

Correct. MOCK integration is purpose-built for returning responses without a backend. You configure Method Responses for 200/404/429 and then configure Integration Responses to map to each status code and return static JSON via mapping templates. This provides stable endpoints immediately for the mobile team and satisfies the requirement of no compute deployment.

Incorrect. HTTP_PROXY forwards requests to an existing HTTP endpoint. This violates the intent of “no backend available yet” and introduces a dependency on an external placeholder API that still must be built and hosted somewhere. It also adds network and availability considerations outside API Gateway and does not inherently provide static responses without an upstream service.

Incorrect. This option misstates where status codes and payloads are configured. Method Request is for request parameters, validation, and authorization, not for defining response status codes. In API Gateway REST APIs, status codes are declared in Method Response and produced/mapped in Integration Response. Therefore, this configuration would not correctly implement the required 200/404/429 static responses.

Question analysis

Core Concept: This question tests Amazon API Gateway REST API integrations, specifically the MOCK integration used to return static responses without any backend compute. It also tests how API Gateway maps responses using Method Response and Integration Response to produce specific HTTP status codes and payloads.

Why the Answer is Correct: MOCK integration is designed for situations where you want API Gateway itself to generate a response. With no backend available, the developer can configure a GET /orders method with Integration type = MOCK, then define multiple Method Responses (e.g., 200, 404, 429) and corresponding Integration Responses that map to each status code and return static JSON bodies. This meets the requirement to provide stable endpoints immediately, with predefined status codes and static payloads, and it avoids deploying any compute service (no Lambda, no containers, no EC2).

Key AWS Features:
1) MOCK Integration: API Gateway returns a response based on templates rather than invoking a backend.
2) Method Response vs Integration Response: Method Response declares which status codes/headers are possible; Integration Response maps backend (or mock) output to those method responses and defines mapping templates for the body.
3) Mapping Templates (Velocity Template Language): used to generate static JSON payloads and to select responses (often via selection patterns or request parameters). For MOCK, you typically use an Integration Request mapping template to produce a dummy payload and then configure Integration Responses to return the desired body/status.

Common Misconceptions: A common trap is thinking you must use Lambda (AWS_PROXY) to return hardcoded JSON. That works, but violates the “without deploying any compute service” constraint. Another misconception is that status codes are defined in the Method Request; they are not; status codes are configured in Method Response and produced via Integration Response mappings.

Exam Tips: When you see “no backend yet,” “static payloads,” “return predefined responses,” or “mock/stub endpoints,” think API Gateway REST API + MOCK integration. Remember: Method Request = validate/authorize/parameters; Method Response = what the client can receive; Integration Request/Response = how API Gateway generates/transforms the response. This pattern is commonly used for contract-first API development and parallel team workflows.
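A sketch of the MOCK wiring for a single 200 response, assuming hypothetical API and resource IDs and that the GET method itself already exists; the 404 and 429 responses follow the same put_method_response / put_integration_response pattern:

```python
import boto3

apigw = boto3.client("apigateway")
api_id, resource_id = "abc123", "res456"  # hypothetical

# Declare that the method may return 200 to clients.
apigw.put_method_response(
    restApiId=api_id, resourceId=resource_id, httpMethod="GET", statusCode="200"
)

# MOCK integration: the request template selects the status code to generate.
apigw.put_integration(
    restApiId=api_id, resourceId=resource_id, httpMethod="GET",
    type="MOCK",
    requestTemplates={"application/json": '{"statusCode": 200}'},
)

# Integration response: map the mock output to 200 and return static JSON.
apigw.put_integration_response(
    restApiId=api_id, resourceId=resource_id, httpMethod="GET", statusCode="200",
    responseTemplates={
        "application/json": '[{"orderId": "o-1", "status": "PLACED"}]'
    },
)
```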

Question 8

A small media startup is releasing a React Native mobile app that runs entirely on client devices with no backend servers. The app must call a third-party catalog service exposed as an HTTP API by passing the company’s API key in an HTTP header (x-api-key) without exposing the key to users, and it expects about 3,000,000 requests per month with payloads under 10 KB. Which AWS solution allows the mobile app to call the third-party API securely in the MOST cost-effective way?

Incorrect. A private REST API in API Gateway is accessible only from within a VPC through an interface VPC endpoint (AWS PrivateLink). A React Native app running on client devices over the public internet cannot directly reach a private API endpoint. While HTTP integration header injection would work technically, the connectivity model does not meet the “mobile app with no backend” requirement.

Incorrect. This uses a private REST API plus Lambda proxy integration. It fails for the same reason as A: private APIs are not reachable from public mobile clients. Additionally, adding Lambda increases cost and operational complexity (timeouts, concurrency, retries) without being necessary for simply adding a static x-api-key header and proxying small requests.

Correct. A public API Gateway REST API with an HTTP integration can proxy requests to the third-party catalog service and inject the required x-api-key header at the integration request layer, keeping the key out of the mobile app. This avoids Lambda costs and is typically the most cost-effective approach for millions of small requests per month when no custom compute is needed.

Incorrect. A public REST API with Lambda proxy integration would hide the third-party API key, but it is not the MOST cost-effective. Lambda adds per-invocation charges and potential data transfer/latency overhead compared to a direct HTTP integration. Choose Lambda only if you need custom logic (validation, aggregation, dynamic secrets, complex transformations) beyond simple header injection.

Question analysis

Core Concept: This question tests how to securely broker calls from an untrusted client (a mobile app) to a third-party HTTP API that requires a static API key, without exposing that key, and to do so cost-effectively at moderate scale. The key AWS service pattern is Amazon API Gateway acting as a lightweight “API key vault + proxy” using an HTTP integration.

Why the Answer is Correct: A public API Gateway REST API with an HTTP integration (Option C) lets the mobile app call an AWS-managed endpoint, while API Gateway injects the third-party x-api-key header on the integration request to the upstream catalog service. The mobile app never receives the third-party key. Because the payloads are small (<10 KB) and the requirement is simply to add a header and forward the request, a direct HTTP integration avoids Lambda execution costs and operational overhead. At ~3,000,000 requests/month, eliminating Lambda is typically the most cost-effective approach.

Key AWS Features: API Gateway REST API supports HTTP/HTTP_PROXY integrations and request parameter mapping. You can set a static header value for the integration request (e.g., integration.request.header.x-api-key) so the upstream receives the required key. You can also add throttling/usage plans (for your own clients), AWS WAF for abuse protection, and IAM/authorizers (e.g., Cognito JWT authorizer) to control who can call your proxy endpoint, separately from the third-party API key.

Common Misconceptions: “Private REST API” (Options A/B) sounds more secure, but private APIs are reachable only from within a VPC via interface endpoints; a React Native app on the public internet generally cannot call them directly. Another misconception is that Lambda is required to hide secrets; for simple header injection and proxying, API Gateway mapping is sufficient and cheaper.

Exam Tips: When the requirement is “hide a static credential from clients” and “just forward HTTP,” prefer API Gateway direct integrations over Lambda unless you need custom logic, complex transformations, or dynamic secret retrieval/rotation. Also, ensure the endpoint type matches the caller: mobile/public clients require a public (regional/edge-optimized) API, not a private API.
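A sketch of the header injection, assuming a hypothetical upstream URL and API/resource IDs; it is shown with HTTP_PROXY so no response mappings are needed, and a non-proxy HTTP integration supports the same requestParameters mapping. The real key would come from secure configuration, not source code.

```python
import boto3

apigw = boto3.client("apigateway")

apigw.put_integration(
    restApiId="abc123",    # hypothetical
    resourceId="res456",   # hypothetical
    httpMethod="GET",
    type="HTTP_PROXY",
    integrationHttpMethod="GET",
    uri="https://catalog.example.com/v1/items",  # hypothetical third-party API
    requestParameters={
        # Static values in API Gateway mappings are wrapped in single quotes;
        # the header is added server-side, so clients never see the key.
        "integration.request.header.x-api-key": "'REPLACE_WITH_KEY'",
    },
)
```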

Question 9

A media company has 25 application configuration values (all String type) stored as parameters under the path /prod/img/ in AWS Systems Manager Parameter Store in the us-east-1 Region of a centralized account named Shared-Config. A new image-scaling microservice will run in Amazon ECS in a separate account named Dev-Render and must read these parameters at container startup as environment variables without copying them into Dev-Render. The Dev-Render team wants a read-only, least-privilege solution with the least operational overhead for cross-account access. Which solution meets these requirements?

Incorrect. Long-lived access keys for an IAM user are explicitly discouraged by AWS security best practices (use roles and temporary credentials). It also increases operational overhead (key rotation, secret distribution to ECS, risk of leakage). While it could technically work, it is not least-privilege-friendly in practice and is not the recommended cross-account pattern for ECS workloads.

Correct. Create an IAM role in Shared-Config with read-only SSM permissions scoped to the /prod/img/ parameters, and a trust policy allowing the Dev-Render ECS task role to assume it. The task uses STS to obtain temporary credentials and reads parameters directly from us-east-1 at startup. This is least privilege, avoids copying, and has low operational overhead.

Incorrect. This option is not the best choice here because, although AWS RAM can be used to share advanced Systems Manager parameters cross-account, the question does not state that these parameters are advanced tier or that the team wants to manage sharing through RAM. The most standard and broadly applicable solution for an ECS task reading another account’s parameters is to assume a role in the owning account and call SSM directly. On AWS exams, STS-based cross-account access is usually preferred unless the question explicitly points to a supported resource-sharing feature and its prerequisites.

Incorrect. Exporting parameters to S3 creates a second copy of configuration data, violating the “without copying them into Dev-Render” intent and adding operational overhead (scheduled job, synchronization, access controls, potential drift). It also broadens the attack surface and complicates least-privilege because S3 access and object lifecycle must be managed in addition to SSM.

Question analysis

Core Concept: This question is about cross-account access to AWS Systems Manager Parameter Store from an ECS task, while keeping the parameters in the centralized account and using a read-only, least-privilege model. The most standard AWS pattern for a workload in one account to read resources in another account is to assume an IAM role in the owning account by using AWS STS.

Why the Answer is Correct: Option B is the best answer because the ECS task in Dev-Render can assume a tightly scoped read-only role in Shared-Config and then call SSM in us-east-1 to retrieve the parameters under /prod/img/ at startup. This uses temporary credentials, avoids copying configuration into the consuming account, and is the most broadly applicable and operationally simple cross-account pattern for compute workloads.

Key AWS Features:
- An IAM role in Shared-Config with permissions such as ssm:GetParameter, ssm:GetParameters, and typically ssm:GetParametersByPath for the /prod/img/ hierarchy.
- A trust policy on that role allowing the ECS task role in Dev-Render to call sts:AssumeRole.
- The ECS task role in Dev-Render needs permission to assume only that specific role.
- Because the parameters are String type, no KMS decrypt permission is needed; SecureString would require additional KMS permissions.

Common Misconceptions:
- AWS RAM can share some resource types, and advanced Parameter Store parameters can be shared cross-account, but that is not the default or most universally applicable answer unless the question explicitly indicates use of advanced shared parameters.
- Long-lived IAM user access keys are not the preferred pattern for ECS tasks because they increase security risk and operational burden.
- Copying values into S3 or another store creates duplication and synchronization overhead.

Exam Tips: When an AWS exam asks for cross-account access from a running workload, first think of STS AssumeRole with least-privilege permissions in the owning account. Also watch for wording like "least operational overhead" and "without copying"; these usually favor direct access with temporary credentials over replication or static secrets.
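A sketch of the startup logic in the Dev-Render task, assuming a hypothetical role ARN in Shared-Config; temporary STS credentials scope the read to the /prod/img/ path:

```python
import boto3

# Assume the read-only role in Shared-Config (role ARN is hypothetical);
# the ECS task role must be allowed sts:AssumeRole on exactly this role.
creds = boto3.client("sts").assume_role(
    RoleArn="arn:aws:iam::222222222222:role/SharedConfigParamReader",
    RoleSessionName="img-scaling-startup",
)["Credentials"]

# Call SSM in us-east-1 with the temporary credentials.
ssm = boto3.client(
    "ssm",
    region_name="us-east-1",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Read all 25 String parameters under the path; no KMS permissions are
# needed because they are not SecureString.
env = {}
for page in ssm.get_paginator("get_parameters_by_path").paginate(
    Path="/prod/img/", Recursive=True
):
    for param in page["Parameters"]:
        env[param["Name"].rsplit("/", 1)[-1]] = param["Value"]
```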

Question 10

A developer is running a 50/50 A/B test for a new "smart coupon" feature in an Amazon CloudWatch Evidently project in the staging environment. The feature has two variations (Variation A and Variation B) and requests are evaluated using the userId attribute. The developer needs to ensure that only Variation A is returned when calling the application's GET /v2/coupons endpoint using the test user ID qa-7788. Which solution will meet this requirement?

Correct. A feature override in CloudWatch Evidently is designed to force a specific variation for a specific evaluation identifier (entity ID). Since the application evaluates using the userId attribute, setting the override identifier to qa-7788 and selecting Variation A guarantees that qa-7788 always receives Variation A, independent of the 50/50 A/B allocation.

Incorrect. The override identifier is not the variation name; it must be the entity ID value used during evaluation (here, userId such as qa-7788). Also, setting “variation to 100%” is not how overrides work for a single user; that would imply changing allocation for a broader set, not targeting one identifier deterministically.

Incorrect. Creating an experiment and setting Variation B to 0% would affect the experiment’s traffic allocation at a population level, not specifically guarantee behavior for user qa-7788. Additionally, “experiment identifier” is not used to target a specific userId; it identifies the experiment resource, not the evaluation entity.

Incorrect. Using the AWS account ID as an experiment identifier does not map to the userId-based evaluation used by the application. Experiments don’t target a specific userId unless your application uses that identifier as the entity ID and you use overrides/targeting. This option confuses resource identifiers with evaluation identifiers and would not ensure qa-7788 gets Variation A.

Question analysis

Core Concept: This question tests Amazon CloudWatch Evidently feature flags and A/B testing behavior, specifically how evaluation works when a feature is configured for a 50/50 split and requests are bucketed by an entity identifier (here, the userId attribute). Evidently supports deterministic assignment (consistent variation per identifier) and also supports targeted behavior through feature overrides.

Why the Answer is Correct: The requirement is to guarantee that a specific test user (qa-7788) always receives Variation A when calling GET /v2/coupons. In Evidently, a feature override lets you force a specific variation for a specific evaluation identifier. Because the application evaluates the feature using the userId attribute, setting an override identifier to qa-7788 and choosing Variation A ensures that any evaluation for that userId returns Variation A regardless of the 50/50 allocation. This is the standard approach for QA validation, demos, and safe testing of a specific path.

Key AWS Features: Evidently feature flags can be evaluated using an “entity ID” (often userId, sessionId, or deviceId). Allocation rules (like 50/50) apply to the general population, but overrides take precedence for matching identifiers. Overrides are scoped to the feature and environment, making them ideal for staging-only deterministic behavior without changing production traffic allocation.

Common Misconceptions: A common mistake is trying to manipulate the split percentage or create an experiment to force a result for one user. Experiments control traffic allocation at the population level, not per specific userId (unless you implement segmentation logic externally). Another misconception is confusing the override identifier with a variation name; the identifier must match the entity ID used during evaluation.

Exam Tips: When you see “ensure a specific user always gets a specific variation,” look for “feature override” (or “targeting”) rather than “experiment.” Also confirm what attribute is used for evaluation (userId here). Overrides must match that identifier exactly, and they are the fastest, least disruptive way to guarantee deterministic behavior for a single tester.
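A sketch of setting and verifying the override, assuming hypothetical project, feature, and variation names; entityOverrides maps the evaluation entity ID to the variation it must receive:

```python
import boto3

evidently = boto3.client("evidently")

# Force Variation A for the QA entity ID; all other userIds keep the
# normal 50/50 allocation.
evidently.update_feature(
    project="coupons-staging",   # hypothetical project name
    feature="smart-coupon",      # hypothetical feature name
    entityOverrides={"qa-7788": "VariationA"},
)

# The application's evaluation call for that userId now deterministically
# returns Variation A.
result = evidently.evaluate_feature(
    project="coupons-staging",
    feature="smart-coupon",
    entityId="qa-7788",
)
print(result["variation"])
```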

Success stories (8)

전** · Nov 26, 2025

Study period: 1 month

I passed with a score of 793! I solved at least 30 questions a day. It's nice being able to work through questions whenever I have a spare moment while out, haha.

김** · Nov 24, 2025

Study period: 2 months

The app's questions were very similar to the actual exam, and the explanations helped me understand why my answers were wrong.

********** · Nov 22, 2025

Study period: 1 month

Thank you very much, these questions are wonderful!!!

윤** · Nov 20, 2025

Study period: 2 months

I passed a month ago and am only now writing this review. The structure of the questions was similar to the exam.

A****** · Nov 16, 2025

Study period: 2 months

I just passed the exam, and I can confidently say that this app was instrumental in helping me thoroughly review the exam material.

Other practice tests

Practice Test #1

65 questions · 130 min · Passing score 720/1000

Practice Test #2

65 questions · 130 min · Passing score 720/1000

Practice Test #3

65 questions · 130 min · Passing score 720/1000

Practice Test #4

65 questions · 130 min · Passing score 720/1000

Practice Test #6

65 questions · 130 min · Passing score 720/1000

Start practicing now

Download Cloud Pass and start practicing all the AWS Certified Developer - Associate (DVA-C02) questions.


© Copyright 2026 Cloud Pass. All rights reserved.
