AWS Certified Solutions Architect - Associate (SAA-C03)

Practice Test #5

Simulate the real exam experience with 65 questions and a 130-minute time limit. Practice with AI-verified answers and detailed explanations.

65 Questions · 130 Minutes · Passing score 720/1000

Powered by AI

Answers and explanations triple-checked by AI

Every answer is cross-checked by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and an in-depth analysis of each question.

GPT Pro
Claude Opus
Gemini Pro
Per-option explanations
In-depth question analysis
Accuracy by 3-model consensus

Practice questions

Question 1

A healthcare organization runs a patient management system on Amazon EC2 instances with an Amazon RDS MySQL database. The application currently connects to the database using hardcoded credentials stored in configuration files on each EC2 instance. The organization has 15 EC2 instances across multiple Availability Zones and needs to comply with HIPAA security requirements. The security team wants to eliminate hardcoded database credentials and reduce the administrative burden of credential management while ensuring secure access to sensitive patient data. What should a solutions architect recommend to meet these security and operational requirements?

Correct. AWS Secrets Manager is designed for storing and managing secrets and supports automatic rotation for Amazon RDS MySQL using managed rotation Lambda templates. EC2 instances use IAM roles to retrieve secrets at runtime, eliminating hardcoded credentials and reducing operational overhead. Secrets are encrypted with KMS and access is auditable via CloudTrail, aligning well with HIPAA security expectations.

Incorrect. Systems Manager Parameter Store SecureString provides encrypted storage and IAM-controlled access, but it is not the best fit for automated database credential rotation at scale compared to Secrets Manager. While rotation can be engineered with custom automation, it increases operational burden and complexity. For exam scenarios involving RDS credential rotation and compliance, Secrets Manager is typically the intended service.

Incorrect. Storing encrypted credential files in S3 still relies on distributing and managing credential artifacts and does not provide a robust, integrated rotation mechanism with RDS. It also increases the risk of misconfiguration (bucket policies, object ACLs) and complicates least-privilege access patterns. S3 encryption addresses data at rest, not secret lifecycle management and rotation.

Incorrect. Encrypting EBS volumes protects credentials at rest on each instance, but credentials remain locally stored and must be rotated across 15 instances, increasing administrative burden and risk of inconsistency. A custom Lambda rotation script adds complexity and does not inherently ensure atomic rotation with RDS and immediate propagation to all instances. This is not the recommended AWS-native approach.

Question analysis

Core Concept: This question tests secure secret storage and lifecycle management for database credentials, especially for regulated workloads (HIPAA). The key AWS service is AWS Secrets Manager, which is purpose-built to store secrets (database usernames/passwords, API keys) and rotate them automatically.

Why the Answer is Correct: AWS Secrets Manager eliminates hardcoded credentials on 15 EC2 instances by centralizing the secret and allowing the application (or a bootstrap script/sidecar) to retrieve credentials at runtime using IAM permissions. For an Amazon RDS MySQL database, Secrets Manager supports native integration and automated rotation using an AWS-managed rotation Lambda template. This reduces administrative burden (no manual password changes across instances) and improves security posture by enforcing periodic rotation and minimizing credential sprawl—important for HIPAA’s access control and audit expectations.

Key AWS Features / Best Practices:
- Automatic rotation: Configure rotation (e.g., every 30 days) with a rotation Lambda that updates the RDS password and the stored secret atomically.
- Fine-grained access control: Use IAM roles on EC2 instances to allow secretsmanager:GetSecretValue only for the specific secret; avoid distributing credentials via files.
- Encryption and auditing: Secrets are encrypted with AWS KMS and access is logged in AWS CloudTrail, supporting compliance evidence.
- Multi-AZ EC2 fleet: All instances can retrieve the same secret securely without copying files between AZs.

Common Misconceptions: Parameter Store (SecureString) can store encrypted values, but it does not provide the same first-class, managed secret rotation workflow for RDS credentials as Secrets Manager. S3/EBS encryption protects data at rest but does not solve secret distribution, rotation coordination, or least-privilege runtime retrieval.

Exam Tips: When you see “eliminate hardcoded credentials” plus “reduce administrative burden” and “rotation,” default to Secrets Manager (especially for RDS). Use Parameter Store for configuration values and simpler secrets when rotation is not a primary requirement. For compliance scenarios, also consider IAM least privilege, KMS encryption, and CloudTrail auditing as supporting controls.
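To make the runtime-retrieval pattern concrete, here is a minimal boto3 sketch of an application fetching its RDS MySQL credentials from Secrets Manager instead of a local config file. The secret name and JSON keys are placeholders that follow the standard RDS secret layout, not values from the question.

```python
import json

import boto3


def get_db_credentials(secret_id: str) -> dict:
    """Fetch RDS MySQL credentials from AWS Secrets Manager at runtime.

    The EC2 instance profile (IAM role) must allow secretsmanager:GetSecretValue
    on this specific secret; nothing is stored on local disk.
    """
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])


# Hypothetical secret name for illustration only.
creds = get_db_credentials("prod/patient-db/mysql")
# creds["username"], creds["password"], creds["host"] can now be passed to the
# MySQL driver instead of being read from a hardcoded configuration file.
```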

Question 2

A healthcare organization stores patient medical records and diagnostic images in multiple Amazon S3 buckets across different departments. The organization must comply with HIPAA regulations and maintain strict audit trails. The compliance team requires continuous monitoring to detect any unauthorized modifications to S3 bucket configurations, including changes to bucket policies, ACLs, encryption settings, and public access configurations. The solution must provide automated compliance checking and historical configuration tracking. What should a solutions architect implement to continuously monitor S3 bucket configuration changes and ensure compliance?

AWS Config is the correct service because it continuously records S3 bucket configuration changes and evaluates them against compliance rules. It supports managed and custom rules for controls such as encryption, public access restrictions, and bucket policy posture, which directly matches the compliance requirement. It also maintains historical configuration states, which is essential for audit trails and investigations. For identifying the actor behind a change, AWS Config is commonly paired with CloudTrail, but AWS Config is the core service for configuration monitoring and compliance.

AWS Trusted Advisor provides best-practice recommendations across categories such as security, cost optimization, performance, fault tolerance, and service limits. Although it may highlight some risky S3 settings, it does not continuously record every bucket configuration change or maintain a detailed historical timeline of those changes. It also is not the primary AWS service for rule-based compliance evaluation of resource configurations. Therefore, it cannot satisfy the requirement for continuous monitoring and historical configuration tracking for HIPAA-style audits.

Amazon Inspector is a vulnerability management service that assesses workloads such as EC2 instances, container images in Amazon ECR, and some Lambda functions for software vulnerabilities and unintended network exposure. It does not monitor S3 bucket configuration settings like bucket policies, ACLs, encryption configuration, or public access block status. It also does not provide configuration history for S3 resources. As a result, it is not suitable for continuous compliance monitoring of S3 bucket configurations.

S3 server access logging captures requests made to S3 objects and buckets, which is useful for analyzing access patterns and data-plane activity. EventBridge can route events and trigger notifications or automation when certain API calls occur, but by itself it does not maintain a normalized configuration history or evaluate resources against compliance rules. This combination may help detect that a change happened, but it does not provide the full compliance framework requested in the question. The requirement for automated compliance checking and historical configuration tracking points to AWS Config instead.

Question analysis

Core Concept: This question tests AWS Config for continuous configuration monitoring, compliance evaluation, and historical tracking of resource configurations—specifically Amazon S3 bucket security posture (policies, ACLs, encryption, and public access settings). These capabilities align strongly with HIPAA auditability requirements and the AWS Well-Architected Security Pillar (governance, detective controls, and traceability).

Why the Answer is Correct: AWS Config continuously records configuration changes to supported resources (including S3 buckets) and maintains a configuration history. It can evaluate those configurations against compliance rules (AWS managed rules and custom rules) and report noncompliance. For the requirement “continuous monitoring to detect unauthorized modifications” plus “automated compliance checking and historical configuration tracking,” AWS Config is purpose-built: it detects drift, timestamps changes, shows who/what changed (via integration with CloudTrail for API activity context), and provides a timeline of configuration states for audits.

Key AWS Features:
1) Configuration recorder + delivery channel: records S3 bucket configuration items and delivers snapshots/history to an S3 bucket.
2) AWS Config Rules: use managed rules such as checking S3 bucket public read/write prohibition, server-side encryption enabled, and public access block settings; add custom rules (Lambda or Guard) for organization-specific HIPAA controls.
3) Conformance Packs: package multiple rules and remediation guidance for consistent compliance across departments/accounts.
4) Aggregators (optional): centralize compliance visibility across multiple accounts/regions.
5) Remediation (optional): Systems Manager Automation to auto-remediate noncompliant buckets.

Common Misconceptions: Trusted Advisor provides best-practice checks but is not a continuous configuration history/audit trail system for specific bucket configuration changes. Amazon Inspector focuses on vulnerability management (EC2, ECR, Lambda) and does not assess S3 bucket configuration drift. S3 server access logging and EventBridge can help detect data-plane access and some events, but they do not provide a comprehensive, queryable configuration history with compliance evaluation across multiple configuration dimensions.

Exam Tips: When you see “continuous compliance,” “configuration drift,” “historical configuration tracking,” and “audit trails” for AWS resource settings, think AWS Config (often paired with CloudTrail). For S3 security posture (public access, encryption, policies/ACLs), AWS Config Rules and Conformance Packs are the canonical exam answer.
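As an illustration of the rule-based evaluation described above, the following boto3 sketch registers two AWS managed Config rules scoped to S3 buckets. It assumes the Config configuration recorder and delivery channel are already enabled in the account, and the rule names are arbitrary placeholders.

```python
import boto3

config = boto3.client("config")

# Evaluate every S3 bucket against two AWS managed rules; Config records the
# configuration history and flags noncompliant buckets automatically.
for rule_name, identifier in [
    ("s3-encryption-enabled", "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED"),
    ("s3-public-read-prohibited", "S3_BUCKET_PUBLIC_READ_PROHIBITED"),
]:
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": rule_name,
            "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
            "Source": {"Owner": "AWS", "SourceIdentifier": identifier},
        }
    )
```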

Question 3

A healthcare analytics company collects patient data from 15 different Electronic Health Record (EHR) systems through their APIs. Currently, they use a single Amazon EC2 instance (m5.large) to pull data every 30 minutes from each EHR system, process the data, store it in an Amazon S3 bucket for compliance reporting, and send email notifications to the compliance team when uploads complete. The company processes approximately 2GB of data per hour during peak times, and the current EC2 instance is experiencing CPU utilization of 85-90%, causing delays in data processing and notification delivery. The company wants to improve performance while minimizing operational complexity and management overhead. Which solution will meet these requirements with the LEAST operational overhead?

Auto Scaling can reduce CPU pressure by adding EC2 instances, and S3 event notifications to SNS is a solid notification pattern. However, operational overhead remains high: you must manage AMIs/patching, scaling policies, instance coordination to avoid duplicate polling, and potentially shared state for 15 API pulls every 30 minutes. It improves performance but does not minimize management overhead compared to a managed ingestion service.

Amazon AppFlow is purpose-built for managed data transfers from external/SaaS systems to AWS destinations like S3 with minimal code and infrastructure. Creating flows per EHR system offloads the polling/ingestion workload from EC2, directly addressing the CPU bottleneck with the least operations burden. S3 event notifications to SNS then provide an event-driven, managed way to email the compliance team when uploads complete.

EventBridge is excellent for routing events from AWS services and supported SaaS partners, but it cannot inherently “capture data events” from arbitrary EHR APIs without something generating events (custom code, partner integration, or an API destination pattern). You would still need a polling/ingestion component to call each EHR API and put data into S3. This increases complexity and does not directly solve the compute bottleneck.

ECS with auto scaling can handle higher throughput than a single EC2 instance and provides better deployment primitives than pets-on-EC2. However, it introduces container build/deploy pipelines, task definitions, scaling policies, and ongoing operational management. It also keeps the same fundamental approach (poll, process, upload, notify) running on compute you manage, which is higher overhead than a fully managed ingestion service like AppFlow.

Question analysis

Core Concept: This question tests selecting managed integration and eventing services to improve throughput and reliability while minimizing operational overhead. Key services are Amazon AppFlow (SaaS/API data ingestion without servers) and Amazon S3 event notifications with Amazon SNS (decoupled, managed notifications).

Why the Answer is Correct: The bottleneck is a single EC2 instance doing three jobs: polling 15 APIs, processing, and notifying. Scaling compute (Auto Scaling/ECS) can improve performance but increases operational responsibility (capacity planning, patching, scaling policies, container lifecycle). Amazon AppFlow offloads the data extraction/transfer from EC2 entirely by creating managed flows from each EHR API source to Amazon S3. This removes the CPU-bound polling workload and reduces moving parts. Once objects land in S3, S3 event notifications can publish to an SNS topic to email the compliance team (via SNS email subscriptions), providing near-real-time completion notifications without custom code.

Key AWS Features:
- Amazon AppFlow: fully managed flows, scheduling, incremental transfers (where supported), encryption in transit/at rest, and direct delivery to S3.
- Amazon S3 event notifications: trigger on ObjectCreated events and publish to SNS.
- Amazon SNS: fanout and email delivery via subscriptions; integrates cleanly with S3 events.

This design aligns with the Well-Architected Framework (Operational Excellence and Performance Efficiency) by using managed services and reducing undifferentiated heavy lifting.

Common Misconceptions: It’s tempting to pick Auto Scaling (A) because CPU is high, but that keeps the company in the business of running and scaling servers and still requires coordination of polling across instances. EventBridge (C) is excellent for event routing, but it does not “capture data events” from arbitrary external EHR APIs without custom producers; you’d still need compute/integration to call the APIs. ECS (D) improves scalability but adds container operations and does not eliminate the polling/ingestion complexity.

Exam Tips: When the question emphasizes “LEAST operational overhead,” prefer fully managed ingestion/integration services (AppFlow, Step Functions, Lambda, SQS) over EC2/ECS. Also, for “notify on S3 upload,” the canonical pattern is S3 Event Notifications -> SNS (or SQS/Lambda), not custom polling or monitoring.
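A minimal sketch of the notification half of this design, assuming a hypothetical report bucket and SNS topic already exist, the compliance team's email addresses are subscribed to the topic, and the topic policy allows the S3 service principal to publish.

```python
import boto3

s3 = boto3.client("s3")

# Publish an SNS message (which fans out to the email subscribers) whenever
# a new report object is created in the bucket. Bucket name and topic ARN
# are placeholders for illustration.
s3.put_bucket_notification_configuration(
    Bucket="ehr-compliance-reports",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:report-uploaded",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```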

Question 4

A biotechnology research company is developing a high-performance computing solution for genomic sequencing analysis in the AWS Cloud. The research team requires a parallel file system that can handle large-scale computational workloads with high throughput and low latency. The company needs the ability to use POSIX-compliant parallel file system clients for their HPC workloads. The solution must be fully managed and optimized for compute-intensive applications processing datasets up to 100 TB. Which solution meets these requirements?

DataSync and S3 Transfer Acceleration address data movement to S3, not providing a POSIX-compliant parallel file system. Mounting S3 with s3fs is not equivalent to a native POSIX parallel file system (different consistency/locking semantics, metadata behavior, and typically lower performance for HPC I/O patterns). This option focuses on transfer, not low-latency parallel shared storage for compute nodes.

Storage Gateway Volume Gateway (stored volumes) is intended for hybrid architectures where on-premises applications access cloud-backed block storage via iSCSI. It does not provide a shared POSIX parallel file system for many EC2 instances, and iSCSI volumes are not designed for concurrent parallel access by multiple clients without a clustered file system layer. It also adds unnecessary hybrid complexity for a cloud-native HPC solution.

Amazon EFS is a fully managed, POSIX-compliant shared file system accessed over NFS v4.1 and can scale throughput, but it is not a parallel file system like Lustre. Even with Max I/O mode, EFS is generally better for shared file workloads (home directories, content management, web serving) than for extreme HPC parallel I/O requiring very high throughput and low latency across many compute nodes.

Amazon FSx for Lustre is a fully managed Lustre parallel file system designed for HPC workloads requiring high throughput, low latency, and concurrent access from many compute nodes. It supports POSIX semantics and parallel I/O with file striping across storage servers, enabling very high aggregate performance. Using SSD storage aligns with the low-latency requirement and supports intensive genomic sequencing analysis on datasets up to the stated 100 TB.

Question analysis

Core Concept: This question tests selecting the right fully managed shared storage for high-performance computing (HPC) that requires a POSIX-compliant parallel file system client. In AWS, the purpose-built managed parallel file system is Amazon FSx for Lustre.

Why the Answer is Correct: Amazon FSx for Lustre is designed for compute-intensive workloads (genomics, simulation, rendering, ML feature generation) that need very high throughput and low latency with parallel access from many compute nodes. It provides a POSIX-compliant file system and is mounted using the native Lustre client, enabling true parallel I/O across a cluster. It is fully managed by AWS (provisioning, patching, monitoring, replacing failed components) and supports scaling to large datasets, including the stated ~100 TB range.

Key AWS Features: FSx for Lustre offers SSD storage for low-latency, high-IOPS workloads and high aggregate throughput that scales with file system size. It integrates with Amazon S3 (import/export) for durable data lakes while using Lustre as the high-performance scratch/working tier. It supports common HPC patterns: many EC2 instances (or AWS Batch / ParallelCluster) mounting the same file system, high metadata performance, and parallel reads/writes. This aligns with the AWS Well-Architected Performance Efficiency pillar: use managed, purpose-built services optimized for the workload.

Common Misconceptions: EFS is POSIX and shared, but it is an NFS file system, not a parallel file system; it typically won’t match Lustre’s throughput/latency characteristics for large-scale HPC parallel I/O. S3 is object storage and not POSIX; mounting via s3fs is a workaround with semantic and performance limitations. Storage Gateway Volume Gateway is for hybrid on-premises integration and iSCSI block volumes, not a parallel shared file system for many compute nodes.

Exam Tips: When you see “HPC”, “parallel file system”, “Lustre client”, “high throughput/low latency”, or “genomics sequencing analysis,” default to FSx for Lustre. Choose EFS for general shared POSIX NFS use cases (web serving, home directories) and S3 for object storage/data lakes—not as a primary POSIX parallel file system.
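For orientation, a hedged boto3 sketch of provisioning an SSD-backed FSx for Lustre file system that HPC nodes then mount with the standard Lustre client. The subnet, security group, capacity, and throughput values are placeholders and would be sized to the actual workload.

```python
import boto3

fsx = boto3.client("fsx")

# Provision a persistent, SSD-backed Lustre file system. StorageCapacity is
# in GiB and can be grown in the allowed increments toward the ~100 TB
# working set; IDs below are placeholders.
response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=2400,  # GiB; scale up for larger datasets
    StorageType="SSD",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_2",
        "PerUnitStorageThroughput": 250,  # MB/s per TiB of storage
    },
)

# Compute nodes mount the file system DNS name with the Lustre client.
print(response["FileSystem"]["DNSName"])
```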

Question 5

A healthcare company operates a patient management system that uses Amazon DynamoDB to store critical patient records and medical history data. The system handles approximately 50,000 patient records with real-time updates from multiple medical facilities across the region. Due to strict healthcare compliance requirements and the critical nature of patient data, the solutions architect must design a disaster recovery solution that can handle data corruption scenarios with a maximum data loss of 15 minutes (RPO) and system restoration within 1 hour (RTO). What should the solutions architect recommend to meet these stringent backup and recovery requirements?

DynamoDB global tables provide multi-Region, active-active replication and are excellent for high availability and low-latency global access. However, they do not protect against logical corruption because writes (including bad or corrupted updates) replicate to all Regions. Without an additional mechanism like PITR, you cannot reliably roll back to a pre-corruption state. This option addresses Region failure more than corruption recovery.

Enabling DynamoDB point-in-time recovery is the correct approach for corruption scenarios. PITR provides continuous backups and allows restoring the table to any second within the last 35 days, easily meeting a 15-minute RPO. The restore creates a new table, and the application can be redirected to it, typically achieving an RTO within 1 hour for the stated dataset size when operationalized properly.

Daily exports to S3 Glacier Deep Archive are designed for low-cost, long-term retention, not rapid operational recovery. A daily cadence cannot meet a 15-minute RPO, and Deep Archive retrieval times (hours) generally cannot meet a 1-hour RTO. Even if exports were more frequent, the restore process (retrieve, re-import) is operationally heavy compared to PITR.

This is not feasible because DynamoDB is a fully managed service; customers do not manage or access the underlying storage volumes, so EBS snapshots cannot be used to back up DynamoDB tables. For DynamoDB, backups are handled via on-demand backups, PITR, or exports to S3. This option reflects a common misconception of treating managed services like self-managed databases.

Question analysis

Core Concept: This question tests DynamoDB backup/restore strategies for disaster recovery when the failure mode is logical corruption (bad writes, accidental deletes, application bugs) and the requirements are RPO 15 minutes and RTO 1 hour.

Why the Answer is Correct: DynamoDB point-in-time recovery (PITR) continuously records changes and allows restoring a table to any second within the last 35 days. In a corruption scenario, the architect can restore the table to a timestamp just before the corruption occurred, meeting an RPO of 15 minutes (and typically much better, since PITR is second-level). The restore creates a new table, which can then be swapped in by updating application configuration (or using a blue/green approach), enabling restoration within the 1-hour RTO for a 50,000-item table.

Key AWS Features:
- DynamoDB PITR: continuous backups, restore to any second in the retention window (up to 35 days).
- Restore behavior: PITR restores to a new table; you then redirect reads/writes to the restored table.
- Operational best practice: automate the cutover (e.g., via infrastructure as code and configuration management) and validate with periodic DR tests.
- Complementary controls: use IAM least privilege and conditional writes to reduce corruption risk, but PITR is the recovery mechanism.

Common Misconceptions: Global tables (Option A) improve availability and regional resilience, but they replicate corruption too—bad data is quickly copied to all regions—so they do not solve logical corruption recovery by themselves. Glacier Deep Archive exports (Option C) are too infrequent and too slow to retrieve for a 1-hour RTO. EBS snapshots (Option D) are not applicable because DynamoDB is a managed service; you cannot snapshot its underlying storage volumes.

Exam Tips: When you see “data corruption” or “accidental delete” with DynamoDB, think PITR first. When you see “regional outage,” think global tables or multi-region strategies. Always map requirements explicitly: PITR for tight RPO and fast logical restore; archival tiers like Glacier Deep Archive are for long-term retention, not rapid recovery.
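A short boto3 sketch of the two PITR operations discussed above, using a hypothetical table name and restore timestamp.

```python
from datetime import datetime, timezone

import boto3

dynamodb = boto3.client("dynamodb")

# Enable continuous backups (PITR) on the patient records table
# ("PatientRecords" is a placeholder name).
dynamodb.update_continuous_backups(
    TableName="PatientRecords",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# During a corruption incident, restore to a second just before the bad
# writes. PITR always restores into a NEW table; the application is then
# repointed to the restored table.
dynamodb.restore_table_to_point_in_time(
    SourceTableName="PatientRecords",
    TargetTableName="PatientRecords-restored",
    RestoreDateTime=datetime(2025, 6, 1, 14, 45, 0, tzinfo=timezone.utc),
)
```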


Question 6

A financial services company runs automated fraud detection analysis using AWS Batch every 4 hours to process transaction data. The company needs a serverless solution that will automatically notify their external compliance monitoring system when the fraud detection job completes successfully. The external compliance system has a REST API endpoint that requires API key authentication with Bearer token format. The solution must be fully serverless and cost-effective. Which solution will meet these requirements?

Correct. AWS Batch job state changes are delivered to Amazon EventBridge, where a rule can match SUCCEEDED events. EventBridge API Destinations can invoke external REST endpoints directly, and EventBridge Connections can manage API key credentials and set an Authorization header in Bearer token format. This is fully serverless, minimizes components, and is typically the most cost-effective and operationally simple approach.

Incorrect. EventBridge Scheduler is for time-based invocations (cron/rate) and does not “capture” AWS Batch SUCCEEDED events. While using EventBridge (rules) to trigger Lambda would work, the option’s use of Scheduler is conceptually wrong for event capture. Even if corrected to an EventBridge rule, Lambda adds code and cost and is less cost-optimized than API Destinations for simple HTTP notifications.

Incorrect. AWS Batch does not natively publish job SUCCEEDED events to API Gateway. API Gateway is designed to front APIs for clients, not to act as an event target for Batch without an intermediary like EventBridge. Additionally, HTTP proxy integration is for forwarding inbound API requests to a backend; it does not solve the need to originate a call to an external system upon a Batch completion event.

Incorrect. Like option C, AWS Batch cannot directly publish job SUCCEEDED events to API Gateway. Even if you inserted EventBridge to call API Gateway, this adds unnecessary components (API Gateway + Lambda) compared to EventBridge API Destinations. Lambda proxy integration is useful when you must run custom logic, transform payloads, or handle complex auth, but it is not the most cost-effective for a straightforward notification.

Question analysis

Core Concept: This question tests event-driven, serverless integration patterns on AWS—specifically using Amazon EventBridge to react to AWS Batch state changes and invoke an external REST endpoint without managing servers. It also touches on secure outbound API calls using managed authentication and cost-optimized architectures.

Why the Answer is Correct: AWS Batch emits job state change events (including SUCCEEDED) to Amazon EventBridge. An EventBridge rule can match the SUCCEEDED event for the specific job queue/job definition and route it to a target. EventBridge API Destinations provide a fully managed way to call external HTTP endpoints directly from EventBridge, including support for API key–based auth and adding an Authorization header in Bearer token format via a Connection. This eliminates the need for Lambda or API Gateway, keeping the solution fully serverless and typically lower cost and lower operational overhead.

Key AWS Features:
1) EventBridge Rule with event pattern matching on AWS Batch job state changes (detail.status = "SUCCEEDED").
2) EventBridge Connection to store credentials securely (API key) and apply them to outbound requests.
3) API Destination as the target, with throttling controls and retry behavior managed by EventBridge.
4) Least-privilege IAM: EventBridge assumes a role to invoke the API destination; secrets are not embedded in code.

Common Misconceptions: Many candidates default to Lambda for “call an API when something happens.” While valid, it adds code, runtime management, and additional cost per invocation/duration. Another misconception is using API Gateway as an intermediary for outbound calls; API Gateway is primarily for exposing APIs to clients, not for receiving AWS Batch events directly or proxying outbound calls as a notification mechanism.

Exam Tips: When you see “notify external SaaS/REST endpoint on AWS event” and “fully serverless/cost-effective,” consider EventBridge + API Destinations first. Use Lambda only when transformation, complex logic, or custom signing is required. Also remember EventBridge is the native event bus for many AWS service state changes, including Batch job lifecycle events.
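To illustrate the EventBridge pieces named above, here is a hedged boto3 sketch that creates the Connection (API key presented in Bearer format), the API Destination, and a rule matching Batch SUCCEEDED events. The endpoint URL, key value, and IAM role ARN are placeholders.

```python
import json

import boto3

events = boto3.client("events")

# The Connection stores the credential and injects it as an Authorization
# header on every outbound request.
conn = events.create_connection(
    Name="compliance-api-conn",
    AuthorizationType="API_KEY",
    AuthParameters={
        "ApiKeyAuthParameters": {
            "ApiKeyName": "Authorization",
            "ApiKeyValue": "Bearer <api-key>",  # placeholder value
        }
    },
)

dest = events.create_api_destination(
    Name="compliance-endpoint",
    ConnectionArn=conn["ConnectionArn"],
    InvocationEndpoint="https://compliance.example.com/notify",  # hypothetical
    HttpMethod="POST",
)

# Match only successful completions of AWS Batch jobs.
events.put_rule(
    Name="fraud-batch-succeeded",
    EventPattern=json.dumps({
        "source": ["aws.batch"],
        "detail-type": ["Batch Job State Change"],
        "detail": {"status": ["SUCCEEDED"]},
    }),
)
events.put_targets(
    Rule="fraud-batch-succeeded",
    Targets=[{
        "Id": "api-destination",
        "Arn": dest["ApiDestinationArn"],
        "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-invoke-api-dest",
    }],
)
```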

Question 7

A growing online gaming platform 'GameHub' has an application running on a single Amazon EC2 instance with a MySQL database on the same instance. The platform is experiencing rapid traffic growth and needs a solution to handle increased user load. The company needs a solution that is highly available and can automatically scale to handle the increased traffic, both for the application and the database layers. Which solution will meet these requirements?

Correct. EC2 Auto Scaling behind an ALB provides horizontal scaling and high availability for the application tier across multiple AZs. Aurora Serverless MySQL provides a MySQL-compatible database that can automatically scale capacity with demand while maintaining high availability through Aurora’s distributed storage and failover capabilities. This best matches the requirement to auto-scale both the application and database layers.

Partially meets requirements but not best. ALB improves availability, but the option does not explicitly use an Auto Scaling group, so application auto-scaling is not guaranteed. Also, “RDS for MySQL cluster with multiple instances” is ambiguous: RDS Multi-AZ improves availability, and read replicas scale reads, but write capacity generally does not auto-scale. It’s less aligned with “automatically scale” for the database.

Incorrect. Auto Scaling + ALB can scale the application tier, but ElastiCache for Redis is a caching layer, not a durable MySQL database replacement. A “MySQL connector” does not make Redis a system of record for transactional data. This would not satisfy the requirement to scale the database layer for MySQL workloads with durability, backups, and relational features.

Incorrect. Moving to a single new EC2 instance does not provide high availability or scaling. Amazon Redshift is a columnar data warehouse optimized for analytics (OLAP), not transactional MySQL workloads (OLTP). Even if some compatibility exists via integrations, Redshift is not a drop-in MySQL database and would not meet the requirement for an auto-scaling, highly available MySQL database layer.

Question analysis

Core Concept: This question tests designing a highly available, automatically scalable architecture by separating the application tier from the database tier and using managed services. Key services: EC2 Auto Scaling + Application Load Balancer (ALB) for stateless web/app scaling, and Aurora Serverless v2 (MySQL-compatible) for database high availability and capacity scaling.

Why the Answer is Correct: Option A provides automatic scaling for both layers. The application tier becomes horizontally scalable by placing multiple EC2 instances in an Auto Scaling group behind an ALB. This supports rapid traffic growth and improves availability across multiple Availability Zones. For the database, Aurora Serverless (MySQL) is designed to automatically adjust database capacity based on load, reducing the operational burden of resizing instances. Aurora also provides Multi-AZ resilience via its distributed, fault-tolerant storage and supports fast failover, meeting the “highly available” requirement.

Key AWS Features / Best Practices:
- ALB + Auto Scaling group: health checks, cross-AZ load balancing, scaling policies (target tracking on CPU/RequestCount), and self-healing by replacing unhealthy instances.
- Aurora Serverless (v2): scales capacity in fine-grained increments, supports Multi-AZ, and is compatible with MySQL drivers. Use RDS Proxy (often paired) to manage connection pooling during scaling events.
- Architectural best practice: keep the app tier stateless (sessions in ElastiCache/DynamoDB, shared assets in S3/EFS) so instances can scale in/out safely.

Common Misconceptions: A common trap is assuming “RDS with multiple instances” automatically means a scalable cluster for writes (it usually means read replicas). Another misconception is treating ElastiCache as a database replacement; it’s a cache, not a durable system of record. Redshift is for analytics, not OLTP.

Exam Tips: When requirements explicitly say “automatically scale” for the database, look for Aurora Serverless (or other autoscaling database patterns) rather than standard RDS Multi-AZ. Also, for “highly available + scalable application,” the default exam pattern is ALB + Auto Scaling across multiple AZs. Ensure the chosen database is appropriate for transactional MySQL workloads (Aurora/RDS), not caching (ElastiCache) or warehousing (Redshift).
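As a small example of the application-tier scaling piece, a boto3 sketch of a target tracking policy attached to a hypothetical Auto Scaling group for the GameHub app fleet.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking keeps average CPU near 50% across the ALB-fronted fleet,
# adding and removing instances automatically; the group and policy names
# below are placeholders.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="gamehub-app-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```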

Question 8

A fintech startup operates a mobile banking application on AWS with 50,000 active users. The application uses Amazon Cognito for customer authentication and identity management. When customers access their account information, the mobile app calls REST APIs hosted on Amazon API Gateway to retrieve transaction data from Amazon DynamoDB. The startup is experiencing rapid user growth of 20% monthly. The company needs an AWS managed solution to secure API access control that minimizes development complexity and operational burden while handling the growing user base efficiently. Which solution will meet these requirements with the LEAST operational overhead?

A Lambda custom authorizer can validate tokens and implement complex authorization logic, but it introduces additional development and operational overhead (code, deployments, monitoring, scaling, and potential cold starts). With rapid growth and high request volume, per-request Lambda invocations can increase latency and cost. This is not the least-operations approach when Cognito user pool authorizers provide managed JWT validation.

API keys are intended for API Gateway usage plans (throttling/quotas) and identifying calling applications, not authenticating individual mobile banking customers. Storing and validating per-customer API keys in DynamoDB plus a Lambda validation layer adds complexity and creates key-rotation and leakage risks in mobile clients. It also duplicates functionality already provided by Cognito JWTs.

Including an account number in headers is not a secure access control mechanism because headers can be manipulated by the client. You would still need strong authentication (e.g., JWT) and server-side authorization checks mapping identity to allowed resources. Using Lambda to validate this increases custom logic and operational burden, and it risks insecure direct object reference (IDOR) issues if not perfectly implemented.

API Gateway with a Cognito user pool authorizer is a fully managed, low-overhead solution that validates Cognito-issued JWTs automatically (signature, expiration, issuer, audience). It scales with user growth without custom authorization code and integrates cleanly with OAuth scopes/claims for method-level access control. This best meets the requirement for least operational overhead while securing API access.

Question analysis

Core Concept: This question tests managed API access control using Amazon API Gateway integrated with Amazon Cognito. The key idea is offloading authentication token validation and request authorization to AWS-managed services (Cognito user pools + API Gateway authorizers) to minimize custom code and operational overhead.

Why the Answer is Correct: Configuring an Amazon Cognito user pool authorizer in API Gateway (D) allows API Gateway to automatically validate JSON Web Tokens (JWTs) issued by the Cognito user pool for each request. This is the lowest-operations approach because it eliminates custom Lambda authorizer code, reduces moving parts, and scales natively with user growth (20% monthly). With 50,000 active users and rapid growth, avoiding per-request Lambda invocations for authorization reduces operational burden, latency variability, and scaling concerns.

Key AWS Features: API Gateway Cognito user pool authorizers validate JWT signature, issuer, audience (client ID), and token expiration. You can use OAuth scopes and claims in the JWT to enforce fine-grained access control (e.g., route/method-level authorization). API Gateway can cache authorizer results (where applicable) to reduce repeated validation overhead. This aligns with AWS Well-Architected Security Pillar guidance: use managed identity services, implement least privilege, and centralize authentication.

Common Misconceptions: Custom Lambda authorizers (A) can seem flexible, but they add code maintenance, monitoring, cold starts, concurrency planning, and potential throttling under growth. API keys (B) are not an authentication mechanism for end users; they are primarily for metering/throttling and are easily mishandled in mobile apps. Passing account numbers in headers (C) is not secure authentication/authorization; it is user-supplied data and can be spoofed without strong token-based validation.

Exam Tips: When you see “Cognito for authentication” + “API Gateway” + “least operational overhead,” the default best answer is usually a Cognito user pool authorizer (or IAM auth for machine-to-machine). Reserve Lambda authorizers for cases requiring non-JWT identity providers, complex external policy checks, or custom token formats. Also remember: API keys are for usage plans, not customer identity.
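A minimal boto3 sketch of attaching a Cognito user pool authorizer to a REST API; the API ID and user pool ARN are placeholders. Each API method is then configured to use this authorizer (authorization type COGNITO_USER_POOLS), so no custom authorization code runs per request.

```python
import boto3

apigw = boto3.client("apigateway")

# API Gateway validates the Cognito-issued JWT (signature, issuer, audience,
# expiry) taken from the Authorization header on every request.
apigw.create_authorizer(
    restApiId="a1b2c3d4e5",  # placeholder REST API ID
    name="cognito-user-pool-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=[
        "arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE"
    ],
    identitySource="method.request.header.Authorization",
)
```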

Question 9

A media streaming company needs to store video analytics reports generated by their machine learning pipeline. The reports are automatically generated twice daily and uploaded to Amazon S3 for access by multiple departments including marketing, content strategy, and executive teams. Each report file ranges from 1.5 GB to 4 GB in size. The access patterns are highly unpredictable as different departments may need the reports at various times throughout the day or week. All reports must be instantly accessible without any retrieval delays and need to be retained for exactly 90 days for compliance purposes. The company requires the most cost-efficient storage solution that maintains immediate access capabilities. Which S3 storage class should the company implement to meet these requirements?

S3 Intelligent-Tiering is the best fit for unpredictable access with a requirement for instant availability. It automatically moves objects between Frequent and Infrequent access tiers (and optional archive tiers) based on usage, optimizing cost without requiring you to predict access patterns. It maintains millisecond access in the active tiers and reduces the risk of overpaying for S3 Standard when many objects are not accessed regularly.

S3 Glacier Instant Retrieval provides millisecond access but is intended for archive data that is rarely accessed. It typically has minimum storage duration charges (commonly 90 days) and retrieval fees, which can become expensive if access is more frequent than expected. Because the company’s access is highly unpredictable across multiple departments, Glacier Instant Retrieval can introduce unnecessary retrieval costs compared to Intelligent-Tiering’s automatic optimization.

S3 Standard offers the highest availability and millisecond access with no retrieval fees, making it attractive for immediate access requirements. However, it is generally the most expensive option for data that is not consistently accessed. Given the unpredictable access pattern and 90-day retention, many reports may be accessed infrequently, so Standard would likely cost more than necessary compared to Intelligent-Tiering.

S3 Standard-IA provides millisecond access at a lower storage price than Standard, but it charges retrieval fees and is optimized for data that is known to be infrequently accessed. With highly unpredictable access, retrieval charges can accumulate and negate savings. It also has a minimum storage duration charge (typically 30 days), which is not a problem for 90-day retention but does not address the uncertainty as well as Intelligent-Tiering.

Question analysis

Core Concept: This question tests Amazon S3 storage class selection for cost optimization under unpredictable access patterns with a strict retention requirement and a hard constraint of immediate (millisecond) access. It also implicitly tests understanding of minimum storage duration charges and retrieval fees.

Why the Answer is Correct: S3 Intelligent-Tiering is designed for data with unknown or changing access patterns while still requiring instant access. Objects are automatically moved between access tiers (Frequent Access and Infrequent Access, and optionally Archive Instant/Archive/Deep Archive tiers) based on observed access, without performance impact or retrieval delay for the non-archive tiers. Because departments may access reports unpredictably, Intelligent-Tiering avoids the risk of paying unnecessary S3 Standard rates for rarely accessed objects while still keeping frequently accessed objects in a tier optimized for that pattern.

Key AWS Features: Intelligent-Tiering provides automatic tiering using S3’s monitoring and automation, charging a small per-object monitoring fee. It offers millisecond access for Frequent and Infrequent tiers (and for Archive Instant Access if enabled), and it avoids operational overhead of lifecycle tuning when access is hard to predict. For the 90-day compliance requirement, you can use an S3 Lifecycle expiration rule to delete objects exactly at 90 days, and optionally use S3 Object Lock (governance/compliance mode) if “exactly 90 days” also implies immutability.

Common Misconceptions: S3 Standard-IA and Glacier Instant Retrieval both provide immediate access, so they can look correct. However, Standard-IA is best when access is known to be infrequent; unpredictable access can lead to higher retrieval charges and per-GB retrieval fees. Glacier Instant Retrieval is intended for rarely accessed archive data and has minimum storage duration charges (typically 90 days) and retrieval costs; it’s not ideal when access frequency is uncertain and could be moderate.

Exam Tips: When you see “unpredictable access” + “must be immediately accessible,” Intelligent-Tiering is a primary candidate. Choose Standard only when you expect frequent access. Choose Standard-IA when you are confident access is infrequent and can tolerate retrieval charges. Glacier classes are for archival patterns; confirm whether retrieval delays are acceptable and watch for minimum storage duration constraints.
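A brief boto3 sketch combining the two mechanisms mentioned above: writing reports directly to Intelligent-Tiering and expiring them at 90 days. Bucket, key, and prefix names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Upload each generated report straight into Intelligent-Tiering.
with open("report.parquet", "rb") as report:
    s3.put_object(
        Bucket="video-analytics-reports",
        Key="reports/2025-06-01-am.parquet",
        Body=report,
        StorageClass="INTELLIGENT_TIERING",
    )

# Expire report objects 90 days after creation to satisfy the retention rule.
s3.put_bucket_lifecycle_configuration(
    Bucket="video-analytics-reports",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-reports-90-days",
                "Filter": {"Prefix": "reports/"},
                "Status": "Enabled",
                "Expiration": {"Days": 90},
            }
        ]
    },
)
```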

Question 10

A financial services company operates a mission-critical customer transaction processing system using Amazon RDS for MySQL. The database handles approximately 15,000 transactions per minute during peak hours and maintains customer account information worth over $2 billion in assets. Due to increasing performance demands and the need for better scalability, the company wants to migrate to Amazon Aurora MySQL. The migration must ensure zero data loss, minimize downtime to less than 5 minutes, and require minimal operational effort from the database administration team. Which solution will meet these requirements with the LEAST operational overhead?

A manual snapshot/restore is operationally straightforward but usually causes significant downtime and cannot guarantee near-zero data loss at cutover. You must stop or accept losing in-flight writes after the snapshot time, then restore into Aurora, which can take longer than 5 minutes for large databases. This approach is better for non-critical systems or when longer maintenance windows are acceptable.

This is the best fit: create an Aurora read replica from the RDS for MySQL instance, let it replicate continuously, then promote it to a standalone Aurora cluster at cutover. With a brief write freeze and ensuring replication lag is zero, you can achieve effectively zero data loss and keep downtime to minutes. It also minimizes operational overhead because AWS manages the replication and promotion workflow.

Using mysqldump to S3 and importing into Aurora is highly manual and typically slow for large, busy databases. It requires coordinating exports, handling consistency (often needing locks or transaction snapshots), and then importing, which can exceed the 5-minute downtime requirement. It also increases risk of human error and is not ideal for mission-critical, high-transaction workloads.

AWS DMS can perform full load plus ongoing CDC replication and can meet low-downtime and low-data-loss goals. However, it generally has higher operational overhead than the Aurora read replica approach: you must provision and manage a replication instance, configure tasks, monitor latency, handle errors, and tune performance. DMS is excellent when native replication isn’t available or for complex migrations, but it’s not the least-ops option here.

Question analysis

Core Concept: This question tests low-downtime migration from Amazon RDS for MySQL to Amazon Aurora MySQL with minimal operational effort. The key concept is using Aurora’s capability to create an Aurora read replica from an existing RDS for MySQL instance (MySQL binlog-based replication) and then promoting it to cut over.

Why the Answer is Correct: Creating an Aurora read replica of the existing RDS for MySQL instance allows near-real-time replication from the source to Aurora. Once replication lag is minimal, the company can perform a controlled cutover: briefly stop writes (or quiesce the application), wait for the replica to catch up, then promote the Aurora replica to be the primary writer. This approach can achieve near-zero data loss (RPO ~ 0 when fully caught up) and downtime typically limited to DNS/endpoint/application configuration changes—often within the <5 minute requirement. Operationally, it is largely managed by AWS and avoids building and operating a separate migration replication stack.

Key AWS Features: Aurora MySQL supports creating an Aurora read replica from an RDS for MySQL source (requires compatible MySQL versions, binary logging enabled, and appropriate parameter settings). Promotion is a managed action that converts the replica into a standalone Aurora cluster. Using Aurora cluster endpoints (writer endpoint) simplifies post-cutover connectivity. This aligns with Well-Architected operational excellence by reducing manual steps and ongoing management.

Common Misconceptions: Many assume AWS DMS is always the best for minimal downtime. DMS is powerful, but it introduces additional components (replication instance sizing, task tuning, monitoring, error handling) and operational overhead. Snapshots and dump/import approaches can seem simpler, but they typically increase downtime and risk missing last-minute writes.

Exam Tips: For RDS MySQL to Aurora MySQL with strict downtime and minimal ops, look first for “Aurora read replica from RDS” and “promote” patterns. Choose DMS when heterogeneous migrations, complex transformations, or unsupported native replication paths are required. Always map requirements to RPO/RTO: replication + promotion targets low RPO and low RTO with fewer moving parts than DMS.
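A rough boto3 sketch of the replica-then-promote flow, with placeholder identifiers; exact parameters, supported engine versions, and prerequisites (such as binary logging on the source) should be verified against current RDS documentation before use.

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora MySQL cluster that replicates from the existing RDS for
# MySQL instance (identifiers and ARN are placeholders); AWS manages the
# binlog replication into the new cluster.
rds.create_db_cluster(
    DBClusterIdentifier="transactions-aurora",
    Engine="aurora-mysql",
    ReplicationSourceIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:db:transactions-mysql"
    ),
)

# Add an instance to the replica cluster so it can serve traffic after cutover.
rds.create_db_instance(
    DBInstanceIdentifier="transactions-aurora-writer",
    DBClusterIdentifier="transactions-aurora",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)

# At cutover, once replica lag is zero and writes are briefly paused, promote
# the cluster to a standalone Aurora cluster and repoint the application.
rds.promote_read_replica_db_cluster(DBClusterIdentifier="transactions-aurora")
```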

Success stories (31)

이** · Apr 25, 2026

Study period: 1 month

The questions here are somewhat harder than the actual exam, and a few of the same questions also appeared on the real test.

C********* · Mar 23, 2026

Study period: 1 week

Read each question's requirements carefully (this is the most important thing; that training matters most). I kept a quick wrong-answer notebook and went in after nailing down just 200 questions. The actual exam passages are much simpler, and the difficulty felt similar to or a bit lower than the app. I honestly thought I had failed, so I'm glad I passed. It was a big help, thank you!

소** · Feb 22, 2026

Study period: 1 week

I just worked through the questions and asked GPT about the concepts as I studied. Scraped by with a passing score of 768.

조** · Jan 12, 2026

Study period: 3 months

I just studied steadily, solved the questions, and passed. Good luck to everyone preparing for the SAA!!

김** · Dec 9, 2025

Study period: 1 month

I'm not sure how many questions I got through in the app in just four days, but over the month I went from AWS fundamentals to sketching out scenarios with the practice questions, and I passed. The exam was more confusingly worded than I expected, which threw me off, but with the extra 30 minutes I rechecked the questions I had flagged and everything was fine.

Other practice tests

Practice Test #1

65 Questions · 130 min · Passing score 720/1000

Practice Test #2

65 Questions · 130 min · Passing score 720/1000

Practice Test #3

65 Questions · 130 min · Passing score 720/1000

Practice Test #4

65 Questions · 130 min · Passing score 720/1000

Practice Test #6

65 Questions · 130 min · Passing score 720/1000

Practice Test #7

65 Questions · 130 min · Passing score 720/1000

Practice Test #8

65 Questions · 130 min · Passing score 720/1000

Practice Test #9

65 Questions · 130 min · Passing score 720/1000

Practice Test #10

65 Questions · 130 min · Passing score 720/1000
