AWS Certified Solutions Architect - Professional (SAP-C02)

Practice Test #5

Simulate the real exam experience with 75 questions and a 180-minute time limit. Practice with AI-verified answers and detailed explanations.

75 Questions · 180 Minutes · 750/1000 Passing Score

AI-Powered

Triple AI-Verified Answers & Explanations

Every answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.

GPT Pro
Claude Opus
Gemini Pro
Per-option explanations
In-depth question analysis
3-model consensus accuracy

Practice Questions

Question 1

A multinational gym franchise issues app-based QR visit passes that are scanned at turnstiles worldwide. The turnstile scanner calls a backend API to validate the QR data against a single database table and, after a successful scan, atomically marks the pass as redeemed. The company must deploy on AWS under the DNS name api.fitpass.example.org and host the database in three AWS Regions: us-east-1, eu-west-1, and ap-southeast-1. Peak traffic is 75,000 validations per minute globally with payloads under 20 KB, and the business requires the lowest possible end-to-end validation latency with p95 under 120 ms from users’ nearest AWS edge location. Which solution will meet these requirements with the lowest latency?

This is the best option among the choices because AWS Global Accelerator is purpose-built to route users to the nearest healthy regional application endpoint using the AWS global network. That reduces internet path variability and typically provides lower latency for dynamic API calls than sending every request through a CDN-style origin model. Running the backend on ECS in the same Regions as the database also keeps application-to-database communication local within each Region. Route 53 can map the required DNS name to the accelerator endpoint, satisfying the naming requirement cleanly.

This option uses CloudFront in front of EKS clusters, but CloudFront is not the best fit for selecting the nearest regional backend for a write-heavy dynamic API. Requests would still traverse from the edge to an origin Region, and CloudFront does not provide the same purpose-built global anycast routing model for regional application endpoints that Global Accelerator does. It also adds unnecessary operational complexity by using EKS when the question is focused on latency rather than container orchestration features. Aurora Global Database still leaves the design less compelling than the Global Accelerator pattern in option A.

CloudFront Functions cannot make network calls to external services such as DynamoDB, so they cannot validate a pass against a database or perform an atomic redemption update. They are limited to lightweight request and response manipulation at the edge with very constrained execution capabilities. Because the core requirement is a database-backed validation plus atomic state change, this option is technically incapable of implementing the workflow. Therefore it cannot be the correct answer regardless of DynamoDB global tables being a strong database choice.

Although DynamoDB global tables are a strong fit for multi-Region active-active data and atomic conditional updates, Lambda@Edge is not the right service for implementing this transactional backend pattern. Lambda@Edge is designed for request and response customization around CloudFront, not as a primary globally distributed application runtime for database-driven API writes. The architecture is also less direct and less aligned with AWS best practices for low-latency dynamic API routing than using Global Accelerator with regional application endpoints. As a result, this option is less appropriate than option A for the stated requirement.

Question Analysis

Core concept: This question is about choosing the lowest-latency global architecture for a dynamic API that performs a database lookup and an atomic redemption update. For globally distributed API traffic, the AWS service specifically designed to route users to the nearest healthy regional application endpoint with optimal network performance is AWS Global Accelerator. The database must exist in three Regions, but the application still needs a practical request-routing layer for dynamic, non-cacheable API calls.

Why correct: Option A is the best answer because it uses AWS Global Accelerator to direct clients to the nearest regional ECS-based API endpoint over the AWS global backbone, minimizing network latency for dynamic requests. ECS services in us-east-1, eu-west-1, and ap-southeast-1 provide regional processing close to users, and Route 53 can map the required DNS name to the accelerator. Among the provided options, this is the most appropriate architecture for a globally distributed low-latency API.

Key features:
- AWS Global Accelerator provides anycast static IPs and routes traffic to the nearest healthy regional endpoint with fast failover.
- ECS is a valid regional compute platform for API services that need to perform database reads and writes.
- Aurora Global Database supports cross-Region replication and multi-Region database presence, satisfying the requirement to host the database in three Regions.
- Route 53 can alias the custom DNS name api.fitpass.example.org to the Global Accelerator endpoint.

Common misconceptions: CloudFront is often mistaken as the best choice for all global APIs, but it is primarily a CDN and edge proxy, not the default best option for dynamic write-heavy API routing across multiple regional application stacks. CloudFront Functions cannot call databases at all, and Lambda@Edge is not the intended mechanism for implementing a write-heavy transactional backend against DynamoDB. DynamoDB global tables are excellent for active-active data, but the surrounding compute and routing pattern in options C and D is flawed.

Exam tips: For globally distributed dynamic APIs that must reach the nearest regional backend with the lowest latency, look first at AWS Global Accelerator rather than CloudFront. Use CloudFront when caching, static content delivery, or edge request manipulation is central to the design. Be cautious when an option suggests CloudFront Functions or Lambda@Edge performing full backend business logic with direct database transactions, because those services have important capability and architectural limitations.
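
To make the routing layer concrete, here is a minimal boto3 sketch of the Global Accelerator wiring described above: one accelerator, a TCP listener, and an endpoint group per Region pointing at that Region's ALB in front of the ECS service. The ALB ARNs and names are hypothetical placeholders, not values from the question.

```python
# Minimal sketch: Global Accelerator in front of three regional ALBs (placeholder ARNs).
import boto3

# Global Accelerator is a global service; its control-plane API is served from us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="fitpass-api", IpAddressType="IPV4", Enabled=True)
acc_arn = accelerator["Accelerator"]["AcceleratorArn"]

listener = ga.create_listener(
    AcceleratorArn=acc_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region; each ALB fronts the regional ECS service.
# Global Accelerator steers each client to the nearest healthy endpoint group.
regional_albs = {
    "us-east-1": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/api/aaa",
    "eu-west-1": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/app/api/bbb",
    "ap-southeast-1": "arn:aws:elasticloadbalancing:ap-southeast-1:111122223333:loadbalancer/app/api/ccc",
}
for region, alb_arn in regional_albs.items():
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": alb_arn, "Weight": 128}],
    )

# api.fitpass.example.org is then pointed at this DNS name with a Route 53 alias record.
print("Accelerator DNS name:", accelerator["Accelerator"]["DnsName"])
```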

Question 2

A digital education platform runs its enrollment service on AWS. The service exposes a REST endpoint through Amazon API Gateway that invokes an AWS Lambda function, which performs transactional reads and writes against an Amazon RDS for PostgreSQL DB instance. During a 2-hour “Early Access” promotion, Amazon CloudWatch showed concurrent Lambda executions spiking from a baseline of 80 to 3,200, active database connections climbing to 470 out of a 500-connection limit, and DB CPU utilization sustained above 85%, resulting in intermittent 502 errors and p95 API latencies over 2.5 seconds. What should a solutions architect recommend to optimize the application’s performance under these burst conditions?

Increasing Lambda memory can reduce function runtime, but it does not directly solve the database connection limit or connection storm. Explicitly closing the DB connection after each query is usually counterproductive for Lambda-to-RDS because it increases connection churn (auth/TLS setup) and can raise DB CPU. This option might slightly reduce concurrent duration, but it won’t reliably prevent exhausting 500 connections during bursts.

ElastiCache for Redis is useful when repeated read queries dominate and results can be cached (e.g., course catalog browsing). However, the workload is described as transactional reads and writes for enrollment, and the symptoms include near-max DB connections and high CPU. Caching does not reduce the number of concurrent DB connections created by Lambda, and it does not help write-heavy or strongly consistent transactional paths.

RDS Proxy is purpose-built for Lambda and bursty workloads. It pools and reuses database connections, allowing thousands of Lambda invocations to share a smaller number of backend PostgreSQL connections. This directly addresses the observed 470/500 connections, reduces CPU spent on connection management, improves latency consistency, and mitigates intermittent 502 errors caused by DB saturation. Modifying Lambda to use the proxy endpoint is the standard best practice.

Initializing the DB client outside the handler and reusing connections is a good Lambda optimization, but it only helps within a single warm execution environment. With concurrency spiking to 3,200, Lambda will create many parallel environments, each potentially holding one or more connections, still overwhelming a 500-connection RDS limit. This is a partial mitigation, not a robust solution for burst conditions compared to RDS Proxy.

Question Analysis

Core Concept: This question tests burst scaling behavior of AWS Lambda calling Amazon RDS and the resulting “connection storm” problem. The key service is Amazon RDS Proxy, which provides connection pooling and manages database connections for serverless and highly concurrent clients.

Why the Answer is Correct: During the promotion, Lambda concurrency jumped to 3,200 while the database hit 470/500 connections and sustained >85% CPU, causing 502s and high p95 latency. This pattern strongly indicates that the database is overwhelmed by too many concurrent client connections and connection churn (frequent opens/closes, TLS/auth overhead), not just slow queries. RDS Proxy sits between Lambda and RDS, maintains a warm pool of established connections to PostgreSQL, and multiplexes many Lambda invocations over fewer DB connections. This reduces connection spikes, lowers CPU spent on connection management, improves resilience during bursts, and stabilizes latency.

Key AWS Features: RDS Proxy supports PostgreSQL, integrates with IAM and AWS Secrets Manager for credential management, provides connection pooling, and can improve failover handling by reducing application-side reconnection logic. For Lambda, the best practice is to point the function to the proxy endpoint and keep application connections open (the proxy manages backend pooling). This aligns with AWS Well-Architected Reliability and Performance Efficiency pillars by smoothing burst load and protecting the database.

Common Misconceptions: Caching (ElastiCache) can help read-heavy workloads, but the question states transactional reads/writes and shows connection saturation; caching won’t fix write pressure or connection storms. Reusing connections in Lambda (initializing outside the handler) helps, but it is not sufficient under extreme concurrency because each concurrent execution environment can still create its own connection(s), quickly exhausting DB limits. Increasing Lambda memory may speed compute, but it does not address DB connection limits and can even increase parallelism against the DB.

Exam Tips: When you see Lambda/API Gateway + RDS with sudden concurrency spikes, high DB connections near the limit, and elevated CPU with intermittent 502s/timeouts, think “RDS Proxy/connection pooling.” Choose RDS Proxy over “close connections” guidance; in serverless, closing per request often worsens churn. Use caching only when the bottleneck is repeated reads and you can tolerate eventual consistency.
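
As an illustration of the recommended pattern, the sketch below shows a Lambda handler that connects to a hypothetical RDS Proxy endpoint (with IAM database authentication) instead of the database directly. The environment variables, table name, and credentials are assumptions for the example, not details from the question.

```python
# Minimal sketch: Lambda -> RDS Proxy -> RDS for PostgreSQL, using an IAM auth token.
import os
import boto3
import psycopg2  # packaged as a layer or deployment-package dependency in this sketch

PROXY_HOST = os.environ["DB_PROXY_ENDPOINT"]  # e.g. enrollment-proxy.proxy-xxxx.<region>.rds.amazonaws.com
DB_USER = os.environ.get("DB_USER", "app_user")
DB_NAME = os.environ.get("DB_NAME", "enrollment")

rds = boto3.client("rds")

def _connect():
    # RDS Proxy accepts an IAM auth token in place of a password when IAM auth is enabled.
    token = rds.generate_db_auth_token(DBHostname=PROXY_HOST, Port=5432, DBUsername=DB_USER)
    return psycopg2.connect(
        host=PROXY_HOST, port=5432, user=DB_USER, password=token,
        dbname=DB_NAME, sslmode="require", connect_timeout=5,
    )

# Created once per warm execution environment; the proxy multiplexes many such client
# connections over a much smaller pool of backend PostgreSQL connections.
conn = _connect()

def handler(event, context):
    global conn
    if conn.closed:
        conn = _connect()
    with conn, conn.cursor() as cur:  # commits the transaction on success
        cur.execute(
            "INSERT INTO enrollments (student_id, course_id) VALUES (%s, %s)",
            (event["studentId"], event["courseId"]),
        )
    return {"statusCode": 201}
```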

Question 3

A genomics research consortium provides a regulated data-annotation service to regional laboratories across North America and Europe. The service runs entirely on AWS across more than 30 member accounts that are centrally managed in a single AWS Organizations organization; workloads are deployed in all enabled Regions, and new accounts are added monthly. For audit and regulatory requirements, every API call to AWS resources across all current and future accounts and Regions must be recorded, tracked for changes, and stored durably and securely for 7 years with encryption at rest; logs must be immutable (recoverable from accidental deletions), and the solution must minimize ongoing operational effort without introducing third-party tooling. Which solution meets these requirements with the least operational overhead?

A trail in the management account that writes to an S3 bucket is not necessarily an organization trail. Without enabling an organization-level trail, CloudTrail will not automatically capture events from all member accounts. Even with multi-Region enabled and S3 encryption/MFA Delete, it fails the requirement to record every API call across all current and future accounts with minimal ongoing effort.

Creating a trail and bucket in each member account can capture API activity per account and can be configured for all Regions with encryption and MFA Delete. However, it has high operational overhead (30+ accounts, new accounts monthly, many Regions) and increases the chance of configuration drift. This contradicts the requirement to minimize ongoing operational effort.

An organization-level CloudTrail trail in the management account is the most scalable and lowest-operations way to capture API activity across all existing and future accounts in an AWS Organizations environment. Configuring the trail for all Regions ensures coverage across enabled Regions without requiring per-account trail administration. Sending logs to a centralized S3 bucket with encryption at rest satisfies the durability and security requirements for long-term retention. Enabling S3 Versioning and MFA Delete helps protect the logs from accidental deletion by preserving prior versions and requiring MFA for permanent version deletion.

Adding SNS notifications to an external management system introduces extra integration and operational burden and is not required to meet the core logging and retention requirements. Also, a trail in the management account alone does not guarantee coverage of all member accounts unless it is configured as an organization trail. This option is both incomplete and higher-ops than necessary.

Question Analysis

Core Concept: This question is about implementing centralized, organization-wide AWS API activity logging with AWS CloudTrail across all existing and newly added AWS accounts, storing logs securely for long-term retention with encryption and protection against accidental deletion, while minimizing operational overhead.

Why correct: An AWS CloudTrail organization trail created in the management account is specifically designed to apply automatically across all member accounts in an AWS Organizations organization. Configuring the trail as multi-Region provides logging across all enabled Regions, and central delivery to an Amazon S3 bucket gives durable, low-operations storage. Enabling S3 encryption at rest satisfies the encryption requirement, while S3 Versioning plus MFA Delete helps protect against accidental deletions by allowing recovery of prior object versions. This is the most operationally efficient native AWS solution among the options.

Key features:
- CloudTrail organization trail automatically includes current and future member accounts.
- Multi-Region trail records events from all enabled Regions.
- Centralized S3 storage supports long-term retention and simplified audit access.
- S3 Versioning and MFA Delete provide strong protection against accidental deletion and overwrite.
- S3 server-side encryption supports compliance requirements for encryption at rest.

Common misconceptions: A standard trail in the management account does not automatically capture events from all member accounts; it must be an organization trail. Creating separate trails in every account can work technically, but it creates unnecessary operational burden and configuration drift risk. Also, Versioning plus MFA Delete improves recoverability, but if a question explicitly required strict WORM immutability, S3 Object Lock would be the stronger feature; it is simply not offered in the answer choices here.

Exam tips: When a question mentions AWS Organizations with current and future accounts, prefer organization-level services such as CloudTrail organization trails. For centralized audit logging, look for a dedicated S3 bucket with encryption and retention controls. Be careful not to assume that a generic trail in one account automatically covers the organization, and distinguish between recoverability controls like Versioning/MFA Delete and true immutable retention controls like Object Lock.
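
A minimal boto3 sketch of the central pieces follows, assuming the log bucket, its CloudTrail delivery bucket policy, and CloudTrail trusted access in AWS Organizations already exist. The bucket name is hypothetical, and MFA Delete itself can only be enabled by the bucket owner's root credentials with an MFA device, so it is not scripted here.

```python
# Minimal sketch: multi-Region organization trail delivering to an encrypted, versioned bucket.
import boto3

s3 = boto3.client("s3")
cloudtrail = boto3.client("cloudtrail")

BUCKET = "org-cloudtrail-archive-111122223333"  # placeholder; bucket policy for CloudTrail assumed

# Versioning preserves prior object versions so accidentally deleted logs remain recoverable.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Default encryption at rest for every delivered log object (SSE-S3 shown for simplicity).
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# The organization trail (created in the management account) covers all current and future
# member accounts; the multi-Region setting covers all enabled Regions.
trail = cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName=BUCKET,
    IsMultiRegionTrail=True,
    IsOrganizationTrail=True,
    EnableLogFileValidation=True,
)
cloudtrail.start_logging(Name=trail["TrailARN"])
```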

Question 4
(Select 2)

An education technology company is preparing to host a week-long global virtual hackathon this quarter. The platform uses an Application Load Balancer in front of web and application tiers on Amazon EC2 and an Amazon Aurora PostgreSQL database; about 70% of the payload is static assets and approximately 95% of requests are read-only. The company expects a 15x surge in traffic from North America, Europe, and APAC during the event and wants to minimize p95 response times for a worldwide audience. Which combination of steps should a solutions architect take to reduce system response times globally? (Choose two.)

Partially relevant but incomplete/misaligned. Aurora logical cross-Region replication (e.g., PostgreSQL logical replication) is more complex and typically higher-latency than Aurora Global Database physical replication. Replacing the web tier with S3 only works for fully static sites; most hackathon platforms need dynamic web/app logic. S3 cross-Region replication helps DR and regional availability but does not provide edge caching like CloudFront, which is key for global p95 latency.

Auto Scaling and multi-Region web/app tiers can help, but adding AWS Direct Connect is not an effective lever for a global internet-facing audience; Direct Connect benefits private connectivity from corporate/on-prem networks to AWS. It won’t reduce latency for end users on the public internet. Also, without a global routing/caching layer (Route 53 LBR/CloudFront) and a multi-Region database read strategy, p95 latency and database bottlenecks remain.

Incorrect for the goal. Migrating from Aurora PostgreSQL to RDS for PostgreSQL generally removes Aurora-specific performance and global features and does not inherently reduce latency. Placing all tiers in private subnets is a security posture choice, not a performance optimization, and it can complicate ingress/egress without improving global response times. This option ignores the need for edge caching, global routing, and cross-Region read scaling.

Correct. Aurora Global Database is purpose-built for low-latency cross-Region replication and fast local reads in secondary Regions, ideal for 95% read-only workloads during a global event. Deploying web/app tiers in multiple Regions reduces RTT for dynamic requests. Storing static assets in S3 with cross-Region replication supports regional origins and resilience (often paired with CloudFront), reducing dependence on a single Region and improving availability and performance.

Correct. CloudFront directly targets the 70% static payload by caching at edge locations, reducing latency and origin load, which improves p95 globally. Route 53 latency-based routing sends users to the closest healthy Region, minimizing network distance for dynamic requests. Ensuring web/app tiers are in Auto Scaling groups provides elasticity for the 15x surge. This combination is a classic exam pattern for global performance and scalability.

Question Analysis

Core concept: This question tests global performance optimization for a read-heavy, static-heavy web application: using edge caching (Amazon CloudFront), global traffic steering (Route 53 latency-based routing), multi-Region application deployment, and low-latency cross-Region database reads (Aurora Global Database).

Why the answer is correct: With a 15x global surge and a worldwide audience, the biggest p95 latency drivers are network distance and origin load. Option E reduces latency immediately by caching ~70% static assets at edge locations via CloudFront and by sending users to the closest healthy Region using Route 53 latency-based routing. Auto Scaling groups handle the surge for dynamic compute. Option D addresses the remaining bottleneck: database reads (95% read-only). Aurora Global Database provides physical replication with low-latency cross-Region read capability, allowing each Region’s application tier to read locally (or near-locally) instead of traversing oceans to a single writer Region. Storing static assets in S3 and replicating them cross-Region supports multi-Region origins and resilience, while multi-Region web/app tiers reduce RTT for dynamic requests.

Key AWS features / best practices:
- CloudFront: cache static content, reduce origin load, improve p95 globally; can also cache some dynamic content with appropriate cache keys.
- Route 53 latency-based routing: directs users to the Region with lowest latency; typically combined with health checks.
- Aurora Global Database: one primary writer Region, up to 5 secondary Regions with fast physical replication; use reader endpoints in secondary Regions for read scaling.
- Multi-Region active/active (for web/app) with Auto Scaling: absorbs spikes and reduces user-to-origin latency.

Common misconceptions:
- “Just add Direct Connect” (Option B) doesn’t help internet users; it’s for private connectivity from on-premises.
- “Cross-Region replication for S3 alone” (Option A) helps durability/DR but doesn’t provide edge caching; also “replace the web tier with S3” only works for static websites and doesn’t address dynamic app/API needs.
- “Move to RDS PostgreSQL/private subnets” (Option C) doesn’t improve global latency and can reduce flexibility.

Exam tips: When you see global users + static-heavy content, think CloudFront first. When you see read-heavy databases + multi-Region needs, think Aurora Global Database (or read replicas) and route users to the nearest Region with Route 53 latency-based routing. Combine caching + multi-Region compute + cross-Region read strategy for best p95 improvements.
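
The sketch below illustrates the Route 53 latency-based routing piece with boto3, creating one latency alias record per Region in front of each regional ALB. The hosted zone ID, record name, ALB DNS names, and ALB canonical hosted zone IDs are hypothetical placeholders.

```python
# Minimal sketch: latency-based routing records, one per Region, in front of regional ALBs.
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0123456789EXAMPLE"          # placeholder public hosted zone
RECORD_NAME = "app.hackathon.example.com"      # placeholder hostname

# (ALB DNS name, ALB canonical hosted zone ID) per Region -- placeholders.
regional_albs = {
    "us-east-1": ("hackathon-use1.us-east-1.elb.amazonaws.com", "Z35SXDOTRQ7X7K"),
    "eu-west-1": ("hackathon-euw1.eu-west-1.elb.amazonaws.com", "Z32O12XQLNTSW2"),
    "ap-southeast-1": ("hackathon-apse1.ap-southeast-1.elb.amazonaws.com", "Z1LMS91P8CMLE5"),
}

changes = []
for region, (dns_name, alb_zone_id) in regional_albs.items():
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": RECORD_NAME,
            "Type": "A",
            "SetIdentifier": f"lbr-{region}",
            "Region": region,                       # latency-based routing key
            "AliasTarget": {
                "HostedZoneId": alb_zone_id,
                "DNSName": dns_name,
                "EvaluateTargetHealth": True,       # skip Regions whose targets are unhealthy
            },
        },
    })

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Comment": "Latency-based routing for the global event", "Changes": changes},
)
```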

Question 5
(Select 2)

A fintech company is moving a legacy reporting platform from a third-party hosting facility to AWS. The platform consists of a single Linux application server VM (Nginx and a Java service) and a PostgreSQL 12 database on a separate VM, and each VM uses multiple attached volumes totaling 420 TB. The company has a dedicated 10 Gbps AWS Direct Connect link to the nearest AWS Region that is currently unused by other workloads. The business requires the migration cutover downtime to be under 30 minutes and wants the database to run on Amazon RDS for PostgreSQL after migration. Which combination of steps should a solutions architect take to migrate the workload with the least amount of downtime? (Choose two.)

Incorrect because AWS Server Migration Service rehosts a VM onto Amazon EC2 rather than migrating the database into Amazon RDS for PostgreSQL. The requirement explicitly states that the database must run on RDS after migration, so lifting and shifting the database VM does not satisfy the target-state architecture. Even if SMS could reduce downtime for a VM move, it is the wrong service for a managed PostgreSQL destination.

Incorrect because VM Import/Export is primarily a one-time image import/export mechanism and does not provide continuous replication or an efficient low-downtime migration workflow for an active production application server. It also does not address the challenge of repeatedly synchronizing changes across a 420 TB footprint. For a strict cutover target, a one-time import process is generally too rigid and operationally disruptive.

Correct because the workload includes VMs with multiple attached volumes totaling 420 TB, which is an enormous amount of data to move over the network as an initial seed. AWS Snowball Edge Storage Optimized is specifically designed for large-scale bulk data transfer and is the most practical option among the choices for moving that volume efficiently into AWS. After the bulk transfer is completed, the remaining delta can be synchronized during cutover, helping reduce the final outage window.

Incorrect because although AWS SMS can replicate an application VM, it is not the strongest answer in a scenario dominated by extremely large attached storage volumes totaling 420 TB. The initial data movement is the primary bottleneck, and Snowball Edge is better suited than SMS for seeding such a large amount of data into AWS. SMS also does not solve the database modernization requirement, so pairing it with DMS is less optimal than using Snowball for bulk transfer and DMS for the database.

Correct because the database must run on Amazon RDS for PostgreSQL after migration, and AWS DMS is the service designed to migrate from self-managed PostgreSQL into RDS. DMS supports full load plus change data capture, so the source database can continue serving production traffic while changes are continuously replicated to the target. At cutover, the team can stop writes briefly, allow replication lag to reach zero, and switch the application to the RDS endpoint within the required downtime window.

Question Analysis

Core concept: This question tests choosing migration mechanisms that balance extremely large data volume, a strict cutover downtime target, and a requirement to modernize the database onto Amazon RDS for PostgreSQL. The database must be migrated with continuous replication into RDS, while the application server’s very large attached-volume footprint makes bulk seeding the dominant challenge.

Why correct: Amazon RDS for PostgreSQL as the target strongly points to AWS Database Migration Service (AWS DMS), because DMS supports full load plus ongoing change data capture (CDC) from self-managed PostgreSQL into RDS. For the application server, the attached storage totals 420 TB, which is far beyond what is practical to move quickly with one-time network import methods; AWS Snowball Edge Storage Optimized is designed for bulk offline transfer of very large datasets. Using Snowball Edge to seed the large VM image/data and DMS to keep the database synchronized provides the least downtime among the listed options.

Key features: AWS DMS can replicate ongoing PostgreSQL changes into Amazon RDS, allowing the source database to remain online until final cutover. Snowball Edge Storage Optimized devices are intended for petabyte-scale data movement and can dramatically reduce the time required for initial bulk transfer compared with network-only approaches. Direct Connect can then be used for final synchronization and cutover traffic after the bulk seed is complete.

Common misconceptions: AWS SMS is a VM rehosting service, but it does not migrate a database into Amazon RDS and is not the best answer when the exam emphasizes modernizing the database target. VM Import/Export is a one-time import mechanism and does not support the incremental replication pattern needed for low-downtime cutovers. A common trap is to focus only on the downtime requirement and ignore the massive 420 TB footprint, which makes bulk seeding a critical design factor.

Exam tips: When the target is Amazon RDS and downtime must be minimized, look for AWS DMS with CDC. When the dataset is hundreds of terabytes, look for Snow Family devices to handle the initial bulk transfer. Eliminate options that merely rehost a database VM when the requirement explicitly says the database must end up on RDS.
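
As a sketch of the DMS side, the snippet below creates and starts a full-load-plus-CDC replication task so the source PostgreSQL database keeps serving traffic until cutover. The endpoint and replication instance ARNs, schema name, and task name are hypothetical and assumed to already exist.

```python
# Minimal sketch: a DMS full-load-and-cdc task between pre-created endpoints (placeholder ARNs).
import json
import boto3

dms = boto3.client("dms")

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-reporting-schema",
        "object-locator": {"schema-name": "reporting", "table-name": "%"},
        "rule-action": "include",
    }]
}

task = dms.create_replication_task(
    ReplicationTaskIdentifier="reporting-db-cutover",
    SourceEndpointArn="arn:aws:dms:eu-west-1:111122223333:endpoint:SRC-PG-ONPREM",
    TargetEndpointArn="arn:aws:dms:eu-west-1:111122223333:endpoint:TGT-RDS-PG",
    ReplicationInstanceArn="arn:aws:dms:eu-west-1:111122223333:rep:REPL-INSTANCE",
    MigrationType="full-load-and-cdc",   # full load first, then ongoing change data capture
    TableMappings=json.dumps(table_mappings),
)

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)
# At cutover: stop writes on the source, wait for replication lag to reach zero,
# then repoint the application at the RDS endpoint.
```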


Question 6

Helios Media operates an on-premises network (10.50.0.0/16) and a VPC named VPC X (172.31.0.0/16) in the Helios Media AWS account. The on-premises network connects to VPC X through an AWS Site-to-Site VPN terminating on a virtual private gateway, and on-premises servers can successfully reach resources in VPC X. Helios Media recently acquired Orion Labs, which runs a separate AWS account with a VPC named VPC Y (10.90.0.0/16); there is no IP address overlap among the on-premises network, VPC X, and VPC Y. The companies have established a VPC peering connection between VPC X and VPC Y. Helios Media now wants on-premises servers to access workloads in VPC Y and has already configured network ACLs and security groups to allow the traffic. Which solution meets this requirement with the least operational effort?

Correct. Transit Gateway is the AWS service designed to provide transitive routing between multiple VPCs and on-premises networks. In this scenario, both VPC X and VPC Y can attach to the TGW, and on-premises connectivity should be provided through a Site-to-Site VPN attachment to the TGW so traffic can reach both VPCs through a centralized routing domain. This avoids the unsupported pattern of trying to send VPN-originated traffic across a VPC peering connection and minimizes ongoing operational complexity compared with managing multiple separate VPNs.

Incorrect. While TGW is the right core service, creating a new Site-to-Site VPN specifically to VPC Y adds unnecessary operational overhead (additional tunnels, routing, monitoring). Also, “authorization rules” refers to AWS Client VPN, which is not part of the stated architecture/requirement. Least effort is to use TGW with existing connectivity patterns, not add extra VPNs.

Incorrect. Updating route tables and enabling BGP propagation cannot make VPC peering transitive. Even with correct routes in the VPC route tables and VPN routes, AWS will not forward traffic from a VPN attachment in VPC X across the peering connection to VPC Y. This option is a common trap: routing configuration cannot bypass peering’s non-transitive design.

Incorrect. A virtual private gateway (VGW) can be attached to only one VPC at a time; it cannot attach to both VPC X and VPC Y. You also cannot “split” the two VPN tunnels to different VPCs in the manner described. To share on-prem connectivity across multiple VPCs, use Transit Gateway (or separate VPNs per VPC, which is more effort).

Question Analysis

Core Concept: This question tests the non-transitive nature of VPC peering and the role of AWS Transit Gateway (TGW) as a centralized, transitive routing hub for multiple VPCs and on-premises networks.

Why the Answer is Correct: On-premises currently reaches VPC X through a Site-to-Site VPN that terminates on a virtual private gateway (VGW). Although VPC X is peered with VPC Y, VPC peering does not allow transitive routing, so traffic from on-premises cannot traverse VPC X and then continue over the peering connection into VPC Y. The supported low-operations design is to use a Transit Gateway, attach both VPCs to it, and use a Site-to-Site VPN attachment to the TGW for on-premises connectivity. This creates a single transitive routing domain and avoids managing multiple point-to-point connections.

Key AWS Features:
- AWS Transit Gateway provides hub-and-spoke connectivity with transitive routing between VPCs and on-premises networks.
- TGW route tables centralize route management and can control propagation and association per attachment.
- Cross-account VPC attachments are supported, which fits the separate AWS accounts in the scenario.
- A VPN that currently terminates on a VGW would need to be recreated or migrated as a TGW VPN attachment rather than directly reusing the VGW attachment.

Common Misconceptions:
- Adding routes or enabling BGP propagation does not overcome the non-transitive limitation of VPC peering.
- A VGW cannot be attached to multiple VPCs.
- Creating a separate VPN to each VPC can work, but it increases operational overhead compared with a TGW-based design.

Exam Tips: When a question describes on-premises connectivity to one VPC and a need to reach another VPC through peering, remember that VPC peering is non-transitive. For multi-VPC and on-premises connectivity with centralized routing and least operational effort, Transit Gateway is usually the preferred answer.
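
A minimal boto3 sketch of the Transit Gateway hub follows: create the TGW, attach both VPCs, and terminate the Site-to-Site VPN on the TGW instead of the VGW. All resource IDs are hypothetical placeholders, and in practice the VPC Y attachment would be created (or shared through AWS RAM and accepted) from the Orion Labs account.

```python
# Minimal sketch: TGW hub with two VPC attachments and a VPN attachment (placeholder IDs).
import boto3

ec2 = boto3.client("ec2")

tgw = ec2.create_transit_gateway(
    Description="Helios/Orion hub",
    Options={
        "DefaultRouteTableAssociation": "enable",
        "DefaultRouteTablePropagation": "enable",
    },
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach each VPC through subnets in the AZs that need reachability.
ec2.create_transit_gateway_vpc_attachment(  # VPC X (172.31.0.0/16)
    TransitGatewayId=tgw_id, VpcId="vpc-0aaaa1111", SubnetIds=["subnet-0x1", "subnet-0x2"]
)
ec2.create_transit_gateway_vpc_attachment(  # VPC Y (10.90.0.0/16), cross-account in practice
    TransitGatewayId=tgw_id, VpcId="vpc-0bbbb2222", SubnetIds=["subnet-0y1", "subnet-0y2"]
)

# Terminate the Site-to-Site VPN on the TGW so on-premises routes can reach both VPCs.
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId="cgw-0cccc3333",
    TransitGatewayId=tgw_id,
    Options={"StaticRoutesOnly": False},   # dynamic (BGP) routing
)
```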

Question 7

A fintech startup operates a mobile wallet API on an Amazon ECS cluster running on EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). Every Friday at 01:00 UTC, there is a surge of failed login attempts that drives the authentication microservice to 90% CPU. The failed attempts originate from about 600 source IP addresses that rotate every 7 days, and many of those IPs exceed 300 login requests within a 5-minute window. A solutions architect must prevent these failed logins from overwhelming the authentication service with the highest operational efficiency. What should be done?

Incorrect. AWS Firewall Manager can centrally manage security group policies, but security groups do not provide HTTP request-rate limiting and are not well-suited to blocking hundreds of rotating IPs based on application-layer behavior. You would need frequent updates as IPs change weekly, which is operationally heavy. Additionally, security groups are stateful L3/L4 controls and cannot express “300 requests in 5 minutes” logic.

Correct. AWS WAF rate-based rules are designed to detect and mitigate abusive clients by tracking requests per source IP over a 5-minute window and automatically applying an action such as Block. Associating the WAF web ACL with the ALB stops excessive login attempts before they reach ECS, reducing CPU pressure on the authentication service. This is highly operationally efficient because it adapts even as attacker IPs rotate.

Incorrect. An allow-only security group policy (allowlisting CIDRs) is typically infeasible for a public mobile wallet API because legitimate users come from many dynamic ISP/mobile IP ranges. It would also create significant operational burden and business risk (blocking real customers). This does not address the core requirement of rate limiting abusive login attempts at the application layer.

Incorrect. An AWS WAF IP set match rule can block known bad IPs, but the scenario states the source IPs rotate every 7 days. That would require continuously updating the IP set (manual or automated), which is less operationally efficient than a rate-based rule. IP sets are best when the malicious IPs are stable and you want deterministic blocking, not behavior-based throttling.

Question Analysis

Core Concept: This question tests edge protection for HTTP(S) workloads behind an Application Load Balancer using AWS WAF, specifically rate-based rules to mitigate bursts of abusive traffic (credential stuffing/brute-force) without constantly managing IP allow/deny lists.

Why the Answer is Correct: The attack pattern is characterized by many source IPs (about 600) that rotate weekly, with individual IPs exceeding 300 login requests in a 5-minute window and driving the authentication service to high CPU. An AWS WAF web ACL associated with the ALB can block or challenge requests before they reach ECS, reducing load on the authentication microservice. A rate-based rule is purpose-built for this: it automatically tracks request rates per originating IP and takes action (Block) when the threshold is exceeded, which directly matches “>300 requests within 5 minutes” and avoids operational overhead as IPs rotate.

Key AWS Features: AWS WAF rate-based rules aggregate requests over a 5-minute sliding window and can be configured with an action (Block) to stop abusive clients. The web ACL can be attached to an ALB, providing centralized, managed L7 filtering. You can further scope the rule down to the login URI (e.g., /auth/login) using scope-down statements to avoid impacting other endpoints, and use WAF logging to CloudWatch Logs/S3/Kinesis Data Firehose for investigation. This aligns with Well-Architected Security (protect at the edge, least privilege) and Reliability (prevent overload) principles.

Common Misconceptions: Security groups and Firewall Manager operate at L3/L4 and are not designed for HTTP request-rate limiting. Even if you could list IPs, the weekly rotation makes static deny lists operationally inefficient. IP set blocking in WAF also requires continuously updating the set, which is explicitly what the scenario tries to avoid.

Exam Tips: When you see ALB + rotating attacker IPs + request bursts (e.g., N requests per 5 minutes), think AWS WAF rate-based rules. Use IP sets when the bad IPs are stable and known; use rate-based rules when the pattern is volumetric/behavioral and IPs change frequently. Prefer managed, L7 controls in front of compute to protect downstream services and reduce scaling pressure.
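
To illustrate, the sketch below uses the wafv2 API to create a web ACL with a rate-based rule scoped down to a hypothetical /auth/login path and associates it with the ALB. The names, Region, and ALB ARN are placeholders, not values from the question.

```python
# Minimal sketch: regional web ACL with a rate-based Block rule scoped to the login path.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")  # placeholder Region

acl = wafv2.create_web_acl(
    Name="auth-protection",
    Scope="REGIONAL",                       # REGIONAL scope is required for ALB associations
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "authProtection",
    },
    Rules=[{
        "Name": "login-rate-limit",
        "Priority": 0,
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "loginRateLimit",
        },
        "Statement": {
            "RateBasedStatement": {
                "Limit": 300,                # max requests per source IP per 5-minute window
                "AggregateKeyType": "IP",
                "ScopeDownStatement": {      # only count requests to the login endpoint
                    "ByteMatchStatement": {
                        "SearchString": b"/auth/login",
                        "FieldToMatch": {"UriPath": {}},
                        "TextTransformations": [{"Priority": 0, "Type": "LOWERCASE"}],
                        "PositionalConstraint": "STARTS_WITH",
                    }
                },
            }
        },
    }],
)

wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/wallet-api/abc123",
)
```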

Question 8

A fintech company runs a payment webhook receiver as an AWS Lambda function behind a public Amazon API Gateway HTTP API in ap-southeast-1; calls are handled asynchronously (the API returns HTTP 202 within 200 ms while the Lambda continues processing). The business requires automatic Regional failover within 5 minutes during a full Region outage, with no client SDK changes and a single public hostname. Normal operations must keep end-to-end latency under 300 ms, and the solution should support failover to ap-southeast-2 while preserving existing IAM authorization. Which solution meets these requirements?

Incorrect. API Gateway HTTP API in ap-southeast-2 cannot use a native Lambda proxy integration to invoke a Lambda function that resides in ap-southeast-1; Lambda is a Regional service and API Gateway integrations are Region-scoped. Even if another integration type were used, a full Region outage in ap-southeast-1 would still break processing. Route 53 failover between two API endpoints is valid, but the backend must also be deployed in the failover Region.

Incorrect. Moving ingestion to SQS in ap-southeast-1 can improve decoupling and async processing, but it does not satisfy Regional failover to ap-southeast-2 during a full ap-southeast-1 outage. The queue, API, and Lambda polling would all be unavailable. Cross-Region SQS replication is not native, and adding it would still require a multi-Region API front end and queue strategy, which this option does not provide.

Incorrect. This introduces AWS Global Accelerator and an ALB to distribute traffic across API Gateway endpoints, but ALB is not used to route to API Gateway as targets in a typical supported architecture. Global Accelerator is best for accelerating and failing over ALB/NLB/EC2/EIP endpoints, not for directly fronting API Gateway regional endpoints. It also adds complexity and potential latency without addressing the simplest supported pattern: Route 53 + API Gateway custom domains.

Correct. Deploying API Gateway and Lambda in both Regions removes the Regional dependency and enables true Regional failover. Using API Gateway custom domain names in each Region plus Route 53 failover routing and health checks provides a single public hostname with automatic failover within minutes and no client changes. Normal operations keep low latency because traffic goes directly to the primary Region endpoint, and IAM authorization behavior is preserved by keeping API Gateway IAM auth in both Regions.

Question Analysis

Core Concept: This question tests multi-Region resiliency for an API Gateway + Lambda serverless endpoint with a single stable hostname, fast automatic failover, and preserved authorization behavior. The key pattern is active/passive (or active/active) multi-Region API endpoints fronted by DNS failover using Amazon Route 53 and API Gateway custom domain names.

Why the Answer is Correct: Option D deploys the full stack (API Gateway HTTP API and Lambda) in both ap-southeast-1 and ap-southeast-2, then uses an API Gateway custom domain name in each Region and Route 53 failover routing with health checks to move the single public hostname to the healthy Region within minutes. This meets the “no client SDK changes” and “single public hostname” requirements because clients keep calling the same domain. It also meets the 5-minute RTO requirement because Route 53 health checks and failover policies can shift traffic quickly during a Regional outage. Latency stays low in normal operations because clients resolve to the primary Region endpoint directly (no extra proxy hop).

Key AWS Features:
- API Gateway custom domain names mapped to Regional HTTP APIs in each Region.
- Route 53 failover routing policy (PRIMARY/SECONDARY) with health checks against a path that reflects API availability.
- Separate Lambda deployments per Region (required because Lambda is Regional).
- Preserving IAM authorization: API Gateway IAM auth (SigV4) remains the same model; clients still sign requests for execute-api. If using a custom domain, ensure the API is configured to accept IAM auth and clients sign with the correct service/region expectations; in practice, many webhook senders won’t use IAM, but the requirement explicitly asks to preserve it, which is best achieved by keeping API Gateway + IAM in each Region rather than introducing non-API-Gateway front doors.

Common Misconceptions:
- “Just point the secondary API to the primary Lambda” fails because Lambda cannot be invoked cross-Region via a native API Gateway Lambda proxy integration.
- “Use Global Accelerator for everything” is attractive for fast failover, but it does not front API Gateway endpoints directly as origins the way it does ALB/NLB, and adding ALB in front of API Gateway is not a standard or necessary pattern.

Exam Tips: For Regional outages with a single DNS name, think Route 53 failover (or latency/weighted) + health checks. For API Gateway multi-Region, you typically replicate the API and backend per Region and use custom domains to keep the client hostname stable. Avoid designs that rely on cross-Region Lambda integrations or unnecessary extra hops that increase latency.
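
The snippet below sketches only the DNS failover piece with boto3: a PRIMARY record aliased to the ap-southeast-1 API Gateway custom domain (gated by a health check) and a SECONDARY record aliased to ap-southeast-2. The hostname, hosted zone ID, health check ID, and the regional target domain names and zone IDs are hypothetical placeholders that would come from the custom-domain and health-check setup.

```python
# Minimal sketch: Route 53 active/passive failover records over two regional API endpoints.
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0123456789EXAMPLE"              # placeholder public hosted zone
RECORD_NAME = "webhooks.payments.example.com"      # placeholder single public hostname

records = [
    # Primary: ap-southeast-1 regional custom domain, gated by a health check.
    {"Failover": "PRIMARY", "SetIdentifier": "apse1",
     "Target": ("d-abc123.execute-api.ap-southeast-1.amazonaws.com", "ZL327KTPIQFUL"),
     "HealthCheckId": "11111111-2222-3333-4444-555555555555"},
    # Secondary: ap-southeast-2 regional custom domain, used only when the primary fails.
    {"Failover": "SECONDARY", "SetIdentifier": "apse2",
     "Target": ("d-def456.execute-api.ap-southeast-2.amazonaws.com", "Z2RPCDW04V8134")},
]

changes = []
for r in records:
    rrset = {
        "Name": RECORD_NAME,
        "Type": "A",
        "SetIdentifier": r["SetIdentifier"],
        "Failover": r["Failover"],
        "AliasTarget": {
            "DNSName": r["Target"][0],
            "HostedZoneId": r["Target"][1],
            "EvaluateTargetHealth": True,
        },
    }
    if "HealthCheckId" in r:
        rrset["HealthCheckId"] = r["HealthCheckId"]
    changes.append({"Action": "UPSERT", "ResourceRecordSet": rrset})

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Comment": "Active/passive failover for the webhook API", "Changes": changes},
)
```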

Question 9

A regional food-delivery startup runs a stateful Node.js web application and a MySQL 5.7 database on a single rack server in a co-location facility. Marketing projects that peak concurrent users will grow from 400 to 4,500 within 30 days, and the CTO requires 99.99% service availability across at least two Availability Zones, session continuity for authenticated users, database RTO under 60 seconds, and elimination of single points of failure when migrating to a single AWS Region with minimal code changes. Which solution should provide the HIGHEST level of reliability?

RDS for MySQL Multi-AZ improves database availability, and EC2 Auto Scaling behind an ALB is a solid multi-AZ web tier. However, storing sessions in Amazon Neptune is an architectural mismatch: Neptune is a graph database, not a session store, and it adds unnecessary complexity and operational risk. This option does not represent the highest reliability design for session continuity and fast recovery.

Aurora MySQL Multi-AZ plus EC2 Auto Scaling across AZs behind an ALB provides a highly resilient, scalable web tier with minimal code change from MySQL 5.7. Using ElastiCache for Redis replication group with Multi-AZ automatic failover is the standard pattern for highly available session storage, maintaining session continuity during instance or AZ failures. Overall, it best meets 99.99% and <60s RTO goals.

DocumentDB (MongoDB-compatible) is not MySQL-compatible, so it violates the “minimal code changes” requirement and introduces migration risk. A Network Load Balancer is not ideal for HTTP session-based web apps compared to ALB features (HTTP routing, better L7 health checks). Kinesis Data Firehose is a streaming delivery service, not a session store, so session continuity and reliability requirements are not met.

RDS for MariaDB Multi-AZ changes the database engine from MySQL 5.7, potentially requiring more compatibility testing and code changes than Aurora MySQL. The bigger issue is session storage: ElastiCache for Memcached does not support Multi-AZ automatic failover and is not durable, so authenticated session continuity during failures is not assured. This makes achieving 99.99% availability and session continuity harder than option B.

Question Analysis

Core Concept: This question tests designing a highly available, multi-AZ web tier with session continuity and a relational database that meets a very low RTO. It focuses on removing single points of failure and choosing managed services that provide fast failover.

Why the Answer is Correct: Option B provides the highest reliability end-to-end with minimal application change. The Node.js app becomes stateless by externalizing sessions to ElastiCache for Redis (Multi-AZ with automatic failover), enabling horizontal scaling behind an Application Load Balancer (ALB) across multiple AZs. For the database, Aurora MySQL (Multi-AZ) is purpose-built for high availability: storage is replicated across multiple AZs and failover is typically faster than standard RDS Multi-AZ, helping meet an RTO under 60 seconds. This architecture eliminates single points of failure at the compute, session, and database layers.

Key AWS Features:
- Aurora MySQL compatibility minimizes code changes from MySQL 5.7.
- Aurora Multi-AZ with a writer and reader in different AZs supports rapid failover and read scaling.
- EC2 Auto Scaling across multiple AZs + ALB health checks provides resilient, elastic web tier scaling from 400 to 4,500 concurrent users.
- ElastiCache for Redis replication group with Multi-AZ automatic failover preserves session continuity even during node/AZ failure.
- ALB supports HTTP/HTTPS, path-based routing, and integrates with Auto Scaling for reliable traffic distribution.

Common Misconceptions:
- “Any Multi-AZ RDS is enough for <60s RTO”: RDS Multi-AZ is highly available, but Aurora is generally designed for faster failover and higher resilience due to its distributed storage architecture.
- “Memcached is fine for sessions”: Memcached is not durable and has no Multi-AZ automatic failover, so session continuity and availability targets are harder.
- “Use a graph DB for sessions”: Neptune is not intended for session storage and adds complexity without improving reliability.

Exam Tips: For 99.99% across at least two AZs, ensure every stateful component is either Multi-AZ managed (DB, cache) or made stateless (app tier). For session continuity, prefer Redis (replication + failover) over Memcached. For tight RTO requirements, Aurora is commonly the best fit among MySQL-compatible managed databases.
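
As a sketch of the session-store piece, the boto3 call below creates a Redis replication group with one replica and Multi-AZ automatic failover, which is what keeps sessions available if a node or AZ fails. The group name, node type, subnet group, and security group are hypothetical placeholders.

```python
# Minimal sketch: a two-node Redis replication group with Multi-AZ automatic failover.
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="web-sessions",
    ReplicationGroupDescription="Session store for the delivery web app",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumCacheClusters=2,              # primary + one replica in a second AZ
    AutomaticFailoverEnabled=True,   # promote the replica if the primary or its AZ fails
    MultiAZEnabled=True,
    CacheSubnetGroupName="app-private-subnets",
    SecurityGroupIds=["sg-0abc1234"],
    AtRestEncryptionEnabled=True,
    TransitEncryptionEnabled=True,
)

# The Node.js tier then reads and writes sessions against the replication group's primary
# endpoint, keeping every web instance stateless behind the ALB.
```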

Question 10

A genomics research lab is planning a one-time migration of a 72 TB on-premises MariaDB database to Amazon Aurora MySQL in the eu-central-1 Region. Transferring the data over the lab's only connection, a 150 Mbps internet link, would take approximately 45 days, so the lab needs a faster approach. Which solution will migrate the database in the LEAST amount of time?

Incorrect. Although a 1 Gbps Direct Connect would be much faster than the existing 150 Mbps internet link, it is still a network-based transfer and would take substantial time for 72 TB, in addition to potential provisioning delays for the circuit. The question asks for the least amount of time for a one-time migration, and offline transfer with Snowball is generally faster for this data volume. AWS DMS is the correct migration service, but the transport method in this option is not the fastest available. Therefore, this is a valid approach, but not the best answer for minimum elapsed migration time.

Incorrect. AWS DataSync is intended for accelerating and orchestrating file and object data transfers, not for performing logical relational database migration from MariaDB into Aurora MySQL. AWS Application Migration Service is used to rehost whole servers into Amazon EC2 and does not migrate a MariaDB database into Aurora as a managed database target. This option misuses both services for the stated goal. Even if transfer performance improved, the migration path itself is not appropriate.

Correct. AWS Snowball Edge is designed for large-scale offline data transfer and is typically faster than sending 72 TB over a constrained WAN connection, even if a higher-bandwidth circuit could be provisioned. The device’s S3-compatible interface allows the migration data to be staged into Amazon S3 after AWS receives the appliance. AWS DMS supports Amazon S3 as a source endpoint and Aurora MySQL as a target, so it can load the staged data into Aurora. Among the listed options, this is the only one that combines the fastest bulk-transfer mechanism with an appropriate database migration service.

Incorrect. AWS Snowball can help with offline bulk transfer, but this option uses AWS Application Migration Service, which is not a database migration tool and cannot load data from Amazon S3 into Aurora MySQL. MGN replicates server disks for lift-and-shift migrations to EC2, not relational data into Aurora. Also, the older Snowball S3 Adapter terminology does not change the fact that the target migration service is wrong. Because the service combination is invalid for database migration, this option cannot be correct.

Question Analysis

Core Concept: This question tests selecting the fastest practical migration method for a very large one-time database migration when WAN bandwidth is severely constrained. The key is to distinguish between online network-based migration approaches and offline bulk data transport, while also choosing the correct AWS service for loading data into Aurora MySQL.

Why correct: For 72 TB of data, a 150 Mbps link would take far too long, and even a 1 Gbps Direct Connect would still require many days of transfer time plus provisioning lead time. AWS Snowball Edge is specifically designed to move large datasets into AWS faster than limited network links by shipping the data physically to AWS. After the data is loaded into Amazon S3, AWS DMS can use S3 as a source to load data into Aurora MySQL, making this the fastest valid option presented.

Key features:
- AWS Snowball Edge provides offline petabyte-scale data transport and is commonly used for one-time bulk migrations.
- The device supports an S3-compatible interface so data can be staged and imported efficiently.
- AWS DMS supports Amazon S3 as a source endpoint and Aurora MySQL as a target endpoint for loading structured data.
- DMS can perform the database load into Aurora after the bulk data has been transferred into AWS.

Common misconceptions: A common mistake is assuming Direct Connect is always the fastest option for large migrations. Direct Connect improves bandwidth, but for very large one-time transfers, physical data transport is often faster overall. Another misconception is that AWS Application Migration Service can migrate databases into Aurora; it is intended for server rehosting, not logical database migration. It is also easy to overlook that AWS DMS supports Amazon S3 as a source in valid migration patterns.

Exam tips:
- For one-time migrations of tens of terabytes or more, consider Snow Family services first when network bandwidth is limited.
- Use AWS DMS for database migrations into Aurora, not AWS Application Migration Service.
- Eliminate answers that misuse MGN for database engine conversion or migration into managed database services.
- When the question asks for the LEAST amount of time, include both transfer speed and service suitability in your evaluation.
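
The sketch below shows the DMS endpoint definitions implied by this pattern: an S3 source endpoint (which needs an external table definition describing the staged files) and an Aurora MySQL target endpoint. The bucket, IAM role, table layout, and cluster endpoint are hypothetical placeholders, and credentials are inlined only for brevity.

```python
# Minimal sketch: DMS endpoints for an S3 source (Snowball-staged data) and an Aurora MySQL target.
import json
import boto3

dms = boto3.client("dms", region_name="eu-central-1")

# Describes the layout of the CSV files staged in S3 (placeholder schema and columns).
external_table_definition = {
    "TableCount": "1",
    "Tables": [{
        "TableName": "variants",
        "TablePath": "genomics/variants/",
        "TableOwner": "genomics",
        "TableColumns": [
            {"ColumnName": "sample_id", "ColumnType": "STRING", "ColumnLength": "64", "ColumnIsPk": "true"},
            {"ColumnName": "chromosome", "ColumnType": "STRING", "ColumnLength": "8"},
        ],
        "TableColumnsTotal": "2",
    }],
}

dms.create_endpoint(
    EndpointIdentifier="snowball-staged-s3",
    EndpointType="source",
    EngineName="s3",
    S3Settings={
        "BucketName": "genomics-migration-staging",
        "ServiceAccessRoleArn": "arn:aws:iam::111122223333:role/dms-s3-access",
        "ExternalTableDefinition": json.dumps(external_table_definition),
    },
)

dms.create_endpoint(
    EndpointIdentifier="aurora-mysql-target",
    EndpointType="target",
    EngineName="aurora",                      # Aurora MySQL-compatible target engine
    ServerName="genomics.cluster-xxxx.eu-central-1.rds.amazonaws.com",
    Port=3306,
    Username="admin",
    Password="example-only",                  # use AWS Secrets Manager in practice
    DatabaseName="genomics",
)
# A full-load replication task between these endpoints then moves the staged data into Aurora.
```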

Success Stories (9)

s****** · Nov 24, 2025

Study period: 3 months

I used these practice questions and successfully passed my exam. Thanks for providing such well-organized question sets and clear explanations. Many of the questions felt very close to the real exam.

t********** · Nov 13, 2025

Study period: 3 months

Just got certified last week! It was a tough exam, but I'm really thankful to Cloud Pass. The app questions helped me a lot in preparing for it.

효** · Nov 12, 2025

Study period: 1 month

I made good use of the app ^^

p******* · Nov 7, 2025

Study period: 2 months

These practice exams helped me pass the certification. A lot of the real exam questions were similar to the ones here.

d*********** · Nov 7, 2025

Study period: 1 month

Thanks. I think I passed because of the high-quality content here. I'm planning to prepare for my next AWS exam here as well.

Other Practice Tests

Practice Test #1

75 Questions · 180 min · Pass 750/1000

Practice Test #2

75 Questions · 180 min · Pass 750/1000

Practice Test #3

75 Questions · 180 min · Pass 750/1000

Practice Test #4

75 Questions · 180 min · Pass 750/1000

Start Practicing Now

Download Cloud Pass and start practicing all AWS Certified Solutions Architect - Professional (SAP-C02) exam questions.

