AWS Certified Solutions Architect - Professional (SAP-C02)

Practice Test #1

Simulate the real exam experience with 75 questions and a 180-minute time limit. Practice with AI-verified answers and detailed explanations.

75 Questions · 180 Minutes · 750/1000 Passing score

Powered by AI

Answers and explanations triple-verified by AI

Every answer is cross-checked by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and an in-depth analysis of every question.

GPT Pro
Claude Opus
Gemini Pro
Per-option explanations
In-depth question analysis
Accuracy through 3-model consensus

Practice questions

Question 1

A global media subscription service plans to launch a stateless customer dashboard on Amazon EC2 in ap-southeast-2 across three Availability Zones, requiring 99.9% availability with Auto Scaling from 6 to 120 instances when average CPU > 60% for 5 minutes, and to implement an active-passive disaster recovery environment in ap-northeast-1 with an RTO of 10 minutes and Route 53 health checks (30-second interval, fail after 2 consecutive failures) while serving traffic only from ap-southeast-2 during normal operations. Which solution will meet these requirements?

Option A: Incorrect. An Application Load Balancer is Regional and cannot span multiple Regions or be attached to subnets in two different VPCs/Regions. Likewise, an Auto Scaling group cannot launch instances across multiple Regions; it is scoped to one Region and one VPC. VPC peering does not change these service boundaries, so this design cannot implement the required cross-Region active-passive DR.

Option B: Incorrect/insufficient. While it correctly builds separate stacks in each Region, it does not explicitly configure Route 53 for active-passive behavior. "Enable health checks on both records" without a failover routing policy could lead to active-active behavior (depending on routing policy) or ambiguous traffic routing. The requirement states traffic must be served only from ap-southeast-2 during normal operations.

Option C: Correct. It deploys the primary stack in ap-southeast-2 across three AZs with an ALB and an Auto Scaling group (min 6/max 120) and the same stack in ap-northeast-1 as standby. Route 53 failover routing with primary/secondary records and health checks (30s, 2 failures) ensures traffic stays on the primary during normal operations and fails over to the secondary within the RTO window; a 60s TTL helps reduce DNS caching delays.

Option D: Incorrect. Like option A, it relies on an ALB and Auto Scaling group spanning multiple VPCs/Regions, which is not supported. A single Route 53 record pointing to one ALB also does not provide the required active-passive cross-Region failover. VPC peering is unnecessary for this pattern and does not enable cross-Region load balancing.

Question analysis

Core Concept: This question tests multi-Region high availability and disaster recovery (DR) design for a stateless EC2 web tier using Application Load Balancers, Auto Scaling, and Amazon Route 53 DNS failover with health checks. It also implicitly tests service scoping: ALBs and Auto Scaling groups are Regional and cannot span Regions/VPCs the way some options suggest.

Why the Answer is Correct: Option C implements an active-passive DR pattern: ap-southeast-2 serves all traffic during normal operations (primary), while ap-northeast-1 is provisioned and ready (secondary) but receives traffic only upon failure. Route 53 failover routing with health checks ensures DNS answers switch to the secondary when the primary endpoint becomes unhealthy. The specified health check settings (30-second interval, fail after 2 consecutive failures) align with the requirement, and a 60-second TTL helps meet the 10-minute RTO by reducing DNS caching delay. The primary Region uses an ALB across three AZs and an Auto Scaling group with min 6/max 120 and scaling on average CPU > 60% for 5 minutes, meeting the scaling and 99.9% availability goals.

Key AWS Features:
- ALB is Regional and can span multiple AZs within a Region, improving resilience.
- EC2 Auto Scaling across multiple AZs provides elasticity and AZ fault tolerance.
- Route 53 failover routing policy supports primary/secondary records with health checks.
- Health check interval and failure threshold determine detection time; TTL influences client-side DNS caching and therefore effective failover time.

Common Misconceptions: A and D assume an ALB can "extend" across VPCs/Regions and that a single Auto Scaling group can launch instances across both Regions/VPCs. In reality, ALBs and ASGs are scoped to a single Region (and an ALB is associated with subnets in that Region). VPC peering also does not enable a single ALB to target instances in another Region. B sounds close but does not enforce active-passive behavior; "health checks on both records" is ambiguous and could result in active-active (e.g., weighted/multivalue) or unintended traffic distribution.

Exam Tips:
- Remember Regional boundaries: ALB, ASG, and subnets are Regional; Route 53 is global.
- For active-passive DR with Route 53, look for "failover routing policy" with explicit primary/secondary and health checks.
- RTO depends on detection time + DNS TTL + provisioning readiness; keeping the secondary stack warm and the TTL low improves RTO.
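The failover records in the correct option map to a handful of Route 53 API calls. A minimal boto3 sketch, assuming placeholder values for the hosted zone ID, domain name, and the two ALB DNS names:

import boto3

route53 = boto3.client("route53")

# Health check matching the stated requirements: 30s interval, 2 consecutive failures.
hc = route53.create_health_check(
    CallerReference="dashboard-primary-hc-1",  # hypothetical idempotency token
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary-alb.ap-southeast-2.example.com",  # placeholder
        "Port": 443,
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 2,
    },
)

def failover_record(role, set_id, target, hc_id=None):
    rrset = {
        "Name": "dashboard.example.com",  # placeholder domain
        "Type": "CNAME",
        "SetIdentifier": set_id,
        "Failover": role,  # "PRIMARY" or "SECONDARY"
        "TTL": 60,         # low TTL reduces DNS caching delay during failover
        "ResourceRecords": [{"Value": target}],
    }
    if hc_id:
        rrset["HealthCheckId"] = hc_id
    return {"Action": "UPSERT", "ResourceRecordSet": rrset}

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000",  # placeholder hosted zone
    ChangeBatch={"Changes": [
        failover_record("PRIMARY", "syd", "primary-alb.ap-southeast-2.example.com",
                        hc["HealthCheck"]["Id"]),
        failover_record("SECONDARY", "tyo", "standby-alb.ap-northeast-1.example.com"),
    ]},
)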

Question 2

A logistics company is deploying a fleet of 2,500 autonomous warehouse robots that each push 8 KB of telemetry every second to AWS over Wi-Fi, and it needs a data platform that provides near-real-time (<=3 seconds) analytics on the inbound stream, ensures durable, ordered, highly parallel ingestion with the ability to reprocess data, and delivers processed results to a data warehouse for BI; which strategy should a solutions architect use to meet these requirements?

Option A: Incorrect. Kinesis Data Firehose is primarily a delivery service (to S3, Redshift, OpenSearch, etc.) with buffering and optional lightweight transforms. It is not designed for custom near-real-time analytics with strict per-partition ordering and consumer-controlled replay. Firehose also doesn't "analyze with Kinesis clients" (KCL is for KDS). Storing results in Amazon RDS is not a typical BI warehouse pattern compared to Redshift.

Option B: Correct. Kinesis Data Streams best matches the requirements: durable multi-AZ ingestion, ordered records per shard, high parallelism via shards, and retention that enables replay/reprocessing. KCL-based consumers (or a processing layer such as EMR/Spark) can meet sub-3-second analytics with proper shard sizing and consumer scaling. Processed outputs can be loaded into Amazon Redshift for BI, aligning with a warehouse-centric analytics platform.

Option C: Incorrect. Amazon S3 is an object store, not a near-real-time ingestion bus; writing every second from 2,500 devices would create many small objects and higher end-to-end latency. The flow "analyze the data from Amazon SQS with Kinesis" is also incoherent: Kinesis does not consume from SQS as a native source. SQS provides at-least-once delivery but does not guarantee strict ordering at scale (standard queues) and is not ideal for replayable stream analytics.

Option D: Incorrect. API Gateway + SQS + Lambda can ingest events, but it is not a strong fit for ordered streaming with replay and high-throughput analytics. SQS standard queues do not guarantee ordering; FIFO queues provide ordering but have throughput constraints and operational complexity at this scale. Lambda-based analytics can struggle with stateful windowing and sub-3-second continuous processing compared to stream processors. EMR after SQS adds unnecessary complexity and latency.

Question analysis

Core Concept: This question tests selecting the right streaming ingestion and near-real-time analytics architecture. The key requirements (durable, ordered ingestion; high parallelism; ability to replay/reprocess; and delivery to a data warehouse) map directly to Amazon Kinesis Data Streams (KDS) plus a stream processing layer and a Redshift sink.

Why the Answer is Correct: Amazon Kinesis Data Streams is purpose-built for high-throughput, low-latency streaming ingestion with ordered records per shard and multi-consumer fan-out. With 2,500 robots sending 8 KB/s each, inbound volume is ~20 MB/s (~160 Mb/s). KDS can scale horizontally by adding shards to meet write throughput and parallel processing needs. Because KDS retains data for 24 hours by default (up to 365 days), consumers can re-read from a checkpoint to reprocess historical data, which is explicitly required. Near-real-time analytics (<=3 seconds) is achievable using Kinesis Client Library (KCL) consumers or managed processing (often Kinesis Data Analytics / Apache Flink), and then loading curated results into Amazon Redshift for BI. The option's mention of EMR reflects a common exam pattern: use a scalable compute layer to transform and then load into Redshift.

Key AWS Features: KDS provides ordered processing within a shard, durable replication across multiple AZs, and horizontal scaling via shards. Enhanced fan-out supports multiple parallel consumers with dedicated throughput. Checkpointing (via KCL/DynamoDB) enables exactly-once/at-least-once processing patterns and replay. For Redshift, typical delivery is via micro-batch COPY from S3 or streaming ingestion patterns; EMR/Spark can aggregate/window telemetry and write to S3/Redshift efficiently.

Common Misconceptions: Firehose (Option A) is great for simple delivery to S3/Redshift but does not provide the same replay/reprocess control and per-shard ordering semantics as KDS, and its buffering can add latency. SQS-based designs (Options C/D) do not preserve strict ordering at scale and are not ideal for high-throughput streaming analytics with replay.

Exam Tips: When you see "ordered, durable stream," "parallel ingestion," and "reprocess/replay," think Kinesis Data Streams (or Kafka/MSK). When you see "deliver to data warehouse for BI," think Redshift as the sink, often via an intermediate curated store (commonly S3) and a processing engine (Flink/Spark/EMR).
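To make the throughput math concrete: 2,500 robots × 8 KB/s ≈ 20 MB/s inbound, and at the classic 1 MB/s per-shard write limit (provisioned mode) that implies at least 20 shards. A minimal boto3 producer sketch, assuming a hypothetical stream name and payload shape:

import json
import boto3

kinesis = boto3.client("kinesis")

def send_telemetry(robot_id: str, payload: dict) -> None:
    # Using the robot ID as the partition key keeps each robot's records
    # ordered within a shard while spreading 2,500 robots across shards.
    kinesis.put_record(
        StreamName="robot-telemetry",  # hypothetical stream name
        Data=json.dumps(payload).encode("utf-8"),
        PartitionKey=robot_id,
    )

send_telemetry("robot-0042", {"battery": 0.87, "x": 12.4, "y": 3.9, "ts": 1700000000})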

Question 3

An independent cybersecurity auditing firm that operates under an AWS Organizations organization named blueOrg (orgA) must programmatically access a customer’s AWS account that belongs to a separate organization named greenOrg (orgB) to run an automated compliance scanner from the firm’s management account every 4 hours that reads only EC2 Describe*, IAM Get*, and S3 List* metadata using temporary credentials and least privilege. What is the MOST secure way to enable orgA to access resources in orgB via API/CLI?

Option A: Incorrect. Sharing account access keys (especially root or broadly privileged keys) is one of the least secure approaches. It uses long-term credentials, is hard to audit and rotate safely, and violates AWS best practices. It also breaks least privilege because account-level keys typically imply broad access and create significant blast radius if compromised.

Option B: Incorrect. Creating an IAM user and sharing long-term access keys is explicitly discouraged for third-party access. Even if permissions are scoped, long-term credentials increase exposure (leakage, reuse, storage risk) and require ongoing rotation and secure distribution. This does not meet the requirement to use temporary credentials and is not the most secure pattern.

Option C: Partially correct but not the MOST secure. Using an IAM role and STS AssumeRole provides temporary credentials and supports least privilege, which is good. However, for independent third-party access, omitting an ExternalId leaves the customer more exposed to confused deputy scenarios. Adding ExternalId (and optionally other conditions) is the stronger, recommended control.

Option D: Correct. A customer-managed least-privilege IAM role with a trust policy that allows the auditor's account to assume it only with a unique ExternalId is the recommended, most secure third-party access pattern. The auditor uses sts:AssumeRole to obtain temporary credentials every 4 hours, meeting the temporary credential requirement while enabling strong auditing, tight trust controls, and confused deputy protection.

Question analysis

Core Concept: This question tests secure cross-account access across separate AWS Organizations using AWS STS AssumeRole, least-privilege IAM roles, and the confused deputy protection mechanism (ExternalId). It also implicitly tests best practices: avoid long-term credentials, use temporary credentials, and tightly scope trust and permissions.

Why the Answer is Correct: Option D is the most secure pattern for a third-party auditor accessing a customer account programmatically. The customer creates an IAM role in orgB (customer account) with only the required read-only permissions (EC2 Describe*, IAM Get*, S3 List*). The role's trust policy allows assumption by the auditor's AWS account in orgA, but only when the auditor supplies a unique ExternalId. The auditor then calls sts:AssumeRole from its management account every 4 hours to obtain short-lived credentials. This satisfies temporary credentials, least privilege, and strong protection against unauthorized role assumption.

Key AWS Features:
1) IAM Role + Trust Policy: The trust policy specifies the principal (auditor account) and conditions (sts:ExternalId). Optionally, the customer can further restrict with aws:PrincipalArn to a specific role in the auditor account.
2) AWS STS Temporary Credentials: AssumeRole returns time-bound credentials, reducing blast radius and eliminating key rotation burdens.
3) Confused Deputy Mitigation: ExternalId is the AWS-recommended control when a third party accesses customer accounts, preventing the auditor from being tricked into using its permissions to access other customers' roles.
4) Least Privilege Permissions Policy: Limit actions to ec2:Describe*, iam:Get*, s3:List* and scope resources where possible (not all actions support resource-level constraints).

Common Misconceptions: Option C (role without ExternalId) is better than users/keys, but it omits a key third-party security control and is not the "MOST secure" for an independent firm scenario. Options A and B rely on long-term credentials, which violates best practices and increases risk of leakage and misuse.

Exam Tips: When you see "third-party access," "temporary credentials," and "least privilege," think: customer creates IAM role, auditor assumes via STS. If it's a vendor/independent auditor, add ExternalId in the trust policy. Avoid sharing access keys. This aligns with AWS IAM guidance and Well-Architected Security Pillar principles (identity foundation, least privilege, and credential management).
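On the auditor's side, the whole access flow is a single STS call every 4 hours followed by read-only API calls with the returned credentials. A minimal boto3 sketch, assuming hypothetical role ARN and ExternalId values agreed with the customer:

import boto3

sts = boto3.client("sts")

resp = sts.assume_role(
    RoleArn="arn:aws:iam::999999999999:role/ComplianceAuditRole",  # customer's role (placeholder)
    RoleSessionName="compliance-scan",
    ExternalId="orgA-unique-external-id",  # confused deputy protection (placeholder)
    DurationSeconds=3600,                  # temporary credentials only
)

creds = resp["Credentials"]
audited = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Read-only metadata calls permitted by the role's least-privilege policy.
reservations = audited.client("ec2").describe_instances()
buckets = audited.client("s3").list_buckets()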

Question 4

A national retail chain ingests 4 TB per day of point-of-sale (POS) event logs from 2,500 stores; the logs are compacted hourly into Parquet files and stored in the Hadoop Distributed File System (HDFS) on a long-running Amazon EMR cluster, and business analysts run interactive SQL using Apache Trino (Presto) on the same EMR cluster; each query scans hundreds of gigabytes, completes in under 12 minutes, and queries are executed only between 6:00 PM and 11:00 PM local time on weekdays with up to 6 concurrent users, and the company is concerned about the high cost of the always-on cluster but still needs the most cost-effective way to continue running SQL queries with minimal operational overhead. Which solution will meet these requirements?

Option A: Incorrect. Redshift Spectrum can query Parquet in S3, but it is not the most cost-effective here because it typically implies running a Redshift cluster (or configuring Redshift Serverless) in addition to Spectrum. Given only a 5-hour weekday query window and low concurrency, paying for a warehouse layer is usually unnecessary. Spectrum is best when you already need Redshift for high-performance warehousing and want to extend queries to S3.

Option B: Correct. Athena + Glue Data Catalog is the best fit: move Parquet files to S3 and query them serverlessly with interactive SQL and minimal ops. Costs align to usage (pay per TB scanned), and Parquet plus partitioning (date/hour/store) reduces scanned data and improves performance. Glue provides centralized schema/partition metadata, enabling analysts to query without managing EMR clusters.

Option C: Incorrect. Using EMRFS on S3 reduces HDFS storage dependency and enables decoupled storage/compute, but running Presto on EMR still involves cluster provisioning, scaling, patching, and operational overhead. To meet the query window, you could spin up transient EMR clusters nightly, but that is still more complex than Athena and may not be as cost-effective given the simplicity of the workload and limited concurrency.

Option D: Incorrect. Loading all data into Amazon Redshift would add significant ingestion and maintenance overhead (COPY jobs, vacuum/analyze, distribution/sort key tuning) and ongoing compute cost. It can deliver strong performance, but the requirement emphasizes cost-effectiveness and minimal operational overhead for intermittent interactive queries. Redshift is typically chosen for consistent, frequent analytics workloads and broader BI/warehouse needs.

Question analysis

Core Concept - The question tests selecting the most cost-effective interactive SQL analytics pattern for data already in columnar Parquet, with a time-bound usage window and minimal ops. This is a classic "serverless query over data lake" scenario: store data in Amazon S3 and query with Amazon Athena using the AWS Glue Data Catalog.

Why the Answer is Correct - The current always-on EMR cluster is expensive because compute runs 24/7 even though queries only run 5 hours/day on weekdays with limited concurrency (up to 6). Moving the Parquet data to S3 and using Athena eliminates cluster management and idle compute costs. Athena is pay-per-query (per TB scanned), and Parquet significantly reduces scanned bytes versus raw logs. With partitioning (for example by date/hour/store) and column pruning, each query can scan far less than "hundreds of GB," improving both cost and performance while meeting interactive needs.

Key AWS Features - Use S3 as the durable data lake storage. Register tables/partitions in AWS Glue Data Catalog (or Athena's partition projection) so analysts can query with standard SQL. Enforce best practices: partition by event_date and hour (and optionally region/store) and use Parquet + compression (Snappy/ZSTD). Use Athena workgroups for cost controls, query limits, and centralized result locations. Optionally use Athena result reuse and CTAS/UNLOAD for optimized derived datasets.

Common Misconceptions - Redshift Spectrum (Option A) can query S3, but it requires a running Redshift cluster (or serverless) and is typically chosen when you already need a data warehouse for frequent, high-concurrency BI. Keeping Presto on EMR (Option C) still requires managing clusters (even if transient) and is more operationally heavy than Athena for this use case. Loading into Redshift (Option D) adds ingestion/maintenance overhead and ongoing compute cost.

Exam Tips - When usage is intermittent and the requirement emphasizes "minimal operational overhead" and "most cost-effective," favor serverless (Athena) over managed clusters. For Athena/Spectrum questions, look for Parquet/ORC + partitioning as the key to controlling scan cost and meeting interactive performance. If a solution requires an always-on cluster without a strong need, it's usually not the best answer.
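Athena queries over partitioned Parquet are plain SQL submitted through the API. A minimal boto3 sketch, assuming a hypothetical pos_logs table registered in the Glue Data Catalog, partitioned by event_date and an integer hour column, with a placeholder results bucket:

import boto3

athena = boto3.client("athena")

# Partition predicates prune scanned data, which drives down per-TB cost.
query = """
SELECT store_id, SUM(total_amount) AS revenue
FROM retail.pos_logs
WHERE event_date = DATE '2025-01-06' AND hour BETWEEN 18 AND 23
GROUP BY store_id
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "retail"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # placeholder
)
print(execution["QueryExecutionId"])  # poll with get_query_execution for status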

Question 5 (Choose two)

A solutions architect is designing an application for 1,200 retail store managers to submit weekly customer feedback forms from their mobile devices, with approximately 80% of the 6,000 peak submissions per minute occurring between 5:00 PM and 9:00 PM on Sundays, and the data must be stored in a format that allows business analysts to run region-level monthly dashboards while the infrastructure remains highly available and scales elastically for both ingestion and analytics with minimal operational overhead; which combination of steps meets these requirements? (Choose two.)

Option A: Incorrect. EC2 behind an ALB across multiple AZs is highly available, and scheduled Auto Scaling can prepare for predictable Sunday peaks. However, it requires OS/app server management, patching, AMI maintenance, and right-sizing. It is less "minimal operational overhead" than serverless and may still overprovision capacity for the 4-hour window, increasing cost and operational burden.

Option B: Incorrect. ECS with an ALB and scheduled Service Auto Scaling reduces some operational work versus raw EC2, but you still manage container images, task sizing, scaling policies, and potentially the underlying capacity (unless using Fargate, which is not stated). It can meet availability and scaling needs, but it is not as low-ops and elastic as API Gateway + Lambda for bursty, short-duration spikes.

Option C: Correct. S3 + CloudFront for the front end provides highly available static hosting with global edge delivery. API Gateway + Lambda proxy integration is a classic serverless ingestion pattern that scales automatically and is multi-AZ by design. It minimizes operational overhead (no servers/containers to manage) and handles bursty submission patterns well, fitting the Sunday evening surge requirement.

Option D: Incorrect. Redshift + QuickSight can deliver strong BI performance, but Redshift is a managed data warehouse that typically requires capacity planning (node types, scaling, concurrency), data loading pipelines, and ongoing cost even outside peak usage unless using additional features. For this use case, it is more operationally heavy than a serverless S3/Athena approach.

Option E: Correct. S3 as the storage layer enables a durable, low-cost data lake. Athena provides serverless SQL querying directly on S3, and QuickSight can build monthly region-level dashboards from Athena datasets (optionally using SPICE). This meets elastic analytics and minimal ops requirements, especially when data is partitioned by time/region to reduce scan costs and improve query performance.

Question analysis

Core Concept: This question tests designing a highly available, elastically scaling, low-ops ingestion layer for spiky mobile submissions and a scalable analytics layer for monthly, region-level dashboards. The key patterns are serverless ingestion (API Gateway + Lambda) and data lake analytics (S3 + Athena + QuickSight).

Why the Answer is Correct: Option C provides a fully managed, multi-AZ, automatically scaling front end and API tier. Hosting the static front end on Amazon S3 behind CloudFront offloads web serving and improves global performance. Using Amazon API Gateway with AWS Lambda proxy integration removes server management and scales rapidly to handle the Sunday evening surge (up to 6,000 submissions/min) without pre-provisioning. This meets "highly available," "scales elastically," and "minimal operational overhead." Option E stores submissions in Amazon S3, which is highly durable and cost-effective for long-term retention. Business analysts can query data directly with Amazon Athena (serverless SQL over S3) and build monthly region-level dashboards in Amazon QuickSight. This combination scales for analytics without managing clusters, aligns with a data lake approach, and supports partitioning by date/region to optimize monthly dashboard queries.

Key AWS Features: API Gateway throttling/usage plans, Lambda concurrency scaling, CloudFront caching and TLS, S3 lifecycle policies, S3 partitioned prefixes (e.g., region=.../year=.../month=...), Athena Glue Data Catalog schemas, and QuickSight SPICE for dashboard performance. These are Well-Architected (Operational Excellence, Reliability, Performance Efficiency, Cost Optimization) aligned choices.

Common Misconceptions: EC2/ECS with scheduled scaling (A/B) can handle predictable peaks, but they still require capacity planning, patching, and operational management, and may not respond as seamlessly to variability within the 5–9 PM window. Redshift (D) is powerful for warehousing but introduces cluster management/cost and is not "minimal ops" for intermittent ingestion/analytics unless carefully designed.

Exam Tips: When you see "spiky traffic," "high availability," and "minimal operational overhead," strongly consider serverless (S3/CloudFront, API Gateway, Lambda). For "dashboards from stored data" with low ops, prefer S3 + Athena + QuickSight over managing a data warehouse unless complex joins/low-latency BI at scale explicitly require Redshift.
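The ingestion Lambda behind the API Gateway proxy integration can write each submission straight into a partitioned S3 prefix, which is what makes the monthly region-level Athena queries cheap later. A minimal handler sketch, assuming a hypothetical bucket name and payload fields:

import datetime
import json
import uuid

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # API Gateway Lambda proxy integration delivers the POST body as a string.
    form = json.loads(event["body"])
    now = datetime.datetime.now(datetime.timezone.utc)
    # Partitioned prefix (region=/year=/month=) lets Athena prune everything
    # outside the month and region a dashboard asks for.
    key = (
        f"region={form['region']}/year={now:%Y}/month={now:%m}/"
        f"{uuid.uuid4()}.json"
    )
    s3.put_object(
        Bucket="feedback-submissions",  # hypothetical bucket
        Key=key,
        Body=json.dumps(form).encode("utf-8"),
    )
    return {"statusCode": 201, "body": json.dumps({"stored": key})}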


Question 6 (Choose three)

An enterprise with 18 AWS accounts in AWS Organizations runs two third‑party inline inspection appliances as manually configured, highly available Amazon EC2 instances in a centralized security VPC within a shared networking account connected to member VPCs through an AWS Transit Gateway; each appliance uses a static private IP as the next hop for 0.0.0.0/0 from member VPCs, a misconfigured automation run recently terminated both instances, and while rebuilding the team created a bootstrap script to configure appliances at first boot, and now the company wants to modernize to minimize cost, scale horizontally from 2 to at least 10 appliances to handle up to 12 Gbps aggregate traffic across two Availability Zones with failover under 60 seconds while continuing to use the same vendor code (vendor confirmed full AWS compatibility)—which combination of steps should a solutions architect recommend to meet these requirements most cost‑effectively? (Choose three)

Option A: Correct. Gateway Load Balancer is the AWS-native pattern for deploying third-party inline inspection appliances. Creating a GWLB endpoint service allows other accounts/VPCs to connect privately using PrivateLink, enabling centralized inspection without VPC peering complexity or public exposure. GWLB supports transparent traffic steering using GENEVE and is designed for scaling appliance fleets while preserving original IP information.

Option B: Incorrect. A Network Load Balancer can expose L4 services, but it is not intended for transparent inline inspection/service insertion. NLB does not provide the GWLB/GWLBe model where route tables steer traffic as a next hop for inspection while preserving flow symmetry in the same way. Using NLB + PrivateLink is common for private SaaS/service access, not inline firewall insertion.

Option C: Correct. An Auto Scaling group with a launch template and user data directly addresses the operational weakness of manually configured appliances by enabling repeatable bootstrap and automatic replacement after failures. It also supports horizontal scaling from 2 to 10 or more appliances across two Availability Zones, which is necessary to meet the aggregate throughput and failover requirements. The appliances should be registered with a Gateway Load Balancer target group so GWLB can distribute traffic based on health; for EC2 instances in an Auto Scaling group this is typically done with the instance target type rather than the IP target type.

Option D: Incorrect. Launch Wizard can simplify initial deployments, but it does not inherently provide the required architecture for transparent inline inspection at scale across many accounts. It also doesn't replace the need for GWLB/GWLBe-based traffic steering and cross-account endpoint service consumption. Additionally, focusing on the instance target type misses the key modernization goal: GWLB-based service insertion plus automated scaling.

Option E: Correct. Gateway Load Balancer endpoints must be created in each member (consumer) VPC where traffic originates. Updating subnet route tables (e.g., 0.0.0.0/0) to point to the GWLBe inserts the inspection hop transparently and removes dependency on static appliance IPs. This is the standard multi-account centralized inspection pattern with Transit Gateway and GWLB.

Option F: Incorrect. Endpoints for consuming a GWLB endpoint service are created in the consumer VPCs (member accounts), not in the centralized security VPC. Creating VPC endpoints in the security account would not allow member VPC route tables to target those endpoints, and it misunderstands the PrivateLink consumption model. The correct approach is per-member VPC GWLBe with routes pointing locally.

Question analysis

Core concept: This question is about modernizing centralized, cross-account inline traffic inspection on AWS by replacing brittle static next-hop EC2 appliances with a scalable, resilient architecture built around Gateway Load Balancer (GWLB), Gateway Load Balancer endpoints (GWLBe), and Auto Scaling. GWLB is purpose-built for transparent insertion of third-party virtual appliances and integrates with AWS PrivateLink for private, cross-account service consumption.

Why correct: A is correct because GWLB is the correct load balancer for inline inspection appliances, and exposing it through an endpoint service allows member VPCs in other accounts to consume the inspection service privately. C is correct because an Auto Scaling group with a launch template and bootstrap user data provides repeatable appliance deployment, automated replacement, and horizontal scaling across two Availability Zones. E is correct because each member VPC must create local GWLBe resources and update route tables so traffic is steered to the inspection service without relying on static appliance IPs.

Key features: GWLB uses GENEVE encapsulation to send traffic to appliance fleets while preserving flow context needed by security appliances. GWLBe resources are deployed in the consumer VPCs and act as route targets for transparent service insertion. Auto Scaling with launch templates and user data reduces operational risk and cost by replacing manual appliance management with elastic, health-based scaling.

Common misconceptions: A Network Load Balancer with PrivateLink is not the right pattern for transparent inline inspection because it is designed for L4 service exposure, not route-based service insertion. Another common mistake is thinking endpoints should be created in the centralized security VPC; in reality, the consuming member VPCs create the GWLB endpoints. Also, GWLB target groups for EC2 appliances in an Auto Scaling group normally use instance targets rather than IP targets.

Exam tips: When a question mentions third-party firewalls, inline inspection, centralized security VPCs, multi-account access, and route-table steering, think GWLB plus GWLBe plus PrivateLink endpoint service. If the scenario also mentions manual EC2 appliances and a need for rapid recovery or horizontal scale, add Auto Scaling with launch templates and bootstrap automation. Be careful to distinguish GWLB from NLB and to place endpoints in the consumer VPCs, not the provider VPC.
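The provider side reduces to a GWLB with a GENEVE target group plus an endpoint service, and each consumer VPC then points its default route at a local GWLB endpoint. A minimal boto3 sketch, with placeholder subnet, VPC, route table, and endpoint IDs (the consumer-side endpoint itself would be created from each member account):

import boto3

elbv2 = boto3.client("elbv2")
ec2 = boto3.client("ec2")

# Security-VPC side: GWLB plus GENEVE target group for the appliance fleet.
gwlb = elbv2.create_load_balancer(
    Name="inspection-gwlb",
    Type="gateway",
    Subnets=["subnet-aaa", "subnet-bbb"],  # two AZs (placeholders)
)
tg = elbv2.create_target_group(
    Name="inspection-appliances",
    Protocol="GENEVE",
    Port=6081,                # GENEVE port used by GWLB target groups
    VpcId="vpc-security",     # placeholder
    TargetType="instance",    # instance targets suit an Auto Scaling group
    HealthCheckProtocol="TCP",
    HealthCheckPort="80",
)

# Expose the GWLB as an endpoint service for member accounts (PrivateLink).
svc = ec2.create_vpc_endpoint_service_configuration(
    GatewayLoadBalancerArns=[gwlb["LoadBalancers"][0]["LoadBalancerArn"]],
    AcceptanceRequired=False,
)

# Member-VPC side (run per consumer VPC): steer 0.0.0.0/0 to the local GWLBe.
ec2.create_route(
    RouteTableId="rtb-member",          # placeholder
    DestinationCidrBlock="0.0.0.0/0",
    VpcEndpointId="vpce-member-gwlbe",  # placeholder GWLB endpoint ID
)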

Question 7

A media-tech company operates a global learning portal whose web and mobile clients retrieve static UI assets (CSS, JS, thumbnails) from an Amazon S3 bucket in ap-south-1 (about 55 TB, up to 12,000 requests/second) by direct HTTPS object URLs; the company has already created a second S3 bucket in eu-west-1 and requires multi-Region resiliency with near-zero RPO for new objects, automatic read failover, no application code changes, and the least possible operational overhead; which solution will meet these requirements?

Option A: Incorrect. This option requires modifying the application to dual-write objects to both buckets, which directly violates the requirement for no application code changes and increases implementation complexity. Weighted Route 53 records are designed for traffic splitting, not deterministic automatic failover, so some requests can still be sent to the impaired endpoint depending on configuration. It also forces clients to use a new DNS name rather than preserving the existing direct-access pattern through a managed abstraction layer. Overall, it introduces more operational and development burden than a native S3 replication feature.

Option B: Incorrect. Using S3 event notifications and Lambda to copy every new object is a custom replication pipeline, which adds operational overhead for scaling, retries, idempotency, monitoring, and failure recovery. For a large static asset workload, managed S3 CRR is the more appropriate and reliable replication mechanism. Although CloudFront can provide a single front door, the replication side of this design is still unnecessarily complex and less robust than the native S3 feature. Therefore, this is not the least-operations solution.

Option C: Correct. This option uses S3 Cross-Region Replication, which is the AWS-native mechanism for keeping objects replicated from one bucket to another in a different Region with very low replication lag for new objects. That directly addresses the near-zero RPO requirement better than any custom event-driven copy process. It also places a managed access layer in front of the buckets so the application does not need to implement Region selection or outage handling itself. Of the available choices, it is the only one that combines managed replication with managed read-path failover and the least operational overhead.

Option D: Incorrect. This option correctly uses CRR for cross-Region data replication, so it helps with the RPO requirement for new objects. However, it explicitly requires updating the application during a Regional outage, which means failover is manual rather than automatic. That violates both the automatic read failover requirement and the requirement to avoid application changes. In an exam scenario, any answer that depends on manual cutover during failure is usually inferior to a managed failover design.

Question analysis

Core Concept: This question tests multi-Region resiliency for static content on Amazon S3 with near-zero RPO and automatic read failover, while avoiding application changes and minimizing operational overhead. The key services are Amazon S3 Cross-Region Replication (CRR) for data durability/availability across Regions and Amazon CloudFront origin failover (origin groups) for transparent read failover.

Why the Answer is Correct: Option C meets all requirements: (1) Near-zero RPO for new objects is achieved by enabling S3 CRR from ap-south-1 to eu-west-1 so newly written objects are replicated asynchronously with low lag (near-zero, not strictly zero). (2) Automatic read failover with no application code changes is achieved by placing CloudFront in front of S3 and using a single CloudFront distribution domain name for asset URLs; CloudFront origin groups can fail over from the primary S3 origin to the secondary when the primary returns configured failure status codes or is unreachable. (3) Least operational overhead is satisfied because CRR is managed by S3 (no custom code) and CloudFront failover is configuration-based.

Key AWS Features:
- S3 CRR with versioning enabled on both buckets; replication rules can be scoped (prefix/tags) and can replicate delete markers if desired.
- CloudFront distribution with an origin group: primary origin = ap-south-1 S3 bucket, secondary origin = eu-west-1 S3 bucket; configure failover criteria (e.g., 500/502/503/504).
- Use S3 REST endpoints (not S3 static website endpoints) with CloudFront for HTTPS and Origin Access Control (OAC) to keep buckets private (best practice).

Common Misconceptions:
- "Weighted Route 53" (Option A) is not automatic failover and can send traffic to an unhealthy Region unless health checks and failover routing are used; it also requires changing clients to use a new DNS name (code/config change).
- "Lambda copy" (Option B) seems flexible but adds operational burden, scaling concerns at 12,000 rps, error handling, retries, and potential replication gaps, giving worse RPO than managed CRR.

Exam Tips: When you see requirements like multi-Region S3 resiliency, near-zero RPO, and minimal ops, default to S3 CRR/SRR (as applicable). For "automatic read failover" and "no app changes," look for CloudFront origin groups (or Route 53 failover) with a single stable hostname. Avoid custom replication code unless explicitly required.
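Turning on CRR is a versioning flag plus one replication rule on the source bucket; the CloudFront origin group is then pure distribution configuration. A minimal boto3 sketch for the replication half, assuming placeholder bucket names, a pre-created replication role, and clients created in each bucket's Region:

import boto3

s3 = boto3.client("s3", region_name="ap-south-1")

# CRR requires versioning on both source and destination buckets
# (the destination bucket would be configured with a eu-west-1 client).
s3.put_bucket_versioning(
    Bucket="assets-ap-south-1",  # placeholder source bucket
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_replication(
    Bucket="assets-ap-south-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",  # placeholder role
        "Rules": [{
            "ID": "replicate-new-assets",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter: replicate all newly written objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::assets-eu-west-1"},  # placeholder
        }],
    },
)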

Question 8

A media conglomerate operates 12 AWS accounts across three OUs in a single AWS Organizations organization and provisions all infrastructure exclusively with AWS CloudFormation. The compliance office must implement a chargeback model and has mandated that every new resource be tagged with the key cost-center using only one of the allowed values CC-101, CC-202, CC-303, or CC-404; however, when filtering the AWS Cost and Usage Report in AWS Cost Explorer by cost-center, the team found many missing or noncompliant values in stacks created over the past 30 days. With the least operational effort, how should the company enforce both the presence of the cost-center tag on new CloudFormation-created resources and restrict the tag’s value to the approved set across all OUs?

Option A: Correct. A centralized Tag Policy attached to OUs standardizes the cost-center key and allowed values and provides compliance visibility. An SCP attached to OUs can deny cloudformation:CreateStack unless aws:RequestTag/cost-center is present, enforcing tag presence at creation time across all accounts/principals with minimal operational overhead. This matches the requirement to enforce across all OUs with least effort.

Option B: Incorrect. Creating separate tag policies per OU increases operational effort and risks drift/inconsistency. Tag policies are designed to be centrally managed in the organization (management account) and then attached where needed. While the SCP portion could work for tag presence, duplicating tag policies per OU violates the "least operational effort" requirement and complicates governance.

Option C: Incorrect. An IAM policy attached to every user and role in all accounts is high operational overhead and error-prone, especially with many roles (including automation roles) and future growth. Organizations SCPs are the intended mechanism for broad, account-wide guardrails. Also, IAM-only approaches make it harder to ensure consistent coverage across all accounts and OUs.

Option D: Incorrect. AWS Service Catalog with TagOptions can enforce tagging standards, but adopting it requires a significant process and tooling change: converting every CloudFormation template into Service Catalog products, managing portfolios, and ensuring all teams deploy only through Service Catalog. The question states they already provision exclusively with CloudFormation and asks for least operational effort across OUs; SCPs and Tag Policies are lighter-weight.

Question analysis

Core Concept: This question tests centralized governance in AWS Organizations for CloudFormation-based deployments. The company must ensure that a cost-center tag is included on new stack creations and that its value conforms to an approved list across multiple OUs.

Why the Answer is Correct: Option A is correct because it uses the right tool for each part of the requirement. A tag policy in the management account standardizes the cost-center tag and its allowed values across attached OUs, while an SCP denies cloudformation:CreateStack if the request does not include aws:RequestTag/cost-center. This provides centralized, low-maintenance enforcement for presence plus value governance.

Key AWS Features:
1) Tag policies in AWS Organizations: define acceptable tag formats, capitalization, and allowed values for organizational compliance monitoring. They do not force a tag to be present on every request.
2) SCPs: establish preventive controls across all principals in member accounts and are ideal for requiring request tags at stack creation time.
3) CloudFormation stack-level tagging: offers a consistent place to require business metadata such as cost-center for chargeback and cost allocation.

Common Misconceptions: Many candidates confuse compliance reporting with preventive enforcement. Tag policies help detect and standardize noncompliant tags, but they are not sufficient to require tag presence by themselves. Another common trap is choosing IAM or Service Catalog when the requirement emphasizes organization-wide enforcement with minimal operational effort.

Exam Tips: For AWS Organizations questions, map each requirement to the correct control plane. Use SCPs when the exam asks how to block or deny actions, and use tag policies when it asks how to standardize or report on tag values. If both are needed, the combined pattern is usually the best answer.
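The preventive half of the answer is a short SCP. A minimal boto3 sketch that creates and attaches it (policy name and OU ID are placeholders): the first statement denies stack creation when the tag is absent, and the second denies any value outside the approved set:

import json
import boto3

orgs = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCreateStackWithoutCostCenter",
            "Effect": "Deny",
            "Action": "cloudformation:CreateStack",
            "Resource": "*",
            # Null = true means the tag key is missing from the request.
            "Condition": {"Null": {"aws:RequestTag/cost-center": "true"}},
        },
        {
            "Sid": "DenyUnapprovedCostCenterValues",
            "Effect": "Deny",
            "Action": "cloudformation:CreateStack",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestTag/cost-center": ["CC-101", "CC-202", "CC-303", "CC-404"]
                }
            },
        },
    ],
}

policy = orgs.create_policy(
    Content=json.dumps(scp),
    Description="Require an approved cost-center tag on new stacks",
    Name="require-cost-center-tag",  # hypothetical policy name
    Type="SERVICE_CONTROL_POLICY",
)
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-example-12345678",  # placeholder OU ID; repeat per OU
)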

Question 9 (Choose three)

A global insurance company plans to accelerate workload migrations to AWS after finalizing network and security guardrails and has already provisioned a dedicated 10 Gbps AWS Direct Connect that terminates in a shared network-landing account. Within 12 months, the organization expects approximately 300 AWS accounts and about 900 VPCs across three Regions. The corporate MPLS WAN must have seamless reachability to all AWS resources, and every VPC must communicate with other VPCs through a central hub. For compliance, all outbound internet traffic from AWS workloads must egress via next-generation firewalls in the on-premises data center; direct internet egress from AWS is not permitted. Which combination of steps will meet these requirements at scale with minimal operational overhead? (Choose three.)

Option A: Incorrect. A Direct Connect gateway associated with a virtual private gateway (VGW) per VPC is a legacy scaling pattern. It can provide on-prem to VPC connectivity, but it does not inherently create a central hub for VPC-to-VPC communication and becomes operationally burdensome with ~900 VPCs. TGW is the intended hub service for this scale and requirement set.

Option B: Correct. A Transit Gateway provides the required central hub for inter-VPC routing. Creating a DX gateway and using a transit VIF to attach the TGW to the DX gateway is the scalable, AWS-recommended approach for connecting many VPCs to on-premises over Direct Connect. This enables seamless MPLS WAN reachability to AWS networks via private routing.

Option C: Incorrect. Attaching an internet gateway (IGW) and routing 0.0.0.0/0 directly to the IGW enables direct internet egress from AWS, which explicitly violates the compliance requirement that all outbound internet traffic must traverse on-premises next-generation firewalls. Even if additional controls were added, this option contradicts the stated prohibition.

Option D: Correct. AWS RAM sharing of the Transit Gateway allows centralized ownership in the network-landing account while enabling application accounts to create VPC attachments at scale. This reduces operational overhead, avoids duplicating TGWs, and supports consistent governance (central route table strategy, segmentation) across hundreds of accounts and VPCs.

Option E: Incorrect. VPC peering does not scale for 900 VPCs because it requires many point-to-point connections and extensive route management. It also lacks transitive routing, making a "central hub" architecture impractical. Peering is best for small numbers of VPCs with simple connectivity needs, not large enterprise hub-and-spoke designs.

Option F: Correct. Using only private subnets and steering default routes (0.0.0.0/0) from VPCs through TGW and over Direct Connect to on-premises NAT/egress appliances enforces centralized inspection and prevents direct internet egress from AWS. This is a common enterprise compliance pattern (forced tunneling) when on-prem firewalls must control outbound traffic.

Question analysis

Core Concept: This question tests large-scale multi-account, multi-VPC connectivity using AWS Transit Gateway (TGW) as the hub, AWS Direct Connect (DX) for private WAN connectivity, and centralized egress control (forced tunneling) to on-premises security appliances.

Why the Answer is Correct: At the stated scale (300 accounts, ~900 VPCs, 3 Regions), VPC-to-VPC and on-premises connectivity must avoid mesh designs. A TGW provides a scalable hub-and-spoke routing domain for all VPCs and can also connect to on-premises via DX using a transit virtual interface (transit VIF). Option B establishes the correct foundation: create the TGW in the shared network account, create a DX gateway, create a transit VIF, and attach the TGW to the DX gateway so the corporate MPLS WAN can reach AWS networks privately. Option D addresses organizational complexity: sharing the TGW with AWS RAM enables member accounts to create VPC attachments without duplicating central networking constructs, minimizing operational overhead while maintaining centralized control. Option F enforces the compliance requirement: no direct internet egress from AWS. By using only private subnets and advertising/propagating a default route (0.0.0.0/0) from TGW toward on-premises over DX, all outbound internet traffic is forced through on-prem next-generation firewalls/NAT/egress appliances.

Key AWS Features: TGW route tables (segmentation, propagation, association), RAM sharing for cross-account attachments, DX transit VIF + DX gateway for private connectivity, and centralized routing to steer default traffic to on-prem. This aligns with AWS Well-Architected (Security and Reliability pillars) by centralizing inspection and simplifying network operations.

Common Misconceptions: Many assume a DX gateway should be associated to each VPC's VGW (Option A). That pattern does not provide a central VPC-to-VPC hub and becomes operationally heavy at 900 VPCs. Others may choose IGWs (Option C) for simplicity, but that violates the explicit compliance requirement. VPC peering (Option E) seems straightforward but does not scale (mesh growth, route management, no transitive routing).

Exam Tips: When you see "hundreds of accounts/VPCs" and "central hub," think TGW + RAM. When you see "no internet egress from AWS," think private subnets + forced tunneling (0.0.0.0/0) to on-prem via DX/VPN and centralized inspection.
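The hub construction from the network-landing account reduces to a few API calls: create the TGW, share it through AWS RAM with the organization, and point the spokes' default route at the Direct Connect attachment. A minimal boto3 sketch with placeholder ARNs and IDs:

import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

tgw = ec2.create_transit_gateway(
    Description="central hub for ~900 VPCs",
    Options={
        "DefaultRouteTableAssociation": "enable",
        "DefaultRouteTablePropagation": "enable",
    },
)

# Share the TGW with the whole organization so member accounts can attach VPCs.
ram.create_resource_share(
    name="shared-tgw",
    resourceArns=[tgw["TransitGateway"]["TransitGatewayArn"]],
    principals=["arn:aws:organizations::111111111111:organization/o-example"],  # placeholder
    allowExternalPrincipals=False,
)

# Forced tunneling: default route toward the DX-gateway attachment so all
# internet-bound traffic egresses via the on-premises firewalls.
ec2.create_transit_gateway_route(
    DestinationCidrBlock="0.0.0.0/0",
    TransitGatewayRouteTableId="tgw-rtb-example",          # placeholder
    TransitGatewayAttachmentId="tgw-attach-dxgw-example",  # placeholder DX attachment
)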

Question 10

A global media company operates a 14-account AWS Organizations environment governed by AWS Control Tower and connects VPCs across accounts with AWS Transit Gateway; in the us-east-1 Region, the analytics account (Account ID: 222222222222) hosts a microservice that uses AWS Lambda and an Amazon Aurora MySQL DB cluster, while the data governance team works from a separate security account (Account ID: 111111111111) using a single Amazon EC2 instance in a private subnet to administer databases across the organization. The analytics team stores the Aurora credentials as secrets in AWS Secrets Manager in the analytics account, and the secrets are encrypted with the default AWS managed key for Secrets Manager (alias aws/secretsmanager) in us-east-1; currently, the analytics team manually shares credentials with the data governance team for troubleshooting. A solutions architect must provide the security account administrators on the EC2 instance on-demand access to the Aurora credentials without manual sharing and must not re-encrypt existing secrets or change the encryption key; which solution meets these requirements?

Option A: Incorrect. AWS Resource Access Manager (AWS RAM) does not support sharing AWS Secrets Manager secrets as shareable resources. Cross-account access to secrets is typically done via resource policies on the secret (when using customer managed keys) or, more commonly in constrained KMS scenarios, by assuming a role in the owning account. Therefore RAM cannot meet the requirement for on-demand access here.

Option B: Correct. Create a role in the analytics account with permissions to secretsmanager:GetSecretValue for the specific secret ARNs. Allow the security account role to assume it via a trust policy. The EC2 instance uses its attached role to assume the analytics role and retrieve the secret within the analytics account context, which works with the default AWS managed KMS key and avoids any re-encryption or key changes.

Option C: Incorrect. The default AWS managed KMS key (alias aws/secretsmanager) cannot be edited to add a resource-based key policy statement for cross-account access. You cannot attach a custom key policy to an AWS managed key. Even if the security account had IAM permissions, KMS would still not allow cross-account decrypt via policy changes on an AWS managed key, so this option cannot work as stated.

Option D: Incorrect. Service Control Policies (SCPs) do not grant permissions; they only define the maximum permissions that accounts can use. Attaching an SCP to the analytics account cannot "allow access from the security account" by itself. Cross-account access still requires an IAM trust relationship and appropriate permissions on Secrets Manager (and KMS considerations). This option misunderstands SCP behavior.

Question analysis

Core Concept: This question tests cross-account access to AWS Secrets Manager secrets in an AWS Organizations/Control Tower environment, and the interaction between Secrets Manager and AWS KMS when secrets are encrypted with the AWS managed key (alias aws/secretsmanager). It also tests the correct use of IAM role assumption for on-demand administrative access.

Why the Answer is Correct: Option B is correct because the only supported way to grant another account access to a secret encrypted with the default AWS managed KMS key is to use a cross-account IAM role in the secret-owning account (analytics) and allow principals in the other account (security) to assume it. AWS managed keys cannot be modified with key policies to add cross-account principals, and you cannot "share" Secrets Manager secrets cross-account via AWS RAM. With B, the EC2 instance uses its attached role (Sec-DBA) to call sts:AssumeRole into the analytics account role (Analytics-SecretAccess). That assumed role then calls secretsmanager:GetSecretValue (and optionally DescribeSecret) in the same account as the secret, so KMS decrypt occurs under the analytics account context and works with the AWS managed key without re-encrypting or changing keys.

Key AWS Features:
- IAM role chaining across accounts using STS AssumeRole.
- Least privilege: Analytics-SecretAccess scoped to specific secret ARNs and required actions.
- Secrets Manager resource policy can further restrict which principals (the analytics role) can access the secret.
- Operational best practice: use session tagging/MFA/CloudTrail to audit who assumed the role and when.

Common Misconceptions: A is tempting because AWS RAM shares resources, but Secrets Manager secrets are not RAM-shareable resources. C is tempting because it mentions KMS permissions, but AWS managed KMS keys do not allow editing key policies; cross-account decrypt grants are not possible that way. D misuses SCPs: SCPs only set guardrails (maximum permissions) and do not grant access; they cannot enable cross-account secret retrieval.

Exam Tips: When you see "default AWS managed KMS key" and "must not change/re-encrypt," think: you cannot adjust the key policy, so you must access the secret from within the owning account's security boundary (assume a role in that account). Also remember: SCPs restrict, RAM shares only supported resource types, and cross-account access typically requires both an IAM trust policy (role assumption) and permissions on the target service.
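From the Sec-DBA EC2 instance, the whole flow is role chaining plus one Secrets Manager call, executed inside the analytics account's boundary so the AWS managed key decrypts normally. A minimal boto3 sketch; the role names follow the scenario, while the secret name is a placeholder:

import boto3

sts = boto3.client("sts")

# The instance profile role (Sec-DBA) assumes the role in the analytics account.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/Analytics-SecretAccess",
    RoleSessionName="dba-troubleshooting",
)
creds = resp["Credentials"]

analytics = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# GetSecretValue now runs in the analytics account context, so KMS decrypt
# with aws/secretsmanager succeeds without re-encrypting the secret.
secret = analytics.client("secretsmanager", region_name="us-east-1").get_secret_value(
    SecretId="aurora-mysql-credentials"  # placeholder secret name
)
print(secret["SecretString"])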

Success stories (9)

s****** · Nov 24, 2025

Study period: 3 months

I used these practice questions and successfully passed my exam. Thanks for providing such well-organized question sets and clear explanations. Many of the questions felt very close to the real exam.

t********** · Nov 13, 2025

Study period: 3 months

Just got certified last week! It was a tough exam, but I'm really thankful to Cloud Pass. The app's questions helped me a lot in preparing for it.

효** · Nov 12, 2025

Study period: 1 month

I made good use of the app ^^

p******* · Nov 7, 2025

Study period: 2 months

These practice exams helped me pass the certification. A lot of the exam questions mirrored the ones here.

d*********** · Nov 7, 2025

Study period: 1 month

Thanks. I think I passed because of the high-quality content here. I'm thinking of taking my next AWS exam here too.
