AWS Certified Solutions Architect - Professional (SAP-C02)

Practice Test #3

Simulate the real exam experience with 75 questions and a 180-minute time limit. Practice with AI-verified answers and detailed explanations.

75 Questions · 180 Minutes · Passing score 750/1000
Explore practice questions

Powered by AI

Answers and explanations triple-verified by AI

Every answer is cross-checked by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth analysis of every question.

GPT Pro
Claude Opus
Gemini Pro
Per-option explanations
In-depth question analysis
Accuracy through 3-model consensus

Practice questions

Question 1

A media company must migrate its payment processing database from an on-premises Microsoft SQL Server instance running on Windows Server 2019 to AWS, and a security policy mandates that the database credentials must be rotated every 90 days. Which solution will meet these requirements with the least operational overhead?

Incorrect. Converting SQL Server to DynamoDB using AWS SCT is a major modernization effort, not a straightforward migration, and it adds significant application refactoring risk for a payment processing relational workload. Also, using Parameter Store plus CloudWatch/Lambda for rotation is custom operational work compared to Secrets Manager’s managed rotation. This does not meet the “least operational overhead” requirement.

Correct. Amazon RDS for SQL Server provides a managed SQL Server environment with reduced operational burden (patching, backups, HA options, monitoring). AWS Secrets Manager is purpose-built for storing database credentials and supports automatic rotation on a defined schedule, including 90 days. This combination directly satisfies both migration and credential-rotation requirements with minimal custom code and maintenance.

Incorrect. Running SQL Server on EC2 increases operational overhead because the company must manage the Windows OS, SQL Server patching, backups, HA design, and monitoring. Using Parameter Store with a Lambda-based rotation schedule is also more custom work than Secrets Manager rotation and requires careful coordination to update the SQL login and dependent applications safely.

Incorrect. Amazon Neptune is a graph database and is not an appropriate target for migrating a Microsoft SQL Server payment processing database without significant redesign. AWS SCT does not make this a low-effort migration; it would require substantial schema and application changes. Additionally, using CloudWatch/Lambda for rotation is unnecessary compared to Secrets Manager’s managed rotation capabilities.

Question analysis

Core Concept: This question tests selecting the lowest-operations managed database migration target and implementing mandatory credential rotation using native AWS managed capabilities. The key services are Amazon RDS for SQL Server (managed relational database) and AWS Secrets Manager (managed secret storage with built-in rotation).

Why the Answer is Correct: Migrating an on-premises Microsoft SQL Server database to Amazon RDS for SQL Server minimizes operational overhead because AWS manages database provisioning, patching (engine and OS for RDS), backups, point-in-time recovery, monitoring integrations, and high availability options (Multi-AZ). The security requirement for rotating database credentials every 90 days is best met by storing credentials in AWS Secrets Manager and enabling automatic rotation on a 90-day schedule. Secrets Manager provides native rotation workflows (via Lambda) and integrates cleanly with RDS, reducing custom code and ongoing maintenance.

Key AWS Features:
- Amazon RDS for SQL Server: managed SQL Server with automated backups, maintenance windows, Multi-AZ, CloudWatch metrics, and simplified scaling/operations.
- AWS Secrets Manager: encrypted secret storage (KMS), automatic rotation scheduling (e.g., every 90 days), rotation Lambda templates for supported engines, versioning (AWSCURRENT/AWSPREVIOUS), and auditability via CloudTrail.
- Best practice: applications retrieve credentials at runtime from Secrets Manager (or via caching libraries) rather than hardcoding them.

Common Misconceptions: Options that propose Systems Manager Parameter Store plus custom Lambda rotation can look cheaper or simpler, but they increase operational overhead: you must build rotation logic, handle SQL login changes safely, coordinate application updates, manage failures/rollbacks, and maintain the Lambda and schedule. Options proposing DynamoDB or Neptune via AWS SCT are inappropriate because they require major re-architecture and are not direct targets for a payment processing relational SQL Server workload.

Exam Tips: When you see “least operational overhead” with a commercial database engine, prefer managed services (RDS) over self-managed EC2. For “rotate credentials every X days,” prefer AWS Secrets Manager automatic rotation over custom rotation mechanisms. Also, avoid unnecessary modernization (e.g., DynamoDB/Neptune) unless the question explicitly requires it.
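To make the rotation configuration concrete, here is a minimal boto3 sketch of storing the database credentials and enabling 90-day automatic rotation. The secret name, rotation Lambda ARN, host, and region are hypothetical placeholders rather than values from the question; in practice the rotation function would typically come from the Secrets Manager rotation templates for SQL Server.

```python
import boto3

# Hypothetical names/ARNs for illustration only.
SECRET_NAME = "prod/orders-db/sqlserver-credentials"
ROTATION_LAMBDA_ARN = "arn:aws:lambda:us-east-1:111122223333:function:rds-sqlserver-rotation"

secrets = boto3.client("secretsmanager", region_name="us-east-1")

# Store the initial database credentials as a JSON secret.
secrets.create_secret(
    Name=SECRET_NAME,
    SecretString='{"username": "app_user", "password": "initial-password", '
                 '"engine": "sqlserver", "host": "orders-db.example.rds.amazonaws.com", "port": 1433}',
)

# Enable managed rotation every 90 days using a rotation Lambda function.
secrets.rotate_secret(
    SecretId=SECRET_NAME,
    RotationLambdaARN=ROTATION_LAMBDA_ARN,
    RotationRules={"AutomaticallyAfterDays": 90},
)
```

The application would then call get_secret_value at runtime (or use the Secrets Manager caching library) instead of hardcoding credentials.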

Question 2

A fitness education platform runs its web tier on Amazon EC2 behind an Application Load Balancer with an Auto Scaling group and stores all static assets on Amazon EFS (General Purpose). After launching a 1080p training video library (average file size 200 MB), traffic spiked 12x to about 20,000 concurrent users at peak, and users now see 5–10 second buffering and occasional HTTP 504 errors. Which is the most cost-efficient and scalable deployment change to resolve these issues without replatforming the compute layer?

Max I/O performance mode can improve scalability for highly parallel access by distributing load, but it often increases latency and does not automatically provide the massive sustained throughput needed for 20,000 concurrent 1080p streams. You would still need to address EFS throughput limits (bursting credits or provisioned throughput), and costs can rise significantly. It’s a partial mitigation, not the most cost-efficient scalable fix.

Using instance store for videos is operationally complex and fragile: instance store is ephemeral, requires large local capacity, and syncing 200 MB assets back to S3 at shutdown is unreliable with scaling events and terminations. It also duplicates content across the fleet, wastes storage, and doesn’t provide global low-latency delivery. This is not a standard, cost-efficient pattern for video libraries.

S3 + CloudFront is the canonical AWS pattern for large static media at scale. S3 handles massive concurrency and throughput, while CloudFront reduces origin load and latency by caching at the edge and supporting range requests for streaming/seek. This directly addresses buffering and 504s by removing EFS from the hot path for video delivery, and it requires no replatforming of the EC2/ALB compute tier.

CloudFront in front of the ALB can accelerate dynamic content and cache some static responses, but keeping videos on EFS leaves the origin bottleneck in place. Video delivery often relies on range requests and can produce cache fragmentation or misses; when CloudFront must fetch from origin, EFS throughput/latency constraints still cause slow responses and timeouts. This is less effective and potentially more expensive than using S3 as the origin.

Question analysis

Core Concept: This question tests designing scalable, cost-efficient content delivery for large static media using Amazon S3 and Amazon CloudFront, and recognizing when Amazon EFS is the wrong backend for high-fanout, large-object distribution.

Why the Answer is Correct: Serving 1080p videos (200 MB average) to ~20,000 concurrent users creates extremely high aggregate throughput and many simultaneous reads. With EFS General Purpose, performance is governed by throughput mode (bursting or provisioned) and can become a bottleneck, increasing latency and causing upstream timeouts that manifest as HTTP 504s at the ALB. Migrating the video library to Amazon S3 and fronting it with CloudFront offloads the origin, scales essentially without limit for concurrent viewers, and dramatically reduces latency via edge caching and optimized delivery. This resolves buffering and 504s without changing the EC2/ALB compute layer.

Key AWS Features: S3 provides highly durable object storage with very high request rates and throughput. CloudFront caches content at edge locations, supports large file delivery, HTTP range requests (critical for video seeking and adaptive players), origin shielding, and signed URLs/cookies if access control is needed. Using S3 as the origin also simplifies lifecycle policies (e.g., transition older videos to S3 Intelligent-Tiering) and reduces cost compared to scaling EFS throughput for peak.

Common Misconceptions: Increasing EFS performance mode (Max I/O) can help with highly parallel workloads but does not inherently solve throughput economics for massive streaming; it can also increase latency and still requires sufficient throughput configuration. Putting CloudFront in front of the ALB while keeping EFS as the origin doesn’t fix the core issue if the videos are effectively uncacheable (frequent access patterns, range requests, cache misses) or if the origin remains constrained.

Exam Tips: For large static assets (images, downloads, video), default to S3 + CloudFront. Use EFS for shared POSIX file systems for applications, not as a CDN origin for high-scale media distribution. When you see buffering + 504s under load, think origin saturation and move heavy static delivery off the compute/file tier to edge/object storage.
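As a rough illustration of the migration step, the sketch below copies the existing video files from the EFS mount into S3 with boto3 so CloudFront can serve them from the edge. The bucket name, mount path, and cache TTL are assumptions for illustration, not part of the original scenario.

```python
import os

import boto3

# Hypothetical bucket and local EFS mount path for illustration only.
BUCKET = "fitness-video-library"
EFS_MOUNT = "/mnt/efs/videos"

s3 = boto3.client("s3")

# Upload each video once to S3; CloudFront then serves it from edge locations.
for name in os.listdir(EFS_MOUNT):
    if not name.endswith(".mp4"):
        continue
    s3.upload_file(
        Filename=os.path.join(EFS_MOUNT, name),
        Bucket=BUCKET,
        Key=f"videos/{name}",
        ExtraArgs={
            "ContentType": "video/mp4",
            # Long TTL: the video files are immutable, so edge caches can keep them.
            "CacheControl": "public, max-age=86400",
        },
    )
```

upload_file handles multipart uploads automatically, which matters for 200 MB objects; the CloudFront distribution would then use the bucket (ideally via origin access control) as its origin.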

Question 3

A gaming analytics company is deploying a matchmaking REST API behind Amazon API Gateway that invokes several AWS Lambda functions written in Java 17. These functions perform heavy static initialization outside the handler (loading third-party libraries, building a Spring context, and creating unique IDs/cryptographic seeds), and cold starts currently take 800–1200 ms during load tests. The service-level objective requires that 95% of cold initializations complete in under 200 ms, traffic bursts from 0 to 60 concurrent requests within seconds, and the team wants the most cost-effective way to minimize startup latency without paying for always-on capacity. Which solution should the team choose?

Incorrect. Moving all initialization into the handler usually increases per-invocation latency and defeats the benefit of doing heavy work once per execution environment. Also, SnapStart cannot be configured on $LATEST; it requires a published version. This option both misuses SnapStart and worsens performance characteristics.

Incorrect for the stated cost goal. Provisioned Concurrency on an alias will reduce cold starts and handle bursts, but it incurs ongoing charges for the provisioned capacity even when traffic is zero. The question explicitly asks to minimize startup latency without paying for always-on capacity, making this less cost-effective for spiky workloads.

Incorrect/less appropriate. While SnapStart must be enabled on published versions and can be combined with Provisioned Concurrency, adding Provisioned Concurrency reintroduces always-on cost. Also, it does not address the correctness risk of snapshotting unique IDs/cryptographic seeds; without moving that logic out of the snapshot, you can still get duplicated or unsafe state.

Correct. A pre-snapshot hook plus moving unique ID/cryptographic seed generation into the handler (or otherwise ensuring it is not captured in the snapshot) addresses the correctness issue of snapshotting uniqueness-sensitive state. Publishing a version and enabling SnapStart on that version is the supported configuration and provides major cold-start reduction for Java without paying for always-on provisioned capacity.

Question analysis

Core Concept: This question tests AWS Lambda cold-start mitigation for Java runtimes using Lambda SnapStart versus Provisioned Concurrency, and how to handle initialization code that must remain unique per execution environment (e.g., cryptographic seeds/unique IDs). It also implicitly tests versioning/alias requirements and cost tradeoffs for bursty traffic behind API Gateway.

Why the Answer is Correct: Lambda SnapStart (for Java) reduces cold starts by taking a snapshot of the initialized execution environment after the init phase and restoring it on demand. This is ideal for heavy static initialization such as loading libraries and building a Spring context. However, anything that must be unique at runtime (random seeds, unique IDs, certain crypto material, timestamps) must not be “frozen” into the snapshot, or you risk duplicated values across restored environments. The correct approach is to use a pre-snapshot hook to prepare the runtime for snapshotting and move uniqueness-sensitive initialization into the handler (or post-restore logic) so it is generated per environment/invocation as appropriate. SnapStart requires publishing a version and enabling SnapStart on that version (not $LATEST). This meets the <200 ms cold init SLO without paying for always-on capacity.

Key AWS Features:
- Lambda SnapStart: snapshots initialized Java execution environments to dramatically reduce cold starts.
- Published versions: SnapStart is enabled on versions; API Gateway should invoke an alias/version for stability.
- Runtime hooks (pre-snapshot/post-restore): ensure correctness when snapshotting (e.g., re-seeding RNG, refreshing ephemeral state).
- Best practice: keep heavy deterministic init outside the handler; keep per-invocation/per-environment uniqueness inside the handler or post-restore.

Common Misconceptions: Provisioned Concurrency can also reduce cold starts, but it charges for pre-warmed capacity and is less cost-effective when traffic is frequently at zero with sudden bursts. Another trap is enabling SnapStart on $LATEST (not supported) or snapshotting unique IDs/seeds, which can cause duplicate identifiers or weakened randomness.

Exam Tips:
- For Java cold starts with heavy frameworks, think SnapStart first when you want on-demand cost behavior.
- Remember: SnapStart works on published versions, not $LATEST.
- If the question mentions unique IDs/crypto seeds or “must be unique,” expect a hook/handler change to avoid snapshotting that state.
- Choose Provisioned Concurrency when you need guaranteed warm capacity and can justify the ongoing cost; choose SnapStart when you want lower cold starts without always-on spend.
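The configuration side of this pattern can be automated with boto3 as sketched below: enable SnapStart for published versions, publish a version, and point an alias at it for API Gateway to invoke. The function and alias names are hypothetical; the uniqueness fix itself (moving ID/seed generation into the handler or a post-restore hook) lives in the Java code and is not shown here.

```python
import boto3

lambda_client = boto3.client("lambda")
FUNCTION_NAME = "matchmaking-api-handler"  # hypothetical function name

# Enable SnapStart for published versions (it does not apply to $LATEST).
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    SnapStart={"ApplyOn": "PublishedVersions"},
)

# Wait for the configuration update, then publish a version; the snapshot
# is created for that published version.
lambda_client.get_waiter("function_updated_v2").wait(FunctionName=FUNCTION_NAME)
version = lambda_client.publish_version(FunctionName=FUNCTION_NAME)["Version"]

# Point a stable alias at the SnapStart-enabled version for API Gateway to invoke.
lambda_client.create_alias(
    FunctionName=FUNCTION_NAME,
    Name="live",
    FunctionVersion=version,
)
```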

Question 4

A design studio is enabling 1,200 employees to work remotely and needs a scalable, cost-effective AWS Client VPN solution so that up to 300 concurrent users can securely access internal workloads deployed in six VPCs across four AWS accounts. A shared networking (hub) account hosts a hub VPC already peered to five spoke VPCs with non-overlapping CIDR blocks, and the corporate network currently reaches the hub VPC via a single AWS Site-to-Site VPN. The solution must minimize monthly spend while allowing remote users to reach applications in all VPCs. Which solution meets these requirements?

Creating a Client VPN endpoint in each AWS account could provide access, but it is not the best architecture here because it creates multiple endpoints, duplicated configuration, and higher operational overhead. It also does not inherently solve centralized access across all six VPCs from a single user connection unless users connect to different endpoints as needed. The question asks for a scalable solution that allows remote users to reach applications in all VPCs, and this option fragments access rather than centralizing it. It is also less elegant and potentially more expensive than a single endpoint with proper transit routing.

This option is incorrect because VPC peering does not support transitive routing. A Client VPN endpoint associated with the hub VPC can provide access to resources in that VPC, but traffic from VPN clients cannot simply traverse the hub and then continue over peering connections to the spoke VPCs as a transit path. Even with route tables and authorization rules configured, AWS does not allow using a peered VPC as a transit gateway for this purpose. Therefore, a single Client VPN endpoint in the hub account cannot reach all peered VPCs using peering alone.

This is the correct answer because a single AWS Client VPN endpoint can provide centralized remote access only when the underlying network supports transitive routing between the client-connected VPC and the other VPCs. AWS Transit Gateway is designed for exactly this multi-account, multi-VPC connectivity pattern and allows the hub VPC and all spoke VPCs to exchange routes in a controlled, scalable way. By attaching each VPC to the Transit Gateway and associating the Client VPN endpoint with the hub VPC, remote users can reach applications in all six VPCs through one endpoint. While Transit Gateway adds cost, it is the only option listed that actually satisfies the technical routing requirement.

This option is incorrect because AWS Client VPN is not designed to connect to an existing Site-to-Site VPN as a mechanism for reaching AWS workloads across multiple VPCs. The Site-to-Site VPN connects the corporate network to AWS, but remote users connecting through Client VPN would still face the same underlying routing limitations to the spoke VPCs. It also introduces unnecessary hairpinning and complexity without solving the lack of transitive connectivity between the hub VPC and the peered spokes. As a result, this design does not meet the requirement for scalable access to applications in all VPCs.

Question analysis

Core Concept: This question tests whether AWS Client VPN can be centralized for multi-account, multi-VPC remote access and what network construct is required for scalable routing. The key point is that VPC peering is non-transitive, so traffic from a Client VPN endpoint associated with a hub VPC cannot simply traverse peering connections to reach spoke VPCs. To provide remote users access to workloads across multiple VPCs and accounts from a single Client VPN endpoint, you need a transit routing service such as AWS Transit Gateway.

Why the Answer is Correct: A single Client VPN endpoint in the shared networking account combined with AWS Transit Gateway is the correct design because Transit Gateway supports centralized, transitive routing across multiple VPCs and accounts. The Client VPN endpoint associates with subnets in the hub VPC, that VPC attaches to the Transit Gateway, and each spoke VPC in the other accounts also attaches to the Transit Gateway. With proper route propagation and authorization rules, remote users can access applications in all six VPCs from one VPN endpoint. Although Transit Gateway adds cost, it is the only option presented that satisfies the connectivity requirement with a single scalable endpoint.

Key AWS Features / Configurations:
- AWS Client VPN endpoint associated with subnets in the shared networking account.
- AWS Transit Gateway attachments for the hub VPC and each spoke VPC across accounts.
- Transit Gateway route tables configured so the Client VPN client CIDR and all VPC CIDRs are mutually reachable.
- Client VPN authorization rules for the required destination CIDRs.
- Security groups and network ACLs updated to allow traffic from the Client VPN client CIDR.

Common Misconceptions:
- Assuming VPC peering allows a centralized Client VPN endpoint to reach peered VPCs. It does not, because peering does not support transitive routing.
- Thinking the existing Site-to-Site VPN can be reused as a path for remote users to reach all AWS VPCs. That would not solve the core routing limitation and is not how Client VPN is designed for this use case.
- Assuming the cheapest-looking option is automatically correct. If it cannot meet the routing requirement, it is not a valid solution.

Exam Tips: When you see centralized remote access to many VPCs across accounts, immediately check whether the existing network is based on peering or Transit Gateway. For hub-and-spoke remote access with one VPN endpoint, Transit Gateway is usually the required service because it supports transitive routing. On the exam, eliminate answers that rely on VPC peering for transit between VPN clients and other VPCs.
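Assuming the Transit Gateway attachments are already in place, the Client VPN side still needs a route and an authorization rule for each spoke CIDR, roughly as in this boto3 sketch. The endpoint ID, subnet ID, and CIDR ranges are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs/CIDRs for illustration only.
CLIENT_VPN_ENDPOINT_ID = "cvpn-endpoint-0123456789abcdef0"
HUB_SUBNET_ID = "subnet-0aaa1111bbb2222cc"
SPOKE_CIDRS = ["10.1.0.0/16", "10.2.0.0/16", "10.3.0.0/16", "10.4.0.0/16", "10.5.0.0/16"]

for cidr in SPOKE_CIDRS:
    # Route spoke-bound traffic from the endpoint into the associated hub subnet;
    # the hub VPC route tables then forward it to the Transit Gateway.
    ec2.create_client_vpn_route(
        ClientVpnEndpointId=CLIENT_VPN_ENDPOINT_ID,
        DestinationCidrBlock=cidr,
        TargetVpcSubnetId=HUB_SUBNET_ID,
    )
    # Authorization rule so VPN clients are allowed to reach the spoke CIDR.
    ec2.authorize_client_vpn_ingress(
        ClientVpnEndpointId=CLIENT_VPN_ENDPOINT_ID,
        TargetNetworkCidr=cidr,
        AuthorizeAllGroups=True,
    )
```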

Question 5

A company uses AWS Organizations across 12 AWS accounts in two OUs. The Commerce account (ID 111122223333) runs an order processing application that stores confidential data in an Amazon DynamoDB table named Orders in us-east-1 (about 60 GB, 50,000 items). The analytics team in the Insights account (ID 444455556666) needs read-only access for Query and GetItem operations to only the order_id, total_amount, and order_date attributes; they must not be able to view or write the email, card_token, or address attributes. The solution must work cross-account, enforce least privilege, and must not require data duplication. What should a solutions architect do?

Incorrect. Service control policies (SCPs) in AWS Organizations do not grant permissions; they only define the maximum available permissions for accounts/OUs. An SCP cannot be used to provide the Insights account access to Commerce resources, and SCPs also do not implement DynamoDB attribute-level filtering/redaction. At best, an SCP could prevent certain actions, but it cannot create cross-account read access.

Correct. Creating an IAM role in the Commerce account with a trust policy that allows assumption from the Insights account is the standard cross-account pattern. Attach an identity-based policy to that role allowing only dynamodb:Query and dynamodb:GetItem on the Orders table (and required indexes). This enforces least privilege without duplicating data and keeps control with the data-owning account.

Incorrect. DynamoDB does not support attaching a general resource-based IAM policy directly to a table in the same way Amazon S3 bucket policies work for cross-account access. Cross-account access to DynamoDB is typically done through IAM role assumption (or via AWS services like API Gateway/Lambda acting as a controlled access layer). Also, attribute-level hiding is not achieved by a table policy.

Incorrect. A permissions boundary is a guardrail that limits the maximum permissions a role can receive, but it does not by itself implement the correct cross-account access pattern or replace the need for a properly scoped identity policy and trust relationship. You would still need the Commerce role with least-privilege permissions and a trust policy. The boundary adds complexity without solving the attribute-visibility requirement.

Question analysis

Core Concept: This question tests cross-account access design in AWS Organizations with least privilege, specifically using IAM role assumption and DynamoDB fine-grained access control. It also tests understanding of what can (and cannot) be restricted at the IAM layer for DynamoDB items/attributes.

Why the Answer is Correct: Option B is the correct architectural pattern for cross-account access without data duplication: create an IAM role in the resource-owning account (Commerce) that the consumer account (Insights) can assume. This keeps control with the data owner and supports least privilege by limiting actions to dynamodb:Query and dynamodb:GetItem on the specific Orders table. The trust policy on the role allows sts:AssumeRole only from the Insights account (and ideally a specific principal/role in that account).

Key AWS Features:
1) Cross-account IAM role assumption: A role in Commerce with a trust policy for Insights is the standard, auditable approach.
2) Least privilege on DynamoDB actions/resources: Scope permissions to the table ARN (and optionally indexes) and only the required read APIs.
3) Important limitation: IAM “fine-grained access control” for DynamoDB can restrict access by leading keys (dynamodb:LeadingKeys) and can restrict actions, but IAM does not reliably enforce per-attribute redaction for Query/GetItem results. To truly prevent viewing email/card_token/address while still querying the same items, you typically need application-layer controls (e.g., a proxy/API that projects attributes) or a separate table/view. However, among the provided options, B is the only one that correctly implements cross-account access control in the owning account without duplication and aligns with AWS best practices.

Common Misconceptions:
- Confusing SCPs with permission grants: SCPs only set maximum permissions; they do not grant access.
- Assuming DynamoDB supports table-attached resource policies like S3: DynamoDB does not support general resource-based policies on tables for IAM authorization in the way S3 does.
- Using permissions boundaries to “enforce” least privilege across accounts: boundaries limit what a role can do but do not solve cross-account trust/authorization by themselves.

Exam Tips: For cross-account access to a data store in one account, the default exam answer is usually “create a role in the data-owning account and allow the other account to assume it,” then apply least-privilege permissions on that role. Remember: SCPs restrict, they don’t grant; and not all services support resource-based policies. When a question asks for attribute-level secrecy, be cautious: DynamoDB IAM controls are strongest for item-level/key-based restrictions, not true column-level masking.
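A minimal sketch of the cross-account role, using hypothetical role and principal names: the trust policy limits sts:AssumeRole to a specific role in the Insights account, and the inline permissions policy allows only the two read actions on the Orders table. As discussed above, attribute-level redaction would still require an application-layer projection.

```python
import json

import boto3

# Hypothetical role and policy names for illustration only.
ROLE_NAME = "orders-readonly-for-insights"

TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # Only a specific role in the Insights account (444455556666) may assume this role.
        "Principal": {"AWS": "arn:aws:iam::444455556666:role/analytics-readers"},
        "Action": "sts:AssumeRole",
    }],
}

PERMISSIONS_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:Query", "dynamodb:GetItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/Orders",
    }],
}

iam = boto3.client("iam")  # run with credentials in the Commerce account

iam.create_role(
    RoleName=ROLE_NAME,
    AssumeRolePolicyDocument=json.dumps(TRUST_POLICY),
)
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="orders-read-only",
    PolicyDocument=json.dumps(PERMISSIONS_POLICY),
)
```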


Question 6

A digital media conglomerate uses AWS Organizations with all features enabled and has three OUs (Studio, Gaming, Analytics), each with a shared services account used by central vendor managers. Leadership requires that about 120 developers across 12 product accounts can purchase only preapproved third-party software via AWS Marketplace Private Marketplace, and that Private Marketplace administration be restricted strictly to a role named vendor-admin-role assumed only by vendor managers in the shared services accounts, while all other IAM users, groups, roles, account administrators, and root users across the organization are denied Private Marketplace administrative permissions. What is the MOST efficient architecture to centrally enforce these controls and also prevent creation of spoofed admin roles?

Incorrect. This is operationally heavy (role and inline deny policies in every account) and not centrally enforced. IAM policies cannot reliably override all possible privilege paths, and account admins/root could still grant themselves permissions unless constrained by SCPs. Also, creating vendor-admin-role in every account expands the attack surface and does not address spoofing or consistent organization-wide enforcement.

Incorrect. Permission boundaries limit only the roles/users they are attached to; they do not restrict other principals (including admins/root) and do not provide organization-wide enforcement. Creating vendor-admin-role in every account with AdministratorAccess is the opposite of least privilege and increases risk. Boundaries on developer roles do not prevent other roles from gaining Private Marketplace admin permissions.

Correct. Creating vendor-admin-role only in the shared services accounts aligns with the requirement that only vendor managers in those accounts can administer AWS Marketplace Private Marketplace. A root-level SCP can deny the relevant Private Marketplace administrative actions for all principals except the specific vendor-admin-role ARNs, which ensures the restriction applies consistently across all OUs and accounts, including highly privileged principals. A second SCP can deny iam:CreateRole for the role name vendor-admin-role outside the trusted shared services accounts, which prevents role-name spoofing that could otherwise bypass the exception.

Incorrect. It places the admin role in developer product accounts, violating the requirement that administration be restricted to vendor managers in shared services accounts. Attaching an SCP only to shared services OUs is backwards; the restriction must apply across the entire organization. It also fails to prevent spoofing and does not centrally deny other accounts from gaining Private Marketplace admin capabilities.

Question analysis

Core Concept: This question is about using AWS Organizations SCPs for centralized preventive guardrails, combined with a tightly scoped IAM role design for AWS Marketplace Private Marketplace administration. The requirement explicitly says all other principals, including account administrators and root users, must be denied administrative permissions, which points to SCPs rather than account-local IAM controls.

Why the Answer is Correct: Option C is the most efficient design because it keeps the privileged vendor-admin-role only in the shared services accounts and uses root-level SCPs to enforce organization-wide denial of Private Marketplace administrative actions for everyone except that trusted role in those accounts. SCPs apply to all member accounts and limit the maximum permissions available, so even highly privileged IAM principals in product accounts cannot bypass them. A second SCP can deny creation of any IAM role named vendor-admin-role outside the trusted shared services accounts, preventing role-name spoofing that could otherwise satisfy a role-name-based exception.

Key AWS Features:
1) SCPs: SCPs are the correct control when the requirement is to centrally deny actions across accounts, including for administrators. They define the permission ceiling for member accounts in the organization.
2) Principal-based exception: The deny SCP can use a condition such as aws:PrincipalArn to exempt only the specific vendor-admin-role ARNs in the shared services accounts. This preserves least privilege while allowing the vendor managers to administer Private Marketplace.
3) Anti-spoofing guardrail: A separate SCP can deny iam:CreateRole when the requested role name is vendor-admin-role in untrusted accounts or OUs. This prevents product accounts from creating a same-named role to exploit the exception logic.

Common Misconceptions: A common mistake is assuming IAM policies or permission boundaries can centrally block all users across an organization. They cannot override root-level organizational guardrails and are scoped to individual accounts or attached principals. Another misconception is that duplicating the admin role into every account is simpler, when it actually increases blast radius and weakens central governance.

Exam Tips: When a question says to deny permissions for everyone except a specific trusted role across many accounts, think SCPs first. When the exception is based on a role identity, also consider how to prevent look-alike role creation in other accounts. Prefer centralized controls at the organization root when the requirement spans all OUs and accounts.
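The two guardrails could look roughly like the policy documents below, expressed here as Python dictionaries. The account IDs are hypothetical, and the aws-marketplace action list is illustrative only; the exact Private Marketplace administrative actions should be confirmed against the AWS Marketplace documentation before attaching the SCP.

```python
# Hypothetical shared services account IDs; the aws-marketplace action list is
# illustrative and must be verified against current AWS Marketplace documentation.
DENY_PRIVATE_MARKETPLACE_ADMIN = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyPrivateMarketplaceAdminExceptVendorRole",
        "Effect": "Deny",
        "Action": [
            "aws-marketplace:AssociateProductsWithPrivateMarketplace",
            "aws-marketplace:DisassociateProductsFromPrivateMarketplace",
            "aws-marketplace:CreatePrivateMarketplaceRequests",
        ],
        "Resource": "*",
        "Condition": {
            "ArnNotLike": {
                "aws:PrincipalArn": [
                    "arn:aws:iam::222233334444:role/vendor-admin-role",
                    "arn:aws:iam::333344445555:role/vendor-admin-role",
                ]
            }
        },
    }],
}

# Attached to the product OUs (not the shared services accounts) to block
# creation of look-alike roles that would match the exception above.
DENY_SPOOFED_ADMIN_ROLE = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenySpoofedVendorAdminRole",
        "Effect": "Deny",
        "Action": "iam:CreateRole",
        "Resource": "arn:aws:iam::*:role/vendor-admin-role",
    }],
}
```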

Question 7

A university genomics department runs an on-premises NAS that exports NFSv3 shares and wants to back up 8 TB of lab data to Amazon S3 each week using a solution that preserves NFS access, automatically archives objects after 5 days, and allows up to 72 hours of retrieval time during disaster recovery. Which approach is the most cost-effective?

File Gateway correctly preserves NFSv3 access and Lifecycle can transition objects after 5 days, but S3 Standard-IA is not the most cost-effective archival choice for backups when multi-hour retrieval is acceptable. Standard-IA is designed for infrequent access with millisecond retrieval and has higher storage cost (plus minimum storage duration and retrieval fees). Given a 72-hour DR retrieval window, Deep Archive would reduce storage cost significantly.

Volume Gateway presents iSCSI block volumes, not NFS file shares, so it fails the “preserves NFS access” requirement. While transitioning to S3 Glacier Deep Archive could be cost-effective for archival, the access method is wrong for the stated workflow. Volume Gateway is better when applications require block storage semantics (e.g., iSCSI) rather than file-based NFS/SMB shares.

Tape Gateway is intended for integrating with existing backup software that writes to a virtual tape library (VTL), not for preserving NFSv3 file access. It can be cost-effective for tape-based backup workflows, but it changes the access pattern and tooling. Additionally, transitioning to S3 Standard-IA is not an archival optimization compared to Glacier Deep Archive for long-term backup retention.

File Gateway meets the requirement to preserve NFSv3 access by exposing an NFS share on premises while storing data in S3. An S3 Lifecycle rule transitioning objects to S3 Glacier Deep Archive after 5 days provides the lowest-cost archival storage. Deep Archive retrieval is asynchronous and can take hours, which fits the allowed up to 72-hour retrieval time during disaster recovery, making this the most cost-effective compliant option.

Question analysis

Core Concept: This question tests selecting the right AWS Storage Gateway mode for on-premises NFS access while using Amazon S3 as durable backup storage, and pairing it with the correct S3 storage class based on retrieval-time requirements and cost.

Why the Answer is Correct: The department needs to preserve NFSv3 access for backups, so AWS Storage Gateway File Gateway is the correct gateway type because it presents an NFS (and SMB) file share on premises while storing objects in S3. The data should be automatically archived after 5 days, and disaster recovery allows up to 72 hours of retrieval time. S3 Glacier Deep Archive is the lowest-cost archival storage class and supports retrieval times measured in hours (standard retrieval typically within 12 hours; bulk can be longer), which fits within a 72-hour RTO for data restore. Therefore, File Gateway + an S3 Lifecycle transition to Glacier Deep Archive after 5 days is the most cost-effective solution that still meets access and retrieval constraints.

Key AWS Features:
- AWS Storage Gateway File Gateway: NFSv3-compatible mount point for on-premises systems; stores files as S3 objects; supports a local cache for frequently accessed data.
- Amazon S3 Lifecycle rules: automate transitions to archival classes based on object age (e.g., 5 days).
- S3 Glacier Deep Archive: lowest storage cost for long-term retention; retrieval is asynchronous and must be initiated (acceptable given the 72-hour allowance).

Common Misconceptions: A common trap is choosing S3 Standard-IA because it is “cheaper than Standard,” but it is not an archival tier and is typically far more expensive than Deep Archive for weekly 8 TB backups retained long-term. Another trap is picking Volume Gateway or Tape Gateway: they can be valid for block or VTL workflows, but they do not preserve NFS file-share access as required.

Exam Tips: When you see “preserve NFS/SMB access,” think Storage Gateway File Gateway. When you see “archive after X days” and “hours-to-days retrieval acceptable,” think Glacier/Glacier Deep Archive with Lifecycle transitions. Match retrieval-time tolerance to the archival tier: Deep Archive is best when the business can wait hours and wants the lowest cost.
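A minimal boto3 sketch of the lifecycle rule, assuming a hypothetical bucket that backs the File Gateway share:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "genomics-lab-backups"  # hypothetical bucket behind the File Gateway share

# Transition backup objects to S3 Glacier Deep Archive 5 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-lab-data-after-5-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [{"Days": 5, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)
```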

Question 8
(Select 2)

A retail enterprise operates 9 AWS accounts under a shared billing model. A compliance review found that 37 out of 120 Amazon RDS DB instances (MySQL and PostgreSQL) have unencrypted storage, and a solutions architect must migrate these DB instances to use a customer managed AWS KMS key with no more than 10 minutes of downtime per instance, while also ensuring that any newly created unencrypted RDS instances in any account are automatically detected within 15 minutes and that centralized, multi-account governance emphasizes security and compliance. Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)

AWS Control Tower with strongly recommended guardrails can improve governance posture, but this option does not directly address the requirement to detect newly created unencrypted RDS instances within 15 minutes. Strongly recommended controls are optional enhancements rather than a concrete detection workflow tied to RDS creation events. It also does nothing to remediate the existing unencrypted DB instances. Therefore, it is incomplete for the stated requirements.

This is incorrect because Amazon RDS does not support enabling storage encryption in place on an existing unencrypted DB instance. The ModifyDBInstance operation cannot convert unencrypted storage to encrypted storage after creation. This is a common exam trap because many AWS services allow in-place changes, but RDS encryption at rest is not one of them. A replacement-based migration is required instead.

This is the correct remediation path for existing unencrypted Amazon RDS DB instances. RDS storage encryption is defined when the DB instance is created, so the supported approach is to create a snapshot, copy that snapshot with encryption enabled using a customer managed KMS key, and restore a new encrypted DB instance. Because the source database can remain online during the snapshot copy and restore process, the outage is generally limited to the final cutover. With proper endpoint planning and maintenance coordination, that cutover can typically be kept within the 10-minute downtime requirement.

AWS Control Tower with mandatory guardrails is useful for establishing a multi-account governance baseline, but it does not directly fulfill the explicit requirement to automatically detect newly created unencrypted RDS instances within 15 minutes. Mandatory guardrails focus on foundational governance and compliance controls, not necessarily near-real-time event detection for this specific RDS condition. Since the question asks for only two choices, the option that directly implements rapid detection is a better fit. As a result, D is helpful contextually but not one of the best answers.

This option best satisfies the requirement to automatically detect newly created unencrypted RDS instances within 15 minutes. AWS CloudTrail captures RDS API activity such as DB instance creation, and Amazon EventBridge can evaluate those events and trigger notifications or remediation workflows almost immediately across accounts. Although the wording says 'automatically encrypt,' which is not technically possible in place, the key capability here is rapid automatic detection of noncompliant instance creation. In exam terms, this is the only option that directly addresses the explicit detection timing requirement with an event-driven mechanism.

Question analysis

Core concept: This question combines two tasks: remediating existing unencrypted Amazon RDS instances and detecting future noncompliant RDS creations across multiple AWS accounts. For existing RDS instances, storage encryption cannot be enabled on an unencrypted DB instance in place, so a replacement-based migration is required. For new instances, the requirement is explicit about automatic detection within 15 minutes, which is best met by event-driven monitoring of RDS creation activity across accounts.

Why correct: Option C is the standard and supported way to migrate an unencrypted RDS DB instance to encrypted storage with a customer managed AWS KMS key. By snapshotting the source, copying the snapshot with encryption enabled, restoring a new DB instance, and cutting over at the end, most of the work happens while the original database remains available, keeping downtime limited to the final switchover. Option E provides near-real-time detection of newly created unencrypted RDS instances by using CloudTrail management events and EventBridge rules, which can trigger notifications or remediation workflows well within 15 minutes.

Key features:
- Amazon RDS encryption is immutable after DB instance creation; remediation requires snapshot/copy/restore or equivalent replacement.
- Snapshot copy supports enabling encryption with a customer managed KMS key.
- AWS CloudTrail records CreateDBInstance and related API calls across accounts.
- Amazon EventBridge can match those events and invoke automation quickly for centralized compliance detection and response.

Common misconceptions:
- You cannot modify an existing unencrypted RDS instance to turn on encryption.
- Control Tower guardrails improve governance, but they do not by themselves guarantee the specific event-driven detection behavior described in the requirement.
- EventBridge cannot retroactively encrypt an already created unencrypted DB instance in place; it can only detect and trigger follow-up actions.

Exam tips: When a question asks for both remediation of existing RDS encryption gaps and fast detection of future violations, look for one option that handles the immutable nature of RDS encryption and another that provides near-real-time event detection. Be careful not to overvalue governance frameworks when the requirement explicitly calls for a concrete detection mechanism and timing target.
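The remediation path in option C maps to three RDS API calls, sketched below with boto3; the instance identifier and KMS key ARN are hypothetical. The final dictionary shows the kind of EventBridge event pattern the detection side (option E) could match against CloudTrail-recorded CreateDBInstance calls.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical identifiers for illustration only.
SOURCE_DB = "orders-mysql-prod"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

# 1) Snapshot the unencrypted source instance (the source stays online).
rds.create_db_snapshot(
    DBSnapshotIdentifier=f"{SOURCE_DB}-unencrypted",
    DBInstanceIdentifier=SOURCE_DB,
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=f"{SOURCE_DB}-unencrypted")

# 2) Copy the snapshot with encryption enabled using the customer managed KMS key.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier=f"{SOURCE_DB}-unencrypted",
    TargetDBSnapshotIdentifier=f"{SOURCE_DB}-encrypted",
    KmsKeyId=KMS_KEY_ARN,
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=f"{SOURCE_DB}-encrypted")

# 3) Restore a new encrypted instance, then cut application endpoints over to it.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier=f"{SOURCE_DB}-encrypted",
    DBSnapshotIdentifier=f"{SOURCE_DB}-encrypted",
)

# Detection side (option E): an EventBridge rule matching CloudTrail-recorded
# CreateDBInstance calls can flag unencrypted instances within minutes.
CREATE_DB_EVENT_PATTERN = {
    "source": ["aws.rds"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"eventSource": ["rds.amazonaws.com"], "eventName": ["CreateDBInstance"]},
}
```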

Question 9

A logistics startup runs a serverless order-tracking backend where a public Amazon API Gateway HTTP API invokes multiple AWS Lambda functions for different routes. The company’s FinOps team needs an automated biweekly (every 14 days at 00:30 UTC) CSV report that lists, for each API Lambda function, the recommended configured memory, the estimated monthly cost based on the recommendation, and the price difference compared to current settings. Each report must cover the prior 14-day window and be saved to an Amazon S3 bucket (for example, s3://finops-reports/serverless/biweekly/), and the company wants the solution that requires the least development time. Which approach should they use?

This option requires building a full custom recommendation engine. CloudWatch Logs can show invocation behavior and memory usage details, but they do not directly provide right-sizing recommendations or estimated monthly savings, so the team would need to implement analysis logic and pricing calculations themselves. That creates significantly more development and maintenance effort than using Compute Optimizer. It also increases the risk of inaccurate recommendations compared to AWS’s managed optimization service.

Correct. AWS Compute Optimizer is the AWS service designed to generate Lambda memory recommendations and estimated cost impact based on observed utilization patterns. Using Compute Optimizer avoids writing custom code to infer optimal memory settings or calculate pricing deltas, which is exactly what the question means by least development time. Adding an EventBridge schedule and a small Lambda function to trigger the export to S3 is a lightweight automation pattern that fits the required biweekly CSV delivery model.

This option is incorrect because Compute Optimizer does not provide a native console feature to schedule recurring biweekly CSV exports directly to S3 as described. While the console can display recommendations and support export-related workflows, recurring automation still needs an external scheduler or orchestration mechanism. Therefore, this answer claims a built-in scheduling capability that is not the right assumption for the exam. It sounds convenient, but it is not the supported least-effort implementation pattern.

Trusted Advisor is not the primary AWS service for detailed Lambda memory right-sizing recommendations with estimated monthly cost impact. Even with a Business Support plan, Trusted Advisor does not replace Compute Optimizer for this use case and does not provide the described recurring export-to-S3 workflow. It also introduces unnecessary cost and an additional service dependency. For Lambda optimization recommendations, Compute Optimizer is the more direct and purpose-built choice.

Question analysis

Core Concept: This question is about using AWS Compute Optimizer to obtain AWS Lambda memory right-sizing recommendations and estimated cost impact, then automating delivery of a CSV report to Amazon S3 on a biweekly schedule with minimal custom development. The key is to choose the managed service that already calculates recommended memory and savings rather than building that logic yourself.

Why correct: AWS Compute Optimizer supports AWS Lambda recommendations, including recommended memory configuration and estimated monthly savings or cost difference compared to the current setting. The lowest-effort design is to opt in to Compute Optimizer and use scheduled automation, such as Amazon EventBridge invoking a small AWS Lambda function, to initiate the export of recommendations to an S3 bucket every 14 days at 00:30 UTC. This satisfies the need for recurring CSV output in S3 while avoiding custom analytics over logs or metrics.

Key features:
- Compute Optimizer analyzes Lambda usage and produces memory recommendations.
- Recommendations include projected cost impact, which aligns with the FinOps reporting requirement.
- Amazon EventBridge can run on a cron schedule in UTC for biweekly automation.
- Amazon S3 is a supported destination for exported recommendation reports.

Common misconceptions:
- CloudWatch Logs do not natively provide right-sizing recommendations, so building them yourself is much more work.
- Compute Optimizer does not provide a simple console-only recurring scheduler for this exact export workflow.
- Trusted Advisor is not the primary service for Lambda memory right-sizing recommendations and is not the best fit here.

Exam tips: When a question asks for recommended resource sizing plus estimated savings with the least development effort, prefer AWS Compute Optimizer over custom analysis. If recurring delivery is required, pair the managed recommendation service with EventBridge scheduling and S3 storage.
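The export step that the scheduled Lambda would run can be as small as the boto3 call below. The bucket and prefix come from the question; the fieldsToExport values are illustrative and should be checked against the ExportableLambdaFunctionField list in the Compute Optimizer API reference.

```python
import boto3

# Bucket and prefix from the question; a small Lambda invoked by an EventBridge
# schedule (for example rate(14 days)) would run this call every two weeks.
BUCKET = "finops-reports"
PREFIX = "serverless/biweekly/"

co = boto3.client("compute-optimizer")

# Export the current Lambda memory recommendations as a CSV file to S3.
# Field names below are illustrative; verify them against the API reference.
co.export_lambda_function_recommendations(
    s3DestinationConfig={"bucket": BUCKET, "keyPrefix": PREFIX},
    fieldsToExport=[
        "FunctionArn",
        "CurrentConfigurationMemorySize",
        "RecommendationOptionsConfigurationMemorySize",
        "RecommendationOptionsEstimatedMonthlySavingsValue",
    ],
    fileFormat="Csv",
)
```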

Question 10

A multinational fintech firm is replatforming 80% of its payment and analytics workloads to AWS by the end of Q4, but all cardholder-processing services for 12 jurisdictions must remain in country or in the firm’s Frankfurt colocation to satisfy data residency and maintain sub-5 ms round-trip latency to on-prem HSMs. In addition, 140 rural retail kiosks have only 5–10 Mbps backhaul with daily 30–60 minute outages, and the firm requires that developers use the same AWS APIs, IAM, and CI/CD tools to build once and deploy unchanged across on-premises, an AWS Region, or a hybrid topology. Which solution provides a consistent hybrid experience under these constraints?

Direct Connect improves bandwidth and latency from a colocation to an AWS Region, but it does not keep workloads physically on premises or in-country for jurisdictions requiring local processing. It also cannot guarantee sub-5 ms RTT to on-prem HSMs if compute is in-Region. Finally, it does not address kiosk outages or provide local compute during disconnections; it’s primarily a connectivity solution, not a consistent hybrid runtime.

Snowball Edge Storage Optimized is mainly for data transfer and local storage with limited compute; it is not intended to host regulated, always-on cardholder-processing services with a broad set of native AWS services and consistent control-plane integration like Outposts. AWS Wavelength is designed for ultra-low latency applications on 5G networks in carrier locations, not for rural kiosks with 5–10 Mbps backhaul and frequent outages, and it doesn’t provide disconnected operation.

AWS Outposts racks are the correct choice for the regulated cardholder-processing workloads because they place AWS infrastructure directly in the customer’s colocation or in-country facility while preserving the same AWS APIs, IAM model, and deployment tooling used in AWS Regions. That directly satisfies the requirement for a consistent hybrid experience and supports very low latency to nearby on-prem HSMs because traffic stays on local network paths instead of traversing to a Region. Snowball Edge Compute Optimized is the best fit among the listed options for the rural kiosks because it can run local compute workloads during low-bandwidth periods and temporary WAN outages. Although Snowball Edge does not provide the same full AWS-consistent control-plane experience as Outposts, it is specifically designed for disconnected edge scenarios and is therefore the best available kiosk component in the answer set.

Local Zones extend an AWS Region closer to large metros, but they are still AWS-managed locations and may not satisfy strict “must remain in country or in our Frankfurt colocation” requirements across 12 jurisdictions, especially where no Local Zone exists. Wavelength targets 5G carrier edge and assumes reliable connectivity; it does not solve kiosk disconnections or low-bandwidth backhaul constraints, nor does it provide an on-prem consistent AWS runtime like Outposts.

Question analysis

Core Concept: This question tests which AWS hybrid and edge services best satisfy three simultaneous constraints: strict in-country/on-prem data residency with very low latency to local HSMs, intermittent low-bandwidth edge operation at kiosks, and a consistent AWS development and operations model. The key distinction is that AWS Outposts provides a true on-prem AWS experience with the same APIs, IAM, and tooling, while Snowball Edge is an edge compute device designed for remote or disconnected environments.

Why correct: Option C is the best overall solution because AWS Outposts racks can be installed in the Frankfurt colocation or other in-country facilities so regulated cardholder-processing workloads remain local and can communicate with on-prem HSMs over local network paths with very low latency. Outposts extends AWS infrastructure and services on premises and uses the same AWS APIs, IAM, CloudFormation, and operational tooling, which directly supports the requirement to build once and deploy with a consistent model across on-premises and AWS Regions. For the rural kiosks, Snowball Edge Compute Optimized is the best listed service because it can run local compute such as EC2 instances and container-based workloads even when WAN connectivity is poor or temporarily unavailable.

Key features:
- AWS Outposts racks provide AWS-managed infrastructure on premises with native AWS APIs, IAM integration, and familiar deployment tooling.
- Outposts is designed for low-latency access to on-prem systems and for workloads that must remain in a specific physical location.
- Snowball Edge Compute Optimized provides ruggedized local compute for edge sites with limited or intermittent connectivity.
- Snowball Edge can support local processing at kiosks, but any data movement or reconciliation back to AWS must be designed into the application or transfer workflow rather than assumed as transparent built-in synchronization.

Common misconceptions: A common mistake is to choose Direct Connect, Local Zones, or Wavelength simply because they reduce latency. Those services do not place the workload inside the customer’s facility in the same way Outposts does, and they do not solve disconnected edge operation at rural kiosks. Another misconception is to treat Snowball Edge as equivalent to Outposts for consistent AWS APIs and control-plane behavior; Snowball Edge supports edge compute, but it is not the primary service for full on-prem AWS consistency.

Exam tips: When a question emphasizes the same AWS APIs, IAM, and deployment tooling on premises, Outposts is usually the strongest signal. When a question emphasizes remote sites with poor connectivity or temporary disconnections, Snowball Edge is often the edge-device answer if Outposts is impractical. In mixed-constraint questions, select the option that best maps each requirement to the most appropriate AWS service, even if one component is not identical in operational model to the primary platform.
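As a small illustration of the “same APIs” point for Outposts: launching an instance on an Outpost uses the ordinary EC2 RunInstances call, differing only in that the target subnet lives on the Outpost. The IDs below are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Hypothetical subnet created on an Outpost in the Frankfurt colocation; the
# deployment call is the same RunInstances API used in any AWS Region.
OUTPOST_SUBNET_ID = "subnet-0outpost1234567890"

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="m5.2xlarge",
    MinCount=1,
    MaxCount=1,
    SubnetId=OUTPOST_SUBNET_ID,
)
```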

Success stories (9)

s****** · Nov 24, 2025

Study period: 3 months

I used these practice questions and successfully passed my exam. Thanks for providing such well-organized question sets and clear explanations. Many of the questions felt very close to the real exam.

t********** · Nov 13, 2025

Study period: 3 months

Just got certified last week! It was a tough exam, but I’m really thankful to Cloud Pass. The app questions helped me a lot in preparing for it.

효** · Nov 12, 2025

Study period: 1 month

The app served me well ^^

p******* · Nov 7, 2025

Study period: 2 months

These practice exams helped me pass the certification. A lot of the exam questions were similar to the ones here.

d*********** · Nov 7, 2025

Study period: 1 month

Thanks. I think I passed because of the high-quality content here. I’m planning to prepare for my next AWS exam here too.

Other practice tests

Practice Test #1

75 Questions · 180 min · Passing score 750/1000

Practice Test #2

75 Questions · 180 min · Passing score 750/1000

Practice Test #4

75 Questions · 180 min · Passing score 750/1000

Practice Test #5

75 Questions · 180 min · Passing score 750/1000
← View all AWS Certified Solutions Architect - Professional (SAP-C02) practice questions

Start practicing now

Download Cloud Pass and start practicing every AWS Certified Solutions Architect - Professional (SAP-C02) question.


