Cloud Pass
AWS Certified Solutions Architect - Associate (SAA-C03)

Practice Test #10

Simulate the real exam experience with 65 questions and a 130-minute time limit. Practice with AI-verified answers and detailed explanations.

65 Questions · 130 Minutes · 720/1000 Passing Score
Explore Practice Questions

AI-Powered

Answers & Explanations Verified by 3 AIs

Every answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.

GPT Pro
Claude Opus
Gemini Pro
Per-option explanations
In-depth question analysis
3-model consensus accuracy

Practice Questions

Question 1

A web service runs on Amazon EC2 instances behind a load balancer, but many clients can reach only allowlisted IP addresses through their firewalls. The service must be reachable via fixed IP addresses. What should a solutions architect recommend?

Correct. A Network Load Balancer supports static IP addresses by using Elastic IPs (or AWS-owned static IPs) per Availability Zone via subnet mappings. This directly addresses client firewall allowlisting requirements while maintaining multi-AZ high availability and managed load balancing. Clients should allowlist all NLB EIPs (one per enabled AZ) to ensure resilience during failover.

Incorrect. An Application Load Balancer cannot be assigned Elastic IP addresses. ALB is accessed via a DNS name and the underlying IPs can change over time due to scaling and maintenance. For IP allowlisting requirements, ALB alone is not suitable unless paired with another service (e.g., Global Accelerator), which is not offered in the options.

Incorrect. A Route 53 A record can point to an Elastic IP, but that would route traffic to a single public endpoint and does not provide load balancing or inherent multi-AZ resilience. You would be replacing the load balancer with a single IP target, creating a single point of failure and losing managed load balancing behavior.

Incorrect. Putting a public EC2 proxy in front of the load balancer can provide a fixed IP, but it is an anti-pattern for resilience and operations: it introduces a single point of failure (unless you build and manage multiple proxies), adds patching/maintenance burden, and can become a bottleneck. AWS-managed solutions (NLB with EIPs) are preferred for availability and security.

Question Analysis

Core Concept: This question tests how to provide fixed, allowlistable public IP addresses for a load-balanced service on AWS. The key distinction is that most AWS load balancers (especially ALB) do not provide static IPs; instead, you typically use DNS. When customers require IP-based firewall allowlisting, you need an architecture that can present stable IPs.

Why the Answer is Correct: A Network Load Balancer (NLB) can be associated with Elastic IP addresses (EIPs) for each subnet/AZ where the NLB has a node. This gives you stable, fixed public IPs that clients can allowlist, while still providing managed load balancing and high availability. NLB operates at Layer 4 (TCP/UDP/TLS), which is sufficient for many web services and is commonly used specifically for the "static IP for load balancer" requirement.

Key AWS Features:
- NLB + Elastic IPs: You can allocate EIPs and assign them to the NLB's subnet mappings, resulting in one static IP per AZ.
- High availability: Deploy the NLB across multiple AZs; clients allowlist all EIPs.
- Preserves client IP: NLB can pass the source IP to targets (useful for logging and security controls).
- Works with TLS: NLB supports TLS listeners if needed, or you can terminate TLS on targets.

Common Misconceptions: Many assume an Application Load Balancer can use EIPs; it cannot. ALB is reached via a DNS name, and its underlying IPs can change. Another misconception is to "just use Route 53 to point to an EIP," but an EIP is a single endpoint and does not provide load balancing or multi-AZ resilience by itself.

Exam Tips: When you see "clients require fixed IPs/allowlisting" plus "load balancer," think NLB with EIPs (or AWS Global Accelerator in other scenarios, but it is not an option here). Also remember: ALB/NLB are typically addressed by DNS, but only NLB supports static IPs via EIPs. Account for one EIP per enabled AZ and instruct clients to allowlist all of them.
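The NLB-with-EIPs pattern above comes down to the SubnetMappings structure passed to the ELBv2 create-load-balancer API. A minimal sketch, where the subnet IDs and EIP allocation IDs are hypothetical placeholders:

```python
# Hypothetical subnet IDs (one per enabled AZ) and Elastic IP allocation IDs.
SUBNETS = ["subnet-0aaa111", "subnet-0bbb222"]
EIP_ALLOCATIONS = ["eipalloc-0111aaa", "eipalloc-0222bbb"]

def nlb_subnet_mappings(subnets, allocations):
    """Pair each AZ subnet with a static Elastic IP, in the shape expected by
    elbv2 create_load_balancer(Type='network', SubnetMappings=...)."""
    if len(subnets) != len(allocations):
        raise ValueError("provide exactly one EIP allocation per enabled AZ")
    return [
        {"SubnetId": subnet, "AllocationId": allocation}
        for subnet, allocation in zip(subnets, allocations)
    ]

mappings = nlb_subnet_mappings(SUBNETS, EIP_ALLOCATIONS)
print(mappings)
```

Clients should then allowlist the EIPs of all enabled AZs, since DNS may resolve to any of them during failover.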

Question 2

A global consulting firm uses AWS Organizations to manage 25 AWS accounts across different regional offices. The headquarters IT team maintains a central Amazon S3 bucket in the master account containing sensitive client contracts and compliance documents. Due to recent security audits, the firm needs to ensure that only users from accounts within their AWS Organization can access this critical S3 bucket. Implement access control to restrict S3 bucket access to only users from accounts within the AWS Organization, while minimizing ongoing administrative tasks and maintenance effort. Which solution achieves the access control requirements with the LEAST operational overhead?

Correct. aws:PrincipalOrgID in the S3 bucket policy restricts access to principals that belong to the specified AWS Organization ID. It automatically adapts as accounts are added to or removed from the Organization, requiring no ongoing policy edits. This is the standard, lowest-maintenance approach for Organization-scoped access to shared resources like a central S3 bucket.

Not the best choice for least overhead. aws:PrincipalOrgPaths can restrict access based on OU membership, but it introduces additional complexity and potential maintenance if accounts move between OUs or if the OU structure changes. The requirement is simply “within the Organization,” so using Organization ID is simpler and more stable than OU path-based controls.

High operational overhead. CloudTrail-driven automation to detect org membership changes and then rewrite bucket policies is complex, brittle, and unnecessary. It adds moving parts (rules, functions, permissions, error handling) and creates risk of policy drift or delayed enforcement. Native IAM condition keys already provide real-time evaluation without custom automation.

Not aligned with the requirement and increases admin work. Tagging each user (or role) and enforcing aws:PrincipalTag requires consistent tag governance across 25 accounts and ongoing lifecycle management as identities change. It also doesn’t inherently ensure the principal is in the Organization—only that it has a tag—so it’s not the most reliable or minimal-maintenance control for Organization membership.

Question Analysis

Core Concept: This question tests Amazon S3 resource-based access control using IAM policy condition keys that integrate with AWS Organizations. The goal is to restrict access to a central S3 bucket so that only principals (users/roles) that belong to accounts in a specific AWS Organization can access it, with minimal ongoing administration.

Why the Answer is Correct: Using the S3 bucket policy condition key aws:PrincipalOrgID is the most direct, low-maintenance way to enforce "only principals from my Organization" access. You specify the Organization ID (o-xxxxxxxxxx) once in the bucket policy, and S3 evaluates the caller's principal at request time. Any account that is a member of the Organization automatically matches; any account outside the Organization is denied (when combined with an explicit Deny or when Allow statements are scoped accordingly). This eliminates the need to enumerate account IDs, manage per-account statements, or update policies as accounts join or leave.

Key AWS Features:
- S3 bucket policies (resource-based policies) can use global condition keys.
- The aws:PrincipalOrgID condition key restricts access to principals that are part of a specific AWS Organization.
- Works well with centralized data lakes, shared compliance repositories, and multi-account governance patterns.
- Aligns with the AWS Well-Architected Security Pillar: implement least privilege and reduce manual processes that can drift.

Common Misconceptions:
- Many assume you must list every account ID in the bucket policy. That increases operational overhead and is error-prone as accounts change.
- Some confuse OU-based controls (paths) with Organization-wide controls; OU paths can be useful but are more complex and can change with reorganizations.
- Monitoring and auto-updating policies (CloudTrail + automation) sounds robust, but it is unnecessary when a native condition key already provides dynamic membership enforcement.

Exam Tips: When you see "only accounts in my AWS Organization" and "least operational overhead," look for aws:PrincipalOrgID in a resource policy (S3, KMS, etc.). Prefer native policy conditions over event-driven automation unless the requirement explicitly needs custom logic beyond what IAM conditions provide.
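As a sketch, the aws:PrincipalOrgID control described above is a single explicit-Deny statement in the bucket policy. The Organization ID and bucket name below are hypothetical:

```python
import json

ORG_ID = "o-a1b2c3d4e5"             # hypothetical Organization ID
BUCKET = "central-compliance-docs"  # hypothetical bucket name

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessOutsideOrganization",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            # Deny any principal whose account is not in the Organization.
            "Condition": {"StringNotEquals": {"aws:PrincipalOrgID": ORG_ID}},
        }
    ],
}
print(json.dumps(bucket_policy, indent=2))
```

As accounts join or leave the Organization, the policy needs no edits; membership is evaluated at request time.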

Question 3

A financial services company operates multiple AWS accounts for different trading algorithm testing environments. The quantitative research teams frequently deploy expensive GPU-enabled Amazon EC2 instances and high-performance database configurations that significantly exceed the allocated quarterly budget of $50,000 per testing account. The company needs to implement centralized governance to prevent the creation of costly AWS resources across all testing accounts while minimizing administrative overhead. Which solution will meet these requirements with the LEAST operational complexity?

CloudFormation templates and Systems Manager Automation help standardize provisioning, but they do not prevent users from launching resources through the console/CLI/SDK unless additional restrictive IAM/SCP controls are in place. Enforcing “must use templates” across multiple accounts typically requires ongoing IAM policy engineering, permission boundaries, and exception handling, increasing operational overhead. This is more of a standardization approach than a centralized preventive guardrail.

AWS Organizations with OUs and SCPs provides centralized, preventive governance with minimal ongoing administration. SCPs can explicitly deny launching specific EC2 instance types (including GPU families) and restrict creation/modification of expensive database configurations across all testing accounts. Because SCPs set the maximum permissions, they cannot be bypassed by account-level admins. This directly meets the requirement to prevent costly resources across accounts with low complexity.

EventBridge rules with Lambda termination is a detective/reactive control. Expensive instances may run long enough to incur significant cost, and automated termination can disrupt legitimate testing workflows. It also adds operational complexity (rule coverage, edge cases, retries, permissions, multi-region considerations) and can be circumvented if resources are created in ways not captured by events or if termination fails. Preventive controls are preferred here.

Service Catalog can provide pre-approved products and improve governance, but by itself it does not guarantee users cannot provision resources outside the catalog unless you also tightly restrict IAM permissions (often with SCPs anyway). Rolling out portfolios, managing product versions, constraints, and cross-account sharing adds administrative overhead. It is excellent for standardization and self-service, but SCPs are the simplest way to outright prevent costly resource creation.

Question Analysis

Core Concept: This question tests centralized preventive governance for cost control in a multi-account environment. The primary AWS concept is AWS Organizations with Service Control Policies (SCPs), which provide account-level guardrails that define the maximum available permissions regardless of what IAM allows.

Why the Answer is Correct: Option B is the least operationally complex way to prevent creation of costly resources across many accounts. By placing all testing accounts into an OU and attaching SCPs, the company can centrally deny specific API actions or deny usage of specific resource configurations (for example, restricting ec2:RunInstances when certain instance types are requested, or restricting RDS/DB instance classes). SCPs are evaluated before IAM permissions and apply consistently across all member accounts, ensuring teams cannot bypass controls even if they have admin-like IAM roles within their accounts.

Key AWS Features:
- AWS Organizations OUs: group accounts by environment (e.g., "Trading-Testing").
- SCPs: explicit deny guardrails; can be attached at the OU or account level; inherited down the hierarchy.
- Condition keys for fine-grained restrictions: e.g., EC2 supports conditions such as ec2:InstanceType to deny GPU families (p3, p4, g4, g5, etc.). Similar patterns can be applied to RDS via rds:DatabaseClass (where supported) or by denying specific create/modify actions unless approved tags/parameters are present.
- Preventive vs. detective controls: SCPs stop the spend before it happens, aligning with cost-optimization and governance best practices.

Common Misconceptions: A and D can standardize deployments, but they do not inherently prevent users from launching resources outside the templates unless you also implement strong permission boundaries/SCPs. C is reactive (detect-and-terminate) and can still incur cost, cause disruption, and be bypassed or delayed.

Exam Tips: When the requirement is "centralized governance across multiple accounts" with "least operational overhead" and "prevent creation," think AWS Organizations + SCPs. Use EventBridge/Lambda for detective remediation, and Service Catalog/CloudFormation for standardization, not as the primary hard guardrail unless paired with SCPs. Map this to the AWS Well-Architected Cost Optimization pillar: implement guardrails and controls to prevent unapproved spend.
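A minimal SCP matching the guardrail described above might look like the following; the GPU instance-type patterns are illustrative, not an exhaustive list:

```python
import json

# Sketch of an SCP that denies launching GPU instance families in the
# testing OU. The instance-type patterns below are illustrative examples.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyGpuInstanceLaunch",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            # Deny launches when the requested type is in a GPU family.
            "Condition": {
                "StringLike": {
                    "ec2:InstanceType": ["p3.*", "p4d.*", "g4dn.*", "g5.*"]
                }
            },
        }
    ],
}
print(json.dumps(scp, indent=2))
```

Attached at the OU level, this explicit Deny applies even to account administrators and needs no per-account maintenance.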

Question 4

A financial services company operates a real-time stock trading platform. The application currently uses a centralized database in a single data center in Asia-Pacific. The company plans to expand globally to serve traders in North America and Europe. The company needs to deploy the trading platform across multiple AWS Regions. All trade executions must have latency under 800 milliseconds globally. The company wants separate regional deployments of the trading interface, but requires a single master trading database that maintains global consistency for portfolio balances and trade records. Which solution should a solutions architect recommend to meet these requirements?

DynamoDB global tables are a multi-active design, not a single master database architecture. Cross-Region replication is eventually consistent, and conflicts are resolved with last-writer-wins semantics, which is dangerous for financial balances and trade execution records. The requirement explicitly calls for a single master database that maintains global consistency, which global tables do not provide. Although DynamoDB can offer low local latency, it does not satisfy the consistency and single-writer intent of the question.

Amazon Aurora MySQL with cross-Region read replicas best fits the requirement for a single master trading database. Aurora keeps one primary writer in a single Region, which preserves a single source of truth for portfolio balances and trade records while allowing regional deployments to read from nearby replicas. This design supports separate regional application stacks and lower read latency for users in North America and Europe. Compared with standard MySQL replicas, Aurora is the more capable AWS-managed relational option for global read scaling and lower-lag replication.

Amazon RDS for MySQL with cross-Region read replicas also provides a single writer and regional readers, but it is generally less capable than Aurora for this type of globally distributed, latency-sensitive workload. Aurora offers better replication performance, scalability, and managed features than standard RDS MySQL. Since both B and C are similar in architecture, the exam expects the more optimized AWS-native relational choice. Therefore C is not the best answer even though it is closer than the DynamoDB option.

Aurora Serverless is not the right answer because the proposed design relies on custom Lambda-based synchronization across Regions rather than using a native single-master replication model. That introduces unnecessary complexity, operational risk, and potential consistency problems for financial transactions. The option also does not clearly preserve one authoritative writer for all trade records and balances. For exam purposes, custom cross-Region synchronization is almost never preferred over a built-in managed database replication feature.

Question Analysis

Core Concept: This question is testing how to design a multi-Region application when the business requires regional application deployments for lower user latency but still wants a single master database of record for globally consistent trade and portfolio data. The key distinction is between a single-writer architecture and a multi-active replicated database.

Why Correct: Amazon Aurora MySQL with cross-Region read replicas provides one primary writer database in a single Region, which satisfies the requirement for a single master trading database. Regional application deployments in North America and Europe can use local read replicas for lower-latency reads, while all writes are directed to the primary Region to preserve consistency for portfolio balances and trade records. This is the closest AWS-managed design to a globally deployed application with one authoritative transactional database.

Key Features: Aurora supports cross-Region replication, local reader endpoints, and a primary writer instance that acts as the source of truth. Aurora generally offers lower replication lag and better performance than standard RDS MySQL replicas, making it more suitable for latency-sensitive global applications. It also preserves relational database semantics that are often important for financial transactions.

Common Misconceptions: A common trap is choosing DynamoDB global tables whenever a question mentions multiple Regions and low latency. However, global tables are multi-active and eventually consistent across Regions, with conflict resolution that is not appropriate for a single-master financial ledger. Another misconception is that standard MySQL read replicas are equivalent to Aurora for demanding global transactional workloads; Aurora is typically the stronger managed option.

Exam Tips: When the question explicitly says "single master database," avoid multi-writer or eventually consistent global database options. For globally distributed applications that still need one authoritative transactional writer, look for Aurora with cross-Region replicas rather than DynamoDB global tables. Pay close attention to wording like "single master," "global consistency," and "trade records," which strongly suggests a relational single-writer design.
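The single-writer/regional-reader split above can be sketched as a small application-side routing helper. The endpoint hostnames are hypothetical Aurora cluster endpoints:

```python
# Hypothetical Aurora endpoints: one primary writer, plus regional replicas.
WRITER_ENDPOINT = "trading.cluster-abc.ap-southeast-1.rds.amazonaws.com"
READER_ENDPOINTS = {
    "us-east-1": "trading.cluster-ro-def.us-east-1.rds.amazonaws.com",
    "eu-west-1": "trading.cluster-ro-ghi.eu-west-1.rds.amazonaws.com",
}

def endpoint_for(operation: str, region: str) -> str:
    """Route every write to the single master writer; serve reads from the
    nearest cross-Region replica, falling back to the writer's Region."""
    if operation == "write":
        return WRITER_ENDPOINT
    return READER_ENDPOINTS.get(region, WRITER_ENDPOINT)

print(endpoint_for("write", "eu-west-1"))  # always the single writer
print(endpoint_for("read", "us-east-1"))   # local replica for low latency
```

This keeps one authoritative writer for balances and trade records while regional stacks read locally.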

Question 5

A financial services company has an Amazon Elastic File System (Amazon EFS) file system that stores critical regulatory compliance documents and audit templates. The company has multiple Amazon EC2 instances running compliance monitoring applications that need to access these documents for analysis and reporting purposes. The compliance applications must have read access to the regulatory documents but should never be able to modify, delete, or corrupt these critical files. The company needs to implement IAM-based access controls to ensure data integrity while maintaining operational efficiency. The solution must prevent any write operations from the applications while still allowing necessary read operations for compliance reporting. Which solution will meet these requirements?

Using the ro mount option can prevent writes from the OS perspective, but it is not IAM-based and is not a robust security control. A user with sufficient privileges on the instance (or a compromised root process) could remount the file system as read-write. It also requires per-instance configuration and drift management, which reduces operational efficiency and does not meet the stated IAM-based requirement.

An EFS file system resource-based policy can explicitly deny elasticfilesystem:ClientWrite for the EC2 instance roles. Because explicit Deny overrides any Allow, this reliably blocks write operations (create/modify/delete) while still permitting ClientMount and ClientRead as needed. It is centrally managed on the EFS resource, scales across many instances, and aligns directly with “IAM-based access controls” and “prevent any write operations.”

An identity-based policy with an explicit Deny on elasticfilesystem:ClientWrite would also block writes for those roles, but it is less operationally efficient and more error-prone at scale than a single EFS resource policy. You must ensure every relevant role has the Deny and that no alternate roles are used. The question emphasizes operational efficiency and centralized integrity controls, which favors a resource-based policy.

EFS access points and POSIX permissions are useful for enforcing directory-level access and consistent UID/GID mapping, but they are not primarily IAM-based controls. POSIX permissions can be misconfigured, and applications might still access the file system through other paths (different access points or direct mounts) if allowed. This option is better as defense-in-depth, not as the primary IAM-based guarantee to prevent all writes.

Question Analysis

Core Concept: This question tests Amazon EFS authorization using IAM (EFS file system policies) versus OS-level controls (mount options/POSIX). With EFS, IAM authorization is enforced at the EFS API layer via the client actions (ClientMount, ClientRead, ClientWrite, and optionally ClientRootAccess) and is evaluated using IAM identity policies and/or an EFS resource-based policy.

Why the Answer is Correct: Option B uses an EFS resource-based policy to explicitly deny elasticfilesystem:ClientWrite for the IAM roles used by the EC2 instances. An explicit Deny is the strongest control in AWS policy evaluation logic and will override any Allow that might be granted elsewhere (identity policies, permission boundaries, or other attached policies). This directly meets the requirement to "never be able to modify, delete, or corrupt" the documents while still allowing read operations (assuming ClientRead and ClientMount are allowed). It is IAM-based, centrally managed on the file system, and operationally efficient because you can apply it once to the EFS file system rather than relying on per-instance configuration.

Key AWS Features:
- EFS file system policy (resource-based policy) to control NFS client access using IAM principals.
- EFS IAM authorization actions: elasticfilesystem:ClientMount, elasticfilesystem:ClientRead, elasticfilesystem:ClientWrite.
- Policy evaluation: explicit Deny overrides any Allow.
- Works well with EC2 instance profiles/roles and scales across many instances and applications.

Common Misconceptions:
- "Read-only mount" (Option A) is not IAM-based and is not a strong security boundary; a privileged user/process can remount read-write.
- Identity-based Deny (Option C) can work, but it is less operationally efficient and easier to mismanage at scale than a single resource policy on the file system.
- Access points and POSIX permissions (Option D) are valuable, but they are primarily POSIX/OS-level controls; by themselves they do not satisfy the requirement for IAM-based access controls and can be bypassed if other access paths/permissions exist.

Exam Tips: When you see "must never write" and "IAM-based controls," look for an explicit Deny using EFS ClientWrite. Prefer resource-based policies when you want centralized control over a shared resource accessed by many roles/accounts. Remember: mount options and POSIX permissions are important defense-in-depth, but they are not substitutes for IAM enforcement when the question explicitly asks for IAM-based access controls.
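A sketch of the EFS file system policy described above, allowing mount/read for the application role while explicitly denying all writes. The account ID, role name, and file system ID are hypothetical:

```python
import json

APP_ROLE = "arn:aws:iam::111122223333:role/ComplianceAppRole"  # hypothetical
FS_ARN = ("arn:aws:elasticfilesystem:us-east-1:111122223333:"
          "file-system/fs-0abc1234")                            # hypothetical

file_system_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowMountAndRead",
            "Effect": "Allow",
            "Principal": {"AWS": APP_ROLE},
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientRead",
            ],
            "Resource": FS_ARN,
        },
        {
            "Sid": "DenyAllWrites",
            "Effect": "Deny",
            "Principal": "*",
            # Explicit Deny overrides any Allow granted elsewhere.
            "Action": "elasticfilesystem:ClientWrite",
            "Resource": FS_ARN,
        },
    ],
}
print(json.dumps(file_system_policy, indent=2))
```

Because the Deny lives on the file system itself, it covers every instance and role without per-instance configuration.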


Question 6

A global technology company 'GlobalTech' with hundreds of AWS accounts, managed by AWS Organizations, needs to prove compliance with multiple security standards like NIST and PCI DSS to its auditors. The company wants a central and automated way to monitor the security posture of all accounts. The solution must allow the company's security team to designate a single account to monitor the current state of security controls across all member accounts without having to manually enable and manage compliance standards in each individual account. Which solution will meet these requirements?

Correct. AWS Security Hub is the AWS service designed to aggregate security findings and report compliance posture against enabled standards (e.g., PCI DSS and NIST-related controls where available). With AWS Organizations integration and a delegated administrator account, the security team can centrally enable Security Hub and standards for all member accounts (including auto-enablement), avoiding per-account manual setup while providing auditors a consolidated compliance view.

Incorrect. Amazon GuardDuty is a threat detection service that analyzes logs (e.g., VPC Flow Logs, DNS logs, CloudTrail events) to identify suspicious activity and potential compromise. It does not provide organization-wide compliance standards dashboards for frameworks like PCI DSS or NIST. While GuardDuty supports delegated administration and multi-account management, it is not the primary service for enabling and tracking compliance controls across accounts.

Incorrect. AWS CloudTrail organization trails centralize API activity logging across accounts, which is valuable for auditing and forensics. However, CloudTrail does not have a feature to “enable CloudTrail security standards for NIST and PCI DSS” nor does it provide a compliance controls dashboard. CloudTrail data can feed other services (e.g., Security Hub via findings sources, SIEMs), but it is not itself a compliance posture management solution.

Incorrect. Amazon Inspector is focused on automated vulnerability management (e.g., CVEs, package vulnerabilities, network reachability) for supported compute and container resources across accounts. Although it can be centrally managed with Organizations, it does not provide broad compliance standards enablement and control-by-control posture reporting like Security Hub. Inspector findings can be sent to Security Hub, but Inspector alone will not meet the stated compliance monitoring requirement.

Question Analysis

Core Concept: This question tests centralized, automated compliance posture management across many AWS accounts in AWS Organizations. The primary service for aggregating security findings and mapping them to compliance frameworks (e.g., PCI DSS, NIST) is AWS Security Hub, using a delegated administrator and organization-wide auto-enablement.

Why the Answer is Correct: Option A correctly uses AWS Security Hub with AWS Organizations. By designating a Security Hub delegated administrator account, GlobalTech can centrally manage Security Hub for all member accounts and enable security standards once from the admin account. Security Hub then continuously evaluates and aggregates findings (from AWS services and partners) and presents compliance status against enabled standards across the organization, satisfying the requirement to avoid manually enabling and managing standards in each account.

Key AWS Features: Security Hub supports:
- Delegated administrator for Organizations, allowing a non-management account to administer Security Hub organization-wide.
- Auto-enable for new and existing accounts, reducing operational overhead as accounts are added.
- Security standards (e.g., PCI DSS, CIS, NIST packages, depending on Region/availability) with control status dashboards and consolidated findings.
- Cross-account aggregation and centralized visibility, aligning with AWS Well-Architected Security Pillar principles (continuous monitoring, centralized governance, and automated controls).

Common Misconceptions: GuardDuty is for threat detection (malicious activity/anomalies), not compliance standards dashboards. Inspector focuses on vulnerability management (EC2/ECR/Lambda) rather than broad multi-control compliance frameworks. CloudTrail is an audit log service; while essential for investigations and governance, it does not provide built-in "enable NIST/PCI standards" compliance scoring across accounts.

Exam Tips: When you see "prove compliance," "standards like PCI/NIST," "central dashboard," and "across all accounts," think Security Hub. When you see "threat detection," think GuardDuty. When you see "vulnerability scanning," think Inspector. When you see "API logging," think CloudTrail. Also watch for "delegated administrator" and "Organizations integration" as key phrases indicating centralized multi-account management.
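The delegated-administrator rollout above can be sketched as an ordered sequence of API calls, shown here as data rather than live boto3 invocations. The admin account ID is hypothetical:

```python
DELEGATED_ADMIN = "444455556666"  # hypothetical security team account ID

# (client, API, parameters) in the order they would be invoked.
setup_steps = [
    # From the management account: let Security Hub integrate with Organizations.
    ("organizations", "enable_aws_service_access",
     {"ServicePrincipal": "securityhub.amazonaws.com"}),
    # From the management account: designate the delegated administrator.
    ("securityhub", "enable_organization_admin_account",
     {"AdminAccountId": DELEGATED_ADMIN}),
    # From the delegated admin account: auto-enable for all member accounts.
    ("securityhub", "update_organization_configuration",
     {"AutoEnable": True}),
]

for client, api, params in setup_steps:
    print(f"{client}.{api}({params})")
```

After this, standards such as PCI DSS can be enabled once from the delegated admin account, and new accounts are covered automatically.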

Question 7

A financial services company operates an online banking platform hosted on Amazon EC2 instances behind an Application Load Balancer. Due to regulatory compliance requirements, the platform must only be accessible from customers located within the United States. The company needs to implement a solution that restricts access to the banking platform based on geographic location while maintaining high performance and security. Which configuration will meet this geographic access restriction requirement?

Security groups can only allow/deny based on protocol/port and source IP/CIDR (or other SGs), not by country. Trying to allow only “US IP ranges” would require continuously maintaining large, changing CIDR lists and still would not reliably map to geography. This approach is operationally heavy, error-prone, and not a true geo-based control suitable for compliance.

An ALB security group cannot filter by geographic location. Like all security groups, it supports only IP/CIDR sources, ports, and protocols. There is no native “country” condition in security group rules. Therefore, this option is not technically feasible and would not meet the stated requirement.

AWS WAF supports geo match statements that can allow or block requests based on the originating country. Associating a WAF Web ACL with the ALB enforces the restriction at the application entry point, before traffic reaches EC2. It also provides logging/metrics for audit and compliance, and it scales automatically with the ALB, maintaining performance and security.

Network ACLs operate at Layer 3/4 and can only allow/deny based on IP/CIDR and ports/protocols, not geography. Like security groups, they would require maintaining large IP lists and still would not provide reliable country-based enforcement. NACLs are also stateless, increasing complexity, and they lack the application-layer visibility and logging features that WAF provides.

Question Analysis

Core Concept: This question tests edge/application-layer access control based on client geography. In AWS, the purpose-built service for geo-based allow/deny at Layer 7 (HTTP/S) is AWS WAF using a geo match statement, typically associated with an Application Load Balancer (ALB) or CloudFront. Why the Answer is Correct: Configuring AWS WAF on the ALB with geographic match conditions allows the company to explicitly allow requests originating from the United States and block all other countries. This meets regulatory requirements while preserving performance and security because filtering happens before traffic reaches the EC2 instances. WAF integrates natively with ALB, supports managed visibility (metrics/logging), and provides deterministic policy enforcement at the application entry point. Key AWS Features: - AWS WAF Web ACL associated with an ALB: central policy enforcement for all targets behind the ALB. - Geo match statement: matches the request’s source country (derived from the client IP) and can be used in allow-list or block-list rules. - Default action + rule priority: common pattern is “Block” by default, then “Allow US” (or vice versa depending on design), ensuring non-US is denied. - Observability and audit: WAF logging to Amazon S3/CloudWatch Logs/Kinesis Data Firehose and CloudWatch metrics supports compliance evidence. - Security best practice: combine geo restriction with rate-based rules, bot control, and managed rule groups for OWASP protections. Common Misconceptions: Many assume security groups or network ACLs can filter by country. They cannot; they only evaluate IP/CIDR and ports/protocols. Maintaining “US IP ranges” manually is impractical and error-prone because IP ownership and geolocation change frequently, and there is no authoritative “US-only” CIDR list you can reliably apply at SG/NACL scale. 
Exam Tips: When you see “restrict by geographic location” for HTTP/S workloads behind ALB, think AWS WAF geo match (or CloudFront geo restriction if CloudFront is in the design). For L3/L4 IP-based restrictions, use SG/NACL, but for country-based controls and compliance-friendly logging, WAF is the standard answer. Also note that ALB security groups do not support geo conditions—only IP/CIDR-based rules.
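The "Block by default, then Allow US" pattern described above can be sketched as the request body for the wafv2 CreateWebACL API. This is a minimal illustration, not a full deployment: the ACL and metric names are invented, and a real setup would pass this structure to CreateWebACL and then associate the resulting ACL with the ALB's ARN.

```python
# Hypothetical web ACL: default action blocks everything, and a single
# geo match rule allows requests whose source country is the US.
allow_us_rule = {
    "Name": "AllowUSOnly",  # illustrative rule name
    "Priority": 0,
    "Statement": {"GeoMatchStatement": {"CountryCodes": ["US"]}},
    "Action": {"Allow": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "AllowUSOnly",
    },
}

web_acl = {
    "Name": "us-only-web-acl",  # illustrative ACL name
    "Scope": "REGIONAL",  # REGIONAL scope is required to associate with an ALB
    "DefaultAction": {"Block": {}},  # anything not matched by a rule is denied
    "Rules": [allow_us_rule],
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "us-only-web-acl",
    },
}
```

The same structure inverted (default Allow plus a Block rule wrapping the geo match in a NotStatement) achieves the equivalent policy; the block-by-default form is usually easier to audit.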

Question 8

A healthcare organization needs to provide access to their AWS account for a third-party compliance auditing firm. The auditing firm uses specialized automated scanning tools that run from their own AWS environment to assess security configurations and compliance posture across multiple client accounts. The auditing firm operates from AWS account ID 123456789012 and needs read-only access to security-related services for a 30-day assessment period. The healthcare organization must maintain full control over access permissions and be able to revoke access immediately when the audit concludes. How should a solutions architect securely grant the auditing firm access to the healthcare organization's AWS account?

Correct. Create an IAM role in the healthcare account and allow the auditor’s AWS account (preferably a specific role ARN) to assume it via a trust policy. Attach least-privilege read-only security policies. Use STS for temporary credentials and set maximum session duration. Add an ExternalId condition to mitigate confused deputy risk. Revocation is immediate by editing the trust policy or deleting the role.

Incorrect. Creating an IAM user and sharing access keys introduces long-term credentials that can be copied, reused, and are harder to control. Even if you later deactivate keys, you’ve increased exposure during the audit window and violated best practices for third-party access. AWS recommends roles with temporary STS credentials instead of sharing programmatic access keys with external entities.

Incorrect. IAM groups cannot include principals (users/roles) from another AWS account; cross-account “membership” is not supported. Cross-account access is achieved through role assumption (STS) and resource-based policies, not by adding external users to local groups. Time-based controls are typically implemented via role session duration, conditions, or automation—not cross-account group membership.

Incorrect. IAM identity providers are for federation using SAML 2.0 or OIDC, not for trusting an “external AWS account” by providing root credentials. Sharing or using root credentials is explicitly against AWS security best practices. The correct pattern for another AWS account is an IAM role with a trust policy and STS AssumeRole, optionally with ExternalId.

Question Analysis

Core Concept: This question tests secure cross-account access using IAM roles (STS AssumeRole) and least-privilege delegation. For third parties operating from their own AWS account, the AWS best practice is to grant access via a role in the customer account that the third party can assume, rather than sharing long-term credentials.

Why the Answer is Correct: Option A creates an IAM role in the healthcare organization's account with a trust policy that allows AWS account 123456789012 (the auditor) to assume it. The healthcare organization retains full control because permissions are attached to the role in their account, and access can be revoked instantly by modifying/deleting the role or its trust policy. The 30-day requirement is naturally supported because role sessions are temporary (STS credentials) and can be further constrained via maximum session duration and monitoring.

Key AWS Features:
- IAM Role + Trust Policy: the trust policy specifies who can assume the role (the auditor's account, ideally restricted to a specific auditor role ARN rather than the entire account).
- STS Temporary Credentials: AssumeRole returns short-lived credentials, reducing risk versus long-term keys.
- External ID: protects against the "confused deputy" problem when a third party serves multiple customers; the auditor must present the agreed external ID to assume the role.
- Least privilege: attach read-only, security-focused policies (e.g., the SecurityAudit managed policy and/or custom policies scoped to required services). Consider permission boundaries or explicit denies for sensitive data services.
- Governance: use CloudTrail to log AssumeRole events, and optionally require MFA or source conditions where feasible.

Common Misconceptions: Sharing access keys (IAM user) can seem simpler, but it creates long-lived credentials outside your control and is harder to revoke safely if copied. "Cross-account group membership" doesn't exist in IAM. "External AWS Account" is not an IAM identity provider type; federation uses SAML/OIDC, not another account's root credentials.

Exam Tips: When a third party needs access from their own AWS account, look for "create a role in your account and allow them to assume it." Add ExternalId for third-party access, prefer temporary credentials, and ensure immediate revocation by changing the trust policy or deleting the role. This aligns with the AWS Well-Architected Security Pillar: least privilege, strong identity foundation, and traceability.
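The trust-policy pattern described above can be sketched as an IAM policy document. The auditor account ID comes from the scenario; the role name "AuditorScanRole" and the external ID value are hypothetical placeholders that would be agreed with the auditing firm.

```python
import json

# Hypothetical trust policy for the role in the healthcare account.
# Trusting a specific role ARN (not the account root) narrows who in
# the auditor's account can assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/AuditorScanRole"},
            "Action": "sts:AssumeRole",
            # ExternalId mitigates the confused-deputy problem when the
            # auditor serves many customers from one account.
            "Condition": {"StringEquals": {"sts:ExternalId": "acme-audit-30d"}},
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

Revocation is then a single operation in the healthcare account: delete the role, or edit this trust policy to remove the auditor principal.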

Question 9

A fintech startup is developing a real-time stock trading platform that will handle fluctuating trading volumes throughout market hours. The platform experiences peak loads during market opening/closing times with up to 50,000 concurrent users, but drops to 2,000 users during off-peak hours. The company requires zero infrastructure management overhead and automatic scaling to handle sudden spikes in trading activity within 30 seconds. The platform must maintain 99.99% availability during market hours and support both real-time market data queries and high-frequency trade executions. Which solution will meet these requirements?

This option best satisfies the requirement for zero infrastructure management because it uses fully managed services across the stack. API Gateway and Lambda automatically scale with incoming traffic and avoid the instance launch delays, patching, and capacity planning associated with EC2. DynamoDB on-demand is a strong fit for unpredictable traffic because it removes throughput provisioning and supports highly elastic request patterns. S3 and CloudFront provide resilient, globally distributed delivery for static dashboard content and reduce latency for end users.

This option keeps the application tier serverless, which is attractive, but Aurora is a relational database service that introduces more database administration considerations than DynamoDB. Aurora Auto Scaling typically refers to scaling read capacity or adjusting database capacity, but it is not the same operational model as DynamoDB on-demand for highly bursty request patterns. The question does not state a need for relational queries or SQL transactions, so Aurora adds complexity without a clear requirement. For an exam question emphasizing minimal operations and unpredictable spikes, DynamoDB is the more aligned choice.

This option uses DynamoDB, which is a good database choice for variable traffic, but the compute layer relies on EC2 instances and Auto Scaling groups. EC2 Auto Scaling still requires infrastructure management such as AMI maintenance, patching, scaling policy tuning, and instance lifecycle handling. It also may not react as quickly as a serverless stack because new instances need time to launch and become healthy behind the load balancer. That makes it less aligned with the requirement for zero infrastructure management overhead and rapid scaling for sudden spikes.

This option is the least aligned because it combines EC2-based infrastructure management with a relational database layer that is not clearly required by the scenario. Although ALB, Auto Scaling, and Aurora can be designed for high availability, they require more operational effort than a serverless architecture. EC2 scaling is generally slower than Lambda-based scaling because instances must boot and register before serving traffic. Aurora may be appropriate for relational workloads, but the question emphasizes elasticity and minimal management rather than relational database features.

Question Analysis

Core Concept: This question tests selecting fully managed, serverless services that provide rapid, automatic scaling with minimal operational overhead while meeting high availability and low-latency needs. Key services are API Gateway + AWS Lambda for compute, DynamoDB on-demand for spiky transactional workloads, and S3/CloudFront for highly available static delivery.

Why the Answer is Correct: Option A is the only end-to-end serverless design. API Gateway and Lambda eliminate infrastructure management and can scale automatically to sudden traffic changes: Lambda scales by increasing concurrent executions, and API Gateway scales horizontally without pre-provisioning. DynamoDB on-demand capacity is purpose-built for unpredictable, bursty traffic and can accommodate large spikes without capacity planning, aligning with the "within 30 seconds" responsiveness requirement and market open/close surges. S3 plus CloudFront provides highly available, low-latency delivery for static dashboards globally, reducing load on the APIs.

Key AWS Features:
- API Gateway: managed front door, throttling/usage plans, optional caching for market data queries, and multi-AZ resilience.
- AWS Lambda: automatic scaling with no servers and rapid scale-out; use reserved concurrency to protect critical trade-execution paths and provisioned concurrency if ultra-low cold-start latency is required.
- DynamoDB on-demand: instant elasticity for read/write throughput, multi-AZ by default; consider DynamoDB Accelerator (DAX) for read-heavy market data and DynamoDB Streams for event-driven workflows.
- CloudFront + S3: high availability and global edge caching for the static UI and potentially cacheable market data.

Common Misconceptions: Aurora "Auto Scaling" (Options B/D) can be attractive for relational trade data, but Aurora capacity changes (especially Serverless v2 scaling) are not the same as instantaneous, unconstrained scaling under extreme spikes, and Aurora introduces more tuning and connection-management considerations. EC2 Auto Scaling (Options C/D) can scale, but it violates "zero infrastructure management overhead" and typically cannot guarantee scaling to large spikes within 30 seconds due to instance launch and warm-up times.

Exam Tips: When requirements emphasize "no infrastructure management," "sudden spikes," and very fast scaling, default to serverless (API Gateway/Lambda) and DynamoDB on-demand for unpredictable throughput. Use EC2/ALB when you need OS-level control or long-lived connections, and use Aurora when strong relational constraints are explicitly required. Also map availability targets to managed multi-AZ services (S3, CloudFront, API Gateway, Lambda, DynamoDB) for 99.99%+ designs.
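A minimal sketch of the compute tier in this design, assuming an API Gateway Lambda proxy integration. The "symbol"/"qty" fields are invented for illustration, and the DynamoDB write is left as a comment since it would require live AWS credentials; this only shows the handler shape, not the trading logic.

```python
import json

def handler(event, context=None):
    """Validate a trade order posted through API Gateway (proxy
    integration) and return a proxy-style response. A real version
    would persist the order to a DynamoDB on-demand table here
    (e.g., table.put_item(Item=order)) instead of echoing it back."""
    body = json.loads(event.get("body") or "{}")
    if "symbol" not in body or "qty" not in body:
        return {
            "statusCode": 400,
            "body": json.dumps({"error": "symbol and qty are required"}),
        }
    order = {"symbol": body["symbol"], "qty": int(body["qty"]), "status": "ACCEPTED"}
    return {"statusCode": 200, "body": json.dumps(order)}
```

Because each request is handled by an independent Lambda invocation, scale-out is a matter of concurrency limits rather than instance launches, which is what makes the 30-second spike requirement tractable.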

Question 10

A financial services company has deployed a RESTful API using Amazon API Gateway and AWS Lambda to process loan application documents. The system handles document uploads in PDF and PNG formats containing customer financial statements, tax returns, and bank records. The company needs to modify the Lambda code to automatically detect and identify personally identifiable information (PII) such as social security numbers, bank account numbers, and credit card information from the uploaded documents to ensure regulatory compliance. Which solution will meet these requirements with the LEAST operational overhead?

Using open-source Python libraries for OCR and PII detection shifts responsibility to the company for accuracy, scaling, patching, and ongoing maintenance. OCR quality on scanned PDFs and varied document layouts can be inconsistent, and building robust PII detection rules/models is non-trivial. This option has higher operational overhead and higher compliance risk because the company must validate and continuously improve detection performance.

Textract is a strong choice for extracting text from PDFs/PNGs, but using SageMaker to identify PII adds significant operational overhead. The team would need to select an algorithm, prepare labeled training data, train/tune models, deploy and scale endpoints, and monitor drift and performance. This contradicts the “least operational overhead” requirement when a managed PII detector (Comprehend) exists.

Textract provides managed OCR optimized for documents, including multi-page PDFs and structured data like forms and tables—common in loan applications. Amazon Comprehend offers managed PII entity detection (DetectPiiEntities) that can identify items like SSNs and payment card numbers without building or hosting models. Together, they meet the requirement with minimal operations: Lambda orchestrates API calls and handles results for compliance workflows.

Rekognition can extract text from images, but it is not the best fit for document-centric OCR, especially for multi-page PDFs and structured financial forms. Textract is purpose-built for document processing and generally provides better results and features (forms/tables). While Comprehend is appropriate for PII detection, the OCR choice increases complexity and may reduce accuracy, making this less optimal than Textract + Comprehend.

Question Analysis

Core Concept: This question tests selecting fully managed AWS AI services to extract text from documents (OCR) and then detect PII with minimal operational overhead. The key services are Amazon Textract for document text extraction from PDFs/PNGs and Amazon Comprehend (specifically Comprehend PII) for identifying sensitive entities like SSNs, bank account numbers, and credit card numbers.

Why the Answer is Correct: Option C is the lowest-ops, purpose-built approach: Textract reliably extracts text from scanned and structured financial documents (including forms and tables), and Comprehend provides managed PII entity detection without building, training, or hosting models. This aligns with the requirement to "modify the Lambda code" (i.e., call managed APIs from Lambda) while keeping operational burden low. The workflow is straightforward: Lambda receives or locates the uploaded document (often in S3), invokes Textract (synchronously for small documents or via asynchronous jobs for multi-page PDFs), collects the extracted text, then calls Comprehend DetectPiiEntities to return PII types and offsets for redaction, alerting, or compliance logging.

Key AWS Features:
- Textract supports PDFs and images and can extract text plus structure (forms/tables), which is common in loan packages.
- Comprehend PII provides pre-trained detection for common PII (e.g., CREDIT_DEBIT_NUMBER, BANK_ACCOUNT_NUMBER, SSN) and returns confidence scores and character offsets.
- Both are serverless/managed, integrate well with Lambda, and reduce patching, scaling, and model lifecycle management.
- For production, store documents in S3, use asynchronous Textract for large PDFs, and apply least-privilege IAM permissions and encryption (SSE-KMS) to meet financial compliance expectations.

Common Misconceptions: Rekognition can do OCR (DetectText) but is optimized for images/video and is not the best fit for multi-page PDFs and complex document layouts compared to Textract. SageMaker can certainly build a PII classifier, but that introduces training, endpoint hosting, monitoring, and retraining, which is high operational overhead versus Comprehend's managed PII detection. DIY Python libraries may seem simple, but accuracy, maintenance, scaling, and compliance validation become the company's responsibility.

Exam Tips: For "least operational overhead," prefer managed AI services over custom ML. For document OCR on PDFs/forms/tables, think Textract. For managed NLP tasks like PII detection, think Comprehend (PII). Rekognition OCR is typically for scene text in images, not document processing pipelines.
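Once Textract has returned the text and Comprehend DetectPiiEntities has returned entity spans, the Lambda glue is mostly offset bookkeeping. A sketch, assuming entities in the API's documented response shape (Type, Score, BeginOffset, EndOffset); the sample text, entities, and confidence threshold below are illustrative, not real API output:

```python
def redact_pii(text, entities, min_score=0.9):
    """Replace spans flagged by Comprehend DetectPiiEntities with the
    entity type. Each entity dict carries Type, Score, BeginOffset, and
    EndOffset, where the offsets index into the Textract-extracted text.
    Processing from the end keeps earlier offsets valid as we edit."""
    for e in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
        if e["Score"] >= min_score:
            text = text[: e["BeginOffset"]] + f"[{e['Type']}]" + text[e["EndOffset"]:]
    return text

# Illustrative Comprehend-style entities for one line of extracted text
sample = "SSN 123-45-6789, card 4111111111111111"
entities = [
    {"Type": "SSN", "Score": 0.99, "BeginOffset": 4, "EndOffset": 15},
    {"Type": "CREDIT_DEBIT_NUMBER", "Score": 0.98, "BeginOffset": 22, "EndOffset": 38},
]
```

Thresholding on Score lets the compliance team trade recall against false redactions; low-confidence hits could instead be routed to human review.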

Success Stories (31)

이** · Apr 25, 2026

Study period: 1 month

These questions are somewhat harder than the actual exam, and a few identical questions appeared on the test.

C********* · Mar 23, 2026

Study period: 1 week

Read the requirements precisely (this matters most; that training is the most important part). I kept a wrong-answer notebook and made sure I had about 200 questions down cold before the exam. The actual exam passages are much simpler, and the difficulty felt similar to or lower than the app. I felt like I had failed, so I'm glad I passed. It was a big help, thank you!

소** · Feb 22, 2026

Study period: 1 week

I just worked through the questions and asked GPT about the concepts as I went. Barely passed with 768 points.

조** · Jan 12, 2026

Study period: 3 months

I just studied steadily, worked through the questions, and passed. Good luck to everyone preparing for the SAA!!

김** · Dec 9, 2025

Study period: 1 month

I don't know how many questions I got through in the app in just four days, but over about a month I went from AWS fundamentals to sketching out scenarios with the practice questions, and I passed. The exam was more confusingly worded than I expected, which threw me off, but with the extra 30 minutes I rechecked the questions I had flagged and it worked out fine.
