AWS Certified Advanced Networking - Specialty (ANS-C01)

Practice Test #1

Simulate the real exam experience with 65 questions and a 170-minute time limit. Practice with AI-verified answers and detailed explanations.

65 Questions · 170 Minutes · Passing Score: 750/1000

Powered by AI

Answers & Explanations Verified by 3 AIs

Every answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.

GPT Pro
Claude Opus
Gemini Pro
Per-option explanations
In-depth question analysis
3-model consensus accuracy

Practice Questions

Question 1

A company has two AWS Direct Connect links to its on-premises data center. One Direct Connect link terminates in the us-east-1 Region, and the other Direct Connect link terminates in the af-south-1 Region. The company is using BGP to exchange routes with AWS. The company's on-premises environment needs to be configured to use the us-east-1 link as the primary path to AWS and the af-south-1 link as the secondary (backup) path. A network engineer must configure BGP on the on-premises router to ensure that the us-east-1 link is preferred for all traffic to AWS, and the af-south-1 link is used only if the primary link fails. The solution must use standard BGP attributes and AWS BGP community tags. How should a network engineer configure BGP to ensure that af-south-1 is used as a secondary link to AWS?

Incorrect. Although the local preference values are set properly for outbound traffic from on premises, the AWS communities are not. The backup af-south-1 link should carry the AWS low-preference community 7224:7300 so that AWS de-prefers that path for return traffic, but this option places 7224:7300 neither on the correct path nor in the correct role. As a result, it does not cleanly establish af-south-1 as the AWS-side backup.

Correct. This option sets local preference to 200 on the us-east-1 BGP peer and 50 on the af-south-1 peer, so the on-premises router prefers us-east-1 for traffic going to AWS. It also applies AWS community 7224:7300 to the af-south-1 connection, which lowers AWS local preference for routes learned over that link and makes it the backup path for return traffic. Together, these settings create the intended primary/secondary behavior using both a standard BGP attribute and an AWS-supported BGP community.

Incorrect. This option reverses the customer-side local preference values, assigning a lower value to us-east-1 and a higher value to af-south-1. That would cause the on-premises router to prefer af-south-1 for traffic to AWS, which directly violates the requirement that us-east-1 be primary. Even if the communities were otherwise useful, the wrong local preference makes this option invalid.

Incorrect. This option is wrong on both major controls. It gives af-south-1 the higher local preference, causing the customer network to prefer the backup link, and it also applies the AWS low-preference community to us-east-1 instead of af-south-1. That combination makes the intended backup path more attractive and the intended primary path less attractive, which is the opposite of the requirement.

Question Analysis

Core concept: This question tests how to build deterministic primary/backup routing over two AWS Direct Connect connections by combining customer-side BGP best-path selection with AWS Direct Connect BGP communities. The on-premises router should prefer the us-east-1 session for traffic destined to AWS by assigning it a higher local preference, while the af-south-1 session should be treated as backup with a lower local preference. On the AWS side, the customer should tag routes advertised over the backup Direct Connect link with the AWS low-local-preference community so AWS prefers the primary link for return traffic when both paths are available.

Why correct: The correct design sets a higher local preference on the us-east-1 BGP peer and a lower local preference on the af-south-1 BGP peer so the enterprise network always chooses us-east-1 first for outbound traffic to AWS. In addition, the af-south-1 link should be tagged with AWS community 7224:7300, which tells AWS to assign a lower local preference to routes learned on that connection. This makes af-south-1 the less-preferred path from AWS back to on premises, so it functions as the backup unless the primary path fails.

Key features:
- Local preference is a standard BGP attribute used inside the customer AS; higher values are preferred.
- AWS Direct Connect supports BGP communities such as 7224:7300 to influence AWS local preference for inbound traffic toward customer prefixes.
- Using both controls together provides symmetric primary/backup intent: customer-side preference for outbound traffic and AWS-side preference for return traffic.

Common misconceptions:
- Many candidates confuse AWS Direct Connect communities with customer local preference. AWS communities influence AWS's handling of routes you advertise, not your router's best-path decision.
- Another common mistake is assuming 7224:7100 means "primary" and 7224:7300 means "secondary" in a generic sense. In practice, 7224:7300 is the explicit low-preference community used to de-prefer a backup path.
- It is also easy to reverse local preference values, which would make the backup link active instead of standby.

Exam tips:
- For customer-to-AWS path preference, look first at local preference on the customer router: higher on the primary, lower on the backup.
- For AWS-to-customer return-path preference, look for the AWS Direct Connect community that lowers AWS local preference on the backup path, typically 7224:7300.
- If an option gets local preference right but applies the low-preference AWS community to the primary link, it is not the best answer.
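To make the split between the two controls concrete, here is a minimal Python sketch (illustrative values only, not router configuration) of the customer-side rule the question relies on: among otherwise comparable BGP paths, the higher local preference wins, while the 7224:7300 community is only a signal carried to AWS.

```python
# Customer-side view: the on-premises router prefers the path with the
# HIGHER local preference. The community string is not used locally;
# it is advertised to AWS so AWS de-prefers the backup for return traffic.

def best_path(paths):
    """Pick the preferred BGP path by highest local preference."""
    return max(paths, key=lambda p: p["local_pref"])

paths = [
    {"peer": "us-east-1 DX", "local_pref": 200, "communities": []},
    {"peer": "af-south-1 DX", "local_pref": 50, "communities": ["7224:7300"]},
]

print(best_path(paths)["peer"])  # -> us-east-1 DX
```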

Question 2

A company recently implemented a security policy that prohibits developers from launching VPC network infrastructure. The policy states that any time a NAT gateway is launched in a VPC, the company's network security team must immediately receive an alert to terminate the NAT gateway. The network security team needs to implement a solution that can be deployed across AWS accounts with the least possible administrative overhead. The solution also must provide the network security team with a simple way to view compliance history. The solution must be able to detect the creation of a NAT Gateway in any VPC, alert the security team, automatically terminate the resource, and provide a historical record of compliance. The solution must also be easily deployable across multiple AWS accounts with minimal manual effort. Which solution will meet these requirements?

Running a cron-based script on EC2 in each account creates significant administrative overhead (instances, patching, IAM, scheduling, scaling). A 5-minute interval is not immediate detection and can allow policy violations to persist. Logging to RDS adds cost and management and still doesn’t provide native compliance reporting. This approach is operationally heavy and not aligned with AWS managed governance patterns.

Lambda reduces server management, but the option describes “programmatically checks,” implying polling rather than event-driven detection. Polling still isn’t immediate and requires scheduling and custom state storage. OpenSearch per account is expensive and operationally complex, and it doesn’t provide the straightforward compliance history and audit views that AWS Config provides. SAM helps deployment, but governance/reporting remains custom.

GuardDuty is a threat detection service and does not provide a standard finding type for NAT gateway creation (the referenced finding type is not part of GuardDuty's normal finding taxonomy). Even if events were captured, storing runtime logs in S3 is not the same as compliance history with rule evaluations and timelines. This option also mixes services in a way that is less reliable for compliance governance than AWS Config.

AWS Config is purpose-built for compliance monitoring and provides an easy-to-consume compliance history and configuration timeline. A custom Config rule can detect prohibited NAT gateways, and SSM Automation remediation can automatically alert (e.g., via SNS) and delete the NAT gateway. CloudFormation StackSets enables centralized, low-touch deployment across many accounts/OUs with consistent IAM roles and runbooks, meeting the multi-account and minimal overhead requirements.

Question Analysis

Core Concept: This question tests governance and compliance automation across multiple AWS accounts. The best-fit pattern is AWS Config for continuous resource compliance evaluation, paired with automated remediation (AWS Systems Manager Automation) and multi-account deployment (AWS CloudFormation StackSets).

Why the Answer is Correct: A custom AWS Config rule can evaluate whether prohibited resources (NAT gateways) exist or are created, and it records compliance state changes over time. That directly satisfies the requirement for a "simple way to view compliance history" because AWS Config provides a timeline of configuration changes and compliance results per resource and per rule. By attaching an SSM Automation remediation action, the security team can automatically respond: send an alert (typically via SNS/SES integration from the automation or a Lambda step) and then call the EC2 API to delete the NAT gateway. Finally, StackSets provides the least administrative overhead for deploying the same Config rule, IAM roles, and SSM runbooks consistently across many accounts (and OUs) in AWS Organizations.

Key AWS Features:
- AWS Config custom rules: evaluate resources and maintain compliance history.
- Remediation with SSM Automation: standardized, auditable runbooks that can auto-remediate on noncompliance.
- CloudFormation StackSets: centralized, multi-account/multi-Region rollout with drift detection.
- Well-Architected (Security pillar): continuous monitoring, automated remediation, and centralized governance.

Common Misconceptions:
- "Just detect creation events": event-driven detection alone (e.g., periodic scripts or ad hoc logs) often lacks durable compliance history and standardized reporting.
- "GuardDuty will detect NAT gateway creation": GuardDuty is for threat detection; NAT gateway creation is not a typical GuardDuty finding type.
- "Cron on EC2 is simplest": it increases ops burden, is not real time, and does not provide native compliance reporting.

Exam Tips: When you see requirements like multi-account deployment, minimal overhead, automatic remediation, and compliance history, think AWS Config + SSM Automation remediation + StackSets/Organizations. AWS Config is the canonical service for compliance timelines and audit-ready history.
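As a sketch of the evaluation half of this pattern, the following Lambda handler (assuming a configuration-change-triggered custom Config rule; the remediation runbook and SNS topic are out of scope here) marks NAT gateways NON_COMPLIANT so an attached SSM Automation remediation can alert and delete them.

```python
# A minimal sketch of a custom AWS Config rule evaluator.
# Assumes the rule is triggered on configuration changes and scoped
# broadly; any NAT gateway it sees is flagged NON_COMPLIANT.
import json
import boto3

config = boto3.client("config")

def handler(event, context):
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]

    compliance = (
        "NON_COMPLIANT"
        if item["resourceType"] == "AWS::EC2::NatGateway"
        else "NOT_APPLICABLE"
    )

    # Report the verdict back to AWS Config, which records it in the
    # resource's compliance history.
    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": compliance,
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```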

Question 3

A company is migrating applications from an on-premises data center to AWS, requiring data exchange with an on-premises mainframe. The solution must achieve 4 Gbps transfer speeds for peak traffic and ensure high availability and resiliency against circuit or router failures. Design a highly available, resilient networking solution that supports 4 Gbps and withstands circuit or router failures. Which solution will meet these requirements?

Option A is the most resilient design because it uses four separate Direct Connect connections across two Direct Connect locations and two different on-premises routers. This eliminates single points of failure for individual circuits, customer routers, and a single Direct Connect location. It also provides far more than 4 Gbps of available bandwidth even after losing a circuit, a router, or an entire location. Because the question explicitly requires support for 4 Gbps peak traffic while withstanding failures, this is the only option that clearly satisfies both capacity and resiliency requirements.

Option B uses only two 10 Gbps connections, one per Direct Connect location, each terminating on a different router. Although the bandwidth is sufficient, the design has only one circuit per location, so a single circuit failure removes all connectivity through that location. This provides less circuit-level resiliency than a four-connection design and does not align as well with a requirement to withstand circuit failures. AWS best practice for maximum resiliency is to use multiple connections across multiple locations and routers.

Option C provides four 1 Gbps connections for a total of 4 Gbps only during normal operation. If a single circuit fails, available bandwidth drops to 3 Gbps, which no longer meets the 4 Gbps transfer requirement. Likewise, if a router fails and two circuits are attached to that router, only 2 Gbps remains. Therefore, this option is resilient from an availability perspective but does not preserve the required throughput during the specified failure scenarios.

Option D provides only two 1 Gbps connections, for a maximum aggregate throughput of 2 Gbps. That is insufficient to meet the stated 4 Gbps peak traffic requirement even before considering any failures. In addition, losing one circuit or one router would reduce capacity even further. This option fails both the bandwidth and resiliency requirements.

Question Analysis

Core Concept: This question tests AWS Direct Connect resiliency design for both bandwidth and failure tolerance. The requirement is not just to reach 4 Gbps during normal operation, but to design a highly available and resilient solution that can withstand circuit or router failures while still supporting the required traffic. In Direct Connect design, that means providing redundant connections across multiple Direct Connect locations and multiple customer routers, while also ensuring enough remaining capacity after a failure.

Why the Answer is Correct: Option A uses four 10 Gbps Direct Connect connections distributed across two Direct Connect locations and terminated on two separate on-premises routers. This design removes single points of failure at the circuit, location, and router levels. Even if a single circuit fails, an entire router fails, or one Direct Connect location becomes unavailable, the remaining links still provide well above the required 4 Gbps throughput. That makes it the only option that clearly satisfies both the bandwidth target and the resiliency requirement simultaneously.

Key AWS Features / Best Practices:
- Use at least two Direct Connect locations for facility-level redundancy.
- Use separate customer edge routers so a single router failure does not interrupt connectivity.
- Use multiple physical connections to avoid a single circuit becoming a bottleneck or single point of failure.
- Size aggregate capacity so that required throughput is still available during failure scenarios, not only during steady state.

Common Misconceptions:
- Meeting 4 Gbps only in normal conditions is not enough when the question explicitly requires resiliency against circuit or router failures.
- Four 1 Gbps links do not satisfy a 4 Gbps requirement after a single circuit failure, because only 3 Gbps would remain.
- Two 10 Gbps links provide enough bandwidth, but they do not provide the same circuit-level resiliency as four links spread across two routers and two locations.

Exam Tips: Pay close attention to whether bandwidth must be preserved during failures. When a question says the design must withstand circuit or router failures, assume the required throughput must still be supportable after one of those failures. The most resilient Direct Connect pattern uses multiple connections, multiple locations, and multiple customer routers with enough excess capacity to survive failures.
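The capacity argument is simple arithmetic, and a quick sketch makes it explicit. The topology assumptions below (links spread evenly across two routers; losing a router takes down every circuit on it) mirror the option descriptions.

```python
# Worst-case remaining bandwidth (Gbps) after single-failure scenarios.
REQUIRED = 4

options = {
    "A: 4 x 10 Gbps": {"links": [10, 10, 10, 10], "links_per_router": 2},
    "B: 2 x 10 Gbps": {"links": [10, 10], "links_per_router": 1},
    "C: 4 x 1 Gbps":  {"links": [1, 1, 1, 1], "links_per_router": 2},
    "D: 2 x 1 Gbps":  {"links": [1, 1], "links_per_router": 1},
}

for name, o in options.items():
    normal = sum(o["links"])
    after_circuit = normal - o["links"][0]                       # lose one circuit
    after_router = normal - o["links"][0] * o["links_per_router"]  # lose one router
    worst = min(normal, after_circuit, after_router)
    print(f"{name}: normal={normal}, worst-case={worst}, "
          f"meets {REQUIRED} Gbps after failure: {worst >= REQUIRED}")
```

Note that option B passes the pure bandwidth check; the explanations above reject it on circuit-level and location-level redundancy grounds, which arithmetic alone does not capture.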

Question 4

A fintech company has multiple development environments across different AWS accounts, all operating in the us-east-1 Region. These environments host backend services on Amazon EC2 instances within private subnets, which are accessed via a Network Load Balancer (NLB) in a public subnet. For compliance reasons, API access to these services is restricted to a small number of approved third-party vendors. The NLB's security group is configured to only allow inbound TCP traffic on port 443 from a specific set of vendor IP address ranges. The company has a strict policy that whenever a new vendor is onboarded, their IP address range must be added to the NLB's security group in every account. A network engineer must find the most operationally efficient way to centrally manage these vendor IP address ranges across all accounts. The network engineer needs to implement a solution that allows for a single, centralized update of a new vendor's IP address range. This change must then be automatically reflected in the security groups of all relevant accounts without manual intervention in each account. The solution must be highly efficient and scalable. Which solution will meet these requirements in the MOST operationally efficient manner?

Not the most operationally efficient. A DynamoDB-driven Lambda updater requires building and maintaining custom automation, IAM roles, error handling, and deployment in every account. It introduces operational overhead and potential drift if the function fails or is misconfigured. While it can work, it is less scalable and less elegant than using a native VPC construct designed for centrally managed CIDR allow lists.

This option is less efficient because it adds EventBridge and Lambda to update security groups whenever the prefix list changes, which reintroduces custom orchestration and operational overhead. If a prefix list is the chosen abstraction, the goal should be to consume it directly where supported rather than trigger code to rewrite security group rules. It also omits the necessary cross-account sharing mechanism, so it does not fully solve the multi-account central-management requirement.

This is the best answer among the choices because a managed prefix list is the native AWS construct for centrally maintaining a reusable set of CIDR ranges. It is significantly more operationally efficient than building custom data stores and Lambda workflows to push security group updates into every account. AWS RAM also provides the intended cross-account sharing mechanism for the prefix list resource itself, making this the closest match to a centralized, scalable design pattern in the available options.

Similar to option A, this is a custom automation approach with higher operational burden. Storing CIDRs in S3 and running Lambda to update security groups across accounts requires per-account deployment, cross-account permissions, and robust handling for failures and concurrency. It is not as efficient or scalable as using a managed prefix list shared via AWS RAM, which is purpose-built for this use case.

Question Analysis

Core Concept: This question tests centralized management of approved vendor CIDR ranges across multiple AWS accounts with minimal operational overhead. The ideal pattern is a native AWS networking construct that can be updated once and then reused broadly, rather than custom synchronization logic. Care is required, however, because not all shared network resources can be referenced by security groups across accounts in the same way they can be used in routing.

Why the Answer is Correct: Among the provided options, a managed prefix list is the closest fit to a centralized and scalable design because it provides a single object in which to maintain the vendor CIDR ranges. Prefix lists are purpose-built for reusable CIDR management and are far more operationally efficient than storing CIDRs in DynamoDB or S3 and orchestrating updates with Lambda in every account. The key benefit is reducing the number of individual CIDR entries that must be tracked and updated, even though the exact cross-account security group usage is more constrained than the original explanation suggests.

Key AWS Features:
- VPC managed prefix lists: a reusable collection of CIDR blocks that simplifies allow-list management.
- AWS RAM: enables sharing of certain resources, including customer-managed prefix lists, across accounts.
- Operational efficiency: native networking constructs are generally preferable to custom event-driven automation for static allow-list distribution.

Common Misconceptions:
- Any shared network object can automatically be referenced by security groups across accounts exactly as if it were local; in reality, the supported integrations must always be validated.
- Lambda-based synchronization is equivalent in efficiency to a native construct; in practice, custom automation adds deployment, IAM, retry, and drift-management overhead.

Exam Tips: When you see repeated CIDR allow lists that must be centrally maintained, managed prefix lists are usually the intended service to consider first. Compare native AWS features against custom automation and prefer the native option unless the question explicitly requires bespoke orchestration. Be alert to service-integration boundaries, especially for cross-account use cases involving security groups.
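A minimal boto3 sketch of the intended pattern follows. All names, CIDRs, and ARNs are hypothetical; the point is that onboarding a new vendor becomes a single centralized update to the prefix list.

```python
# Create a customer-managed prefix list for vendor CIDRs and share it
# across the organization with AWS RAM. (Hypothetical identifiers.)
import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

pl = ec2.create_managed_prefix_list(
    PrefixListName="approved-vendor-ips",  # hypothetical name
    AddressFamily="IPv4",
    MaxEntries=50,
    Entries=[{"Cidr": "203.0.113.0/24", "Description": "vendor-a"}],
)["PrefixList"]

ram.create_resource_share(
    name="vendor-prefix-list-share",       # hypothetical name
    resourceArns=[pl["PrefixListArn"]],
    principals=["arn:aws:organizations::111122223333:organization/o-exampleorgid"],
)

# Later, onboarding a new vendor is one centralized change:
ec2.modify_managed_prefix_list(
    PrefixListId=pl["PrefixListId"],
    CurrentVersion=pl["Version"],
    AddEntries=[{"Cidr": "198.51.100.0/24", "Description": "vendor-b"}],
)
```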

Question 5 (Choose 3)

A company uses AWS Client VPN to allow remote users to access resources in multiple peered VPCs and an on-premises data center. The Client VPN endpoint route table has a single 0.0.0.0/0 entry, and its security group has no inbound rules and an outbound rule allowing all traffic to 0.0.0.0/0. Remote users report incorrect geographic location information in web search results. Resolve incorrect geographic location issues for Client VPN users with minimal service interruption. Which combination of steps should a network engineer take to resolve this issue with the LEAST amount of service interruption? (Choose three.)

Incorrect. AWS Site-to-Site VPN is for connecting networks (e.g., on-premises to VPC), not for individual remote users like Client VPN. Migrating users to Site-to-Site VPN would require new customer gateway devices/configuration and does not directly address geolocation; it also causes significant service interruption and operational change compared to adjusting Client VPN routing.

Correct. Enabling split-tunnel ensures only traffic destined for networks with explicit Client VPN routes is sent through the VPN. Internet-bound traffic stays on the user’s local ISP path, which typically restores correct geolocation in web services. This is a low-interruption configuration change and is a common best practice to avoid unnecessary internet backhaul through AWS.

Correct. After removing the default route and using split-tunnel, you must add explicit routes for each private destination (peered VPC CIDRs and on-premises CIDRs) to maintain access to internal resources. Without these routes, clients will not know to send that traffic into the VPN, causing loss of connectivity to corporate networks.

Incorrect. Removing the 0.0.0.0/0 outbound rule from the Client VPN endpoint security group does not solve the routing problem that causes geolocation issues. It would likely break legitimate outbound connectivity from VPN clients to internal resources (and possibly required AWS services) and introduces avoidable disruption. Geolocation is driven by egress path/IP, not SG rules.

Incorrect. Deleting and recreating the Client VPN endpoint in a different VPC is highly disruptive (new endpoint, associations, authorization rules, client configuration updates). It may change the egress IP range and thus geolocation, but it does not address the root cause (full-tunnel routing of internet traffic through AWS). Split-tunnel and route changes are the minimal-interruption fix.

Correct. Removing the 0.0.0.0/0 route from the Client VPN route table stops the VPN from attracting all destinations (full-tunnel behavior). Combined with split-tunnel and explicit private routes, this prevents internet traffic from egressing via AWS (which causes incorrect geolocation) while preserving access to internal networks with specific routes.

Question Analysis

Core Concept: This question tests AWS Client VPN routing behavior (full-tunnel vs. split-tunnel), Client VPN route tables, and how a default route (0.0.0.0/0) affects internet egress and perceived geolocation. With full-tunnel, all client traffic (including web browsing) is routed through the VPN and egresses from the VPC's internet path (NAT gateway/IGW), which can cause web services to infer the user's location from the AWS egress IP rather than the user's local ISP.

Why the Answer is Correct: Remote users see incorrect geographic location because the Client VPN endpoint route table contains a single 0.0.0.0/0 route, effectively forcing all traffic through the VPN (full-tunnel). The least disruptive fix is to stop sending general internet traffic through AWS while still routing only corporate/private networks through the VPN:
- (B) Enable split-tunnel so only routes explicitly associated with the Client VPN are pushed to clients.
- (F) Remove the 0.0.0.0/0 route so the VPN no longer attracts all destinations.
- (C) Add specific routes for the peered VPC CIDRs and on-premises CIDRs so access to internal resources continues to work.

Key AWS Features: Client VPN uses a route table plus authorization rules to control where clients can send traffic. Split-tunnel determines whether the client's default route is redirected to the VPN. Removing 0.0.0.0/0 and adding only internal CIDR routes ensures internal connectivity while preserving local internet breakout (and correct geolocation). This aligns with AWS Well-Architected (Security and Reliability) by reducing unnecessary traffic through centralized egress and limiting blast radius.

Common Misconceptions: Security group changes (like removing outbound 0.0.0.0/0) do not fix routing or geolocation; they can break access. Recreating the endpoint in another VPC changes egress IPs but still routes internet traffic through AWS and causes disruption. Switching to Site-to-Site VPN is a different use case and not a minimal-interruption fix for remote users.

Exam Tips: If you see Client VPN + a 0.0.0.0/0 route + internet/geolocation issues, think "full-tunnel causing AWS egress." The typical remediation is split-tunnel plus explicit private routes for required networks, not rebuilding the endpoint or changing the VPN type.
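The three corrective steps map directly to three EC2 API calls. The sketch below uses hypothetical endpoint, subnet, and CIDR values.

```python
# A minimal sketch of the B + F + C remediation: enable split-tunnel,
# remove the full-tunnel default route, and add explicit private routes.
import boto3

ec2 = boto3.client("ec2")
ENDPOINT = "cvpn-endpoint-0123456789abcdef0"  # hypothetical
SUBNET = "subnet-0123456789abcdef0"           # associated target subnet

# (B) Switch the endpoint to split-tunnel mode.
ec2.modify_client_vpn_endpoint(ClientVpnEndpointId=ENDPOINT, SplitTunnel=True)

# (F) Drop the 0.0.0.0/0 route that forces full-tunnel behavior.
ec2.delete_client_vpn_route(
    ClientVpnEndpointId=ENDPOINT,
    TargetVpcSubnetId=SUBNET,
    DestinationCidrBlock="0.0.0.0/0",
)

# (C) Add explicit routes for peered VPCs and on-premises networks.
for cidr in ["10.1.0.0/16", "10.2.0.0/16", "192.168.0.0/16"]:
    ec2.create_client_vpn_route(
        ClientVpnEndpointId=ENDPOINT,
        TargetVpcSubnetId=SUBNET,
        DestinationCidrBlock=cidr,
        Description="private network via VPN",
    )
```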


Question 6

An application team for a startup company is deploying a new multi-tier application into the AWS Cloud. The application will be hosted on a fleet of Amazon EC2 instances that run in an Auto Scaling group behind a publicly accessible Network Load Balancer (NLB). The application requires the clients to work with UDP traffic and TCP traffic. In the near term, the application will serve only users within the same geographic location. The application team plans to extend the application to a global audience and will move the deployment to multiple AWS Regions around the world to bring the application closer to the end users. The application team wants to use the new Regions to deploy new versions of the application and wants to be able to control the amount of traffic that each Region receives during these rollouts. In addition, the application team must minimize first-byte latency and jitter (randomized delay) for the end users. The application team must design a network architecture that can handle both TCP and UDP traffic, support phased global rollouts by controlling traffic distribution to multiple AWS Regions, and reduce latency and jitter for end users. How should the application team design the network architecture for the application to meet these requirements?

CloudFront distributions with NLB origins plus Route 53 weighted routing can provide some traffic shifting, but CloudFront is not intended as a general TCP/UDP front door for arbitrary application protocols behind NLB. Also, weighted DNS does not inherently minimize jitter/first-byte latency because traffic still traverses the public internet after DNS resolution. This option mixes services in a way that doesn’t best meet the TCP/UDP and jitter requirements.

AWS Global Accelerator is purpose-built for global TCP/UDP applications. It provides anycast static IPs, routes users to the closest healthy Regional endpoint over the AWS global network, and reduces first-byte latency and jitter. Endpoint groups per Region plus the traffic dial enable controlled, percentage-based rollouts to new Regions. NLBs are valid endpoints, fitting the EC2 Auto Scaling + NLB architecture.

S3 Transfer Acceleration only accelerates transfers to and from Amazon S3 using edge locations; it does not front an EC2/NLB-based multi-tier application and does not provide TCP/UDP listener routing to NLB endpoints. It also does not offer the required multi-Region traffic dials for phased rollouts of an application stack. This is a service mismatch.

CloudFront origin groups are mainly for origin failover (primary/secondary) and are oriented around HTTP/HTTPS content delivery rather than generic TCP/UDP application traffic. Route 53 latency routing can steer users to lower-latency Regions, but it cannot precisely control rollout percentages like Global Accelerator traffic dials, and it won’t reduce jitter/first-byte latency as effectively as routing over the AWS backbone.

Question Analysis

Core Concept: This question tests global network front doors for multi-Region applications that need both TCP and UDP, plus controlled traffic shifting during rollouts while minimizing latency and jitter. The key service is AWS Global Accelerator (AGA), which provides anycast static IPs and routes user traffic onto the AWS global network to the closest healthy endpoint.

Why the Answer is Correct: AWS Global Accelerator supports both TCP and UDP and is designed specifically to improve first-byte latency and reduce jitter by keeping traffic on the AWS backbone instead of the public internet for as much of the path as possible. It also supports multi-Region active-active architectures and phased rollouts using endpoint groups per Region. The traffic dial lets you precisely control what percentage of traffic is sent to each Region during deployments (e.g., 1% canary, then 10%, then 50%). Registering each Region's publicly accessible NLB as an endpoint is a standard pattern for EC2 Auto Scaling behind an NLB.

Key AWS Features:
- Anycast static IPs: one set of IPs globally, simplifying client configuration and failover.
- Listeners and port ranges: map required TCP/UDP ports to endpoints.
- Endpoint groups per Region: health checks and routing decisions per Region.
- Traffic dials: percentage-based traffic shifting for controlled rollouts.
- Health-based failover: automatically routes away from unhealthy endpoints.

Common Misconceptions: CloudFront is often chosen for "global acceleration," but CloudFront is primarily a CDN/edge caching service for HTTP/HTTPS (and limited other protocols) and is not the right fit for generic TCP/UDP application traffic behind an NLB. Route 53 weighted/latency routing can shift traffic, but it does not reduce jitter/first-byte latency the way AGA does because it relies on DNS and the public internet path after resolution.

Exam Tips: When you see requirements for (1) TCP + UDP, (2) multi-Region traffic steering with percentage control, and (3) reduced latency/jitter, think AWS Global Accelerator with endpoint groups and traffic dials. Route 53 is for DNS-based steering; CloudFront is for caching/HTTP acceleration; AGA is for non-HTTP and performance-sensitive global entry points.
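A boto3 sketch of the core pieces follows: a listener, per-Region endpoint groups fronting the NLBs, and a traffic dial canarying a new Region. All ARNs and ports are hypothetical; a second listener would be created the same way for the TCP ports.

```python
# Global Accelerator setup sketch. The GA API is served from us-west-2.
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="multi-tier-app")["Accelerator"]

listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="UDP",  # a listener carries one protocol; add a TCP one too
    PortRanges=[{"FromPort": 3000, "ToPort": 3000}],
)["Listener"]

# Existing Region: receives its full share of traffic.
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    TrafficDialPercentage=100.0,
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/app-nlb/0123456789abcdef",
    }],
)

# New Region: start as a 10% canary, then raise the dial per rollout stage.
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    TrafficDialPercentage=10.0,
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/app-nlb/fedcba9876543210",
    }],
)
```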

Question 7 (Choose 2)

An IoT company collects data from thousands of sensors that are deployed in the United States and South Asia. The sensors use a proprietary communication protocol that is built on UDP to send the data to a fleet of Amazon EC2 instances. The instances are in an Auto Scaling group and run behind a Network Load Balancer (NLB). The instances, Auto Scaling group, and NLB are deployed in the us-west-2 Region. The company's data shows that data from the sensors in South Asia occasionally gets lost in transit over the public internet and does not reach the EC2 instances. The company needs a solution to resolve the issue of data loss from the South Asian sensors. The solution must provide a reliable and low-latency path for the UDP traffic from the sensors to the application, leveraging AWS services to optimize the network performance over long distances. Which solutions will resolve this issue? (Choose two.)

Correct. AWS Global Accelerator supports UDP and can front an existing NLB. Sensors send to GA anycast static IPs; traffic enters the nearest AWS edge location and then uses the AWS global backbone to reach the us-west-2 NLB. This typically reduces packet loss/jitter caused by suboptimal public internet routing over long distances and improves latency without changing the application protocol.

Incorrect. Amazon CloudFront is primarily a CDN for HTTP/HTTPS and does not provide a general-purpose UDP acceleration path to an NLB origin. While CloudFront can front certain TCP-based origins for web delivery, it is not designed to proxy proprietary UDP sensor protocols. For UDP acceleration and static anycast ingress, Global Accelerator is the appropriate AWS service.

Correct. Deploying a second NLB/Auto Scaling group in ap-south-1 places compute closer to South Asian sensors, reducing long-haul internet traversal where loss occurs. Route 53 latency-based routing directs clients to the Region with the lowest observed latency (from the DNS resolver perspective), enabling active-active multi-Region ingestion and improving performance and effective reliability for geographically distributed devices.

Incorrect. Route 53 failover routing is active-passive and is intended for availability when the primary endpoint becomes unhealthy. It does not optimize for latency and will not address intermittent packet loss when the primary endpoint remains healthy. Also, “packets are dropped” is not something Route 53 can detect per-flow; DNS failover is coarse-grained and slow relative to UDP telemetry streams.

Incorrect. Enhanced networking with ENA improves EC2 network performance (higher bandwidth, lower latency, higher PPS) once packets reach the instance. The problem described is packet loss in transit over the public internet from South Asia to us-west-2, before traffic arrives at AWS. ENA will not materially improve reliability of the long-distance path or reduce internet routing-related loss.

Question Analysis

Core Concept: This question tests how to improve reliability and latency for long-distance UDP traffic into AWS. The key services are AWS Global Accelerator (GA) for optimizing internet ingress onto the AWS global network and multi-Region architectures with Amazon Route 53 latency-based routing.

Why the Answer is Correct:
- A (Global Accelerator + existing NLB) addresses packet loss and variable performance over the public internet by giving sensors anycast static IPs that terminate at the nearest AWS edge location. From there, traffic traverses the AWS global backbone to the Regional endpoint (the NLB in us-west-2). This typically reduces jitter, improves path stability, and lowers latency compared to best-effort internet routing, especially from South Asia to us-west-2.
- C (second stack in ap-south-1 + Route 53 latency routing) reduces the physical distance and number of internet hops by placing compute closer to the South Asian sensors. With latency-based routing, clients are directed to the Region that provides the lowest latency, improving both performance and effective reliability (fewer long-haul segments where loss can occur).
Together, GA optimizes the network path and multi-Region reduces distance, providing a robust solution.

Key AWS Features: Global Accelerator supports TCP and UDP and integrates with NLB as an endpoint. It uses health checks and automatic endpoint failover, and provides static anycast IPs. Route 53 latency routing returns the best-performing endpoint per DNS resolver location; combined with active-active multi-Region NLB/ASG deployments, it improves the experience for globally distributed devices.

Common Misconceptions: CloudFront is for HTTP/HTTPS (and some TCP use cases) and is not a general UDP proxy to an NLB origin. Route 53 failover is for availability (active-passive), not latency optimization, and does not directly solve intermittent packet loss caused by long-distance paths. Enhanced networking (ENA) improves instance-level throughput/PPS but does not fix packet loss occurring before traffic reaches AWS.

Exam Tips: For "UDP + global clients + internet path issues," think Global Accelerator. For "clients on multiple continents + need low latency," think multi-Region plus Route 53 latency routing (or GA with multiple endpoints). Distinguish latency-based routing (performance) from failover routing (availability).
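For the option C half, here is a minimal sketch of Route 53 latency-based records. The hosted zone ID, record name, NLB DNS names, and NLB hosted-zone IDs are all hypothetical placeholders.

```python
# Latency-based routing: two records with the same name, each tagged
# with a Region; resolvers receive the lowest-latency endpoint.
import boto3

r53 = boto3.client("route53")

def latency_record(region, nlb_dns, nlb_zone_id):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "ingest.example.com",
            "Type": "A",
            "SetIdentifier": f"ingest-{region}",
            "Region": region,  # the latency-routing key
            "AliasTarget": {
                "HostedZoneId": nlb_zone_id,  # the NLB's own zone ID
                "DNSName": nlb_dns,
                "EvaluateTargetHealth": True,
            },
        },
    }

r53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # hypothetical
    ChangeBatch={"Changes": [
        latency_record("us-west-2", "app-nlb-usw2.elb.us-west-2.amazonaws.com", "ZNLBEXAMPLEUSW2"),
        latency_record("ap-south-1", "app-nlb-aps1.elb.ap-south-1.amazonaws.com", "ZNLBEXAMPLEAPS1"),
    ]},
)
```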

Question 8 (Choose 2)

A company runs a stateless web app (ASG) and a stateful admin/management app (separate ASG) behind the same Application Load Balancer (ALB) in private subnets. The company wants to access the management app at the same URL as the web app with the path prefix /management. Protocol, hostname, and port must be identical for both apps. Access to /management must be limited to on-premises source IP ranges. The ALB uses an ACM certificate. Implement ALB listener rules to path-route /management to the admin targets and restrict it to on-premises CIDRs while keeping all traffic over the same HTTPS listener. Which combination of steps should the network engineer take? (Choose two.)

This option correctly creates a non-default HTTPS listener rule that combines a path-pattern condition for /management with a source-ip condition for the on-premises CIDR ranges. That is exactly how an ALB restricts access to a specific path while keeping both applications on the same hostname, port, and ACM-backed HTTPS listener. Forwarding matching requests to the management target group satisfies the routing requirement. Enabling stickiness on the management target group is appropriate because the management application is stateful and benefits from session affinity.

This option is incorrect because the default ALB listener rule cannot be modified to include conditions such as path-pattern or source-ip. The default rule is always unconditional and is evaluated only after all higher-priority rules have been checked. It also describes forwarding to the management target group when the conditions are not matched, which is the opposite of the desired behavior. Group-level stickiness does not fix the invalid listener-rule design.

This option is wrong because ALB listener rules should use the native source-ip condition rather than inspecting the X-Forwarded-For header. X-Forwarded-For can contain multiple addresses and is not the intended access-control primitive for ALB listener rule matching. The requirement is specifically to restrict access by on-premises source CIDR ranges, which ALB supports directly. While group-level stickiness may be useful, the traffic-matching method here is technically inappropriate.

This is the best available choice to represent the fallback behavior for all non-management traffic going to the web application target group. In ALB design, the default rule is unconditional and forwards requests that do not match any higher-priority listener rule, which is the intended behavior for the main web app. Although the wording incorrectly suggests conditions on the default rule, it is clearly aiming at the required catch-all forwarding action to the web target group. Given the answer set provided, this is the closest valid implementation step paired with option A.

This option is incorrect because forwarding all requests to the web app target group would prevent the /management path from being routed to the management target group. It also says to disable stickiness, which conflicts with the stateful nature of the management application. A proper ALB configuration needs a specific higher-priority rule for /management and a separate default action for everything else. This option does not implement the required listener-rule combination.

Question Analysis

Core concept: This question is about using a single HTTPS Application Load Balancer listener to serve two applications on the same hostname, protocol, and port, while applying path-based routing and source-IP restrictions for the management path.

Why correct: The management application must be reached only when the request path is /management and the client source IP is within on-premises CIDRs, which ALB listener rules support directly with combined path-pattern and source-ip conditions.

Key features: ALB listener rule priority, path-based routing, source-ip conditions, separate target groups per ASG, and a default catch-all action for the web application.

Common misconceptions: The default ALB rule cannot have conditions, and X-Forwarded-For is not the correct listener-rule primitive for source restriction.

Exam tips: When a question requires the same host/protocol/port, think one listener with multiple rules; use a higher-priority conditional rule for the exception path and let the default action handle everything else.
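A minimal boto3 sketch of the winning configuration follows (listener and target group ARNs, CIDRs, and the priority value are hypothetical): one high-priority rule combining path-pattern and source-ip conditions, plus target-group stickiness for the stateful management app.

```python
# Non-default listener rule: /management AND on-premises source IPs
# forward to the management target group; everything else falls through
# to the default action (web app target group).
import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/web/0123456789abcdef/fedcba9876543210"
MGMT_TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/mgmt/0123456789abcdef"

elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[
        {"Field": "path-pattern", "PathPatternConfig": {"Values": ["/management*"]}},
        {"Field": "source-ip", "SourceIpConfig": {"Values": ["203.0.113.0/24", "198.51.100.0/24"]}},
    ],
    Actions=[{"Type": "forward", "TargetGroupArn": MGMT_TG_ARN}],
)

# Stickiness for the stateful management app lives on its target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn=MGMT_TG_ARN,
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
    ],
)
```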

Question 9

A marketing company is using a hybrid infrastructure to connect its branch offices to AWS over AWS Direct Connect and a software-defined wide area network (SD-WAN) overlay. The company currently connects its multiple VPCs to a third-party SD-WAN appliance, which resides in a transit VPC within the same account, using AWS Site-to-Site VPNs. The company is planning to expand its AWS footprint by connecting more VPCs to the SD-WAN appliance transit VPC. However, the existing architecture is experiencing challenges with scalability, route table limitations, and higher costs due to the numerous VPN connections. A network engineer must design a new solution to address these issues and reduce the overall operational overhead. The network engineer needs to design a solution that provides scalable connectivity between all VPCs and the SD-WAN appliance while resolving the route table limitations and reducing costs. The solution must be implemented with the least amount of operational overhead. Which solution will meet these requirements with the LEAST amount of operational overhead?

TGW improves VPC-to-VPC scalability, but using a Site-to-Site VPN between TGW and the SD-WAN transit VPC still relies on VPN tunnel constructs and often more manual routing (static routes or limited dynamic behavior depending on design). It can work, but it is not the lowest operational overhead compared to TGW Connect’s purpose-built SD-WAN integration and BGP-based dynamic route exchange.

This is the best fit: attach all VPCs to TGW for scalable hub-and-spoke connectivity, then use a TGW Connect attachment to integrate the third-party SD-WAN appliance/virtual hub. TGW Connect uses GRE + BGP for dynamic routing, reduces per-VPC VPN sprawl, simplifies route management via TGW route tables/propagation, and is the most operationally efficient approach for SD-WAN overlays at scale.

VPC peering does not scale well for this use case: it is non-transitive (a key limitation when trying to build a hub), requires managing many peering connections as VPC count grows, and increases route table entries per peer. It also doesn’t inherently reduce operational overhead versus the current model and can reintroduce route table scaling challenges.

This mixes two incompatible scaling approaches: VPC peering for VPC connectivity (which is non-transitive and operationally heavy at scale) and TGW Connect for SD-WAN integration. Even if SD-WAN integration is improved, the VPC-to-VPC and VPC-to-hub connectivity would still suffer from peering’s scaling/management limitations, so it does not meet the overall requirement with the least operational overhead.

Question Analysis

Core concept: This question tests scalable hub-and-spoke networking in AWS using AWS Transit Gateway (TGW) and, specifically, Transit Gateway Connect for integrating third-party SD-WAN appliances. It also targets the common scaling pain points of many Site-to-Site VPNs (per-VPC tunnels, route table growth, and operational overhead).

Why the answer is correct: TGW is the AWS-native way to connect many VPCs through a central routing hub, avoiding the mesh of VPNs and the per-VPC route table scaling issues that arise when each VPC builds its own VPN to a transit VPC appliance. To connect an SD-WAN appliance environment to TGW with the least operational overhead, TGW Connect is designed for this exact use case: it provides a high-scale, BGP-based integration between TGW and third-party SD-WAN virtual hubs/appliances using GRE tunnels and BGP for dynamic routing. This reduces the number of individual VPN connections, simplifies route propagation and segmentation through TGW route tables, and scales as more VPC attachments are added.

Key AWS features and best practices: Use TGW VPC attachments for each VPC, and TGW route tables to control segmentation (e.g., shared services, prod/dev separation). Use a TGW Connect attachment to the SD-WAN appliance transit VPC (often via an intermediate VPC attachment plus Connect) and run BGP to dynamically exchange routes between TGW and the SD-WAN overlay. This avoids static route management and minimizes operational tasks when adding VPCs. It also aligns with AWS Well-Architected (Reliability/Operational Excellence) by centralizing routing and using managed constructs.

Common misconceptions: Option A (TGW + Site-to-Site VPN) seems simpler, but it keeps VPN constructs in the design, typically requires more tunnel management, and may not leverage SD-WAN native integration patterns. Options C and D rely on VPC peering, which does not scale well (non-transitive, per-connection management, route table entries per peer) and does not solve the core scalability and operational-overhead problem.

Exam tips: When you see "many VPCs," "route table limitations," and "too many VPNs," think Transit Gateway. When you see "third-party SD-WAN integration" and "least operational overhead," think TGW Connect with BGP for dynamic routing rather than building and maintaining many VPNs or peering links.
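The Connect attachment rides on top of the existing transit VPC attachment and then gets a GRE/BGP peer toward the appliance. A minimal boto3 sketch (all IDs, addresses, and the ASN are hypothetical):

```python
# Create a TGW Connect attachment over the SD-WAN transit VPC
# attachment, then a Connect peer for GRE + BGP to the appliance.
import boto3

ec2 = boto3.client("ec2")

connect = ec2.create_transit_gateway_connect(
    TransportTransitGatewayAttachmentId="tgw-attach-0123456789abcdef0",  # existing transit VPC attachment
    Options={"Protocol": "gre"},
)["TransitGatewayConnect"]

ec2.create_transit_gateway_connect_peer(
    TransitGatewayAttachmentId=connect["TransitGatewayAttachmentId"],
    PeerAddress="10.0.1.10",                # SD-WAN appliance GRE endpoint
    InsideCidrBlocks=["169.254.100.0/29"],  # BGP peering runs inside this block
    BgpOptions={"PeerAsn": 65010},          # appliance-side ASN
)
```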

Question 10

A logistics company has deployed a multi-VPC environment in the AWS Cloud, interconnected by a Transit Gateway. The company has experienced service disruptions after a network operator made changes to security groups, network ACLs, or route tables. To prevent future outages, the company wants to implement an automated system that proactively verifies network connectivity between critical application resources within a single VPC immediately after any network configuration change is made. The network engineer needs to design a solution that automatically detects network configuration changes within a VPC and then triggers a connectivity check between specific resources to ensure that no connectivity loss has occurred. The solution must be fully automated to reduce manual operational overhead. Which solution will meet these requirements?

Incorrect. VPC Reachability Analyzer is the right tool for checking paths within a VPC, but the detection mechanism is wrong. Network configuration changes (security groups, NACLs, route tables) are control-plane API calls recorded in AWS CloudTrail, not “logged in CloudWatch” by default. While CloudTrail logs can be delivered to CloudWatch Logs, the standard and most direct event trigger is EventBridge matching CloudTrail events.

Correct. CloudTrail records the relevant VPC/EC2 API calls whenever an operator changes security groups, NACLs, or route tables. EventBridge can match these CloudTrail events and invoke a Lambda function automatically. Lambda can then run VPC Reachability Analyzer analyses for predefined critical source/destination pairs to confirm no reachability regression occurred immediately after the change.

Incorrect. Transit Gateway Network Manager Route Analyzer is not the best fit for validating connectivity between resources within a single VPC, especially when the failure could be caused by security groups or NACLs. Additionally, the option relies on changes being “logged in CloudWatch,” which is not the primary source for configuration-change events. This combination does not meet the stated requirement as cleanly as CloudTrail + Reachability Analyzer.

Incorrect. CloudTrail is the correct change-detection source, but Transit Gateway Network Manager Route Analyzer is the wrong analysis tool for the requirement. The question asks for connectivity verification between critical application resources within a single VPC after changes to SG/NACL/route tables. VPC Reachability Analyzer evaluates those VPC constructs directly; TGW Route Analyzer is focused on TGW route propagation/associations and attachment routing, not full VPC reachability semantics.

Question Analysis

Core Concept: This question tests automated network validation after configuration changes inside a VPC. The key services are AWS CloudTrail (to record API-level network changes), Amazon EventBridge (to detect those changes and trigger automation), AWS Lambda (to orchestrate actions), and VPC Reachability Analyzer (to programmatically verify L3/L4 reachability between two endpoints, considering route tables, security groups, NACLs, and IGW/NAT/TGW attachments).

Why the Answer is Correct: Option B is correct because CloudTrail is the authoritative source for detecting configuration changes to security groups, network ACLs, and route tables (e.g., AuthorizeSecurityGroupIngress/Egress, CreateNetworkAclEntry, ReplaceRoute, AssociateRouteTable). EventBridge can match CloudTrail events in near real time and invoke a Lambda function. Lambda can then call the Reachability Analyzer APIs (e.g., CreateNetworkInsightsPath and StartNetworkInsightsAnalysis, depending on design) to run connectivity checks for predefined critical paths within the VPC immediately after a change. This provides proactive, automated verification and reduces manual operational overhead.

Key AWS Features / Best Practices:
- CloudTrail management events capture control-plane API calls for EC2/VPC networking resources.
- EventBridge supports event patterns for CloudTrail events (source: aws.ec2, detail-type: AWS API Call via CloudTrail) with fine-grained filtering by eventName and resources.
- Reachability Analyzer provides deterministic reachability analysis (not packet probing) and can be automated via the SDK/CLI from Lambda.
- Store the "critical paths" (source/destination ENIs, instances, subnets) as code/config (e.g., SSM Parameter Store or DynamoDB) to keep the solution maintainable.

Common Misconceptions: A common trap is assuming CloudWatch is where configuration changes are "logged." CloudWatch Logs can store CloudTrail logs, but the native event source for detecting API changes is CloudTrail, and EventBridge integrates directly with CloudTrail events. Another misconception is using Transit Gateway Network Manager Route Analyzer for intra-VPC reachability; Route Analyzer focuses on TGW route analysis across attachments, not full VPC data-plane evaluation including security groups and NACLs.

Exam Tips: When the question says "after any network configuration change," think CloudTrail + EventBridge. When it says "verify connectivity between resources in a VPC," think VPC Reachability Analyzer (not TGW Route Analyzer). Also watch for wording like SG/NACL/route table changes; those are EC2/VPC API calls captured as CloudTrail management events.
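A minimal sketch of the Lambda target follows, assuming an EventBridge rule already matches the relevant aws.ec2 "AWS API Call via CloudTrail" events and that the critical paths were pre-created with create_network_insights_path. The path ID is hypothetical.

```python
# Re-run Reachability Analyzer for each critical path after a
# configuration change; alerting on failed analyses (e.g., via SNS)
# would be added where noted.
import boto3

ec2 = boto3.client("ec2")

# Pre-created via ec2.create_network_insights_path(Source=..., Destination=...)
CRITICAL_PATHS = ["nip-0123456789abcdef0"]  # hypothetical

def handler(event, context):
    for path_id in CRITICAL_PATHS:
        analysis = ec2.start_network_insights_analysis(
            NetworkInsightsPathId=path_id
        )["NetworkInsightsAnalysis"]
        # Analyses run asynchronously: poll describe_network_insights_analyses
        # (or wait in Step Functions) and publish an alert if the completed
        # analysis reports NetworkPathFound == False.
        print(path_id, analysis["NetworkInsightsAnalysisId"], analysis["Status"])
```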

Success Stories (9)

c***** · Nov 23, 2025

Study period: 2 months

These practice questions help you understand the concepts that the certification exam questions are based on. The solutions and explanations are really good. I was able to crack the exam. Thank you.

박** · Nov 17, 2025

Study period: 1 month

I reset the app's 200-odd questions and solved them all about twice, studying until I fully understood the concepts.

r*********** · Nov 14, 2025

Study period: 2 months

Excellent practice questions. They helped in refreshing a lot of concepts.

김** · Nov 9, 2025

Study period: 2 months

The questions covered a wide variety of types, and the actual exam had many similar ones, so it was a big help.

진** · Nov 9, 2025

Study period: 3 months

I learned the concepts from Udemy lectures and studied the questions and explanations in this app. I also studied unfamiliar AWS resources separately. The app was really useful!
