AWS Certified Solutions Architect - Professional (SAP-C02)

Practice Test #4

Simulate the real exam experience with 75 questions and a 180-minute time limit. Practice with AI-verified answers and detailed explanations.

75 Questions · 180 Minutes · 750/1000 Passing Score

Powered by AI

Answers and Explanations Verified by 3 AIs

Each answer is verified by 3 leading AI models to ensure maximum accuracy. Get detailed explanations for each answer choice and in-depth analysis of the questions.

GPT Pro
Claude Opus
Gemini Pro
Explanations for each answer choice
In-depth question analysis
Accuracy through 3-model consensus

Practice Questions

Question 1

A multimedia analytics startup stores JSON summaries and thumbnails in a single Amazon S3 bucket (average object size 200 KB; ~80,000 new objects per hour; both reads and writes by the application). The company must deploy identical application stacks in two AWS Regions (us-east-1 and ap-southeast-2) to reduce latency, support writes from both Regions, provide a single data access endpoint for the application, accept eventual consistency with replication lag under 15 minutes, require only minimal application code changes, and achieve the least operational overhead. Which solution should the company implement?

CloudFront can cache and reduce read latency, but it is not a multi-Region write solution and does not replicate objects between Regions. Global Accelerator improves routing to regional endpoints, but S3 is not fronted by GA in a way that provides active-active writes and a single S3 data endpoint. This option also keeps a single bucket, so ap-southeast-2 writes still traverse to us-east-1, increasing latency.

This is the intended AWS-native pattern: two regional buckets, replication between them, and an S3 Multi-Region Access Point to provide one global endpoint with latency-based routing and failover. With bidirectional replication (two CRR rules), both Regions can write locally while data is asynchronously replicated. This meets the single endpoint, eventual consistency, <15-minute lag expectation, and low operational overhead requirements.

One-way CRR only replicates from the original bucket to the new bucket. If the application in ap-southeast-2 writes to its local bucket, those objects will not replicate back to us-east-1, violating the requirement to support writes from both Regions while keeping datasets aligned. It also lacks a single global access endpoint; the application would need region-specific bucket logic.

S3 gateway endpoints provide private connectivity from a VPC to S3 within a Region; they do not create a global endpoint, do not enable multi-Region writes, and do not replicate data. S3 Intelligent-Tiering is a storage class optimization for cost and access patterns, unrelated to cross-Region latency reduction or active-active multi-Region data access.

Question Analysis

Core Concept: This question tests active-active, multi-Region Amazon S3 access with writes from multiple Regions, minimal application change, and low operational overhead. The key services are S3 Cross-Region Replication (CRR) and S3 Multi-Region Access Points (MRAP), which provide a single global endpoint that routes requests to the closest healthy S3 bucket.

Why the Answer is Correct: Option B is the only choice that satisfies all requirements simultaneously: two identical stacks in us-east-1 and ap-southeast-2, writes originating from both Regions, a single data access endpoint, and minimal code changes. With two buckets (one per Region) and an MRAP, the application can use one S3 endpoint name globally while MRAP routes reads/writes to the nearest bucket based on latency and availability. Bidirectional replication keeps data convergent across Regions, and eventual consistency with replication lag under 15 minutes aligns with typical CRR behavior (with Replication Time Control available if stricter SLAs are needed).

Key AWS Features:
- S3 Multi-Region Access Points: one global endpoint, intelligent routing, automatic failover, and simplified multi-Region architecture.
- Bidirectional replication: implemented using two one-way CRR rules (each bucket replicates to the other). Requires versioning enabled on both buckets and appropriate IAM roles/policies.
- Replication considerations: handle replication of new objects; optionally configure delete marker replication and replica modification sync depending on requirements.

Common Misconceptions: CloudFront and Global Accelerator (Option A) can improve read latency, but they do not provide multi-Region, active-active S3 writes with a single consistent data layer. One-way CRR (Option C) fails the “writes from both Regions” requirement. VPC gateway endpoints and Intelligent-Tiering (Option D) are networking/cost features and do not solve multi-Region replication or a single global endpoint.

Exam Tips: When you see “single endpoint” + “multi-Region S3” + “writes from both Regions” + “minimal app changes,” think S3 MRAP plus two buckets and replication. Also remember S3 replication is configured per bucket pair and is inherently asynchronous; if the question mentions a replication SLA, consider Replication Time Control, but MRAP remains the global-access building block.
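
For illustration, here is a minimal boto3 sketch of the pattern described above: enable versioning on two regional buckets, add one CRR rule in each direction, and create a Multi-Region Access Point over both. The bucket names, account ID, and replication role ARN are placeholders, and IAM setup and error handling are omitted.

import uuid
import boto3

ACCOUNT_ID = "111122223333"                                        # placeholder account ID
ROLE_ARN = "arn:aws:iam::111122223333:role/s3-crr-role"            # hypothetical replication role
BUCKETS = {"us-east-1": "media-use1-bucket", "ap-southeast-2": "media-apse2-bucket"}

def enable_versioning(region, bucket):
    s3 = boto3.client("s3", region_name=region)
    s3.put_bucket_versioning(Bucket=bucket,
                             VersioningConfiguration={"Status": "Enabled"})

def add_replication(source_region, source_bucket, dest_bucket):
    s3 = boto3.client("s3", region_name=source_region)
    s3.put_bucket_replication(
        Bucket=source_bucket,
        ReplicationConfiguration={
            "Role": ROLE_ARN,
            "Rules": [{
                "ID": f"to-{dest_bucket}",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},                          # replicate all new objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": f"arn:aws:s3:::{dest_bucket}"},
            }],
        },
    )

for region, bucket in BUCKETS.items():
    enable_versioning(region, bucket)

# Two one-way CRR rules give bidirectional replication
add_replication("us-east-1", BUCKETS["us-east-1"], BUCKETS["ap-southeast-2"])
add_replication("ap-southeast-2", BUCKETS["ap-southeast-2"], BUCKETS["us-east-1"])

# Single global endpoint for the application (MRAP control-plane calls are routed via us-west-2)
s3control = boto3.client("s3control", region_name="us-west-2")
s3control.create_multi_region_access_point(
    AccountId=ACCOUNT_ID,
    ClientToken=str(uuid.uuid4()),
    Details={"Name": "media-mrap",
             "Regions": [{"Bucket": b} for b in BUCKETS.values()]},
)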

Question 2

A fintech startup operates a currency-quotation API on AWS for a compliance program that will last the next 3 years. The API runs on 20 Amazon EC2 On-Demand Instances registered in a target group behind a Network Load Balancer (NLB) across two Availability Zones. The service is stateless and runs 24x7. Users report slow responses; metrics show average CPU at 10% during normal traffic, but CPU spikes to 100% during market open hours for a few hours each day. The company needs a new architecture that resolves latency during spikes while minimizing cost over the 3-year period. Which solution is the MOST cost-effective?

This improves performance by adding capacity (desired 28) and uses Reserved Instances for 20, but it is not the most cost-effective. The workload’s average CPU is only 10%, indicating 20 instances are significantly overprovisioned for baseline. Reserving 20 instances for 3 years locks in unnecessary spend. Also, setting desired capacity to 28 is not true elasticity; it keeps extra instances running even outside spike hours unless scaling policies reduce it.

Spot Fleet with DefaultTargetCapacityType set to On-Demand effectively provisions mostly On-Demand capacity, so it doesn’t materially reduce cost versus using Auto Scaling with On-Demand. Additionally, Spot Fleet is a legacy approach compared to EC2 Auto Scaling with mixed instances policies. It also doesn’t address right-sizing the baseline; it keeps TotalTargetCapacity at 20 continuously, which remains overprovisioned given 10% average CPU.

This relies primarily on Spot capacity (DefaultTargetCapacityType=Spot) with a maintain request, which can reduce cost but introduces interruption and capacity-availability risk during critical market-open spikes. The question emphasizes resolving latency during spikes for a compliance program; without explicit interruption tolerance and fallback to On-Demand, Spot is risky. Replacing NLB with ALB is unnecessary for a stateless API already behind NLB and adds change without clear benefit.

This is the most cost-effective: it right-sizes the always-on baseline (min 4) and scales out to meet spike demand (max 28), resolving latency during market open hours. Buying Reserved Instances for only the baseline minimizes 3-year committed spend, while burst capacity runs only for a few hours daily on On-Demand. It aligns with AWS Cost Optimization best practices: commit to steady usage and use elasticity for variable demand.

Question Analysis

Core Concept: This question tests cost-optimized elasticity for a steady 24x7 baseline with predictable daily spikes. The key services/concepts are EC2 Auto Scaling (dynamic scaling for stateless workloads behind a load balancer) and EC2 pricing models (Reserved Instances/Savings Plans for baseline, On-Demand for burst).

Why the Answer is Correct: The workload is stateless, behind an NLB, and experiences a low average CPU (10%) with short daily periods of saturation (100%). The most cost-effective 3-year design is to right-size the always-on baseline and scale out only during spike windows. Option D sets a small minimum capacity (4) to cover baseline and uses Auto Scaling up to 28 during market open hours, eliminating latency by adding capacity when CPU spikes. Purchasing Reserved Instances for the baseline (4 instances) minimizes long-term cost, while the additional instances run only during spikes (On-Demand), which is cheaper than reserving capacity you don’t need 24x7.

Key AWS Features:
- EC2 Auto Scaling with target tracking (e.g., average CPU or NLB metrics) to add/remove instances automatically.
- NLB target group integration with Auto Scaling for health checks and registration/deregistration.
- Reserved Instances (or Compute Savings Plans) for the always-on baseline; On-Demand for burst capacity.
- Well-Architected Cost Optimization: match supply to demand and commit only to steady-state usage.

Common Misconceptions: A common trap is reserving too many instances (Option A) because the current fleet is 20. But the metrics show chronic overprovisioning (10% CPU), so reserving 20 wastes money for 3 years. Another misconception is that Spot is always cheapest (Options B/C). For a compliance program and latency-sensitive market-open spikes, Spot interruption risk and capacity availability can harm performance and reliability unless carefully engineered with diversification and fallback.

Exam Tips: When you see “24x7 for 3 years” plus “spiky,” think “reserve the baseline, autoscale the burst.” Use metrics to infer baseline overprovisioning. Prefer Auto Scaling behind the existing load balancer for stateless services. Spot can be cost-effective, but on exams it’s usually chosen when interruption tolerance is explicit and architecture includes graceful handling and fallback.
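
As a minimal boto3 sketch of the scaling side of this pattern (the Auto Scaling group name, target value, and Region are illustrative assumptions, not values from the question):

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
ASG_NAME = "quote-api-asg"   # hypothetical Auto Scaling group behind the existing NLB target group

# Right-size the always-on baseline and allow burst capacity for market-open spikes
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName=ASG_NAME,
    MinSize=4,           # baseline covered by Reserved Instances / Savings Plans
    MaxSize=28,
    DesiredCapacity=4,
)

# Target tracking keeps average CPU near the target by scaling out during spikes
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,   # illustrative target; tune to the workload
    },
)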

Question 3

A fintech firm runs a risk analytics tool on Windows VMs in a co-location facility, and the tool reads and writes to a 65 TB shared file repository over SMB with an average daily change rate of about 100 GB. The firm wants to move the shared storage to Amazon S3 for durability and cost, but the application will not be refactored to use native S3 APIs for another 4 months. During this period, the on-premises application must continue to access the same dataset via SMB with minimal client changes, and the cutover must not require running file servers in AWS. A solutions architect must migrate the data to its new S3 location while ensuring uninterrupted SMB access from on premises during and after migration. Which solution will meet these requirements?

Amazon FSx for Windows File Server provides a managed SMB file system that applications can access over SMB, and AWS DataSync can migrate data into it. However, the requirement is specifically to move the shared storage to Amazon S3 for durability and cost efficiency during the interim period. FSx for Windows stores data in the FSx file system rather than using S3 as the primary storage backend, so it does not satisfy the core storage-location requirement. It also introduces an AWS-hosted file system service when the design goal is to keep SMB access on premises while S3 serves as the durable backend.

Copying data directly from the on-premises SMB repository to an S3 bucket can satisfy the durability/cost goal, but it does not satisfy the access requirement. The application cannot use native S3 APIs for 4 months and must continue to access the dataset via SMB with minimal changes. S3 is an object store and does not provide an SMB endpoint for Windows clients without an intermediary service.

AWS Server Migration Service (historical) is for migrating server workloads, not for providing an SMB interface backed by S3. Even if a file server were lifted and shifted to EC2, that would still mean operating a file server in AWS, which the requirements prohibit. It also adds operational burden (patching, scaling, HA) and does not inherently achieve the goal of storing the dataset in S3 as the primary repository.

AWS Storage Gateway File Gateway is designed to present SMB (or NFS) file shares to on-premises applications while storing the data as objects in Amazon S3. Deploying the gateway on an on-premises VM meets the constraint of not running file servers in AWS, and it allows minimal client change (repointing to a new SMB share). Local caching improves performance, and S3 becomes the durable, cost-effective backend during and after migration.

Question Analysis

Core Concept: This question tests how to provide SMB/NFS file access to data that is ultimately stored durably and cost-effectively in Amazon S3, without refactoring applications and without running file servers in AWS. The key service is AWS Storage Gateway (File Gateway), which presents an SMB file share to on-premises clients while storing objects in S3.

Why the Answer is Correct: The firm wants the dataset to live in S3 now, but the Windows VMs must keep using SMB for ~4 months with minimal client changes. AWS Storage Gateway file gateway can be deployed as a VM on premises and expose an SMB share backed by an S3 bucket. The application continues to access the same data over SMB (typically by repointing the UNC path to the gateway share), while the gateway transparently stores files as S3 objects. This meets the “no file servers in AWS” constraint because the SMB server endpoint is the on-premises gateway VM, not an EC2-based file server.

Key AWS Features: File Gateway supports SMB shares, integrates with Active Directory for authentication/authorization, and uses local cache for low-latency access to hot data while asynchronously uploading to S3. It supports large datasets and incremental changes (100 GB/day) efficiently. You can perform the initial data copy by copying from the existing SMB repository to the gateway SMB share (or using tools like Robocopy), after which ongoing access continues through the gateway with S3 as the system of record.

Common Misconceptions: FSx for Windows File Server (option A) provides SMB but stores data in its own file system rather than S3, and it is a managed file server in AWS—explicitly disallowed. Directly copying to S3 (option B) breaks SMB access unless the app is refactored. Lifting and shifting a file server to EC2 (option C) violates the “must not require running file servers in AWS” requirement and adds admin overhead.

Exam Tips: When you see “keep SMB/NFS clients unchanged” + “store in S3” + “no refactor,” think Storage Gateway File Gateway. When you see “SMB in AWS” with Windows semantics, think FSx for Windows, but verify constraints about S3 backing and whether running a file system in AWS is allowed.
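
Assuming the on-premises gateway VM has already been deployed and activated, a rough boto3 sketch of creating the S3-backed SMB share might look like the following; the gateway ARN, IAM role, and bucket ARN are placeholders.

import uuid
import boto3

storagegateway = boto3.client("storagegateway", region_name="us-east-1")

GATEWAY_ARN = "arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE"  # on-prem VM, already activated
ROLE_ARN = "arn:aws:iam::111122223333:role/file-gateway-s3-access"                 # hypothetical role for S3 access
BUCKET_ARN = "arn:aws:s3:::risk-analytics-repository"                              # placeholder bucket

# Expose an SMB share to the Windows VMs while S3 stores the objects
response = storagegateway.create_smb_file_share(
    ClientToken=str(uuid.uuid4()),
    GatewayARN=GATEWAY_ARN,
    Role=ROLE_ARN,
    LocationARN=BUCKET_ARN,
    Authentication="ActiveDirectory",   # gateway must already be joined to the AD domain
    DefaultStorageClass="S3_STANDARD",
)
print("New SMB share:", response["FileShareARN"])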

Question 4
(Select 3)

A healthcare analytics enterprise operates an AWS Organizations organization with 320 member accounts across three Regions (us-east-1, us-west-2, and eu-west-1) and needs to enforce baseline protection against the OWASP Top 10 by using AWS WAF on all existing and new Application Load Balancers (ALBs) in every account within those Regions. Which combination of steps should the solutions architect take to centrally apply and continuously enforce this protection across the organization? (Choose three.)

Correct. AWS Firewall Manager relies on AWS Config to discover supported resources (such as ALBs) and to evaluate ongoing compliance. Enabling AWS Config in all member accounts and in each in-scope Region is a standard prerequisite to centrally enforce WAF associations and detect/remediate drift. Without Config, Firewall Manager cannot continuously track which ALBs exist and whether they are protected by the required Web ACL.

Incorrect. Amazon GuardDuty is a threat detection service that analyzes logs (VPC Flow Logs, DNS logs, CloudTrail, etc.) to identify suspicious activity. It does not deploy or enforce AWS WAF Web ACLs on ALBs. GuardDuty can complement WAF by detecting threats, but it is not a control plane mechanism for centrally attaching WAF rules across an AWS Organizations environment.

Correct. AWS Firewall Manager requires AWS Organizations with “all features” enabled to manage policies across accounts and OUs and to use organization-wide service-linked roles/delegated administration. With only consolidated billing features, you cannot centrally apply and enforce Firewall Manager security policies across member accounts. This is a common prerequisite for organization-level governance services.

Correct. AWS Firewall Manager is the purpose-built service to centrally deploy and continuously enforce AWS WAF protections across multiple accounts and resources (including ALBs) in an AWS Organizations organization. You can target all accounts/OUs, specify Regions, use AWS Managed Rules aligned to OWASP Top 10, and enable automatic remediation so new ALBs are protected and unauthorized changes are corrected.

Incorrect. AWS Shield Advanced primarily provides enhanced DDoS protection and response features. While Shield Advanced integrates with AWS WAF and can help with DDoS-related WAF configurations, it is not the standard service for centrally deploying and continuously enforcing WAF Web ACLs across hundreds of accounts and ALBs. Firewall Manager is the correct centralized enforcement mechanism for WAF policies.

Incorrect. AWS Security Hub aggregates security findings and provides compliance/security posture management across accounts and Regions. It does not push configuration changes like attaching WAF Web ACLs to ALBs. Security Hub can report on misconfigurations or missing controls (via standards and integrations), but it is not used to deploy and enforce WAF rules organization-wide.

Question Analysis

Core concept: This question tests centralized, organization-wide enforcement of AWS WAF protections (e.g., AWS Managed Rules for OWASP Top 10) across many AWS accounts and Regions. The key services are AWS Organizations (for multi-account governance), AWS Firewall Manager (for centralized WAF policy deployment and continuous enforcement), and AWS Config (a prerequisite for Firewall Manager to discover and continuously evaluate in-scope resources).

Why the answer is correct: To apply AWS WAF to all existing and newly created Application Load Balancers across 320 member accounts in three Regions, the correct pattern is to use AWS Firewall Manager with an AWS WAF policy scoped to ALBs and the specified Regions. Firewall Manager can automatically associate the Web ACL to in-scope ALBs and remediate drift when new ALBs appear or when associations are removed. For Firewall Manager to work, AWS Organizations must have “all features” enabled so Firewall Manager can operate across accounts using service-linked roles and organization-wide policy targeting. Additionally, AWS Config must be enabled in each account/Region so Firewall Manager can inventory and evaluate resources for compliance and continuously enforce the policy.

Key AWS features and best practices:
- AWS Firewall Manager: Create an AWS WAF policy, select AWS Managed Rules (e.g., Core rule set / OWASP Top 10 coverage), define scope (ALB resources), target OUs/accounts, and enable automatic remediation.
- Multi-Region: Enable AWS Config and deploy Firewall Manager policies in each required Region (us-east-1, us-west-2, eu-west-1) because WAF/ALB associations are regional.
- AWS Organizations all features: Required for centralized security services and delegated administrator patterns.

Common misconceptions:
- GuardDuty and Security Hub are detection/aggregation services; they do not deploy WAF rules to ALBs.
- Shield Advanced provides DDoS protections and can integrate with WAF, but it is not the primary mechanism for organization-wide WAF policy enforcement across accounts.

Exam tips: When you see “centrally apply and continuously enforce” security controls across many accounts and “existing and new resources,” think AWS Firewall Manager + AWS Organizations (all features) + AWS Config. Also remember that WAF/ALB is regional, so prerequisites and policies must be enabled in each Region in scope.
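
A rough boto3 sketch of such a Firewall Manager WAF policy follows. It is only a skeleton run from the Firewall Manager administrator account and repeated per in-scope Region; the policy name and managed rule group are assumptions, and the ManagedServiceData document is abbreviated, so verify its exact schema against the Firewall Manager documentation before use.

import json
import boto3

# Firewall Manager APIs are called from the delegated administrator account;
# a policy must be created in each in-scope Region (WAF/ALB are regional).
fms = boto3.client("fms", region_name="us-east-1")

managed_service_data = json.dumps({
    "type": "WAFV2",
    "defaultAction": {"type": "ALLOW"},
    "overrideCustomerWebACLAssociation": False,
    "preProcessRuleGroups": [{
        "ruleGroupArn": None,
        "ruleGroupType": "ManagedRuleGroup",
        "overrideAction": {"type": "NONE"},
        "excludeRules": [],
        "managedRuleGroupIdentifier": {
            "vendorName": "AWS",
            "managedRuleGroupName": "AWSManagedRulesCommonRuleSet",  # core OWASP-aligned rule set
            "version": None,
        },
    }],
    "postProcessRuleGroups": [],
})

fms.put_policy(Policy={
    "PolicyName": "org-alb-owasp-baseline",
    "SecurityServicePolicyData": {"Type": "WAFV2",
                                  "ManagedServiceData": managed_service_data},
    "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer",
    "ExcludeResourceTags": False,
    "RemediationEnabled": True,   # auto-associate the Web ACL to existing and new ALBs
})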

Question 5

An ad-tech company operates 12 microservices across dev, staging, and prod environments and manages IaC with AWS CDK, storing synthesized templates and container artifacts in Amazon S3 and Amazon ECR with versioning enabled. Developers connect to a single Amazon EC2 Linux workstation that hosts an IDE to pull artifacts from S3/ECR, run unit tests locally, and push updates back. The team now wants to modernize by implementing CI/CD with AWS CodePipeline and has the following requirements: use AWS CodeCommit for source control; automatically run unit tests and security scanning on every commit to the main and release/* branches; alert the team within 2 minutes when unit tests fail; allow dynamic feature toggles to turn specific APIs on/off and customize environment configuration during the pipeline; and require the lead developer to approve before promotion to production. Which solution best meets these requirements?

Option A best matches an AWS-native CI/CD design centered on CodeCommit, CodePipeline, and CodeBuild. AWS CodeBuild is the correct managed service to run unit tests and security scanning on every commit because it provides build environments, buildspec support, IAM integration, and straightforward artifact handling. Amazon EventBridge can capture CodeBuild state change events and route failures to Amazon SNS, which can notify the team quickly enough to satisfy the 2-minute alerting requirement. Using AWS CDK constructs together with a manifest or configuration file is a reasonable way to implement feature toggles and environment-specific customization during deployment, and CodePipeline’s built-in manual approval action is the standard mechanism for requiring lead developer approval before production.

AWS Lambda is not an appropriate primary service for running CI unit tests and security scans for multiple microservices because of execution time limits, packaging constraints, and lack of a full build environment. A second Lambda function to send notifications is also unnecessary when EventBridge and SNS already provide native event-driven alerting for pipeline and build failures. AWS Amplify plugins and interactive prompts are designed for developer workflows, not automated enterprise CI/CD pipelines that must run unattended. Using Amazon SES as an approval mechanism is not how CodePipeline approvals are implemented; the service already includes a native manual approval action for gated promotions.

Jenkins can technically run tests and scans, but it introduces unnecessary operational overhead and does not align with the requirement to modernize using AWS CodePipeline and managed AWS developer tools. EventBridge can emit events, but sending alerts directly with Amazon SES is less standard and less suitable than SNS for rapid team notifications and fan-out integrations. CloudFormation nested stacks with parameters can support configuration, but the option is weaker because the rest of the design relies on non-native tooling and custom approval logic. Using AWS Lambda to perform lead developer approvals is also incorrect because CodePipeline already provides a built-in manual approval stage specifically for this purpose.

AWS CodeDeploy is intended for deployment orchestration and application rollout strategies, not for executing unit tests and security scans during CI. CloudWatch alarms are not the best fit for immediate notification of build failures because CodeBuild state change events through EventBridge provide a more direct and reliable event source for failed test runs. Managing separate Docker images for each feature and toggling them with the AWS CLI is operationally cumbersome and does not represent a clean feature-toggle or environment-configuration strategy within a pipeline. Although the manual approval stage is appropriate, the core testing, alerting, and feature-management portions of this option do not satisfy the requirements well.

Question Analysis

Core Concept: This question tests designing a modern AWS-native CI/CD pipeline using CodePipeline with CodeCommit as the source, CodeBuild for build/test/security scanning, event-driven notifications, and controlled promotions (manual approvals). It also touches on configuration/feature management patterns that integrate cleanly with IaC (AWS CDK/CloudFormation).

Why the Answer is Correct: Option A aligns with standard AWS CI/CD architecture: CodeCommit triggers CodePipeline on commits to specific branches (main and release/*). CodeBuild is the correct managed service to run unit tests and security scans (e.g., SAST, dependency scanning, container image scanning steps). For the “alert within 2 minutes” requirement, EventBridge can subscribe to CodeBuild build state change events and fan out to SNS, which can notify email/SMS/ChatOps quickly and reliably. Finally, CodePipeline natively supports a Manual approval action, which is the canonical way to require the lead developer’s approval before production.

Key AWS Features:
- CodePipeline source action with CodeCommit and branch filters (or separate pipelines per branch pattern) to ensure only main and release/* trigger.
- CodeBuild buildspec.yml to run unit tests and security scanning tools; integrate with IAM least privilege and artifact encryption.
- EventBridge rule for CodeBuild “FAILED” (or phase failure) events targeting SNS for near-real-time notifications.
- Feature toggles/configuration: using CDK constructs plus a manifest/config file (or context/parameters) is a valid pipeline-time mechanism to enable/disable APIs and customize environment configuration without rebuilding the entire architecture each time.
- Manual approval stage in CodePipeline for gated promotion to production.

Common Misconceptions: Some may think CodeDeploy runs tests (it primarily handles deployments) or that Lambda is a general-purpose CI runner (timeouts/runtime constraints make it unsuitable for typical builds/scans). Others may assume approvals must be custom-built with Lambda/SES; however, CodePipeline already provides a first-class manual approval action.

Exam Tips: For AWS-native CI/CD, default to CodeCommit/CodePipeline/CodeBuild/CodeDeploy. Use EventBridge for event-driven notifications and SNS for alerting. For gated releases, look for CodePipeline Manual approval. For dynamic configuration, prefer parameterization/manifest/context integrated with CDK/CloudFormation rather than ad-hoc scripts or separate images per feature.
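
To make the alerting piece concrete, here is a small boto3 sketch of an EventBridge rule that routes failed CodeBuild builds to SNS; the project name and topic ARN are placeholders, and the SNS topic still needs a resource policy that allows events.amazonaws.com to publish.

import json
import boto3

events = boto3.client("events", region_name="us-east-1")
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:build-failures"   # hypothetical SNS topic

# Fire whenever a build for the unit-test/security-scan project fails
events.put_rule(
    Name="codebuild-failed-builds",
    EventPattern=json.dumps({
        "source": ["aws.codebuild"],
        "detail-type": ["CodeBuild Build State Change"],
        "detail": {
            "build-status": ["FAILED"],
            "project-name": ["unit-tests-and-scans"],   # placeholder CodeBuild project name
        },
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="codebuild-failed-builds",
    Targets=[{"Id": "notify-team", "Arn": TOPIC_ARN}],
)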


Question 6

A real-time multiplayer gaming platform is deploying a session state and matchmaking service on a fleet of Amazon EC2 instances in which one coordinator node and ten worker nodes must share and replicate hot game-state shards entirely in memory. The coordinator monitors node health, ingests up to 250,000 player events per second, dispatches work to workers, and aggregates responses, while worker nodes perform frequent east-west replication to maintain sub-millisecond consistency. The company requires the absolute lowest possible instance-to-instance network latency within a single Availability Zone to maximize throughput and minimize tail latency. Which solution will meet these requirements?

Memory-optimized instances fit the in-memory shard requirement, but a partition placement group is aimed at fault isolation by placing instances into separate partitions (different racks). That separation can increase network hop count and does not target the absolute lowest instance-to-instance latency. Partition placement groups are better when you want to reduce correlated failures for large distributed systems, not when you need the tightest possible east-west latency.

Compute-optimized instances may help with CPU-heavy workloads, but the question emphasizes that hot game-state shards must be stored and replicated entirely in memory, which typically points to memory-optimized instances. Additionally, partition placement groups prioritize failure-domain isolation rather than the lowest latency. This option misses both the primary resource need (RAM) and the primary networking requirement (lowest latency).

Correct. A cluster placement group is purpose-built for the lowest network latency and highest throughput between instances in a single AZ by placing them physically close together. Pairing that with memory-optimized instances directly supports keeping hot game-state shards entirely in RAM while enabling extremely fast east-west replication and coordinator/worker communication. This best matches the stated goal of minimizing tail latency and maximizing throughput.

A spread placement group places instances on distinct underlying hardware to reduce correlated failures, which is useful for a small number of critical instances but generally increases physical separation and can worsen latency. It is not intended for high-throughput, low-latency east-west replication. Compute-optimized instances also don’t align as well as memory-optimized for an explicitly in-memory state replication service.

Question Analysis

Core Concept: This question tests EC2 placement groups and how they influence instance-to-instance network latency and throughput within a single Availability Zone (AZ). For ultra-low latency, high packet-per-second east-west traffic, the key concept is placing instances on hardware that is physically close and connected with high-bandwidth, low-latency networking.

Why the Answer is Correct: A cluster placement group is designed to provide the lowest possible network latency and the highest network throughput between instances in the same AZ. The workload described (in-memory hot shard replication, sub-millisecond consistency, coordinator/worker fan-out and aggregation, and very high event ingestion rates) is extremely sensitive to tail latency and benefits from tightly coupled networking. A cluster placement group attempts to place instances in the same rack/segment of the network, minimizing hop count and contention, which is exactly what you want for frequent east-west replication and rapid coordinator-to-worker dispatch.

Key AWS Features:
- Cluster placement group: optimized for low-latency, high-throughput networking within one AZ.
- Instance type choice: “memory-optimized” aligns with the requirement to keep game-state shards entirely in memory (large RAM footprint, high memory bandwidth). While compute also matters, the question explicitly emphasizes in-memory state.
- Operational note: cluster placement groups can have capacity constraints; using the same instance type, launching together, and having flexibility in instance sizes can improve placement success.

Common Misconceptions: Partition placement groups are often mistaken as “best performance,” but they are primarily for fault isolation (spreading instances across partitions with separate racks/power/network) and are common for large distributed systems like HDFS/Cassandra where resilience is prioritized over absolute lowest latency. Spread placement groups maximize isolation (distinct hardware) and are the opposite of what you want for micro-latency east-west traffic.

Exam Tips: When you see phrases like “absolute lowest latency,” “tightly coupled,” “HPC-style,” “sub-millisecond,” or heavy east-west traffic within one AZ, think cluster placement group. When you see “fault isolation” for large fleets, think partition. When you see “avoid correlated failures” for a small number of critical instances, think spread. Then pick the instance family (memory/compute) based on the dominant resource requirement (here: in-memory shards).
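
A minimal boto3 sketch of the cluster placement group approach follows; the AMI ID, instance type, group name, and AZ are illustrative assumptions, and launching all instances in one request improves the chance of successful placement.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster strategy packs instances close together in one AZ for the lowest latency
ec2.create_placement_group(GroupName="game-state-cluster", Strategy="cluster")

# Launch the coordinator and workers into the same placement group; the memory-optimized
# type below is illustrative — size it to the in-memory shard footprint
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder AMI
    InstanceType="r6i.8xlarge",
    MinCount=11,                        # 1 coordinator + 10 workers
    MaxCount=11,
    Placement={"GroupName": "game-state-cluster", "AvailabilityZone": "us-east-1a"},
)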

Question 7
(Select 2)

A mobility analytics startup needs to ingest telemetry from connected e-bikes every 60 seconds to calculate usage-based insurance rates. Each device sends JSON payloads to Amazon API Gateway, which invokes an AWS Lambda function that writes normalized records to an Amazon DynamoDB table. During a limited beta (2,000 bikes), the Lambda invocations completed in 2–4 seconds. After scaling to 35,000 bikes and adding new accelerometer and battery health metrics, the average Lambda duration increased to 70–120 seconds, and durations continue to grow as additional metrics are added. The team observes many ProvisionedThroughputExceededException errors on DynamoDB PutItem calls and frequent TooManyRequestsException errors returned from Lambda via API Gateway. Which combination of changes will most effectively remediate these issues while maintaining near-real-time processing? (Choose two.)

The workload is explicitly generating many ProvisionedThroughputExceededException errors on DynamoDB PutItem calls, which indicates the table's configured write throughput is insufficient for the current ingestion rate or partition distribution. Increasing the table's write capacity units directly addresses that bottleneck and reduces write throttling, retries, and backoff inside the Lambda function. This in turn lowers Lambda execution time and helps reduce the cascading pressure that leads to upstream throttling. Because the question asks for the most effective combination, fixing the database capacity issue is necessary rather than optional.

Increasing Lambda memory can improve CPU, network throughput, and sometimes SDK performance, so it may reduce execution duration for JSON parsing and normalization. However, the scenario includes explicit DynamoDB write throttling, and more Lambda power does not increase DynamoDB provisioned write capacity. In some cases, faster Lambda execution can even drive writes into DynamoDB more aggressively, leaving the underlying table bottleneck unresolved. Therefore, it is a useful tuning step but not one of the two most effective remediations compared with directly fixing capacity and decoupling ingestion.

Increasing the payload size so each request contains more minutes of data would move the system away from near-real-time processing, which the question explicitly requires. Larger payloads also increase per-request processing time, retry cost, and failure blast radius if a request is dropped or throttled. This change would likely worsen Lambda duration and could create larger write bursts against DynamoDB when each request is finally processed. It does not address the root causes of synchronous coupling and insufficient write throughput.

Kinesis Data Streams decouples the ingestion layer from the processing layer so API Gateway no longer depends on a long-running Lambda invocation to complete each request. API Gateway can hand off records quickly to Kinesis, while a Lambda consumer reads records in batches and processes them with controlled concurrency. This buffering smooths spikes, reduces per-record invocation overhead, and supports near-real-time processing without forcing every device request to wait on DynamoDB latency. It is a strong architectural improvement for high-scale telemetry ingestion where synchronous request processing is causing TooManyRequestsException.

SQS FIFO is designed for strict ordering and deduplication, but those features come with lower throughput characteristics than alternatives such as Kinesis or SQS Standard. The option also states that Lambda processes each message individually, which preserves high per-message overhead and does not take advantage of efficient batch-oriented processing. For high-volume telemetry, strict ordering across all messages is usually unnecessary, and FIFO is not the best fit for maintaining scalable near-real-time ingestion. This makes it less effective than Kinesis for the stated requirements.

Question Analysis

Core Concept: This question is about scaling a near-real-time ingestion pipeline when both the compute layer and the database layer are being overwhelmed. The architecture currently uses a synchronous API Gateway to Lambda to DynamoDB path, which causes upstream throttling when Lambda slows down and downstream throttling when DynamoDB cannot sustain the write rate. The best remediation is to decouple ingestion from processing with a streaming service and to increase DynamoDB write throughput so the database can absorb the larger volume of writes.

Key AWS features involved are Kinesis Data Streams for buffering and batch consumption, and DynamoDB provisioned write capacity scaling to prevent ProvisionedThroughputExceededException.

A common misconception is that Lambda memory tuning alone solves the problem; it may improve duration, but it does not fix an underprovisioned DynamoDB table or the tight synchronous coupling.

Exam tip: when you see explicit DynamoDB throughput exceptions plus synchronous Lambda/API Gateway throttling, look for one answer that addresses architectural decoupling and another that addresses the database capacity bottleneck directly.
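
The boto3 sketch below illustrates the two remediations under stated assumptions: the table name, stream name, consumer function, capacity numbers, and batch settings are placeholders, and the API Gateway-to-Kinesis integration itself is not shown.

import boto3

REGION = "us-east-1"

# 1) Raise the table's provisioned write capacity (or consider switching to on-demand mode)
dynamodb = boto3.client("dynamodb", region_name=REGION)
dynamodb.update_table(
    TableName="telemetry-records",                         # placeholder table name
    ProvisionedThroughput={"ReadCapacityUnits": 100,
                           "WriteCapacityUnits": 2000},    # illustrative values
)

# 2) Decouple ingestion: API Gateway hands records to the stream, a consumer Lambda batches writes
kinesis = boto3.client("kinesis", region_name=REGION)
kinesis.create_stream(StreamName="ebike-telemetry",
                      StreamModeDetails={"StreamMode": "ON_DEMAND"})

lambda_client = boto3.client("lambda", region_name=REGION)
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:111122223333:stream/ebike-telemetry",
    FunctionName="telemetry-consumer",                     # placeholder consumer function
    StartingPosition="LATEST",
    BatchSize=500,                                         # batch records instead of per-request invokes
    MaximumBatchingWindowInSeconds=5,
)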

Question 8

An international e-learning platform operates workloads for 12 cost centers across 8 AWS accounts under a single AWS Organizations management account, and every resource in all accounts already carries a tag named CostCenter with the correct value. Finance must allocate 100% of monthly and daily AWS spend to each cost center across all accounts, publish interactive dashboards by cost center within 48 hours after month end, and retain at least 12 months of history. Which solution will meet these requirements?

Correct. Activating the CostCenter cost allocation tag in the management account ensures the tag is included in billing data. An organization-wide CUR to a central S3 bucket provides consolidated, line-item costs across all accounts with daily granularity and resource IDs. Athena can query the CUR for chargeback/showback, and QuickSight provides interactive dashboards. S3 retention easily meets 12+ months of history and supports month-end reporting SLAs.

Incorrect. Cost allocation tags are activated (not “created”) for billing, and doing this in each member account is not the right control point for consolidated billing reporting. More importantly, CloudWatch dashboards are not designed for CUR-based financial analytics or cost allocation reporting. While a single CUR in the management account is good, the visualization choice and tag activation approach do not meet the finance dashboard requirement.

Incorrect. Activating the tag in the management account is good, but generating separate CURs into separate S3 buckets per member account creates unnecessary fragmentation and operational overhead. It complicates organization-wide aggregation and increases the risk of missing the 48-hour post–month-end dashboard SLA. Additionally, CloudWatch dashboards are not appropriate for interactive cost allocation analytics based on CUR data.

Incorrect. It suggests creating cost allocation tags in each member account (unnecessary since tags already exist; activation for billing is what matters). Having each account produce its own CUR to its own bucket is operationally complex and makes centralized governance and consistent schema management harder. Although Athena + QuickSight are appropriate tools, the multi-bucket, multi-CUR approach is inferior to an organization-wide CUR from the management account.

Question Analysis

Core Concept: This question tests AWS cost allocation across an AWS Organization using cost allocation tags, the AWS Cost and Usage Report (CUR), and analytics/visualization tooling (Athena + QuickSight) to produce chargeback/showback dashboards with daily and monthly accuracy and multi-account scope.

Why the Answer is Correct: Finance needs 100% allocation of spend to 12 cost centers across 8 accounts, with daily and monthly views, dashboards within 48 hours after month end, and at least 12 months of history. The correct pattern is: activate the CostCenter tag as an AWS cost allocation tag in the management account (so it appears in billing data), generate an organization-wide CUR to a central S3 bucket (so all linked accounts are included in one dataset), then query with Athena and visualize with QuickSight. CUR provides the most detailed line-item billing data (including tag columns once activated) and supports daily granularity and resource IDs, enabling accurate allocation and drill-down. S3 provides durable, low-cost retention for 12+ months.

Key AWS Features:
- AWS Organizations consolidated billing + management account ownership of billing data.
- Cost allocation tags must be activated (typically in the management account) before they appear in CUR/cost tools; activation is not retroactive.
- Organization-wide CUR delivers a single report covering all member accounts to one S3 bucket, simplifying governance and analytics.
- CUR with daily granularity and resource IDs supports both daily and monthly rollups and detailed investigation.
- Athena queries CUR data in S3 (often via Glue Data Catalog) and QuickSight builds interactive dashboards suitable for finance stakeholders.

Common Misconceptions:
- CloudWatch dashboards are for operational metrics/logs, not authoritative billing allocation and chargeback reporting.
- Creating separate CURs per account increases complexity and makes cross-account aggregation and governance harder, risking missed SLAs.
- “Creating” tags in each account is unnecessary here because tags already exist on resources; the critical step is activating the tag for billing.

Exam Tips: When you see “allocate 100% spend,” “multi-account,” “daily + monthly,” “retain history,” and “dashboards,” think: activate cost allocation tags + CUR to S3 + Athena/QuickSight. Prefer organization-wide CUR from the management account for consolidated billing analytics and simpler operations.
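
As a sketch of the analytics step, the boto3 snippet below runs an Athena query over the CUR grouped by the CostCenter tag. The database, table, partition columns, and results bucket are placeholders that depend on how the CUR/Athena integration named them; an activated CostCenter tag typically surfaces as a column such as resource_tags_user_cost_center.

import boto3

athena = boto3.client("athena", region_name="us-east-1")

query = """
SELECT line_item_usage_account_id,
       resource_tags_user_cost_center AS cost_center,
       DATE_TRUNC('day', line_item_usage_start_date) AS usage_day,
       SUM(line_item_unblended_cost) AS daily_cost
FROM cur_database.cur_table          -- placeholder database/table created by the CUR Athena integration
WHERE year = '2025' AND month = '11' -- partition column names depend on the CUR table definition
GROUP BY 1, 2, 3
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "cur_database"},
    ResultConfiguration={"OutputLocation": "s3://cur-athena-results/"},  # placeholder results bucket
)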

Question 9

A global logistics enterprise plans to migrate 420 VMware VMs (Windows and Linux) from two on-premises data centers to AWS. After submitting a Migration Evaluator assessment request, the company requires a 21-day initial discovery that produces (1) a visualization of inter-application dependencies at port/process level and (2) a Quick Insights assessment report of the on-premises estate. The company can deploy OVA appliances and install agents without constraints. Which approach provides the required outcomes with the LEAST operational overhead?

The AWS Application Discovery Agent can provide the required port/process-level dependency data, and Migration Hub can visualize those dependencies correctly. However, this option is still wrong because it says the Quick Insights assessment report is downloaded from Migration Hub, which is not where Quick Insights is produced. Quick Insights is a Migration Evaluator output, so this option fails to satisfy the reporting requirement.

This option is incorrect on multiple fronts. Migration Evaluator Collector is not installed on each VM; it is deployed as a collector appliance, so the collection method described is inaccurate. In addition, Migration Evaluator does not provide application dependency visualization, and Amazon QuickSight is unrelated to generating a Migration Evaluator Quick Insights report. Therefore, it does not meet either the dependency-mapping workflow or the reporting workflow correctly.

This option has the lowest apparent overhead, but it does not meet the dependency requirement. The AWS Application Discovery Service Agentless Collector OVA gathers VMware inventory and performance metadata, but it does not capture process-level and port-level communication details needed for inter-application dependency mapping. Because the question explicitly requires port/process-level visualization, agentless discovery alone is insufficient even if the discovered inventory can be uploaded to Migration Evaluator for Quick Insights.

This is the only option that includes both required toolsets for the stated outcomes. Installing the AWS Application Discovery Agent on each VM enables detailed dependency mapping at the process and port level, and those relationships are visualized in AWS Migration Hub. Deploying a Migration Evaluator Collector appliance in each data center supports the 21-day collection needed to generate the Quick Insights assessment report in Migration Evaluator. Although it is not the lightest deployment model overall, it is the least operational overhead among the options that actually meet both technical requirements.

Question Analysis

Core concept: The question combines two separate migration assessment capabilities: AWS Application Discovery Service for dependency mapping and Migration Evaluator for Quick Insights.

Why correct: Port/process-level inter-application dependency visualization requires the AWS Application Discovery Agent on each server, while the Quick Insights assessment report is produced by Migration Evaluator after a 21-day collection.

Key features: ADS agents collect detailed process, connection, and port metadata for dependency analysis in AWS Migration Hub; Migration Evaluator Collector appliances gather inventory and utilization data for cost and rightsizing analysis.

Common misconceptions: The ADS Agentless Collector OVA can inventory VMware environments, but it does not capture process/port-level dependencies; Migration Hub does not generate Quick Insights reports; Migration Evaluator does not visualize application dependencies.

Exam tips: When a question explicitly asks for port/process-level dependency mapping, prefer agent-based ADS even if agentless options seem lower effort. If Quick Insights is also required, pair ADS with Migration Evaluator.

Question 10

A regional energy utility must archive 180 TB of monthly meter reports (PDF/CSV) for 7 years and make them available only to internal staff through the corporate intranet; employees connect via an AWS Client VPN to a VPC, all access must stay private (no public endpoints), the files are duplicate copies of records also stored on offline tape, the expected request rate is fewer than 10 retrieval batches per week, and retrieval latency of up to 12 hours is acceptable with no availability or speed-of-retrieval requirements. Which solution will meet these requirements at the lowest cost?

S3 One Zone-IA keeps data online with millisecond access and is meant for infrequently accessed but still readily retrievable objects. For 7-year retention of 180 TB/month, storage cost is the dominant factor, and One Zone-IA will be significantly more expensive than Glacier Deep Archive. Also, enabling static website hosting is unnecessary and typically implies public website endpoints, conflicting with the “private-only” intent even if an endpoint policy is used.

EFS (even One Zone-IA) is a shared POSIX file system optimized for low-latency file access, not massive archival at minimal cost. You would also pay for EC2 instances (web server) and potentially data transfer/throughput, making it far more expensive than S3 archival classes. It also adds operational overhead (patching, scaling) that provides no benefit given the low request rate and high latency tolerance.

EBS Cold HDD (sc1) is the lowest-cost EBS option but is still block storage attached to EC2 and is not intended for multi-petabyte, long-term archives. You must run EC2 to serve files, manage backups/snapshots, and handle durability/availability yourself. Costs scale poorly compared to S3 Glacier Deep Archive, and operational complexity is high for a simple archive use case.

S3 Glacier Deep Archive is the correct storage choice because it is optimized for long-term retention of rarely accessed data at the lowest storage cost in S3. The archive is enormous, retained for 7 years, and can tolerate retrieval delays of hours, which aligns directly with Glacier Deep Archive retrieval characteristics. This makes it far more cost-effective than online S3 classes, EFS, or EBS-based designs. The option’s mention of static website hosting is not how private S3 access should be implemented (website endpoints are public-only; private access is achieved with a VPC endpoint and a restrictive bucket policy), but among the provided choices, D is still the best answer because the storage class selection is the decisive factor.

Question Analysis

Core Concept: This question tests selecting the lowest-cost archival storage that still supports private-only access from a VPC. The key services are Amazon S3 storage classes (especially S3 Glacier Deep Archive) and private connectivity to S3 using VPC endpoints, plus access control via bucket policies.

Why the Answer is Correct: S3 Glacier Deep Archive is designed for long-term retention (years) with very infrequent access and the lowest storage cost in S3. The utility has 180 TB per month retained for 7 years, duplicates exist on tape (so extra durability/availability beyond S3’s baseline isn’t a driver), and retrieval is rare (<10 batches/week) with acceptable latency up to 12 hours—this aligns directly with Deep Archive’s retrieval model (hours, not milliseconds). To keep all access private, users connect via AWS Client VPN into a VPC and then reach S3 through an S3 Interface VPC Endpoint (AWS PrivateLink). A bucket policy can require access via that endpoint (using aws:SourceVpce), preventing any public-path access.

Key AWS Features:
- S3 Glacier Deep Archive storage class for lowest-cost archival storage.
- S3 Interface Endpoint (PrivateLink) to keep traffic on the AWS network without public endpoints.
- Bucket policy condition keys (e.g., aws:SourceVpce) to enforce private-only access.
- Optional: S3 Object Lock / lifecycle policies for retention governance (not required by the prompt but common for 7-year archives).

Common Misconceptions:
- “Static website hosting” implies public website endpoints and is not compatible with private-only access requirements; it’s unnecessary for intranet access.
- One Zone-IA is cheaper than Standard-IA but is still an online class intended for faster retrieval and higher access frequency than Deep Archive; at this scale and retention period, storage cost dominates.
- EFS/EBS are not cost-effective for multi-petabyte, long-term archives and require continuously running EC2/web tiers.

Exam Tips: When you see “archive for years,” “rare access,” and “hours of retrieval latency acceptable,” default to S3 Glacier Deep Archive (or Glacier Flexible Retrieval if latency needs are shorter). For “no public endpoints,” look for VPC endpoints + restrictive bucket policies; avoid static website hosting and public S3 access patterns.
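
A minimal boto3 sketch of the storage-class and private-access pattern follows; the bucket name, object key, and VPC endpoint ID are placeholders, and a blanket Deny like this would need carve-outs for administrative roles before being applied for real.

import json
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
BUCKET = "meter-report-archive"          # placeholder bucket
VPCE_ID = "vpce-0abc123456789example"    # the S3 interface endpoint reached via Client VPN

# Archive objects go straight to the lowest-cost storage class
s3.put_object(Bucket=BUCKET, Key="2025/11/meter-reports.csv",
              Body=b"...", StorageClass="DEEP_ARCHIVE")

# Deny any request that does not arrive through the approved VPC endpoint
# (in practice, add exceptions for break-glass/admin principals before applying)
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideVpce",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": VPCE_ID}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))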

Success Stories (9)

s****** · Nov 24, 2025

Study period: 3 months

I used these practice questions and successfully passed my exam. Thanks for providing such well-organized question sets and clear explanations. Many of the questions felt very close to the real exam.

t********** · Nov 13, 2025

Study period: 3 months

Just got certified last week! It was a tough exam, but I’m really thankful to Cloud Pass. The app questions helped me a lot in preparing for it.

효** · Nov 12, 2025

Study period: 1 month

I made good use of the app ^^

p******* · Nov 7, 2025

Study period: 2 months

These practice exams helped me pass the certification. A lot of the real exam questions felt like they were mirrored from here.

d*********** · Nov 7, 2025

Study period: 1 month

Thanks. I think I passed because of the high-quality content here. I’m thinking of taking my next AWS exam here.

Other Practice Tests

Practice Test #1

75 Questions · 180 min · Passing score 750/1000

Practice Test #2

75 Questions · 180 min · Passing score 750/1000

Practice Test #3

75 Questions · 180 min · Passing score 750/1000

Practice Test #5

75 Questions · 180 min · Passing score 750/1000