
A media-streaming startup needs to broadcast playback error alerts generated by a single monitoring microservice to more than 10,000 endpoints (mobile push, email, SMS, and HTTPS webhooks) across two AWS Regions using a topic-based publish/subscribe pattern to fan out messages with minimal code changes. Which AWS service should the team use to implement this publisher-and-subscriber model?
AWS Lambda is a serverless compute service, not a managed pub/sub messaging fabric. While Lambda can be a subscriber to SNS (or can call external endpoints), using Lambda as the primary mechanism to broadcast to 10,000+ endpoints would require custom code to manage subscriptions, retries, throttling, and protocol-specific delivery (SMS/email/push/webhooks). That violates the “minimal code changes” and “topic-based pub/sub” intent.
Amazon SNS is purpose-built for topic-based publish/subscribe fanout. A single publisher sends a message to an SNS topic, and SNS delivers it to many subscribers across supported protocols including SMS, email, mobile push notifications, and HTTP/HTTPS webhooks. SNS scales to large numbers of endpoints and reduces application code to a simple publish call. Cross-Region requirements are handled by deploying topics per Region and forwarding/dual-publishing as needed.
Amazon CloudWatch focuses on observability: metrics, logs, alarms, and events. CloudWatch alarms can notify via SNS, but CloudWatch itself is not the pub/sub service that manages topic subscriptions and multi-protocol endpoint delivery. In this scenario, the microservice is generating alerts and needs a publisher/subscriber model; SNS is the correct messaging layer, and CloudWatch would be optional only if alerts were derived from metrics/logs.
AWS CloudFormation is an infrastructure-as-code service used to provision and manage AWS resources (including SNS topics and subscriptions) through templates. It does not implement runtime message broadcasting or a pub/sub delivery mechanism. CloudFormation could help deploy the SNS-based solution consistently across two Regions, but it is not the service that provides the publisher-and-subscriber messaging model.
Core Concept: This question tests AWS managed messaging for a topic-based publish/subscribe (pub/sub) fanout pattern. In AWS, the canonical service for pub/sub with multiple subscriber protocols is Amazon Simple Notification Service (Amazon SNS).

Why the Answer is Correct: Amazon SNS lets a single publisher (the monitoring microservice) publish an alert message to an SNS topic, and SNS then delivers copies of that message to potentially thousands (or more) of subscribers. SNS natively supports multiple endpoint types that match the question’s requirements: mobile push notifications, email, SMS, and HTTPS webhooks (via HTTP/HTTPS subscriptions). This achieves fanout to >10,000 endpoints with minimal code changes: the microservice only needs to publish to a topic; subscriber management and delivery are handled by SNS.

Key AWS Features: SNS topics provide durable, highly available message ingestion and delivery with automatic scaling. You can use subscription filter policies (message attributes) to route only relevant alerts to specific subscriber groups, reducing downstream noise. For cross-Region needs, SNS is Regional, but you can implement multi-Region delivery by creating topics in each Region and using cross-Region forwarding patterns (for example, publish to both topics, or subscribe an HTTPS endpoint/Lambda in the other Region that republishes). SNS also integrates with other AWS services (Lambda, SQS, EventBridge) if you later need buffering, retries, or additional processing.

Common Misconceptions: Lambda is compute, not a pub/sub broker; using it alone would require custom fanout logic and endpoint management. CloudWatch is for metrics, logs, and alarms; it can trigger notifications but is not the general-purpose pub/sub service for multi-protocol endpoint fanout. CloudFormation is infrastructure-as-code and does not deliver messages.
Exam Tips: When you see “topic-based publish/subscribe,” “fan out,” and “multiple protocols (SMS, email, mobile push, HTTP/S),” think SNS first. If the question emphasizes buffering/consumer pull or ordered processing, consider SQS/Kinesis instead. Remember SNS is Regional; multi-Region designs typically replicate topics and forward or dual-publish for resilience and locality.
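As a rough sketch of how small the publisher-side change can be, the snippet below builds the parameters a monitoring service might pass to the SNS Publish API (with boto3, `sns_client.publish(**request)`). The topic ARN, attribute names, and error codes are illustrative assumptions, not values from the question:

```python
import json

# Hypothetical topic ARN; in practice one topic would exist per Region.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:playback-error-alerts"

def build_publish_request(error_code: str, severity: str, detail: str) -> dict:
    """Build the keyword arguments for an SNS Publish call.

    Message attributes are what subscription filter policies match on,
    so only relevant alerts reach specific subscriber groups.
    """
    return {
        "TopicArn": TOPIC_ARN,
        "Message": json.dumps({"errorCode": error_code, "detail": detail}),
        "Subject": f"Playback error: {error_code}",
        "MessageAttributes": {
            "severity": {"DataType": "String", "StringValue": severity},
        },
    }

# A subscriber that only wants critical alerts could attach a filter
# policy like this at subscribe time (illustrative):
CRITICAL_ONLY_FILTER = {"severity": ["critical"]}

request = build_publish_request("PLAYBACK_STALL", "critical", "Stall rate > 5%")
# With boto3 this would be: boto3.client("sns").publish(**request)
```

Note that SNS itself handles retries and protocol-specific delivery; the publisher's code is just this one call, which is the "minimal code changes" point the question is testing.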
A mid-size logistics company with 220 employees is launching a 15-month cloud transformation across 6 product teams, aiming to align IT delivery metrics (deployment frequency > 10 per week) with business KPIs (customer onboarding < 48 hours) and to establish cross-functional guilds for upskilling and continuous learning. Which perspective in the AWS Cloud Adoption Framework (AWS CAF) serves as the bridge between technology and business to cultivate this culture of continuous growth and learning?
People is correct because it addresses organizational change management: roles, skills, staffing, incentives, and culture. The scenario’s focus on cross-functional guilds, upskilling, and continuous learning is directly within the People perspective. It also helps connect technology delivery practices (e.g., DevOps metrics like deployment frequency) to business outcomes by shaping how teams collaborate and improve.
Governance focuses on decision-making mechanisms, policies, risk management, compliance oversight, and aligning cloud investments with business strategy through guardrails and controls. It can influence KPI alignment at a portfolio level, but it does not primarily target building a learning culture or creating guilds for upskilling. Those culture-and-skills elements are more squarely People perspective concerns.
Operations is about operating and supporting cloud workloads: monitoring, incident and problem management, change management processes, reliability practices, and operational readiness. While Operations supports continuous improvement through operational feedback, it is not the primary CAF perspective for workforce upskilling, organizational structure, or establishing cross-functional guilds. The question emphasizes culture and learning rather than runtime operations.
Security covers security strategy, identity and access management, data protection, threat detection, and compliance controls. Security teams may participate in cross-functional models (e.g., DevSecOps), but the core of the question is cultivating continuous growth and learning and bridging business and technology via people and culture. That is not the primary focus of the Security perspective.
Core Concept: This question tests knowledge of the AWS Cloud Adoption Framework (AWS CAF) perspectives and which one connects business outcomes (KPIs) with technology delivery while building a culture of learning. AWS CAF organizes cloud adoption guidance into perspectives; the relevant idea here is organizational change management, skills, roles, and collaboration models.

Why the Answer is Correct: The People perspective is the “bridge” between technology and business because it focuses on organizational structure, roles and responsibilities, staffing, skills, incentives, and culture. The scenario explicitly mentions aligning IT delivery metrics (deployment frequency) with business KPIs (customer onboarding time) and establishing cross-functional guilds for upskilling and continuous learning. Those are classic People perspective concerns: enabling product teams, creating communities of practice (guilds), defining new ways of working (DevOps/product operating model), and driving continuous improvement through training and career development.

Key AWS Features / Best Practices: While not a single AWS service, the People perspective commonly maps to practices such as:
- Building cloud skills via AWS Training and Certification, AWS Skill Builder, and structured enablement plans.
- Establishing cross-functional teams (product + platform + security) and communities of practice to standardize patterns.
- Defining roles (cloud center of excellence, platform team, SRE/DevOps) and aligning incentives to business outcomes.
- Using metrics (DORA metrics like deployment frequency) as feedback loops for learning and improvement.
These align with the AWS Well-Architected Framework’s Operational Excellence pillar emphasis on learning, experimentation, and continuous improvement, but the CAF lens for culture and skills is specifically People.
Common Misconceptions: Governance can sound like “business alignment,” but it is more about decision rights, portfolio management, risk management, and policies. Operations relates to running and supporting workloads, incident/problem management, and operational processes. Security focuses on risk, controls, and compliance. None of those primarily address upskilling, guilds, and culture change.

Exam Tips: When you see keywords like “skills,” “training,” “organizational change,” “roles,” “culture,” “cross-functional teams,” or “communities of practice/guilds,” think People perspective. When you see “policies, guardrails, compliance reporting, portfolio prioritization,” think Governance. “Runbooks, monitoring, incident response” maps to Operations, and “IAM, encryption, threat detection” maps to Security.
A streaming media startup runs its web application on Amazon EC2 instances in a single AWS Region across 4 Availability Zones; 75% of requests are for static images, CSS, and JavaScript, and the company expects 30,000 requests per minute from users in North America, Europe, and Asia while requiring the median latency for static assets to be under 60 ms globally without deploying additional application servers in other Regions; which AWS service should the company use to meet these requirements?
Amazon Route 53 is a highly available DNS service that can route users to endpoints using policies like latency-based routing or geolocation. However, Route 53 does not cache and serve static files from edge locations. Even if Route 53 directs users to the “closest” Region, the company is not deploying multi-Region origins, and DNS routing alone cannot achieve sub-60 ms median latency globally for static assets.
Amazon CloudFront is the correct choice because it caches static assets at a worldwide network of edge locations, minimizing latency for users in North America, Europe, and Asia while keeping the origin in a single Region. CloudFront reduces origin load and improves performance for high request volumes. Features like cache policies, TTLs, compression, and Origin Shield help meet stringent latency and scalability requirements.
Elastic Load Balancing distributes traffic across targets (EC2, containers, IPs) and can span multiple Availability Zones, improving availability and scaling within a Region. But ELB does not provide global edge caching or accelerate delivery to distant geographies. Users in Europe and Asia would still fetch static assets from the single Region, likely exceeding the 60 ms median latency requirement.
AWS Lambda is a serverless compute service and can be used with Lambda@Edge to customize requests/responses at CloudFront edge locations. However, Lambda alone is not a content delivery solution and does not inherently provide caching or global acceleration for static assets. The primary service needed here is CloudFront; Lambda@Edge is optional for advanced logic (headers, auth, rewrites).
Core Concept: This question tests global content delivery and latency optimization for static assets without deploying additional application servers in multiple Regions. The key service is a CDN (Content Delivery Network), which caches content close to end users.

Why the Answer is Correct: Amazon CloudFront is AWS’s CDN that uses a global network of edge locations to cache and serve static content (images, CSS, JavaScript) with very low latency worldwide. Because 75% of requests are for static assets and the user base is distributed across North America, Europe, and Asia, serving these assets from CloudFront edge locations dramatically reduces round-trip time compared to fetching them from a single Region. This directly supports the requirement of median latency under 60 ms globally, while avoiding deployment of additional application servers in other Regions.

Key AWS Features / How You’d Configure It: CloudFront distributions can use the existing Regional origin (e.g., an Application Load Balancer or S3 bucket) and cache static objects at the edge. You would:
- Configure an origin (ALB/EC2 or S3) and behaviors for static paths (e.g., /static/*).
- Set appropriate TTLs and cache policies; use versioned filenames or cache invalidations for updates.
- Enable compression (Brotli/Gzip) for CSS/JS.
- Use Origin Shield (optional) to reduce origin load for high request rates (30,000/min).
- Use HTTPS and optionally AWS WAF at the edge for protection.

Common Misconceptions: Route 53 can route users to endpoints but does not cache content at the edge; it won’t meet strict global latency for static assets by itself. Elastic Load Balancing improves distribution within a Region but does not provide global edge caching. Lambda can run code at the edge (Lambda@Edge) or in Regions, but it’s not the primary solution for accelerating static content delivery.
Exam Tips: When you see “static content,” “global users,” “low latency,” and “single Region origin,” the default best answer is CloudFront. Pair it with S3 for static hosting when possible, or with ALB/EC2 as the origin when assets are served by the application. Remember: DNS routing (Route 53) and load balancing (ELB) are not CDNs; CloudFront is the CDN service designed for this requirement.
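To make the caching knobs concrete, here is a minimal sketch of the settings involved. The behavior values mirror the configuration steps above (TTLs, compression, a /static/* path pattern); the distribution itself would be created via boto3's `cloudfront.create_distribution` or infrastructure as code, and all names here are illustrative assumptions:

```python
# Illustrative CloudFront cache-behavior settings for the static paths
# described above (not a complete DistributionConfig).
STATIC_BEHAVIOR = {
    "PathPattern": "/static/*",           # images, CSS, JavaScript
    "ViewerProtocolPolicy": "redirect-to-https",
    "Compress": True,                     # Brotli/Gzip for text assets
    "MinTTL": 0,
    "DefaultTTL": 86_400,                 # cache at the edge for 1 day
    "MaxTTL": 31_536_000,                 # up to 1 year for versioned files
}

def cache_headers(asset_is_versioned: bool) -> dict:
    """Origin response headers that steer edge caching.

    Versioned filenames (e.g., app.3f9c.js) can be cached for a very
    long time; unversioned assets get a short TTL so updates propagate
    without needing invalidations.
    """
    max_age = 31_536_000 if asset_is_versioned else 300
    return {"Cache-Control": f"public, max-age={max_age}"}
```

The long-TTL-plus-versioned-filename pattern is what lets the single-Region origin absorb only cache misses while the edge serves the bulk of the 30,000 requests per minute.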
A media startup must store 200 TB of user-uploaded photos with 99.999999999% durability, accessible over HTTPS via REST as objects with object-level metadata and lifecycle rules that transition data to a cheaper infrequent-access tier after 30 days; which AWS service best meets these requirements?
Amazon S3 is the correct choice because it is object storage with native REST/HTTPS access, object-level metadata, and lifecycle policies to transition data to cheaper storage classes (e.g., S3 Standard-IA) after 30 days. S3 is designed for massive scale (200 TB+) and provides 99.999999999% durability by storing data redundantly across multiple AZs. It directly matches every requirement in the prompt.
Amazon EFS is a managed NFS file system for Linux workloads that need shared POSIX file access from multiple instances. While EFS can scale to large sizes, it is not an object store and does not provide S3-style REST object access, object-level metadata semantics, or S3 Lifecycle transitions to infrequent-access object tiers. EFS has its own storage classes (Standard/IA) but the access model is file-based, not object-based.
Amazon EBS is block storage designed to be attached to a single EC2 instance (or limited multi-attach scenarios) and used like a disk volume. It does not provide direct HTTPS/REST access, object-level metadata, or lifecycle rules to transition objects to infrequent-access tiers. EBS durability/availability characteristics differ from S3 and it is not intended for large-scale, internet-accessible object repositories.
Amazon FSx provides managed file systems (e.g., Windows File Server via SMB, Lustre, NetApp ONTAP, OpenZFS) for specialized file workloads. Like EFS, it is file storage rather than object storage and does not natively meet requirements for REST/HTTPS object access, S3 object metadata, or S3 Lifecycle transitions. FSx is chosen for file protocol compatibility and performance features, not for object storage durability and lifecycle tiering.
Core Concept: This question tests selecting the correct AWS storage service based on object storage requirements: extremely high durability, REST/HTTPS access, object-level metadata, and lifecycle tiering to lower-cost infrequent access.

Why the Answer is Correct: Amazon S3 is AWS’s purpose-built object storage service. It is designed to store and retrieve any amount of data (including 200 TB and far beyond) as objects in buckets, accessed natively via REST APIs over HTTPS. S3 provides 99.999999999% (11 9s) durability for objects by redundantly storing data across multiple devices and multiple Availability Zones within a Region. The requirement for “objects with object-level metadata” maps directly to S3 object metadata (system-defined and user-defined) and tagging.

Key AWS Features: S3 Lifecycle rules can automatically transition objects between storage classes based on age (e.g., after 30 days). A common pattern is S3 Standard for recent uploads, then transition to S3 Standard-IA (infrequent access) or S3 One Zone-IA depending on resiliency needs, and optionally later to S3 Glacier Instant Retrieval/Flexible Retrieval/Deep Archive for archival. Lifecycle can also expire objects, manage noncurrent versions (if versioning is enabled), and use object tags to apply different policies. S3 supports encryption (SSE-S3, SSE-KMS), bucket policies, IAM, and access logging—important for secure HTTPS access patterns.

Common Misconceptions: EFS and FSx are file systems (POSIX/SMB/NFS semantics) rather than object stores; they don’t provide S3-style object metadata, REST object APIs, or S3 Lifecycle transitions to IA tiers. EBS is block storage attached to EC2 and is not suitable for direct HTTPS/REST object access or massive shared object repositories.

Exam Tips: When you see “REST/HTTPS objects,” “object metadata,” “lifecycle transition,” and “11 9s durability,” think Amazon S3 immediately.
File storage keywords (NFS/SMB/POSIX) point to EFS/FSx; block storage keywords (volumes, attached to EC2, low-latency) point to EBS.
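The 30-day transition described above maps to a single lifecycle rule. The sketch below builds that rule as the payload boto3's `s3.put_bucket_lifecycle_configuration` expects; the bucket prefix and rule ID are illustrative assumptions:

```python
# Illustrative S3 lifecycle configuration matching the question's
# requirement: transition photo objects to Standard-IA after 30 days.
# Applied with:
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="example-bucket", LifecycleConfiguration=build_lifecycle_config())
def build_lifecycle_config(prefix: str = "photos/") -> dict:
    return {
        "Rules": [
            {
                "ID": "photos-to-ia-after-30-days",  # hypothetical rule name
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},
                "Transitions": [
                    # Move objects to Standard-IA once they are 30 days old.
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                ],
            }
        ]
    }
```

A later archival step (e.g., Glacier Deep Archive) would simply be another entry in the same `Transitions` list with a larger `Days` value.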
A ride-sharing startup experiences weekday traffic spikes where concurrent requests jump from 800 to 4,500 within 10 minutes and drop back below 600 after midnight, and it wants the cloud to automatically add compute capacity when average CPU utilization across its instances exceeds 70% for 5 minutes and to remove that capacity when it falls below 30% for 10 minutes so that it pays only for what it uses; which AWS concept best represents this goal?
Scalability is the ability of an application to handle increased load by adding resources (vertical or horizontal) and is often discussed in the context of long-term growth or designing systems to support higher throughput. While the scenario involves handling spikes, the key requirement is automatic scale out and scale in based on CPU thresholds and time windows. That dynamic, bidirectional behavior is more precisely elasticity than scalability.
Sustainability is an AWS Well-Architected Framework pillar focused on minimizing environmental impacts, improving energy efficiency, and optimizing resource utilization over time. Although scaling down unused capacity can indirectly support sustainability goals, the question is fundamentally about automatically adjusting compute capacity to match demand and cost. The primary concept being tested is elasticity, not sustainability practices or reporting.
Elasticity is the ability to automatically provision and deprovision resources to match current demand, scaling out during spikes and scaling in during low usage to optimize cost. The question explicitly describes CPU-based thresholds over defined durations (e.g., >70% for 5 minutes, <30% for 10 minutes) and a desire to pay only for what is used. This aligns directly with EC2 Auto Scaling policies driven by CloudWatch metrics.
Operational excellence is a Well-Architected pillar emphasizing operational processes, automation, monitoring, and continuous improvement. Auto Scaling can be part of operational excellence, but the scenario’s core goal is not about operational processes or governance; it is about dynamically matching capacity to demand and cost efficiency. Therefore, operational excellence is adjacent but not the best representation of the described objective.
Core Concept: This question tests the AWS Cloud concept of elasticity: the ability to automatically increase and decrease resources to match demand, paying only for what you use. In AWS, this is commonly implemented with Amazon EC2 Auto Scaling (often in an Auto Scaling group) and scaling policies driven by Amazon CloudWatch metrics.

Why the Answer is Correct: The startup’s workload has predictable daily spikes and troughs, and it explicitly wants the cloud to add capacity when average CPU > 70% for 5 minutes and remove capacity when CPU < 30% for 10 minutes. That is the textbook definition of elasticity: rapid, automated scaling out and scaling in based on demand signals, minimizing cost during low utilization while maintaining performance during peaks.

Key AWS Features: A typical implementation uses an Auto Scaling group across multiple Availability Zones, with CloudWatch alarms on the ASG’s Average CPUUtilization metric. Target tracking or step scaling policies can be configured with thresholds and evaluation periods (e.g., 5 minutes above 70% to scale out; 10 minutes below 30% to scale in). Cooldowns/warm-up settings help prevent thrashing. Elasticity is also supported at higher layers (e.g., AWS Lambda concurrency, ECS Service Auto Scaling), but the question’s “instances” and CPU thresholds strongly align with EC2 Auto Scaling + CloudWatch.

Common Misconceptions: Many confuse scalability with elasticity. Scalability is the ability of a system to handle growth (often long-term) by adding resources or designing for higher capacity, but it does not necessarily imply automatic, rapid, and bidirectional scaling. Sustainability and operational excellence are Well-Architected pillars; they are important but not the primary concept described by CPU-based automatic scale-out/scale-in behavior.

Exam Tips: When you see “automatically add capacity” and “remove capacity” based on metrics over time windows, think elasticity and Auto Scaling.
Look for cues like “pay only for what it uses,” “spikes,” “drops,” “scale out/in,” and CloudWatch alarm thresholds. If the question emphasized long-term growth planning or redesigning for higher throughput, scalability might be the better fit; if it emphasizes dynamic right-sizing with automation, it’s elasticity.
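A minimal sketch of the elasticity logic described above: in practice CloudWatch alarms drive EC2 Auto Scaling policies, so this function merely mirrors the decision the alarms encode, and the alarm parameters shown alongside it use hypothetical names:

```python
# Mirror of the scaling rules from the scenario:
#   avg CPU > 70% sustained 5 min  -> add capacity (scale out)
#   avg CPU < 30% sustained 10 min -> remove capacity (scale in)
def scaling_action(avg_cpu: float, minutes_sustained: int) -> str:
    if avg_cpu > 70 and minutes_sustained >= 5:
        return "scale_out"   # add instances during the weekday spike
    if avg_cpu < 30 and minutes_sustained >= 10:
        return "scale_in"    # shed instances after midnight
    return "no_change"

# Corresponding CloudWatch alarm parameters for the scale-out side
# (illustrative; passed to cloudwatch.put_metric_alarm with boto3):
SCALE_OUT_ALARM = {
    "AlarmName": "asg-cpu-high-70",   # hypothetical alarm name
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Statistic": "Average",
    "Period": 60,                     # 1-minute datapoints
    "EvaluationPeriods": 5,           # sustained for 5 minutes
    "Threshold": 70.0,
    "ComparisonOperator": "GreaterThanThreshold",
}
```

The scale-in alarm would be symmetric (`Threshold` 30.0, `EvaluationPeriods` 10, `LessThanThreshold`), and cooldown/warm-up settings on the scaling policies keep the two alarms from thrashing against each other.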
A fintech company undergoing a SOC 2 Type II audit must automatically record and retain resource configuration changes for 200 Amazon EC2 instances and 60 Amazon RDS DB instances across three AWS Regions for at least 90 days, and it needs to export periodic configuration and compliance reports that external auditors can review; which AWS service should the company use to meet these requirements?
Amazon Cognito is an identity service for authentication, authorization, and user management (user pools, identity pools, federation). It does not record AWS resource configuration changes, maintain configuration history, or produce compliance reports for EC2/RDS. Cognito can be part of an application’s security architecture, but it is not an audit evidence system for infrastructure configuration drift or compliance monitoring.
Amazon FSx provides managed file systems (e.g., FSx for Windows File Server, Lustre, NetApp ONTAP, OpenZFS). While it can store files, it does not natively track AWS resource configuration changes or evaluate compliance. Using FSx as a storage target would still require another service to generate the configuration history and compliance data; AWS Config is the service that produces that evidence.
AWS Config is purpose-built to record AWS resource configuration changes, maintain configuration history, and evaluate compliance through AWS Config Rules. It supports the resource types in the question, including Amazon EC2 instances and Amazon RDS DB instances, and it operates across multiple Regions with centralized visibility available through aggregators. For the 90-day audit requirement, AWS Config can deliver configuration history, snapshots, and compliance data to Amazon S3, where the company can retain those records for the required period and make them available to external auditors. This makes it the best fit for automated evidence collection and periodic compliance reporting during a SOC 2 Type II audit.
Amazon Inspector is a vulnerability management service that scans workloads (e.g., EC2, container images, and Lambda) for software vulnerabilities and unintended network exposure. It is valuable for security posture, but it does not provide authoritative configuration timelines, configuration snapshots, or compliance reporting for resource configuration drift across Regions. For SOC 2 evidence of configuration changes and compliance states, AWS Config is the correct service.
Core Concept: This question tests continuous configuration monitoring, change tracking, and audit-ready reporting across multiple AWS Regions—capabilities provided by AWS Config. SOC 2 Type II emphasizes ongoing controls and evidence over time, so you need an automated, durable record of resource configuration changes and compliance status.

Why the Answer is Correct: AWS Config records configuration changes for supported resources (including Amazon EC2 instances and Amazon RDS DB instances), maintains a timeline of configuration history, and can evaluate resources against desired configurations using AWS Config Rules. It is designed for compliance audits: you can retain configuration items and snapshots, query historical state, and export reports/evidence to Amazon S3 for external auditors. It also supports multi-Region and multi-account aggregation via AWS Config Aggregators, which is ideal for “three Regions” and a sizable fleet.

Key AWS Features:
1) Configuration recording: Enable AWS Config recorders in each Region to capture changes for EC2 and RDS.
2) Retention: Configure the retention period (e.g., 90+ days) for configuration items; store delivered configuration history and snapshots in an S3 bucket with lifecycle policies for longer retention if needed.
3) Compliance evaluation: Use managed or custom Config Rules to assess compliance (e.g., encryption, public access, security group rules).
4) Reporting/export: Deliver configuration snapshots and compliance results to S3; auditors can review exported files, or you can generate periodic reports using AWS Config data in S3/Athena/QuickSight.
5) Central visibility: Use an aggregator to view compliance and configuration across Regions (and accounts) from one place.

Common Misconceptions: Teams often confuse AWS Config with Amazon Inspector. Inspector focuses on vulnerability management and runtime/package exposure, not authoritative configuration history and change timelines.
Others may think CloudTrail is the answer for “changes,” but CloudTrail records API activity, not the resulting resource configuration state and compliance evaluation.

Exam Tips: When you see “record configuration changes,” “configuration history,” “compliance rules,” “audit evidence,” “multi-Region,” and “export reports,” default to AWS Config. Pair it with S3 for durable retention and optional aggregation for centralized reporting—common patterns in SOC 2, ISO 27001, and PCI evidence collection.
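To ground the per-Region setup steps, the sketch below assembles the recorder scoping and retention values the scenario calls for. With boto3 these would be passed to `configservice.put_configuration_recorder` and `configservice.put_retention_configuration`; the role ARN and Region list are illustrative assumptions:

```python
# Scope the recorder to just the resource types from the question.
# The AWS::... type strings are the real AWS Config resource type names.
RECORDER = {
    "name": "default",
    "roleARN": "arn:aws:iam::123456789012:role/aws-config-role",  # hypothetical
    "recordingGroup": {
        "allSupported": False,
        "resourceTypes": [
            "AWS::EC2::Instance",    # the 200 EC2 instances
            "AWS::RDS::DBInstance",  # the 60 RDS DB instances
        ],
    },
}

# SOC 2 evidence window from the scenario (minimum 90 days).
RETENTION = {"RetentionPeriodInDays": 90}

def regions_to_enable() -> list:
    """AWS Config recorders are Regional, so the same setup is repeated
    in each of the company's three Regions (illustrative list)."""
    return ["us-east-1", "eu-west-1", "ap-southeast-1"]
```

A delivery channel pointing at an S3 bucket (plus an aggregator for the central view) would complete the evidence pipeline described above.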
A media streaming company needs to run complex analytical SQL queries on 20 TB of historical clickstream data with nightly batch loads from Amazon S3, requiring columnar storage, massively parallel processing, and integration with BI tools for up to 30 concurrent analysts; which AWS service best provides a petabyte-scale data warehouse for this use case?
Amazon RDS is a managed relational database service primarily intended for OLTP workloads (e.g., application backends) using engines like MySQL, PostgreSQL, and SQL Server. While it can run SQL queries, it is not optimized for petabyte-scale analytics, columnar storage, or MPP execution. For 20 TB clickstream analytics with many concurrent analysts and BI reporting, RDS typically becomes cost-inefficient and performance-limited compared to a purpose-built warehouse.
Amazon DynamoDB is a fully managed NoSQL database optimized for low-latency key-value and document access patterns at massive scale. It does not provide a traditional analytical SQL data warehouse experience with complex joins, large aggregations, and columnar storage. Although DynamoDB has PartiQL and can support some query patterns, it is not designed for MPP analytics over 20 TB of historical clickstream data or for standard BI warehouse workloads.
Amazon Redshift is AWS’s managed, petabyte-scale cloud data warehouse built for complex analytical SQL. It uses columnar storage and MPP to parallelize queries across nodes, supports efficient compression, and integrates tightly with Amazon S3 for batch loads (COPY) and data lake querying (Redshift Spectrum). It also supports BI tools via JDBC/ODBC and can handle many concurrent analysts using features like Concurrency Scaling or Redshift Serverless.
Amazon Aurora is a high-performance managed relational database compatible with MySQL/PostgreSQL, optimized for OLTP and high availability. While Aurora can support read scaling and some reporting, it is not a columnar, MPP data warehouse and is not the best fit for large-scale clickstream analytics with nightly S3 batch loads and heavy aggregation queries. For warehouse-style BI analytics at this scale, Redshift is the intended service.
Core Concept: This question tests recognition of AWS’s purpose-built, petabyte-scale analytical data warehouse service for complex SQL analytics: Amazon Redshift. Key cues include columnar storage, massively parallel processing (MPP), large historical datasets (20 TB and growing), nightly batch loads from Amazon S3, and BI tool connectivity for many concurrent analysts.

Why the Answer is Correct: Amazon Redshift is designed for OLAP workloads (analytics) and supports complex analytical SQL queries over large datasets using MPP architecture. It natively integrates with Amazon S3 for bulk loading (COPY command) and is commonly used for clickstream analytics and reporting. Redshift’s columnar storage and compression reduce I/O and improve scan performance, which is critical for large fact tables typical of clickstream data. It also supports standard JDBC/ODBC connections, making it straightforward to integrate with BI tools and support ~30 concurrent analysts (often via Concurrency Scaling and/or Redshift Serverless).

Key AWS Features:
- Columnar storage + advanced compression for efficient scans.
- MPP query execution across nodes; RA3 instances decouple compute and storage for scalable, cost-effective growth.
- COPY from S3 for high-throughput batch ingestion; can be orchestrated nightly with AWS Glue, Step Functions, or managed workflows.
- Concurrency Scaling to automatically add transient capacity for bursts of concurrent queries.
- Redshift Spectrum to query data directly in S3 (data lake) using the same SQL, complementing warehouse storage.
- Integration with IAM, KMS encryption, VPC, and BI via JDBC/ODBC.

Common Misconceptions: RDS and Aurora are relational databases optimized for OLTP (transactional) workloads, not large-scale analytics with columnar/MPP execution. DynamoDB is a NoSQL key-value/document store and does not support complex analytical SQL joins/aggregations across large datasets in the same way.
Exam Tips: When you see “data warehouse,” “columnar,” “MPP,” “petabyte-scale analytics,” “S3 bulk loads,” and “BI tools,” default to Amazon Redshift. If the question emphasizes querying S3 data without loading, think Redshift Spectrum or Athena; if it emphasizes lakehouse governance, consider Glue/Lake Formation—but for a managed warehouse, Redshift is the canonical answer.
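The nightly batch-load step above centers on Redshift's COPY command. The sketch below builds such a statement for one day's S3 partition; the bucket, table, and IAM role names are assumptions for illustration:

```python
# Illustrative nightly load: construct the Redshift COPY statement that
# bulk-loads a day's clickstream files from S3 in parallel across nodes.
# A scheduler (e.g., Step Functions or Glue) would run this each night.
def build_copy_sql(load_date: str) -> str:
    return (
        "COPY clickstream_events "                               # target table
        f"FROM 's3://example-clickstream/{load_date}/' "         # day partition
        "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-load' "
        "FORMAT AS PARQUET;"                                     # columnar input
    )
```

COPY reads the many files under the prefix in parallel, which is why it is preferred over row-by-row INSERTs for warehouse-scale ingestion.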
A media analytics company is migrating an on-premises monolithic reporting application to AWS and intends to decompose it into approximately 12 domain-aligned microservices, containerize the workloads, and deploy them on Amazon EKS across 2 Availability Zones with independent CI/CD pipelines and autoscaling, and the team is willing to rewrite about 30% of the code to adopt event-driven patterns and managed cloud-native capabilities. Which migration strategy best aligns with these goals?
Rehost (“lift and shift”) moves the application largely as-is to AWS (e.g., onto EC2) with minimal or no code changes. It’s chosen for speed and low risk, not for major architectural modernization. Decomposing a monolith into microservices, implementing event-driven patterns, and building independent CI/CD pipelines goes far beyond rehosting, so this option does not align with the stated goals.
Repurchase means replacing the existing application with a different product, typically a SaaS solution (e.g., moving from a custom CRM to Salesforce). This strategy reduces operational burden but requires adopting a new product’s capabilities and constraints. The question describes containerizing and decomposing the existing application into microservices on EKS, not replacing it with a commercial off-the-shelf or SaaS reporting platform.
Replatform (“lift, tinker, and shift”) involves making a few optimizations to take advantage of cloud services without changing the core architecture—e.g., moving a monolith to containers or switching the database to a managed service with minimal code changes. While EKS and autoscaling could fit replatforming, the explicit microservices decomposition and event-driven redesign indicate a deeper re-architecture than replatforming typically implies.
Refactor (re-architect) is the best migration strategy when an organization is intentionally changing the application's architecture to take advantage of cloud-native patterns and managed services. In this scenario, the company is not merely moving the monolith into containers; it plans to decompose the application into about 12 domain-aligned microservices, which is a substantial redesign of service boundaries, deployment units, and operational ownership. The willingness to rewrite about 30% of the code to support event-driven patterns is another strong indicator of refactoring, because that level of code and communication-model change goes beyond simple platform optimization. Running the services on Amazon EKS across two Availability Zones with independent CI/CD pipelines and autoscaling further supports a modernized microservices architecture rather than a minimally changed migration. On AWS exams, keywords like microservices, event-driven, managed cloud-native capabilities, and meaningful code changes strongly point to Refactor.
Core Concept: This question tests the AWS migration strategies (the “6 Rs”) and how to choose among them based on the desired degree of architectural change. It also touches cloud-native modernization patterns: microservices decomposition, containers on Amazon EKS, event-driven design, managed services, independent CI/CD, and autoscaling.

Why the Answer is Correct: Refactor (also called re-architect) is the best fit because the company explicitly plans significant application changes: breaking a monolith into ~12 domain-aligned microservices, adopting event-driven patterns, and leveraging managed cloud-native capabilities. Even though only ~30% of the code will be rewritten, the architectural shift is substantial: service boundaries, communication patterns (events vs. synchronous calls), deployment model (containers), and operational model (independent pipelines and scaling). These are hallmark indicators of refactoring rather than a lift-and-shift or minor platform tweaks.

Key AWS Features: On EKS across two AZs, each microservice can run as a separate Kubernetes Deployment with the Horizontal Pod Autoscaler (HPA) and Cluster Autoscaler providing elasticity. Independent CI/CD pipelines commonly use AWS CodePipeline/CodeBuild or third-party tooling, deploying via Helm or Argo CD. Event-driven patterns typically use Amazon EventBridge, Amazon SNS/SQS, or Amazon MSK, with services publishing and consuming events to reduce coupling. Managed cloud-native capabilities may include Amazon RDS/Aurora, DynamoDB, ElastiCache, or Step Functions, aligning with AWS Well-Architected principles (reliability, performance efficiency, operational excellence).

Common Misconceptions: Replatform can sound tempting because containers and EKS are “platform changes,” but replatforming implies minimal code changes and no major redesign (e.g., “lift, tinker, and shift”). Here, the intent is to decompose into microservices and adopt an event-driven architecture, which goes beyond replatforming.
Rehost is clearly insufficient because it preserves the monolith with minimal changes. Repurchase is moving to a SaaS/COTS product, which doesn’t match the stated plan to containerize and redesign.

Exam Tips: Look for keywords: “microservices,” “event-driven,” “managed services,” “rewrite code,” and “re-architect” strongly indicate Refactor. If the scenario emphasizes minimal code change and mostly infrastructure/platform adjustments, that points to Replatform. If it’s “move as-is,” it’s Rehost. If it’s “replace with SaaS,” it’s Repurchase.
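The per-service autoscaling mentioned above can be expressed as an ordinary Kubernetes HorizontalPodAutoscaler manifest applied to each microservice's Deployment. A minimal sketch; the Deployment name, replica bounds, and CPU threshold are hypothetical choices, not values from the scenario:

```yaml
# Hypothetical HPA for one of the ~12 microservices running on EKS.
# Names and thresholds are illustrative placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: reporting-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: reporting-service
  minReplicas: 2          # keeps at least one pod per Availability Zone
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Because each microservice has its own Deployment and HPA, services scale independently of one another, which is exactly the operational decoupling that distinguishes a refactor from a containerized monolith.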
An e-commerce startup operates 4 AWS accounts and needs to repeatedly provision identical VPCs, IAM roles, and Amazon RDS instances across 2 Regions using version-controlled YAML templates with change sets and automated rollback; which AWS service should they use to implement infrastructure as code?
AWS CodeDeploy automates application deployments (e.g., to EC2, on-premises, Lambda, ECS) and focuses on rolling updates, traffic shifting, and deployment hooks. It does not define or provision foundational infrastructure like VPCs, IAM roles, or RDS from YAML templates, nor does it provide CloudFormation-style change sets for infrastructure updates. CodeDeploy is CI/CD for code, not primary IaC for AWS resources.
AWS Elastic Beanstalk is a managed application platform that deploys web apps and creates supporting resources (like EC2, ALB, Auto Scaling) based on environment configuration. While it can create infrastructure, it is opinionated around application environments and is not intended for repeatable, version-controlled provisioning of arbitrary resources such as custom VPC architectures, IAM roles, and RDS with explicit change sets and rollback semantics like CloudFormation.
Amazon API Gateway is a service for creating, publishing, and securing APIs (REST, HTTP, WebSocket). It is not an infrastructure provisioning tool and does not manage multi-resource deployments via YAML templates, change sets, or rollback. Although API Gateway can be defined within IaC tools (including CloudFormation), API Gateway itself is not the service used to implement infrastructure as code across accounts and Regions.
AWS CloudFormation is AWS’s native Infrastructure as Code service. It uses YAML/JSON templates stored in version control to provision and update AWS resources consistently. CloudFormation supports Change Sets to preview updates before execution and provides automatic rollback on failures. For repeated deployments across multiple AWS accounts and Regions, CloudFormation StackSets (often integrated with AWS Organizations) enables centralized, consistent provisioning at scale.
Core Concept: This question tests Infrastructure as Code (IaC) on AWS and the service that natively provisions and manages AWS resources from declarative templates with safe deployment mechanisms.

Why the Answer is Correct: AWS CloudFormation is the AWS-native IaC service that uses version-controlled YAML/JSON templates to repeatedly provision identical stacks of resources (VPCs, IAM roles, RDS, etc.) across multiple Regions and accounts. CloudFormation supports Change Sets to preview the impact of template updates before execution, and it provides automated rollback on failure to return the stack to the last known good state. These requirements (YAML templates, change sets, rollback) map directly to CloudFormation’s core capabilities.

Key AWS Features:
- Templates (YAML/JSON) define resources and dependencies declaratively.
- Stacks and stack updates manage the lifecycle (create/update/delete) consistently.
- Change Sets show what will be added/modified/removed prior to applying changes.
- Automatic rollback and drift detection improve safety and governance.
- Cross-account and multi-Region deployments are commonly implemented via CloudFormation StackSets (often with AWS Organizations), enabling consistent provisioning across multiple accounts and Regions.
- Integration with CI/CD (e.g., CodePipeline) supports version-controlled deployments.

Common Misconceptions: Some may choose Elastic Beanstalk because it “provisions resources,” but it is focused on application platform deployment, not general-purpose IaC for VPC/IAM/RDS with change sets. CodeDeploy is also part of CI/CD but deploys application code, not foundational infrastructure. API Gateway is unrelated to provisioning infrastructure.

Exam Tips: When you see “YAML templates,” “change sets,” “rollback,” and “provision identical AWS resources,” think CloudFormation.
If the question adds “across multiple accounts and Regions,” remember StackSets as the CloudFormation feature designed for that governance and scale scenario.
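The version-controlled YAML template at the heart of this scenario is just an ordinary CloudFormation template. A minimal sketch covering two of the resource types from the question; the logical names, CIDR block, and trusted service are hypothetical placeholders:

```yaml
# Hypothetical CloudFormation template provisioning a VPC and an IAM role.
# Logical names, CIDR range, and the trusted principal are illustrative.
AWSTemplateFormatVersion: '2010-09-09'
Description: Baseline network and IAM for repeatable multi-account deployments
Resources:
  AppVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true
  AppInstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
```

In practice the team would preview an update with `aws cloudformation create-change-set`, review it, then apply it with `aws cloudformation execute-change-set`; the same template can be rolled out to all 4 accounts and both Regions through StackSets.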
After retiring a 25-server on-premises environment that cost $8,000 per month in colocation fees, a media startup moved to AWS; in its first month it ran 2,400 Amazon EC2 compute hours and stored 15 TB-month in Amazon S3 with no long-term commitments, and the finance team noted the bill varied directly with actual usage—what advantage of cloud computing does this primarily demonstrate?
Stopping spending money running and maintaining data centers refers to eliminating the operational burden and costs of owning/operating facilities (power, cooling, physical security, hardware refresh, racking/stacking). While the company did retire a colocation setup, the question’s primary emphasis is that charges varied directly with EC2 hours and S3 TB-month. That points more specifically to usage-based variable cost, not just avoiding data center operations.
Increase speed and agility is about rapidly provisioning resources (minutes instead of weeks), experimenting quickly, and shortening time-to-market using services like EC2, Auto Scaling, and managed services. The scenario does not discuss faster provisioning, deployment velocity, or iterative experimentation. Instead, it highlights billing that tracks consumption, which is a cost model advantage rather than an agility advantage.
Go global in minutes refers to deploying workloads across multiple AWS Regions and edge locations quickly to serve users worldwide with low latency and resilience. Nothing in the prompt mentions Regions, global expansion, or international user reach. The metrics provided (compute hours and S3 TB-month) and the finance team’s observation are about metered billing, not global infrastructure reach.
Trade fixed expense for variable expense is directly demonstrated: the prior $8,000/month colocation fee is a largely fixed monthly cost, whereas AWS charges are metered (EC2 compute hours and S3 TB-month) and scale with actual usage. The prompt explicitly states the bill varied directly with usage and that there were no long-term commitments, which is the clearest indicator of pay-as-you-go variable spending.
Core Concept: This question tests a foundational cloud economics principle from AWS Cloud Concepts: shifting from fixed, upfront costs to usage-based pricing (pay-as-you-go). The scenario references metered consumption (EC2 compute hours and S3 TB-month) and explicitly notes the bill varied directly with actual usage.

Why the Answer is Correct: The company previously paid $8,000/month in colocation fees for a 25-server environment, a largely fixed expense that must be paid regardless of utilization. After moving to AWS with “no long-term commitments,” they consumed 2,400 EC2 hours and 15 TB-month of S3 storage, and finance observed that charges tracked actual usage. That is the hallmark of “trade fixed expense for variable expense”: costs scale up or down with demand rather than being locked into a constant monthly facility/server cost.

Key AWS Features: Amazon EC2 is billed based on instance usage (per second or per hour, depending on OS and purchasing model) and can be run On-Demand with no commitment. Amazon S3 storage is billed per GB-month (plus requests and data transfer), so storing 15 TB for a month maps directly to a measurable billing unit. AWS also offers Savings Plans and Reserved Instances for commitment-based discounts, but the question explicitly says no commitments, reinforcing pure variable spend.

Common Misconceptions: Option A (stop spending money running and maintaining data centers) is also a real cloud advantage, and the mention of retiring an on-premises environment can tempt you there. However, the key clue is the finance observation that the bill varies directly with usage; this is specifically about cost structure (fixed vs. variable), not operational responsibility. Options B and C are unrelated to billing behavior.

Exam Tips: When you see phrases like “pay only for what you use,” “billed by the hour,” “GB-month,” “no upfront,” “no long-term commitments,” or “costs scale with usage,” the best match is usually variable expense.
When the focus is eliminating facilities, power, cooling, racking, and hardware lifecycle, that points to “stop spending money running and maintaining data centers.”
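The fixed-vs-variable distinction can be made concrete with back-of-the-envelope arithmetic. A minimal sketch; the per-hour and per-GB rates below are illustrative assumptions for the exercise, not actual AWS prices:

```python
# Illustrative pay-as-you-go bill: the charge tracks metered usage directly.
# Both rates are hypothetical placeholders, not real AWS pricing.
EC2_RATE_PER_HOUR = 0.10       # assumed On-Demand compute rate, USD/hour
S3_RATE_PER_GB_MONTH = 0.023   # assumed storage rate, USD/GB-month

def monthly_bill(ec2_hours: float, s3_tb: float) -> float:
    """Variable cost: scales linearly with consumption, with no fixed fee."""
    compute = ec2_hours * EC2_RATE_PER_HOUR
    storage = s3_tb * 1000 * S3_RATE_PER_GB_MONTH  # 1 TB = 1,000 GB (decimal)
    return round(compute + storage, 2)

# First month from the scenario: 2,400 EC2 hours + 15 TB-month in S3.
print(monthly_bill(2400, 15))    # → 585.0
# Halving usage halves the bill -- unlike the fixed $8,000 colocation fee,
# which would have cost the same regardless of utilization.
print(monthly_bill(1200, 7.5))   # → 292.5
```

The contrast with the old model is the exam takeaway: under colocation, zero usage still cost $8,000; under pay-as-you-go, zero usage costs zero.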