
Simulates the real exam experience with 65 questions and a 90-minute time limit. Practice with AI-verified answers and detailed explanations.
AI-Powered
Every answer is cross-checked by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth analysis of every question.
A university IT team manages a single AWS account with 180 IAM users across 4 departments. During a quarterly security review, the compliance officer requests a downloadable CSV report that shows, for every IAM user and the root user, whether MFA is enabled (true/false), the user’s creation date, the last password change date, and whether access keys are active. The team must generate this report directly from the AWS Management Console without writing scripts, enabling additional services, or incurring extra cost. Which AWS feature or service will meet this requirement?
AWS Cost and Usage Reports (CUR) provide detailed billing and usage line items and can be delivered to S3 (often queried with Athena). While CUR can be exported and analyzed, it is not an IAM security/compliance report and does not include per-user MFA status, password change dates, or access key active/inactive state. It also typically requires setup/delivery configuration, which conflicts with the “directly from the console” constraint.
IAM credential reports are designed for exactly this audit use case: a downloadable CSV generated from the IAM console that lists every IAM user and the root user with credential and security posture fields. It includes MFA enabled status, user creation time, password last changed information, and access key active/inactive indicators. It requires no scripting, no additional services, and has no extra cost, matching all constraints.
Detailed Billing Reports are billing-focused artifacts (historically associated with detailed cost breakdowns) and are not intended for identity or credential compliance. They do not contain IAM user attributes like MFA enabled, user creation date, password last changed, or access key status. Even if downloadable, they address financial reporting rather than security posture, so they cannot meet the compliance officer’s requested IAM credential details.
AWS Cost Explorer reports help visualize and analyze costs over time and can export cost data. However, Cost Explorer is strictly about spend and usage allocation, not IAM credential hygiene. It cannot report MFA status, password change dates, or access key activation state for IAM users or the root user. Therefore, it does not satisfy the security review reporting requirements.
Core Concept: This question tests knowledge of IAM account-level reporting features, specifically the IAM credential report, which provides a downloadable CSV snapshot of credential-related security posture for all IAM users and the root user.

Why the Answer is Correct: IAM credential reports are generated directly from the AWS Management Console (IAM console) and downloaded as a CSV at no additional cost. The report includes exactly the types of fields the compliance officer requested: whether MFA is enabled for each user (including the root account), the user creation time, password last used/changed indicators (including password last changed), and the status/last rotated information for access keys (active/inactive). This satisfies the constraints: no scripts, no enabling additional services, and no extra cost.

Key AWS Features:
- IAM Credential Report: An account-wide report listing all IAM users and the root user.
- Security-relevant columns commonly used for audits: mfa_active, user_creation_time, password_last_changed, access_key_1_active/access_key_2_active (and related last rotated/last used fields).
- Generated on demand in the console (IAM > Credential report) and downloadable as CSV, which aligns with “downloadable CSV report” requirements.
- Supports periodic compliance checks and aligns with AWS Well-Architected Security Pillar practices (strong identity foundation, MFA, and credential lifecycle management).

Common Misconceptions: Billing and cost tools (Cost Explorer, Cost and Usage Reports, Detailed Billing Reports) can produce CSVs, but they focus on spend/usage, not IAM credential hygiene. They do not report MFA status, password change dates, or access key activation state per IAM user. Another common confusion is with IAM Access Analyzer or AWS Config; those can help with security posture, but they either don’t produce this exact consolidated CSV in-console or require enabling additional services (violating constraints).
Exam Tips: When you see requirements like “CSV of all IAM users + root,” “MFA enabled true/false,” “password/access key status,” and “no scripts,” immediately think “IAM credential report.” Also remember that the root user is included in the credential report, which is a frequent exam detail used to distinguish it from other IAM views or reports.
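Although generating the report requires no scripting, teams often post-process the downloaded CSV during a review. A minimal sketch in Python of flagging users without MFA locally; the sample rows below use a subset of the report's columns and are illustrative, not real report output:

```python
import csv
import io

# Illustrative sample of an IAM credential report; real reports contain
# additional columns (e.g., access_key_1_last_rotated, password_last_used).
SAMPLE_REPORT = """\
user,arn,user_creation_time,password_last_changed,mfa_active,access_key_1_active,access_key_2_active
<root_account>,arn:aws:iam::111122223333:root,2016-01-15T10:00:00+00:00,not_supported,true,false,false
alice,arn:aws:iam::111122223333:user/alice,2021-03-02T09:30:00+00:00,2024-11-01T08:12:00+00:00,true,true,false
bob,arn:aws:iam::111122223333:user/bob,2019-07-19T14:45:00+00:00,2023-02-10T16:20:00+00:00,false,true,true
"""

def users_without_mfa(report_csv: str) -> list[str]:
    """Return the user names whose mfa_active column is not 'true'."""
    reader = csv.DictReader(io.StringIO(report_csv))
    return [row["user"] for row in reader if row["mfa_active"] != "true"]

print(users_without_mfa(SAMPLE_REPORT))  # ['bob']
```

The same approach extends to the access_key_*_active and password_last_changed columns for key-rotation and password-age checks.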
A retail company operates a 9-year-old on-premises Java monolith that handles about 15,000 orders per day and, while migrating to AWS, wants to split it into 12 independently deployable microservices running in containers with CI/CD and domain-driven boundaries; which migration strategy should the company choose?
Rehost (lift-and-shift) moves the application with minimal or no code changes, typically to Amazon EC2 (and possibly a load balancer and managed database). It is chosen for speed and risk reduction, not for redesign. It would keep the Java monolith largely intact and would not achieve decomposition into 12 microservices, independent deployments, or domain-driven boundaries.
Replatform (lift-tinker-and-shift) involves limited changes to gain some cloud benefits (e.g., moving from self-managed middleware to managed services, or adjusting runtime configurations). While you might containerize the monolith or move databases to RDS, replatforming does not imply a major architectural rewrite into multiple microservices with separate deployment lifecycles and CI/CD per service.
Repurchase means replacing the application with a different product, often SaaS (e.g., moving from a custom order system to a commercial platform). This can reduce operational burden but changes business processes and typically does not align with a goal of creating 12 custom microservices in containers. The question states an explicit target architecture rather than adopting a packaged solution.
Refactor (re-architect) is the correct strategy because the company intends to redesign the monolith into 12 independently deployable microservices, containerize them, and implement CI/CD with domain-driven boundaries. This requires significant code and architectural changes to enable independent scaling, faster releases, and cloud-native patterns (service discovery, decoupled messaging, per-service data ownership, and improved observability).
Core Concept: This question tests AWS migration strategies (commonly described as the “7 Rs”) and how to choose the right one based on the desired target architecture. The scenario describes deliberate modernization from a legacy on-premises Java monolith into containerized, independently deployable microservices with CI/CD and domain-driven boundaries.

Why the Answer is Correct: The company explicitly wants to split a 9-year-old monolith into 12 independently deployable microservices, run them in containers, and implement CI/CD with domain-driven boundaries. That is not a lift-and-shift or minor platform adjustment; it requires significant code and architecture changes such as service decomposition, API design, data ownership separation, deployment automation, observability, and resilience patterns. In AWS migration terminology, this is Refactor/Re-architect, which is used when an organization wants cloud-native benefits like agility, independent scaling, and faster releases.

Key AWS Features: A typical AWS implementation could use Amazon ECS or Amazon EKS for containers, Amazon ECR for container images, and CI/CD through AWS CodePipeline, CodeBuild, and CodeDeploy or equivalent third-party tooling. Microservices may use Application Load Balancer or Amazon API Gateway for routing, Amazon SQS, SNS, or EventBridge for decoupled communication, and CloudWatch plus AWS X-Ray for monitoring and tracing. Teams may also adopt separate data stores per service using services such as Amazon Aurora or DynamoDB.

Common Misconceptions: Replatform can sound plausible because it allows some optimization, but it does not usually involve breaking a monolith into multiple independently deployable microservices. Rehost is primarily about moving the application as-is with minimal changes. Repurchase means replacing the custom application with another product or SaaS offering, which does not match the stated goal of building a custom microservices architecture.
Exam Tips: When a question mentions microservices, independently deployable services, containers, CI/CD, or domain-driven design, Refactor/Re-architect is usually the best answer. Rehost and Replatform focus on faster migration with fewer code changes, while Refactor is chosen for deep architectural modernization and cloud-native outcomes.
A telehealth startup with a 6-person engineering team must prototype three versions of a real-time data-processing pipeline within 10 days and pivot based on results without committing to long-term infrastructure; which AWS Cloud capability most directly helps them build agility into their processes and architecture?
Avoiding overprovisioning relates to elasticity and right-sizing capacity when demand is uncertain. This is valuable for workloads with variable traffic, but it is primarily a capacity planning/cost optimization benefit. The question’s main challenge is building three different pipeline prototypes quickly with minimal operational burden, not sizing compute resources. You can avoid overprovisioning and still lack the service breadth needed to iterate rapidly.
Expanding into additional geographic Regions is a global infrastructure advantage used for latency reduction, disaster recovery, and regulatory needs. It does not directly help a 6-person team prototype three versions of a real-time processing pipeline in 10 days. Regional expansion can add complexity (data residency, replication, multi-Region failover) rather than accelerate experimentation for an early-stage prototype.
Access to a broad set of technologies and managed services most directly enables agility: the team can rapidly assemble and compare multiple pipeline designs (e.g., Kinesis + Lambda vs. Managed Flink vs. SQS/SNS patterns) without managing servers or long-lived infrastructure. This supports fast iteration, quick pivots, and reduced undifferentiated heavy lifting—exactly what a small team needs under a tight deadline.
Paying only when resources are consumed is the pay-as-you-go pricing model. It helps avoid upfront commitments and reduces financial risk during prototyping, but it is primarily a billing/pricing benefit rather than the most direct enabler of architectural/process agility. The question focuses on rapidly experimenting with multiple pipeline versions; that is better addressed by the breadth of managed services and rapid provisioning.
Core Concept: This question tests the AWS Cloud value proposition of agility through rapid experimentation, enabled by on-demand access to a broad portfolio of managed services (often framed as “increase speed and agility” and “stop spending money running and maintaining data centers”). For a real-time data-processing pipeline prototype, this commonly maps to services like Amazon Kinesis (Streams/Firehose), AWS Lambda, Amazon Managed Service for Apache Flink, Amazon SQS/SNS, AWS Step Functions, Amazon DynamoDB, Amazon OpenSearch Service, and Amazon S3.

Why the Answer is Correct: The startup must build three pipeline versions in 10 days and pivot quickly without long-term infrastructure commitments. The capability that most directly enables this is the ability to choose from many managed technologies and assemble multiple architectures quickly (serverless, managed streaming, managed databases, managed analytics) without procuring hardware, installing software, or building undifferentiated heavy lifting. This directly supports rapid prototyping, A/B testing architectures, and iterating based on results.

Key AWS Features: Managed services reduce operational overhead (patching, scaling, high availability). Serverless and managed streaming services can be provisioned in minutes, integrated via IAM, and observed via Amazon CloudWatch. Infrastructure as Code (AWS CloudFormation/CDK) further accelerates repeatable experiments. The AWS Well-Architected Framework’s Performance Efficiency and Operational Excellence pillars emphasize selecting managed services and experimenting frequently to evolve architectures.

Common Misconceptions: Option D (pay-as-you-go) and A (avoid overprovisioning) are cost/capacity benefits that can help, but they don’t most directly address “prototype three versions and pivot fast.” You could pay only for what you use yet still be slow if you must build and operate everything yourself. Option B (global expansion) is unrelated to rapid prototyping of a pipeline.

Exam Tips: When the stem emphasizes “prototype quickly,” “experiment,” “innovate,” “pivot,” and “small team,” look for answers about breadth of managed services and rapid provisioning (agility). When it emphasizes “reduce cost,” “only pay for what you use,” or “unknown demand,” then pay-as-you-go or elasticity/rightsizing options become more likely.
A fintech company runs a PostgreSQL workload on Amazon RDS in us-west-2 and must achieve an RTO under 60 seconds with zero data loss during an AZ outage, requiring AWS to automatically provision a primary DB instance and a same-Region standby in a different Availability Zone with continuous synchronous replication and automatic failover—what RDS feature should they enable?
Read replicas are primarily intended to offload read traffic and improve read scalability, not to serve as the main high-availability mechanism for AZ outages. In Amazon RDS for PostgreSQL, read replicas typically use asynchronous replication, which means there can be replication lag and potential data loss if the primary fails. Promotion of a read replica is a different operational model from Multi-AZ failover and does not match the standard automatic synchronous standby design described in the question. Read replicas are useful for reporting, analytics, and some disaster recovery scenarios, but not for this exact HA requirement. Therefore, they do not best satisfy the need for synchronous replication and automatic failover across AZs.
Blue/green deployment is a deployment and upgrade feature that creates a separate staging environment so changes can be tested before cutover. Its purpose is to reduce risk during engine upgrades, schema changes, and other planned modifications, not to provide continuous high availability during an AZ outage. Although it can help minimize downtime during planned transitions, it is not the same as maintaining a synchronously replicated standby for automatic failover. The question specifically asks for an always-on standby in another AZ managed by AWS, which is the Multi-AZ pattern. Therefore, blue/green deployment is not the correct feature here.
Multi-AZ deployment is the Amazon RDS high-availability feature that maintains a primary DB instance and a standby in a different Availability Zone within the same AWS Region. For Amazon RDS for PostgreSQL, data is synchronously replicated to the standby so committed transactions are preserved if the primary instance or its AZ fails. Amazon RDS automatically detects infrastructure or AZ failure conditions and promotes the standby without requiring manual intervention. The application continues to use the same RDS endpoint, although existing connections must reconnect after failover. This makes Multi-AZ the correct choice for same-Region HA with synchronous replication and automatic failover.
Reserved Instances are a pricing option that lets customers reduce costs by committing to a usage term, typically one or three years. They do not create a standby database, change replication behavior, or enable automatic failover. Purchasing Reserved Instances has no effect on recovery objectives such as RTO or RPO during an Availability Zone outage. This option addresses billing optimization rather than architecture or resilience. Therefore, it is unrelated to the technical requirement in the question.
Core Concept: This question tests Amazon RDS high availability for PostgreSQL within a single AWS Region, specifically meeting strict recovery objectives during an Availability Zone (AZ) outage. The key RDS capability is Multi-AZ deployment, which provides automated failover to a standby in a different AZ.

Why the Answer is Correct: The requirements explicitly call for: (1) same-Region standby in a different AZ, (2) continuous synchronous replication, (3) automatic failover, (4) zero data loss (RPO = 0), and (5) RTO under 60 seconds during an AZ outage. RDS Multi-AZ is designed for exactly this: it maintains a primary DB instance and a synchronous standby replica in another AZ. If the primary or its AZ becomes unavailable, RDS automatically fails over to the standby, typically completing within ~60–120 seconds depending on conditions, and it is the standard exam answer for “AZ outage + automatic failover + synchronous + zero data loss” in RDS.

Key AWS Features / Best Practices: With RDS for PostgreSQL Multi-AZ, writes are synchronously replicated to the standby (and in newer architectures, Multi-AZ DB clusters can provide even faster failover). Failover is managed by RDS (DNS endpoint remapping to the new primary), requiring the application to reconnect. Best practices include using the DB instance endpoint (not the underlying host), enabling application retry logic, and monitoring with Amazon CloudWatch and RDS events. Multi-AZ is for high availability, not read scaling.

Common Misconceptions: Read replicas are often mistaken for HA, but for PostgreSQL they are typically asynchronous and intended for read scaling and disaster recovery with potential replication lag (non-zero data loss) and manual promotion in many scenarios. Blue/green deployments are for safer updates with controlled cutover, not AZ-failure HA. Reserved Instances reduce cost, not downtime.
Exam Tips: When you see “AZ outage,” “automatic failover,” “synchronous replication,” and “RPO 0,” think RDS Multi-AZ. When you see “read scaling” or “cross-Region DR,” think read replicas (often async) or cross-Region strategies. Also distinguish HA (Multi-AZ) from deployment strategies (blue/green) and pricing constructs (Reserved Instances).
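Because a Multi-AZ failover remaps the endpoint's DNS and drops existing connections, the application-side piece is retry logic. A minimal, self-contained sketch of reconnect-with-exponential-backoff; the connect_fn callable stands in for a real database driver's connect call:

```python
import time

def connect_with_retry(connect_fn, attempts=5, base_delay=0.5):
    """Retry a connection attempt with exponential backoff.

    During an RDS Multi-AZ failover the same endpoint begins resolving to
    the promoted standby; retrying lets the app ride through the cutover.
    """
    for attempt in range(attempts):
        try:
            return connect_fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

# Simulated driver: fails twice (failover in progress), then succeeds.
state = {"calls": 0}
def fake_connect():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("endpoint not yet remapped")
    return "connection"

print(connect_with_retry(fake_connect, base_delay=0.01))  # prints "connection"
```

In a real client, connect_fn would wrap the driver's connect call against the RDS endpoint, and the caught exception type would match the driver's connection error.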
A fintech startup with 120,000 monthly active users is launching a customer portal backed by Amazon Cognito and needs to quickly strengthen its AWS security without adding paid third-party tools; given that they must avoid wildcard permissions and want actionable checks within 10 minutes, which two actions from the following would most directly improve security? (Choose two.)
AWS Artifact provides on-demand access to AWS compliance reports (e.g., SOC, ISO) and allows acceptance of certain agreements. It helps with audits and compliance evidence, but it does not directly harden the portal’s security posture or produce rapid, actionable technical findings within minutes. It’s valuable for governance, not immediate risk reduction for authentication/authorization.
Granting the broadest permissions to all IAM roles violates the principle of least privilege and directly conflicts with the requirement to avoid wildcard permissions. Overly permissive IAM policies increase blast radius and are a common root cause of security incidents. In exam scenarios, broad permissions are almost never a “security improvement” unless tightly scoped break-glass access is described.
“Running application code with AWS Cloud” is not a specific AWS service or a well-defined security action. If the intent were AWS CloudShell, AWS Lambda, or AWS Cloud9, those still would not be the most direct controls for strengthening portal security under the stated constraints. The question asks for immediate security improvements and rapid checks, which this option does not provide.
Enabling MFA in Amazon Cognito directly improves end-user account security for the customer portal by requiring an additional factor beyond passwords. Cognito user pools can enforce MFA (optional or required) and support TOTP and SMS. For fintech, MFA is a high-impact, fast-to-implement control that reduces credential stuffing and account takeover risk without third-party tools.
AWS Trusted Advisor security checks provide quick, prioritized recommendations to improve account security (e.g., exposed security groups, IAM credential hygiene, MFA on the root account). These checks are actionable and can be reviewed rapidly, aligning with the “within 10 minutes” requirement. It’s an AWS-native mechanism to identify common misconfigurations without deploying external tooling.
Core Concept: This question tests practical, rapid security hardening using native AWS capabilities. The key themes are identity security for a customer portal (Amazon Cognito) and fast, actionable security posture checks (AWS Trusted Advisor). It also emphasizes least privilege (avoid wildcard permissions) and “no paid third-party tools.”

Why the Answer is Correct: D (Enable MFA with Amazon Cognito) directly strengthens authentication for end users of the portal. MFA reduces account takeover risk, which is critical for fintech. Cognito supports MFA (SMS and TOTP) and can enforce it per user pool policies, making it a high-impact control that can be implemented quickly without external tools. E (Use AWS Trusted Advisor security checks) provides actionable security findings quickly—often within minutes—such as overly permissive security groups, IAM use, and other account-level risks. This aligns with the requirement for “actionable checks within 10 minutes” and avoids third-party tooling.

Key AWS Features / Best Practices:
- Amazon Cognito MFA: Configure user pool MFA settings (optional/required), choose factors (TOTP via authenticator apps is generally preferred over SMS where possible), and combine with strong password policies and adaptive authentication features where applicable.
- Trusted Advisor Security checks: Review and remediate findings like open security groups, IAM access key rotation, and MFA on the root account. While some checks are tied to Support plans, the security checks are a canonical AWS-native way to get quick, prioritized guidance.
- Least privilege: The prompt explicitly rejects wildcard permissions; remediation from Trusted Advisor and Cognito policy controls supports least-privilege and strong identity boundaries.

Common Misconceptions: AWS Artifact (A) is about compliance reports and agreements, not immediate security hardening. Granting broad permissions (B) is the opposite of least privilege and increases blast radius. “Running application code with AWS Cloud” (C) is not a concrete service/action and does not directly address identity hardening or rapid security checks.

Exam Tips: When you see “quickly improve security” and “no third-party tools,” look for native controls like MFA, IAM least privilege, and AWS advisory/security posture services (Trusted Advisor, Security Hub, Inspector—depending on the question). Also map requirements to services: customer authentication hardening strongly points to Cognito MFA, and “actionable checks within minutes” points to Trusted Advisor security checks.
A startup needs a compute option for a data-processing function that only runs when new records arrive, must automatically scale from 0 to 10,000 invocations per hour, can run each invocation for up to 15 minutes, and uses pay-per-request pricing without managing servers; which AWS service provides this serverless compute capability?
AWS Lambda is a serverless compute service that runs code in response to events and automatically scales with demand, including scaling down to zero when idle. It supports pay-per-request pricing (per invocation and duration) and requires no server management. A key exam clue is the maximum execution time: Lambda functions can run for up to 15 minutes, exactly matching the requirement.
AWS CloudFormation is an infrastructure-as-code service used to model and provision AWS resources using templates. It does not provide compute execution for event-driven processing and does not run code per request. While CloudFormation can deploy Lambda functions and related resources, it is not the runtime service that executes the data-processing function.
AWS Elastic Beanstalk is a managed application platform that simplifies deploying web applications by provisioning resources like EC2 instances, Auto Scaling groups, and load balancers. Although it can scale, it does not scale to zero and is not pay-per-request; you pay for the underlying resources. It also requires managing application environments rather than purely event-driven invocations.
Elastic Load Balancing distributes incoming network or application traffic across multiple targets (such as EC2 instances, containers, or IPs) to improve availability and fault tolerance. It is not a compute service and does not execute code. While an ALB can route requests to Lambda targets in some architectures, ELB itself is not the serverless compute capability described.
Core Concept: This question tests knowledge of AWS serverless compute—specifically event-driven execution, automatic scaling, and pay-per-request pricing without server management.

Why the Answer is Correct: AWS Lambda is the AWS service designed for running code only when triggered by events (such as new records arriving in a stream, queue, or database). It scales automatically based on the number of incoming events and can scale down to zero when idle, matching the requirement to run only when new records arrive. Lambda uses a pay-per-request model (requests plus execution duration), and customers do not provision or manage servers. The requirement that each invocation can run up to 15 minutes directly matches Lambda’s maximum execution timeout (15 minutes).

Key AWS Features: Lambda integrates natively with many event sources (e.g., Amazon S3 object events, Amazon SQS messages, Amazon Kinesis or DynamoDB Streams, EventBridge schedules/rules, API Gateway). Concurrency scaling allows handling large bursts of invocations; by default, Lambda scales rapidly and can be tuned using reserved concurrency to cap usage or provisioned concurrency to reduce cold starts for latency-sensitive workloads. Operationally, Lambda emits logs to Amazon CloudWatch Logs and metrics (invocations, errors, duration, throttles) to CloudWatch. IAM execution roles control access to other AWS services, aligning with least-privilege best practices.

Common Misconceptions: Some may confuse “serverless” with managed platforms like Elastic Beanstalk, but Beanstalk still provisions and manages EC2 instances under the hood and does not scale to zero. CloudFormation is infrastructure-as-code, not compute. Elastic Load Balancing distributes traffic but does not execute code.

Exam Tips: When you see: event-driven, scale from 0, pay-per-request, no servers to manage, and a 15-minute maximum runtime, think AWS Lambda. If runtime exceeds 15 minutes or you need long-running workflows, consider alternatives like AWS Step Functions with ECS/Fargate or EC2, but for this exact constraint set, Lambda is the canonical answer.
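A record-driven Lambda function is just a handler invoked per event batch, so it can be exercised locally as plain Python. A minimal sketch; the event shape mimics an SQS batch (Records/body fields follow the SQS event format), and the order fields inside the body are hypothetical:

```python
import json

def handler(event, context=None):
    """Process each new record in the triggering event.

    With an SQS or Kinesis event source mapping, Lambda invokes this
    function only when records arrive and scales concurrency with load.
    """
    results = []
    for record in event.get("Records", []):
        payload = json.loads(record["body"])
        # Hypothetical processing step: total the order amount.
        results.append({"order_id": payload["order_id"],
                        "total": payload["qty"] * payload["unit_price"]})
    return {"processed": len(results), "results": results}

# Local invocation with an SQS-shaped test event.
event = {"Records": [
    {"body": json.dumps({"order_id": "A1", "qty": 3, "unit_price": 9.5})},
]}
print(handler(event))
```

Keeping the handler free of AWS SDK calls at the edges like this also makes it straightforward to unit test before deployment.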
A company with 1,200 employees uses an external SAML 2.0 identity provider (for example, Okta) and needs users to sign in with their existing corporate credentials to access the AWS Management Console and AWS CLI across 7 AWS accounts using centralized single sign-on, without creating IAM users or new passwords in those accounts; which AWS service meets this requirement?
AWS Directory Service provides managed directory capabilities (AWS Managed Microsoft AD, AD Connector, Simple AD) and can help integrate AWS resources with Active Directory. However, it is not the primary service for centralized SSO to multiple AWS accounts’ consoles and CLI using an external SAML IdP like Okta. It may be part of an identity ecosystem, but it does not replace IAM Identity Center for multi-account AWS access management.
Amazon Cognito is designed for authenticating and authorizing end users of applications (customer identity), supporting OIDC/SAML federation for app sign-in and issuing tokens for app access. It is not intended to provide workforce SSO into the AWS Management Console across multiple AWS accounts or to manage permission sets/role assignments in AWS Organizations. For employee access to AWS accounts, IAM Identity Center is the correct fit.
AWS IAM Identity Center (formerly AWS SSO) is the AWS service built for centralized workforce SSO to the AWS Management Console and AWS CLI across multiple AWS accounts. It integrates with AWS Organizations and supports external SAML 2.0 identity providers (e.g., Okta), enabling users to sign in with corporate credentials. It uses permission sets to provision IAM roles in target accounts, avoiding IAM users and long-term credentials.
AWS Resource Access Manager (AWS RAM) is used to share AWS resources (such as subnets, Transit Gateways, Route 53 Resolver rules) across accounts within an organization or with external accounts. It does not provide authentication, SSO, federation, or console/CLI sign-in capabilities. While RAM helps with multi-account architectures, it is unrelated to centralized identity and access sign-on requirements described here.
Core Concept: This question tests federated authentication and centralized access management across multiple AWS accounts without creating IAM users. The AWS-native service for workforce single sign-on (SSO) to the AWS Management Console and AWS CLI, integrated with an external SAML 2.0 identity provider, is AWS IAM Identity Center.

Why the Answer is Correct: AWS IAM Identity Center (formerly AWS SSO) provides centralized SSO for multiple AWS accounts in an AWS Organization. It can connect to an external SAML 2.0 IdP (such as Okta) so employees authenticate with existing corporate credentials. Users then obtain access to assigned permission sets (role-based access) in each of the 7 AWS accounts. This avoids creating IAM users or managing separate passwords in each account, while supporting both console access and CLI access (via IAM Identity Center credential process / SSO token-based sessions).

Key AWS Features: IAM Identity Center integrates with AWS Organizations for multi-account management, allowing administrators to assign users/groups to accounts and permission sets centrally. Permission sets map to IAM roles that are automatically created in target accounts, enabling least-privilege access. It supports SAML 2.0 federation with external IdPs and provides a unified user portal for account/role selection. For CLI, it supports short-lived credentials and centralized sign-in flows, aligning with security best practices (no long-term access keys for end users).

Common Misconceptions: Some assume AWS Directory Service is required for SSO, but it primarily provides managed Microsoft AD/AD Connector and is not the central multi-account AWS console/CLI SSO solution. Amazon Cognito is for customer identity (app users) rather than workforce access to AWS accounts. AWS RAM is for sharing resources across accounts, not authentication.
Exam Tips: When you see “external SAML 2.0 IdP,” “AWS Management Console and AWS CLI,” “multiple AWS accounts,” and “no IAM users,” think IAM Identity Center + AWS Organizations + permission sets. If the scenario is workforce access to AWS (not app sign-in), Cognito is usually not the right answer.
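For the CLI side, AWS CLI v2 supports IAM Identity Center profiles directly. A sketch of the relevant ~/.aws/config entries; the session name, start URL, Region, account ID, and role name below are placeholders, not real values:

```ini
# ~/.aws/config — IAM Identity Center (SSO) profile; all values are placeholders
[sso-session my-company]
sso_start_url = https://my-company.awsapps.com/start
sso_region = us-east-1
sso_registration_scopes = sso:account:access

[profile dev-account]
sso_session = my-company
sso_account_id = 111122223333
sso_role_name = DeveloperAccess
region = us-west-2
```

After `aws sso login --sso-session my-company`, commands such as `aws s3 ls --profile dev-account` run with short-lived credentials obtained through the Identity Center sign-in flow, with no IAM user access keys involved.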
When a media streaming company uses an AWS Organizations organization trail to record both management and data events across 4 AWS accounts, sets a 90-day retention period, and delivers logs to a single SSE-S3–encrypted Amazon S3 bucket and to Amazon CloudWatch Logs for real-time alerts, which AWS Cloud design principle does this implementation primarily apply?
Correct. The AWS Cloud design principle "Enable traceability" focuses on collecting, centralizing, and analyzing logs so teams can understand who performed an action, when it occurred, and what resources were affected. An AWS Organizations organization trail is specifically designed to capture activity consistently across multiple AWS accounts, which strengthens governance and auditability in multi-account environments. Recording both management and data events increases visibility into both control-plane changes and resource-level access, which is essential for investigations and compliance. Delivering logs to Amazon S3 for durable storage and to CloudWatch Logs for near-real-time alerting directly supports traceability by enabling both historical analysis and immediate detection.
Incorrect. "Use serverless compute architectures" is a design principle about reducing operational complexity by using managed compute services such as AWS Lambda, AWS Fargate, or other event-driven managed runtimes. The scenario does not describe an application compute model or a decision to replace servers with managed execution environments. Instead, it focuses on logging, retention, encryption, and alerting for audit purposes. Those capabilities align with observability and security traceability, not serverless compute design.
Incorrect. "Perform operations as code" refers to defining infrastructure, operational procedures, and remediation workflows in code so they can be versioned, tested, and automated. While the organization trail and related resources could certainly be deployed using infrastructure as code, the question asks which principle is primarily being applied by the implementation itself. The implementation's main outcome is centralized recording and monitoring of account activity, not automation of operational changes. Therefore, this is better classified as traceability rather than operations as code.
Incorrect. "Go global in minutes" is about leveraging AWS's global infrastructure to deploy workloads rapidly across multiple geographic Regions and serve users worldwide with low latency. Nothing in the scenario emphasizes international expansion, multi-Region application delivery, or globally distributed user access. The mention of multiple AWS accounts refers to organizational governance boundaries, not global deployment strategy. As a result, this option does not match the primary design principle demonstrated.
Core Concept: This scenario is primarily about centralized logging, auditing, and monitoring using AWS CloudTrail (organization trail), Amazon S3 log archival with encryption, and Amazon CloudWatch Logs for near-real-time detection. These are core security and compliance capabilities that support governance and incident response.

Why the Answer is Correct: The AWS Well-Architected Framework design principle "Enable traceability" (Security pillar) emphasizes collecting, centralizing, and analyzing logs and metrics so actions can be traced, investigated, and alerted on. An AWS Organizations organization trail records API activity across multiple accounts, including management events (control-plane actions such as IAM, EC2, and S3 configuration changes) and data events (object-level S3 activity, Lambda invocations, etc.). Delivering logs to a central S3 bucket with a defined retention period supports auditability and forensic investigations, while streaming to CloudWatch Logs enables real-time alerting and automated response. Together, these choices directly implement traceability: you can answer "who did what, when, from where, and to which resource" across the organization.

Key AWS Features: (1) CloudTrail organization trails for multi-account coverage and centralized governance; (2) management events plus data events for deeper visibility into resource access; (3) S3 as a durable, centralized log archive; (4) SSE-S3 encryption at rest to meet baseline security controls; (5) CloudWatch Logs integration to create metric filters and alarms and trigger actions (often via EventBridge/Lambda) for suspicious activity; and (6) retention settings to align with compliance requirements (note: CloudTrail itself does not enforce a 90-day retention in S3; retention is typically enforced via S3 lifecycle policies and CloudWatch Logs retention settings).
Common Misconceptions: Some may pick "Perform operations as code" because Organizations and trails are often deployed via IaC, but the question focuses on logging and alerting outcomes, not deployment automation. Others may associate "serverless" with CloudWatch, but CloudWatch is a monitoring service, not a compute architecture. "Go global in minutes" is unrelated to audit logging.

Exam Tips: When you see CloudTrail + centralized S3 + CloudWatch Logs/alarms across accounts, map it to Security pillar traceability. Also remember: organization trails are the standard pattern for multi-account governance, and real-time detection typically involves CloudWatch Logs/EventBridge rather than S3-only archival.
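Since CloudTrail does not itself enforce the 90-day retention in S3, that period is typically implemented as an S3 lifecycle rule on the log bucket. A minimal sketch of such a rule follows; the rule ID is illustrative, and the prefix assumes CloudTrail's default AWSLogs/ delivery path.

```json
{
  "Rules": [
    {
      "ID": "ExpireCloudTrailLogsAfter90Days",
      "Filter": { "Prefix": "AWSLogs/" },
      "Status": "Enabled",
      "Expiration": { "Days": 90 }
    }
  ]
}
```

This could be applied with `aws s3api put-bucket-lifecycle-configuration --bucket <log-bucket> --lifecycle-configuration file://lifecycle.json`; a matching retention setting on the CloudWatch Logs log group covers the real-time stream.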
A startup plans to launch a web app using 2 t3.micro Amazon EC2 instances, 100 GB of EBS gp3 storage, and 1 TB of data transfer out per month. Before deployment, they want a tool that can forecast expected monthly charges without accessing any existing bills or historical usage. What can the AWS Pricing Calculator do in this situation?
Correct. AWS Pricing Calculator is built to estimate monthly (or annual) costs for a proposed architecture using published AWS prices. You can input 2 t3.micro instances, 100 GB of gp3 EBS, and 1 TB of data transfer out, choose a Region and purchase options, and receive a forecasted monthly estimate without needing any existing bills or historical usage data.
Incorrect. Calculating historical costs from the past 12 months is the role of AWS Cost Explorer (and related billing reports), which requires an AWS account with billing data already generated. AWS Pricing Calculator does not ingest or analyze historical usage; it is a planning and estimation tool for future workloads.
Incorrect. AWS Pricing Calculator does not provide internal pricing strategy analysis (e.g., how AWS sets prices or margin structures). It simply applies publicly available pricing to the configuration you enter. Exam questions that mention “internal pricing strategies” are typically distractors; AWS exposes rates and calculators, not internal pricing rationale.
Incorrect. Viewing and downloading monthly bills is done in the AWS Billing and Cost Management console (Bills/Invoices), and can be supplemented with Cost and Usage Reports (CUR). AWS Pricing Calculator is not connected to your account’s billing artifacts and cannot display actual invoices or bills.
Core Concept: This question tests knowledge of AWS cost-estimation tools, specifically the AWS Pricing Calculator, which is designed to estimate costs for planned (future) AWS usage without requiring access to an AWS account, billing data, or historical usage.

Why the Answer is Correct: The startup has a defined architecture plan (2 t3.micro EC2 instances, 100 GB gp3 EBS, and 1 TB data transfer out per month) and wants to forecast expected monthly charges before deployment. AWS Pricing Calculator can model these inputs and produce an estimated monthly cost by applying public AWS pricing for the selected Region, instance type, storage type and size, and outbound data transfer. Because it is a forward-looking estimator, it fits the requirement of "without accessing any existing bills or historical usage."

Key AWS Features: AWS Pricing Calculator supports building estimates across services (e.g., EC2, EBS, data transfer), selecting Regions, purchase options (On-Demand, Savings Plans, Reserved Instances), and assumptions such as hours per month and storage capacity. It can generate a shareable estimate and export it for budgeting. For EC2, it can incorporate OS licensing, tenancy, and utilization assumptions; for EBS gp3 it accounts for provisioned storage (and additional IOPS/throughput if configured); for data transfer it estimates internet egress based on the entered GB/TB.

Common Misconceptions: Learners often confuse the Pricing Calculator with AWS Cost Explorer or the Bills page. Cost Explorer analyzes historical spend and usage but requires an AWS account with billing data. Bills and Invoices show actual charges after usage occurs. The calculator also does not reveal internal pricing strategy; it uses published rates.

Exam Tips: When a question says "before deployment," "forecast," or "estimate planned resources," choose AWS Pricing Calculator. When it says "analyze past spend," "trends," or "historical usage," choose Cost Explorer (and possibly Budgets). When it says "view/download invoices," choose Billing console features. Map the tool to the time horizon: planned (calculator) versus actual/historical (billing tools).
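The forward-looking arithmetic the calculator performs can be sketched in a few lines. All unit prices below are hypothetical placeholders (real rates vary by Region and change over time); the point is that an estimate needs only the planned configuration, not any billing history.

```python
# Rough monthly cost sketch for the startup's planned workload.
# All unit prices are HYPOTHETICAL placeholders -- pull current rates
# from the AWS Pricing Calculator or the public pricing pages.

HOURS_PER_MONTH = 730  # common estimation assumption (24 * 365 / 12)

# Hypothetical per-unit rates in USD -- NOT real published prices
T3_MICRO_HOURLY = 0.0104   # per instance-hour
GP3_PER_GB_MONTH = 0.08    # per GB-month of gp3 storage
EGRESS_PER_GB = 0.09       # per GB of internet data transfer out

def monthly_estimate(instances: int, ebs_gb: int, egress_gb: int) -> float:
    """Sum compute, storage, and egress for one month at the rates above."""
    compute = instances * T3_MICRO_HOURLY * HOURS_PER_MONTH
    storage = ebs_gb * GP3_PER_GB_MONTH
    transfer = egress_gb * EGRESS_PER_GB
    return round(compute + storage + transfer, 2)

# 2 t3.micro instances, 100 GB gp3, 1 TB (1024 GB) egress
print(monthly_estimate(2, 100, 1024))
```

Notice that nothing here reads past invoices or usage data: the whole estimate is derived from the planned configuration and published rates, which is exactly the Pricing Calculator's model.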
A fintech startup undergoing a Q3 PCI DSS Level 1 audit must, within the next 24 hours and without opening a support case, obtain AWS-provided PCI compliance reports (such as the Attestation of Compliance (AOC) and Responsibility Summary) that validate the operating effectiveness of AWS security controls across 5 production accounts in 3 Regions; how should the company obtain these reports?
Incorrect. AWS Support is not the standard mechanism for retrieving AWS compliance reports such as PCI DSS AOCs and responsibility summaries. The question explicitly says the company must obtain the reports without opening a support case, which directly rules this option out. Even aside from that constraint, AWS provides these documents through a self-service portal so customers can access them on demand. For exam purposes, requests for downloadable compliance reports should point you to AWS Artifact rather than Support.
Correct. AWS Artifact is the official self-service portal where AWS customers can access compliance reports and agreements, including PCI DSS documents such as the Attestation of Compliance (AOC) and the PCI Responsibility Summary. It is designed specifically for audit and compliance use cases, allowing immediate download without opening a support case or waiting for manual fulfillment. This makes it the best fit for a time-sensitive PCI DSS audit requirement that spans multiple AWS accounts and Regions. The reports in Artifact describe AWS’s compliance posture and the AWS side of the shared responsibility model, which is exactly the type of AWS-provided documentation the auditor is requesting.
Incorrect. AWS Security Hub is a security posture management service that aggregates findings and evaluates resources against security standards and controls. It helps customers monitor their own environment for compliance-related issues, but it does not distribute formal AWS compliance documents like the PCI DSS AOC or Responsibility Summary. Those documents are auditor-issued or AWS-published reports intended for external audit evidence and are accessed through AWS Artifact. A common exam trap is confusing compliance monitoring tools with compliance documentation repositories.
Incorrect. A technical account manager can provide guidance and help coordinate with AWS resources, but a TAM is not the delivery channel for official compliance reports. Many AWS customers do not even have a TAM, and relying on a person would not satisfy the requirement for immediate, self-service retrieval. The question’s constraints strongly indicate that the company should use an AWS console-based service rather than contacting AWS personnel. AWS Artifact is the purpose-built service for this exact need.
Core Concept: This question tests knowledge of AWS compliance reporting and where AWS publishes third-party audit artifacts. The primary service is AWS Artifact, AWS's self-service portal for on-demand access to AWS compliance reports (e.g., the PCI DSS AOC) and agreements.

Why the Answer is Correct: For a PCI DSS Level 1 audit, the company needs AWS-provided evidence demonstrating the operating effectiveness of AWS-managed controls (the "AWS side" of the shared responsibility model). AWS Artifact is specifically designed to let customers immediately download compliance reports such as the PCI DSS Attestation of Compliance (AOC) and the PCI Responsibility Summary. It is available without opening a support case and can be accessed quickly (within minutes), meeting the "within 24 hours" constraint. These reports are not account- or Region-specific in the way service configuration evidence is; they are AWS-wide audit reports that customers can use across multiple accounts and Regions.

Key AWS Features: AWS Artifact provides:
- Reports: downloadable compliance reports (PCI DSS AOC, SOC, ISO, etc.) issued by independent auditors.
- Agreements: the ability to review and accept certain compliance-related agreements.
- Centralized access: a single place to retrieve AWS audit documentation for use in external audits.
This aligns with AWS compliance best practices and the shared responsibility model: AWS provides evidence for AWS-managed controls; customers provide evidence for their own configurations and processes.

Common Misconceptions: Security Hub is often mistaken for a compliance-reporting portal, but it aggregates security findings and can map controls to standards; it does not provide auditor-issued AOCs. Similarly, "contact Support" or a TAM may sound like the fastest path, but the question explicitly forbids opening a support case and requires immediate self-service access.
Exam Tips: When you see “download AWS compliance reports,” “AOC,” “SOC reports,” “ISO reports,” or “responsibility summary,” the answer is almost always AWS Artifact. If the question is about continuous compliance posture, findings, or control monitoring, think Security Hub/AWS Config instead. Also note constraints like “no support case” strongly indicate a self-service console feature (Artifact).
Study period: 2 months
I studied the fundamentals separately and then drilled the questions endlessly. It's a trustworthy app.
Study period: 2 months
I have very similar questions on my exam, and some of them were nearly identical to the original questions.
Study period: 2 months
I'll use it again next time.
Study period: 2 months
Would vouch for these practice questions!!!
Study period: 1 month
The questions are well organized by domain, which I liked. I felt uneasy about taking the exam after only watching lectures, and this filled the gap nicely.

