
A solutions lead at a media company must, within the next hour, find vetted AWS reference architectures and design diagrams for a multi-tier serverless video-metadata API targeting 10,000 requests per second and multi-Region disaster recovery (RPO 15 minutes, RTO 1 hour); where should they look for examples of AWS Cloud solution designs?
AWS Marketplace is a digital catalog for finding, buying, and deploying third-party software, SaaS, and data products that run on AWS. While some listings may include deployment guides or architecture diagrams, Marketplace is not the primary, vetted repository for AWS reference architectures. It’s oriented toward procurement and licensing of solutions rather than quickly locating official AWS design patterns for serverless scale and multi-Region DR.
AWS Service Catalog helps organizations create and manage catalogs of approved IT services (CloudFormation templates, products, portfolios) for internal use. It’s excellent for governance and standardized provisioning, but it does not provide public AWS reference architectures or design diagrams. If the question were about distributing preapproved architectures within a company, Service Catalog could fit, but not for discovering AWS examples.
AWS Architecture Center is the correct place to find vetted reference architectures, solution patterns, and design diagrams across workloads and industries. It aggregates official AWS guidance (including Well-Architected best practices) and provides examples relevant to serverless, high-scale APIs, and multi-Region disaster recovery objectives like RPO/RTO targets. It is purpose-built for quickly locating architecture examples and diagrams.
AWS Trusted Advisor is an account analysis tool that provides recommendations across cost optimization, performance, security, fault tolerance, service limits, and operational excellence. It helps identify risks and improvements in an existing AWS environment, but it does not function as a library of reference architectures or design diagrams. It’s useful after you have an architecture deployed, not for finding example designs.
Core Concept: This question tests where to find vetted AWS reference architectures, design patterns, and diagrams. The key resource is the AWS Architecture Center, which curates official and partner-validated architectural guidance, including the AWS Well-Architected Framework, reference architectures, and architecture diagrams.

Why the Answer is Correct: The solutions lead needs examples quickly (within the next hour) for a multi-tier serverless API at high scale (10,000 RPS) and multi-Region disaster recovery targets (RPO 15 minutes, RTO 1 hour). The AWS Architecture Center is specifically designed for this: it provides solution patterns and reference architectures across industries (including media) and across requirements like serverless, high availability, and disaster recovery. It is the most direct place to find “AWS reference architectures and design diagrams” without having to procure software or set up internal catalogs.

Key AWS Features / Guidance You’d Expect There: In the Architecture Center and related official guidance, you’ll commonly find patterns such as API Gateway + Lambda + DynamoDB/Aurora Serverless for serverless APIs, caching with CloudFront/ElastiCache, asynchronous ingestion with SQS/SNS/EventBridge, and multi-Region DR approaches (active-active or active-passive) using Route 53 health checks/failover, DynamoDB global tables or Aurora Global Database, and cross-Region replication. For RPO 15 minutes, you’d look for near-real-time replication options; for RTO 1 hour, you’d look for automated failover runbooks and infrastructure-as-code.

Common Misconceptions: AWS Marketplace can contain “reference architectures” in listings, but it’s primarily for purchasing third-party software and solutions, not curated AWS design diagrams. AWS Service Catalog is for distributing internally approved products/blueprints within an organization, not for discovering public AWS reference architectures. Trusted Advisor provides account-specific best-practice checks (cost, security, fault tolerance, etc.), not architecture diagrams.

Exam Tips: When a question asks “where to find AWS reference architectures, solution designs, and diagrams,” default to AWS Architecture Center (and often the Well-Architected Framework). If the question is about buying third-party solutions, think Marketplace. If it’s about internal standardized deployments, think Service Catalog. If it’s about account optimization checks, think Trusted Advisor.
A university research lab uses a single AWS account with 150 IAM users across three projects and must perform a quarterly compliance check to verify, for each user, the date of the last password change and whether any active access keys are older than 90 days, and the auditor requires a downloadable CSV listing these details for every user; which AWS service or tool will meet this requirement?
IAM Access Analyzer helps identify unintended access to AWS resources by analyzing resource-based policies (for example, S3 bucket policies, KMS key policies, IAM role trust policies). It is useful for detecting public or cross-account access paths, but it does not generate a per-IAM-user CSV showing last password change dates or access key rotation/age. Therefore, it does not meet the auditor’s specific reporting requirement.
AWS Artifact provides on-demand access to AWS compliance documentation (such as SOC reports, ISO certifications) and allows management of some agreements. It is focused on evidence of AWS’s compliance posture, not on reporting your account’s IAM user credential details. It cannot produce a CSV listing each IAM user’s password change date and access key age, so it does not satisfy the requirement.
The IAM credential report is an account-level CSV report that lists all IAM users and key credential metadata, including password_last_changed and access_key_last_rotated fields (for up to two access keys). This directly supports quarterly compliance checks and provides a downloadable CSV artifact for auditors. It is the standard AWS mechanism for auditing IAM user credential status and rotation hygiene at scale.
AWS Audit Manager helps continuously collect evidence from AWS services and map it to compliance frameworks, producing audit-ready reports. While powerful for governance programs, it is not the simplest or most direct tool for generating a per-user CSV of IAM password change dates and access key ages. For this specific, narrowly defined IAM credential inventory requirement, the IAM credential report is the correct tool.
Core Concept: This question tests knowledge of AWS IAM account-level reporting for credential hygiene and compliance evidence. Specifically, it focuses on producing an auditable, exportable report that includes password and access key age details for all IAM users in an account.

Why the Answer is Correct: The IAM credential report is purpose-built for exactly this requirement: it generates a downloadable CSV that lists every IAM user and key credential metadata, including the date of the last password change and the age/status of access keys. Because the auditor requires a CSV for every user (150 users) and the lab must check quarterly for passwords and access keys older than 90 days, the credential report provides a single, standardized artifact that can be downloaded and reviewed or filtered (e.g., in Excel) to identify noncompliant users and keys.

Key AWS Features: IAM credential reports are generated at the account level and include fields such as password_last_changed, password_enabled, access_key_1_active, access_key_1_last_rotated, access_key_2_active, and access_key_2_last_rotated. This enables straightforward determination of whether any active access key is older than 90 days by comparing the “last_rotated” date to the current date. The report is delivered in CSV format, meeting the auditor’s “downloadable CSV” requirement. It can be generated from the IAM console or via AWS CLI/API (GenerateCredentialReport and GetCredentialReport), which is useful for quarterly automation.

Common Misconceptions: Services like IAM Access Analyzer and AWS Audit Manager are often associated with “auditing,” but they do not directly produce a per-user CSV listing password change dates and access key ages. AWS Artifact is also compliance-related, but it provides AWS compliance reports (about AWS), not your account’s IAM user credential status.
Exam Tips: When you see requirements like “list all IAM users,” “last password change,” “access key age/rotation,” and “downloadable CSV,” immediately think “IAM credential report.” Access Analyzer is for resource access policies and external access findings, while Audit Manager is for broader evidence collection against frameworks, not a simple IAM credential inventory export.
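The quarterly 90-day check can be scripted once the CSV is downloaded. Below is a minimal Python sketch: the two sample rows are invented for illustration, but the column names (user, password_last_changed, access_key_1_active, access_key_1_last_rotated) match the credential report fields named above.

```python
import csv
import io
from datetime import datetime, timedelta, timezone

# Invented sample rows in the shape of an IAM credential report
# (a real report is fetched via GenerateCredentialReport/GetCredentialReport).
SAMPLE_REPORT = """\
user,password_last_changed,access_key_1_active,access_key_1_last_rotated
alice,2024-01-10T08:00:00+00:00,true,2023-09-01T08:00:00+00:00
bob,2024-05-02T08:00:00+00:00,true,2024-04-20T08:00:00+00:00
"""

def stale_key_users(report_csv: str, now: datetime, max_age_days: int = 90):
    """Return users whose active access key 1 is older than max_age_days."""
    cutoff = now - timedelta(days=max_age_days)
    stale = []
    for row in csv.DictReader(io.StringIO(report_csv)):
        if row["access_key_1_active"] == "true":
            rotated = datetime.fromisoformat(row["access_key_1_last_rotated"])
            if rotated < cutoff:
                stale.append(row["user"])
    return stale

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(stale_key_users(SAMPLE_REPORT, now))  # alice's key is ~9 months old
```

A real run would also check the access_key_2_* columns and the password_last_changed field against the same cutoff.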
A retail analytics team stores about 6 TB of CSV and JSON files across 7 Amazon S3 buckets in us-east-1 and needs a fully managed service that can automatically discover and classify sensitive data such as names and 16-digit credit card numbers, generate findings per object, and support scheduled scans without deploying any infrastructure; which AWS service should they use?
AWS IAM Access Analyzer helps identify unintended access to AWS resources by analyzing IAM policies, bucket policies, and access points (e.g., whether an S3 bucket is publicly accessible or shared with another account). It does not inspect object contents, cannot classify PII/PCI within CSV/JSON files, and does not generate per-object sensitive data findings. It’s an access governance tool, not a data discovery/classification service.
Amazon GuardDuty is a managed threat detection service that analyzes AWS CloudTrail events, VPC Flow Logs, DNS logs, and certain S3 data events to detect suspicious activity (e.g., credential compromise, anomalous API calls, malware indicators). While it produces security findings, it does not scan S3 object contents to discover names or credit card numbers, and it is not intended for sensitive data classification jobs.
Amazon Inspector is a vulnerability management service focused on identifying software vulnerabilities and unintended network exposure for EC2 instances, container images in ECR, and (in supported modes) Lambda functions. It assesses CVEs, package vulnerabilities, and configuration risks. It does not perform content-based inspection of S3 objects for PII/PCI, nor does it provide scheduled sensitive data discovery across S3 buckets.
Amazon Macie is purpose-built for discovering and protecting sensitive data in Amazon S3. It can automatically and on a schedule scan selected buckets, inspect object contents (including CSV/JSON), and use managed data identifiers to detect PII and PCI data such as names and 16-digit credit card numbers. Macie generates detailed findings per object and integrates with EventBridge/Security Hub for workflow automation—fully managed with no infrastructure to deploy.
Core Concept: This question tests knowledge of AWS managed data security services for Amazon S3—specifically automated discovery and classification of sensitive data (PII/PCI) within objects, with findings and scheduled scans.

Why the Answer is Correct: Amazon Macie is the fully managed service designed to discover, classify, and protect sensitive data stored in Amazon S3. It uses machine learning and pattern matching to identify data such as names and credit card numbers (e.g., 16-digit PANs) in common formats like CSV and JSON. Macie can evaluate multiple buckets, generate per-object findings (including the bucket, object key, and detected data types), and supports scheduled or recurring jobs without requiring any infrastructure deployment. This aligns exactly with the requirement: “automatically discover and classify sensitive data,” “generate findings per object,” and “support scheduled scans” across several S3 buckets.

Key AWS Features: Macie provides S3 inventory and automated sensitive data discovery, managed data identifiers (built-in for common PII/PCI), and custom data identifiers (regex/keywords) for organization-specific patterns. Findings are sent to Amazon EventBridge and visible in the Macie console; they can be routed to ticketing/SIEM workflows. Macie integrates with AWS Organizations for multi-account enablement and supports scoping via S3 bucket selection and job definitions (one-time or scheduled). This supports a least-ops, serverless security posture consistent with AWS Well-Architected Security Pillar guidance.

Common Misconceptions: GuardDuty is often confused here because it also produces “findings,” but it detects threats (malicious activity/anomalies) rather than classifying sensitive data inside objects. Inspector focuses on vulnerability management for compute and container images, not S3 data classification. IAM Access Analyzer helps analyze access policies and external access paths, not content inspection.
Exam Tips: When you see “S3,” “PII/PCI,” “discover/classify,” “sensitive data,” and “scheduled scans/jobs,” the answer is almost always Amazon Macie. If the question instead emphasizes “threat detection” or “anomalous API calls,” think GuardDuty; if it emphasizes “vulnerabilities/CVEs,” think Inspector; if it emphasizes “public/cross-account access analysis,” think IAM Access Analyzer.
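For a sense of what a scheduled classification job looks like, here is a sketch of the request body you would assemble. The account ID, job name, and bucket names are hypothetical, and the field names follow the general shape of the Macie2 CreateClassificationJob API; verify against current Macie/boto3 documentation before relying on them.

```python
# Sketch of a scheduled Macie classification job request. All identifiers
# below are hypothetical; field names mirror the Macie2
# CreateClassificationJob API shape and should be checked against the docs.

def scheduled_macie_job(account_id: str, buckets: list[str]) -> dict:
    return {
        "jobType": "SCHEDULED",
        "name": "weekly-pii-scan",
        "s3JobDefinition": {
            "bucketDefinitions": [
                {"accountId": account_id, "buckets": buckets}
            ]
        },
        # Recurring schedule; Macie also supports ONE_TIME jobs.
        "scheduleFrequency": {"weeklySchedule": {"dayOfWeek": "MONDAY"}},
        # Managed data identifiers already cover common PII/PCI such as
        # names and 16-digit card numbers; custom identifiers add regexes.
    }

job = scheduled_macie_job("111122223333", ["analytics-raw", "analytics-curated"])
```

The returned dict would be passed to the Macie client's create_classification_job call; no infrastructure is deployed—Macie runs the scan itself.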
A media analytics startup running 40 microservices on Amazon ECS in us-east-1 and eu-west-1 needs a centrally managed way to store 12 third-party API keys and a database password encrypted with a KMS customer managed key and allow tasks to retrieve these credentials programmatically at runtime with fine-grained IAM permissions and automatic rotation every 30 days; which AWS service or feature should they use?
AWS Encryption SDK is a client-side cryptographic library, not a managed secret storage service. It can help an application encrypt and decrypt data by using envelope encryption with AWS KMS, but it does not provide a central repository for secrets. It also does not include built-in secret rotation, lifecycle management, or native runtime retrieval patterns for ECS tasks. Using it here would require the startup to build and operate its own storage and rotation solution.
AWS Security Hub is used to aggregate and prioritize security findings from AWS services and partner tools. It helps with visibility into security posture, compliance checks, and alerting, but it does not store application credentials. ECS tasks cannot use Security Hub as a runtime secret source for API keys or database passwords. It also has no feature for rotating secrets every 30 days.
AWS Secrets Manager is the correct choice because it is a managed service specifically designed to store sensitive values such as API keys, tokens, and database credentials. It encrypts secrets at rest using AWS KMS, including customer managed keys, which satisfies the requirement for KMS CMK protection. ECS tasks can retrieve secrets programmatically at runtime by using task IAM roles with least-privilege permissions such as secretsmanager:GetSecretValue on specific secret ARNs. Secrets Manager also supports automatic rotation on a defined schedule, including every 30 days, which directly matches the operational requirement in the question.
AWS Artifact is a portal for accessing AWS compliance reports and managing agreements such as business associate addendums. It is not an application runtime service and has no capability to store, encrypt, or return secrets to ECS tasks. Artifact does not integrate with IAM for per-secret retrieval permissions in the way required by the scenario. It also provides no secret rotation functionality.
Core Concept: This question tests managed secret storage for applications, encryption with AWS KMS customer managed keys (CMKs), fine-grained IAM access from compute (ECS tasks), and built-in automatic rotation. The AWS service purpose-built for this is AWS Secrets Manager.

Why the Answer is Correct: AWS Secrets Manager provides centralized storage for sensitive values like third-party API keys and database passwords, encrypts secrets at rest using AWS KMS (including a customer managed key), and exposes programmatic retrieval via AWS SDK/CLI/API at runtime. ECS tasks can retrieve secrets securely using an IAM task role with least-privilege permissions (e.g., secretsmanager:GetSecretValue for specific secret ARNs). Secrets Manager also supports automatic rotation on a schedule (e.g., every 30 days) using AWS Lambda rotation functions, which is a direct requirement in the prompt.

Key AWS Features:
- KMS CMK encryption: Configure each secret to use a specific customer managed key; enforce key policies and grants.
- Fine-grained IAM: Restrict access per secret, per environment, or per service using IAM policies and resource-based policies on secrets.
- Runtime retrieval: ECS integrates well with Secrets Manager; tasks can fetch secrets at startup or on demand using the SDK, and you can also inject secrets as environment variables via ECS secret options.
- Automatic rotation: Native rotation scheduling with Lambda, including AWS-provided templates for common databases; rotation every 30 days is a standard pattern.
- Multi-Region considerations: Because workloads run in us-east-1 and eu-west-1, you typically create secrets in each Region (or use Secrets Manager multi-Region secrets for replication) to reduce latency and improve resilience while keeping centralized governance.
Common Misconceptions: Some may think “encryption” implies AWS Encryption SDK, but that is a client-side cryptography library and does not provide centralized secret lifecycle management or rotation. Others may confuse Security Hub with secret management; it aggregates security findings rather than storing credentials. AWS Artifact is for compliance reports and agreements, not runtime secrets.

Exam Tips: When you see requirements like “store credentials centrally,” “retrieve at runtime with IAM,” “encrypt with KMS CMK,” and especially “automatic rotation,” default to AWS Secrets Manager. If rotation is not required and the need is simple parameter storage, AWS Systems Manager Parameter Store could be considered, but rotation is the key differentiator here.
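The least-privilege pattern for the ECS task role can be sketched concretely. The secret ARN below is hypothetical; the policy grants only secretsmanager:GetSecretValue on the named secrets, which is the fine-grained access the scenario calls for.

```python
import json

# Hypothetical secret ARN for one of the 12 third-party API keys.
SECRET_ARN = (
    "arn:aws:secretsmanager:us-east-1:111122223333:"
    "secret:prod/payments/api-key-AbCdEf"
)

def task_role_secret_policy(secret_arns: list[str]) -> dict:
    """Least-privilege IAM policy: read-only access to specific secrets."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "secretsmanager:GetSecretValue",
                "Resource": secret_arns,
            }
        ],
    }

policy = task_role_secret_policy([SECRET_ARN])
print(json.dumps(policy, indent=2))

# At runtime the task would retrieve the value via boto3 (not executed here):
#   boto3.client("secretsmanager").get_secret_value(SecretId=SECRET_ARN)
```

Attaching this policy to the ECS task role (not the task execution role used for image pulls) keeps each service scoped to only the secrets it needs.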
A municipal records office archives 80 TB of compliance PDFs that must be retained for 7 years and are accessed by external auditors only twice per year, but when an audit occurs the files must be retrievable in milliseconds at the lowest possible storage cost; which Amazon S3 storage class should be used?
Amazon S3 on Outposts is for storing objects on-premises on AWS Outposts racks to meet data residency, local processing, or low-latency local access requirements. It is not an archival storage class and generally won’t be the lowest-cost option for 7-year retention of 80 TB. The question does not indicate an on-premises requirement, so this is not a fit.
Amazon S3 Glacier Instant Retrieval is designed for long-term, rarely accessed data that still requires immediate (millisecond) retrieval. This aligns perfectly with compliance archives accessed only during audits but needing fast access when requested. It offers lower storage cost than S3 Standard, with retrieval charges that are acceptable given the twice-per-year access pattern.
Amazon S3 Standard provides millisecond access and is ideal for frequently accessed data, but it has the highest storage cost among the listed options. For 80 TB retained for 7 years and accessed only twice per year, S3 Standard would be unnecessarily expensive. It meets the latency requirement but fails the “lowest possible storage cost” requirement.
Amazon S3 Intelligent-Tiering automatically moves objects between frequent and infrequent access tiers based on usage, which is helpful when access patterns are unknown or change over time. However, it includes a per-object monitoring/automation fee and is not always the cheapest for clearly predictable, very infrequent access. For known archival use with millisecond retrieval, Glacier Instant Retrieval is typically more cost-effective.
Core Concept: This question tests Amazon S3 storage class selection based on access pattern, retrieval latency, retention duration, and cost optimization. The key requirement is “milliseconds retrieval” while keeping storage cost as low as possible for data that is rarely accessed (twice per year) but must be retained for 7 years.

Why the Answer is Correct: Amazon S3 Glacier Instant Retrieval is purpose-built for long-lived, rarely accessed data that still needs immediate (millisecond) access when requested. It provides the low storage cost characteristics of the Glacier family while avoiding the minutes-to-hours restore delays associated with other Glacier options (e.g., Flexible Retrieval or Deep Archive). For compliance PDFs accessed only during audits, this matches the pattern: very infrequent access, but when access happens, auditors expect near-real-time retrieval.

Key AWS Features: S3 Glacier Instant Retrieval offers millisecond retrieval, high durability (11 9s), and integrates with standard S3 APIs (GET/PUT/HEAD, lifecycle policies, IAM, bucket policies). You can use S3 Lifecycle rules to transition objects into Glacier Instant Retrieval after ingestion, and optionally apply S3 Object Lock (WORM) for compliance retention controls. Cost-wise, you pay lower storage than S3 Standard, with retrieval charges when audits occur—appropriate for twice-yearly access.

Common Misconceptions: Many assume S3 Intelligent-Tiering is always best for unknown access patterns. However, Intelligent-Tiering adds a monitoring/automation fee per object and is most valuable when access frequency is unpredictable. Here, the pattern is clearly “rare access,” so a Glacier class is typically cheaper. Another misconception is choosing S3 Standard for performance; while it is millisecond access, it is not the lowest storage cost for 7-year archival.
Exam Tips: When you see “archive/retain for years” + “rarely accessed” + “must be retrieved in milliseconds,” think S3 Glacier Instant Retrieval. If the question instead said “can wait minutes/hours,” then Glacier Flexible Retrieval or Deep Archive would be candidates. Always map: access frequency (rare), retrieval time objective (milliseconds), and cost goal (lowest storage cost that still meets latency).
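The lifecycle transition mentioned above can be sketched as follows. The bucket and prefix names are hypothetical; the structure mirrors the S3 PutBucketLifecycleConfiguration API, where Glacier Instant Retrieval is the storage class string "GLACIER_IR".

```python
# Sketch of an S3 Lifecycle rule that moves ingested compliance PDFs into
# Glacier Instant Retrieval. Bucket/prefix are hypothetical; the dict
# shape follows the S3 PutBucketLifecycleConfiguration API.

LIFECYCLE_CONFIG = {
    "Rules": [
        {
            "ID": "compliance-pdfs-to-glacier-ir",
            "Status": "Enabled",
            "Filter": {"Prefix": "compliance/"},
            # Transition shortly after upload; objects remain retrievable
            # in milliseconds, unlike Flexible Retrieval or Deep Archive.
            "Transitions": [{"Days": 0, "StorageClass": "GLACIER_IR"}],
        }
    ]
}

# Applied with (not executed here):
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="records-archive", LifecycleConfiguration=LIFECYCLE_CONFIG)
```

For the 7-year retention requirement, the same bucket could additionally use S3 Object Lock in compliance mode with a 7-year retention period.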
A fintech company preparing for a SOC 2 Type II review needs, within 15 minutes and without opening a support case, to download AWS-issued compliance reports (for example, ISO/IEC 27001 certificates and PCI DSS Attestation of Compliance v4.0) and third-party attestations to share with an external auditor across 8 business units operating in us-east-1, eu-west-1, and ap-southeast-2; which AWS service should they use to centrally self-serve these documents on demand?
AWS Artifact is the correct service because it is AWS’s self-service portal for compliance documentation and audit artifacts. Customers can use it to download AWS-issued reports and certifications, including ISO/IEC 27001 certificates, SOC reports, and PCI DSS Attestation of Compliance documents, without opening a support case. That directly satisfies the requirement for rapid, centralized access to official compliance evidence for an external auditor. It is also appropriate across multiple business units because access can be managed centrally with IAM and organizational governance controls.
AWS Trusted Advisor provides real-time guidance and best-practice checks across cost optimization, performance, security, fault tolerance, and service limits. While it includes security-related checks, it does not provide downloadable AWS compliance reports or third-party attestations. It helps improve your AWS environment posture but is not an evidence repository for SOC 2/ISO/PCI audit documentation.
AWS Health Dashboard (including Personal Health Dashboard) provides visibility into AWS service health, planned maintenance, and account-specific events that may impact resources. It is useful for operational awareness and incident management, not for retrieving compliance certifications, audit reports, or attestations. It cannot be used to download ISO certificates or PCI DSS AoCs for auditors.
AWS Config records and evaluates configuration changes of AWS resources and can assess compliance against rules (managed or custom) to detect drift and enforce governance. It is valuable for demonstrating control effectiveness and continuous compliance of your own configurations, but it does not distribute AWS-issued compliance reports or third-party audit attestations like ISO/PCI documents.
Core Concept: This question tests knowledge of the AWS service used to self-serve official compliance documentation for audits. AWS Artifact is the portal customers use to access AWS-issued compliance reports and certifications, such as ISO/IEC 27001 certificates, SOC reports, and PCI DSS Attestation of Compliance documents.

Why the Answer is Correct: AWS Artifact is the correct choice because it provides on-demand, self-service access to compliance reports without requiring a support case. That directly matches the requirement to obtain documents within 15 minutes for an external auditor. The references to multiple business units and Regions are distractors, because Artifact is the centralized service for retrieving AWS compliance evidence regardless of where workloads run.

Key AWS Features: AWS Artifact includes Artifact Reports, which contains AWS compliance reports and certifications, and Artifact Agreements, which supports reviewing and accepting certain legal agreements. Access can be controlled with IAM so audit, risk, and compliance teams can retrieve documents in a governed way. It is specifically intended to streamline audit readiness and evidence collection for frameworks such as SOC, ISO, and PCI.

Common Misconceptions: Trusted Advisor is not a repository of official compliance reports; it provides best-practice recommendations and checks. AWS Health Dashboard is for service health and account events, not audit evidence. AWS Config helps assess the compliance of customer resource configurations, but it does not provide AWS-issued certifications or attestation documents.

Exam Tips: If a question mentions downloading AWS compliance reports, certifications, SOC reports, ISO certificates, or PCI documents without opening a support case, think AWS Artifact. If the question is about checking your own resource configurations against rules, think AWS Config instead. If it is about operational recommendations or outages, think Trusted Advisor or AWS Health respectively.
A fintech startup must deploy a payment API so that all compute and storage remain within the U.S. Midwest and the chosen location is no more than 500 miles from Columbus, Ohio, while using fully managed AWS services and providing at least two isolated fault domains in that location. Which feature of the AWS global infrastructure enables the team to choose a specific geographic location (for example, us-east-2 in Ohio) to meet this requirement?
Scalability refers to the ability to increase or decrease resources to meet demand (for example, Auto Scaling, serverless scaling, or managed database scaling). While important for a payment API, scalability does not determine where workloads run geographically. You can scale in any Region, but scalability is not the AWS global infrastructure feature that lets you choose us-east-2 (Ohio) to meet location and residency constraints.
Global footprint describes AWS’s worldwide presence of Regions and Availability Zones. This is the feature that enables customers to select a specific Region (such as us-east-2 in Ohio) to meet geographic proximity, latency, and data residency requirements. Once the Region is chosen, services are deployed within that Region, and you can then use multiple AZs there to achieve at least two isolated fault domains.
Availability relates to designing systems that remain operational through failures, commonly achieved by deploying across multiple Availability Zones and using managed services with Multi-AZ capabilities. Although the scenario requires two isolated fault domains (AZs), the question specifically asks what enables choosing a geographic location like us-east-2. That selection is a Region/global footprint concept, not “availability” itself.
Performance is about responsiveness and low latency, often improved by choosing a closer Region, using caching (CloudFront, ElastiCache), or optimizing networking. However, performance is an outcome/benefit, not the AWS infrastructure feature that provides the ability to choose a specific geographic location. The mechanism for selecting Ohio is choosing an AWS Region enabled by AWS’s global footprint.
Core Concept: This question tests understanding of AWS Global Infrastructure concepts: Regions, Availability Zones (AZs), and how AWS’s worldwide presence (its “global footprint”) lets customers select specific geographic locations for data residency, latency, and regulatory needs.

Why the Answer is Correct: The requirement is to keep compute and storage within the U.S. Midwest and within 500 miles of Columbus, Ohio, using fully managed services and at least two isolated fault domains. The AWS feature that enables choosing a specific geographic location such as us-east-2 (Ohio) is AWS’s global footprint—AWS operates multiple Regions around the world, and customers explicitly select the Region where resources are deployed. By choosing the us-east-2 Region, the startup ensures services are provisioned in that geographic area, supporting residency/sovereignty and proximity constraints.

Key AWS Features: Within a chosen Region, AWS provides multiple Availability Zones, which are physically separate, isolated fault domains with independent power, cooling, and networking. Deploying across at least two AZs in us-east-2 satisfies the “two isolated fault domains” requirement while remaining in the same geographic Region. Many fully managed services (for example, Amazon RDS Multi-AZ, Amazon DynamoDB, Amazon S3, AWS Lambda, and Amazon API Gateway) are Region-scoped and can be configured for Multi-AZ resilience (or are inherently multi-AZ), aligning with high availability and fault isolation best practices.

Common Misconceptions: “Availability” might seem correct because AZs provide fault isolation, but the question asks what enables choosing a specific geographic location (a Region), not how to achieve redundancy within it. “Performance” and “Scalability” are benefits of AWS infrastructure, but they do not describe the mechanism for selecting a geographic deployment location.
Exam Tips: If a question mentions selecting a location like us-east-2, think “Region” and the AWS global footprint. If it mentions “two isolated fault domains,” think “Availability Zones” within a Region. For residency/sovereignty constraints, the primary control is Region selection; for resilience inside that Region, use Multi-AZ designs and managed services that support AZ redundancy.

References: AWS Global Infrastructure documentation (Regions and Availability Zones) and AWS Well-Architected Reliability pillar (fault isolation and multi-AZ design).
An online ticketing startup sees traffic surge from 800 requests per minute during normal hours to 24,000 requests per minute for 90 minutes when major concerts go on sale, and they want the platform to automatically add and remove compute capacity in real time so they provision only what is needed without overbuying; which advantage of cloud computing addresses this requirement?
“Go global in minutes” refers to deploying applications in multiple AWS Regions worldwide quickly (e.g., using Route 53, CloudFront, multi-Region architectures) to reduce latency and improve global availability. The scenario is not about geographic expansion; it is about handling short-lived demand spikes by scaling compute up and down automatically. Therefore, this option does not best address the requirement described.
“Stop guessing capacity” is the cloud advantage that directly matches automatic, real-time scaling to meet variable demand. With services like EC2 Auto Scaling, ALB request-based scaling, ECS/Fargate scaling, or Lambda’s inherent scaling, you can add capacity during the 90-minute surge and remove it afterward. This prevents overprovisioning for peak and underprovisioning during spikes, aligning capacity to actual usage.
“Benefit from massive economies of scale” means AWS can offer lower prices because it buys infrastructure at scale and spreads costs across many customers. While this can reduce overall cost, it does not specifically describe the operational capability of automatically adding/removing compute capacity in response to demand. The question is about elasticity and right-sizing, not AWS’s purchasing power and unit-cost reductions.
“Trade fixed expense for variable expense” describes moving from upfront capital expenditures (buying servers) to pay-as-you-go operational expenses. This is related to cost optimization, but the question emphasizes automatically scaling capacity in real time to avoid overbuying. The more precise advantage for dynamic scaling and right-sizing is “Stop guessing capacity,” not the CapEx-to-OpEx shift.
Core Concept: The question tests the AWS Cloud value proposition of elasticity and the ability to right-size capacity dynamically. In AWS exam terms, this maps to the cloud advantage commonly phrased as “Stop guessing capacity,” which is about scaling resources up or down based on actual demand rather than forecasting and overprovisioning.
Why the Answer is Correct: The startup experiences highly variable, spiky traffic (800 to 24,000 requests per minute for a short 90-minute window). They want compute capacity to be added and removed automatically in real time so they only provision what is needed. This is exactly the “stop guessing capacity” advantage: you can use elastic services to match supply to demand, reducing both performance risk (underprovisioning) and waste (overprovisioning).
Key AWS Features: In practice, this requirement is typically implemented with Amazon EC2 Auto Scaling (often behind an Elastic Load Balancer) using scaling policies based on metrics like CPU utilization, request count per target (ALB), or custom Amazon CloudWatch metrics. Alternatively, serverless options like AWS Lambda or container platforms like Amazon ECS/Fargate can scale automatically. The architectural principle is elasticity (scale out/in) combined with monitoring and automation (CloudWatch alarms, target tracking, scheduled scaling for known events).
Common Misconceptions: “Trade fixed expense for variable expense” is about shifting from CapEx to OpEx and pay-as-you-go billing; while related, the question emphasizes real-time automatic scaling and avoiding overbuying capacity, which is more directly “stop guessing capacity.” “Benefit from massive economies of scale” refers to AWS’s lower unit costs due to aggregated demand, not dynamic scaling. “Go global in minutes” is about rapid geographic expansion using global infrastructure, not handling short-lived demand spikes.
Exam Tips: When you see keywords like “traffic surge,” “automatically add and remove capacity,” “real time,” and “only what is needed,” think elasticity and Auto Scaling, and map it to the cloud advantage “Stop guessing capacity.” If the prompt focuses instead on budgeting (CapEx vs. OpEx), then “Trade fixed expense for variable expense” is usually the best match.
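The elasticity arithmetic behind “stop guessing capacity” can be sketched as a tiny target-tracking-style calculation. This is a simplified illustration, not the actual EC2 Auto Scaling algorithm, and the per-instance throughput figure is an assumption invented for the example:

```python
import math

# Simplified target-tracking sketch: size the fleet so each instance
# handles roughly `target_rpm_per_instance` requests per minute,
# clamped between the group's min and max size.
# (Illustrative numbers only -- not the real EC2 Auto Scaling algorithm.)

def desired_capacity(current_rpm, target_rpm_per_instance=2000,
                     min_size=2, max_size=40):
    needed = math.ceil(current_rpm / target_rpm_per_instance)
    return max(min_size, min(max_size, needed))

print(desired_capacity(800))     # 2  -> normal traffic, floor at min_size
print(desired_capacity(24000))   # 12 -> on-sale surge scales out
print(desired_capacity(800))     # 2  -> scales back in after the spike
```

The point of the sketch is right-sizing in both directions: capacity follows the metric up during the 90-minute surge and back down afterward, so the startup never pays for peak capacity around the clock.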
An engineering firm with 12 AWS accounts under a single AWS Organizations setup needs to centrally manage sign-in and account-specific permission sets for 350 employees so users can access multiple accounts with single sign-on without creating IAM users in each account; which AWS service or feature should the company use to meet this requirement?
AWS IAM Access Analyzer helps identify unintended access to AWS resources by analyzing IAM policies, resource policies, and access paths. It is used for security posture management (e.g., detecting publicly accessible S3 buckets or cross-account access). It does not provide workforce single sign-on, user portals, or centralized permission set assignment across AWS accounts, so it cannot meet the requirement for centralized sign-in and account access management.
AWS Secrets Manager is designed to store, manage, and rotate secrets such as database credentials, API keys, and other sensitive values. While it supports fine-grained access control via IAM, it is not an identity provider and does not provide SSO, user lifecycle management, or permission sets across multiple AWS accounts. It is unrelated to centrally managing employee sign-in to multiple AWS accounts.
AWS IAM Identity Center is the correct service for centrally managing workforce access across multiple AWS accounts in an AWS Organizations environment. It provides a single sign-on portal and lets administrators create permission sets and assign them to users/groups per account, automatically provisioning corresponding IAM roles in each member account. This avoids creating IAM users in each account and enables scalable, centralized access management for hundreds of employees.
AWS STS issues temporary security credentials used for federation, role assumption, and cross-account access. Although IAM Identity Center and many federation patterns rely on STS under the hood, STS alone does not provide a centralized user portal, directory integration, or the permission set management model required here. You would still need an identity management layer (like IAM Identity Center) to meet the stated SSO and centralized administration needs.
Core Concept: This question tests centralized workforce identity and access management across multiple AWS accounts in AWS Organizations, specifically enabling single sign-on (SSO) and managing account-specific permission sets without creating IAM users in each account.
Why the Answer is Correct: AWS IAM Identity Center (formerly AWS SSO) is purpose-built to centrally manage user access to multiple AWS accounts and applications. With IAM Identity Center integrated with AWS Organizations, administrators can create and assign permission sets (collections of IAM policies) to users and groups for specific member accounts. Users then sign in once to the IAM Identity Center portal and can access multiple accounts via role-based access, eliminating the need to provision IAM users in each account.
Key AWS Features: IAM Identity Center provides a central directory (or can connect to an external IdP such as Azure AD or Okta via SAML 2.0), a user portal for SSO, and permission sets that are automatically provisioned as IAM roles in target accounts. It supports fine-grained assignments (user/group + permission set + account), session duration controls, MFA integration (depending on the identity source), and centralized auditing via AWS CloudTrail. This aligns with AWS Well-Architected Security pillar best practices: use federation, least privilege, and centralized identity management.
Common Misconceptions: AWS STS is often associated with “temporary credentials” and cross-account access, but STS is a building block, not a centralized SSO and permission-set management solution. Access Analyzer is for analyzing resource access and policies, not for user sign-in. Secrets Manager stores and rotates secrets; it does not manage workforce SSO.
Exam Tips: When you see “multiple accounts,” “AWS Organizations,” “central management,” “permission sets,” and “single sign-on without IAM users,” the exam almost always points to IAM Identity Center. Remember the terminology: permission sets are an IAM Identity Center construct that become IAM roles in each account. If the question mentions external corporate directories, think IAM Identity Center + external IdP integration.
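The assignment model can be sketched in a few lines: each (group, permission set, account) assignment results in an IAM role being provisioned in that member account. All names below are hypothetical, and in practice IAM Identity Center manages this mapping for you rather than hand-rolled code:

```python
# Sketch of the IAM Identity Center assignment model: each
# (principal group, permission set, member account) assignment becomes
# an IAM role provisioned in that account. All names are hypothetical.

ASSIGNMENTS = [
    # (principal group, permission set, member account ID)
    ("Developers", "PowerUserAccess", "111111111111"),
    ("Developers", "PowerUserAccess", "222222222222"),
    ("Auditors",   "ReadOnlyAccess",  "111111111111"),
]

def roles_per_account(assignments):
    """Group the permission sets to be provisioned into each account."""
    plan = {}
    for group, permission_set, account in assignments:
        plan.setdefault(account, set()).add(permission_set)
    return plan

plan = roles_per_account(ASSIGNMENTS)
for account, permission_sets in sorted(plan.items()):
    print(account, sorted(permission_sets))
# 111111111111 ['PowerUserAccess', 'ReadOnlyAccess']
# 222222222222 ['PowerUserAccess']
```

Note what is absent from the model: no per-account IAM users. Users sign in once through the portal and assume the provisioned roles, which is exactly why Identity Center scales to 12 accounts and 350 employees.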
A retail company needs to allow web access from the corporate office range 198.51.100.0/24 and a partner VPN range 203.0.113.16/28, and it wants to specify these ranges directly using CIDR notation when configuring access rules; which AWS network services or features support entering an IP address range in CIDR block format? (Choose two.)
Correct. Security groups are virtual firewalls attached to ENIs/instances and support inbound/outbound rules where the source/destination can be an IPv4 or IPv6 CIDR block (e.g., 198.51.100.0/24). They are stateful, so return traffic is automatically permitted. Security groups are the most common way to allow corporate office or partner IP ranges to reach web ports like 80/443.
Incorrect. An AMI is an image template used to launch EC2 instances (OS + software configuration). It does not provide network access control or rule configuration and has no concept of allowing IP ranges in CIDR notation. Network access is controlled by security groups, NACLs, routing, and optionally WAF/Firewall Manager—not by the AMI itself.
Correct. Network ACLs operate at the subnet boundary and use ordered rules that explicitly allow or deny traffic based on protocol, port range, and source/destination CIDR blocks. Because NACLs are stateless, you must account for both directions of traffic (including ephemeral ports). They are useful for subnet-wide restrictions and explicit deny requirements.
Incorrect. AWS Budgets is a cost management service used to set budget thresholds and alerts for spending or usage. It does not configure network access rules and does not accept CIDR blocks for traffic filtering. Any “rules” in Budgets relate to notifications and budget conditions, not IP-based access control.
Incorrect. Amazon EBS provides block storage volumes for EC2. Access to EBS is controlled through attachment to instances and IAM permissions for API actions, not by network CIDR-based rules. While EBS supports encryption, snapshots, and policies, it does not offer a feature to allow/deny inbound web access from specific CIDR ranges.
Core Concept: This question tests which AWS networking controls let you define allowed/denied traffic sources using IP ranges in CIDR notation. In AWS, the primary constructs for IP-based filtering at the VPC level are security groups (stateful, instance/ENI-level) and network ACLs (stateless, subnet-level). Both accept CIDR blocks such as 198.51.100.0/24 and 203.0.113.16/28.
Why the Answer is Correct: Security groups support specifying sources (inbound rules) and destinations (outbound rules) using IPv4 and IPv6 CIDR blocks. For example, you can allow TCP/443 from 198.51.100.0/24 and 203.0.113.16/28 directly in the rule source field. Network ACLs also support CIDR-based rules: you can create allow/deny entries with a rule number, a protocol/port range, and a source/destination CIDR. This is commonly used for subnet-wide controls, including explicit denies (which security groups do not support).
Key AWS Features / Best Practices:
- Security groups are stateful: return traffic is automatically allowed, reducing rule complexity for typical web access patterns.
- Network ACLs are stateless: you must allow both inbound and outbound traffic (including ephemeral ports for return traffic) when needed.
- Prefer security groups for most workload access control; use NACLs for broad subnet guardrails, explicit denies, or compliance-driven subnet boundaries.
- Security group rules can also reference other security groups, and managed prefix lists are supported where applicable, but the question specifically asks about entering CIDR notation.
Common Misconceptions: Some candidates confuse IAM policies or WAF with CIDR entry. AWS WAF also supports IP sets with CIDRs, but it is not among the options here. Others may think AMIs or EBS have “access rules”; they do not provide network-layer CIDR filtering.
Exam Tips: When you see “configure access rules” and “CIDR block format,” think VPC traffic filtering: security groups (instance/ENI) and network ACLs (subnet). Remember the key distinction: SG = stateful allow rules; NACL = stateless allow/deny with ordered rule numbers. This helps quickly eliminate non-networking services in multiple-choice questions.
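A quick way to reason about which client IPs fall inside the scenario's CIDR blocks is Python's standard ipaddress module. This sketch mimics only a rule's source match; real security-group and NACL evaluation also considers protocol and port, and NACLs additionally apply ordered allow/deny rules:

```python
import ipaddress

# The two ranges from the scenario, exactly as you would enter them in a
# security group or network ACL rule's source field.
ALLOWED_CIDRS = [
    ipaddress.ip_network("198.51.100.0/24"),   # corporate office
    ipaddress.ip_network("203.0.113.16/28"),   # partner VPN (.16-.31)
]

def source_allowed(client_ip):
    """Return True if the client IP matches any allowed CIDR block.
    (Source-match sketch only -- real SG/NACL evaluation also checks
    protocol and port, and NACLs use ordered allow/deny rules.)"""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in cidr for cidr in ALLOWED_CIDRS)

print(source_allowed("198.51.100.42"))  # True  (inside the office /24)
print(source_allowed("203.0.113.20"))   # True  (inside the partner /28)
print(source_allowed("203.0.113.32"))   # False (just past the /28's last host, .31)
```

The /28 example is worth internalizing for the exam: 203.0.113.16/28 leaves 4 host bits, so it covers exactly 203.0.113.16 through 203.0.113.31.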