AWS Certified DevOps Engineer - Professional (DOP-C02)

Practice Test #2

Simulate the real exam with 75 questions and a 180-minute time limit. Study with AI-verified answers and detailed explanations.

75 questions · 180 minutes · Passing score: 750/1000

AI-Powered

Triple AI-Verified Answers & Explanations

All answers are cross-validated by three leading AI models to ensure the highest accuracy, with detailed per-choice explanations and in-depth question analysis.

GPT Pro
Claude Opus
Gemini Pro
Per-choice explanations
In-depth question analysis
Three-model consensus accuracy

Practice Questions

Question 1

A company operates 10 Amazon OpenSearch Service domains with Auto-Tune enabled across 2 AWS Regions and needs to visualize every Auto-Tune action (for example, memory or shard rebalancing adjustments) on an Amazon CloudWatch dashboard at 1-minute resolution for the last 24 hours; which solution will meet this requirement?

Correct. OpenSearch Auto-Tune emits events that EventBridge can match. Lambda can parse each event and publish a CloudWatch custom metric (e.g., Count=1) with dimensions (Domain, ActionType, Region). CloudWatch dashboards can graph these metrics at 1-minute resolution over the last 24 hours, providing a clear visualization of every Auto-Tune action across domains/Regions.

Incorrect. CloudTrail management events record control-plane API calls (e.g., CreateDomain, UpdateDomain), not the internal Auto-Tune actions taken by the service. Sending CloudTrail to CloudWatch Logs and using metric filters is useful for auditing API activity, but it won’t reliably capture every Auto-Tune memory/shard adjustment event, so it won’t meet the requirement.

Incorrect. EventBridge can trigger actions, but “changing the status of a CloudWatch alarm” is not how alarms work; alarm state is evaluated from metric data against thresholds. Even if you forced some workflow, an alarm shows state (OK/ALARM/INSUFFICIENT_DATA), not a per-action, 1-minute resolution time series of all Auto-Tune actions for visualization.

Incorrect. CloudTrail data events are for data-plane operations on supported resources (e.g., S3 object-level, Lambda invoke, DynamoDB item-level) and are not applicable to OpenSearch Auto-Tune actions. This option also misapplies CloudTrail for operational tuning events. Therefore, it cannot provide a complete, 1-minute resolution visualization of Auto-Tune actions.

Question Analysis

Core Concept: This question tests event-driven monitoring and near-real-time visualization. Amazon OpenSearch Service Auto-Tune emits events (configuration changes and tuning actions). To visualize "every Auto-Tune action" at 1-minute resolution on a CloudWatch dashboard, you need to convert those discrete events into CloudWatch metrics with 1-minute granularity.

Why the Answer is Correct: EventBridge is the native way to capture OpenSearch Auto-Tune events as they occur. By routing those events to a Lambda function, you can publish a CloudWatch custom metric (PutMetricData) each time an Auto-Tune action happens (optionally with dimensions like DomainName, Region, ActionType). CloudWatch dashboards can graph custom metrics at 1-minute periods and show the last 24 hours. This approach works across 10 domains and 2 Regions by deploying the rule/Lambda per Region (or centralizing via event bus forwarding) and using a multi-Region dashboard.

Key AWS Features:
- Amazon EventBridge rules for service events (Auto-Tune notifications).
- AWS Lambda for lightweight event transformation/enrichment.
- CloudWatch custom metrics with dimensions and 1-minute period visualization.
- CloudWatch dashboards support cross-account/cross-Region widgets (as applicable) and time range selection (last 24 hours).

Common Misconceptions: CloudTrail is for API activity auditing, not operational service events like Auto-Tune actions. Even if you could capture something in logs, metric filters are derived from log ingestion and are not the intended/most reliable mechanism for OpenSearch Auto-Tune event visualization. Also, alarms represent threshold state, not a per-action time series.

Exam Tips: When you see "visualize events/actions" at a specific resolution, think: Event source (EventBridge) → transform (Lambda) → metric (CloudWatch custom metric) → dashboard. Use CloudTrail primarily for governance/audit of API calls, and CloudWatch alarms for threshold-based alerting, not for counting/plotting every discrete operational action.
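To make the pattern concrete, here is a minimal sketch of the Lambda handler, assuming the EventBridge event's detail carries domain-name and action-type fields (the field names below are assumptions for illustration, not the documented Auto-Tune event schema):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

def handler(event, context):
    # Field names in `detail` are assumed for illustration;
    # inspect a real Auto-Tune event to confirm the schema.
    detail = event.get("detail", {})
    cloudwatch.put_metric_data(
        Namespace="Custom/OpenSearchAutoTune",
        MetricData=[{
            "MetricName": "AutoTuneAction",
            "Dimensions": [
                {"Name": "Domain", "Value": detail.get("domainName", "unknown")},
                {"Name": "ActionType", "Value": detail.get("actionType", "unknown")},
                {"Name": "Region", "Value": event.get("region", "unknown")},
            ],
            "Value": 1,       # one data point per Auto-Tune action
            "Unit": "Count",
        }],
    )
```

Because each action publishes Count=1, a dashboard widget summing this metric at a 1-minute period directly plots every action per domain and Region.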

Question 2 (Choose two.)

A logistics enterprise operates a multi-OU AWS Organizations setup across two AWS Regions with 68 member accounts and has acquired a fintech startup that uses 11 standalone AWS accounts with separate billing; the platform team must centralize administration under a single management account while retaining break-glass full administrative control across all imported accounts and must centrally aggregate and group security findings across the entire environment with new accounts automatically included as they are onboarded; which combination of actions should the platform team take to meet these requirements with minimal operational overhead? (Choose two.)

Incorrect. Inviting accounts into an organization is right, but the SCP statement is wrong: SCPs cannot grant permissions to the management account (or anyone). SCPs only define the maximum permissions that accounts can use; they do not create IAM roles or allow cross-account access by themselves. Break-glass access requires an assumable IAM role (or equivalent) in each member account.

Correct. After inviting the startup accounts into the organization, creating (or ensuring the existence of) an OrganizationAccountAccessRole in each member account that trusts the management account and has AdministratorAccess provides centralized, break-glass administrative access. This is the standard Organizations pattern for management-account-to-member-account administration with minimal ongoing overhead.

Correct. AWS Security Hub is designed to aggregate, normalize, and group security findings across AWS accounts and Regions. With AWS Organizations integration, you can designate a delegated administrator, enable organization-wide configuration, and automatically enroll new accounts as they join the organization—meeting the requirement for centralized aggregation with automatic inclusion.

Incorrect. AWS Firewall Manager centralizes the administration of firewall-related policies (AWS WAF, Shield Advanced, security groups, Network Firewall, DNS Firewall) across an organization. It is not intended to be the central aggregation and grouping service for security findings across the entire environment. Security Hub is the correct service for findings aggregation.

Incorrect. Amazon Inspector provides vulnerability management findings (e.g., EC2, ECR, Lambda) and can operate across multiple accounts with delegated administration, but it does not serve as the central, cross-service findings aggregation and grouping layer for the whole environment. The requirement is broader and maps to Security Hub’s organization-wide findings aggregation.

Question Analysis

Core Concept: This question tests AWS Organizations account onboarding and centralized security operations. Specifically: (1) how the management account retains emergency (break-glass) administrative access to member accounts, and (2) how to centrally aggregate and automatically include security findings across all accounts using an organization-integrated security service.

Why the Answer is Correct: To centralize administration, the startup's standalone accounts should be invited into the existing AWS Organization. For break-glass full administrative control, the management account needs a cross-account role in each member account that it can assume with AdministratorAccess. The canonical mechanism is the OrganizationAccountAccessRole (or an equivalent admin role) that trusts the management account; this is exactly what option B describes and is aligned with how Organizations enables centralized access after account creation/invitation. For centralized aggregation and grouping of security findings with automatic inclusion of newly onboarded accounts, AWS Security Hub is the correct service. When integrated with AWS Organizations, Security Hub supports delegated administrator, multi-account/multi-Region aggregation, and auto-enrollment of new organization accounts, minimizing ongoing operational overhead (option C).

Key AWS Features:
- AWS Organizations: account invitation and consolidated governance.
- Cross-account IAM role (OrganizationAccountAccessRole): trusted by the management account; grants AdministratorAccess for emergency access.
- AWS Security Hub + Organizations: delegated admin, organization-wide enablement, automatic account enrollment, and centralized findings aggregation across accounts/Regions.
- Best practice: use a delegated administrator for security tooling to avoid using the management account for day-to-day operations.

Common Misconceptions:
- SCPs do not "grant" permissions; they only set permission guardrails (maximum allowed). Therefore, an SCP cannot by itself create break-glass admin access for the management account.
- Firewall Manager is for centralized firewall policy management (WAF, Shield Advanced, security groups, etc.), not a general security findings aggregator.
- Amazon Inspector produces vulnerability findings, but it is not the broad, cross-service findings aggregation layer requested.

Exam Tips:
- Remember: IAM policies grant permissions; SCPs restrict them.
- For org-wide security findings aggregation with auto-enrollment, think "Security Hub + Organizations + delegated admin."
- For centralized admin access into member accounts, look for "assume role into member accounts" patterns (OrganizationAccountAccessRole or equivalent).
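As a sketch of the break-glass pattern, assuming the invited account already contains the standard OrganizationAccountAccessRole (the member account ID below is a placeholder):

```python
import boto3

# Run from the management account: assume the standard admin role
# that Organizations-invited accounts are expected to carry.
sts = boto3.client("sts")

creds = sts.assume_role(
    RoleArn="arn:aws:iam::111111111111:role/OrganizationAccountAccessRole",  # placeholder member account
    RoleSessionName="break-glass-admin",
)["Credentials"]

# Any client built from these temporary credentials now acts with
# AdministratorAccess inside the member account.
member_iam = boto3.client(
    "iam",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```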

Question 3

A DevOps engineer is building a data-forwarding service where an AWS Lambda function reads batched records from an Amazon Kinesis Data Streams shard and forwards them to an internal ERP SOAP endpoint over a VPC. Roughly 12% of incoming records fail validation and must be handled manually; the Lambda event source mapping is configured with an Amazon SQS dead-letter queue (DLQ) as an on-failure destination, and the function uses batch processing with retries enabled. During load testing, the engineer observes that the DLQ contains many records that have no data issues and were already accepted by the ERP service, indicating that successful items from partially failing batches are being retried and eventually sent to the DLQ along with the failures. Which event source configuration change should the engineer implement to reduce the number of error-free records sent to the DLQ while preserving current throughput and retry behavior?

Incorrect. Increasing retry attempts does not reduce the number of error-free records retried; it typically increases it. With Kinesis batch retries, the whole batch is retried when any record causes a failure, so more retries means more duplicate sends to the ERP endpoint and a higher chance that good records eventually land in the DLQ after repeated failures. This can also increase downstream load and complicate idempotency.

Correct. Enabling bisect (split) batch on function error causes Lambda to automatically divide a failing batch into smaller batches and retry them, isolating the bad records. This reduces reprocessing of successful records and minimizes the number of valid records that end up in the DLQ due to a poisoned batch. It preserves the existing retry model and generally maintains throughput while reducing the failure blast radius.

Incorrect. Increasing the parallelization factor increases the number of concurrent batches processed per shard, improving throughput and reducing lag. However, it does not change the fundamental behavior that a single failing record causes the entire batch to be retried. Therefore, it will not reduce the number of good records retried or sent to the DLQ; it may even increase duplicate deliveries to the ERP endpoint under failure conditions.

Incorrect. Decreasing the maximum record age causes Lambda to stop retrying older records sooner and send them to the on-failure destination earlier. This does not isolate the failing record within a batch; instead, it can increase the number of records (including valid ones) that are discarded to the DLQ because the batch keeps failing and ages out. It reduces recovery time at the cost of higher data loss/noise.

Question Analysis

Core Concept: This question tests AWS Lambda event source mappings for Amazon Kinesis Data Streams, specifically how batch retry semantics interact with partial failures and on-failure destinations (DLQs). With Kinesis, Lambda reads a batch from a shard and, by default, treats the batch as an atomic unit for success/failure.

Why the Answer is Correct: When any record in a Kinesis batch causes the Lambda invocation to fail (for example, a validation failure leading to an exception), the entire batch is retried. That means records that were already successfully forwarded to the ERP endpoint can be reprocessed, potentially causing duplicates and, after retries are exhausted, being sent to the DLQ along with the truly bad records. Enabling "bisect batch on function error" (split/bisect) changes the retry behavior so that on an error, Lambda automatically splits the batch into smaller batches (binary search) and retries those. This isolates the problematic records into the smallest failing batch, dramatically reducing the number of good records that get retried and ultimately sent to the DLQ, while keeping the same overall throughput target and preserving the existing retry mechanism.

Key AWS Features:
- Event source mapping setting: BisectBatchOnFunctionError for Kinesis/DynamoDB Streams.
- On-failure destination (SQS DLQ) for event source mappings, used when records expire or retries are exhausted.
- Batch processing behavior: without bisection, a single bad record poisons the whole batch.
This aligns with Well-Architected Reliability principles: limit blast radius and isolate failures.

Common Misconceptions: It's tempting to increase retries (A) to "fix" transient issues, but that increases duplicate processing and can worsen DLQ noise. Increasing parallelization (C) improves throughput but does not change batch atomicity; it can even amplify duplicates. Reducing max record age (D) can push more records to the DLQ sooner, increasing loss/noise rather than isolating failures.

Exam Tips: For Kinesis/DynamoDB Streams + Lambda, remember: the batch is the unit of retry unless you enable bisection or implement partial batch response (where supported). If the symptom is "good records end up in the DLQ because one record fails," think BisectBatchOnFunctionError (or partial batch response) before tuning retries or concurrency.
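A minimal sketch of the configuration change, using the BisectBatchOnFunctionError flag on the existing event source mapping (the mapping UUID below is a placeholder):

```python
import boto3

lambda_client = boto3.client("lambda")

# The UUID identifies the existing Kinesis event source mapping;
# find it with list_event_source_mappings if needed.
lambda_client.update_event_source_mapping(
    UUID="11111111-2222-3333-4444-555555555555",  # placeholder
    BisectBatchOnFunctionError=True,  # split failing batches to isolate bad records
)
```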

Question 4

A global media company uses AWS Organizations with two OUs (root and analytics); the root OU has a single SCP that contains one Allow statement for all actions on all resources, while the analytics OU (which contains four accounts including the ext-ml account, ID 222222222222) has an SCP that allows only s3:* and glue:* and explicitly denies all other actions by using a Deny statement with NotAction [s3:*, glue:*] on all resources; in the ext-ml account, a DevOps engineer's IAM user has the AdministratorAccess policy attached, and when the engineer attempts in us-east-1 to create an Amazon RDS db.m6g.large instance via the console and AWS CLI (rds:CreateDBInstance), the request fails with AccessDeniedException indicating it was blocked by an organization service control policy. Which change will allow the engineer to successfully create the RDS instance in the ext-ml account?

Incorrect. AmazonRDSFullAccess is an IAM policy. IAM permissions cannot override an explicit Deny from an SCP. The error explicitly states the action was blocked by an Organizations SCP, so adding more IAM permissions (even admin) will not help until the SCP no longer denies rds:CreateDBInstance.

Incorrect. Attaching a new SCP that allows RDS to the ext-ml account does not override the analytics OU SCP’s explicit Deny. SCPs combine as guardrails, and explicit Deny in any applicable SCP still blocks the action. You would need to remove/modify the deny condition in the OU SCP (or move the account).

Correct. The analytics OU SCP explicitly denies all actions except s3:* and glue:* via Deny with NotAction. Because RDS is not exempted, rds:CreateDBInstance is denied. Updating this OU SCP to include rds:* (or specific required RDS actions) in the NotAction exception (or otherwise permitting RDS) removes the explicit deny and allows IAM AdministratorAccess to succeed.

Incorrect. A root OU SCP that allows all actions does not cancel a child OU SCP’s explicit Deny. SCP evaluation requires the action to be permitted by all applicable SCP constraints; any explicit Deny still wins. Therefore, adding an allow-RDS SCP at the root will not enable RDS in an OU that explicitly denies it.

Question Analysis

Core Concept: This question tests how AWS Organizations Service Control Policies (SCPs) set the maximum available permissions for accounts, regardless of IAM permissions. SCPs do not grant permissions by themselves; they define guardrails. An explicit Deny in an SCP cannot be overridden by any IAM policy, including AdministratorAccess.

Why the Answer is Correct: The ext-ml account is in the analytics OU, which has an SCP that allows only s3:* and glue:* and explicitly denies everything else using a Deny with NotAction [s3:*, glue:*]. Because rds:CreateDBInstance is not in the NotAction allow-list, it matches the Deny statement and is explicitly denied for all principals in all accounts in that OU. Therefore, the engineer's AdministratorAccess IAM policy is irrelevant for RDS actions: the request is blocked at the Organizations layer. To allow RDS instance creation, you must change the effective SCPs applying to the ext-ml account so that RDS actions are not explicitly denied. Updating the analytics OU SCP to include rds:* in the NotAction exception (or otherwise removing the explicit deny for RDS) is the direct fix.

Key AWS Features: SCP evaluation is "intersection-based": an action must be allowed by IAM and not blocked by SCPs. Explicit Deny always wins. OU-level SCPs apply to all accounts in the OU; account-level SCPs are additive constraints (more guardrails), not overrides. A root OU SCP that allows * does not negate a child OU's explicit deny.

Common Misconceptions: A frequent mistake is assuming that attaching AmazonRDSFullAccess (or even AdministratorAccess) will fix an SCP denial. It will not. Another misconception is that attaching an "allow RDS" SCP at the account or root will override the OU deny; it won't, because the explicit deny remains in effect.

Exam Tips: When you see "blocked by an organization service control policy," immediately inspect SCPs for explicit Deny or allow-lists (NotAction patterns). If an OU SCP uses Deny with NotAction, you must add the needed service actions to the NotAction list (or redesign the SCP) at the OU, or move the account to an OU where the action is permitted.
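A minimal sketch of the corrected OU SCP and the update call, assuming the existing policy ID is known (the policy ID below is a placeholder):

```python
import json
import boto3

org = boto3.client("organizations")

# Revised analytics OU SCP: rds:* joins the NotAction exception list,
# so the explicit Deny no longer matches RDS actions.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "NotAction": ["s3:*", "glue:*", "rds:*"],
        "Resource": "*",
    }],
}

org.update_policy(
    PolicyId="p-examplepolicyid",  # placeholder: the analytics OU SCP
    Content=json.dumps(scp),
)
```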

Question 5 (Choose two.)

A media streaming startup operates a multi-account environment in AWS Organizations with all features enabled. The operations (source) account uses AWS Backup in us-west-2 to protect 120 Amazon EBS volumes and encrypts recovery points with a customer managed KMS key (alias/ops-backup). To meet DR requirements, the company configured cross-account copy in the management account, created a backup vault named dr-vault and a customer managed KMS key (alias/dr-backup) in a newly created DR account, and then updated the backup plan in the operations account to copy all recovery points to the DR account's dr-vault. When a backup job runs in the operations account, recovery points are created successfully in the operations account, but no copies appear in the DR account's dr-vault. Which combination of steps will allow AWS Backup to copy recovery points to the DR account's vault? (Choose two.)

Correct. Cross-account copy requires the destination backup vault (dr-vault) to trust/allow the source (operations) account to write/copy recovery points into it. This is done with a resource-based backup vault access policy in the DR account. Without this explicit allow, AWS Backup cannot create the copied recovery point in the destination vault, so nothing appears there.

Incorrect. The DR account does not need to read from the source account’s backup vault for cross-account copy. The copy is initiated by AWS Backup based on the backup plan and writes into the destination vault. The key authorization point for the vault is on the destination vault policy (allowing write), not on the source vault policy granting read.

Incorrect. Backup vault policies do not grant access to KMS keys in another account. KMS authorization is controlled by the KMS key policy (and possibly grants), not by the backup vault access policy. You must update the destination CMK policy in the DR account to allow AWS Backup (and the source account context) to use it.

Incorrect. Sharing the source account CMK (alias/ops-backup) with the DR account is not the primary requirement for cross-account copy into a DR vault encrypted with a DR-owned CMK. AWS Backup copies and re-encrypts the recovery point using the destination vault’s encryption key. The missing permission is typically on the destination vault policy and destination CMK policy.

Correct. Because the copied recovery points in the DR vault are encrypted with the DR account’s customer managed KMS key (alias/dr-backup), that key policy must allow AWS Backup (and the source account acting through the service) to use the key for encryption-related operations. Without these KMS permissions, AWS Backup cannot encrypt the copied recovery points and the copy operation fails.

Question Analysis

Core Concept: This question tests AWS Backup cross-account copy prerequisites in an AWS Organizations environment, specifically the two authorization layers involved: (1) the destination backup vault access policy and (2) AWS KMS key policies for encrypting the copied recovery points. Even with Organizations "all features," AWS Backup still requires explicit resource-based permissions on the destination vault and KMS permissions to use the destination CMK.

Why the Answer is Correct: For cross-account copy, AWS Backup in the source account must be allowed to write into the destination vault in the DR account. That permission is granted by a resource-based policy on the destination backup vault (dr-vault). Without it, the copy operation is denied and no recovery points appear in the DR vault. Additionally, because the destination vault encrypts recovery points with a customer managed KMS key (alias/dr-backup), the AWS Backup service (acting on behalf of the source account) must be permitted by the destination CMK key policy to use the key for encryption operations (e.g., GenerateDataKey, Encrypt, Decrypt as required by the service). If the key policy does not allow this, AWS Backup cannot encrypt the copied recovery points in the DR account and the copy fails.

Key AWS Features:
- AWS Backup cross-account copy requires a destination vault access policy that allows the source account to perform copy/write actions.
- AWS KMS CMKs are controlled primarily by key policies (not just IAM). Cross-account usage must be explicitly allowed in the key policy.
- In multi-account DR designs, the destination account typically owns the vault and CMK; the source account is granted limited permissions to copy into that vault.

Common Misconceptions: A frequent mistake is assuming that enabling AWS Organizations "all features" automatically authorizes cross-account backup copy. It does not; vault policies and KMS key policies still must be configured. Another misconception is thinking the source CMK must be shared with the DR account; for copy, AWS Backup re-encrypts using the destination CMK, so the critical permission is on the destination CMK.

Exam Tips: When you see "cross-account copy" + "customer managed KMS key," immediately check for two required permissions: the destination vault policy (resource-based) and the destination KMS key policy. If copies don't appear, it's almost always one (or both) of these missing permissions rather than the source vault policy.
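A minimal sketch of the destination vault policy applied in the DR account, with a placeholder operations-account ID; the companion change to the alias/dr-backup key policy is omitted here:

```python
import json
import boto3

backup = boto3.client("backup", region_name="us-west-2")  # Region illustrative

# Resource-based policy on the DR account's vault allowing the
# operations (source) account to copy recovery points into it.
vault_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # placeholder source account
        "Action": "backup:CopyIntoBackupVault",
        "Resource": "*",
    }],
}

backup.put_backup_vault_access_policy(
    BackupVaultName="dr-vault",
    Policy=json.dumps(vault_policy),
)
```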


Question 6

A global logistics company ingests telemetry from 12,000 handheld scanners across 3 AWS Regions; each device type writes to a dedicated Amazon CloudWatch Logs log group per Region, totaling about 90 log groups. Over the past 14 days, CloudWatch Logs ingestion charges have increased by 68% with no planned changes. A DevOps engineer must quickly determine which device types or Regions are driving the spike in ingestion to prioritize remediation. Which solution is the MOST operationally efficient?

Correct. This option is the only one that directly compares ingestion volume across all log groups over the 14-day window using CloudWatch's own telemetry (the per-log-group AWS/Logs IncomingBytes metric) rather than scanning log contents or inferring from billing or API activity. That makes it the most operationally efficient approach for quickly identifying the device types and Regions responsible for the spike. Once the highest-ingesting log groups are identified, the team can immediately prioritize remediation on the noisiest sources.

Incorrect. CloudWatch Logs Insights is best for querying log content and patterns, but it scans log data and can add query cost/latency. Counting events per hour doesn’t identify ingestion bytes (large messages vs small messages) and “how many log groups received events” doesn’t rank the biggest ingest sources. It’s less efficient for rapid cost-driver attribution.

Incorrect. Cost Explorer can confirm CloudWatch Logs costs increased and can break down by usage type (e.g., ingestion, storage), but it typically cannot attribute ingestion to specific log groups or device types. It’s a billing/reporting tool, not an operational tool for pinpointing which of 90 log groups is responsible for the spike.

Incorrect. CloudTrail records API calls such as CreateLogStream and PutLogEvents, but it does not provide a reliable measure of bytes ingested per log group/device type. High API call volume doesn’t necessarily mean high ingestion bytes, and analyzing CloudTrail for this purpose is indirect and operationally heavy compared to using AWS/Logs IncomingBytes.

Question Analysis

Core Concept: This question is about finding the fastest operational way to identify which sources are increasing Amazon CloudWatch Logs ingestion costs. The ideal solution should directly measure ingestion volume by log group, because each log group maps to a device type in a Region. In practice, CloudWatch Logs Insights can query log content, Cost Explorer can show service-level spend trends, and CloudTrail can show API activity, but none of those directly and efficiently attribute ingestion bytes to specific log groups as cleanly as a purpose-built ingestion metric does.

Why the Answer is Correct: Option A is the best answer because it uses CloudWatch-native telemetry to compare ingestion volume across log groups over the last 14 days and visualize the top contributors. That is the right operational pattern for rapid triage: use metrics rather than scanning logs or inferring from API calls. Since each device type writes to a dedicated log group per Region, ranking log groups effectively identifies the device types and Regions driving the increase.

Key AWS Features:
- CloudWatch metrics and dashboards are the most operationally efficient tools for comparing many resources over time.
- Metric math can help aggregate, compare, and visualize multiple time series in one place.
- Cost Explorer is useful for billing-level trends, but not for resource-level operational attribution in this scenario.
- CloudTrail captures control-plane and data-plane API activity, but not the actual ingestion volume needed for cost attribution.

Common Misconceptions:
- Logs Insights is excellent for analyzing log content, but counting events does not equal measuring bytes ingested, which is what drives ingestion charges.
- Cost Explorer can show that CloudWatch costs increased, but it usually cannot pinpoint the exact log groups responsible.
- CloudTrail event counts such as PutLogEvents do not reliably correlate to ingestion volume because payload sizes vary.

Exam Tips: On the exam, when the goal is to quickly identify top contributors to a usage spike across many resources, prefer native metrics and dashboards over ad hoc queries or billing reports. Also distinguish between event count, API count, and byte volume: only byte volume directly maps to CloudWatch Logs ingestion charges.
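A minimal sketch of ranking log groups by the AWS/Logs IncomingBytes metric over the 14-day window (run once per Region):

```python
import boto3
from datetime import datetime, timedelta, timezone

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

# Sum bytes ingested per log group over the last 14 days.
totals = {}
for page in logs.get_paginator("describe_log_groups").paginate():
    for group in page["logGroups"]:
        name = group["logGroupName"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/Logs",
            MetricName="IncomingBytes",
            Dimensions=[{"Name": "LogGroupName", "Value": name}],
            StartTime=start,
            EndTime=end,
            Period=86400,  # daily buckets
            Statistics=["Sum"],
        )
        totals[name] = sum(p["Sum"] for p in stats["Datapoints"])

# Print the ten noisiest log groups (device type + Region by naming convention).
for name, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{total / 1e9:>10.2f} GB  {name}")
```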

Question 7

A DevOps engineer deploys an Auto Scaling group of 8 Amazon EC2 instances without public IPs across two private subnets (10.20.1.0/24 and 10.20.2.0/24) in us-east-2; due to a new FedRAMP High control, the VPC has no internet gateway or NAT gateway and all internet egress is prohibited, and the user data uses the AWS CLI to download artifacts from s3://company-artifacts-prod/builds/2.7.4/app.tar.gz at boot, yet while the instances report healthy status checks the application is not installed because the download fails. Which action will install the application while complying with the no-internet egress requirement?

Incorrect. Attaching Elastic IPs implies the instances must have a route to an internet gateway to use those public IPs. The scenario explicitly states there is no IGW and internet egress is prohibited, so EIPs would not provide connectivity to S3. Even if it worked, it would violate the compliance requirement by enabling internet egress during bootstrap.

Incorrect. A NAT gateway requires a public subnet with a route to an internet gateway, and it enables outbound internet access from private subnets. The requirement explicitly prohibits all internet egress and states there is no IGW/NAT. Implementing NAT would directly violate the FedRAMP High control described in the prompt.

Correct. An S3 Gateway VPC Endpoint provides private connectivity from the private subnets to Amazon S3 without using the internet, IGW, or NAT. Adding the endpoint to both private route tables ensures traffic to S3 is routed through the endpoint. An IAM instance profile granting s3:GetObject allows the AWS CLI in user data to download the artifact. A bucket policy can further restrict access to the endpoint ID for compliance.

Incorrect. Security groups only control allowed traffic; they do not provide a network path. Without an IGW or NAT, the instances still cannot reach public IP ranges. Additionally, allowing egress to a public IP range is still internet egress and violates the requirement. S3 also does not have stable, customer-specific public IP ranges suitable for this approach.

Question Analysis

Core Concept: This question tests private subnet connectivity to AWS services without internet egress, and how to access Amazon S3 from a VPC using VPC endpoints while meeting strict compliance controls (FedRAMP High). It also touches IAM instance profiles for least-privilege access.

Why the Answer is Correct: The instances are in private subnets with no internet gateway (IGW) and no NAT gateway, so they cannot reach public S3 endpoints over the internet. However, S3 can be accessed privately from within a VPC using a Gateway VPC Endpoint for S3. Creating an S3 gateway endpoint and adding it to the private route tables provides a private path from the subnets to S3 without traversing the public internet, satisfying the "no internet egress" requirement. The instances still need authorization to read the object, so an IAM instance profile granting s3:GetObject on the bucket/prefix is required. Optionally, a bucket policy can restrict access to the specific VPC endpoint ID (aws:SourceVpce) to enforce that access only occurs through the endpoint.

Key AWS Features:
- Gateway VPC Endpoint for S3: adds routes (prefix list) in route tables to reach S3 privately.
- Route table integration: must be associated with both private subnets' route tables.
- IAM instance profile: enables the AWS CLI on EC2 to call S3 without static credentials.
- Bucket policy conditions: restrict by VPC endpoint ID to prevent any non-endpoint access.
These align with AWS Well-Architected Security (network segmentation, least privilege) and compliance requirements.

Common Misconceptions: Many assume S3 always requires internet or NAT because it uses public DNS names. With an S3 gateway endpoint, DNS still resolves normally, but traffic is routed privately via the endpoint. Another misconception is that security groups can "allow" access to S3; security groups don't create routes or private connectivity.

Exam Tips: When you see "private subnets + no IGW/NAT + need S3 access," the default answer is "S3 Gateway VPC Endpoint + IAM role." For other AWS services (SSM, CloudWatch, ECR API), you typically need Interface VPC Endpoints (PrivateLink). Always pair network connectivity with IAM authorization, and consider endpoint-restricted bucket policies for compliance.
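A minimal sketch of creating the S3 gateway endpoint, with placeholder VPC and route table IDs for the two private subnets:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

# Gateway endpoint for S3; traffic to the S3 prefix list is routed
# privately through the endpoint instead of the internet.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",                # placeholder
    VpcEndpointType="Gateway",
    ServiceName="com.amazonaws.us-east-2.s3",
    RouteTableIds=["rtb-0aaa1111", "rtb-0bbb2222"],  # both private route tables
)
```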

Question 8

A university consortium uses AWS Organizations across two Regions with four OUs and a delegated administrator account named platform-ops, and the research OU (58 AWS accounts) must ensure that, before any AWS CloudFormation stack create or update runs in us-east-1 or eu-west-1, every Amazon EBS volume and Amazon SQS queue defined in the stack has server-side encryption enabled with the KMS key alias alias/research-default and that this control is enforced consistently across all accounts in that OU—what solution will enforce this policy prior to CloudFormation stack operations in those accounts?

Correct. CloudFormation Hooks can validate resource configuration during stack create/update and fail the operation if EBS volumes or SQS queues are not encrypted with the required KMS key alias. Deploying the Hook via StackSets with Organizations trusted access ensures consistent, centralized rollout to all 58 accounts in the research OU across us-east-1 and eu-west-1, matching the “prior to stack operations” requirement.

Incorrect. AWS Config rules evaluate resources after they exist (detective control). Even with remediation via Systems Manager, noncompliant EBS volumes or SQS queues could be created during a CloudFormation operation and only corrected later. This violates the requirement to enforce the policy before any stack create/update runs. Config is excellent for continuous compliance reporting, not pre-provision blocking.

Incorrect for this question’s intent. SCPs can deny API actions and are preventive, but they are not evaluated “as a CloudFormation pre-check” and can be difficult to implement reliably for property-level requirements like enforcing a specific KMS key alias across multiple services. SCP conditions for KMS key alias usage are often nontrivial and can lead to false positives/negatives or operational breakage.

Incorrect. A Lambda-based scanner that assumes roles and inspects stacks is reactive and cannot reliably stop a CloudFormation create/update before resources are provisioned. It introduces timing/race conditions, additional operational complexity, and inconsistent enforcement across Regions/accounts. This approach is also less aligned with AWS best practices compared to native preventive controls like CloudFormation Hooks.

Question Analysis

Core Concept: This question tests preventive guardrails for Infrastructure as Code (IaC) using AWS CloudFormation. Specifically, it requires a control that runs before CloudFormation stack create/update so noncompliant resources never get provisioned. The relevant feature is CloudFormation Hooks (part of the CloudFormation Guard/Hook framework) and organization-wide deployment via StackSets.

Why the Answer is Correct: CloudFormation Hooks can synchronously validate resource properties during stack operations (create/update) and fail the operation if policy is violated. A Hook can target AWS::EC2::Volume and AWS::SQS::Queue and enforce that encryption is enabled and that the KMS key reference matches alias/research-default. Deploying the Hook to every account in the research OU and in both Regions ensures consistent enforcement across 58 accounts in us-east-1 and eu-west-1. Using AWS Organizations trusted access for CloudFormation StackSets allows centralized, scalable deployment from the delegated administrator (platform-ops) without per-account manual setup.

Key AWS Features:
- CloudFormation Hooks: pre-provision checks that can block stack operations when resources don't meet requirements.
- StackSets with trusted access: deploy the Hook (and any required IAM roles/permissions) across all accounts in an OU and multiple Regions.
- Delegated administrator: enables platform-ops to manage StackSets at org scale.

Common Misconceptions: SCPs are powerful preventive controls, but they operate at the API authorization layer and are not CloudFormation-specific. They can be brittle for property-level validation (e.g., ensuring a specific alias is used) and may miss cases where CloudFormation uses different API calls or where encryption is enabled via defaults. AWS Config is detective (post-deployment) and cannot guarantee "before stack runs." Lambda scanning is also reactive and race-prone.

Exam Tips: When the requirement says "before any CloudFormation stack create or update runs," look for CloudFormation Hooks (preventive) rather than AWS Config (detective) or Lambda remediation. For org-wide consistency across many accounts/Regions, pair the control with StackSets and Organizations trusted access/delegated admin patterns.
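A minimal sketch of the StackSets rollout from the delegated administrator, assuming a template that registers/activates the Hook already exists at the placeholder URL and that the OU ID is illustrative:

```python
import boto3

cfn = boto3.client("cloudformation")

# Service-managed StackSet: new accounts joining the research OU are
# auto-deployed, keeping enforcement consistent as the OU grows.
cfn.create_stack_set(
    StackSetName="research-encryption-hook",
    TemplateURL="https://example-bucket.s3.amazonaws.com/hook-template.yaml",  # placeholder
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
    CallAs="DELEGATED_ADMIN",
)

cfn.create_stack_instances(
    StackSetName="research-encryption-hook",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-xxxx-research"]},  # placeholder OU ID
    Regions=["us-east-1", "eu-west-1"],
    CallAs="DELEGATED_ADMIN",
)
```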

Question 9

A telemedicine company's real-time consultation platform has grown from 8,000 daily active users to 60,000 across 7 countries, its microservices run on Amazon Elastic Kubernetes Service (Amazon EKS) that scales up to 2,500 nodes during 20-minute peak windows, and security reviews show sporadic unauthorized sign-in attempts and abrupt session drops, so the company requires platform-wide additional protections and a single summarized view across all AWS accounts that shows login attempts, API calls, and network traffic, permits network traffic analysis while minimizing raw log management overhead, and enables rapid investigation of potentially malicious activity involving the EKS workload—what solution best meets these requirements?

Incorrect. This option relies on storing logs in Amazon S3 and analyzing them with Athena and QuickSight, which creates significant operational overhead for schema management, query development, dashboard maintenance, and long-term log handling. Although it can provide reporting, it does not offer the same managed threat investigation experience as Amazon Detective. It also does not best meet the requirement for rapid investigation, because analysts would need to manually correlate events across multiple datasets. For an exam question emphasizing low overhead and fast security analysis, this is not the strongest choice.

Correct. This option best satisfies the stated requirements because GuardDuty adds managed threat detection for suspicious sign-in behavior, API activity, and EKS audit events without requiring the team to build custom analytics pipelines. Amazon Detective is specifically designed to provide a summarized, cross-account investigation view that helps analysts pivot through related API calls, identities, and network activity tied to GuardDuty findings. That directly addresses the need for a single view across AWS accounts and rapid investigation of potentially malicious activity involving the EKS workload. It also minimizes raw log management overhead because AWS handles the correlation and investigation workflow instead of requiring the company to store and query large volumes of logs manually.

Incorrect. CloudWatch Container Insights is primarily an observability service for container and cluster performance metrics, not a managed threat detection or security investigation service. While CloudTrail logs in S3 can be queried with Athena and visualized in QuickSight, that again increases raw log management overhead and requires manual analysis. The option also lacks GuardDuty, which is the AWS managed service that detects suspicious activity from EKS audit logs, API calls, and network-related telemetry. Therefore, it does not adequately address the security detection and investigation requirements.

Incorrect. This option improves detection coverage by combining GuardDuty, CloudTrail, and VPC Flow Logs, but it still falls short of the requirement for a single summarized view across all AWS accounts and rapid investigation. GuardDuty generates findings, but Amazon Detective is the service that provides the investigation graph, entity relationships, and cross-account summarized analysis experience. Adding Container Insights does not solve the security investigation requirement because it is focused on operational telemetry rather than threat analysis. As a result, D is a strong partial solution for detection, but not the best complete solution for both detection and investigation.

Question Analysis

Core Concept: This question tests AWS threat detection and investigation across accounts for an EKS-based workload with minimal log-management overhead. The key services are Amazon GuardDuty (managed threat detection using CloudTrail, VPC Flow Logs, DNS logs, and EKS audit logs) and Amazon Detective (managed investigation that correlates signals into a unified view).

Why the Answer is Correct: The company needs (1) platform-wide additional protections, (2) a single summarized view across all AWS accounts showing login attempts, API calls, and network traffic, (3) network traffic analysis while minimizing raw log management, and (4) rapid investigation of suspicious activity involving EKS. Enabling GuardDuty for EKS Audit Log Monitoring provides detections for suspicious Kubernetes control-plane activity (e.g., anomalous kubectl exec, suspicious API calls, privilege escalation patterns) in addition to its standard detections for IAM/console sign-in anomalies, API calls (via CloudTrail management events), and network threats (via VPC Flow Logs/DNS). Amazon Detective then provides the "single summarized view" and rapid investigation capability by automatically aggregating and correlating GuardDuty findings plus related activity (CloudTrail, VPC Flow Logs, and other context) across accounts using multi-account management (typically via AWS Organizations). This meets the requirement to minimize raw log management because you do not need to build and operate an S3/Athena/QuickSight pipeline just to investigate incidents.

Key AWS Features:
- GuardDuty EKS Audit Log Monitoring: detects threats from Kubernetes audit events without you building custom analytics.
- Detective: investigation graphs, entity timelines, and cross-account context to pivot from a GuardDuty finding to related API calls, IAM principals, and network connections.
- Use a delegated administrator in AWS Organizations for centralized enablement and visibility across accounts.

Common Misconceptions: Storing logs in S3 and querying with Athena/QuickSight can create dashboards, but it increases operational overhead and does not provide the same managed correlation/investigation workflow as Detective. CloudWatch Container Insights is for performance/operational telemetry, not security investigation.

Exam Tips: When requirements include "rapid investigation," "summarized view," and "minimize log management," think GuardDuty (detect) + Detective (investigate). For EKS-specific suspicious activity, look for "GuardDuty for EKS Audit Logs."
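A minimal sketch of enabling EKS audit log monitoring on an existing GuardDuty detector (Detective enablement and Organizations delegated-admin setup are omitted here):

```python
import boto3

guardduty = boto3.client("guardduty")

# Assumes a detector already exists in this Region; production code
# should handle the case where the list is empty.
detector_id = guardduty.list_detectors()["DetectorIds"][0]

guardduty.update_detector(
    DetectorId=detector_id,
    Features=[{"Name": "EKS_AUDIT_LOGS", "Status": "ENABLED"}],
)
```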

Question 10

A healthcare company uses customer managed AWS KMS keys with automatic rotation disabled due to imported key material requirements and performs manual rotation, and the security team wants to receive an alert if any active key in us-east-1 or eu-west-1 has not been rotated within the last 120 days; which solution will accomplish this?

Incorrect. AWS KMS does not have a native feature to publish SNS notifications based on key age or time since last rotation. KMS can emit API activity to CloudTrail, but it does not generate an event like “key older than 120 days” or “rotation overdue” that you can directly subscribe to. You must implement evaluation logic using services like AWS Config or scheduled Lambda checks.

Incorrect. Trusted Advisor is not the right control plane for verifying KMS key rotation recency for customer managed keys with manual rotation requirements. Trusted Advisor checks are limited and do not provide a customizable, per-key “last rotated within 120 days” evaluation for imported key material scenarios. Also, relying on a periodic EventBridge+Lambda call to Trusted Advisor is indirect and not a strong compliance pattern compared to AWS Config.

Correct. AWS Config custom rules are intended for compliance checks that require custom logic, such as verifying that active KMS keys have been manually rotated within the last 120 days. Because imported key material disables automatic rotation and KMS does not provide a native overdue-rotation notification, a Lambda-backed rule can evaluate each key in us-east-1 and eu-west-1 and compare its last recorded manual rotation date against the threshold. The rule can then mark overdue keys as NON_COMPLIANT and trigger notifications through SNS directly or through EventBridge based on Config compliance changes.

Incorrect. AWS Security Hub aggregates findings from integrated products and some AWS security checks, but it does not provide a built-in, configurable control to alert specifically when a customer managed KMS key has not been manually rotated within 120 days, especially for imported key material workflows. Security Hub is better for centralized posture management, not custom time-since-rotation compliance logic.

Question Analysis

Core Concept: This question is about building a custom compliance check for customer managed AWS KMS keys when automatic rotation cannot be used because the keys rely on imported key material. The requirement is to alert when any active key in us-east-1 or eu-west-1 has gone more than 120 days without manual rotation, which requires custom evaluation logic rather than a native KMS feature.

Why the Answer is Correct: AWS Config is the best fit because it is designed for continuous compliance evaluation and supports custom rules backed by AWS Lambda. A custom rule can evaluate KMS keys in each required Region, determine whether a key is active, and compare the key's last known manual rotation date against the 120-day threshold. Because KMS does not natively expose an age-based alert for manually rotated imported key material, the implementation must rely on metadata your organization records for each rotation event, such as a tag, a parameter, or another authoritative store updated during the rotation process. When a key is overdue, the rule can mark it NON_COMPLIANT and trigger an SNS notification directly or through an EventBridge rule on Config compliance changes.

Key AWS Features:
1. AWS Config custom rules allow organization-specific compliance logic that is not available through managed rules.
2. Regional deployment supports checking KMS keys separately in us-east-1 and eu-west-1, with optional aggregation for centralized visibility.
3. Lambda-backed evaluation enables custom comparison logic against a 120-day threshold and active key state.
4. SNS and EventBridge can be used to notify the security team when a key becomes noncompliant.

Common Misconceptions: A common mistake is assuming AWS KMS can directly notify when a key is older than a threshold or when manual rotation is overdue. Another misconception is that Trusted Advisor or Security Hub provides a built-in control for this exact imported-key-material manual-rotation requirement. In reality, this is a custom compliance use case that needs AWS Config plus supporting metadata about when manual rotation occurred.

Exam Tips: When a question asks for continuous compliance monitoring with a custom threshold and alerting, AWS Config is usually the strongest answer. Be cautious with KMS manual rotation scenarios, because imported key material disables automatic rotation and KMS does not provide a native overdue-rotation alarm. On the exam, prefer AWS Config custom rules over general security dashboards when the requirement is precise, customizable, and compliance-oriented.
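A minimal sketch of the Lambda function backing the Config custom rule, assuming the rotation process stamps each key with a hypothetical LastManualRotation tag containing an ISO 8601 date (the tag name and format are assumptions, not a KMS feature):

```python
import boto3
from datetime import datetime, timedelta, timezone

kms = boto3.client("kms")
config = boto3.client("config")

THRESHOLD = timedelta(days=120)

def handler(event, context):
    now = datetime.now(timezone.utc)
    evaluations = []
    for page in kms.get_paginator("list_keys").paginate():
        for key in page["Keys"]:
            meta = kms.describe_key(KeyId=key["KeyId"])["KeyMetadata"]
            # Only evaluate active, customer managed keys.
            if meta["KeyState"] != "Enabled" or meta["KeyManager"] != "CUSTOMER":
                continue
            tags = {t["TagKey"]: t["TagValue"]
                    for t in kms.list_resource_tags(KeyId=key["KeyId"])["Tags"]}
            rotated = tags.get("LastManualRotation")  # hypothetical tag set by the rotation runbook
            overdue = (rotated is None or
                       now - datetime.fromisoformat(rotated).replace(tzinfo=timezone.utc) > THRESHOLD)
            evaluations.append({
                "ComplianceResourceType": "AWS::KMS::Key",
                "ComplianceResourceId": key["KeyId"],
                "ComplianceType": "NON_COMPLIANT" if overdue else "COMPLIANT",
                "OrderingTimestamp": now,
            })
    # put_evaluations accepts up to 100 evaluations per call;
    # batching is omitted in this sketch.
    config.put_evaluations(Evaluations=evaluations,
                           ResultToken=event["resultToken"])
```

Deploying the rule in both us-east-1 and eu-west-1 covers each Region's keys; an EventBridge rule on Config compliance changes can then publish to SNS for the security team.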

Success Stories (6)

주** · Nov 25, 2025

Study period: 2 months

I worked through the app's questions three times and was anxious about what I would do if I failed, but very similar questions appeared on the actual exam, so I was able to answer them easily. Thank you!

********** · Nov 20, 2025

Study period: 1 month

After going through a Udemy course, I wanted to do some practice exams before taking the real exam. Cloud Pass is a good resource for the exam. I didn't complete every question, only about 70% of them, but I passed! Thanks, Cloud Pass!

u********* · Nov 18, 2025

Study period: 1 month

A lot of the questions in this app indeed appeared on the exam; very helpful.

S******* · Oct 31, 2025

Study period: 1 month

Passed the DOP-C02 exam in Oct 2025. These practice questions were essential to my preparation. The services covered in the practice tests match the exam content very well.

D**** · Oct 27, 2025

Study period: 1 month

Passed the DOP exam with the help of Cloud Pass questions. The real exam is full of tricky questions, and these sets helped me prepare for it.

