AWS Certified DevOps Engineer - Professional (DOP-C02)

Practice Test #1

Simulate the real exam with 75 questions and a 180-minute time limit. Study with AI-verified answers and detailed explanations.

75 questions · 180 minutes · passing score 750/1000

AI-Powered

Triple AI-Verified Answers & Explanations

Every answer is cross-validated by three leading AI models to ensure the highest accuracy. Detailed per-option explanations and in-depth question analysis are provided.

GPT Pro
Claude Opus
Gemini Pro
Per-option explanations
In-depth question analysis
Three-model consensus accuracy

Practice Questions

Question 1

A video analytics company manages 28 AWS accounts in a single AWS Organizations organization with all features enabled and uses AWS CloudFormation StackSets for baseline deployments. AWS Config already monitors general S3 settings. The security team now requires a preventative control that ensures every S3 PutObject across all accounts and Regions uses AWS Key Management Service (AWS KMS) server-side encryption (SSE-KMS) and blocks any noncompliant upload attempt, with minimal operational overhead. Which solution will meet these requirements?

Incorrect. AWS Config conformance packs with s3-bucket-server-side-encryption-enabled provide a detective control: they evaluate bucket settings and report compliance. Even with SNS notifications, Config does not prevent or block a noncompliant PutObject request in real time. It also focuses on bucket-level configuration rather than enforcing per-request SSE-KMS usage for every upload attempt across all accounts and Regions.

Incorrect. This SCP targets s3:CreateBucket, which is the wrong API action for the requirement. The company needs to enforce encryption on object uploads (PutObject), not on bucket creation. Additionally, the condition key s3:x-amz-server-side-encryption applies to object upload requests, so applying it to CreateBucket would not reliably enforce SSE-KMS for all uploaded objects.

Incorrect. CloudTrail data events plus EventBridge and SNS is an after-the-fact detection and alerting pattern. It can identify unencrypted PutObject calls and notify the security team, but it does not block the upload attempt. The requirement explicitly asks for a preventative control that blocks noncompliant uploads with minimal operational overhead, which points to an SCP rather than monitoring/alerting.

Correct. An SCP that denies s3:PutObject unless s3:x-amz-server-side-encryption equals aws:kms is a preventative, org-wide guardrail. Attached to the organization root, it applies across all accounts and Regions and blocks noncompliant uploads regardless of IAM permissions. This meets the requirement to ensure every PutObject uses SSE-KMS and to stop noncompliant attempts with minimal ongoing operations.

Problem Analysis

Core Concept: This question tests preventative, organization-wide security controls for Amazon S3 using AWS Organizations Service Control Policies (SCPs). SCPs are guardrails that set the maximum available permissions for accounts/OUs, enabling you to block noncompliant API calls before they succeed.

Why the Answer is Correct: The requirement is to ensure every S3 PutObject across all accounts and Regions uses SSE-KMS and to block noncompliant uploads with minimal operational overhead. An SCP attached at the organization root can explicitly deny s3:PutObject when the request does not specify AWS KMS server-side encryption (x-amz-server-side-encryption = aws:kms). Because explicit denies in SCPs apply regardless of IAM permissions, this creates a preventative control that stops noncompliant PutObject requests across all member accounts (including new accounts) without deploying agents, rules, or per-account tooling.

Key AWS Features:
- AWS Organizations SCPs: central, scalable guardrails; explicit Deny overrides Allow.
- S3 condition keys: use s3:x-amz-server-side-encryption to require aws:kms on PutObject. This enforces encryption at request time (preventative).
- Organization root attachment: ensures coverage across all accounts/OUs and Regions with minimal ongoing operations.

Common Misconceptions:
- AWS Config (including conformance packs) is primarily detective/remediative, not preventative. It can report noncompliance and trigger notifications or remediation, but it does not inherently block a PutObject call.
- CloudTrail + EventBridge is also detective: it can alert after the fact, but cannot prevent the upload.
- Denying CreateBucket does not address object uploads; encryption requirements must be enforced on PutObject (and potentially multipart upload-related actions in real implementations).

Exam Tips: When the question says "preventative control" and "block noncompliant attempts" across many accounts with low overhead, think SCPs (or sometimes permission boundaries) rather than Config/CloudTrail. Also ensure the action in the policy matches the required behavior: object encryption is enforced on s3:PutObject, not on bucket creation. In practice, you may also consider related S3 actions (e.g., multipart upload) and bucket policies, but for org-wide enforcement, SCP is the canonical exam answer.
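To make the guardrail concrete, here is a minimal sketch of such an SCP being created and attached to the organization root with boto3. The policy name, description, and use of the first root returned are illustrative assumptions, not details from the question.

```python
import json

import boto3

# Deny any S3 PutObject request that does not specify SSE-KMS.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyPutObjectWithoutSSEKMS",
            "Effect": "Deny",
            "Action": "s3:PutObject",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
            },
        }
    ],
}

org = boto3.client("organizations")

# Create the SCP (name and description are hypothetical).
policy = org.create_policy(
    Name="require-sse-kms-on-put-object",
    Description="Block S3 uploads that do not use SSE-KMS",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach at the organization root so it covers every account and Region.
root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId=root_id,
)
```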

Question 2

A data platform team created an AWS CloudFormation template to let teams quickly launch and tear down a nightly ETL proof of concept. The template provisions an AWS Glue job and an Amazon S3 bucket with versioning enabled and a lifecycle rule that retains objects for 90 days. When the stack is deleted within 24 hours of creation, CloudFormation shows DELETE_FAILED for the S3 bucket because it still contains objects and delete markers. The team needs the MOST efficient way to ensure all resources are removed automatically and that stack deletion completes without errors. Which solution will meet these requirements?

Incorrect. DeletionPolicy: Delete only tells CloudFormation what to do with the resource when the stack is deleted, but it does not override S3’s requirement that a bucket must be empty before deletion. With versioning enabled, delete markers and old versions still count as contents. The stack will still fail with DELETE_FAILED if objects/versions remain, so this does not solve the root cause.

Correct. A Lambda-backed custom resource is a common CloudFormation pattern to perform actions CloudFormation cannot do natively, such as emptying a versioned bucket. On Delete, the function can call ListObjectVersions to retrieve versions and delete markers, then issue DeleteObjects with version IDs to remove everything. This ensures the bucket is empty so CloudFormation can delete it and complete stack teardown automatically.

Incorrect. Manually emptying the bucket (including versions) will work, but it is not efficient and does not meet the requirement for automatic removal. It introduces operational toil and risk of inconsistent cleanup, especially for nightly proof-of-concept stacks. Certification questions typically penalize manual steps when an automated IaC-based approach is feasible.

Incorrect. Replacing the solution with CodePipeline is unnecessary and does not address the core issue: S3 bucket deletion requires emptying versioned contents. A pipeline stage for teardown is an anti-pattern for simple stack lifecycle management and adds complexity, cost, and maintenance. CloudFormation already orchestrates resource creation/deletion; the missing piece is automated bucket cleanup, best handled by a custom resource.

Problem Analysis

Core Concept: This question tests CloudFormation stack deletion behavior with Amazon S3 buckets, especially when versioning is enabled. CloudFormation can delete an S3 bucket only if it is empty. With versioning, "empty" means no current objects, no noncurrent versions, and no delete markers.

Why the Answer is Correct: Option B is the most efficient and fully automated approach: a Lambda-backed custom resource performs cleanup during stack deletion. On RequestType=Delete, the function lists and deletes all object versions and delete markers (and any remaining current objects). Once the bucket is truly empty, CloudFormation can successfully delete the bucket resource and complete stack deletion without DELETE_FAILED. This aligns with IaC best practices: stacks should be self-contained and fully reversible (create and delete cleanly) without manual steps.

Key AWS Features:
- CloudFormation custom resources (Lambda-backed) allow imperative actions during stack lifecycle events (Create/Update/Delete).
- S3 versioning introduces delete markers and multiple versions; deletion requires DeleteObjects against version IDs and delete markers.
- CloudFormation resource dependency control (DependsOn) can ensure the custom cleanup runs before the bucket deletion is attempted.
- IAM least privilege: the Lambda role needs s3:ListBucket, s3:ListBucketVersions, and s3:DeleteObject / s3:DeleteObjectVersion (and potentially s3:DeleteObjectTagging if used).

Common Misconceptions: A common trap is assuming CloudFormation’s DeletionPolicy: Delete will "force delete" a non-empty bucket. It does not; S3 still refuses deletion if anything remains. Another misconception is that lifecycle rules will help here; lifecycle expiration is not immediate and won’t run within 24 hours, especially for versions and delete markers.

Exam Tips: When you see "S3 bucket deletion fails" + "versioning enabled," immediately think: you must delete object versions and delete markers. For CloudFormation, the standard exam pattern is a Lambda-backed custom resource (or newer native mechanisms if explicitly mentioned) to empty the bucket during stack deletion. Prefer automated, repeatable IaC solutions over manual cleanup steps.
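A minimal sketch of the Delete-time handler for such a Lambda-backed custom resource, assuming the template passes the bucket name in a BucketName property and the function code is defined inline in the template (which makes the cfnresponse helper available):

```python
import boto3
import cfnresponse  # provided when the Lambda code is defined inline in the template


def handler(event, context):
    """Empty the versioned bucket on stack deletion so S3 allows the delete."""
    try:
        if event["RequestType"] == "Delete":
            bucket_name = event["ResourceProperties"]["BucketName"]
            bucket = boto3.resource("s3").Bucket(bucket_name)
            # Removes all object versions and delete markers in batched
            # DeleteObjects calls, which is what a versioned bucket requires.
            bucket.object_versions.delete()
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as err:
        print(f"Cleanup failed: {err}")
        cfnresponse.send(event, context, cfnresponse.FAILED, {})
```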

Question 3

An online gaming platform operates 12 Amazon EC2 instances across us-east-1 and us-west-2 in an Auto Scaling group. AWS Health publishes scheduled instance retirement and underlying host maintenance notifications that require reboots, and the company mandates that remediation occur only within an existing AWS Systems Manager maintenance window every Sunday from 01:00 to 03:00 UTC. How should a DevOps engineer configure an Amazon EventBridge rule to automatically handle these notifications while ensuring reboots run only inside the maintenance window?

Incorrect. Using AWS Health as the event source is the right starting point, but targeting AWS-RestartEC2Instance directly from EventBridge causes the automation to run as soon as the event is received. That means the reboot would happen immediately instead of waiting for the approved Sunday 01:00–03:00 UTC maintenance window. The option therefore fails the core scheduling requirement even though the source service is appropriate.

Incorrect. Systems Manager maintenance window events are not the correct trigger for detecting scheduled instance retirement or underlying host maintenance notifications. The impacted resources and maintenance requirement originate from AWS Health, not from the maintenance window itself. This option also does not explain how the specific affected EC2 instances would be identified and passed to the restart automation.

Correct. EventBridge should listen for AWS Health notifications related to scheduled instance retirement and underlying host maintenance or scheduled change events affecting the EC2 instances. A Lambda function can then register or update an SSM Maintenance Window task in the existing Sunday window so that the AWS-RestartEC2Instance Automation runbook runs only during the approved time. This design satisfies both requirements: automatic response to AWS Health notifications and strict enforcement of the maintenance window for the reboot action.

Incorrect. EC2 instance state-change notifications are not the authoritative signal for upcoming scheduled retirement or host maintenance actions. Those notifications come from AWS Health and are published before the disruptive event so remediation can be planned. Although Lambda could register a maintenance window task, this option starts from the wrong event source and may miss the required proactive handling.

Problem Analysis

Core Concept: This question tests event-driven remediation using Amazon EventBridge, AWS Health events, and AWS Systems Manager (SSM) Maintenance Windows/Automation. The key requirement is not just detecting AWS Health notifications for scheduled instance retirement and underlying host maintenance, but enforcing that disruptive actions such as reboots occur only during a pre-approved maintenance window.

Why the Answer is Correct: AWS Health publishes account-specific operational events for EC2, including scheduled instance retirement and underlying host maintenance or scheduled change events that can require a reboot. EventBridge can match those AWS Health events and trigger downstream automation. However, directly invoking a reboot runbook from EventBridge would execute immediately, which violates the requirement to act only during the existing Sunday 01:00–03:00 UTC maintenance window. Option C correctly uses EventBridge to detect the Health event and then invokes a Lambda function that registers an SSM Maintenance Window task against the existing window so the AWS-RestartEC2Instance runbook executes only when that window opens.

Key AWS Features:
- AWS Health integration with EventBridge for account-specific EC2 operational notifications.
- SSM Maintenance Windows to constrain execution to approved time ranges.
- SSM Automation runbooks such as AWS-RestartEC2Instance to perform standardized remediation.
- Lambda as orchestration glue to translate the Health event payload into a maintenance window task for the affected instances.
- Regional awareness, because AWS Health events identify affected resources in specific Regions and the SSM action must run in the Region where the instances exist.

Common Misconceptions: A is tempting because it correctly starts from AWS Health, but it invokes the restart automation immediately and therefore ignores the maintenance window requirement. B incorrectly treats Systems Manager maintenance window events as the source of truth for identifying impacted EC2 instances, but the actual trigger must come from AWS Health. D uses EC2 state-change notifications, which are not the authoritative source for upcoming scheduled retirement or host maintenance events and may occur too late.

Exam Tips: When a question says remediation must happen only during a maintenance window, SSM Maintenance Windows are the enforcement mechanism. EventBridge is commonly used to detect AWS Health events, but you often need Lambda or another orchestration layer to defer execution into the approved window. Also, for scheduled retirement and host maintenance, AWS Health is the correct event source rather than EC2 state-change notifications.
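As an illustration, a Lambda handler along these lines could register the restart task into the existing window. The window ID is a placeholder, and the Health event payload shape should be verified against the actual events received:

```python
import boto3

ssm = boto3.client("ssm")

WINDOW_ID = "mw-0123456789abcdef0"  # placeholder ID of the existing Sunday window


def handler(event, context):
    # AWS Health events list impacted resources under detail.affectedEntities.
    instance_ids = [
        entity["entityValue"] for entity in event["detail"]["affectedEntities"]
    ]

    # Register the restart runbook as a task in the existing maintenance
    # window; it executes only when the window opens (Sun 01:00-03:00 UTC).
    ssm.register_task_with_maintenance_window(
        WindowId=WINDOW_ID,
        TaskType="AUTOMATION",
        TaskArn="AWS-RestartEC2Instance",
        Priority=1,
        MaxConcurrency="1",
        MaxErrors="1",
        Targets=[{"Key": "InstanceIds", "Values": instance_ids}],
        TaskInvocationParameters={
            "Automation": {"Parameters": {"InstanceId": instance_ids}}
        },
    )
```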

Question 4
(Choose three)

An energy startup stores 120 GB of sensor diagnostics exported hourly in CSV format in an Amazon S3 bucket under a date/hour partition prefix (for example, s3://env-data/diagnostics/year=2025/month=08/day=16/hour=00-23). A team of SQL analysts needs to run ad hoc queries using standard SQL without managing any servers and must publish interactive dashboards with line and bar charts refreshed every 24 hours. The team also needs an efficient, automated, and persistent way to capture and maintain table metadata (columns, data types, partitions) for the CSV files as new hourly objects arrive. Which combination of steps will meet these requirements with the least amount of effort? (Choose three.)

Incorrect. AWS X-Ray is an application performance monitoring (APM) and distributed tracing service used to analyze request flows and latency across microservices. It does not query CSV data in S3, manage table metadata, or provide BI dashboards. It might be used to trace an ingestion pipeline application, but it does not meet the SQL analytics and visualization requirements described.

Correct. Amazon QuickSight is AWS’s managed BI service for building interactive dashboards with visualizations such as line and bar charts. It integrates with Athena as a data source and supports scheduled refresh (e.g., every 24 hours). This satisfies the requirement to publish interactive dashboards without managing servers or BI infrastructure.

Correct. Amazon Athena provides serverless, standard SQL querying directly on data stored in S3, ideal for ad hoc analysis of CSV files. Analysts can query partitioned prefixes (year/month/day/hour) efficiently when partitions are defined in the catalog. Athena eliminates the need to provision or manage database servers and fits the “least effort” requirement for querying.

Incorrect for “least effort.” Amazon Redshift can query large datasets and supports BI dashboards, but it typically requires more setup: provisioning/operating a cluster (or configuring Redshift Serverless), designing schemas, and often loading/transforming data (COPY/ETL) for best performance. For hourly CSV files already in S3 and ad hoc queries, Athena is simpler and more serverless.

Correct. The AWS Glue Data Catalog is a persistent, managed metadata repository for tables, schemas, and partitions used by Athena and other analytics services. A Glue crawler can automatically infer CSV schema and continuously discover new hourly partitions as objects arrive under date/hour prefixes, minimizing manual DDL and ongoing maintenance.

Incorrect. Amazon DynamoDB is a NoSQL key-value/document database and is not a native metastore for Athena/Presto-style SQL querying. Using DynamoDB to store schema/partition metadata would require building and maintaining custom logic to keep it synchronized and would not integrate seamlessly with Athena and QuickSight the way the Glue Data Catalog does.

Problem Analysis

Core Concept: This question tests a classic serverless analytics pattern on AWS: storing raw files in Amazon S3, querying them with Amazon Athena (serverless SQL), maintaining metadata and partitions in the AWS Glue Data Catalog, and visualizing results in Amazon QuickSight. It also implicitly tests automation of schema/partition discovery for continuously arriving data.

Why the Answer is Correct:
- (C) Amazon Athena lets SQL analysts run ad hoc queries directly against CSV data in S3 without provisioning or managing servers. It supports standard SQL (Presto/Trino-based) and is designed for interactive querying of data lakes.
- (E) The AWS Glue Data Catalog provides a persistent, managed metastore for table definitions (columns, data types) and partitions. A Glue crawler can automatically infer schema from CSV and discover new partitions as hourly prefixes arrive, keeping metadata current with minimal manual effort.
- (B) Amazon QuickSight is the managed BI service for interactive dashboards (line/bar charts) and can refresh datasets on a schedule (e.g., every 24 hours). QuickSight integrates natively with Athena (and the Glue Data Catalog tables Athena uses), enabling a low-ops pipeline from S3 to dashboards.

Key AWS Features: Athena uses the Glue Data Catalog as its default metastore, so defining tables/partitions in Glue makes them immediately queryable. Glue crawlers can be scheduled or triggered to update partitions as new S3 prefixes appear. QuickSight can use Athena as a data source, import to SPICE for faster dashboard performance, and schedule refreshes every 24 hours.

Common Misconceptions: Redshift (D) is powerful but requires cluster management (or, even with Serverless, still involves data modeling/loading decisions) and is more effort than querying in-place with Athena for ad hoc analysis. DynamoDB (F) is not a metastore for SQL engines and would require custom schema/partition management logic. X-Ray (A) is for tracing distributed applications, not analytics/visualization.

Exam Tips: When you see "ad hoc SQL," "no servers," and "data in S3," think Athena. When you see "persistent metadata," "schema/partitions," and "automated discovery," think Glue Data Catalog + crawler. When you see "interactive dashboards" and "scheduled refresh," think QuickSight. This trio (S3 + Glue + Athena + QuickSight) is a common Well-Architected, low-ops analytics reference pattern.
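A minimal sketch of the crawler setup with boto3; the crawler name, IAM role, database name, and schedule are assumptions chosen to match the hourly export pattern:

```python
import boto3

glue = boto3.client("glue")

CRAWLER = "diagnostics-hourly-crawler"  # hypothetical name

glue.create_crawler(
    Name=CRAWLER,
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # placeholder role
    DatabaseName="env_data",
    Targets={"S3Targets": [{"Path": "s3://env-data/diagnostics/"}]},
    # Run shortly after each hourly export lands so new partitions are
    # discovered and registered in the Data Catalog automatically.
    Schedule="cron(15 * * * ? *)",
)

glue.start_crawler(Name=CRAWLER)  # initial run to create the table
```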

Question 5

A media-streaming startup manages a real-time ingestion tier entirely with AWS CloudFormation. The template provisions an AWS::AutoScaling::AutoScalingGroup (min=2, desired=4, max=6) that launches Amazon EC2 instances from a Launch Template across two private subnets in us-east-1, a Network Load Balancer, and other supporting resources, and company policy forbids any changes outside CloudFormation. An engineer used the AWS CLI to deploy an update, but the update failed with the message "ERROR: both the deployment and the CloudFormation stack rollback failed. The deployment failed because the following resource(s) failed to update: [AutoScalingGroup]", leaving the stack in UPDATE_ROLLBACK_FAILED. Which action will resolve the issue?

Incorrect. Network Load Balancer subnet mappings are unrelated to the stack being in UPDATE_ROLLBACK_FAILED due to an AutoScalingGroup update failure. Also, update-stack-set applies to CloudFormation StackSets, not a single stack update/rollback recovery. The correct remediation for UPDATE_ROLLBACK_FAILED is to address the rollback blocker and run continue-update-rollback, not to modify NLB mappings.

Correct. UPDATE_ROLLBACK_FAILED means CloudFormation could not complete rollback. The supported fix is to correct the underlying cause (commonly missing IAM permissions for rollback actions on the ASG and related resources) and then run aws cloudformation continue-update-rollback. This returns the stack to a stable state without violating the policy against out-of-band changes.

Incorrect. EC2 quotas can cause instance launch failures, but the command suggested is wrong for this state and problem framing. cancel-update-stack is not the standard recovery for UPDATE_ROLLBACK_FAILED, and it does not address why rollback failed. Even if quotas were involved, you would still typically fix the constraint and then run continue-update-rollback.

Incorrect. Manually deleting the Auto Scaling group violates the stated policy forbidding changes outside CloudFormation and can worsen drift or leave dependent resources in inconsistent states. Additionally, there is no aws cloudformation rollback-stack command for this purpose; the CloudFormation-native recovery action for UPDATE_ROLLBACK_FAILED is continue-update-rollback after fixing the root cause.

Problem Analysis

Core Concept: This question tests AWS CloudFormation stack failure states and recovery actions, specifically UPDATE_ROLLBACK_FAILED, and the operational control plane required for CloudFormation to update/rollback resources like AWS::AutoScaling::AutoScalingGroup.

Why the Answer is Correct: When a stack update fails, CloudFormation attempts to roll back to the last known good state. If rollback also fails, the stack enters UPDATE_ROLLBACK_FAILED. At that point, the supported remediation is to fix the underlying cause that prevented rollback (commonly missing IAM permissions, resource-level access issues, or external constraints) and then run aws cloudformation continue-update-rollback to let CloudFormation complete the rollback and return the stack to a stable state. The prompt indicates "both the deployment and the CloudFormation stack rollback failed" and that the failing resource is the AutoScalingGroup. A frequent reason rollback fails on an ASG is that the CloudFormation execution role (or the caller’s permissions if no service role is used) lacks permissions to modify or detach related resources (e.g., UpdateAutoScalingGroup, CreateOrUpdateTags, TerminateInstanceInAutoScalingGroup, DetachLoadBalancerTargetGroups). Updating the IAM role to include the required permissions directly addresses the rollback failure, and continue-update-rollback is the correct command for this stack state.

Key AWS Features / Best Practices:
- CloudFormation stack states: UPDATE_FAILED vs UPDATE_ROLLBACK_FAILED.
- Recovery mechanism: ContinueUpdateRollback after correcting the root cause.
- Least privilege with completeness: ensure the CloudFormation service role includes all actions needed for create/update/delete/rollback of every managed resource.
- The policy constraint "no changes outside CloudFormation" aligns with using CloudFormation-native recovery rather than manual deletion.

Common Misconceptions:
- Thinking you should "cancel" the update: cancel-update-stack is not the standard fix for UPDATE_ROLLBACK_FAILED and won’t resolve a rollback that already failed.
- Assuming it’s an EC2 quota issue: quotas can cause update failures, but the question emphasizes rollback failure, and the correct remediation flow is still to fix the blocker and continue rollback.
- Manually deleting resources: this violates policy and often makes drift worse; CloudFormation provides a controlled recovery path.

Exam Tips: Memorize the mapping: UPDATE_ROLLBACK_FAILED → fix the underlying issue (often IAM/permissions or resource constraints) → ContinueUpdateRollback. If a question mentions rollback failure, look for continue-update-rollback rather than update-stack/update-stack-set/cancel-update-stack or manual intervention.
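After the IAM permissions are corrected, resuming the rollback is a single call; the stack name below is a placeholder. ResourcesToSkip exists for resources that genuinely cannot be rolled back, but it should be used with care:

```python
import boto3

cfn = boto3.client("cloudformation")

# Resume the failed rollback once the underlying blocker is fixed.
cfn.continue_update_rollback(StackName="ingestion-tier")

# If a specific resource is known to be unrecoverable, it can be skipped,
# at the cost of CloudFormation no longer reconciling its state:
# cfn.continue_update_rollback(
#     StackName="ingestion-tier",
#     ResourcesToSkip=["AutoScalingGroup"],
# )
```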


Question 6

A media analytics platform runs its production database on an Amazon Aurora PostgreSQL-Compatible Multi-AZ DB cluster in eu-central-1 with a cross-Region read replica in eu-west-1 for disaster recovery. The platform requires an RPO of ≤1 minute and an RTO of ≤10 minutes, and it must automatically promote the cross-Region replica to primary if the writer cluster endpoint is unreachable for more than 2 minutes, while ensuring the containerized application on Amazon ECS Fargate continues with minimal manual intervention. The application fetches its database endpoint from AWS Systems Manager Parameter Store at startup and caches it for 60 seconds. Which solution will accomplish this?

Incorrect. Route 53 latency-based routing with health checks can shift DNS, but it does not promote a cross-Region read replica to writer. Health checks against Aurora cluster endpoints can be misleading (the endpoint may resolve while the writer is impaired), and DNS caching/TTL can violate the tight 2-minute detection and fast cutover requirement. CloudTrail is not the right source for RDS failure notifications; it logs API activity, not service failover events.

Incorrect. Aurora custom endpoints are for routing within a single Aurora cluster (e.g., a subset of instances) and do not span Regions or manage cross-Region replica promotion. You also cannot "update" a custom endpoint in eu-central-1 to point to a cluster/instance in eu-west-1. Using CloudTrail as a trigger for database unavailability is incorrect because CloudTrail does not emit service health/failover events.

Incorrect. Modifying and reapplying a CloudFormation stack during an outage is slow and operationally risky, and it does not directly address the need to promote the cross-Region replica based on writer endpoint unreachability. CloudFormation is not an eventing system for RDS failover, and stack updates can exceed the required RTO. This also adds unnecessary coupling between infrastructure deployment and incident response automation.

Correct. This matches the application design (endpoint sourced from Parameter Store and cached for 60 seconds) and provides an event-driven, automated DR workflow. EventBridge can detect Aurora/RDS failure-related events and invoke Lambda to promote the cross-Region read replica and then update Parameter Store with the new writer endpoint. With app retry logic and periodic refresh, ECS Fargate tasks can reconnect quickly with minimal manual intervention, supporting the RPO/RTO goals.

Problem Analysis

Core Concept: This question tests cross-Region Aurora disaster recovery automation, event-driven failover orchestration, and application endpoint management. Key services are Amazon Aurora (cross-Region read replica promotion), Amazon EventBridge (RDS/Aurora events), AWS Lambda (automation), and AWS Systems Manager Parameter Store (dynamic endpoint distribution).

Why the Answer is Correct: Option D aligns with the stated behavior: the application reads the DB endpoint from Parameter Store at startup and caches it for 60 seconds. By promoting the eu-west-1 replica and updating the Parameter Store value automatically, the application can reconnect with minimal manual intervention once it refreshes the cached value or on connection failure. EventBridge can react quickly to RDS/Aurora events indicating unavailability/failover conditions, and Lambda can call the appropriate RDS/Aurora APIs to promote the cross-Region read replica. The 2-minute "writer endpoint unreachable" requirement is met by combining event detection (or supplemental health checks/alarms) with automation; the 60-second cache plus retry logic supports the ≤10-minute RTO.

Key AWS Features:
- Aurora cross-Region read replica promotion: promotes the replica cluster to be writable in the DR Region.
- Amazon EventBridge integration with RDS events: near-real-time routing of failover-related events to automation.
- Lambda automation: executes promotion and then updates Parameter Store with the new writer endpoint.
- Parameter Store as a configuration source: central, auditable endpoint value; supports rapid cutover without redeploying tasks.
- Application resiliency best practice: implement connection retries with exponential backoff and refresh configuration on failure/TTL.

Common Misconceptions: A common trap is using Route 53 to "fail over" between Aurora cluster endpoints. DNS failover does not promote a read replica to writer and can be slowed by TTL/caching; also, Aurora cluster endpoints are not ideal targets for generic health checks that reflect writer availability. Another misconception is that CloudTrail is the right trigger for RDS failures; CloudTrail records API calls, not service health/failover events.

Exam Tips: For DR with Aurora cross-Region replicas, remember: (1) promotion must be explicitly automated (unless using Aurora Global Database managed failover, which is not described here), (2) event-driven automation typically uses EventBridge RDS events, and (3) endpoint management should match how the app consumes configuration (Parameter Store + refresh/retry). Always map RTO/RPO to replication type and automation latency, and avoid DNS-only answers when a stateful promotion is required.
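A sketch of the promotion-and-publish Lambda, assuming the replica cluster identifier and Parameter Store name shown here. Promotion is asynchronous, so production code would wait for the cluster to become writable before publishing:

```python
import boto3

REPLICA_CLUSTER = "analytics-dr-cluster"    # placeholder identifier
ENDPOINT_PARAM = "/app/db/writer-endpoint"  # placeholder parameter name

rds = boto3.client("rds", region_name="eu-west-1")
ssm = boto3.client("ssm")  # Region where the application reads the parameter


def handler(event, context):
    # Promote the cross-Region read replica cluster to a standalone writer.
    rds.promote_read_replica_db_cluster(DBClusterIdentifier=REPLICA_CLUSTER)

    # Production code should poll until the cluster reports it is writable.
    cluster = rds.describe_db_clusters(
        DBClusterIdentifier=REPLICA_CLUSTER
    )["DBClusters"][0]

    # Publish the new writer endpoint; the app picks it up within its
    # 60-second cache window (or on connection-failure refresh).
    ssm.put_parameter(
        Name=ENDPOINT_PARAM,
        Value=cluster["Endpoint"],
        Type="String",
        Overwrite=True,
    )
```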

Question 7

A media analytics startup operates 180 EBS-backed Amazon EC2 instances (c5 and m6i) across two VPCs in us-east-1. To reduce manual intervention, the DevOps team must ensure that whenever AWS schedules an EC2 instance retirement event, only the affected instance is automatically restarted (stop then start) within 5 minutes, using managed services, with Systems Manager already enabled on all instances. Which approach should the engineer implement?

Incorrect. A weekly scheduled EventBridge rule is not responsive enough to meet the 5-minute requirement. It could detect a retirement event days late, and "check for events" is also not the normal pattern—AWS already emits the event. Additionally, hibernation is not the requested action (stop then start), requires hibernation prerequisites, and may not be supported/appropriate for all instance configurations.

Incorrect. EC2 Auto Recovery is designed for recovering from underlying hardware/host impairment (triggered by a CloudWatch alarm on StatusCheckFailed_System), not for scheduled instance retirement events. It also does not perform a stop/start cycle; it recovers the instance on new hardware while keeping the instance ID. Adding AWS Config to restrict recovery to a time window is not a standard or reliable control for this scenario.

Incorrect. Rebooting all instances weekly is a blunt, non-targeted approach that violates the requirement to restart only the affected instance. It also does not guarantee remediation of a scheduled retirement event, which typically requires stop/start (or replacement) rather than reboot. CloudWatch alarms for notification add monitoring but do not provide the required automated, event-driven remediation within 5 minutes.

This is the correct event-driven, managed approach. AWS Health publishes EC2 retirement scheduled events for impacted instances. An EventBridge rule can match those events and trigger an SSM Automation runbook that stops and then starts only the instance referenced in the event payload. This meets the 5-minute automation requirement, avoids impacting unaffected instances, and leverages Systems Manager already enabled on the fleet.

Problem Analysis

Core Concept: This question tests event-driven remediation for planned AWS infrastructure lifecycle events (EC2 instance retirement) using managed services. The key services are AWS Health (for retirement notifications), Amazon EventBridge (for event matching and routing), and AWS Systems Manager Automation (for controlled stop/start actions on the specific instance).

Why the Answer is Correct: EC2 scheduled retirement is a planned event where AWS indicates an instance will be retired and must be stopped/started (or replaced) to move to healthy hardware. The requirement is to automatically restart only the affected instance within 5 minutes, with minimal manual intervention and using managed services, and Systems Manager is already enabled. An AWS Health event is emitted for the impacted instance(s). An EventBridge rule can match the specific retirement event type and extract the instance ID from the event payload. The rule can then invoke an SSM Automation runbook that performs a stop followed by a start on that instance ID, meeting the "only affected instance" and "within 5 minutes" requirements.

Key AWS Features:
- AWS Health + EventBridge integration: near-real-time event delivery for service events impacting your account.
- EventBridge rule pattern matching on source/detail-type (e.g., AWS Health) and specific EC2 retirement details.
- Systems Manager Automation runbooks (AWS-managed or custom) with parameters (InstanceId) and steps to call EC2 StopInstances/StartInstances.
- IAM least privilege: EventBridge needs permission to start the Automation execution; Automation needs ec2:StopInstances/ec2:StartInstances for the target instance.

Common Misconceptions: Many confuse "retirement" with "impaired host" recovery. EC2 Auto Recovery is for underlying host failure and uses CloudWatch alarms; it does not address scheduled retirement events and does not perform a stop/start. Also, scheduled weekly checks are not event-driven and can miss the 5-minute SLA.

Exam Tips: When you see "scheduled event" (retirement, maintenance) and "within minutes," think EventBridge-driven automation. When you see "managed remediation" and "SSM enabled," think SSM Automation runbooks triggered by EventBridge. Distinguish Auto Recovery (unplanned host impairment) from retirement notifications (planned lifecycle events).
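A sketch of the rule's event pattern and a handler that starts the runbook for only the impacted instance. The event type code shown is the documented value for scheduled retirement, but the payload details should be verified against real Health events:

```python
import json

import boto3

ssm = boto3.client("ssm")

# EventBridge rule pattern (for reference; normally defined in IaC):
EVENT_PATTERN = json.dumps({
    "source": ["aws.health"],
    "detail-type": ["AWS Health Event"],
    "detail": {
        "service": ["EC2"],
        "eventTypeCode": ["AWS_EC2_INSTANCE_RETIREMENT_SCHEDULED"],
    },
})


def handler(event, context):
    # Restart only the instances named in the Health event.
    for entity in event["detail"]["affectedEntities"]:
        ssm.start_automation_execution(
            DocumentName="AWS-RestartEC2Instance",  # stop, then start
            Parameters={"InstanceId": [entity["entityValue"]]},
        )
```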

Question 8

A platform team runs a webhook relay broker for external fintech partners on a single Amazon EC2 instance in us-east-1. Amazon Route 53 currently maps broker.payco.example to the instance by using a simple A record (TTL 20 seconds). The broker stores session state and message offsets on an Amazon EFS file system mounted at /data and cannot operate across multiple active nodes. The team has provisioned a warm standby EC2 instance in a different Availability Zone (us-east-1b) that mounts the same EFS file system and is kept patched and idle. The team wants automatic failover so that if the primary broker's /healthz endpoint stops returning HTTP 200 for 3 consecutive checks at 5-second intervals, client traffic is redirected to the standby without manual intervention. Which solution will meet these requirements?

Correct. Weighted routing with primary weight 100 and standby weight 0 keeps the standby truly passive. With a Route 53 health check probing /healthz at the fast request interval and a failure threshold of 3, Route 53 marks the primary unhealthy after 3 consecutive failed checks and stops returning it in DNS answers. The standby record remains and will be returned, achieving automatic DNS failover, with the existing low TTL helping speed client cutover.

Incorrect. A 99/1 weighted split intentionally routes a portion of client requests to the standby even when the primary is healthy. The scenario explicitly states the broker cannot operate across multiple active nodes, so any steady-state traffic to the standby risks concurrent active processing, session/state conflicts, and offset inconsistencies. Weighted routing is not the same as active/passive unless the standby effectively receives 0 traffic.

Incorrect. An ALB can perform health checks, but using listener target group weights 100 and 0 is not a dependable way to implement strict failover, and it introduces an always-on load balancer layer that may still attempt routing behaviors not aligned with “never send traffic to standby unless primary is unhealthy.” For single-active workloads, Route 53 health-check-based DNS failover is the simpler and more appropriate mechanism.

Incorrect. Like option B, using 99/1 weights sends some traffic to the standby during normal operation, violating the single-active constraint. Additionally, ALB target group weighting is designed for traffic distribution (canary/blue-green), not strict warm-standby failover for a stateful singleton. This could cause concurrent processing and state/offset issues even if both instances mount the same EFS.

Problem Analysis

Core Concept: This question tests DNS-based failover using Amazon Route 53 health checks and routing policies for a single-active workload. Because the broker cannot run active-active and relies on shared state on EFS, the design must keep only one node receiving traffic while still providing automatic redirection on failure.

Why the Answer is Correct: Option A uses Route 53 weighted records with health checks to implement an active/passive pattern. The primary record has weight 100 and the standby has weight 0, so under normal conditions only the primary is returned in DNS responses. When Route 53 marks the primary unhealthy (based on the /healthz endpoint failing 3 consecutive checks), Route 53 stops returning the primary record. With the primary removed, the only remaining record is the standby, so DNS answers shift to the standby automatically without manual intervention.

Key AWS Features:
- Route 53 health checks: can probe an HTTP endpoint (/healthz) with a failure threshold of 3, matching the requirement. Note that Route 53's fast health checks use a 10-second request interval; a 5-second interval is not an available setting, so the fast interval with threshold 3 is the closest supported configuration.
- Weighted routing behavior with health checks: Route 53 only returns records that are considered healthy; unhealthy records are excluded from responses.
- Low TTL (20 seconds): reduces client-side caching time and speeds practical cutover, though some clients may ignore TTL.

Common Misconceptions:
- Using 99/1 weights (Option B) is not true standby: it intentionally sends some production traffic to the standby even when the primary is healthy, violating the "cannot operate across multiple active nodes" constraint.
- Using an ALB with weights (Options C/D) seems attractive for health-based routing, but ALB target group weights are for active distribution, not strict failover. A weight of 0 is not a reliable way to express "never send traffic," and an ALB routes at the load balancer layer rather than providing DNS-level failover semantics.

Exam Tips: When you see "single active node," "warm standby," and "Route 53 record with health-based cutover," think Route 53 failover or weighted routing with health checks. Avoid any option that deliberately sends traffic to both nodes (even 1%) when the application cannot be active-active. Also confirm the health check interval and failure threshold match the requirement as closely as the service allows.
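A sketch of the health check and the two weighted records. The hosted zone ID and instance IPs are placeholders, and the 10-second fast interval is used because Route 53 does not offer a 5-second one:

```python
import boto3

r53 = boto3.client("route53")

health_check = r53.create_health_check(
    CallerReference="broker-healthz-v1",  # any unique string
    HealthCheckConfig={
        "Type": "HTTP",
        "IPAddress": "10.0.1.10",  # primary instance (placeholder)
        "Port": 80,
        "ResourcePath": "/healthz",
        "RequestInterval": 10,     # fastest supported interval
        "FailureThreshold": 3,
    },
)


def record(set_id, weight, ip, health_check_id=None):
    """Build one weighted A record for the change batch."""
    rrs = {
        "Name": "broker.payco.example",
        "Type": "A",
        "SetIdentifier": set_id,
        "Weight": weight,
        "TTL": 20,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:
        rrs["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": rrs}


r53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # placeholder
    ChangeBatch={"Changes": [
        # Primary takes all traffic while its health check passes.
        record("primary", 100, "10.0.1.10", health_check["HealthCheck"]["Id"]),
        # Weight 0: returned only when no healthy nonzero-weight record exists.
        record("standby", 0, "10.0.2.10"),
    ]},
)
```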

Question 9

A company operates 12 AWS accounts in an AWS Organizations organization. The SecOps team must receive an Amazon Simple Notification Service (Amazon SNS) notification within 5 minutes if any member account in us-east-1 or eu-west-1 changes an Amazon RDS DB instance to be publicly accessible (PubliclyAccessible=true). A DevOps engineer must implement this centrally in a delegated administrator account in us-east-1 without disrupting existing workloads, while ensuring that individual member accounts cannot disable or bypass the notification. Which solution will meet these requirements?

Incorrect. Amazon GuardDuty is a threat-detection service that analyzes logs and telemetry for suspicious activity, not a configuration-governance service for deterministic detection of every RDS public accessibility change. It does not provide a direct control that says an SNS notification must be sent whenever PubliclyAccessible becomes true on an RDS DB instance. GuardDuty findings also depend on its own detection logic and finding generation, which is not the most precise match for this compliance-style requirement. This option therefore does not cleanly satisfy the requirement for centrally governed notification on a specific RDS configuration state change.

Incorrect. CloudTrail and EventBridge can detect the ModifyDBInstance API call quickly, but this design deploys the EventBridge rule and SNS topic into each member account, which means the notification path exists inside the accounts that are supposed to be prevented from bypassing it. Making the anti-bypass requirement credible would require SCPs, which are not part of the proposed solution. In addition, this approach is event-based on a specific API call rather than a centrally governed compliance evaluation of the resulting resource state. Because the question emphasizes delegated administrator and preventing member-account disablement or bypass, this decentralized StackSets design is not the best answer.

Correct. AWS Config supports delegated administrator for AWS Organizations and can deploy organization conformance packs across member accounts in specific Regions such as us-east-1 and eu-west-1. The managed rule rds-instance-public-access-check directly evaluates whether an RDS DB instance is publicly accessible, which aligns exactly with the requirement to detect PubliclyAccessible=true. Because the conformance pack is managed centrally, the monitoring posture is governed from the delegated administrator account rather than relying on each member account to maintain its own local rule. The Systems Manager Automation step provides a mechanism to publish to a central SNS topic from the delegated administrator account, satisfying the requirement for centralized notification to SecOps.

Incorrect. Amazon Inspector is focused on vulnerability management and exposure assessment, not on guaranteed near-real-time notification for a specific RDS configuration attribute change. Inspector findings are not the canonical mechanism for detecting that PubliclyAccessible was set to true on an RDS DB instance. The service does not map as directly to this governance requirement as AWS Config managed rules do. As a result, this option is less precise and less reliable for the stated compliance and notification objective.

Problem Analysis

Core Concept: This question is about organization-wide governance and centralized detection of an RDS configuration state change across multiple AWS accounts and Regions, with strong control so member accounts cannot disable or bypass the notification.

Why the Answer is Correct: AWS Config with an organization delegated administrator and an organization conformance pack provides centralized, cross-account compliance monitoring for the specific condition that an RDS DB instance is publicly accessible.

Key AWS Features: The AWS Config managed rule rds-instance-public-access-check evaluates the actual resource state, organization-level deployment covers all member accounts in the specified Regions, and a centralized automation path can notify the SecOps team from the delegated administrator account.

Common Misconceptions: CloudTrail plus EventBridge is fast for API events, but a per-account deployment does not inherently satisfy the anti-bypass requirement and only detects ModifyDBInstance calls rather than centrally governed compliance state.

Exam Tips: When a question emphasizes delegated administrator, organization-wide enforcement, and preventing member-account bypass, favor AWS Config organization features or other centrally governed services over decentralized per-account event rules.
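A minimal sketch of deploying the pack from the delegated administrator account; the pack name is hypothetical, and the call is repeated per Region because AWS Config is regional:

```python
import boto3

# Conformance pack template wrapping the managed rule.
TEMPLATE = """
Resources:
  RdsInstancePublicAccessCheck:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: rds-instance-public-access-check
      Source:
        Owner: AWS
        SourceIdentifier: RDS_INSTANCE_PUBLIC_ACCESS_CHECK
"""

for region in ("us-east-1", "eu-west-1"):
    config = boto3.client("config", region_name=region)
    config.put_organization_conformance_pack(
        OrganizationConformancePackName="rds-public-access-guard",
        TemplateBody=TEMPLATE,
    )
```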

Question 10

A media analytics startup is standardizing on AWS CodeDeploy to automate rollouts of a Node.js service running under Nginx on Amazon EC2. The team began with a proof of concept, created a deployment group for a dev environment, completed functional tests, and plans to add staging and production groups next. The current request logging level is defined in the Nginx configuration, but the team wants this log verbosity to change dynamically at deployment time so that dev uses DEBUG, staging uses INFO, and production uses WARN, without maintaining separate application revisions or different script versions per group, with the least management overhead, relying on CodeDeploy lifecycle hooks and configuration. Which approach best meets these requirements?

Incorrect. This adds unnecessary complexity and overhead. Tagging instances and then calling IMDS plus the EC2 API requires additional IAM permissions, network/API availability, and extra logic to map tags to environments. It also introduces failure modes unrelated to deployment (throttling, missing tags). CodeDeploy already provides deployment group context directly to scripts, so this is not the least-management approach.

Correct. CodeDeploy exposes DEPLOYMENT_GROUP_NAME to lifecycle event scripts, enabling one script to set Nginx log verbosity based on the deployment group (dev/staging/prod) without separate revisions. Running it in BeforeInstall is appropriate because you can update config before the application/service is started or reloaded later in the deployment, ensuring the correct log level is applied.

Incorrect. CodeDeploy does not support arbitrary "custom environment variables per deployment group" in the way implied here for hook scripts. Typically, you use built-in variables or external configuration sources (Parameter Store/Secrets Manager) if needed. Also, ValidateService is generally for post-deploy validation; changing Nginx config there is late and can cause mismatches unless you also reload/restart, complicating the flow.

Incorrect. DEPLOYMENT_GROUP_ID is not the right selector for human-friendly environment mapping and is not the commonly used variable for this purpose; scripts typically rely on DEPLOYMENT_GROUP_NAME. Even if an ID were available, you would still need to maintain an ID-to-environment mapping, increasing operational overhead. Using the Install hook is also less typical for config changes than BeforeInstall/AfterInstall.

Problem Analysis

Core Concept: This question tests AWS CodeDeploy in-place deployments on Amazon EC2, specifically how to use CodeDeploy lifecycle event hooks and built-in environment variables to apply environment-specific configuration at deploy time without creating separate application revisions.

Why the Answer is Correct: Option B is best because CodeDeploy automatically exposes DEPLOYMENT_GROUP_NAME to lifecycle hook scripts running on the instance. A single script can map deployment group names (dev/staging/prod) to the desired Nginx log level (DEBUG/INFO/WARN), update the Nginx configuration (or a templated include file), and reload Nginx. This meets the requirement to avoid maintaining separate revisions or different scripts per group and minimizes management overhead by relying on CodeDeploy’s native context.

Key AWS Features / Best Practices:
- CodeDeploy lifecycle hooks (e.g., BeforeInstall, AfterInstall, ApplicationStart, ValidateService) allow you to run scripts at controlled points in the deployment.
- CodeDeploy environment variables (including DEPLOYMENT_GROUP_NAME) provide deployment context without needing external lookups.
- For Nginx, best practice is to change config and then reload gracefully (e.g., nginx -s reload or systemctl reload nginx) so changes take effect without dropping connections.
- Using the deployment group name as the selector keeps configuration centralized and avoids coupling to EC2 tags, instance metadata, or additional IAM permissions.

Common Misconceptions:
- It can seem necessary to query EC2 tags or the CodeDeploy API to determine "where you are," but CodeDeploy already provides the deployment group name to scripts.
- Choosing later hooks like ValidateService may appear safer, but configuration should typically be applied before the service is started/restarted so the running process uses the intended settings.

Exam Tips:
- Remember that CodeDeploy provides useful environment variables to hook scripts; prefer these over custom discovery mechanisms.
- Minimize moving parts: avoid adding EC2 API calls, instance metadata parsing, or extra IAM permissions when CodeDeploy context already solves the problem.
- Pick lifecycle hooks that align with when configuration must be present (often BeforeInstall/AfterInstall before service start/reload).
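A sketch of a single BeforeInstall hook script along these lines; the include-file path and the group-name-to-level mapping are assumptions:

```python
#!/usr/bin/env python3
"""BeforeInstall hook: derive the Nginx log level from the deployment group."""
import os

# Map deployment group names to Nginx error_log levels (assumed group names).
LEVELS = {"dev": "debug", "staging": "info", "production": "warn"}

group = os.environ.get("DEPLOYMENT_GROUP_NAME", "")
level = next((lvl for name, lvl in LEVELS.items() if name in group), "warn")

# Write a small include file that the main nginx.conf references
# (the path is an assumption); Nginx is reloaded later in the deployment.
with open("/etc/nginx/conf.d/log_level.conf", "w") as f:
    f.write(f"error_log /var/log/nginx/error.log {level};\n")
```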

Success Stories (6)

주** · Nov 25, 2025

Study period: 2 months

I went through the app's questions three times and was still anxious about what would happen if I failed, but many very similar questions appeared on the exam, so I could solve them easily. Thank you!

********** · Nov 20, 2025

Study period: 1 month

After going through a Udemy course, I wanted to do some practice exams before taking the real one. Cloud Pass is a good resource for the exam. I didn't complete every question, only about 70% of them, but I passed! Thanks, Cloud Pass.

u********* · Nov 18, 2025

Study period: 1 month

A lot of the questions in this app did indeed appear on the exam; very helpful.

S******* · Oct 31, 2025

Study period: 1 month

Passed the DOP-C02 exam in Oct 2025. These practice questions were essential for my preparation. The services covered in the practice tests match the exam content very well.

D**** · Oct 27, 2025

Study period: 1 month

Passed the DOP exam with the help of Cloud Pass questions. The real exam is full of tricky questions, and these sets helped me prepare for it.

Other Practice Tests

Practice Test #2

75 questions · 180 minutes · passing score 750/1000
