AWS
299+ free practice questions with AI-verified answers
Powered by AI
Every AWS Certified Security - Specialty (SCS-C02) answer is cross-checked by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and an in-depth analysis of every question.
While reviewing an AWS CloudFormation template for a payments microservice, a security engineer finds that a parameter named PaymentApiToken exposes a production API token in plaintext as its default value; the token is referenced 12 times throughout the template (for Lambda environment variables, API Gateway headers, and an ECS task definition), and the engineer must remove plaintext from the template, preserve the ability to reference the value in all 12 locations during stack operations, ensure the secret is encrypted at rest and never appears in stack events or logs, and also support automatic rotation every 60 days; which solution will meet these requirements in the MOST secure way?
Systems Manager Parameter Store SecureString does provide KMS-backed encryption at rest and can be referenced from CloudFormation by using the ssm-secure dynamic reference syntax. However, Parameter Store is not the primary AWS service for managed secret rotation, and automatic rotation every 60 days would require custom automation rather than a native built-in rotation workflow. Because the question explicitly requires automatic rotation and asks for the MOST secure solution, Secrets Manager is the stronger fit. Parameter Store is acceptable for encrypted configuration values, but it is less complete for full secret lifecycle management.
AWS Secrets Manager is the AWS service designed specifically for storing and managing secrets such as API tokens. It encrypts the secret at rest with AWS KMS and supports native automatic rotation, which directly satisfies the requirement to rotate the token every 60 days. CloudFormation dynamic references to Secrets Manager let the template reuse the same secret value in many properties without hardcoding it as plaintext in the template. This is the most secure and operationally appropriate option among the choices because it combines managed secret storage, access control, auditing, and rotation in one service.
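As a sketch of this pattern, a Secrets Manager dynamic reference could replace the plaintext default in each of the 12 locations. The secret name (payments/prod/api-token), JSON key (token), and logical resource IDs below are hypothetical placeholders:

```yaml
# Sketch only: secret name, JSON key, and logical IDs are hypothetical.
Resources:
  PaymentsFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Runtime: python3.12
      Role: !GetAtt PaymentsFunctionRole.Arn
      Code:
        ZipFile: "# application code"
      Environment:
        Variables:
          # Resolved by CloudFormation at deploy time; the plaintext value
          # never appears in the template body or in version control.
          PAYMENT_API_TOKEN: '{{resolve:secretsmanager:payments/prod/api-token:SecretString:token}}'
```

The same `{{resolve:secretsmanager:...}}` string can be repeated wherever the template previously referenced the parameter, so all 12 locations resolve to the current secret version at stack-operation time.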
Amazon DynamoDB is not a supported CloudFormation dynamic reference source for retrieving secrets in template properties. Although DynamoDB can encrypt data at rest, it is a general-purpose NoSQL database rather than a managed secret store, so it does not provide native secret rotation, secret version staging, or secret-specific access patterns. Using DynamoDB for a production API token would add unnecessary application and operational complexity. It also does not align with AWS best practices for storing and rotating secrets used by infrastructure and applications.
Amazon S3 is not a supported CloudFormation dynamic reference source for secret resolution in templates. Even if the object were encrypted with SSE-S3, S3 is object storage rather than a secret management service and does not provide native secret rotation or version-stage semantics for credentials and tokens. SSE-S3 also uses S3-managed keys and does not offer the same secret-focused controls and workflows as Secrets Manager. This makes S3 a weaker and less secure design choice for managing a production API token.
Core Concept: This question tests secure secret management in infrastructure-as-code (CloudFormation) using dynamic references, encryption at rest, and automated rotation. The key services are AWS Secrets Manager (purpose-built for secrets and rotation) and CloudFormation dynamic references (to avoid plaintext in templates and stack outputs/events).

Why the Answer is Correct: Option B is the most secure because AWS Secrets Manager is designed to store sensitive values encrypted with AWS KMS, retrieve them at deploy/runtime without embedding plaintext in the template, and support native automatic rotation on a schedule (every 60 days) using a rotation Lambda. CloudFormation dynamic references to Secrets Manager ({{resolve:secretsmanager:...}}) allow the template to reference the secret in all 12 locations while keeping the secret value out of the template body. When used correctly, the secret value is not displayed in CloudFormation stack events/console output because CloudFormation resolves the value at deployment time and does not persist the plaintext in the template.

Key AWS Features:
- Secrets Manager encryption at rest via KMS (AWS-managed or customer-managed keys).
- Automatic rotation with a configurable interval (60 days) and rotation Lambda integration.
- CloudFormation dynamic references to Secrets Manager for parameters and resource properties (e.g., Lambda environment variables, ECS task definition environment variables, API Gateway integration/request parameters where supported).
- Fine-grained IAM policies to restrict who/what can read the secret (least privilege), plus CloudTrail auditing for secret access.

Common Misconceptions: SSM Parameter Store SecureString (Option A) is encrypted and supports dynamic references, but it does not provide the same first-class, built-in rotation workflow as Secrets Manager (rotation typically requires custom automation). Also, some teams incorrectly assume “SecureString” automatically implies rotation and full secret lifecycle management.

Exam Tips: When requirements include “automatic rotation” and “most secure,” default to AWS Secrets Manager unless the question explicitly constrains cost or forbids it. For CloudFormation, look for “dynamic references” to avoid plaintext in templates and to reduce accidental exposure in code repositories. Always pair secrets storage with KMS encryption and least-privilege IAM.
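The 60-day rotation requirement maps to the Secrets Manager RotateSecret API. A minimal sketch, assuming a hypothetical secret name and rotation Lambda ARN (the real call needs AWS credentials, so it appears only as a comment):

```python
# Sketch: builds the request for Secrets Manager's RotateSecret API.
# "payments/prod/api-token" and the Lambda ARN are hypothetical placeholders.
def rotation_request(secret_id: str, rotation_lambda_arn: str, days: int = 60) -> dict:
    """Return kwargs for secretsmanager_client.rotate_secret(**kwargs)."""
    return {
        "SecretId": secret_id,
        "RotationLambdaARN": rotation_lambda_arn,
        "RotationRules": {"AutomaticallyAfterDays": days},
    }

req = rotation_request(
    "payments/prod/api-token",
    "arn:aws:lambda:us-east-1:111122223333:function:rotate-payment-token",
)
# In a real account: boto3.client("secretsmanager").rotate_secret(**req)
```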
A global edtech company operates 15 AWS accounts in AWS Organizations and has enabled AWS Identity and Access Management (IAM) Access Analyzer at the organization level to detect public or cross-account access. The security team requires an automated workflow that, for any newly created IAM or resource policy that triggers an ACTIVE Access Analyzer finding, remediates external access by updating IAM role trust policies to add an explicit Deny for external principals and sends an email notification to security-ops@example.com within 5 minutes. Which combination of steps should a security engineer implement to meet these requirements? (Choose three.)
Correct as the orchestration component in the intended architecture. AWS Step Functions is well suited to inspect the Access Analyzer finding, branch based on resource type, and invoke the necessary remediation actions through AWS SDK integrations or Lambda. It also provides retries, error handling, and auditability, which are valuable for automated security response. Although the option's wording says to add an explicit Deny to a trust policy, the valid remediation pattern is to update the trust policy to remove or restrict the external access and then publish a notification to SNS.
Incorrect. AWS Batch is designed for queued, compute-intensive, longer-running jobs and introduces unnecessary infrastructure and latency for near-real-time security remediation. Forwarding findings through Batch to Lambda is an anti-pattern compared to direct EventBridge-to-Step Functions/Lambda integration. It also complicates cross-account operations and makes meeting the 5-minute notification SLA less deterministic.
Correct. Amazon EventBridge is the appropriate service to receive IAM Access Analyzer finding events and filter for findings whose status is ACTIVE. It can invoke a Step Functions state machine directly, enabling near-real-time remediation without polling or custom event ingestion. This is the standard AWS event-driven automation pattern for security findings across an organization.
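A minimal sketch of such a rule's event pattern, plus a tiny local stand-in for EventBridge matching so the filter logic can be checked. The source and detail-type strings follow the documented Access Analyzer integration; the matcher itself is a simplification:

```python
# Sketch: EventBridge event pattern for ACTIVE Access Analyzer findings.
# The source/detail-type values follow the documented integration;
# matches() is a simplified local approximation of EventBridge matching.
ACTIVE_FINDING_PATTERN = {
    "source": ["aws.access-analyzer"],
    "detail-type": ["Access Analyzer Finding"],
    "detail": {"status": ["ACTIVE"]},
}

def matches(pattern: dict, event: dict) -> bool:
    """Check flat string fields and nested dicts against lists of literals."""
    for key, allowed in pattern.items():
        if isinstance(allowed, dict):
            if not matches(allowed, event.get(key, {})):
                return False
        elif event.get(key) not in allowed:
            return False
    return True
```

In the real rule, the target would be the remediation state machine; SNS publishing happens inside the workflow after remediation.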
Incorrect. CloudWatch metric filters operate on CloudWatch Logs log events, not on IAM Access Analyzer findings as first-class events. Even if findings were logged somewhere, using metric filters to trigger AWS Batch is indirect and brittle. EventBridge is the intended integration point for service events and provides better filtering, routing, and reliability for automated response.
Incorrect. SQS is a message queue and does not “forward a notification” to email recipients by itself. You would need a consumer (Lambda/EC2) to poll the queue and then send email via SNS/SES, adding complexity and potential delay. SQS can be useful for buffering, but it is not required here and does not directly meet the email requirement.
Correct. Amazon SNS is the native AWS service for sending email notifications to operational teams. By creating an SNS topic and subscribing security-ops@example.com, the workflow can notify the security team immediately after remediation. SNS integrates cleanly with Step Functions and supports the required sub-5-minute notification objective.
Core Concept: This question is about building an event-driven, organization-wide automated response for IAM Access Analyzer findings. The required pattern is to detect ACTIVE findings in near real time, invoke an orchestration workflow to remediate the affected policy, and send an email notification. The best-fit AWS services are Amazon EventBridge for routing Access Analyzer events, AWS Step Functions for workflow orchestration, and Amazon SNS for email delivery.

Why the Answer is Correct: EventBridge can match IAM Access Analyzer finding events when a finding becomes ACTIVE and trigger a Step Functions workflow. Step Functions can inspect the finding details and call AWS APIs or Lambda functions to remediate the affected policy, such as updating an IAM role trust policy to remove or restrict external access. SNS then sends the required email notification to security-ops@example.com within the required time window.

Key AWS Features:
1) IAM Access Analyzer integrates with EventBridge, so findings can trigger automation without polling.
2) EventBridge supports event pattern matching on finding status (such as ACTIVE) and can invoke Step Functions directly.
3) Step Functions provides branching, retries, and service integrations for policy remediation workflows across accounts.
4) SNS supports email subscriptions and is the standard AWS-native service for operational notifications.

Common Misconceptions: CloudWatch metric filters are for log-based pattern matching, not for consuming Access Analyzer findings directly. AWS Batch is intended for batch compute workloads and is not an appropriate near-real-time trigger or orchestration mechanism for security findings. SQS is useful for decoupling consumers but does not send email notifications by itself.

Exam Tips: For AWS security automation questions, look for EventBridge as the trigger for service-generated findings, Step Functions or Lambda for remediation logic, and SNS for email notifications. Be cautious with IAM policy semantics: trust policies generally grant assumption through Allow statements and are usually remediated by removing or tightening those Allows rather than adding Deny statements.
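The trust-policy remediation can be sketched as a pure function that drops external AWS principals. The organization account IDs are hypothetical placeholders that a real workflow would load from AWS Organizations (ListAccounts):

```python
# Sketch: strip external AWS account principals from an IAM role trust
# policy document. ORG_ACCOUNT_IDS is a hypothetical allow-list.
ORG_ACCOUNT_IDS = {"111122223333", "444455556666"}

def is_internal(principal_arn: str) -> bool:
    # IAM principal ARNs look like arn:aws:iam::<account-id>:root (or
    # :role/Name); the account ID is the fifth colon-separated field.
    parts = principal_arn.split(":")
    return len(parts) > 4 and parts[4] in ORG_ACCOUNT_IDS

def strip_external_principals(trust_policy: dict) -> dict:
    """Return a copy of the trust policy with external AWS principals removed."""
    cleaned = {"Version": trust_policy.get("Version", "2012-10-17"), "Statement": []}
    for stmt in trust_policy.get("Statement", []):
        principal = stmt.get("Principal", {})
        aws_principals = principal.get("AWS") if isinstance(principal, dict) else None
        if aws_principals is None:
            # Service principals and the like are left untouched.
            cleaned["Statement"].append(stmt)
            continue
        if isinstance(aws_principals, str):
            aws_principals = [aws_principals]
        kept = [p for p in aws_principals if is_internal(p)]
        if kept:  # drop statements whose only principals were external
            new_stmt = dict(stmt)
            new_stmt["Principal"] = {**principal, "AWS": kept}
            cleaned["Statement"].append(new_stmt)
    return cleaned
```

In the automation, the cleaned document would be applied with `iam.update_assume_role_policy(RoleName=..., PolicyDocument=json.dumps(cleaned))` before publishing the SNS notification.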
A media analytics company runs a latency-sensitive ingestion API on Amazon EC2 instances behind an Application Load Balancer; the instances are in an Auto Scaling group with a minimum of 6 and a desired capacity of 9 spread across three private subnets in the same VPC that also hosts other workloads. The security team has enabled an Amazon GuardDuty detector in the same AWS Region and integrated findings with AWS Security Hub. The team must implement an automated response that detects unusual egress traffic spikes (for example, the GuardDuty finding type Behavior:EC2/TrafficVolumeUnusual with severity >= 5) and immediately takes an initial containment action that follows AWS best practices while minimizing impact on the application and unrelated resources in the subnets. Which solution meets these requirements?
Removing or revoking the instance profile is not the most effective first containment step for a GuardDuty finding about unusual egress traffic volume. The suspicious traffic could be caused by malware, a compromised application process, or data exfiltration that does not depend on the instance profile credentials at all. In addition, changing or removing the IAM role from a running EC2 instance is not a direct network isolation mechanism and may not stop outbound connections immediately. AWS best practice for this type of finding is to isolate the instance's network access rather than focus first on credentials.
This is the best-practice containment pattern because it uses GuardDuty findings as the trigger, EventBridge for near-real-time routing, and Lambda for automated remediation. Putting the instance into Auto Scaling Standby or otherwise removing it from service stops it from receiving production traffic while preserving the instance for forensic analysis instead of terminating it immediately. Applying a quarantine security group isolates only the affected instance at the instance level, which is much more precise than changing subnet-wide controls. This approach minimizes blast radius, allows the Auto Scaling group to maintain application capacity, and aligns with AWS incident response guidance for EC2 compromise containment.
Changing the network ACLs for all three private subnets creates an unnecessarily large blast radius because NACLs apply to every resource in those subnets, not just the suspicious instance. That could interrupt unrelated workloads hosted in the same VPC and subnets, which directly conflicts with the requirement to minimize impact on the application and other resources. NACLs are also stateless, so implementing and maintaining precise deny rules is more error-prone than using a quarantine security group. For GuardDuty EC2 findings, AWS generally favors targeted instance isolation over broad subnet-level network changes.
Sending GuardDuty findings to an SNS topic for email notification provides alerting, but it does not perform any automated containment. The question explicitly requires an immediate initial response action, so manual triage by the security team is too slow and does not satisfy the automation requirement. This option also does nothing to reduce the suspicious egress traffic or isolate the potentially compromised instance. SNS can complement an automated workflow, but by itself it is insufficient for incident containment.
Core Concept: This question tests automated incident response for GuardDuty findings, focusing on containment actions that are fast, targeted, and minimize blast radius. Key services are Amazon GuardDuty (detection), Amazon EventBridge (routing findings), AWS Lambda (automation), Auto Scaling (instance lifecycle control), and security groups (instance-level network isolation).

Why the Answer is Correct: Option B implements AWS-recommended initial containment by isolating only the suspected EC2 instance rather than changing shared network controls. An EventBridge rule can match GuardDuty findings (e.g., findingType = Behavior:EC2/TrafficVolumeUnusual and severity >= 5) and invoke a Lambda function. The function can identify the instance ID from the finding, then (1) detach it from the load balancer or place it in Auto Scaling Standby to stop serving traffic, and (2) attach a quarantine security group that blocks inbound/outbound except to approved investigation endpoints (for example, a forensics VPC endpoint, SSM endpoints, or bastion/IR tooling). This contains potential data exfiltration or C2 traffic immediately while keeping the Auto Scaling group healthy (it can launch a replacement instance if needed) and avoiding impact to other workloads in the same subnets.

Key AWS Features / Best Practices:
- EventBridge integration with GuardDuty provides near-real-time, rule-based automation.
- Auto Scaling Standby (or detaching from the target group) removes the instance from service without terminating it, preserving evidence.
- Security groups are stateful and instance-scoped, ideal for least-blast-radius containment compared to subnet-wide NACL changes.
- Quarantine patterns align with AWS incident response guidance: isolate, preserve, investigate, then remediate.

Common Misconceptions: It’s tempting to block traffic at the subnet (NACL) level or to “disable credentials” first. However, unusual egress volume may be caused by malware or compromised processes that don’t rely on instance profile credentials, and subnet-level controls can disrupt unrelated applications.

Exam Tips: For GuardDuty automated response questions, prefer EventBridge + Lambda/SSM automation. Choose containment that is reversible, preserves forensic evidence, and limits scope (instance-level SG changes, ASG lifecycle actions) over broad network changes (NACL/VPC route changes) unless the question explicitly requires subnet- or VPC-wide blocking.
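A hedged sketch of the responder Lambda's decision logic for this pattern; the event field paths follow GuardDuty's EventBridge finding format, and the quarantine security group ID is a hypothetical pre-created resource:

```python
# Sketch of the responder Lambda's decision logic for a GuardDuty finding
# delivered through EventBridge. QUARANTINE_SG is a hypothetical SG with
# no inbound rules and egress restricted to investigation tooling.
QUARANTINE_SG = "sg-0123456789abcdef0"

def containment_target(event: dict, min_severity: float = 5.0):
    """Return the instance ID to quarantine, or None if no action is needed."""
    detail = event.get("detail", {})
    if detail.get("type") != "Behavior:EC2/TrafficVolumeUnusual":
        return None
    if float(detail.get("severity", 0)) < min_severity:
        return None
    return (
        detail.get("resource", {})
        .get("instanceDetails", {})
        .get("instanceId")
    )

# In the real handler (requires AWS credentials):
#   autoscaling.enter_standby(InstanceIds=[instance_id],
#       AutoScalingGroupName=group, ShouldDecrementDesiredCapacity=False)
#   ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[QUARANTINE_SG])
```

Passing `ShouldDecrementDesiredCapacity=False` keeps the desired capacity unchanged, so the Auto Scaling group launches a replacement while the quarantined instance is preserved for forensics.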
A fintech company operates 12 AWS accounts across us-east-1 and eu-west-1 and suspects that a legacy bastion host (EC2 instance i-0abcfed12345) in VPC vpc-0f12abcd with IMDSv1 enabled leaked its instance profile credentials; the organization has an AWS Organizations organization trail enabled for AWS CloudTrail, VPC Flow Logs are delivered to Amazon CloudWatch Logs, Amazon GuardDuty is enabled with a delegated administrator account, and AWS Audit Manager is used for PCI evidence collection. The security team must determine whether, within the time window 2025-07-04 02:10–03:05 UTC, the stolen temporary credentials were used to access any resources in their environment from an external AWS account. Which solution will most directly provide this information?
Correct. GuardDuty’s InstanceCredentialExfiltration finding is specifically intended to detect EC2 instance role credential theft and suspicious subsequent use (often from outside the expected network environment). With GuardDuty enabled organization-wide and a delegated administrator, the security team can quickly filter findings by time window and region to see whether the suspected stolen credentials were used, without manual multi-account log correlation.
Incorrect. AWS Audit Manager is a compliance automation service that collects evidence for frameworks like PCI DSS (e.g., configuration snapshots, control mappings). It does not function as a near-real-time threat detection engine and does not generate or index GuardDuty-style security findings such as InstanceCredentialExfiltration. Relying on Audit Manager reports would be indirect and likely incapable of answering the specific question in the required time window.
Incorrect. Searching CloudTrail for GetSessionToken is not a reliable method to detect use of stolen instance profile credentials. Instance profile credentials are already temporary STS credentials and can be used directly to call AWS APIs; an attacker may never call GetSessionToken. Also, CloudTrail records the caller identity (access key, principal) and source IP, but “external AWS account” attribution is not guaranteed via a simple GetSessionToken query.
Incorrect. CloudWatch Logs here contain VPC Flow Logs, which are network flow records (5-tuple metadata) and do not include AWS API calls like GetSessionToken. Unless CloudTrail logs were explicitly delivered to CloudWatch Logs (not stated), CloudWatch Logs is the wrong data source for STS API event searches. Even if CloudTrail were in CloudWatch, the same limitation as option C applies: GetSessionToken is not the right indicator.
Core concept: This question tests threat detection and investigation for EC2 instance profile credential theft, specifically how to determine whether stolen temporary credentials were used from outside the organization. The most relevant service is Amazon GuardDuty, which analyzes CloudTrail management events, VPC Flow Logs, and DNS logs to detect credential exfiltration and anomalous use.

Why the answer is correct: GuardDuty has a purpose-built finding type for this scenario: InstanceCredentialExfiltration. It is designed to detect when an EC2 instance’s IAM role credentials (often exposed via IMDSv1 SSRF or host compromise) are used in a way that indicates they were stolen and then used from outside the instance’s expected network context. Because GuardDuty is enabled organization-wide with a delegated administrator, reviewing findings in that admin account for the specified time window is the most direct way to answer “were the stolen credentials used to access resources from an external AWS account?” GuardDuty findings include timestamps and contextual details (e.g., API calls observed via CloudTrail, source IP/ASN, and indicators of exfiltration/anomalous use), enabling quick confirmation without building custom log correlation.

Key AWS features and best practices:
- GuardDuty correlates CloudTrail management events with network telemetry to detect credential compromise patterns.
- Organization-wide GuardDuty with a delegated admin centralizes findings across multiple accounts and Regions.
- IMDSv2 enforcement is the preventative control; IMDSv1 increases the risk of credential theft via SSRF.
- CloudTrail organization trails are foundational for forensics, but GuardDuty provides higher-level detection logic on top of those logs.

Common misconceptions: A frequent trap is assuming you must search CloudTrail directly for STS calls like GetSessionToken. Instance profile credentials are already STS-issued role credentials (via AssumeRole behind the scenes), and attackers often use the stolen AccessKeyId/SecretKey/SessionToken directly to call other AWS APIs; they may never call GetSessionToken. Another misconception is expecting Audit Manager to surface security events; it is for compliance evidence, not incident detection.

Exam tips: When the question asks for the “most direct” way to determine credential exfiltration/abuse, prefer GuardDuty’s dedicated finding types over raw log searches. Use CloudTrail for deep forensics after detection, but for fast confirmation across many accounts and Regions, centralized GuardDuty findings are typically the best first stop.
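As a sketch, the investigation could be scoped with GuardDuty's ListFindings criteria in the delegated administrator account. The epoch-millisecond convention for the updatedAt bounds is an assumption to verify against the GuardDuty API reference, and the detector ID is omitted:

```python
from datetime import datetime, timezone

# Sketch: FindingCriteria for guardduty.list_findings, scoped to
# credential-exfiltration finding types within the incident window.
# Assumption to verify: updatedAt bounds are epoch milliseconds.
EXFIL_TYPES = [
    "UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.OutsideAWS",
    "UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.InsideAWS",
]

def to_epoch_ms(dt: datetime) -> int:
    return int(dt.timestamp() * 1000)

def exfil_criteria(start: datetime, end: datetime) -> dict:
    return {
        "Criterion": {
            "type": {"Equals": EXFIL_TYPES},
            "updatedAt": {
                "GreaterThanOrEqual": to_epoch_ms(start),
                "LessThanOrEqual": to_epoch_ms(end),
            },
        }
    }

window = exfil_criteria(
    datetime(2025, 7, 4, 2, 10, tzinfo=timezone.utc),
    datetime(2025, 7, 4, 3, 5, tzinfo=timezone.utc),
)
# Real call: guardduty.list_findings(DetectorId=detector_id, FindingCriteria=window)
```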
An online retail marketplace uses a third-party SaaS container vulnerability scanner that integrates with AWS Security Hub in the company’s audit account. The security team must ensure that when a new finding with severity label HIGH or CRITICAL from this third-party product is imported into Security Hub in us-east-1, a remediation workflow is triggered automatically within 60 seconds and can scale to handle bursts of up to 500 findings per minute without managing any servers. Which solution will meet these requirements?
Correct. Security Hub publishes imported findings as EventBridge events in the same Region. An EventBridge rule can match the “Security Hub Findings - Imported” event and filter by the third-party product and severity label HIGH/CRITICAL. EventBridge invokes Lambda within seconds and both services scale automatically, meeting the 60-second SLA and burst requirement without managing servers. Add retries/DLQ or destinations for reliability.
Incorrect. Security Hub custom actions are primarily for manual, analyst-driven workflows (e.g., selecting findings in the console and choosing a custom action) or explicit API invocation. They do not automatically fire when new findings are imported. While Systems Manager Automation can remediate, the triggering mechanism here would not meet the requirement for automatic execution within 60 seconds on import.
Incorrect. Like option B, a Security Hub custom action with Lambda is typically initiated by a user from the Security Hub console (or via explicit custom action API usage), not automatically on every imported finding. Therefore it does not satisfy the requirement to trigger remediation automatically when a new HIGH/CRITICAL finding is imported, even though Lambda itself is serverless and scalable.
Incorrect. AWS Config rules evaluate AWS resource configurations and compliance against desired state; they are not designed to react to third-party vulnerability findings imported into Security Hub. Config evaluations can be periodic or configuration-change triggered, but they won’t natively consume Security Hub imported findings as the event source. This approach also risks missing the 60-second requirement and is conceptually misaligned.
Core Concept: This question tests event-driven security automation using AWS Security Hub findings, Amazon EventBridge (formerly CloudWatch Events), and AWS Lambda to trigger near-real-time remediation at scale without servers.

Why the Answer is Correct: When third-party findings are imported into Security Hub, Security Hub emits EventBridge events such as “Security Hub Findings - Imported” in the same Region (us-east-1 here). An EventBridge rule can filter on product/source and on finding severity label (HIGH/CRITICAL) and then invoke a Lambda function to run remediation. EventBridge delivers events in seconds, meeting the 60-second requirement, and both EventBridge and Lambda are fully managed services that automatically scale. Handling bursts of 500 findings/minute (about 8.3/sec) is well within typical EventBridge and Lambda scaling patterns, especially with appropriate Lambda concurrency settings.

Key AWS Features:
- EventBridge rule pattern matching: filter by Security Hub event type, product name/ARN, and severity label to avoid unnecessary invocations.
- Serverless scaling: EventBridge scales ingestion and routing; Lambda scales by concurrency. You can set reserved concurrency to protect downstream systems and use DLQs (SQS) or on-failure destinations for resiliency.
- Cross-account/central account: since findings land in the audit account, the EventBridge rule and Lambda can be deployed there in us-east-1, aligning with the event source Region.

Common Misconceptions: Custom actions in Security Hub (options B/C) are often confused with automatic triggers. Custom actions are designed for analyst-initiated workflows from the Security Hub console (or explicit API calls), not automatic execution upon import. AWS Config rules (option D) evaluate resource configuration compliance, not third-party Security Hub findings, and are not the right mechanism for reacting to imported vulnerability scanner findings within 60 seconds.

Exam Tips:
- For “automatically trigger on Security Hub findings,” think “EventBridge rule on Security Hub events.”
- For “no servers” and “burst handling,” prefer EventBridge + Lambda (optionally with SQS buffering) over manual or console-driven actions.
- Always align the automation with the Region where the events are generated (us-east-1 in this scenario).
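A minimal sketch of the event pattern and of the Lambda-side filtering. The product name ExampleScanner is a hypothetical placeholder for the third-party scanner; the field names follow the ASFF finding format:

```python
# Sketch: EventBridge event pattern for imported Security Hub findings.
# "ExampleScanner" is a hypothetical placeholder for the partner product;
# ProductName and Severity.Label follow the ASFF finding format.
IMPORTED_HIGH_SEV_PATTERN = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
    "detail": {
        "findings": {
            "ProductName": ["ExampleScanner"],
            "Severity": {"Label": ["HIGH", "CRITICAL"]},
        }
    },
}

def findings_to_remediate(event: dict) -> list:
    """Lambda-side guard: IDs of HIGH/CRITICAL findings in the event payload."""
    wanted = {"HIGH", "CRITICAL"}
    return [
        f["Id"]
        for f in event.get("detail", {}).get("findings", [])
        if f.get("Severity", {}).get("Label") in wanted
    ]
```

The pattern would be attached with `events.put_rule(...)` plus a Lambda target, with reserved concurrency and an on-failure destination sized for the 500-findings-per-minute burst.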
A global fintech with 75 AWS accounts in an AWS Organizations organization spanning three Regions (us-east-1, us-west-2, eu-west-1) needs a centralized solution that can aggregate and normalize security and network events from all accounts in the organization, from all AWS Marketplace partner tools deployed in those accounts, and from on-premises data center systems (1,200 Linux servers sending syslog and 30 firewalls), retain the data for 400 days, and allow analysts to run ad hoc SQL queries; which solution will meet these requirements most effectively?
S3 + Glue + Athena can centralize and query logs, and it’s a valid DIY log lake pattern. However, it does not inherently normalize events across many AWS Marketplace partner tools or on-prem syslog/firewall sources; you would need to build and maintain custom ingestion, parsing, and schema mapping for each source. It also doesn’t provide OCSF normalization out of the box, making it less effective for a large fintech at this scale.
CloudWatch Logs subscription filters to OpenSearch supports near-real-time search and dashboards, but it’s typically expensive and operationally heavy for 400-day retention across 75 accounts plus on-prem logs. OpenSearch is optimized for indexed search, not low-cost long-term retention. It also doesn’t automatically normalize heterogeneous security/network events from multiple partner tools and custom sources into a common schema for consistent SQL analytics.
Security Lake is purpose-built for centralized, multi-account/multi-Region security data lakes. With a delegated administrator in AWS Organizations, it can enable collection across all accounts/Regions, store data in S3, and normalize to OCSF. It supports AWS sources, AWS Marketplace partner integrations, and custom sources (including on-prem syslog/firewall events) and is designed for analytics with Athena for ad hoc SQL queries while meeting long retention via S3 lifecycle policies.
SCPs can deny or allow API actions but cannot “force” services to deliver logs to a specific S3 bucket across all services and accounts; configuration still must be implemented per service. Additionally, OpenSearch is not a direct SQL query engine over S3 log files (Athena is), and this option does not address normalization across AWS Marketplace tools and on-prem sources. It’s both technically inaccurate and incomplete for the requirements.
Core Concept: This question tests centralized, multi-account/multi-Region security data aggregation and normalization for detection and hunting, using a managed data lake approach. The key services are Amazon Security Lake (organization-wide collection and normalization to OCSF), Amazon S3-based retention, and Amazon Athena for ad hoc SQL queries.

Why the Answer is Correct: Option C directly matches every requirement:
(1) It aggregates security and network events across 75 accounts and three Regions using AWS Organizations with a delegated administrator.
(2) It normalizes data into the Open Cybersecurity Schema Framework (OCSF), which is explicitly designed to standardize disparate security telemetry.
(3) It supports ingestion from AWS sources, AWS Marketplace partner integrations, and custom sources (including on-prem syslog/firewall logs) via Security Lake custom source mechanisms.
(4) It stores data in an S3-backed data lake with configurable lifecycle/retention to meet the 400-day requirement.
(5) It enables analysts to run ad hoc SQL queries using Athena over the normalized OCSF tables.

Key AWS Features / Best Practices: Security Lake provides organization-level enablement, cross-account data access patterns, and a consistent schema (OCSF) that reduces per-tool parsing. It centralizes data without requiring you to manually build and maintain ETL pipelines for each log type. Retention is handled with S3 lifecycle policies (e.g., transition to S3 Glacier Instant Retrieval or Flexible Retrieval for cost optimization while meeting 400 days). Athena is the native query engine for Security Lake data and aligns with serverless, pay-per-query analytics.

Common Misconceptions: Option A looks attractive because S3 + Glue + Athena is a common log lake pattern, but it does not satisfy “normalize” across AWS Marketplace partner tools and on-prem sources without significant custom ETL and schema management; it’s a build-your-own SIEM data lake. Option B is a classic streaming-to-OpenSearch approach, but OpenSearch is not ideal for 400-day retention at scale or cost, and it doesn’t inherently normalize diverse sources. Option D is incorrect because SCPs cannot “force all services to deliver logs” in the way described, and querying files in S3 with OpenSearch is not the primary pattern (and still lacks normalization).

Exam Tips: When you see “AWS Organizations + aggregate across accounts/Regions + normalize security events + Athena SQL,” think Amazon Security Lake and OCSF. For long retention (400+ days), prefer S3-based lakes with lifecycle policies over hot indexing stores. Also remember that SCPs restrict actions; they don’t configure services or guarantee log delivery.
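Because Security Lake data lands in S3, the 400-day retention can be expressed as an S3 lifecycle configuration. This is a sketch of the equivalent lifecycle shape; the 90-day transition to Glacier Instant Retrieval is an illustrative cost choice, not a requirement from the scenario:

```python
# Sketch: S3 lifecycle configuration expressing 400-day retention.
# The 90-day transition to GLACIER_IR is illustrative cost optimization.
def retention_lifecycle(retention_days: int = 400, transition_days: int = 90) -> dict:
    return {
        "Rules": [
            {
                "ID": f"retain-{retention_days}-days",
                "Status": "Enabled",
                "Filter": {},  # apply to all objects in the bucket
                "Transitions": [
                    {"Days": transition_days, "StorageClass": "GLACIER_IR"}
                ],
                "Expiration": {"Days": retention_days},
            }
        ]
    }

# Real call: s3.put_bucket_lifecycle_configuration(
#     Bucket=security_lake_bucket, LifecycleConfiguration=retention_lifecycle())
```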
A company runs a containerized REST API on Amazon ECS with AWS Fargate behind an Application Load Balancer, and the NGINX access logs from the Fargate tasks are shipped to an Amazon CloudWatch Logs log group with a 90-day retention policy; last night the security team flagged IPv4 address 203.0.113.77 as suspicious, and a security engineer must, with the least effort, analyze the past 7 days of logs to determine the total number of requests from that IP and the specific request paths (for example, /v1/* and /admin/*) it accessed—what should the engineer do?
Incorrect. Amazon Macie is designed to discover and classify sensitive data (PII, credentials, etc.) primarily in Amazon S3 using managed data identifiers. It is not a log query engine for NGINX access logs and does not natively provide ad hoc aggregations like “count requests by IP and path.” Exporting logs to S3 also adds unnecessary steps and delays for a 7-day investigation.
Incorrect for “least effort.” A CloudWatch Logs subscription filter to OpenSearch is useful for near-real-time indexing and building dashboards, but it requires provisioning and operating an OpenSearch domain and configuring ingestion. It also does not automatically backfill the last 7 days of existing logs unless you perform additional export/replay steps. Overkill for a quick incident query.
Correct. CloudWatch Logs Insights can query the existing log group immediately, scoped to the last 7 days, filter on IP 203.0.113.77, parse NGINX fields if needed, and use stats/count aggregations to return total request count and the request paths accessed (and counts per path/prefix). This is the fastest, lowest-setup approach for incident-response log analysis.
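A Logs Insights query for this investigation might look roughly like the following. The parse pattern assumes an NGINX combined-log-style layout and is illustrative; adjust the regex to the actual format of the shipped access logs.

```
fields @timestamp, @message
| parse @message /(?<client_ip>\S+) \S+ \S+ \[[^\]]*\] "(?<method>\S+) (?<path>\S+)/
| filter client_ip = "203.0.113.77"
| stats count(*) as request_count by path
| sort request_count desc
```

Run it against the log group with the time range set to the last 7 days; a second run with `stats count(*)` and no `by` clause gives the total request count.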
Incorrect. Exporting to S3, running a Glue crawler, and using Glue to view results is significantly more operational work than necessary. Glue crawlers catalog schemas; they do not inherently “catalog only entries that contain the IP” without additional ETL/filtering logic. For simple counts and grouping over a 7-day window, CloudWatch Logs Insights is the intended tool.
Core Concept: This question tests incident-response log analysis using Amazon CloudWatch Logs Insights. Logs are already centralized in CloudWatch Logs with sufficient retention (90 days). The requirement is to quickly query the last 7 days to (1) count requests from a specific IP and (2) list the request paths accessed. Why the Answer is Correct: CloudWatch Logs Insights is purpose-built for ad hoc, interactive querying of CloudWatch Logs without building a separate analytics pipeline. With the least effort, the engineer can run a query scoped to the last 7 days, filter on client IP 203.0.113.77, parse NGINX access log fields (if needed), and aggregate results to return total request count and counts by request path (or by path prefix such as /v1/ and /admin/). This directly satisfies both outputs in minutes and requires no data export, no new infrastructure, and no ongoing ingestion costs. Key AWS Features: CloudWatch Logs Insights supports time-range selection, filtering, parsing (parse command with patterns/regex), and aggregations (stats count() by field). It can query across log streams in a log group and is commonly used for security investigations (e.g., identifying suspicious IP activity). It aligns with AWS Well-Architected Security and Operational Excellence principles by enabling rapid detection/investigation using centralized logging. Common Misconceptions: Streaming to OpenSearch (option B) can provide powerful dashboards, but it is not “least effort” for a one-off 7-day investigation because it requires provisioning a domain, configuring a subscription filter, and waiting for ingestion (and it won’t retroactively include the last 7 days unless you reprocess/export). Exporting to S3 and using Glue (option D) is also heavier operationally and slower to iterate. Macie (option A) is for discovering sensitive data in S3, not for querying IP/path patterns in application logs. 
Exam Tips: When logs are already in CloudWatch Logs and the task is quick investigation/aggregation over a recent time window, default to CloudWatch Logs Insights. Choose OpenSearch/S3+Athena/Glue when you need long-term analytics, complex dashboards, cross-source correlation at scale, or retention beyond CloudWatch needs—not for immediate, minimal-effort incident triage.
An online education company has centralized AWS Security Hub across 120 AWS accounts in a multi-account structure using AWS Organizations with a delegated administrator in a separate security account; the security operations team requires near-real-time automated response and remediation whenever deployed resources (for example, public S3 buckets or overly permissive security groups) drift from company security baselines, and all actions must be logged centrally in the audit account; the organization has already reached the maximum number of service control policies (SCPs) attached to the production OU and the maximum SCP policy size, and the team must not change any existing SCPs; the solution must maximize scalability and cost-effectiveness; which combination of actions should the security administrator take to meet these requirements? (Choose three.)
Correct. AWS Config is the native AWS service for detecting configuration drift and compliance violations for resources such as S3 buckets and security groups. A custom rule can evaluate organization-specific baselines, and a Lambda function can remediate noncompliant resources by assuming roles into member accounts. This approach scales across many accounts and aligns with the requirement to avoid changing SCPs while still enforcing security baselines through detective and corrective controls.
Incorrect. AWS Systems Manager Change Manager is designed for controlled operational changes with approvals, scheduling, and governance workflows. It is not the primary service for continuously detecting configuration drift or automatically remediating security baseline violations across 120 accounts in near real time. Although SSM runbooks can be used for remediation, the detection and eventing requirements are better met by AWS Config, Security Hub, and EventBridge.
Incorrect. Security Hub custom actions are primarily used to let analysts manually trigger downstream workflows from the Security Hub console. They are not necessary for automatic remediation because EventBridge can already react directly to Security Hub findings without any analyst interaction. Including a custom action adds an extra manual-oriented construct that does not improve scalability or cost-effectiveness for the stated requirement of automated response.
Correct. Amazon EventBridge can match AWS Security Hub findings and invoke a Lambda function to perform automated remediation in near real time. This is a standard event-driven architecture for centralized security operations in a multi-account environment, especially when Security Hub is already aggregated through a delegated administrator. It is scalable, cost-effective, and supports centralized operational visibility because the remediation workflow can log all actions for audit purposes.
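An EventBridge rule in the delegated administrator account can match aggregated Security Hub findings with an event pattern along these lines (a sketch; narrow the `detail` fields to the specific baseline checks you remediate):

```json
{
  "source": ["aws.securityhub"],
  "detail-type": ["Security Hub Findings - Imported"],
  "detail": {
    "findings": {
      "Severity": { "Label": ["HIGH", "CRITICAL"] },
      "Workflow": { "Status": ["NEW"] }
    }
  }
}
```

The rule's target is the remediation Lambda function, which assumes cross-account roles to fix the resource and writes its actions to the centralized audit trail.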
Incorrect. Using Lambda to inspect API requests and generate custom findings replaces native AWS compliance and security services with a bespoke solution. API request inspection is typically based on CloudTrail events and does not reliably represent current configuration state, which is what matters for drift detection. This option is less scalable, more operationally complex, and inferior to AWS Config plus Security Hub for detecting public buckets or overly permissive security groups.
Correct. Some AWS Config rules are periodic rather than triggered directly by configuration changes, so a scheduled EventBridge rule can be used to invoke evaluation of selected Config rules to ensure continued compliance coverage. This complements near-real-time detection by covering rule types that otherwise would not evaluate immediately on every resource change. It is more scalable and cost-effective than building a custom inspection framework, and it helps maintain broad baseline enforcement across a large organization.
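Triggering evaluation of a periodic rule is a single Config API call, which the scheduled EventBridge rule's Lambda target (or a CLI invocation) can make. The rule name below is hypothetical:

```
# Hypothetical Config rule name; StartConfigRulesEvaluation re-runs the
# evaluation on demand for the named rules.
aws configservice start-config-rules-evaluation \
    --config-rule-names corp-baseline-sg-open-ports
```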
Core concept: The requirement is to detect security baseline drift across 120 AWS accounts and trigger near-real-time automated remediation without modifying SCPs. In AWS, the most scalable pattern is to use AWS Config to evaluate resource compliance, centralize findings in AWS Security Hub, and use Amazon EventBridge to invoke remediation logic such as AWS Lambda. Because some AWS Config rules are periodic rather than configuration-change-triggered, scheduling evaluations for selected rules can be part of a complete solution when broad baseline coverage is required.

Why correct: AWS Config is the native service for configuration compliance and drift detection for resources such as S3 buckets and security groups. Security Hub aggregates findings organization-wide through the delegated administrator account, and EventBridge can react to those findings in near real time to invoke Lambda-based remediation. A scheduled EventBridge rule can be used to trigger evaluation of selected Config rules that do not run continuously, ensuring comprehensive coverage while remaining operationally efficient.

Key features: AWS Config supports managed and custom rules for compliance evaluation, including organization-wide deployment patterns. Security Hub consolidates findings from member accounts into the delegated administrator account. EventBridge supports event pattern matching on Security Hub findings and can invoke Lambda for automated remediation. Central logging of remediation actions can be achieved by having Lambda emit logs and by using organization-wide audit logging in the audit account.

Common misconceptions: Security Hub custom actions are often confused with automatic remediation triggers, but they are intended for analyst-initiated actions from the console rather than autonomous event-driven response. Systems Manager Change Manager is for governance and approvals, not continuous drift detection. Building a custom API-inspection pipeline with Lambda is more complex and less reliable than using Config and Security Hub for configuration-state compliance.

Exam tips: For questions about drift from security baselines, prefer AWS Config over custom event inspection. For near-real-time automated response, look for EventBridge plus Lambda or SSM Automation triggered by Security Hub or Config findings. If SCPs cannot be changed, the answer usually shifts to detective controls and automated remediation rather than preventive guardrails.
A fintech company operates a payment reconciliation microservice that writes 18,000 log events per minute to an Amazon CloudWatch Logs log group named /prod/recon with a 30-day retention period, and the logs contain customer email addresses; 12 developers must use the CloudWatch Logs console and APIs to troubleshoot across all log streams without modifying the application or creating duplicate sanitized log groups, and a security engineer must ensure the developers cannot view any email addresses while all other fields remain visible; which solution meets this requirement?
Amazon Macie primarily discovers and classifies sensitive data in Amazon S3 (and helps with findings/alerts). It does not provide an in-place mechanism to mask/redact email addresses in CloudWatch Logs so developers can still view the rest of the log event. Macie could identify that emails exist, but it won’t enforce “developers cannot view emails” in the CloudWatch Logs console/APIs.
Encrypting the log group with a customer managed KMS key and denying developers access would prevent them from decrypting and reading the logs at all. The requirement is selective protection: developers must still troubleshoot and see all other fields, just not email addresses. KMS key policies and grants do not support field-level redaction within log events; they control access to the entire encrypted data.
A subscription filter to Lambda can parse and mask emails, but it requires forwarding to another destination (often another log group, S3, or OpenSearch). That effectively creates a duplicate sanitized copy, which the question explicitly disallows. It also introduces operational overhead and cost at high ingestion rates (18,000 events/min) and adds failure modes/latency compared to a native CloudWatch Logs feature.
CloudWatch Logs data protection policies are designed to detect sensitive data using managed data identifiers (such as EmailAddress) and apply actions like masking. Applying the policy to /prod/recon allows developers to continue using CloudWatch Logs console and APIs across all streams while ensuring email addresses are redacted and other fields remain visible. This meets all constraints: no app changes, no duplicate sanitized log groups, and enforced PII protection.
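A data protection policy for this log group could look roughly like the following (the policy name is illustrative; the schema pairs an Audit statement with a Deidentify/mask statement, both referencing the same managed data identifier):

```json
{
  "Name": "mask-customer-emails",
  "Version": "2021-06-01",
  "Statement": [
    {
      "Sid": "audit",
      "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
      "Operation": { "Audit": { "FindingsDestination": {} } }
    },
    {
      "Sid": "redact",
      "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
      "Operation": { "Deidentify": { "MaskConfig": {} } }
    }
  ]
}
```

Once attached to /prod/recon, new log events show masked email addresses by default; only principals explicitly granted logs:Unmask can reveal the original values.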
Core Concept: This question tests Amazon CloudWatch Logs data protection (sensitive data protection) and how to prevent exposure of PII in log viewing/querying without changing the application or duplicating log destinations. It also touches least-privilege and operational troubleshooting requirements.

Why the Answer is Correct: CloudWatch Logs data protection policies can automatically detect sensitive data patterns (using AWS managed data identifiers such as EmailAddress) and apply actions like masking. Once activated on the log group (/prod/recon), developers can continue using the CloudWatch Logs console and APIs to troubleshoot across all log streams, but email addresses are redacted while the rest of each log event remains visible. This directly satisfies: no app changes, no duplicate "sanitized" log groups, and developers must not be able to view email addresses.

Key AWS Features:
1) CloudWatch Logs data protection policy: attached at the log group level.
2) Managed data identifiers: AWS provides built-in identifiers (e.g., EmailAddress) to detect common PII.
3) Masking/redaction: preserves log utility by keeping non-sensitive fields intact.
4) Operational simplicity: works in-place on the existing log group and supports console/API access patterns used for troubleshooting.

Common Misconceptions:
A) Macie is for S3 (and certain other data stores) discovery/classification, not for real-time masking in CloudWatch Logs viewing. It can find sensitive data but doesn't enforce redaction in the Logs console.
B) KMS encryption controls access to decrypt the entire log group; it cannot selectively hide only email addresses while allowing other fields. Denying KMS decrypt would block developers from reading any logs, violating the troubleshooting requirement.
C) Subscription filters + Lambda could mask and forward, but that creates a new sanitized destination (effectively duplicating logs) and adds cost/complexity at 18,000 events/min. It also doesn't meet the constraint of not creating duplicate sanitized log groups.

Exam Tips: When you see "mask PII in CloudWatch Logs without app changes" and "developers still need to troubleshoot," look for CloudWatch Logs data protection policies with managed data identifiers and masking. KMS is all-or-nothing for decryption, and Macie is discovery/classification rather than in-console redaction enforcement.
A media analytics startup runs video transcoding jobs on AWS Batch (EC2 compute environment) that pull private container images from Amazon Elastic Container Registry (Amazon ECR); currently, 12 ECR repositories across two Regions use the default AES-256 encryption, but compliance requires migrating all repositories to a customer managed AWS KMS key (alias/media-sec) and enabling CVE detection on image push; a security engineer must implement an approach that leaves no repositories unencrypted and that provides a vulnerability report after the next image push. Which solution will meet these requirements?
Incorrect. You cannot simply “enable KMS encryption” on existing ECR repositories that were created with AES-256; ECR repository encryption settings are defined at creation and are not generally mutable to switch to a CMK. Also, installing the Amazon Inspector Agent on EC2 instances assesses host vulnerabilities, not container image CVEs triggered by an image push to ECR, so it does not meet the “report after next image push” requirement.
Correct. Recreating the repositories in each Region with encryption set to the customer managed KMS key (alias/media-sec) ensures every repository is encrypted with the required CMK. Enabling ECR image scanning on push ensures that the next time an image is pushed, ECR performs a vulnerability scan and generates findings that can be reviewed as a scan report, satisfying the compliance and detection requirements.
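A recreated repository can be expressed in CloudFormation along these lines (the repository name is hypothetical; the alias matches the scenario, and the ECR API accepts a key ARN, key ID, or alias for the KMS key):

```yaml
# Sketch: one repository per stack/Region with CMK encryption and scan-on-push.
Resources:
  TranscodeWorkerRepo:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: media-transcode-worker   # hypothetical name
      EncryptionConfiguration:
        EncryptionType: KMS
        KmsKey: alias/media-sec
      ImageScanningConfiguration:
        ScanOnPush: true
```

Because KMS keys are Regional, the alias/media-sec key must exist in each of the two Regions before the stacks are deployed there.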
Incorrect. While recreating repositories with a CMK and enabling scanning aligns with the ECR requirements, installing the AWS Systems Manager (SSM) Agent and generating an inventory report does not provide container image CVE findings. SSM Inventory is for instance/software inventory and compliance metadata, not ECR image vulnerability scanning results tied to an image push event.
Incorrect. As with option A, you generally cannot change existing ECR repositories from AES-256 to CMK encryption in-place. Additionally, AWS Trusted Advisor does not provide container image CVE scanning or a vulnerability report after an image push; Trusted Advisor focuses on cost optimization, performance, fault tolerance, service limits, and some security checks, not ECR image CVE findings.
Core concept: This question tests Amazon ECR repository configuration for (1) encryption at rest using a customer managed AWS KMS key (CMK) and (2) vulnerability scanning that produces findings after the next image push. It also implicitly tests what can and cannot be changed in-place on an existing ECR repository.

Why the answer is correct: To ensure "no repositories unencrypted" under the new compliance rule, each repository must be configured to use the specified CMK (alias/media-sec). In ECR, encryption configuration is set at repository creation time; you cannot retroactively switch an existing repository from AES-256 (AWS owned key) to a customer managed KMS key in-place. Therefore, the compliant approach is to create new repositories (or recreate with the same names after deletion, if feasible) in each Region with KMS encryption using alias/media-sec. Additionally, enabling "scan on push" ensures that after the next image push, ECR will automatically trigger a vulnerability scan and produce a findings report for that pushed image.

Key AWS features and best practices:
- ECR encryption at rest: AES-256 (AWS owned) vs AWS KMS (customer managed key). CMKs provide key policy control, rotation options, and auditability via AWS CloudTrail.
- ECR image scanning: "scan on push" (basic scanning) or enhanced scanning via Amazon Inspector (depending on account settings). Either way, enabling scan on push meets the requirement to get a report after the next push.
- Multi-Region: KMS keys are Regional; you must ensure alias/media-sec exists (or is created) in both Regions and that ECR has permission to use it.

Common misconceptions: A frequent trap is assuming you can "enable KMS encryption" on an existing ECR repository like you can for some other services. Another trap is confusing host-based CVE scanning (Inspector Agent/SSM inventory) with container image scanning in ECR; the requirement is a vulnerability report tied to the image push, not the EC2 instances.

Exam tips: When you see "after the next image push," think ECR "scan on push" (or Inspector enhanced scanning). When you see "migrate encryption from AES-256 default to CMK," verify whether the service supports in-place encryption changes; for ECR repositories, plan for repository recreation and image repush/retag as part of migration.
A media analytics startup runs 100 Amazon EC2 instances (60 Amazon Linux 2 and 40 Windows Server 2019) in three private subnets across two Availability Zones with no internet gateway and no inbound security group rules; engineers must perform remote administration entirely within the AWS Cloud, without opening TCP 22/3389 or distributing SSH key pairs, and the solution must work for both OS types at scale and support centralized session logging to an S3 bucket. Which solution will meet these requirements?
Correct. Systems Manager Session Manager provides interactive access to both Linux and Windows instances without opening inbound ports (22/3389) and without distributing SSH keys. Access is controlled via IAM, and session logs can be centralized to an S3 bucket (optionally encrypted with KMS). In private subnets without an IGW, connectivity is achieved using VPC interface endpoints for SSM-related services.
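The three interface endpoints Session Manager needs in an IGW-less VPC can be sketched in CloudFormation as below (VPC, subnet, and security group references are placeholders; the endpoint security group must allow HTTPS from the instance subnets):

```yaml
# Sketch: PrivateLink endpoints for Session Manager (ssm, ssmmessages,
# ec2messages). Resource refs are hypothetical.
Resources:
  SsmEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcId: !Ref AppVpc
      ServiceName: !Sub com.amazonaws.${AWS::Region}.ssm
      VpcEndpointType: Interface
      SubnetIds: [!Ref PrivateSubnetA, !Ref PrivateSubnetB]
      SecurityGroupIds: [!Ref EndpointSg]   # allow 443 from instance subnets
  SsmMessagesEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcId: !Ref AppVpc
      ServiceName: !Sub com.amazonaws.${AWS::Region}.ssmmessages
      VpcEndpointType: Interface
      SubnetIds: [!Ref PrivateSubnetA, !Ref PrivateSubnetB]
      SecurityGroupIds: [!Ref EndpointSg]
  Ec2MessagesEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcId: !Ref AppVpc
      ServiceName: !Sub com.amazonaws.${AWS::Region}.ec2messages
      VpcEndpointType: Interface
      SubnetIds: [!Ref PrivateSubnetA, !Ref PrivateSubnetB]
      SecurityGroupIds: [!Ref EndpointSg]
```

Delivering session logs to the S3 bucket additionally requires a path to S3 (commonly an S3 gateway endpoint) and s3:PutObject permission in the instance profile.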
Incorrect. While Session Manager is the right remote access mechanism, generating and distributing SSH-RSA key pairs is unnecessary and violates the requirement to avoid distributing SSH key pairs. Session Manager does not require SSH keys at all. This option adds operational overhead and risk (key management/rotation) without providing any benefit for Session Manager-based access.
Incorrect. EC2 Instance Connect is primarily for SSH access to Linux instances and still requires network connectivity to port 22 (inbound allowed from the connecting source) and appropriate security group/NACL rules. It does not satisfy the requirement to avoid opening TCP 22, and it is not the standard unified approach for Windows Server administration at scale with centralized session logging to S3.
Incorrect. This option combines two mismatches: it requires generating/distributing SSH keys (explicitly disallowed) and relies on EC2 Instance Connect, which still uses SSH over port 22 and is not a comprehensive solution for Windows administration. It also does not inherently provide the same centralized session logging/auditing model as Session Manager with S3 logging.
Core Concept: This question tests secure, scalable remote administration of EC2 instances in private subnets using AWS Systems Manager (SSM) Session Manager with IAM-based access control and centralized audit logging.

Why the Answer is Correct: Option A meets every constraint: no inbound security group rules, no internet gateway, no opening TCP 22/3389, no SSH key distribution, works for both Amazon Linux 2 and Windows Server, scales to 100 instances, and supports session logging to Amazon S3. Session Manager provides interactive shell/PowerShell access through the SSM Agent over outbound HTTPS (typically to SSM endpoints). Because the instances are in private subnets with no IGW, connectivity is achieved via VPC interface endpoints (AWS PrivateLink) for Systems Manager (ssm), EC2 Messages (ec2messages), and SSM Messages (ssmmessages), and optionally CloudWatch Logs if also used. Access is controlled with IAM policies (and can be gated with MFA, permission boundaries, and least privilege). Session activity can be logged to an S3 bucket (and/or CloudWatch Logs) for centralized auditing.

Key AWS Features:
- Session Manager: browser/CLI-based sessions without inbound ports; supports Linux and Windows.
- IAM-based authorization: fine-grained controls (which instances, which users, session duration, tagging-based access).
- Centralized logging: S3 log storage (optionally encrypted with SSE-KMS), plus CloudTrail for API auditing.
- Private connectivity: VPC interface endpoints keep traffic inside the AWS network, aligning with the Well-Architected Security pillar (minimize exposure, strong identity, auditability).

Common Misconceptions: A common trap is assuming SSH keys are required for Linux administration or that Windows requires RDP. Session Manager eliminates both. Another misconception is that "no internet gateway" prevents SSM; in reality, SSM works privately with VPC endpoints. EC2 Instance Connect is also often confused as "keyless"; it still relies on SSH and network reachability to port 22 and is not a unified solution for Windows.

Exam Tips: When you see requirements like "no inbound rules," "private subnets," "no SSH/RDP," "centralized session logging," and "works for Linux and Windows at scale," the exam-friendly answer is almost always Systems Manager Session Manager with IAM and S3/CloudWatch logging, plus VPC interface endpoints for private access.
A subscription audio platform distributes long-form live streams as CMAF segments through Amazon CloudFront; each broadcast runs 6–10 hours producing roughly 10,800 two‑second segments stored under the /members/audio/ path, the origin is private and accessible only via CloudFront (origin URL not exposed), the web application authenticates paying users against an internal identity store, and a CloudFront key pair is already issued—what is the simplest and most effective way to restrict access to the segmented content?
Per-object signed URLs would work functionally, but they are not the simplest option for CMAF. A 6–10 hour stream with ~10,800 segments would require the application to continuously generate and deliver signed URLs (and likely re-sign as new segments appear). This increases application complexity, can add playback latency, and complicates player logic. Signed URLs are better suited to a small set of files or one-off downloads.
Signed cookies are the best fit for protecting a large set of objects under a common path like /members/audio/*. After the user authenticates, the app sets CloudFront signed cookies using the existing key pair. CloudFront then authorizes access to all matching segment and playlist objects until the cookie expires. This minimizes operational overhead and aligns with CloudFront’s recommended approach for streaming/segmented content.
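The custom policy carried by the signed cookies is just compact JSON scoping a resource pattern and an expiry. A minimal sketch of building it follows (the distribution domain is a placeholder; in real use this policy string is base64-encoded and signed with the CloudFront private key to produce the CloudFront-Policy and CloudFront-Signature cookie values, a step omitted here):

```python
import json
import time

def build_custom_policy(resource_url: str, expires_epoch: int) -> str:
    """Build a CloudFront custom policy document as compact JSON.

    The signing step (RSA-SHA1 with the CloudFront private key) is omitted;
    this only shows the policy that scopes access to a path pattern.
    """
    policy = {
        "Statement": [
            {
                "Resource": resource_url,
                "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
            }
        ]
    }
    # CloudFront expects the policy serialized without extra whitespace
    return json.dumps(policy, separators=(",", ":"))

# Cover a full 6-10 hour broadcast with a single cookie issuance
expires = int(time.time()) + 10 * 3600
policy_json = build_custom_policy(
    "https://d123.cloudfront.net/members/audio/*", expires
)
print(policy_json)
```

Because the Resource uses a wildcard under /members/audio/, one cookie set authorizes every playlist and segment the player fetches for the session.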
JWT validation with Lambda@Edge can enforce custom authorization logic, but it adds moving parts: Lambda@Edge deployment/replication, per-request execution cost/latency, and operational complexity. It is unnecessary here because CloudFront already provides native private content controls (signed cookies/URLs) and the origin is already private behind CloudFront. Use Lambda@Edge only when you need logic beyond time/path/IP-based policies.
Encrypting the CloudFront distribution URL provides no real access control. Even if the app hides or decrypts the URL at runtime, the client ultimately receives requests that can be copied and replayed. Security through obscurity does not prevent unauthorized access, and KMS does not integrate with CloudFront request authorization in this manner. Proper control is done via CloudFront signed URLs/cookies and a private origin.
Core Concept: This question tests CloudFront private content controls for authenticated users, specifically the choice between CloudFront signed URLs vs signed cookies for protecting many objects (CMAF segments) under a path.

Why the Answer is Correct: Each live stream produces ~10,800 two-second CMAF segment objects under /members/audio/. Generating a signed URL per segment (and refreshing as segments are created) is operationally heavy and adds latency and complexity to the player workflow. CloudFront signed cookies are designed for exactly this use case: authorize a viewer once, then allow access to multiple restricted objects (e.g., all segments and playlists) that match a path pattern, without signing every request URL. The application authenticates the paying user against its identity store, then sets CloudFront signed cookies (Policy, Signature, Key-Pair-Id) using the already-issued CloudFront key pair. The browser/player then requests segment files normally; CloudFront enforces the cookie policy and blocks unauthorized requests.

Key AWS Features / Configurations:
- Use CloudFront signed cookies with a custom policy restricting access to resources like https://d123.cloudfront.net/members/audio/* and with an expiration aligned to session length.
- Keep the origin private (e.g., S3 with Origin Access Control or a custom origin restricted to CloudFront) so bypass is not possible.
- Optionally include IP restrictions in the policy if appropriate (often avoided for mobile users).
- Ensure the cache behavior forwards the required cookies (or uses trusted key groups / trusted signers depending on the signing method) so CloudFront can validate them.

Common Misconceptions: Signed URLs and signed cookies both restrict CloudFront content, so signed URLs can look correct. However, signed URLs are best when you need to protect a small number of objects or when you must distribute a single shareable link. For segmented streaming with thousands of objects, signed cookies are simpler and more effective.

Exam Tips: When you see "many objects under a path" (HLS/DASH/CMAF segments) plus "authenticated users" and "CloudFront key pair already issued," think "signed cookies." Reserve Lambda@Edge/JWT for cases requiring per-request dynamic authorization decisions beyond what signed cookies/policies can express.
A security engineer at a fintech startup finds that neither the corporate IdP (Okta) used for SAML federation into AWS IAM roles nor the Amazon Cognito user pool for a customer-facing application enforces any minimum password length; with 4,000 employees federating to AWS and 200,000 customer accounts in Cognito, a new compliance control mandates a minimum of 12 characters for all user passwords within 7 days—Which combination of actions should the engineer take to implement the required minimum password length across both identity stores? (Choose two.)
Incorrect for the stated population. The IAM account password policy applies only to IAM users in that AWS account (console password for IAM users). It does not affect SAML-federated employees authenticating in Okta, and it does not apply to Amazon Cognito user pool users. It could be useful only if the company also had IAM users with console passwords, which the scenario does not indicate.
Correct. Cognito user pools are the identity store for the customer-facing application, so Cognito is responsible for enforcing password requirements for those 200,000 customer accounts. Updating the user pool password policy to a 12-character minimum directly satisfies the compliance requirement for Cognito-managed users and can be implemented quickly without changing application authorization logic.
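The user pool change is a single API call; a CLI sketch follows (the pool ID is a placeholder, and the complexity flags shown are illustrative). Note that UpdateUserPool replaces the settings you pass, so review other pool configuration before applying it:

```
# Hypothetical user pool ID; raises the minimum password length to 12.
aws cognito-idp update-user-pool \
    --user-pool-id us-east-1_EXAMPLE \
    --policies '{"PasswordPolicy":{"MinimumLength":12,"RequireUppercase":true,"RequireLowercase":true,"RequireNumbers":true,"RequireSymbols":true}}'
```

The new policy applies to password creation and resets going forward; existing passwords are not retroactively invalidated unless the company forces a reset.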
Correct. For 4,000 employees using SAML federation from Okta into AWS IAM roles, Okta performs the authentication and issues SAML assertions. AWS does not validate the employee’s password and cannot enforce its length. Therefore, the minimum password length must be enforced in Okta’s password policy (and ideally paired with MFA), meeting the compliance control for the corporate identity store.
Incorrect. Service Control Policies in AWS Organizations restrict which AWS API actions principals can perform in member accounts; they do not provide a mechanism to enforce password length for IAM users, Cognito users, or external IdPs. SCPs cannot reach into Okta, and they cannot impose Cognito password policy settings. At most, SCPs could prevent changing certain configurations, not enforce password complexity.
Incorrect. IAM policies (including condition keys) control authorization to AWS API actions and resources; they do not validate or enforce password length rules for IAM user passwords, and they cannot apply to Cognito user pool passwords. Cognito password policies are configured on the user pool, not via IAM policy conditions. Similarly, federated users’ passwords are managed by the IdP, not AWS.
Core concept: This question tests where password policies are actually enforced in federated identity vs. native AWS identity stores. Okta (SAML IdP) controls employee authentication, while Amazon Cognito user pools control customer authentication. AWS IAM password policy only applies to IAM users (not federated users), and SCPs/IAM policies cannot impose password-length rules on external IdPs or Cognito passwords. Why the answer is correct: To meet a 12-character minimum across both identity stores within 7 days, the engineer must configure the policy at each system that owns the password. For employees federating via SAML, AWS never sees or validates their Okta passwords; it only consumes SAML assertions. Therefore, the minimum password length must be enforced in Okta’s password policy (Option C). For customers authenticating directly against Cognito, Cognito is the password authority, so the user pool password policy must be updated to require at least 12 characters (Option B). Key AWS features / configurations: In Amazon Cognito user pools, the password policy can be configured (min length, complexity requirements) and applies to user pool users. For corporate SAML federation, AWS IAM roles with SAML trust policies rely on the IdP for authentication; AWS enforces authorization via role permissions, not the IdP’s password rules. This aligns with least privilege and separation of authentication (IdP) from authorization (AWS). Common misconceptions: A frequent trap is assuming the AWS account IAM password policy governs “all AWS access.” It only affects IAM users who sign in with an IAM username/password. Federated users authenticate externally. Another misconception is that SCPs or IAM policies can enforce password length; they can restrict API actions but cannot validate password content for Cognito or an external IdP. Exam tips: Always identify the “source of truth” for credentials. If authentication is federated (SAML/OIDC), enforce password/MFA at the IdP. 
If users authenticate to Cognito, enforce password policy in the user pool. IAM password policy is only for IAM users, and SCPs are guardrails for AWS API permissions—not credential complexity controls.
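The Cognito side of the fix can be sketched as the request you would send to the user pool. This is a minimal illustration: the user pool ID is a placeholder, and the dict mirrors the shape of boto3's `cognito-idp` `update_user_pool` parameters rather than being a definitive configuration.

```python
# Sketch: enforcing a 12-character minimum via the user pool password policy.
# In practice these kwargs would be passed to boto3's
# cognito-idp client: client.update_user_pool(**update_user_pool_kwargs)
update_user_pool_kwargs = {
    "UserPoolId": "us-east-1_EXAMPLE",  # hypothetical pool ID
    "Policies": {
        "PasswordPolicy": {
            "MinimumLength": 12,        # the new 12-character requirement
            "RequireUppercase": True,
            "RequireLowercase": True,
            "RequireNumbers": True,
            "RequireSymbols": True,
        }
    },
}

print(update_user_pool_kwargs["Policies"]["PasswordPolicy"]["MinimumLength"])
```

The Okta half of the requirement has no AWS API equivalent; it is configured in the IdP's own password policy settings.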
A security engineer is troubleshooting an application on an Amazon EC2 instance that uses an instance profile role named WebForensicsRole in account 111122223333 to call sts:AssumeRole for a cross-account role named CorpForensicsAccessRole in account 444455556666 with a session duration of 3600 seconds, but the call fails with the error message "An error occurred (AccessDenied) when calling the AssumeRole operation"; which combination of steps should the engineer take to resolve this error? (Choose two.)
Correct. The calling role (WebForensicsRole) must have an identity-based policy that allows sts:AssumeRole on the specific target role ARN in account 444455556666. Without this allow, STS denies the request regardless of the target role’s trust policy. This is one of the two mandatory permission components for cross-account role assumption.
Incorrect. AmazonSSMManagedInstanceCore enables Systems Manager capabilities (SSM Agent, Session Manager, Run Command) and is unrelated to authorizing sts:AssumeRole. Attaching it might help manage or troubleshoot the instance, but it does not grant permission to assume CorpForensicsAccessRole and will not resolve an AssumeRole AccessDenied error.
Correct. The target role (CorpForensicsAccessRole) must include a trust policy that allows the principal arn:aws:iam::111122223333:role/WebForensicsRole to call sts:AssumeRole. This trust relationship is required for cross-account access. Even if WebForensicsRole has sts:AssumeRole permission, the call fails if the target role does not trust it.
Incorrect. WebForensicsRole’s trust policy should allow ec2.amazonaws.com to assume it (so the instance profile can deliver credentials), but the scenario states the application is already using the instance profile role to attempt AssumeRole. The failing operation is the cross-account AssumeRole into CorpForensicsAccessRole, which is governed by A and C, not by modifying WebForensicsRole’s trust policy.
Incorrect. Directing the call to a specific regional STS endpoint (such as us-east-1) is not the typical cause of an AccessDenied error. Endpoint selection issues usually manifest as network/endpoint errors, signature issues, or service unavailability. Authorization failures for AssumeRole are almost always due to missing caller permissions, missing/incorrect trust policy, or explicit denies/conditions.
Core Concept: This question tests cross-account access with AWS STS AssumeRole. For AssumeRole to succeed, two independent permission checks must pass: (1) the caller’s identity-based policy must allow sts:AssumeRole on the target role ARN, and (2) the target role’s trust policy (resource-based policy on the role) must trust the caller principal.

Why the Answer is Correct: A is required because WebForensicsRole (the calling role via the EC2 instance profile) must be explicitly permitted to call sts:AssumeRole against arn:aws:iam::444455556666:role/CorpForensicsAccessRole. Without this identity-based allow, STS returns AccessDenied even if the target role trusts the caller. C is required because CorpForensicsAccessRole must trust the principal arn:aws:iam::111122223333:role/WebForensicsRole in its trust policy (the role’s AssumeRolePolicyDocument). If the trust policy does not allow that principal (or an allowed condition such as an external ID), STS returns AccessDenied even if the caller has an allow statement.

Key AWS Features / Configurations:
- Identity-based policy on WebForensicsRole: allow action sts:AssumeRole with Resource set to the target role ARN.
- Trust policy on CorpForensicsAccessRole: Principal set to the source role ARN (or source account with conditions), Action sts:AssumeRole.
- Optional but common: conditions like sts:ExternalId, aws:PrincipalArn, or MFA requirements; the requested session duration must also be within the MaxSessionDuration of the target role.

Common Misconceptions:
- Confusing the EC2 service trust (needed for the instance profile role) with cross-account trust (needed on the target role). The EC2 trust affects how the instance gets credentials, not whether it can assume a different role.
- Thinking STS endpoint region selection causes AccessDenied. Endpoint issues typically cause connectivity or endpoint-related errors, not authorization failures.
Exam Tips: For cross-account AssumeRole, always verify “two-sided permissions”: caller policy AND target trust policy. If you see AccessDenied on AssumeRole, check both documents first, then look for explicit denies (SCPs, permission boundaries, session policies) and condition mismatches (ExternalId, MFA, source IP/VPC endpoint conditions).
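The "two-sided permissions" described above can be sketched as the pair of policy documents, using the role names from the scenario. Both are plain dicts in the shape of IAM JSON policies; this is an illustration of the structure, not a complete production policy (conditions such as sts:ExternalId are omitted).

```python
# The two documents that must BOTH allow the operation for cross-account
# AssumeRole to succeed.
TARGET_ROLE_ARN = "arn:aws:iam::444455556666:role/CorpForensicsAccessRole"
CALLER_ROLE_ARN = "arn:aws:iam::111122223333:role/WebForensicsRole"

# 1) Identity-based policy attached to WebForensicsRole (the caller side).
caller_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": TARGET_ROLE_ARN,   # scoped to the specific target role
    }],
}

# 2) Trust policy on CorpForensicsAccessRole (the target side).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": CALLER_ROLE_ARN},  # trusts the calling role
        "Action": "sts:AssumeRole",
    }],
}
```

If either document is missing its allow, STS returns the AccessDenied error from the scenario, which is why troubleshooting should always check both.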
A fintech company's security engineer must ensure that a temporary vendor IAM user can access only the Amazon DynamoDB console to view and edit tables in a specific account and must be prevented from using any other AWS services under any circumstances. The vendor IAM user might later be added to one or more IAM groups that grant broad permissions (for example, AdministratorAccess), but the user's effective permissions must remain limited to DynamoDB only. Which approach should the security engineer take to meet these requirements?
An inline policy that allows DynamoDB access can grant the needed permissions today, but it does not prevent the user from gaining additional permissions later through group membership or other attached policies. If the user is later added to a group with AdministratorAccess, the user could access other services. Inline policies are not a guardrail; they are just one source of identity-based permissions.
A permissions boundary that allows only DynamoDB actions sets a hard maximum on what the user can do, regardless of any future group memberships or attached policies. Even if AdministratorAccess is attached later, actions outside DynamoDB will still be denied because they are not permitted by the boundary. This directly satisfies the requirement to prevent use of any other AWS services under any circumstances.
Putting the user in a DynamoDB-only group is a common way to grant access, but it is not durable against future changes. If the user is later added to another group with broader permissions (for example, AdministratorAccess), the user’s effective permissions expand. Group-based permission management is additive and does not inherently enforce a maximum permission boundary.
A role with explicit denies could restrict what the role can do, but it relies on the vendor always assuming that role and using only those role credentials. It does not inherently prevent the underlying IAM user from using other permissions if they are later granted directly or via groups. The requirement is to constrain the IAM user’s effective permissions regardless of future attachments, which is better met by a permissions boundary.
Core Concept: This question tests IAM effective permissions and how to enforce a maximum permission set regardless of future group memberships. The key concept is an IAM permissions boundary, which defines the upper limit of permissions that an IAM principal (user or role) can ever receive.

Why the Answer is Correct: A permissions boundary policy that allows only DynamoDB actions ensures the vendor IAM user can never use other AWS services, even if later added to groups with broad permissions like AdministratorAccess. IAM evaluation requires that an action is allowed by the identity-based policy AND is within the permissions boundary. Therefore, any non-DynamoDB action will be implicitly denied by the boundary (not allowed within the boundary), preventing privilege escalation via group membership.

Key AWS Features: Permissions boundaries are managed policies attached to a user or role to set a maximum permissions guardrail. They are commonly used for delegated administration and to constrain temporary or vendor access. To meet “console to view and edit tables,” the boundary should allow relevant DynamoDB actions (e.g., dynamodb:DescribeTable, dynamodb:UpdateTable, dynamodb:PutItem, etc.) and typically also allow required read-only actions for console usability (often via AWS-managed DynamoDB policies or a custom least-privilege set). Resource scoping can further restrict access to specific tables/ARNs in the account.

Common Misconceptions: Inline policies or group policies (Options A/C) grant permissions but do not cap future permissions; adding AdministratorAccess later would override the intent. An explicit deny strategy in a role (Option D) can work in theory, but it depends on the vendor consistently assuming the role and does not prevent the IAM user itself from using other permissions if they are granted directly or via groups.
Exam Tips: When a requirement says “must remain limited even if later added to admin groups,” look for permissions boundary (or SCP at the organization level). Boundaries restrict maximum permissions for a principal; they do not grant access by themselves—you still need identity-based policies to grant the DynamoDB permissions within that boundary.
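A boundary for this scenario can be sketched as follows. The policy document caps the user's effective permissions; a separate identity policy still has to grant the DynamoDB access within the boundary. The user name and policy ARN are illustrative placeholders.

```python
# Sketch: a DynamoDB-only permissions boundary document. Any action outside
# dynamodb:* falls outside the boundary and is implicitly denied, even if a
# later-attached group policy (e.g., AdministratorAccess) allows it.
boundary_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "dynamodb:*",  # only DynamoDB actions are inside the boundary
        "Resource": "*",         # could be narrowed to specific table ARNs
    }],
}

# Kwargs in the shape of boto3's iam put_user_permissions_boundary call,
# which attaches the boundary to the vendor user (names are hypothetical):
put_boundary_kwargs = {
    "UserName": "vendor-temp-user",
    "PermissionsBoundary": "arn:aws:iam::123456789012:policy/DynamoDBOnlyBoundary",
}
```

Note the boundary grants nothing by itself: the vendor still needs an identity-based policy allowing the DynamoDB actions, and only the intersection of the two is effective.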
A healthcare analytics company is building a new AWS Organizations structure with 20 AWS accounts across 4 OUs (prod, dev, staging, shared) and must integrate 3,200 workforce users from an external SAML 2.0 IdP (Okta) so that, from the management account, the company can centrally manage access to all accounts and to two internal SAML applications; the company requires MFA to be enforced by the IdP, automatic user and group provisioning, and least-privilege role-based access across accounts; which solution will meet these requirements?
AWS Directory Service for Microsoft AD can integrate with SAML/ADFS-style setups, but it is not the right control plane for centrally managing access to multiple AWS accounts via AWS Organizations. Also, you cannot attach IAM policies directly to directory users; authorization in AWS is done through IAM roles/policies. This option does not meet the centralized multi-account RBAC and SAML app assignment requirements cleanly.
IAM Identity Center is purpose-built for centralized workforce access across AWS Organizations accounts and supported business applications. Using Okta as the external identity source with SAML for authentication and MFA, plus SCIM for automatic user and group provisioning, directly satisfies the federation, MFA, and lifecycle management requirements. Permission sets let the company define least-privilege access once and assign it to groups across accounts and OUs, which scales well for 3,200 users. IAM Identity Center also supports assigning access to SAML-enabled applications, so the two internal SAML apps can be managed from the same centralized access platform.
Configuring SAML federation separately in each of the 20 accounts creates significant administrative overhead and inconsistent governance. It also undermines the requirement to manage access “from the management account” centrally across OUs. Additionally, federated users are not IAM users; you grant access by assuming roles, not by attaching policies directly to “federated users.” This approach does not scale well and is error-prone.
Amazon Cognito is designed mainly for customer-facing application authentication/authorization (web/mobile) and federation to social/SAML IdPs for app users. It is not the standard solution for workforce access to AWS accounts in an AWS Organizations environment, and it does not provide the same centralized multi-account permission-set model as IAM Identity Center. Managing AWS account access for 3,200 employees via Cognito would be atypical and complex.
Core Concept: This question tests AWS IAM Identity Center integration with AWS Organizations for centralized workforce access, including external SAML 2.0 IdP federation, SCIM provisioning, and least-privilege authorization via permission sets.

Why the Answer is Correct: IAM Identity Center is the AWS service designed to centrally manage workforce access across AWS Organizations accounts and supported applications. By configuring Okta as the external identity source using SAML 2.0 for authentication and SCIM for automatic user and group provisioning, the company can enforce MFA at the IdP, automate lifecycle management, and assign access based on groups. Permission sets are then used to provision least-privilege IAM roles into target accounts, and application assignments can be used for the two internal SAML applications.

Key AWS Features:
- IAM Identity Center with AWS Organizations integration: centrally manage access to member accounts and applications across the organization.
- Permission sets: reusable access definitions that create IAM roles in target accounts and support AWS managed policies, customer managed policies, and inline policies.
- Group-based assignments: map provisioned Okta groups to permission sets and accounts/OUs for scalable RBAC.
- External IdP with SAML: authentication and MFA are handled by Okta, while AWS trusts the SAML assertions.
- SCIM provisioning: automatic provisioning and deprovisioning of users and groups into IAM Identity Center.
- Application assignments: centralized access management for supported SAML applications through the IAM Identity Center user portal.

Common Misconceptions: Per-account SAML federation (option C) may appear to work, but it creates unnecessary operational overhead and does not provide centralized governance across the organization. Directory Service (option A) is for Microsoft AD use cases and does not allow IAM policies to be attached directly to directory users. Cognito (option D) is primarily intended for application end users rather than enterprise workforce access to AWS accounts.

Exam Tips: When a question includes AWS Organizations, many workforce users, an external SAML IdP, SCIM provisioning, centralized account access, and least-privilege RBAC, IAM Identity Center is the best-practice answer. Also remember that IAM Identity Center can be administered from the management account or a delegated administrator account, not only the management account.
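The permission-set and group-assignment model can be sketched as the two requests an administrator would make. The dicts mirror the shape of boto3's `sso-admin` `create_permission_set` and `create_account_assignment` parameters; every ARN, ID, and name here is an illustrative placeholder.

```python
# Sketch: define a least-privilege permission set once, then assign it to an
# Okta-provisioned group (synced via SCIM) in a target member account.
INSTANCE_ARN = "arn:aws:sso:::instance/ssoins-EXAMPLE"  # hypothetical

# sso-admin create_permission_set(...)
create_permission_set_kwargs = {
    "InstanceArn": INSTANCE_ARN,
    "Name": "Analysts-ReadOnly",  # hypothetical permission set name
    "SessionDuration": "PT8H",    # ISO 8601 duration for the role session
}

# sso-admin create_account_assignment(...): the same permission set can be
# assigned to many accounts, which is how one definition scales across OUs.
create_account_assignment_kwargs = {
    "InstanceArn": INSTANCE_ARN,
    "TargetId": "111122223333",   # member account ID (hypothetical)
    "TargetType": "AWS_ACCOUNT",
    "PrincipalType": "GROUP",     # group-based RBAC, not per-user grants
    "PrincipalId": "g-okta-analysts",  # hypothetical Identity Center group ID
    "PermissionSetArn": "arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE",
}
```

Assigning by group rather than by user is what keeps this manageable at 3,200 users: SCIM keeps group membership current, and the assignment never has to change.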
A media archival company must retain 4.5 PB of raw surveillance footage for 9 years to satisfy legal hold requirements; the compliance team instructs the security architect to implement a WORM control so that no user, including the root account, can modify or delete any footage during the retention period while minimizing storage costs and operational overhead. Which solution meets this requirement most cost-effectively?
Correct. S3 Object Lock in compliance mode provides true WORM retention: objects cannot be overwritten or deleted by any user, including the root account, until the retention period ends. This meets strict legal hold requirements. It also keeps operations simple (standard S3 APIs) and can be combined with lifecycle transitions to low-cost archival tiers like S3 Glacier Deep Archive for 9-year retention at minimal storage cost.
Incorrect. S3 Object Lock in governance mode is not absolute WORM. Users with the s3:BypassGovernanceRetention permission (or equivalent privileged access) can override retention and delete/modify objects. That fails the requirement that “no user, including the root account,” can delete or modify footage during the retention period. Governance mode is intended for internal controls, not immutable compliance retention.
Not the best answer for this scenario. S3 Glacier Vault Lock can enforce compliance controls on a Glacier vault, but it uses the legacy Glacier vault model and separate APIs, increasing operational overhead compared to S3. The question asks to minimize overhead and cost-effectively retain 4.5 PB; S3 with Object Lock plus lifecycle to Glacier Deep Archive typically provides simpler management and comparable or better cost optimization within one service.
Incorrect. You cannot apply S3 Glacier Vault Lock to objects that have been transitioned to S3 Glacier storage classes via S3 Lifecycle. Vault Lock applies to S3 Glacier vaults (legacy), not to S3 buckets or S3 Glacier/Deep Archive storage classes. For S3 objects, the correct immutability mechanism is S3 Object Lock (compliance mode) rather than Glacier Vault Lock.
Core Concept: This question tests immutable (WORM) retention for long-term archives in AWS. The primary services/features are Amazon S3 Object Lock (WORM) and S3 storage classes (especially S3 Glacier Deep Archive) to minimize cost while meeting strict legal hold requirements.

Why the Answer is Correct: S3 Object Lock in compliance mode enforces WORM such that objects cannot be deleted or overwritten until the retention period expires, and this protection applies even to highly privileged identities (including the root account). This directly matches the requirement: “no user, including the root account, can modify or delete any footage during the retention period.” Using S3 also minimizes operational overhead compared to managing separate archival APIs, and you can combine Object Lock with lifecycle transitions to low-cost archival tiers (e.g., S3 Glacier Deep Archive) while retaining the immutability controls.

Key AWS Features:
- S3 Object Lock (compliance mode): enforces retention that cannot be bypassed by any IAM principal. Supports retention periods and legal holds.
- The bucket must have Object Lock enabled at creation, which requires versioning to preserve object versions.
- Lifecycle policies can transition locked objects to S3 Glacier/Glacier Deep Archive for cost optimization while maintaining retention controls.
- Bucket policies can enforce guardrails (e.g., require Object Lock headers on PUT, deny unencrypted uploads, restrict access paths), but the WORM guarantee comes from Object Lock compliance mode.

Common Misconceptions:
- Governance mode (option B) sounds like “retention,” but privileged users with special permissions can bypass it, which violates the “including root” requirement.
- Glacier Vault Lock (option C) is a WORM-like control, but it applies to legacy S3 Glacier vaults (separate from S3 buckets) and increases operational complexity; the question also emphasizes minimizing overhead and cost-effectively managing massive data, which aligns better with S3 + lifecycle.
- Mixing S3 lifecycle to Glacier with Vault Lock (option D) is conceptually incorrect because Vault Lock applies to Glacier vaults, not to S3 Glacier storage classes within an S3 bucket.

Exam Tips: When you see “WORM,” “legal hold,” and “even root cannot delete,” think S3 Object Lock in compliance mode. If the question also emphasizes cost, pair it mentally with lifecycle transitions to S3 Glacier Deep Archive. Be cautious: “governance” implies bypass capability; “compliance” implies no bypass.
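The combination of compliance-mode Object Lock and a lifecycle transition can be sketched as the three requests involved. The dicts mirror the shape of boto3's `s3` client parameters named in the comments; the bucket name and the 30-day transition threshold are illustrative choices, not requirements from the scenario.

```python
# Sketch of the full setup: lock-enabled bucket, 9-year compliance-mode
# default retention, and a transition to Glacier Deep Archive for cost.
bucket = "surveillance-footage-archive"  # hypothetical bucket name

# s3.create_bucket(...): Object Lock must be enabled at bucket creation.
create_bucket_kwargs = {
    "Bucket": bucket,
    "ObjectLockEnabledForBucket": True,
}

# s3.put_object_lock_configuration(...): default retention for new objects.
object_lock_kwargs = {
    "Bucket": bucket,
    "ObjectLockConfiguration": {
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 9}},
    },
}

# s3.put_bucket_lifecycle_configuration(...): move footage to the cheapest
# archival tier while the lock retention continues to apply.
lifecycle_kwargs = {
    "Bucket": bucket,
    "LifecycleConfiguration": {
        "Rules": [{
            "ID": "to-deep-archive",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
        }],
    },
}
```

The design point worth noting: retention and cost optimization are independent controls here, so the transition does not weaken the WORM guarantee.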
An IoT analytics platform runs a fleet of short-lived containers that, on each device registration, call AWS KMS CreateGrant to allow the device identity to use a customer managed key and then, within 100 ms, attempt to encrypt a 1-KB payload at a sustained rate of 1,200 requests per second, but during performance tests the first Encrypt call intermittently fails with AccessDeniedException immediately after the grant is created; what should the security specialist recommend to eliminate these errors?
Incorrect. Retrying every 2 minutes does not meet the application's requirement to encrypt within 100 ms after device registration. Although eventual consistency might resolve by then, this approach treats the symptom rather than using the KMS feature designed for immediate grant use. It would also introduce severe latency and operational inefficiency for a high-throughput workload.
Incorrect. This option describes a client-supplied token being passed into CreateGrant, but KMS grant tokens are not arbitrary user-provided values in this workflow. The relevant token is the grant token returned by the CreateGrant response, which is then supplied to subsequent KMS operations through GrantTokens. Therefore, this option misstates how KMS grant tokens are obtained and used.
Incorrect. A grant name is only an identifier for the grant and does not provide any authorization shortcut or consistency bypass. Supplying a grant name in an Encrypt request cannot make a newly created grant effective sooner, because KMS expects grant tokens in the GrantTokens parameter, not grant names. This option confuses metadata used to label a grant with the token used to activate immediate use of that grant.
Correct. AWS KMS CreateGrant returns a grant token that allows the new grant to be used immediately before the grant has fully propagated through the service. By passing that returned token to clients and having them include it in the Encrypt request's GrantTokens parameter, the platform avoids intermittent AccessDeniedException caused by eventual consistency. This is the purpose-built KMS mechanism for low-latency workflows that create a grant and then use the key within milliseconds.
Core concept: This question is about AWS KMS grant eventual consistency. After a CreateGrant call succeeds, the new grant might not be visible immediately across KMS authorization paths, so an immediate Encrypt call can intermittently fail with AccessDeniedException. AWS KMS solves this by returning a grant token from CreateGrant, which callers can pass in the GrantTokens parameter of subsequent KMS operations to use the grant right away.

Why correct: The engineering team should capture the grant token returned by CreateGrant and provide it to the component that performs Encrypt. Including that token in the Encrypt request tells KMS to honor the newly created grant even if it has not fully propagated yet. This directly eliminates the race condition without adding unacceptable latency.

Key features: CreateGrant returns a GrantToken value; Encrypt supports a GrantTokens request parameter; grant tokens are specifically designed to bridge the propagation delay of newly created grants. This is the intended mechanism for short-lived, latency-sensitive workflows that create and use grants immediately.

Common misconceptions: A grant name is not a grant token and has no effect on authorization timing. Clients do not invent arbitrary tokens for CreateGrant in normal KMS usage; the relevant token is the one returned by KMS. Long retry windows are unnecessary and violate the workload's 100 ms requirement.

Exam tips: When you see CreateGrant followed immediately by Encrypt or Decrypt and intermittent AccessDeniedException, think of KMS eventual consistency and grant tokens. The standard fix is to pass the grant token returned by CreateGrant into the next KMS API call using GrantTokens.
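The grant-token flow can be sketched as the request/response shapes involved. With boto3 this would be `kms.create_grant(...)` followed by `kms.encrypt(...)`; here the CreateGrant response is simulated so the sketch is runnable, and the key ARN, principal, and token values are placeholders.

```python
# Sketch: CreateGrant returns a GrantToken, and the very next Encrypt call
# passes it in GrantTokens so KMS honors the grant before it propagates.
KEY_ID = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"  # hypothetical

create_grant_kwargs = {
    "KeyId": KEY_ID,
    "GranteePrincipal": "arn:aws:iam::111122223333:role/DeviceRole",  # hypothetical
    "Operations": ["Encrypt"],
}

# Simulated response shape: KMS returns a GrantId and a GrantToken.
create_grant_response = {"GrantId": "abcd1234", "GrantToken": "placeholder-token"}

encrypt_kwargs = {
    "KeyId": KEY_ID,
    "Plaintext": b"1-KB device payload",
    # Passing the freshly returned token bridges the propagation delay:
    "GrantTokens": [create_grant_response["GrantToken"]],
}
```

The fix is purely a data-flow change: the component that calls CreateGrant must hand the token to the component that calls Encrypt, rather than relying on the grant being visible already.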
A technology company needs a mechanism to transparently encrypt Amazon EBS volumes for an Auto Scaling fleet of 2,500 Amazon EC2 instances across 3 AWS Regions and 8 AWS accounts without developers ever handling encryption keys. The solution must scale automatically without ongoing administration, and the organization must be able to revoke access by deleting the encryption keys immediately, meeting a revocation RTO of under 1 minute. Which solution meets these requirements?
Incorrect. AWS KMS does not allow PendingWindowInDays set to 0 for ScheduleKeyDeletion; the waiting period is enforced (minimum 7 days, up to 30). Even disabling a KMS key is not the same as immediate cryptographic erasure. Therefore, it cannot meet the stated revocation RTO of under 1 minute across the fleet.
Correct. With KMS imported key material, EBS can transparently encrypt volumes at scale using KMS-integrated envelope encryption, keeping developers away from keys. If immediate revocation is required, DeleteImportedKeyMaterial removes the cryptographic material from KMS right away, effectively preventing decryption and meeting a sub-minute revocation RTO without ongoing administration.
Incorrect. CloudHSM provides customer-controlled HSMs, but EBS encryption integrates with AWS KMS, not directly with CloudHSM keys via PKCS#11 for EBS volume encryption. Using CloudHSM would add significant operational overhead (clusters, HA, backups, lifecycle) and does not satisfy the “transparent EBS encryption” integration requirement as cleanly as KMS.
Incorrect. Systems Manager Parameter Store is not designed to be a cryptographic key management system for EBS encryption. Storing keys in Parameter Store would require applications or developers to retrieve and handle key material, violating the requirement that developers never handle encryption keys. It also does not integrate with EBS encryption the way KMS does.
Core Concept: This question tests AWS-native, at-scale encryption for Amazon EBS using AWS KMS, specifically how to achieve rapid cryptographic revocation without developers handling keys. The key distinction is between disabling/scheduling deletion of a KMS key versus immediately removing key material.

Why the Answer is Correct: Using AWS KMS with imported key material (BYOK) allows the organization to use KMS for transparent EBS encryption (including Auto Scaling fleets) while retaining the ability to revoke access quickly by deleting the imported key material. When you call DeleteImportedKeyMaterial, KMS immediately loses the cryptographic material needed to decrypt any data keys previously protected by that KMS key. This provides near-immediate cryptographic erasure, meeting an RTO under 1 minute, and requires no developer key handling because EBS integrates with KMS for envelope encryption.

Key AWS Features:
- EBS encryption integrates with KMS: instances and developers never see plaintext keys; EBS uses data keys protected by the KMS key.
- Multi-account/multi-Region scaling: use KMS keys per Region (EBS encryption is Region-scoped) and share via IAM/key policies; automation can standardize deployment across 8 accounts and 3 Regions.
- Imported key material lifecycle: ImportKeyMaterial + DeleteImportedKeyMaterial enables immediate revocation; optional re-import if needed later.

Common Misconceptions:
- “ScheduleKeyDeletion with 0 days” is not possible; KMS enforces a minimum waiting period (7–30 days). Disabling a key also does not guarantee immediate revocation in the way cryptographic erasure does.
- CloudHSM sounds like the strongest control, but EBS does not use CloudHSM keys directly for EBS encryption; KMS is the integrated service for EBS.
- Parameter Store is not a key management/encryption authority for EBS and would require applications/developers to handle keys, violating requirements.
Exam Tips: For AWS-managed services like EBS, RDS, S3 SSE-KMS, the default answer is usually KMS. If the question demands immediate revocation/crypto-shredding, look for “imported key material” and “DeleteImportedKeyMaterial” rather than key deletion scheduling. Also remember EBS encryption is Region-specific, so plan keys per Region and automate policy/permissions across accounts.
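The imported-key-material lifecycle can be sketched as the three KMS requests involved. The dicts mirror the shape of the boto3 `kms` calls named in the comments; the key ARN is a placeholder, and the wrapped key material is elided because it is generated and wrapped outside AWS.

```python
# Sketch: import externally generated key material, then revoke it in
# sub-minute time with DeleteImportedKeyMaterial (unlike ScheduleKeyDeletion,
# which enforces a 7-30 day waiting period).
KEY_ID = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"  # hypothetical

# kms.get_parameters_for_import(...): returns a wrapping key + import token.
get_params_kwargs = {
    "KeyId": KEY_ID,
    "WrappingAlgorithm": "RSAES_OAEP_SHA_256",
    "WrappingKeySpec": "RSA_2048",
}

# kms.import_key_material(...): upload the wrapped key material.
import_kwargs = {
    "KeyId": KEY_ID,
    "ImportToken": b"...",           # from GetParametersForImport (elided)
    "EncryptedKeyMaterial": b"...",  # key wrapped outside AWS (elided)
    "ExpirationModel": "KEY_MATERIAL_DOES_NOT_EXPIRE",
}

# kms.delete_imported_key_material(...): immediate cryptographic revocation.
revoke_kwargs = {"KeyId": KEY_ID}
```

After the delete call, the KMS key remains but cannot decrypt the EBS data keys it protected, which is what makes the sub-minute revocation RTO achievable; re-importing the same material later can restore access if needed.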
An enterprise operates 28 AWS accounts in a single AWS Organizations organization and requires that every Amazon EC2 instance (Amazon Linux 2 and Windows Server 2019) runs the CorpShield EDR agent version 3.8 within 15 minutes of launch across all Regions; a central tools account publishes the golden AMIs used by business units, and AWS Systems Manager is already enabled organization-wide for software inventory and patching; a security engineer must implement a solution that continuously detects any EC2 managed instances that do not have the agent and automatically installs the agent when it is not present, without replacing existing instances or relying on manual targeting. Which solution will meet these requirements?
Preinstalling the agent in golden AMIs is a strong preventive measure for future instances, and restricting launches to approved AMIs can improve standardization. However, this does not continuously detect whether existing instances are missing the agent or whether the agent is later removed or becomes unhealthy. The requirement explicitly calls for automatic installation on instances that do not have the agent, including existing instances, without replacement. Tagging AMIs or instances does not itself provide compliance evaluation or remediation.
Systems Manager Patch Manager is designed for patch compliance and deployment based on patch baselines and maintenance schedules, including some application patching scenarios. However, it is fundamentally a scheduled process and does not provide the near-real-time, event-driven detect-and-remediate loop required to ensure the EDR agent is installed within 15 minutes of instance launch across all Regions. It also does not naturally target only newly noncompliant instances based on compliance events, so it is a weaker fit than a Config plus EventBridge plus SSM remediation pattern. Even if the agent package could be represented in a baseline, the option still fails the continuous automatic remediation requirement as stated.
This option creates a closed-loop compliance process that continuously identifies instances missing the required EDR agent and remediates them automatically. AWS Config can be used centrally across the organization to evaluate instance compliance based on installed software information collected by Systems Manager Inventory, and EventBridge can react when an instance becomes NON_COMPLIANT. A Lambda function can then call Systems Manager Run Command against the specific affected instance, which avoids manual targeting and does not require replacing existing instances. This is the only option that clearly addresses continuous detection, automatic remediation, organization-wide scope, and post-launch enforcement within the stated operational model.
Systems Manager Distributor is useful for packaging and distributing third-party software such as an EDR installer in a standardized way. But the option only describes selecting all instances and installing the package with Run Command, which is a one-time or manually initiated deployment approach rather than continuous compliance monitoring. It does not include any mechanism to detect when a new instance launches without the agent or when the agent is later missing. Without an automated compliance trigger and targeted remediation workflow, it does not meet the stated requirements.
Core Concept: This question tests continuous compliance detection and automated remediation at scale across AWS Organizations using AWS Config + event-driven automation + AWS Systems Manager. The key is to detect managed instances missing required software and remediate without rebuilding instances or manually targeting them.

Why the Answer is Correct: Option C provides an organization-wide, continuous control loop: AWS Config evaluates instances for required applications and flags NON_COMPLIANT resources. An EventBridge rule can react to compliance state changes and invoke a Lambda function that triggers Systems Manager Run Command to install the CorpShield EDR agent on the specific noncompliant instance. This meets the requirements to (1) continuously detect drift, (2) automatically remediate, (3) avoid replacing existing instances, and (4) avoid manual targeting because the instance ID comes from the compliance event. With Systems Manager already enabled org-wide, Run Command can execute quickly after launch; the 15-minute requirement is achievable with near-real-time eventing.

Key AWS Features:
- AWS Config multi-account/multi-Region enablement via AWS Organizations and aggregators for centralized visibility.
- Managed rule ec2-managedinstance-applications-required (or equivalent application inventory compliance rule) leveraging Systems Manager Inventory data to evaluate installed applications.
- EventBridge integration with AWS Config compliance change notifications.
- Automated remediation pattern: EventBridge -> Lambda -> SSM Run Command (or Automation) with least-privilege IAM and cross-account execution (e.g., delegated admin, assumed roles).

Common Misconceptions:
- “Golden AMIs + SCP” (Option A) enforces future launches but does not continuously detect/remediate existing instances and does not guarantee installation within 15 minutes if instances are launched from other sources or already running.
- Patch Manager (Option B) is for OS/patch compliance and scheduled maintenance windows; it is not ideal for immediate, event-driven installation of third-party agents.
- Distributor + Run Command (Option D) helps package distribution, but selecting “all instances” is manual targeting and does not inherently provide continuous detection or auto-remediation on drift.

Exam Tips: For requirements that say “continuously detects” and “automatically remediates,” look for AWS Config + EventBridge + SSM Automation/Run Command. For org-wide scale, prefer AWS Organizations integration and event-driven remediation over scheduled windows or one-time fleet actions.
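The remediation Lambda at the center of this loop can be sketched as follows. The event shape follows the general pattern of an AWS Config compliance-change notification delivered via EventBridge; the SSM document name and parameter are hypothetical, and in production the returned kwargs would be passed to boto3's `ssm` `send_command`.

```python
# Sketch: parse a Config compliance-change event and build a Run Command
# request targeting only the noncompliant instance -- no manual targeting,
# because the instance ID comes from the event itself.
def build_remediation(event):
    detail = event["detail"]
    if detail["newEvaluationResult"]["complianceType"] != "NON_COMPLIANT":
        return None  # only act on newly noncompliant resources
    instance_id = detail["resourceId"]
    # Kwargs in the shape of boto3's ssm send_command call:
    return {
        "InstanceIds": [instance_id],
        "DocumentName": "CorpShield-InstallAgent",  # hypothetical SSM document
        "Parameters": {"version": ["3.8"]},
    }

# Simulated event in the general shape Config publishes to EventBridge:
sample_event = {
    "detail": {
        "resourceId": "i-0abc123def456",
        "newEvaluationResult": {"complianceType": "NON_COMPLIANT"},
    }
}
print(build_remediation(sample_event)["InstanceIds"])
```

Because the function ignores COMPLIANT transitions and acts on exactly one resource per event, the loop stays idempotent and scoped, which is what keeps this pattern safe to run continuously across 28 accounts.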