
Simulate the real exam experience with 65 questions and a 170-minute time limit. Practice with AI-verified answers and detailed explanations.
Powered by AI
Each answer is verified by 3 leading AI models to ensure maximum accuracy. Get detailed explanations for every option and in-depth question analysis.
While reviewing an AWS CloudFormation template for a payments microservice, a security engineer finds that a parameter named PaymentApiToken exposes a production API token in plaintext as its default value; the token is referenced 12 times throughout the template (for Lambda environment variables, API Gateway headers, and an ECS task definition), and the engineer must remove plaintext from the template, preserve the ability to reference the value in all 12 locations during stack operations, ensure the secret is encrypted at rest and never appears in stack events or logs, and also support automatic rotation every 60 days; which solution will meet these requirements in the MOST secure way?
Systems Manager Parameter Store SecureString does provide KMS-backed encryption at rest and can be referenced from CloudFormation by using the ssm-secure dynamic reference syntax. However, Parameter Store is not the primary AWS service for managed secret rotation, and automatic rotation every 60 days would require custom automation rather than a native built-in rotation workflow. Because the question explicitly requires automatic rotation and asks for the MOST secure solution, Secrets Manager is the stronger fit. Parameter Store is acceptable for encrypted configuration values, but it is less complete for full secret lifecycle management.
AWS Secrets Manager is the AWS service designed specifically for storing and managing secrets such as API tokens. It encrypts the secret at rest with AWS KMS and supports native automatic rotation, which directly satisfies the requirement to rotate the token every 60 days. CloudFormation dynamic references to Secrets Manager let the template reuse the same secret value in many properties without hardcoding it as plaintext in the template. This is the most secure and operationally appropriate option among the choices because it combines managed secret storage, access control, auditing, and rotation in one service.
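As a sketch of how this looks in practice (the secret name and surrounding resource are assumptions, not part of the question), a CloudFormation dynamic reference resolves the secret at deploy time so the plaintext never enters the template:

```yaml
# Hypothetical illustration: assumes a Secrets Manager secret named
# "prod/payments/api-token" that stores the token as a plain string.
Resources:
  PaymentsFunction:
    Type: AWS::Lambda::Function
    Properties:
      # ...handler, runtime, and code omitted...
      Environment:
        Variables:
          # Resolved by CloudFormation during stack operations; the token
          # value does not appear in the template body or stack events.
          PAYMENT_API_TOKEN: '{{resolve:secretsmanager:prod/payments/api-token}}'
```

The same `{{resolve:secretsmanager:...}}` string can be repeated in the other referencing properties (API Gateway headers, the ECS task definition), while the 60-day rotation is configured once on the secret itself rather than in each reference.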
Amazon DynamoDB is not a supported CloudFormation dynamic reference source for retrieving secrets in template properties. Although DynamoDB can encrypt data at rest, it is a general-purpose NoSQL database rather than a managed secret store, so it does not provide native secret rotation, secret version staging, or secret-specific access patterns. Using DynamoDB for a production API token would add unnecessary application and operational complexity. It also does not align with AWS best practices for storing and rotating secrets used by infrastructure and applications.
Amazon S3 is not a supported CloudFormation dynamic reference source for secret resolution in templates. Even if the object were encrypted with SSE-S3, S3 is object storage rather than a secret management service and does not provide native secret rotation or version-stage semantics for credentials and tokens. SSE-S3 also uses S3-managed keys and does not offer the same secret-focused controls and workflows as Secrets Manager. This makes S3 a weaker and less secure design choice for managing a production API token.
Core Concept: This question tests secure secret management in infrastructure-as-code (CloudFormation) using dynamic references, encryption at rest, and automated rotation. The key services are AWS Secrets Manager (purpose-built for secrets + rotation) and CloudFormation dynamic references (to avoid plaintext in templates and stack outputs/events).

Why the Answer is Correct: Option B is the most secure because AWS Secrets Manager is designed to store sensitive values encrypted with AWS KMS, retrieve them at deploy/runtime without embedding plaintext in the template, and support native automatic rotation on a schedule (every 60 days) using a rotation Lambda. CloudFormation dynamic references to Secrets Manager ({{resolve:secretsmanager:...}}) allow the template to reference the secret in all 12 locations while keeping the secret value out of the template body. When used correctly, the secret value is not displayed in CloudFormation stack events or console output because CloudFormation resolves the value at deployment time and does not persist the plaintext in the template.

Key AWS Features:
- Secrets Manager encryption at rest via KMS (AWS-managed or customer-managed keys).
- Automatic rotation with a configurable interval (60 days) and rotation Lambda integration.
- CloudFormation dynamic references to Secrets Manager for parameters and resource properties (e.g., Lambda environment variables, ECS task definition environment variables, API Gateway integration/request parameters where supported).
- Fine-grained IAM policies to restrict who/what can read the secret (least privilege), plus CloudTrail auditing for secret access.

Common Misconceptions: SSM Parameter Store SecureString (Option A) is encrypted and supports dynamic references, but it does not provide the same first-class, built-in rotation workflow as Secrets Manager (rotation typically requires custom automation). Also, some teams incorrectly assume “SecureString” automatically implies rotation and full secret lifecycle management.

Exam Tips: When requirements include “automatic rotation” and “most secure,” default to AWS Secrets Manager unless the question explicitly constrains cost or forbids it. For CloudFormation, look for “dynamic references” to avoid plaintext in templates and to reduce accidental exposure in code repositories. Always pair secrets storage with KMS encryption and least-privilege IAM.
Download Cloud Pass and access all AWS Certified Security - Specialty (SCS-C02) practice questions for free.
A global edtech company operates 15 AWS accounts in AWS Organizations and has enabled AWS Identity and Access Management (IAM) Access Analyzer at the organization level to detect public or cross-account access. The security team requires an automated workflow that, for any newly created IAM or resource policy that triggers an ACTIVE Access Analyzer finding, remediates external access by updating IAM role trust policies to add an explicit Deny for external principals and sends an email notification to security-ops@example.com within 5 minutes. Which combination of steps should a security engineer implement to meet these requirements? (Choose three.)
Correct as the orchestration component in the intended architecture. AWS Step Functions is well suited to inspect the Access Analyzer finding, branch based on resource type, and invoke the necessary remediation actions through AWS SDK integrations or Lambda. It also provides retries, error handling, and auditability, which are valuable for automated security response. Although the option's wording says to add an explicit Deny to a trust policy, the valid remediation pattern is to update the trust policy to remove or restrict the external access and then publish a notification to SNS.
Incorrect. AWS Batch is designed for queued, compute-intensive, longer-running jobs and introduces unnecessary infrastructure and latency for near-real-time security remediation. Forwarding findings through Batch to Lambda is an anti-pattern compared to direct EventBridge-to-Step Functions/Lambda integration. It also complicates cross-account operations and makes meeting the 5-minute notification SLA less deterministic.
Correct. Amazon EventBridge is the appropriate service to receive IAM Access Analyzer finding events and filter for findings whose status is ACTIVE. It can invoke a Step Functions state machine directly, enabling near-real-time remediation without polling or custom event ingestion. This is the standard AWS event-driven automation pattern for security findings across an organization.
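An event pattern for such a rule might look like the following sketch (Access Analyzer publishes events with source aws.access-analyzer and detail-type "Access Analyzer Finding"; the status filter shown is an illustrative assumption about the detail fields you would match):

```json
{
  "source": ["aws.access-analyzer"],
  "detail-type": ["Access Analyzer Finding"],
  "detail": {
    "status": ["ACTIVE"]
  }
}
```

A rule with this pattern, with the Step Functions state machine as its target, delivers only newly ACTIVE findings to the workflow, which keeps the remediation path well inside the 5-minute notification window.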
Incorrect. CloudWatch metric filters operate on CloudWatch Logs log events, not on IAM Access Analyzer findings as first-class events. Even if findings were logged somewhere, using metric filters to trigger AWS Batch is indirect and brittle. EventBridge is the intended integration point for service events and provides better filtering, routing, and reliability for automated response.
Incorrect. SQS is a message queue and does not “forward a notification” to email recipients by itself. You would need a consumer (Lambda/EC2) to poll the queue and then send email via SNS/SES, adding complexity and potential delay. SQS can be useful for buffering, but it is not required here and does not directly meet the email requirement.
Correct. Amazon SNS is the native AWS service for sending email notifications to operational teams. By creating an SNS topic and subscribing security-ops@example.com, the workflow can notify the security team immediately after remediation. SNS integrates cleanly with Step Functions and supports the required sub-5-minute notification objective.
Core Concept: This question is about building an event-driven, organization-wide automated response for IAM Access Analyzer findings. The required pattern is to detect ACTIVE findings in near real time, invoke an orchestration workflow to remediate the affected policy, and send an email notification. The best-fit AWS services are Amazon EventBridge for routing Access Analyzer events, AWS Step Functions for workflow orchestration, and Amazon SNS for email delivery.

Why the Answer is Correct: EventBridge can match IAM Access Analyzer finding events when a finding becomes ACTIVE and trigger a Step Functions workflow. Step Functions can inspect the finding details and call AWS APIs or Lambda functions to remediate the affected policy, such as updating an IAM role trust policy to remove or restrict external access. SNS then sends the required email notification to security-ops@example.com within the required time window.

Key AWS Features:
1) IAM Access Analyzer integrates with EventBridge so findings can trigger automation without polling.
2) EventBridge supports event pattern matching on finding status such as ACTIVE and can invoke Step Functions directly.
3) Step Functions provides branching, retries, and service integrations for policy remediation workflows across accounts.
4) SNS supports email subscriptions and is the standard AWS-native service for operational notifications.

Common Misconceptions: CloudWatch metric filters are for log-based pattern matching, not for consuming Access Analyzer findings directly. AWS Batch is intended for batch compute workloads and is not an appropriate near-real-time trigger or orchestration mechanism for security findings. SQS is useful for decoupling consumers but does not send email notifications by itself.

Exam Tips: For AWS security automation questions, look for EventBridge as the trigger for service-generated findings, Step Functions or Lambda for remediation logic, and SNS for email notifications. Be cautious with IAM policy semantics: trust policies generally grant assumption through Allow statements and are usually remediated by removing or tightening those Allows rather than adding Deny statements.
An online retail marketplace uses a third-party SaaS container vulnerability scanner that integrates with AWS Security Hub in the company’s audit account. The security team must ensure that when a new finding with severity label HIGH or CRITICAL from this third-party product is imported into Security Hub in us-east-1, a remediation workflow is triggered automatically within 60 seconds and can scale to handle bursts of up to 500 findings per minute without managing any servers. Which solution will meet these requirements?
Correct. Security Hub publishes imported findings as EventBridge events in the same Region. An EventBridge rule can match the “Security Hub Findings - Imported” event and filter by the third-party product and severity label HIGH/CRITICAL. EventBridge invokes Lambda within seconds and both services scale automatically, meeting the 60-second SLA and burst requirement without managing servers. Add retries/DLQ or destinations for reliability.
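A sketch of the rule's event pattern (the ProductName value is a placeholder for the SaaS scanner; filtering on ProductArn is an equally valid alternative). EventBridge array-matching semantics mean the pattern matches if any finding in the ASFF findings array satisfies the filter:

```json
{
  "source": ["aws.securityhub"],
  "detail-type": ["Security Hub Findings - Imported"],
  "detail": {
    "findings": {
      "ProductName": ["ExampleScanner"],
      "Severity": {
        "Label": ["HIGH", "CRITICAL"]
      }
    }
  }
}
```

The Lambda remediation function is set as the rule target in the audit account in us-east-1, co-located with where the findings are imported.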
Incorrect. Security Hub custom actions are primarily for manual, analyst-driven workflows (e.g., selecting findings in the console and choosing a custom action) or explicit API invocation. They do not automatically fire when new findings are imported. While Systems Manager Automation can remediate, the triggering mechanism here would not meet the requirement for automatic execution within 60 seconds on import.
Incorrect. Like option B, a Security Hub custom action with Lambda is typically initiated by a user from the Security Hub console (or via explicit custom action API usage), not automatically on every imported finding. Therefore it does not satisfy the requirement to trigger remediation automatically when a new HIGH/CRITICAL finding is imported, even though Lambda itself is serverless and scalable.
Incorrect. AWS Config rules evaluate AWS resource configurations and compliance against desired state; they are not designed to react to third-party vulnerability findings imported into Security Hub. Config evaluations can be periodic or configuration-change triggered, but they won’t natively consume Security Hub imported findings as the event source. This approach also risks missing the 60-second requirement and is conceptually misaligned.
Core Concept: This question tests event-driven security automation using AWS Security Hub findings, Amazon EventBridge (formerly CloudWatch Events), and AWS Lambda to trigger near-real-time remediation at scale without servers.

Why the Answer is Correct: When third-party findings are imported into Security Hub, Security Hub emits EventBridge events such as “Security Hub Findings - Imported” in the same Region (us-east-1 here). An EventBridge rule can filter on product/source and on finding severity label (HIGH/CRITICAL) and then invoke a Lambda function to run remediation. EventBridge delivers events in seconds, meeting the 60-second requirement, and both EventBridge and Lambda are fully managed services that automatically scale. Handling bursts of 500 findings/minute (~8.3/sec) is well within typical EventBridge and Lambda scaling patterns, especially with appropriate Lambda concurrency settings.

Key AWS Features:
- EventBridge rule pattern matching: filter by Security Hub event type, product name/ARN, and severity label to avoid unnecessary invocations.
- Serverless scaling: EventBridge scales ingestion and routing; Lambda scales by concurrency. You can set reserved concurrency to protect downstream systems and use DLQs (SQS) or on-failure destinations for resiliency.
- Cross-account/central account: since findings land in the audit account, the EventBridge rule and Lambda can be deployed there in us-east-1, aligning with the event source Region.

Common Misconceptions: Custom actions in Security Hub (options B/C) are often confused with automatic triggers. Custom actions are designed for analyst-initiated workflows from the Security Hub console (or explicit API calls), not automatic execution upon import. AWS Config rules (option D) evaluate resource configuration compliance, not third-party Security Hub findings, and are not the right mechanism for reacting to imported vulnerability scanner findings within 60 seconds.

Exam Tips:
- For “automatically trigger on Security Hub findings,” think “EventBridge rule on Security Hub events.”
- For “no servers” and “burst handling,” prefer EventBridge + Lambda (optionally SQS buffering) over manual/console-driven actions.
- Always align the automation with the Region where the events are generated (us-east-1 in this scenario).
A global fintech with 75 AWS accounts in an AWS Organizations organization spanning three Regions (us-east-1, us-west-2, eu-west-1) needs a centralized solution that can aggregate and normalize security and network events from all accounts in the organization, from all AWS Marketplace partner tools deployed in those accounts, and from on-premises data center systems (1,200 Linux servers sending syslog and 30 firewalls), retain the data for 400 days, and allow analysts to run ad hoc SQL queries; which solution will meet these requirements most effectively?
S3 + Glue + Athena can centralize and query logs, and it’s a valid DIY log lake pattern. However, it does not inherently normalize events across many AWS Marketplace partner tools or on-prem syslog/firewall sources; you would need to build and maintain custom ingestion, parsing, and schema mapping for each source. It also doesn’t provide OCSF normalization out of the box, making it less effective for a large fintech at this scale.
CloudWatch Logs subscription filters to OpenSearch supports near-real-time search and dashboards, but it’s typically expensive and operationally heavy for 400-day retention across 75 accounts plus on-prem logs. OpenSearch is optimized for indexed search, not low-cost long-term retention. It also doesn’t automatically normalize heterogeneous security/network events from multiple partner tools and custom sources into a common schema for consistent SQL analytics.
Security Lake is purpose-built for centralized, multi-account/multi-Region security data lakes. With a delegated administrator in AWS Organizations, it can enable collection across all accounts/Regions, store data in S3, and normalize to OCSF. It supports AWS sources, AWS Marketplace partner integrations, and custom sources (including on-prem syslog/firewall events) and is designed for analytics with Athena for ad hoc SQL queries while meeting long retention via S3 lifecycle policies.
SCPs can deny or allow API actions but cannot “force” services to deliver logs to a specific S3 bucket across all services and accounts; configuration still must be implemented per service. Additionally, OpenSearch is not a direct SQL query engine over S3 log files (Athena is), and this option does not address normalization across AWS Marketplace tools and on-prem sources. It’s both technically inaccurate and incomplete for the requirements.
Core Concept: This question tests centralized, multi-account/multi-Region security data aggregation and normalization for detection and hunting, using a managed data lake approach. The key services are Amazon Security Lake (organization-wide collection and normalization to OCSF), Amazon S3-based retention, and Amazon Athena for ad hoc SQL queries.

Why the Answer is Correct: Option C directly matches every requirement: (1) aggregates security and network events across 75 accounts and three Regions using AWS Organizations with a delegated administrator; (2) normalizes data into the Open Cybersecurity Schema Framework (OCSF), which is explicitly designed to standardize disparate security telemetry; (3) supports ingestion from AWS sources, AWS Marketplace partner integrations, and custom sources (including on-prem syslog/firewall logs) via Security Lake custom source mechanisms; (4) stores data in an S3-backed data lake with configurable lifecycle/retention to meet 400-day retention; and (5) enables analysts to run ad hoc SQL queries using Athena over the normalized OCSF tables.

Key AWS Features / Best Practices: Security Lake provides organization-level enablement, cross-account data access patterns, and a consistent schema (OCSF) that reduces per-tool parsing. It centralizes data without requiring you to manually build and maintain ETL pipelines for each log type. Retention is handled with S3 lifecycle policies (e.g., transition to S3 Glacier Instant Retrieval/Flexible Retrieval for cost optimization while meeting 400 days). Athena is the native query engine for Security Lake data and aligns with serverless, pay-per-query analytics.

Common Misconceptions: A looks attractive because S3 + Glue + Athena is a common log lake pattern, but it does not satisfy “normalize” across AWS Marketplace partner tools and on-prem sources without significant custom ETL and schema management; it’s a build-your-own SIEM data lake. B is a classic streaming-to-OpenSearch approach, but OpenSearch is not ideal for 400-day retention at scale/cost, and it doesn’t inherently normalize diverse sources. D is incorrect because SCPs cannot “force all services to deliver logs” in the way described, and OpenSearch querying “files in S3” is not the primary pattern (and still lacks normalization).

Exam Tips: When you see “AWS Organizations + aggregate across accounts/Regions + normalize security events + Athena SQL,” think Amazon Security Lake and OCSF. For long retention (400+ days), prefer S3-based lakes with lifecycle policies over hot indexing stores. Also remember SCPs restrict actions; they don’t configure services or guarantee log delivery.
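For the ad hoc SQL requirement, analysts query the OCSF tables that Security Lake registers in the AWS Glue Data Catalog. A sketch of such a query follows; the database name, table name, and partition column reflect common Security Lake naming conventions but should be treated as assumptions to verify against the actual deployment:

```sql
-- Illustrative only: top talkers over the last 7 days from a
-- hypothetical Security Lake OCSF network-activity table.
SELECT src_endpoint.ip   AS source_ip,
       COUNT(*)          AS event_count
FROM   amazon_security_lake_glue_db_us_east_1.amazon_security_lake_table_us_east_1_vpc_flow
WHERE  eventday >= date_format(current_date - interval '7' day, '%Y%m%d')
GROUP BY src_endpoint.ip
ORDER BY event_count DESC
LIMIT  20;
```

Because OCSF normalizes field names (such as src_endpoint.ip) across sources, the same query shape works whether the events came from VPC Flow Logs, a Marketplace partner tool, or an on-prem firewall custom source.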
A company runs a containerized REST API on Amazon ECS with AWS Fargate behind an Application Load Balancer, and the NGINX access logs from the Fargate tasks are shipped to an Amazon CloudWatch Logs log group with a 90-day retention policy; last night the security team flagged IPv4 address 203.0.113.77 as suspicious, and a security engineer must, with the least effort, analyze the past 7 days of logs to determine the total number of requests from that IP and the specific request paths (for example, /v1/* and /admin/*) it accessed—what should the engineer do?
Incorrect. Amazon Macie is designed to discover and classify sensitive data (PII, credentials, etc.) primarily in Amazon S3 using managed data identifiers. It is not a log query engine for NGINX access logs and does not natively provide ad hoc aggregations like “count requests by IP and path.” Exporting logs to S3 also adds unnecessary steps and delays for a 7-day investigation.
Incorrect for “least effort.” A CloudWatch Logs subscription filter to OpenSearch is useful for near-real-time indexing and building dashboards, but it requires provisioning and operating an OpenSearch domain and configuring ingestion. It also does not automatically backfill the last 7 days of existing logs unless you perform additional export/replay steps. Overkill for a quick incident query.
Correct. CloudWatch Logs Insights can query the existing log group immediately, scoped to the last 7 days, filter on IP 203.0.113.77, parse NGINX fields if needed, and use stats/count aggregations to return total request count and the request paths accessed (and counts per path/prefix). This is the fastest, lowest-setup approach for incident-response log analysis.
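A Logs Insights query along these lines answers the per-path question directly (the parse pattern assumes the default NGINX combined log format; adjust the glob to match the actual log layout):

```
# Per-path request counts for the flagged IP over the selected 7-day
# window (NGINX combined log format assumed; fields after proto ignored).
parse @message '* - * [*] "* * *"' as client_ip, ident, ts, method, path, proto
| filter client_ip = "203.0.113.77"
| stats count(*) as requests by path
| sort requests desc
```

Running the same filter with a plain `stats count(*) as total` (no `by` clause) yields the overall request count, and a `filter path like "/admin/"` variant isolates a specific prefix.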
Incorrect. Exporting to S3, running a Glue crawler, and using Glue to view results is significantly more operational work than necessary. Glue crawlers catalog schemas; they do not inherently “catalog only entries that contain the IP” without additional ETL/filtering logic. For simple counts and grouping over a 7-day window, CloudWatch Logs Insights is the intended tool.
Core Concept: This question tests incident-response log analysis using Amazon CloudWatch Logs Insights. Logs are already centralized in CloudWatch Logs with sufficient retention (90 days). The requirement is to quickly query the last 7 days to (1) count requests from a specific IP and (2) list the request paths accessed.

Why the Answer is Correct: CloudWatch Logs Insights is purpose-built for ad hoc, interactive querying of CloudWatch Logs without building a separate analytics pipeline. With the least effort, the engineer can run a query scoped to the last 7 days, filter on client IP 203.0.113.77, parse NGINX access log fields (if needed), and aggregate results to return total request count and counts by request path (or by path prefix such as /v1/ and /admin/). This directly satisfies both outputs in minutes and requires no data export, no new infrastructure, and no ongoing ingestion costs.

Key AWS Features: CloudWatch Logs Insights supports time-range selection, filtering, parsing (parse command with patterns/regex), and aggregations (stats count() by field). It can query across log streams in a log group and is commonly used for security investigations (e.g., identifying suspicious IP activity). It aligns with AWS Well-Architected Security and Operational Excellence principles by enabling rapid detection and investigation using centralized logging.

Common Misconceptions: Streaming to OpenSearch (option B) can provide powerful dashboards, but it is not “least effort” for a one-off 7-day investigation because it requires provisioning a domain, configuring a subscription filter, and waiting for ingestion (and it won’t retroactively include the last 7 days unless you reprocess/export). Exporting to S3 and using Glue (option D) is also heavier operationally and slower to iterate. Macie (option A) is for discovering sensitive data in S3, not for querying IP/path patterns in application logs.

Exam Tips: When logs are already in CloudWatch Logs and the task is quick investigation/aggregation over a recent time window, default to CloudWatch Logs Insights. Choose OpenSearch/S3+Athena/Glue when you need long-term analytics, complex dashboards, cross-source correlation at scale, or retention beyond CloudWatch needs, not for immediate, minimal-effort incident triage.
A media analytics startup runs video transcoding jobs on AWS Batch (EC2 compute environment) that pull private container images from Amazon Elastic Container Registry (Amazon ECR); currently, 12 ECR repositories across two Regions use the default AES-256 encryption, but compliance requires migrating all repositories to a customer managed AWS KMS key (alias/media-sec) and enabling CVE detection on image push; a security engineer must implement an approach that leaves no repositories unencrypted and that provides a vulnerability report after the next image push. Which solution will meet these requirements?
Incorrect. You cannot simply “enable KMS encryption” on existing ECR repositories that were created with AES-256; ECR repository encryption settings are defined at creation and are not generally mutable to switch to a CMK. Also, installing the Amazon Inspector Agent on EC2 instances assesses host vulnerabilities, not container image CVEs triggered by an image push to ECR, so it does not meet the “report after next image push” requirement.
Correct. Recreating the repositories in each Region with encryption set to the customer managed KMS key (alias/media-sec) ensures every repository is encrypted with the required CMK. Enabling ECR image scanning on push ensures that the next time an image is pushed, ECR performs a vulnerability scan and generates findings that can be reviewed as a scan report, satisfying the compliance and detection requirements.
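As a hedged sketch of the recreation step with the AWS CLI (the repository name is a placeholder, and existing images must be pulled and re-pushed separately; run once per repository in each Region):

```shell
# Illustrative only: create a replacement repository encrypted with the
# customer managed key and with scan-on-push enabled (basic scanning).
aws ecr create-repository \
  --repository-name transcode/worker \
  --region us-east-1 \
  --encryption-configuration encryptionType=KMS,kmsKey=alias/media-sec \
  --image-scanning-configuration scanOnPush=true
```

Because KMS keys are Regional, the alias/media-sec key must exist in both Regions before the us-west-2 (or second-Region) repositories are created.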
Incorrect. While recreating repositories with a CMK and enabling scanning aligns with the ECR requirements, installing the AWS Systems Manager (SSM) Agent and generating an inventory report does not provide container image CVE findings. SSM Inventory is for instance/software inventory and compliance metadata, not ECR image vulnerability scanning results tied to an image push event.
Incorrect. As with option A, you generally cannot change existing ECR repositories from AES-256 to CMK encryption in-place. Additionally, AWS Trusted Advisor does not provide container image CVE scanning or a vulnerability report after an image push; Trusted Advisor focuses on cost optimization, performance, fault tolerance, service limits, and some security checks, not ECR image CVE findings.
Core concept: This question tests Amazon ECR repository configuration for (1) encryption at rest using a customer managed AWS KMS key (CMK) and (2) vulnerability scanning that produces findings after the next image push. It also implicitly tests what can and cannot be changed in-place on an existing ECR repository.

Why the answer is correct: To ensure “no repositories unencrypted” under the new compliance rule, each repository must be configured to use the specified CMK (alias/media-sec). In ECR, encryption configuration is set at repository creation time; you cannot retroactively switch an existing repository from AES-256 (AWS owned key) to a customer managed KMS key in-place. Therefore, the compliant approach is to create new repositories (or recreate with the same names after deletion, if feasible) in each Region with KMS encryption using alias/media-sec. Additionally, enabling “scan on push” ensures that after the next image push, ECR will automatically trigger a vulnerability scan and produce a findings report for that pushed image.

Key AWS features and best practices:
- ECR encryption at rest: AES-256 (AWS owned) vs AWS KMS (customer managed key). CMKs provide key policy control, rotation options, and auditability via AWS CloudTrail.
- ECR image scanning: “scan on push” (basic scanning) or enhanced scanning via Amazon Inspector (depending on account settings). Either way, enabling scan on push meets the requirement to get a report after the next push.
- Multi-Region: KMS keys are Regional; you must ensure alias/media-sec exists (or is created) in both Regions and that ECR has permission to use it.

Common misconceptions: A frequent trap is assuming you can “enable KMS encryption” on an existing ECR repository like you can for some other services. Another trap is confusing host-based CVE scanning (Inspector Agent/SSM inventory) with container image scanning in ECR; the requirement is a vulnerability report tied to the image push, not the EC2 instances.

Exam tips: When you see “after the next image push,” think ECR “scan on push” (or Inspector enhanced scanning). When you see “migrate encryption from AES-256 default to CMK,” verify whether the service supports in-place encryption changes; for ECR repositories, plan for repository recreation and image repush/retag as part of migration.
A fintech company's security engineer must ensure that a temporary vendor IAM user can access only the Amazon DynamoDB console to view and edit tables in a specific account and must be prevented from using any other AWS services under any circumstances. The vendor IAM user might later be added to one or more IAM groups that grant broad permissions (for example, AdministratorAccess), but the user's effective permissions must remain limited to DynamoDB only. Which approach should the security engineer take to meet these requirements?
An inline policy that allows DynamoDB access can grant the needed permissions today, but it does not prevent the user from gaining additional permissions later through group membership or other attached policies. If the user is later added to a group with AdministratorAccess, the user could access other services. Inline policies are not a guardrail; they are just one source of identity-based permissions.
A permissions boundary that allows only DynamoDB actions sets a hard maximum on what the user can do, regardless of any future group memberships or attached policies. Even if AdministratorAccess is attached later, actions outside DynamoDB will still be denied because they are not permitted by the boundary. This directly satisfies the requirement to prevent use of any other AWS services under any circumstances.
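A minimal sketch of such a boundary policy follows; the broad dynamodb:* action is illustrative, and a production boundary would typically scope actions and resource ARNs more tightly:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DynamoDbOnlyCeiling",
      "Effect": "Allow",
      "Action": ["dynamodb:*"],
      "Resource": "*"
    }
  ]
}
```

Once attached as the user's permissions boundary (for example via the iam put-user-permissions-boundary API), effective permissions become the intersection of the identity-based policies and this boundary, so a later AdministratorAccess attachment still yields DynamoDB-only access.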
Putting the user in a DynamoDB-only group is a common way to grant access, but it is not durable against future changes. If the user is later added to another group with broader permissions (for example, AdministratorAccess), the user’s effective permissions expand. Group-based permission management is additive and does not inherently enforce a maximum permission boundary.
A role with explicit denies could restrict what the role can do, but it relies on the vendor always assuming that role and using only those role credentials. It does not inherently prevent the underlying IAM user from using other permissions if they are later granted directly or via groups. The requirement is to constrain the IAM user’s effective permissions regardless of future attachments, which is better met by a permissions boundary.
Core Concept: This question tests IAM effective permissions and how to enforce a maximum permission set regardless of future group memberships. The key concept is an IAM permissions boundary, which defines the upper limit of permissions that an IAM principal (user or role) can ever receive.
Why the Answer is Correct: A permissions boundary policy that allows only DynamoDB actions ensures the vendor IAM user can never use other AWS services, even if later added to groups with broad permissions like AdministratorAccess. IAM evaluation requires that an action is allowed by the identity-based policy AND is within the permissions boundary. Therefore, any non-DynamoDB action will be implicitly denied by the boundary (not allowed within the boundary), preventing privilege escalation via group membership.
Key AWS Features: Permissions boundaries are managed policies attached to a user or role to set a maximum permissions guardrail. They are commonly used for delegated administration and to constrain temporary or vendor access. To meet “console to view and edit tables,” the boundary should allow the relevant DynamoDB actions (e.g., dynamodb:DescribeTable, dynamodb:UpdateTable, dynamodb:PutItem) and typically also the read-only actions the console needs (often via AWS-managed DynamoDB policies or a custom least-privilege set). Resource scoping can further restrict access to specific tables/ARNs in the account.
Common Misconceptions: Inline policies or group policies (Options A/C) grant permissions but do not cap future permissions; adding AdministratorAccess later would override the intent. An explicit-deny strategy in a role (Option D) can work in theory, but it depends on the vendor consistently assuming the role and does not prevent the IAM user itself from using other permissions granted directly or via groups.
Exam Tips: When a requirement says “must remain limited even if later added to admin groups,” look for permissions boundary (or SCP at the organization level). Boundaries restrict maximum permissions for a principal; they do not grant access by themselves—you still need identity-based policies to grant the DynamoDB permissions within that boundary.
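The boundary described above can be sketched as a policy document. This is a minimal illustration, assuming hypothetical names (a vendor-user user name, an account ID of 123456789012, and a policy named DynamoDBBoundary); resource scoping to specific table ARNs would tighten it further.

```python
import json

# Permissions boundary document allowing only DynamoDB actions.
# Any action outside dynamodb:* falls outside the boundary and is
# implicitly denied, even if an attached identity-based policy
# (e.g., AdministratorAccess) allows it.
boundary_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowDynamoDBOnly",
            "Effect": "Allow",
            "Action": "dynamodb:*",
            "Resource": "*",
        }
    ],
}

print(json.dumps(boundary_policy, indent=2))

# Attaching the boundary (names are illustrative) would look like:
#   aws iam create-policy --policy-name DynamoDBBoundary \
#       --policy-document file://boundary.json
#   aws iam put-user-permissions-boundary --user-name vendor-user \
#       --permissions-boundary arn:aws:iam::123456789012:policy/DynamoDBBoundary
```

Note that the boundary alone grants nothing: the user still needs an identity-based policy allowing the DynamoDB actions, and the effective permissions are the intersection of the two.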
A video streaming company runs a thumbnail-generation microservice on an Amazon Elastic Container Service (Amazon ECS) cluster with the AWS Fargate launch type behind an Application Load Balancer, and after detecting about 200 suspicious requests per minute from a single ASN, a security engineer must, with the least operational effort, examine a live task to retrieve application log files from /var/log/app and capture a 256-MB memory/core dump for offline analysis; which solution meets these requirements?
Replatforming from Fargate to EC2 just to gain SSH access is high operational effort and introduces new management overhead (patching, scaling, host hardening). It also delays incident response. While it would allow host-level inspection, it violates the “least operational effort” requirement and is unnecessary because ECS Exec provides container access without changing the launch type.
Shipping logs to CloudWatch Logs via STDOUT and the awslogs driver is a good logging pattern, but it does not satisfy the requirement to examine a live task and retrieve existing files from /var/log/app or to capture a memory/core dump on demand. It also requires application changes and only addresses logs, not interactive forensic collection.
CloudWatch Container Insights and ADOT provide metrics/traces/log telemetry for observability and detection, not interactive access or artifact retrieval. They won’t let an engineer enter a running Fargate container to pull /var/log/app files or generate a targeted 256-MB memory/core dump. This is more aligned with monitoring than incident response forensics.
ECS Exec is the AWS-native feature for running commands inside a live ECS task, including tasks that use the Fargate launch type. It provides secure, auditable access without requiring host management, which aligns with the requirement for the least operational effort. Once connected, the engineer can inspect /var/log/app, archive and copy out the log files, and run appropriate in-container tooling to generate the requested memory/core dump for offline analysis. This directly addresses both live access and artifact collection, which the other options do not.
Core concept: This question is about performing live incident-response investigation on an Amazon ECS service that uses the AWS Fargate launch type. Because Fargate does not provide host access, the relevant AWS-native capability is ECS Exec, which allows secure command execution inside a running container without replatforming the workload.
Why correct: ECS Exec is the least-operational-effort solution because it works directly with existing Fargate tasks and provides interactive access to the running container. From that session, the engineer can inspect /var/log/app, package and retrieve log artifacts, and run tooling inside the container to create the requested 256-MB memory/core dump for offline analysis. This satisfies the requirement to examine a live task rather than only reviewing previously exported telemetry.
Key features: ECS Exec uses AWS Systems Manager Session Manager under the hood and supports audited, IAM-controlled access to running ECS tasks. It avoids the need to manage EC2 hosts, SSH keys, bastion hosts, or a launch-type migration. It is the standard AWS approach for operational access to containers running on Fargate.
Common misconceptions: CloudWatch Logs, Container Insights, and ADOT help with observability, but they do not provide shell access to a live Fargate container or let you collect arbitrary filesystem artifacts and on-demand memory dumps. Another common mistake is assuming SSH access is possible or desirable with Fargate; there is no customer-managed host to log into.
Exam tips: When a question mentions Fargate plus a need to inspect a running container or collect live forensic artifacts, think ECS Exec first. If the requirement were only centralized application logging, CloudWatch Logs might be enough, but live triage and artifact collection point to ECS Exec. Also watch for phrases like least operational effort, which strongly favor managed access over replatforming.
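An ECS Exec session can be sketched as follows. The cluster, task, and container names here are hypothetical; the request shape matches the boto3 `ecs.execute_command` API, and the prerequisite is that the service or task was started with execute-command enabled and the task role allows the Session Manager channel actions.

```python
# Sketch of an ECS Exec request (all names are illustrative).
exec_request = {
    "cluster": "thumbnail-cluster",  # hypothetical cluster name
    "task": "arn:aws:ecs:us-east-1:123456789012:task/abc123",  # hypothetical
    "container": "thumbnail-app",    # hypothetical container name
    "interactive": True,
    "command": "/bin/sh",
}

# With boto3 this maps to ecs_client.execute_command(**exec_request);
# the equivalent CLI is:
#   aws ecs execute-command --cluster thumbnail-cluster --task abc123 \
#       --container thumbnail-app --interactive --command "/bin/sh"
#
# Inside the session, the logs could then be packaged for retrieval with
# something like:
#   tar czf /tmp/app-logs.tgz /var/log/app
print(exec_request["command"])
```

The session itself is brokered by Systems Manager Session Manager, so access is IAM-controlled and can be logged to CloudTrail and CloudWatch Logs for audit.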
A media startup inadvertently pushed a Docker image to a public registry that contains a plaintext AWS access key and secret for an IAM user used by the CI build system; the image has been publicly downloadable for 3 hours, and the security team must both determine whether the leaked credentials were used without authorization within the last 24 hours and immediately prevent any additional use of those credentials. Which combination of steps will meet these requirements? (Choose two.)
Correct. Setting the exposed IAM access key to Inactive immediately prevents any further AWS API requests signed with that key, meeting the “immediately prevent any additional use” requirement. This is standard containment in AWS incident response. After disabling, rotate credentials and remediate the CI system to avoid long-term keys (prefer STS/roles).
Correct. CloudTrail logs API activity and includes the AccessKeyId used for each request, along with eventTime, sourceIPAddress, userAgent, and affected resources (where applicable). Querying Event history or CloudTrail Lake for the specific AccessKeyId over the last 24 hours is the most direct way to determine whether the leaked credentials were used and what actions were performed.
Incorrect. GuardDuty is primarily a threat detection service that generates findings based on suspicious activity (e.g., anomalous API calls, known malicious IPs). It does not provide a native “rule to block a leaked access key at the account boundary.” Prevention would require disabling the key in IAM or using other controls (e.g., SCPs/permissions boundaries), not GuardDuty.
Incorrect. Keeping the compromised key active for 24 hours to gather evidence conflicts with the requirement to “immediately prevent any additional use” and increases risk of further unauthorized actions. Evidence collection should be done from logs (CloudTrail, VPC Flow Logs, etc.) after containment. You can still investigate past usage even after disabling the key.
Incorrect. IAM credential reports provide metadata (e.g., key age, status, last used date in some contexts) but are not a detailed audit trail of API calls and do not reliably satisfy “determine whether used without authorization within the last 24 hours” with source IPs and resources accessed. CloudTrail is the authoritative source for API-level forensics.
Core Concept: This scenario tests incident response for leaked long-term IAM user access keys: (1) immediate containment to stop further misuse and (2) detection/forensics to determine what the key did recently. The primary AWS services/concepts are IAM access key management (disable/rotate) and CloudTrail auditing (querying events by AccessKeyId).
Why the Answer is Correct: Option A provides immediate containment. Deactivating (making inactive) the exposed access key instantly prevents any additional AWS API calls signed with that key, which is the fastest and most reliable way to stop ongoing abuse. Option B provides the required investigation capability. CloudTrail records management events (and optionally data events) with the access key used, source IP, user agent, AWS service, and API action. Querying CloudTrail Event history (last 90 days) or CloudTrail Lake for the specific AccessKeyId over the last 24 hours directly answers whether the credentials were used without authorization and what actions were taken.
Key AWS Features / Best Practices:
- IAM: Set the compromised access key status to Inactive; then rotate credentials (create a new key only after containment) and remove the leaked secret from the build pipeline. Consider moving CI to short-lived credentials via IAM roles/OIDC (e.g., GitHub Actions) or STS AssumeRole.
- CloudTrail: Use Event history for quick triage, or CloudTrail Lake for richer SQL queries and longer retention. Filter on AccessKeyId, eventTime, eventSource, eventName, sourceIPAddress, and resources. If you have CloudTrail data events enabled (S3 object-level, Lambda invoke, etc.), include those in the investigation.
Common Misconceptions: Some think you should keep the key active to “collect evidence” (Option D), but that violates containment and increases blast radius. Others rely on IAM credential reports (Option E), but those are not a forensic log of API calls and won’t show detailed usage.
GuardDuty is a detector, not a preventative control that can block a specific key (Option C).
Exam Tips: For leaked IAM access keys, the exam pattern is: contain first (disable/delete the key), then investigate with CloudTrail by AccessKeyId. GuardDuty helps detect suspicious behavior but does not replace CloudTrail for definitive event attribution, and it cannot directly block a specific access key at the account boundary.
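The contain-then-investigate pattern can be sketched as two API requests. The user name and access key ID below are made up for illustration; the request shapes match the boto3 `iam.update_access_key` and `cloudtrail.lookup_events` APIs.

```python
from datetime import datetime, timedelta, timezone

# Containment: deactivate the leaked key (identifiers are illustrative).
deactivate_request = {
    "UserName": "ci-build-user",           # hypothetical IAM user
    "AccessKeyId": "AKIAEXAMPLEKEY12345",  # hypothetical key ID
    "Status": "Inactive",
}
# boto3: iam_client.update_access_key(**deactivate_request)
# CLI:   aws iam update-access-key --user-name ci-build-user \
#            --access-key-id AKIAEXAMPLEKEY12345 --status Inactive

# Investigation: query CloudTrail Event history for that key over 24 hours.
now = datetime.now(timezone.utc)
lookup_request = {
    "LookupAttributes": [
        {"AttributeKey": "AccessKeyId", "AttributeValue": "AKIAEXAMPLEKEY12345"}
    ],
    "StartTime": now - timedelta(hours=24),
    "EndTime": now,
}
# boto3: cloudtrail_client.lookup_events(**lookup_request); each returned
# event includes eventName, eventSource, sourceIPAddress, and userAgent,
# which is the evidence needed to judge unauthorized use.
```

Disabling the key first does not destroy evidence: CloudTrail retains the prior 90 days of management events regardless of the key's status.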
A fintech company runs 600 Amazon ECS services on AWS Fargate across three Regions, and during a security review the engineer finds that short API PINs (6–10 characters) are stored as plaintext in task definition environment variables and are visible in the console and task metadata at runtime. What is the MOST cost-effective way to address this security issue?
Incorrect. IAM can restrict who can view ECS task definitions or describe tasks, but it does not eliminate the underlying risk: the secret still exists as plaintext in the task definition and can be exposed through task metadata endpoints or other operational paths. Security best practice is to remove secrets from task definitions entirely and store them in a dedicated secret store with encryption and audited access.
Incorrect. AWS Step Functions is a workflow/orchestration service, not designed to store secrets. Putting PINs into state machine input/output would likely increase exposure because execution history can retain data unless explicitly filtered, and it adds unnecessary cost and complexity. This does not align with least privilege or secret-management best practices for ECS/Fargate.
Partially correct but not the MOST cost-effective. AWS Secrets Manager is an excellent secret store with strong features (rotation, versioning, staging labels, integration patterns), but it has per-secret monthly charges and API call costs. For short PINs that likely do not require rotation workflows, Secrets Manager is typically more expensive than SSM Parameter Store SecureString at scale.
Correct. AWS Systems Manager Parameter Store SecureString encrypts secrets with KMS and integrates directly with ECS task definitions via the “secrets” field so values are not stored as plaintext in the task definition. IAM policies can restrict ssm:GetParameter(s) and kms:Decrypt to only the ECS task roles that need access. It is generally the most cost-effective managed approach for simple secrets without advanced rotation requirements.
Core concept: This question tests secure secret handling for Amazon ECS on AWS Fargate. Environment variables in task definitions are not a secure secret store because they can be viewed in the ECS console, retrieved via task metadata endpoints at runtime, and exposed to anyone with sufficient ECS/CloudTrail/SSM access. Best practice is to externalize secrets and inject them securely at runtime using IAM-scoped access.
Why the answer is correct: AWS Systems Manager Parameter Store SecureString is the most cost-effective managed option for storing small secrets like short API PINs and retrieving them at runtime from ECS tasks. SecureString encrypts values with AWS KMS and supports fine-grained IAM policies so that only the role that retrieves a secret (the task execution role for ECS secret injection, or the task role for app-side retrieval) can read specific parameters. ECS integrates directly with Parameter Store via the container definition “secrets” field, which injects values into the container as environment variables without placing plaintext in the task definition. This removes console/task-definition plaintext exposure while keeping operational overhead low across 600 services and three Regions.
Key AWS features and best practices:
- Use SSM Parameter Store parameters of type SecureString with a customer managed KMS key (or AWS managed key) and least-privilege IAM (ssm:GetParameter(s) scoped to specific parameter ARNs, plus kms:Decrypt scoped to the key).
- In ECS task definitions, use “secrets” (valueFrom referencing the parameter ARN/name) rather than “environment”.
- Separate duties: the task execution role pulls secrets at launch, while the task role is used by the app at runtime. Prefer ECS secret injection over app-side calls.
- Replicate parameters per Region (Parameter Store is regional) and manage them via IaC.
Common misconceptions: Secrets Manager is often chosen by default for “secrets,” but it has per-secret monthly charges and API call costs that can be significant at 600 services across Regions, especially if many distinct PINs exist. Step Functions is an orchestration service, not a secret store. “Hiding” environment variables with IAM in the console does not address exposure via task metadata or other retrieval paths.
Exam tips: When you see “plaintext in task definition env vars,” the fix is ECS secrets integration with either Secrets Manager or SSM Parameter Store SecureString. If the question emphasizes “MOST cost-effective” and the secrets are simple/static, Parameter Store SecureString is typically the best answer; choose Secrets Manager when you need rotation, a built-in secret lifecycle, or higher-level secret features.
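The fix can be sketched as a container definition fragment. The parameter name, container name, account ID, and image URI below are hypothetical; the key point is that the PIN lives in a SecureString parameter and the task definition carries only a reference under “secrets”, never a plaintext value under “environment”.

```python
import json

# Store the PIN once per Region (names/keys are illustrative):
#   aws ssm put-parameter --name /payments/api-pin --type SecureString \
#       --value "123456" --key-id alias/app-secrets
# boto3: ssm_client.put_parameter(Name="/payments/api-pin",
#            Type="SecureString", Value="123456", KeyId="alias/app-secrets")

# Container definition fragment using "secrets" instead of "environment",
# so plaintext never appears in the task definition or the console.
container_definition = {
    "name": "payments-app",  # hypothetical container name
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/payments:latest",
    "secrets": [
        {
            "name": "API_PIN",  # env var name the app sees at runtime
            "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/payments/api-pin",
        }
    ],
}
print(json.dumps(container_definition, indent=2))
```

For this injection to work, the task execution role needs ssm:GetParameters on the parameter ARN and kms:Decrypt on the key used to encrypt it; the application itself needs no SSM code changes.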