
Simulate the real exam with 90 questions and a 165-minute time limit. Study with AI-verified answers and detailed explanations.
AI-Powered
All answers are cross-validated by three leading AI models to ensure the highest accuracy, with detailed per-choice explanations and in-depth question analysis.
IoCs were missed during a recent security incident due to the reliance on a signature-based detection platform. A security engineer must recommend a solution that can be implemented to address this shortcoming. Which of the following would be the most appropriate recommendation?
FIM monitors critical files, directories, and sometimes registry settings for unauthorized changes, which is useful for detecting tampering and supporting compliance. However, it is limited to integrity changes on monitored assets and does not broadly analyze user or system behavior for anomalies. It may reveal that a file was altered, but it does not solve the larger problem of missing attacks because a platform depends too heavily on signatures. As a result, FIM is a helpful supplemental control but not the best recommendation here.
SASE is a cloud-delivered architecture that converges networking and security capabilities such as SWG, CASB, ZTNA, and FWaaS. It improves secure access and policy enforcement for distributed environments, but it is not primarily a detection method for identifying threats missed by signature-based systems. While some SASE offerings include threat prevention features, the core value of SASE is architectural consolidation and secure connectivity. Therefore, it does not directly address the need for behavior-based detection analytics in this scenario.
CSPM focuses on identifying cloud misconfigurations, insecure settings, and compliance issues across cloud resources. It is valuable for reducing attack surface and improving governance, but it is not designed to detect attacker behavior or anomalous activity during an incident. The question is about missed IoCs due to signature reliance, which calls for a detection capability rather than a posture management tool. CSPM is important in cloud security programs, but it is not the best answer for this specific gap.
EAP is an authentication framework used for network access control, commonly with 802.1X in wired and wireless environments. Its purpose is to support secure authentication methods, not to detect malicious behavior or analyze incident indicators. Implementing EAP could strengthen access control and identity verification, but it would not help identify threats that evade signature-based detection. Because the problem is detection shortfall rather than authentication weakness, EAP is not the appropriate recommendation.
Core concept: The question targets the limitation of signature-based detection (matching known IoCs such as hashes, domains, IPs, or static patterns) and asks for a solution that addresses missed IoCs. The best compensating capability is behavior/anomaly-based detection that can identify suspicious activity even when the specific indicator is new or obfuscated.

Why the answer is correct: UEBA (User and Entity Behavior Analytics) is designed to detect deviations from established baselines of normal behavior for users, hosts, service accounts, and applications. When signature-based tools miss novel malware, living-off-the-land techniques, or attacker tradecraft that doesn’t produce known IoCs, UEBA can still flag the activity through anomalies (e.g., unusual login times, impossible travel, atypical data access, abnormal process execution patterns, privilege escalation sequences). This directly addresses the shortcoming: reliance on known signatures.

Key features / best practices: UEBA commonly uses statistical models and/or machine learning to build baselines and generate risk scores. It correlates telemetry from SIEM, EDR, IAM, VPN, DNS, cloud logs, and endpoint events. Best practices include integrating diverse log sources, tuning to reduce false positives, using peer-group analysis (comparing a user to similar roles), and coupling UEBA alerts with SOAR playbooks for rapid triage and containment. UEBA is especially effective for insider threats, compromised accounts, and stealthy lateral movement.

Common misconceptions: FIM can detect unauthorized file changes, but it is still largely rule/threshold driven and narrow in scope; it won’t reliably detect novel attacker behavior across identities and entities. SASE is an architecture for delivering network/security controls, not a primary method for detecting missed IoCs. CSPM focuses on cloud misconfigurations and compliance posture, not behavioral detection. EAP is an authentication framework, unrelated to detection analytics.

Exam tips: When you see “signature-based missed it” or “unknown/zero-day/novel techniques,” look for behavior/anomaly analytics: UEBA, NDR, EDR with behavioral engines, or ML-based detections. If the question emphasizes user/account misuse and baselining, UEBA is the strongest match. Map each option to its primary purpose (detection analytics vs. posture management vs. access control) to eliminate distractors.
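The baselining idea behind UEBA can be illustrated with a toy sketch. This is not how any real UEBA product is implemented (they use far richer statistical and ML models over many telemetry sources); it only shows the core principle of flagging deviations from an entity's historical baseline. All names here are illustrative.

```python
# Toy baseline/anomaly check in the spirit of UEBA (illustrative only).
from statistics import mean, stdev

def is_anomalous(history, observed, threshold=3.0):
    """Flag a value that deviates more than `threshold` standard
    deviations from the entity's historical baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Baseline: a user normally logs in between 08:00 and 10:00.
login_hours = [8, 9, 9, 8, 10, 9, 8, 9]
print(is_anomalous(login_hours, 3))   # 03:00 login -> True (flagged)
print(is_anomalous(login_hours, 9))   # normal hour -> False
```

No signature is involved: a 03:00 login is flagged purely because it deviates from this user's established behavior, which is exactly the gap a signature-only platform leaves open.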
Want to practice every question on the go?
Download Cloud Pass for free — practice exams, study progress tracking, and more.


A security architect discovers the following while reviewing code for a company's website:

selection = "SELECT Item FROM Catalog WHERE ItemID = " & Request("ItemID")

Which of the following should the security architect recommend?
Client-side processing is not a reliable security control because the attacker controls the browser and can modify requests (e.g., via proxy tools). Even if JavaScript validates ItemID, an attacker can send a malicious ItemID directly to the server. Server-side protections are required for injection flaws, so this does not address the root cause.
Query parameterization (prepared statements) is the correct mitigation for SQL injection. The SQL is sent with placeholders (e.g., WHERE ItemID = ?) and the user input is bound as a parameter with a specific type (integer). This prevents the input from being interpreted as SQL syntax, stopping injected operators, comments, or additional statements.
Data normalization improves database structure and reduces redundancy (e.g., splitting data into related tables). While important for integrity and performance, it does not prevent an attacker from injecting SQL through concatenated queries. Normalization addresses schema design, not secure query construction or input handling.
Escape character blocking (or escaping) is a weak and error-prone approach. Attackers can bypass filters using alternate encodings, DB-specific syntax, comment styles, or unexpected character sets. Escaping may be part of defense-in-depth in some contexts, but it is not the preferred control compared to parameterized queries.
URL encoding only changes how characters are represented in transit (e.g., spaces to %20). It does not neutralize SQL metacharacters once decoded by the server framework, and it is not intended as an injection defense. Attackers can still deliver payloads that decode into malicious SQL fragments.
Core Concept: The code concatenates untrusted user input (Request("ItemID")) directly into a SQL statement. This is the classic pattern that enables SQL injection, where an attacker manipulates the query logic by supplying crafted input (e.g., "1 OR 1=1" or "1; DROP TABLE Catalog--"). The security control being tested is secure database query construction.

Why the Answer is Correct: Query parameterization (prepared statements) separates code from data. Instead of building SQL strings, the application sends the SQL template with placeholders and binds the user-supplied value as a typed parameter. The database treats the parameter strictly as data, not executable SQL, preventing injected operators, comments, or stacked queries from altering the statement’s structure. This is the primary, recommended mitigation in OWASP guidance for injection flaws.

Key Features / Best Practices:
1) Use prepared statements/parameterized queries in the data access layer (ADO.NET, JDBC, PDO, etc.).
2) Enforce strong input validation (e.g., ItemID must be an integer) as defense-in-depth, but not as the sole control.
3) Apply least privilege to the database account used by the web app to reduce impact if a flaw exists.
4) Consider stored procedures only if they are parameterized and do not build dynamic SQL internally.

Common Misconceptions: Escaping or blocking characters can seem effective, but it is error-prone (different DBs, encodings, and bypass techniques). URL encoding is not a security control; it only changes representation. Client-side processing can be bypassed because attackers control the client. Data normalization is a database design practice and does not prevent injection.

Exam Tips: When you see string concatenation into SQL ("... WHERE id = " & Request(...)), the best answer is almost always parameterized queries/prepared statements. If “input validation” is offered, remember it is helpful but secondary; parameterization is the definitive control for SQL injection prevention.
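The recommended fix can be sketched in Python using the standard-library sqlite3 driver as a stand-in for the site's real data access layer (the original snippet is classic ASP; table and column names below mirror the question's query):

```python
# Parameterized query sketch: placeholder (?) sends SQL and data
# separately, so the driver binds item_id strictly as data and injected
# SQL text cannot change the statement's structure.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Catalog (ItemID INTEGER, Item TEXT)")
conn.execute("INSERT INTO Catalog VALUES (1, 'Widget')")

def get_item(item_id):
    row = conn.execute(
        "SELECT Item FROM Catalog WHERE ItemID = ?", (item_id,)
    ).fetchone()
    return row[0] if row else None

print(get_item(1))             # Widget
print(get_item("1 OR 1=1"))    # None: the payload is just data, no match
```

Contrast this with the vulnerable original, where "1 OR 1=1" would be concatenated into the SQL text and change the WHERE clause itself.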
A security architect needs to enable a container orchestrator for DevSecOps and SOAR initiatives. The engineer has discovered that several Ansible YAML files used for the automation of configuration management have the following content:
$ hostnamectl
COMPTIA001
$ cat /etc/ansible/ansible.cfg
[inventory]
enable_plugins = kubernetes.core.k8s
$ cat /etc/ansible/projects/roles/k8/default/main.yml
---
- Name: Create a Kubernetes Service Objects
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v2
      kind: Service
$ cat /etc/kubernetes/manifests
insecure-bind-address "localhost"
Which of the following should the engineer do to correct the security issues presented within this content?
Changing kubernetes.core.k8s to kubernetes.core.k8s_service is not a security fix. The generic k8s module is valid for managing Kubernetes objects from YAML definitions, including Service resources, so the issue is not the module name itself. This change would be a functional refactor at most and would not address insecure exposure, authentication, or control-plane hardening. Security questions should focus on reducing attack surface, not swapping equivalent automation modules.
Changing the hostname from COMPTIA001 to localhost would be operationally incorrect and would not remediate the security issue. Kubernetes nodes and automation systems rely on stable, unique host identity for certificates, networking, and management. Replacing a real hostname with localhost can break resolution and create ambiguity rather than improve security. The problem is not the hostname value but the insecure configuration choices around orchestration access.
Changing state: present to state: absent would simply remove the Kubernetes object instead of fixing the security weakness. Deleting a Service definition is not a targeted remediation for insecure control-plane or automation configuration. Security hardening should address how access is exposed and controlled, not arbitrarily remove resources that may be required for operations. This option confuses resource lifecycle management with actual security remediation.
Updating or removing the ansible.cfg file is the best answer among the available options because it can reduce unnecessary Kubernetes automation exposure and enforce safer orchestration behavior. The file explicitly enables the kubernetes.core.k8s plugin, which should only be present when there is a justified operational need and proper access controls around kubeconfig, RBAC, and credentials. In contrast to the other choices, this option can improve security posture without increasing the reachability of an insecure Kubernetes endpoint. While it does not directly fix the insecure-bind-address line, it is the only offered remediation that plausibly reduces risk rather than worsening it.
Changing insecure-bind-address from localhost to COMPTIA001 would make an insecure endpoint more accessible from the network, which is the opposite of hardening. For any setting labeled insecure, best practice is to disable it entirely or keep it tightly restricted, not expose it to a broader interface. Binding to localhost limits exposure to the local machine, whereas binding to the host identity can allow remote access depending on name resolution and interface mapping. This option increases attack surface and is therefore not a valid security correction.
Core concept: This question is testing recognition of insecure Kubernetes and Ansible configuration choices in an automation-driven DevSecOps/SOAR environment. The most obvious security issue shown is the presence of an insecure Kubernetes bind setting, but none of the answer choices offer the proper remediation of disabling the insecure setting entirely. Because option E would make the insecure endpoint more broadly reachable, the safest corrective action among the provided choices is to update or remove the ansible.cfg file so unnecessary Kubernetes inventory/plugin exposure is reduced.

Why correct: The ansible.cfg file explicitly enables the kubernetes.core.k8s plugin, which may broaden automation access to Kubernetes resources and should be reviewed, restricted, or removed if not required. Since the manifest issue cannot be correctly fixed by changing localhost to COMPTIA001, D is the only option that plausibly improves security rather than worsening it. In secure DevSecOps design, automation configuration should follow least privilege and only enable plugins and integrations that are necessary.

Key features: Secure automation requires minimizing exposed control-plane access, using least-privilege service accounts, and avoiding unnecessary inventory or orchestration plugins. Kubernetes insecure bind settings should normally be disabled rather than rebound to a wider interface. Ansible configuration files are a common place to enforce safer defaults and reduce attack surface.

Common misconceptions: A common mistake is assuming localhost is the problem because remote tools cannot reach it, but for an insecure endpoint localhost is actually safer than exposing it externally. Another misconception is that changing modules or deleting Kubernetes resources fixes security posture; those actions do not address the actual risk. Also, hostnames are not a substitute for secure binding, authentication, or authorization.

Exam tips: On SecurityX-style questions, if an option would expand access to something explicitly labeled insecure, it is almost certainly wrong. Prefer answers that reduce exposure, remove unnecessary functionality, or enforce least privilege. When the ideal remediation is not listed, choose the option that most improves security without creating a larger attack surface.
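As a sketch of the direction option D describes, the inventory plugin line can simply be commented out of ansible.cfg until there is a documented operational need for it. This is an illustrative fragment, not a complete hardening of the environment; real remediation would also pair it with scoped kubeconfig credentials, RBAC, and removal of the insecure bind setting itself.

```ini
# /etc/ansible/ansible.cfg — reduced-exposure sketch: the Kubernetes
# inventory plugin stays disabled unless operationally justified.
[inventory]
# enable_plugins = kubernetes.core.k8s
```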
A CRM company leverages a CSP PaaS service to host and publish its SaaS product. Recently, a large customer requested that all infrastructure components meet strict regulatory requirements, including configuration management, patch management, and life-cycle management. Which of the following organizations is responsible for ensuring those regulatory requirements are met?
Correct. The CRM company is the SaaS provider to the large customer and is accountable for meeting the customer’s regulatory requirements end-to-end. Even when using a CSP PaaS, the CRM company must ensure required controls exist (via CSP capabilities and its own processes), document them, contract for them (SLAs/right-to-audit), and provide evidence through audits/attestations and continuous compliance monitoring.
Incorrect. The CRM company’s customer can mandate requirements contractually and may audit or request evidence, but it does not operate the CRM company’s infrastructure or platform. The customer’s role is to define expectations and verify via governance mechanisms (e.g., due diligence, audits), not to implement configuration management, patching, or lifecycle management within the vendor’s environment.
Incorrect. In PaaS, the CSP is responsible for many underlying components (physical security, hypervisor, core platform services, and often platform patching). However, the CRM company remains accountable to its customer for compliance. The CSP may help satisfy requirements via certifications and SLAs, but it is not the party responsible for ensuring the CRM customer’s regulatory obligations are met overall.
Incorrect. Regulatory bodies define requirements, publish standards, and may enforce compliance through audits, penalties, or licensing actions. They do not manage a specific company’s configurations, patching schedules, or lifecycle processes. The regulated entity (here, the CRM company providing the SaaS) must implement and demonstrate compliance, including managing third-party providers like the CSP.
Core concept: This question tests the cloud shared responsibility model in a PaaS context and how regulatory/compliance obligations flow through a SaaS provider’s supply chain. In PaaS, the cloud service provider (CSP) secures and operates the underlying cloud infrastructure and managed platform, while the customer (here, the CRM company) remains accountable for meeting contractual and regulatory requirements for the service they deliver.

Why the answer is correct: The large customer is contracting with the CRM company for a SaaS product. Even though the CRM company uses a CSP’s PaaS, the CRM company is the service provider to its customer and therefore owns the obligation to ensure regulatory requirements are met end-to-end. Practically, the CRM company must select a CSP/PaaS that can support required controls (e.g., evidence of patching, configuration baselines, lifecycle processes), define those requirements in contracts/SLAs, and continuously verify compliance through audits, reports, and monitoring. The CSP may perform many underlying tasks, but responsibility (accountability) for compliance to the CRM customer remains with the CRM company.

Key features / best practices: The CRM company should use vendor risk management and third-party assurance (e.g., SOC 2 reports, ISO 27001 certifications, FedRAMP authorizations where applicable), negotiate SLAs and right-to-audit clauses, and map requirements to control frameworks (NIST SP 800-53/800-171, ISO 27001, CIS Controls). They should implement configuration management (IaC, baseline hardening, drift detection), patch management processes for what they control (application code, dependencies, tenant configuration), and lifecycle management (asset inventory, EOL/EOS tracking, vulnerability management, change control). They must also collect evidence for audits and provide compliance attestations to the customer.

Common misconceptions: Option C (CSP) is tempting because the CSP patches and manages much of the platform in PaaS. However, the CSP’s responsibilities do not automatically satisfy the CRM company’s customer-specific regulatory obligations unless contractually guaranteed and verified. Option B (customer) is incorrect because the customer can require controls but cannot operationally enforce them inside the CRM company’s environment. Option D (regulatory body) sets rules and may audit/enforce, but it does not implement controls.

Exam tips: For CAS-005, distinguish “who performs a control” from “who is accountable for compliance.” In SaaS delivered by a vendor, the vendor (CRM company) is accountable to its customer, even when it relies on a CSP. Always think: contract chain + shared responsibility + evidence/assurance.
The results of an internal audit indicate several employees reused passwords that were previously included in a published list of compromised passwords. The company has the following employee password policy:

Attribute | Requirement
Complexity | Enabled
Character class | Special character, number
Length | 10 characters
History | 8
Maximum age | 60 days
Minimum age | 0

Which of the following should be implemented to best address the password reuse issue? (Choose two.)
Correct. Minimum age of 0 allows immediate repeated changes, enabling users to cycle through passwords until history is exhausted and then reuse an old (possibly compromised) password. Setting minimum age to two days blocks rapid cycling and makes the history setting meaningful. This is a direct control for the specific “reuse” behavior identified in the audit.
Correct. Password history of 8 only blocks reuse of the last eight passwords. If an employee used a compromised password earlier than that, they can reuse it today. Increasing history to 20 expands the disallowed set and reduces the likelihood of reusing older passwords, including those found in published compromised lists.
Incorrect for the stated issue. Increasing length to 12 generally improves entropy and resistance to brute force, but it does not prevent an employee from reusing a password that is already known to attackers. A longer password can still be compromised if it appears in breach corpuses; reuse controls are the primary need here.
Incorrect for the stated issue. Adding case sensitivity (requiring upper/lower) increases complexity and may marginally improve guessing resistance, but it does not stop reuse of a previously compromised password. Attackers commonly try case variants, and users often make predictable capitalization changes; this does not address reuse directly.
Incorrect. Reducing maximum age to 30 days forces more frequent changes. Modern guidance warns frequent forced changes can lead to weaker, more predictable passwords and reuse patterns (e.g., incrementing numbers). It still doesn’t prevent selecting a password from a compromised list, and it doesn’t stop history bypass if minimum age remains 0.
Incorrect and harmful. Removing complexity requirements would likely reduce password strength and increase risk. It also does nothing to prevent reuse of compromised passwords. Even though modern standards emphasize screening against compromised lists over arbitrary complexity rules, removing complexity without adding screening is a net negative.
Incorrect. Increasing maximum age to 120 days reduces how often passwords change, which can be beneficial for usability, but it does not address reuse of compromised passwords. If users are already reusing known-compromised passwords, extending the change interval could prolong exposure rather than mitigate it.
Core concept: This question tests password policy controls that prevent password reuse, especially reuse of known-compromised passwords. The audit found employees reused passwords that appeared in a published compromised list. While “password history” prevents reusing recent passwords, users can still cycle through changes quickly (because minimum age is 0) and return to an old password. Governance-wise, this is a policy/control gap.

Why the answers are correct: A (increase minimum age) prevents rapid password cycling. With minimum age = 0, a user can change their password eight times in a row (meeting history=8) and then set the compromised password again. Setting a minimum age (e.g., two days) makes this impractical and is a classic control to stop “history bypass.” B (increase history) expands the set of disallowed previous passwords. If a compromised password was used more than eight changes ago, the current policy allows it. Increasing history to 20 reduces the chance that an older, previously used (and possibly compromised) password can be reused.

Key features / best practices:
- Password history + minimum password age work together. History alone is weak if minimum age is 0.
- These controls are typically implemented in directory services (e.g., AD) and enforced centrally.
- In modern guidance (e.g., NIST SP 800-63B), organizations are encouraged to screen against known-compromised password lists. That would be ideal, but it is not an option here; therefore, strengthening reuse controls is the best available answer.

Common misconceptions:
- Increasing complexity/length (C/D) improves resistance to guessing, but does not stop reuse of a known-compromised password. A long/complex password can still be breached if it’s already in a leaked list.
- Decreasing maximum age (E) increases change frequency, which can actually encourage predictable reuse patterns and does not directly prevent reuse.
- Increasing maximum age (G) reduces changes, but still doesn’t prevent reuse of compromised passwords.

Exam tips: When you see “password reuse” and “history,” check “minimum age.” If minimum age is 0, users can rotate through changes to defeat history. The best pair is usually “increase history” + “set a nonzero minimum age.”
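The interplay between history and minimum age can be sketched as a toy policy check (a hypothetical policy engine, not any directory service's real implementation; real systems compare password hashes, not plaintext):

```python
# Toy sketch showing why history and minimum age must work together:
# with min_age_days = 0 a user can burn through the history list in
# seconds and return to an old (possibly compromised) password.
from datetime import datetime, timedelta

class PasswordPolicy:
    def __init__(self, history=8, min_age_days=2):
        self.history = history          # how many old passwords are blocked
        self.min_age_days = min_age_days  # nonzero blocks rapid cycling

    def can_change(self, last_changed, now):
        return now - last_changed >= timedelta(days=self.min_age_days)

    def is_reuse(self, new_password, previous):
        # Only the last `history` entries are checked, mirroring AD-style
        # "Enforce password history" behavior.
        return new_password in previous[-self.history:]

policy = PasswordPolicy(history=8, min_age_days=2)
now = datetime(2024, 1, 10)
print(policy.can_change(now - timedelta(hours=1), now))  # False: too soon
print(policy.is_reuse("Winter2023!", ["Winter2023!", "Spring2024!"]))  # True
```

With min_age_days set to 0, can_change always returns True, so nothing stops eight back-to-back changes that flush the history list; raising it to 2 makes that bypass take over two weeks.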
While performing mandatory monthly patch updates on a production application server, the security analyst reports an instance of buffer overflow for a new application that was migrated to the cloud and is also publicly exposed. Security policy requires that only internal users have access to the application. Which of the following should the analyst implement to mitigate the issues reported? (Choose two.)
Correct. Blocking external traffic enforces the stated policy (internal users only) and immediately reduces attack surface for a publicly exposed cloud workload. In practice this means tightening firewall/security group/NACL rules to allow only internal CIDRs, VPN ranges, or access via private connectivity/identity-aware proxy. This is the fastest containment step while the application vulnerability is remediated.
Correct. Buffer overflows are commonly triggered by oversized or malformed input. Enabling strong server-side input validation (allowlisting, strict length limits, bounds checks, canonicalization) prevents many exploit payloads from reaching vulnerable routines. While not a substitute for patching/refactoring unsafe code, it is a key mitigation and aligns with secure coding best practices tested on SecurityX.
Incorrect. Automatic updates improve patch compliance, but the scenario already involves monthly patching and the issue is a newly migrated application with a reported overflow and improper exposure. Auto-updates do not directly address the immediate requirement to restrict access to internal users, nor do they guarantee a fix for an application-level overflow without a vendor/app patch.
Incorrect. Enabling external traffic contradicts the security policy requiring internal-only access and increases risk because the application is already publicly exposed. In cloud environments, opening security groups to the internet (0.0.0.0/0) is a common misconfiguration that expands the attack surface and makes exploitation of vulnerabilities like buffer overflows more likely.
Incorrect. DLP focuses on detecting/preventing sensitive data exfiltration (e.g., PII, PCI) after access to data paths exists. It does not mitigate the root issue of a buffer overflow vulnerability or the misconfiguration of public exposure. DLP can be valuable as a detective/compensating control, but it is not the best answer for immediate mitigation here.
Incorrect. Nightly vulnerability scans are a detective control that can help identify exposures and missing patches, but they do not prevent exploitation of a known buffer overflow or enforce internal-only access. Scanning is useful for ongoing assurance and compliance, yet the question asks what to implement to mitigate the reported issues, which requires preventive controls.
Core concept: This question tests layered mitigation for (1) exposure control in cloud networking and (2) secure coding controls against memory corruption (buffer overflow). It also implicitly tests aligning technical controls to policy (internal-only access) and reducing attack surface during patching/migration.

Why the answers are correct: A is required because the application is publicly exposed but policy mandates internal-only access. The fastest, most direct mitigation is to restrict inbound access at the network boundary (cloud security group/NACL/firewall/WAF edge rules) so only internal IP ranges/VPN/zero-trust access paths can reach the service. This reduces immediate risk from internet-based exploitation attempts. B is required because a reported buffer overflow indicates unsafe handling of input (e.g., unchecked length, improper bounds checking). Input validation (allowlisting, length checks, canonicalization) is a primary compensating control to reduce exploitability by preventing oversized or malformed payloads from reaching vulnerable code paths. While the ultimate fix is patching/refactoring (e.g., safe libraries, compiler protections), input validation is a standard mitigation that can be implemented at the application layer and/or via API gateway/WAF rules.

Key features / best practices:
- Network access control: implement default-deny inbound, restrict to internal CIDRs, require VPN/SD-WAN, private endpoints, or identity-aware proxy. In cloud terms, security groups should not allow 0.0.0.0/0 to the app ports.
- Secure input handling: server-side validation for all fields, enforce maximum lengths, type/range checks, reject unexpected encodings, and sanitize where appropriate. Pair with secure SDLC practices and memory-safe functions.

Common misconceptions:
- Automatic updates (C) and nightly scans (F) improve hygiene but do not immediately stop public access or prevent exploitation of a known overflow. Scans detect; they don’t mitigate.
- DLP (E) addresses data exfiltration monitoring, not preventing initial compromise via buffer overflow.
- Enabling external traffic (D) is the opposite of the policy requirement.

Exam tips: When you see “publicly exposed but should be internal-only,” prioritize network segmentation/access control (firewall/security group/private access). When you see “buffer overflow,” think bounds checking, input validation, patching, and compensating controls like WAF rules; choose the options that directly reduce exploitability and exposure.
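The server-side validation described above can be sketched as follows. This is an illustrative allowlist check, not the application's real API; the field name, length limit, and character class are assumptions for the example:

```python
# Allowlist-style server-side validation sketch: bound the length and
# restrict the character class before input reaches lower-level code,
# so oversized or malformed payloads are rejected up front.
MAX_LEN = 64  # assumed limit for a numeric identifier field

def validate_input(value, max_len=MAX_LEN):
    """Reject oversized input, then require strictly numeric content."""
    if len(value) > max_len:
        return False          # classic overflow-style payloads stop here
    return value.isdigit()    # only the expected character class passes

print(validate_input("12345"))       # True
print(validate_input("A" * 5000))    # False: oversized payload rejected
```

Length checks like this do not replace patching the vulnerable code, but they shrink the set of inputs that can ever reach the unsafe routine, which is exactly the compensating role the explanation assigns to input validation.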
Several unlabeled documents in a cloud document repository contain cardholder information. Which of the following configuration changes should be made to the DLP system to correctly label these documents in the future?
Digital rights management (DRM) enforces usage controls (view, print, copy, forward) and can apply encryption tied to identities. While DRM can be triggered after a document is classified, it does not help the DLP system discover or correctly label unlabeled documents containing cardholder data. DRM is a protection/enforcement mechanism, not a primary content detection method for PCI patterns.
Network traffic decryption enables inspection of encrypted traffic (TLS interception) so DLP can analyze data in transit. However, the scenario focuses on documents in a cloud document repository (data at rest) and the need to label them correctly in the future. Decrypting network traffic may help with uploads/downloads, but it does not provide the core content-identification logic needed to detect cardholder data within documents.
Regular expressions are a primary DLP technique for detecting structured sensitive data such as credit card numbers, SSNs, and other identifiers. For cardholder information, regex patterns (often combined with Luhn checksum validation and keyword proximity) allow accurate identification of PANs inside documents. Once detected, the DLP policy can automatically apply the appropriate classification label, meeting the requirement to label such documents correctly going forward.
Watermarking adds visible or invisible markings (e.g., “Confidential”, user ID, timestamp) to deter leakage and support attribution. It is typically an output control applied after a document is already classified or labeled. Watermarking does not help the DLP system detect cardholder information inside unlabeled documents, so it won’t solve the root problem of correctly labeling documents based on their content.
Core concept: This question tests Data Loss Prevention (DLP) content inspection and automated data classification/labeling. When documents are “unlabeled” but contain cardholder information (PCI data), the DLP system must be configured to detect that sensitive content reliably so it can apply the correct label/classification going forward.

Why the answer is correct: Cardholder information (e.g., Primary Account Numbers/PANs) follows well-known patterns and validation rules. DLP products commonly identify such data using pattern matching (regular expressions) and sometimes additional checks (e.g., the Luhn checksum for credit card numbers) plus proximity rules (e.g., a PAN near keywords like “Visa”, “CVV”, “expiration”). Configuring or tuning the DLP policy with regular expressions (and associated validators) enables the system to recognize cardholder data inside documents stored in a cloud repository and then automatically label/classify them (e.g., “PCI”, “Cardholder Data”, “Confidential”). This directly addresses the requirement: “correctly label these documents in the future.”

Key features/configuration best practices:
- Use built-in PCI/financial “data identifiers” where available; these are often implemented with regex + checksum validation.
- Reduce false positives by enabling Luhn validation, setting minimum/maximum digit counts, and requiring contextual keywords.
- Apply the detection to data at rest in the repository (scanning/indexing) and to data in motion (uploads/downloads) as needed.
- Map detection results to an auto-labeling action (classification label/tag) and optionally to enforcement (quarantine, block sharing, encrypt, alert).

Common misconceptions:
- DRM and watermarking are controls applied after classification; they don’t help the DLP engine discover unlabeled PCI content.
- Network traffic decryption helps inspect encrypted network flows, but the scenario is about documents already in a cloud repository and the need for content-based labeling logic.

Exam tips: When the question asks how to “identify” or “detect” specific sensitive data types (PCI, SSNs, PHI) for labeling/classification, look for content inspection mechanisms such as regular expressions, dictionaries, exact data match (EDM), fingerprinting, or built-in data identifiers. When it asks how to “protect” already-identified data, think DRM, encryption, watermarking, or access controls.
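To make the regex-plus-checksum idea concrete, here is a minimal Python sketch of how a PAN data identifier might combine pattern matching with Luhn validation. The pattern, digit thresholds, and function names are simplified assumptions for illustration, not any vendor's actual DLP rule.

```python
import re

# Simplified PAN candidate pattern: 13-16 digits, optionally separated
# by spaces or dashes (real DLP identifiers are more precise per card brand).
PAN_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_pans(text: str) -> list[str]:
    """Return candidate PANs that match the pattern AND pass Luhn validation."""
    hits = []
    for match in PAN_RE.finditer(text):
        if luhn_valid(match.group()):
            hits.append(match.group())
    return hits
```

The Luhn step is what cuts false positives: a random 16-digit string (an order number, a serial) usually fails the checksum, so only plausible card numbers survive to the labeling action.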
A programmer is reviewing the following proprietary piece of code that was identified as a vulnerability due to users being authenticated when they provide incorrect credentials:
GET USERID
GET PASS
JUMP TO :ALLOWUSER;
IF USERID == GETDBUSER(USERID) AND HASH(PASS) == GETDBPASS(USERID)
EXIT
:ALLOWUSER:
SET USERACL(USERID)
...
...
...
Which of the following should the programmer implement to remediate the code vulnerability?
Salted hashing strengthens password storage by preventing rainbow-table attacks and ensuring identical passwords hash differently. However, the vulnerability here is not weak hashing; it is that the code unconditionally jumps to the allow path before checking credentials. Even with salted hashing, the bypass remains because the verification step is skipped.
Input validation on USERID and PASS can prevent malformed input, injection, and unexpected parsing behavior. But the shown flaw is a direct logic/control-flow error: `JUMP TO :ALLOWUSER;` executes before validation. Validating inputs does not stop the unconditional jump from granting access.
Atomic execution of subroutines (i.e., ensuring the authentication check completes and gates access before any authorization code runs) addresses the root cause: an authentication bypass due to incorrect control flow. The fix is to remove/relocate the unconditional jump and only reach `ALLOWUSER` after the IF condition succeeds, enforcing fail-closed behavior.
TOCTOU remediation applies when a resource is checked and later used, and an attacker can change it between the check and the use. Here, the issue is not a race between checking and setting ACLs; it is that ACLs are set without any successful authentication due to the unconditional jump.
Encrypting the database connection (e.g., TLS) protects credentials and queries in transit and reduces eavesdropping/MITM risk. It does not fix the application logic that grants access without verifying credentials. The authentication bypass would still occur even with perfectly encrypted DB communications.
Core concept: This is a secure coding/control-flow vulnerability. The code performs an unconditional jump to the allow/authorization label before validating credentials, creating an authentication bypass (logic flaw). In CAS-005 terms, this falls under Security Engineering: implementing correct authentication flow, fail-closed logic, and safe control transfer.

Why the answer is correct: The line `JUMP TO :ALLOWUSER;` occurs before the `IF USERID == ... AND HASH(PASS) == ...` check. That means execution always reaches `:ALLOWUSER:` and runs `SET USERACL(USERID)` regardless of whether the credentials are correct. The remediation is to ensure the authentication check executes atomically and completely before any authorization/ACL-setting routine can run. Practically, remove the unconditional jump and only branch to `ALLOWUSER` after the IF condition succeeds (or invert the logic: default deny, then allow on success). This is best described by ensuring atomic execution of the authentication/authorization subroutines and correct control flow (no early/unsafe jumps).

Key features / best practices:
- Fail closed: default to deny; only grant access after successful verification.
- Keep authentication and authorization sequencing strict: authenticate first, then authorize.
- Use structured control flow (functions/returns) rather than arbitrary jumps/labels where possible.
- Add an explicit else/deny path (e.g., log, delay, lockout) to prevent bypass and reduce brute-force risk.

Common misconceptions:
- Improving password hashing (salting) is important, but it does not fix an authentication bypass caused by control flow. Even perfect hashing won’t matter if the code never checks it.
- Input validation helps prevent injection and malformed input, but it won’t correct an unconditional jump that grants access.
- TOCTOU issues and encrypted DB connections are unrelated to the immediate bug: users are authenticated with incorrect credentials because of a logic error.

Exam tips: When you see “users are authenticated with incorrect credentials,” look first for logic errors: misplaced returns/jumps, inverted conditions, missing braces, or authorization happening before authentication. Choose the option that fixes control flow and enforces a single, complete, non-bypassable authentication decision before privileges are assigned.
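The remediated flow can be sketched in Python (the original is pseudocode): the unconditional jump is gone, and the ACL is set only after the credential check succeeds. `get_db_pass`, `hash_password`, and `set_user_acl` are hypothetical stand-ins for the GETDBPASS/HASH/SETUSERACL routines, and the in-memory store exists only for illustration.

```python
import hashlib

# Hypothetical credential store for illustration (userid -> password hash).
_DB = {"alice": hashlib.sha256(b"s3cret").hexdigest()}

def hash_password(password: str) -> str:
    # Stand-in for HASH(PASS); a real system would use a salted, slow hash.
    return hashlib.sha256(password.encode()).hexdigest()

def get_db_pass(userid: str):
    # Stand-in for GETDBPASS(USERID); returns None for unknown users.
    return _DB.get(userid)

def set_user_acl(userid: str) -> str:
    # Stand-in for SET USERACL(USERID): the allow path.
    return f"ACL set for {userid}"

def login(userid: str, password: str) -> str:
    # Default deny: authentication must fully succeed before any
    # authorization code runs. No jump can reach the allow path early.
    stored = get_db_pass(userid)
    if stored is not None and hash_password(password) == stored:
        return set_user_acl(userid)   # reached only after a successful check
    return "ACCESS DENIED"            # fail closed on any mismatch
```

Note the structured control flow: a single decision point, an explicit deny path, and no label that can be reached without passing the check.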
Which of the following is the best way to protect the website browsing history for an executive who travels to foreign countries where internet usage is closely monitored?
DoH (DNS over HTTPS) encrypts DNS queries so observers on the local network or ISP cannot easily see which domains are being resolved or tamper with DNS responses. Since DNS logs are a common source of “browsing history” reconstruction, DoH directly reduces visibility of visited sites’ domain names. It is especially relevant in high-surveillance environments where DNS monitoring and censorship are prevalent.
EAP-TLS is an 802.1X authentication method for network access (commonly enterprise Wi-Fi) using client certificates for strong mutual authentication. While it improves secure access to a Wi-Fi network, it does not encrypt or hide browsing destinations from upstream monitoring once connected. It addresses authentication and access control, not privacy of DNS queries or web browsing metadata.
Geofencing restricts or triggers actions based on geographic location (e.g., blocking logins from certain countries, limiting app functionality abroad). It is a policy control to reduce risk exposure, not a mechanism to protect browsing history from being monitored. In fact, geofencing could prevent access entirely rather than provide privacy while traveling.
Private browsing mode (incognito) primarily prevents the browser from saving local history, cookies, and cached data on the device after the session ends. It does not prevent network observers, ISPs, or government monitoring from seeing DNS queries, destination IPs, or traffic patterns. It is a local privacy feature, not a network privacy or anti-surveillance control.
Core Concept: This question tests privacy protections for web browsing metadata, specifically DNS lookups. Even when using HTTPS, DNS queries can reveal which domains a user visits. In countries with heavy monitoring, DNS traffic is commonly logged, filtered, or manipulated (censorship, redirection, surveillance).

Why the Answer is Correct: DNS over HTTPS (DoH) encrypts DNS queries by sending them inside an HTTPS session to a DoH-capable resolver. This prevents local networks (hotel Wi-Fi, ISP, captive portals, government monitoring points) from easily seeing or altering DNS requests in transit. Protecting DNS is a practical way to reduce exposure of browsing history because domain lookups are one of the most visible and frequently collected indicators of browsing behavior.

Key Features / Best Practices:
- Use a trusted DoH resolver (enterprise-managed or a reputable provider) and enforce it via endpoint management (MDM/EDR policies, browser policies).
- Pair DoH with a full-tunnel VPN for stronger protection: DoH hides DNS from the local network, while the VPN also hides destination IPs and other metadata.
- Disable fallback to plaintext DNS where possible; ensure the OS/browser is configured to prefer encrypted DNS.
- Monitor for “DNS leak” conditions (e.g., VPN split tunneling, captive portal behavior) and validate with testing tools.

Common Misconceptions: Private browsing mode is often mistaken for “anonymous browsing.” It mainly prevents local storage of history/cookies on the device, not network-level monitoring. EAP-TLS is for Wi-Fi authentication, not browsing privacy. Geofencing is a control to restrict access by location; it does not protect an executive’s browsing history from surveillance.

Exam Tips: When the question mentions “closely monitored internet usage,” think about metadata visibility (DNS, SNI, IP destinations) and encryption-in-transit controls. For “browsing history” exposure on untrusted networks, encrypted DNS (DoH/DoT) and VPN are the go-to answers. If VPN is not among the choices, DoH is the best fit of the provided options.
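As one example of enforcing DoH through browser policy, Firefox's enterprise `policies.json` supports a `DNSOverHTTPS` policy that can enable encrypted DNS and lock the setting so the traveling user cannot disable it. The resolver URL below is a placeholder, not a recommendation of a specific provider.

```json
{
  "policies": {
    "DNSOverHTTPS": {
      "Enabled": true,
      "ProviderURL": "https://doh.example.net/dns-query",
      "Locked": true
    }
  }
}
```

An MDM platform would deploy this file to managed endpoints; equivalent controls exist for other browsers and at the OS level.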
A web application server is running a legacy operating system with an unpatched RCE vulnerability. The server cannot be upgraded until the corresponding application code is changed. Which of the following compensating controls would best prevent successful exploitation?
Segmentation reduces the attack surface by limiting which networks can reach the vulnerable server and helps contain lateral movement if compromise occurs. However, if the web server must remain accessible to clients (often the case), segmentation alone does not prevent exploitation attempts from allowed paths. It is a strong compensating control for containment, but not the best single control to prevent successful RCE exploitation on the host.
A CASB (Cloud Access Security Broker) governs access to cloud services (SaaS) and enforces policies like DLP, tokenization, and conditional access for cloud apps. It does not provide host-level exploit prevention for a legacy on-prem web server OS vulnerability. CASB would be relevant if the risk involved unsanctioned cloud usage or protecting data in SaaS, not blocking RCE on a server.
A HIPS provides host-based, preventive controls that can block exploit behavior even when the OS is unpatched. It can stop suspicious process creation, code injection, privilege escalation attempts, and known exploit patterns, effectively acting as a compensating control (sometimes called virtual patching at the host). For an RCE scenario, preventing execution and post-exploitation actions on the server is the most direct way to stop successful exploitation.
UEBA (User and Entity Behavior Analytics) focuses on detecting anomalous behavior by users, hosts, or services using baselines and analytics. It is primarily a detective control that improves alerting and investigation, not a preventive control that blocks exploitation. UEBA might help identify that the server is behaving abnormally after compromise, but it typically will not prevent the initial RCE from succeeding.
Core concept: This question tests compensating controls for an unpatchable remote code execution (RCE) vulnerability on a legacy web application server. When you cannot remediate (patch/upgrade) immediately, you reduce the likelihood of exploitation by adding preventive controls that block exploit techniques at runtime and/or at the host boundary.

Why the answer is correct: A host-based intrusion prevention system (HIPS) is the best compensating control to prevent successful exploitation because it can actively block malicious behavior on the vulnerable host even while the underlying OS flaw remains. For RCE, the attacker’s goal is to execute unauthorized code, spawn processes, inject into memory, modify system files/registry, or establish persistence. HIPS can enforce rules to prevent or terminate these actions (e.g., blocking suspicious child processes spawned by the web server process, preventing command shells, stopping unauthorized DLL injection, restricting script interpreters, and blocking known exploit patterns). This directly interrupts the kill chain at the “execution” and “privilege/persistence” stages, which is the closest substitute for patching.

Key features / best practices:
- Application/process control: allowlisting or constraining what the web server process can launch (e.g., prevent w3wp/httpd/nginx from spawning cmd.exe/powershell/bash).
- Exploit mitigation: memory protections, ROP/jump-oriented-programming detection, and behavioral exploit rules.
- Host firewall and policy enforcement: limit inbound/outbound connections from the host to only required services.
- Virtual patching signatures: some HIPS/EDR/HIDS suites can block known exploit payloads and post-exploitation behaviors.
- Pair with least privilege and service hardening (run service accounts with minimal rights) to reduce impact if partial exploitation occurs.

Common misconceptions: Segmentation is valuable but primarily reduces exposure and lateral movement; it does not reliably stop exploitation if the server must remain reachable by users or the Internet. CASB is for controlling cloud app usage and data access, not protecting an on-prem legacy OS against RCE. UEBA detects anomalies but is largely detective; it won’t “best prevent” exploitation.

Exam tips: When the question emphasizes “cannot be patched/upgraded” and asks what would “best prevent successful exploitation,” prefer preventive, host-enforced controls (HIPS/EDR with prevention, application control, exploit mitigation, or a WAF if offered). Choose segmentation when the goal is limiting blast radius or reducing the reachable attack surface, not when you need to directly block exploit execution on the host.
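The process-control rule described above ("prevent the web server from spawning shells") can be illustrated with a small Python sketch. Real HIPS/EDR products enforce this in a kernel driver or agent; the process names and policy function here are hypothetical and purely for illustration.

```python
# Illustrative HIPS-style rule: deny shell children of web server processes.
# A real agent would intercept process-creation events in the kernel;
# this sketch only models the allow/deny decision.

WEB_SERVER_PROCS = {"w3wp.exe", "httpd", "nginx"}      # protected parents
BLOCKED_CHILDREN = {"cmd.exe", "powershell.exe", "bash", "sh"}  # risky spawns

def allow_process_spawn(parent: str, child: str) -> bool:
    """Return False (block) when a web server process tries to launch a shell.

    This models the 'execution' stage of the kill chain: an RCE payload
    typically needs the web server process to spawn a command interpreter,
    so denying that parent/child pair stops the exploit from succeeding
    even though the underlying OS vulnerability is still present.
    """
    if parent in WEB_SERVER_PROCS and child in BLOCKED_CHILDREN:
        return False
    return True
```

A production policy would be default-deny with an explicit allowlist per parent process, plus logging/alerting on every blocked spawn so the SOC sees the attempted exploitation.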