
The Chief Executive Officer of an organization recently heard that exploitation of new attacks in the industry was happening approximately 45 days after a patch was released. Which of the following would best protect this organization?
A mean time to remediate of 30 days directly reduces the organization’s exposure window to known vulnerabilities. Since exploitation tends to occur around 45 days after patch release, remediating in 30 days means most systems will be patched before attackers commonly weaponize and widely exploit the issue. This is a proactive vulnerability management control aligned to patch SLAs and risk reduction.
A mean time to detect of 45 days is too slow and is also the wrong focus for the scenario. Detection measures how quickly you notice an issue or attack, not how quickly you eliminate the underlying vulnerability. If exploitation begins around day 45, detecting at day 45 means you may only notice compromise attempts as they start, while still being unpatched.
A mean time to respond of 15 days improves incident handling after detection, but it does not prevent exploitation of an unpatched vulnerability. You could respond quickly once an incident is identified, yet still suffer initial compromise because the vulnerable condition remains. The question asks what would best protect the organization given a patch-to-exploit timeline, which points to remediation speed.
Third-party application testing (e.g., pen testing, SAST/DAST, code review) can find weaknesses, especially in custom or externally supplied software, but it does not address the stated problem: attackers exploiting vulnerabilities after vendor patches are released. The immediate protective measure is deploying patches faster, not testing applications, which is longer-cycle and not tied to patch release timelines.
Core concept: This question tests vulnerability management metrics and how patching speed reduces exposure to “patch-to-exploit” timelines. Attackers often reverse-engineer patches or monitor advisories and then weaponize exploits shortly after release. The organization’s risk window is the time between patch availability and patch deployment across affected assets.

Why the answer is correct: If exploitation is commonly occurring about 45 days after a patch is released, the best protection is to ensure the organization remediates (patches/mitigates) faster than that window. A mean time to remediate (MTTR) of 30 days means, on average, vulnerabilities are fixed within 30 days—15 days before the typical exploitation point—thereby shrinking the exposure window and reducing the likelihood that systems remain vulnerable when mass exploitation begins.

Key features / best practices: Effective remediation within 30 days typically requires a mature patch management program: asset inventory and criticality ranking, vulnerability scanning and prioritization (e.g., CVSS plus exploitability and asset context), defined SLAs by severity (e.g., critical in 7–14 days, high in 30 days), change management with emergency patch lanes, automated deployment (MDM/SCCM/Intune/WSUS, etc.), validation (post-patch scanning), and compensating controls when patching is delayed (WAF rules, IPS signatures, configuration hardening, segmentation).

Common misconceptions: It’s tempting to focus on detection/response metrics (MTTD/MTTRsp) because they are incident-response oriented, but those do not prevent exploitation of known unpatched vulnerabilities; they only help you notice and react after compromise attempts begin. Third-party application testing is valuable, but it does not directly address the stated industry pattern of exploitation after vendor patch release.
Exam tips: When a question provides a “time-to-exploit after patch release,” the best defensive metric is usually remediation/patching speed (MTTR for vulnerabilities) and patch SLAs. Choose the option that reduces the vulnerability exposure window below the attacker’s typical weaponization timeline.
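The exposure-window arithmetic above can be sketched as a quick check; the function name and figures are illustrative, taken directly from the scenario:

```python
# Sketch: compare a remediation SLA against a typical patch-to-exploit
# timeline to see whether systems are patched before mass exploitation.
# Numbers come from the scenario above; the function name is illustrative.

def exposure_margin_days(mean_time_to_remediate: int,
                         patch_to_exploit_days: int) -> int:
    """Positive result = days of safety margin; negative = days exposed."""
    return patch_to_exploit_days - mean_time_to_remediate

# Scenario: exploitation ~45 days after patch release, MTTR of 30 days.
margin = exposure_margin_days(mean_time_to_remediate=30,
                              patch_to_exploit_days=45)
print(margin)  # 15: patched roughly two weeks before typical weaponization
```

The same check shows why an MTTD of 45 days fails the scenario: it leaves no margin at all.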
A security analyst received a malicious binary file to analyze. Which of the following is the best technique to perform the analysis?
Code analysis generally refers to reviewing source code (manual review or SAST) to find malicious logic, insecure functions, or vulnerabilities. For a received malicious binary, the analyst typically does not have the original source code, making traditional code analysis impractical. While decompiled output can resemble code, that activity is better categorized as reverse engineering rather than straightforward code review.
Static analysis examines an artifact without executing it (hashes, strings, headers, imports, entropy, signatures). It is useful for quick triage and IOC extraction, but by itself may not reveal full intent, especially if the binary is packed, obfuscated, or uses runtime-resolved APIs. Static analysis is often a component of reverse engineering, but it is not as complete a technique for understanding a malicious binary’s logic.
Reverse engineering is the process of analyzing a compiled binary to understand its internal logic and behavior without source code. Using disassembly/decompilation and debugging, an analyst can identify functionality (persistence, C2, credential theft), extract IOCs, and understand anti-analysis techniques. For a malicious binary file, reverse engineering is the most appropriate and comprehensive technique to determine what the malware does and how to detect/contain it.
Fuzzing is a testing technique that sends malformed or unexpected inputs to a program to trigger crashes or anomalous behavior, commonly used to discover vulnerabilities. It is not primarily used to analyze what a malicious binary does; instead, it helps find weaknesses in software. While you could fuzz a malware sample’s parser to learn about it indirectly, that is not the best or most direct technique for malware binary analysis.
Core Concept: This question tests malware analysis techniques, specifically how to analyze a compiled malicious binary. In CySA+ terms, this falls under incident response activities such as triage, analysis, and containment planning. The key idea is choosing the technique that best fits a binary (machine code) rather than source code.

Why the Answer is Correct: Reverse engineering is the best technique for analyzing a malicious binary because it focuses on understanding how a compiled executable works when you do not have the source code. Reverse engineering uses disassemblers and decompilers to translate machine instructions into assembly and higher-level pseudocode, allowing an analyst to identify capabilities (persistence, privilege escalation, C2 communication, encryption routines), indicators of compromise (domains, IPs, mutexes, registry keys), and logic (kill switches, anti-analysis checks). This is the most direct and complete approach to determine intent and functionality from a binary artifact.

Key Features / Best Practices: Effective reverse engineering typically combines static reverse engineering (strings, imports, control flow, disassembly/decompilation) with controlled dynamic analysis (sandbox execution, debugging, API tracing) to validate hypotheses. Best practices include using isolated lab environments, snapshots, non-routable networks, and tooling such as Ghidra/IDA, x64dbg/WinDbg, and Sysmon/Procmon/Wireshark. Analysts also look for packing/obfuscation and may need to unpack before meaningful reverse engineering.

Common Misconceptions: “Static analysis” is related and often part of reverse engineering, but it is broader and can be superficial (hashing, strings, metadata) without actually reconstructing program logic. “Code analysis” implies access to source code, which is usually not available for malware binaries. “Fuzzing” is primarily for finding vulnerabilities by feeding malformed inputs to a target program; it is not the primary method to understand what a malicious binary does.

Exam Tips: When the artifact is explicitly a “malicious binary,” assume no source code and prioritize reverse engineering. If the question mentions “source code,” then code analysis fits. If it mentions “behavior in a sandbox,” think dynamic analysis. If it mentions “finding crashes/bugs,” think fuzzing. For CySA+, reverse engineering is the go-to for deep capability analysis of compiled malware.
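The quick static-triage steps mentioned above (hashing for IOC sharing and string extraction) can be sketched in a few lines. The sample byte blob stands in for a suspicious binary and is purely illustrative:

```python
# Sketch of static triage: compute a hash for reputation lookups and pull
# printable strings from a binary blob, two of the techniques named above.
import hashlib
import re

def sha256_hex(data: bytes) -> str:
    """Hash suitable for IOC sharing and reputation lookups."""
    return hashlib.sha256(data).hexdigest()

def printable_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Rough equivalent of the Unix `strings` tool: runs of printable ASCII."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

# Illustrative stand-in for a malware sample (not a real binary).
sample = b"\x00\x01MZ\x90\x00http://evil.example/c2\x00\x7f"
print(sha256_hex(sample)[:16])
print(printable_strings(sample))  # ['http://evil.example/c2']
```

As the explanation notes, this kind of output is triage only; a packed or obfuscated sample would yield few useful strings until unpacked.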
A cybersecurity team lead is developing metrics to present in the weekly executive briefs. Executives are interested in knowing how long it takes to stop the spread of malware that enters the network. Which of the following metrics should the team lead include in the briefs?
Mean time between failures (MTBF) is a reliability/availability metric used to estimate the average time between inherent failures of a system or component. It is common in operations and engineering contexts (hardware, infrastructure, service uptime) rather than incident response. It does not measure how quickly a security team stops malware propagation, so it is not appropriate for the executive question asked.
Mean time to detect (MTTD) measures how long it takes to discover an incident after it begins (or after initial compromise). While detection speed is important, it does not answer the executive’s specific concern: “how long it takes to stop the spread.” You could detect quickly but still take too long to isolate affected systems, so MTTD alone is insufficient for this question.
Mean time to remediate (often aligned with MTTR in security contexts) measures how long it takes to fully fix the issue—eradicate malware, remove persistence, patch vulnerabilities, and restore systems to a secure state. Remediation typically occurs after containment and can take significantly longer. Because the question focuses on stopping spread (limiting propagation), remediation is not the best match.
Mean time to contain (MTTC) measures the average time required to limit an incident so it can no longer expand in scope—e.g., isolating endpoints, blocking IOCs, disabling accounts, segmenting networks, and stopping C2 traffic. This directly answers “how long it takes to stop the spread of malware that enters the network,” making it the most appropriate metric for executive briefs.
Core Concept: This question tests incident response metrics, specifically measurements that describe how quickly an organization can limit an incident’s impact. In malware outbreaks, the most executive-relevant timing metric for “stopping the spread” is containment—preventing lateral movement and further propagation.

Why the Answer is Correct: Mean time to contain (MTTC) measures the average time from when an incident is identified/declared (or sometimes from initial detection, depending on the organization’s definition) until the threat is contained so it can no longer spread. Containment actions include isolating infected hosts, blocking malicious indicators at network controls, disabling compromised accounts, segmenting networks, and stopping command-and-control communications. Because executives asked “how long it takes to stop the spread of malware that enters the network,” MTTC directly maps to that goal: limiting blast radius and halting propagation.

Key Features / Best Practices: MTTC is commonly tracked alongside MTTD (detect) and MTTR (remediate/recover). For executive briefs, MTTC is especially meaningful because it reflects operational readiness (EDR isolation speed, SOC triage efficiency, network segmentation maturity, playbook automation via SOAR, and escalation paths). Define MTTC clearly (start/stop timestamps), standardize severity categories, and report trends (week-over-week) plus outliers with brief root-cause notes.

Common Misconceptions: MTTR (remediate) is often confused with containment. Remediation focuses on eradication and restoring systems (patching, reimaging, removing persistence, closing root cause), which can take much longer than stopping spread. MTTD is about discovering the malware, not stopping it. MTBF is a reliability metric for equipment/process failures and is not incident-response focused.
Exam Tips: When you see wording like “stop the spread,” “limit impact,” “prevent lateral movement,” or “reduce blast radius,” think containment metrics (MTTC). When you see “find it,” think MTTD. When you see “fix it/eradicate/recover,” think MTTR/mean time to remediate. Always map the verb in the question (detect/contain/remediate) to the metric name.
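A minimal sketch of computing MTTC from incident records, assuming detection and containment timestamps are captured per incident (the field names are illustrative, not a standard schema):

```python
# Sketch: MTTC as the average of (contained_at - detected_at) across
# incidents, using the start/stop timestamp definition discussed above.
from datetime import datetime, timedelta

def mean_time_to_contain(incidents: list[dict]) -> timedelta:
    """Average containment time across a list of incident records."""
    deltas = [i["contained_at"] - i["detected_at"] for i in incidents]
    return sum(deltas, timedelta()) / len(deltas)

# Illustrative weekly data for an executive brief.
incidents = [
    {"detected_at": datetime(2024, 1, 1, 9, 0),
     "contained_at": datetime(2024, 1, 1, 13, 0)},   # 4 hours
    {"detected_at": datetime(2024, 1, 8, 10, 0),
     "contained_at": datetime(2024, 1, 8, 12, 0)},   # 2 hours
]
print(mean_time_to_contain(incidents))  # 3:00:00
```

As noted above, the start timestamp definition (detection vs. declaration) must be fixed organization-wide for week-over-week trends to be comparable.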
A company’s security team is updating a section of the reporting policy that pertains to inappropriate use of resources (e.g., an employee who installs cryptominers on workstations in the office). Besides the security team, which of the following groups should the issue be escalated to first in order to comply with industry best practices?
Help desk is primarily responsible for operational support: triage, ticketing, endpoint remediation, and user support. While they may assist by isolating a workstation, removing unauthorized software, or reimaging systems, they are not the appropriate first escalation group for updating reporting policy or ensuring the organization’s response aligns with legal/privacy and disciplinary requirements.
Law enforcement involvement is typically reserved for situations with clear criminal activity requiring external investigation, mandatory reporting, or significant impact. Escalating to law enforcement first is usually not best practice for internal misuse because it can disrupt internal fact-finding, create unnecessary exposure, and complicate evidence handling unless legal has determined it is warranted.
The legal department is the best available first escalation point among the listed options because employee misuse can create legal, regulatory, and evidentiary concerns. Legal can advise the security team on how to document the incident, preserve evidence, and proceed without violating privacy, employment, or monitoring requirements. Legal also helps determine whether the matter should remain internal, be coordinated with HR and management, or be referred externally. While HR is often involved in employee misconduct cases, it is not an option here, making Legal the strongest answer from the choices provided.
A board member is part of executive governance and oversight, not day-to-day policy drafting or incident escalation. Boards typically receive high-level risk and incident reporting after internal stakeholders (security, legal, HR, compliance) have assessed impact and response. Escalating to a board member first is inefficient and bypasses necessary legal/compliance review.
Core Concept: This question tests escalation and communication best practices for policy-driven reporting of inappropriate use of resources (an insider misuse scenario). In CySA+ terms, this sits at the intersection of reporting/communication, incident classification, and ensuring actions align with legal/regulatory obligations and evidence-handling requirements.

Why the Answer is Correct: The legal department should be the first escalation point beyond the security team when updating reporting policy for employee misuse (e.g., installing cryptominers). Industry best practice is to ensure policies, reporting language, investigative steps, monitoring/inspection notices, disciplinary procedures, and evidence collection/retention align with employment law, privacy requirements, labor agreements, and regulatory constraints. Legal also guides when to involve HR, how to preserve attorney-client privilege, and how to word acceptable use and consent-to-monitoring clauses to reduce organizational liability.

Key Features / Best Practices: Legal review commonly covers (1) acceptable use policy alignment and enforceability, (2) privacy and monitoring disclosures (e.g., consent banners, BYOD boundaries), (3) evidence preservation and chain of custody requirements, (4) thresholds for external reporting (regulators, law enforcement), and (5) coordination with HR for disciplinary actions. Framework-aligned programs (e.g., NIST incident handling guidance and common corporate governance practices) emphasize clear escalation paths and involving counsel early for insider cases to avoid mishandling evidence or violating privacy/employee rights.

Common Misconceptions: Help desk is operationally involved in remediation (reimaging, ticketing) but is not the correct first escalation for policy/reporting decisions. Law enforcement is not typically first unless there is an immediate threat to life/safety or a clear requirement to report; premature involvement can complicate internal investigations and evidence handling. A board member is too high-level and generally receives summarized reporting after legal/HR/security have assessed risk and response.

Exam Tips: For questions about “first escalation” for policy, compliance, and potential employee misconduct, think: Security/IR team identifies → Legal (and often HR) validates process and obligations → then consider external entities (law enforcement/regulators) only if required. If the scenario mentions policy updates, liability, privacy, or disciplinary language, legal is usually the best-practice next stop.
A SOC analyst identifies the following content while examining the output of a debugger command over a client-server application:
getConnection(database01,"alpha" ,"AxTv.127GdCx94GTd");
Which of the following is the most likely vulnerability in this system?
Lack of input validation refers to failing to validate or sanitize user-supplied data (type, length, format, allowlists) before processing it. While input validation weaknesses can lead to injection, logic flaws, or crashes, the snippet does not show any external input being accepted or validated. It shows a static function call with fixed parameters, so there is no direct evidence of missing validation here.
SQL injection occurs when untrusted input is incorporated into SQL statements in an unsafe way (e.g., string concatenation), allowing an attacker to alter query logic. The snippet does not show SQL query construction or user-controlled input; it shows a database connection call with a username and password literal. Without evidence of query manipulation or unsafely built SQL, SQL injection is not the most likely vulnerability indicated.
Hard-coded credential is the best match because the password ("AxTv.127GdCx94GTd") appears as a plaintext string literal in the application’s runtime/debugger output. This indicates the secret is embedded in code/binary and can be extracted through debugging, memory inspection, decompilation, or logs. It increases the risk of credential theft, reuse across systems, and makes rotation difficult, violating secure secret management best practices.
Buffer overflow involves writing more data to a memory buffer than it can hold, typically due to unsafe functions or missing bounds checks, leading to crashes or code execution. The provided content shows a normal-looking function call with string parameters and no indication of length issues, memory corruption, or unsafe copying. A debugger output alone doesn’t imply overflow unless accompanied by crash traces or overwritten memory patterns.
Core Concept: This question tests secure credential management and how vulnerabilities can be identified through debugging output, reverse engineering artifacts, or application telemetry. A common weakness is embedding secrets (passwords, API keys, connection strings) directly in code or binaries, which can be exposed via debuggers, logs, memory dumps, or decompilation.

Why the Answer is Correct: The snippet shows a function call: getConnection(database01, "alpha", "AxTv.127GdCx94GTd");. The third parameter is clearly a password (or secret) being passed as a literal string. Seeing a plaintext secret in a debugger output strongly indicates the application contains hard-coded credentials. This is a vulnerability because anyone with access to the binary, runtime memory, crash dumps, or debugging interfaces can recover the credential and reuse it to authenticate to the database/server. It also implies poor secret rotation practices: changing the password requires rebuilding/redeploying the application, and the same credential is often reused across environments.

Key Features / Best Practices: Secure design avoids embedding secrets in source code. Instead, use a secrets manager (e.g., HashiCorp Vault, AWS Secrets Manager, Azure Key Vault), OS-protected credential stores, or environment-based injection with strict access controls. Use least privilege for the database account, rotate credentials regularly, and prefer short-lived tokens or integrated authentication (Kerberos/AD) where possible. Prevent secrets from appearing in logs/debug output by disabling verbose debugging in production and using secure logging practices.

Common Misconceptions: SQL injection and input validation issues are common in client-server/database contexts, but the provided evidence is not user-controlled input being concatenated into a query; it is a static credential literal. Buffer overflow would require evidence of unsafe memory handling (e.g., strcpy into fixed buffers) or crash patterns, not a visible password string.

Exam Tips: When you see usernames/passwords/API keys embedded as string literals (in code, configs checked into repos, debugger output, or binaries), the best match is “hard-coded credentials.” On CySA+, map this to credential exposure risk, lateral movement potential, and remediation via secret management, rotation, and least-privilege service accounts.
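The environment-based injection fix described above might look like the following minimal sketch; the variable name DB_PASSWORD is an assumption for illustration, not part of the original snippet:

```python
# Sketch: load the database credential from the environment (populated by
# a secrets manager or orchestrator) instead of hard-coding a literal.
import os

def get_db_password() -> str:
    """Fetch the secret at runtime; the env var name is illustrative."""
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        # Fail closed: never fall back to an embedded default secret.
        raise RuntimeError("DB_PASSWORD not set; check secret injection")
    return password

# The vulnerable call could then become (no literal secret in the binary):
# getConnection(database01, "alpha", get_db_password())
```

Because the secret now lives outside the code, rotation no longer requires rebuilding or redeploying the application, addressing the rotation problem noted above.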
A security analyst discovers an LFI vulnerability that can be exploited to extract credentials from the underlying host. Which of the following patterns can the security analyst use to search the web server logs for evidence of exploitation of that particular vulnerability?
“/etc/shadow” is a classic LFI target on Linux because it contains password hashes and account-related credential data. In web logs, it commonly appears with traversal sequences (e.g., ../../../../etc/shadow) or URL-encoded equivalents. If the LFI is used to extract credentials from the host, searching for this string (and encoded variants) is a strong indicator of exploitation attempts.
“curl localhost” is more indicative of SSRF or command execution scenarios. SSRF payloads often try to reach internal services via localhost/127.0.0.1, and “curl” suggests the attacker is executing a command on the server (command injection/RCE) rather than including a local file through a vulnerable file path parameter. It does not directly match LFI log patterns.
“; printenv” is a common command injection pattern. The semicolon is used to chain shell commands, and printenv dumps environment variables. While environment variables can contain secrets, this payload implies the attacker can execute shell commands, which is a different vulnerability class than LFI. LFI typically manipulates file paths, not shell metacharacters.
“cat /proc/self/” also suggests command execution because “cat” is a shell command. Although /proc/self/ can be relevant to LFI (e.g., /proc/self/environ is a known LFI target), the presence of “cat” makes this option primarily aligned with command injection detection rather than LFI. A more direct LFI pattern would be “/proc/self/environ” without shell syntax.
Core concept: This question tests recognition of Local File Inclusion (LFI) exploitation indicators in web server logs. LFI occurs when an application uses user-controlled input to build a file path (e.g., include(), require(), template loaders) and fails to properly validate/normalize it. Attackers then use path traversal (../) or absolute paths to read sensitive local files. Web logs often capture the requested URI and query parameters, making them a primary source for detecting LFI attempts.

Why the answer is correct: “/etc/shadow” is a high-value Linux credential store containing password hashes (and related account data). When an analyst discovers an LFI that can extract credentials from the underlying host, a common attacker objective is to read files like /etc/passwd and /etc/shadow. Therefore, searching logs for requests containing “/etc/shadow” (often URL-encoded and combined with traversal sequences like ../../../../etc/shadow) is a strong pattern for evidence of exploitation aimed at credential extraction.

Key features / best practices: In practice, also search for variants: URL encoding (%2e%2e%2f), double encoding, mixed separators, and common LFI targets (/etc/passwd, /proc/self/environ, application config files with DB creds). Correlate with response codes, response sizes, and unusual user agents. Use WAF/IDS logs and application logs to confirm whether the file contents were actually returned. Mitigations include allow-listing template/file names, using safe APIs, canonicalization checks, and running the web service with least privilege so sensitive files are unreadable.

Common misconceptions: Some options resemble other web exploitation patterns (command injection or SSRF). However, the question is specifically about LFI used to extract credentials from the host, which aligns most directly with reading /etc/shadow.
Exam tips: For CySA+, map payload strings to vulnerability classes: LFI/RFI often includes “../”, “/etc/passwd”, “/etc/shadow”, “php://filter”, “/proc/self/environ”. Command injection often includes “;”, “&&”, “|”, and commands like id, whoami, cat. SSRF often includes localhost/127.0.0.1 and internal URLs. Choose the payload that best matches the described vulnerability and objective (credential extraction).
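A log-search sketch for the pattern above, covering the plain and single-URL-encoded variants mentioned; the pattern set is illustrative and deliberately not exhaustive (double encoding would need another decode pass):

```python
# Sketch: flag web log lines showing LFI attempts against /etc/shadow,
# including traversal sequences and single URL-encoded variants.
import re
from urllib.parse import unquote

LFI_PATTERN = re.compile(r"(\.\./|%2e%2e%2f).*etc/shadow|/etc/shadow",
                         re.IGNORECASE)

def is_lfi_attempt(log_line: str) -> bool:
    # Decode once so %2e%2e%2f-style encoding is also caught by the
    # plain-text pattern; double encoding would require a second pass.
    return bool(LFI_PATTERN.search(log_line) or
                LFI_PATTERN.search(unquote(log_line)))

logs = [
    'GET /view?file=../../../../etc/shadow HTTP/1.1',
    'GET /view?file=%2e%2e%2f%2e%2e%2fetc%2fshadow HTTP/1.1',
    'GET /view?file=report.pdf HTTP/1.1',
]
print([is_lfi_attempt(line) for line in logs])  # [True, True, False]
```

In practice the same harness would carry additional targets (/etc/passwd, /proc/self/environ) and be correlated with response codes and sizes, as described above.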
A company is in the process of implementing a vulnerability management program. Which of the following scanning methods should be implemented to minimize the risk of OT/ICS devices malfunctioning due to the vulnerability identification process?
Non-credentialed scanning typically relies on active network probing (port scans, banner grabs, service detection) without logging into the target. In OT/ICS, those probes can overwhelm fragile devices, trigger watchdog resets, or interfere with real-time communications. While it avoids authentication overhead, it does not minimize operational risk because it still sends traffic to endpoints and may use aggressive detection techniques.
Passive scanning observes existing network traffic rather than generating probes. Using a TAP/SPAN, a sensor can identify devices, protocols, and sometimes firmware/service versions by analyzing communications patterns and fingerprints. Because it does not interact directly with OT/ICS endpoints, it greatly reduces the chance of causing crashes, latency spikes, or process disruptions—making it the safest vulnerability identification approach for sensitive industrial environments.
Agent-based scanning installs software on endpoints to collect configuration and vulnerability data. In OT/ICS, agents are often not feasible due to vendor support limitations, strict change control, limited CPU/memory, and safety certification concerns. Even when possible, agents can introduce performance overhead or unexpected behavior. It can be low-noise from a network perspective, but it does not generally represent the lowest-risk method for OT devices.
Credentialed scanning authenticates to systems (e.g., SSH/WinRM/WMI/SNMP) to enumerate patches, configurations, and software more accurately. In IT, this can reduce intrusive probing, but in OT/ICS it still requires active connections and queries that may be unsupported or destabilizing. Many OT devices lack robust authentication interfaces or have brittle implementations, so credentialed scans can still cause outages and are not the safest default.
Core concept: This question tests safe vulnerability identification approaches for OT/ICS environments. OT/ICS devices (PLCs, RTUs, HMIs, safety systems) often run fragile stacks, proprietary protocols, and real-time processes where availability and safety are paramount. Active probing can crash devices, trigger fail-safe states, or disrupt deterministic communications.

Why the answer is correct: Passive scanning is the preferred method to minimize the risk of OT/ICS devices malfunctioning during vulnerability identification. Passive techniques observe network traffic (e.g., via SPAN/TAP) and infer assets, services, firmware, and potential vulnerabilities without sending probes to the devices. Because it does not initiate connections or send crafted packets, passive scanning significantly reduces the chance of causing device instability, process interruption, or safety impacts—aligning with OT best practices (safety/availability first).

Key features / best practices:
- Use network TAPs or switch SPAN ports to feed passive sensors.
- Build an asset inventory from observed traffic: IP/MAC, vendor OUI, protocol fingerprints (Modbus, DNP3, EtherNet/IP, PROFINET), and communication patterns.
- Correlate observed versions/configurations with vulnerability intelligence (CVEs, vendor advisories) rather than validating via intrusive checks.
- If active validation is required, perform it in a maintenance window, with vendor-approved tools, and ideally in a test environment that mirrors production.
- Segment OT networks and monitor at zone boundaries (Purdue model) to reduce exposure and improve visibility.

Common misconceptions: Credentialed scanning is often “safer” than non-credentialed in IT because it reduces noisy probing, but it still requires connecting to endpoints and running queries that may not be supported or may overload OT devices. Agent-based scanning can be excellent in IT, but many OT endpoints cannot run agents due to vendor restrictions, resource constraints, or certification/safety requirements. Non-credentialed scanning still typically uses active probes and can be disruptive.

Exam tips: For OT/ICS, prioritize methods that are non-intrusive: passive discovery/monitoring first. When you see wording like “minimize risk of malfunctioning” or “avoid impacting operations,” choose passive approaches over active/credentialed probing. Map this to vulnerability management domain decisions: discovery and identification must be adapted to the environment’s tolerance for scanning.
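The passive inventory-building idea can be sketched as pure post-processing of traffic records already exported by a TAP/SPAN sensor, so no packets are ever sent toward the OT devices. The record fields below are illustrative, not any particular sensor's schema:

```python
# Sketch: build a passive asset inventory by grouping observed
# protocol/port pairs per source address. Purely read-only analysis of
# records a TAP/SPAN sensor has already captured; nothing is probed.
from collections import defaultdict

def build_inventory(observations: list[dict]) -> dict:
    """Map each observed device address to the protocols/ports it uses."""
    inventory = defaultdict(set)
    for obs in observations:
        inventory[obs["src_ip"]].add((obs["protocol"], obs["dst_port"]))
    return dict(inventory)

# Illustrative sensor export: two OT devices speaking industrial protocols.
observed = [
    {"src_ip": "10.0.5.10", "protocol": "modbus", "dst_port": 502},
    {"src_ip": "10.0.5.10", "protocol": "modbus", "dst_port": 502},
    {"src_ip": "10.0.5.21", "protocol": "dnp3", "dst_port": 20000},
]
print(build_inventory(observed))
```

The resulting inventory would then be correlated offline with CVEs and vendor advisories, as the best-practice list above describes, rather than validated by intrusive checks.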
A security analyst needs to mitigate a known, exploited vulnerability related to an attack vector that embeds software through the USB interface. Which of the following should the analyst do first?
Security awareness training helps reduce risky behavior (e.g., plugging in unknown USB devices), but it is not the first step to mitigate an actively exploited vulnerability. Training takes time to develop, deliver, and measure, and users still make mistakes. On the exam, training is typically a compensating/administrative control that complements—rather than replaces—technical enforcement like port/device control.
A removable media policy is an important governance control, but writing a policy does not immediately change endpoint behavior. Without technical enforcement (GPO/MDM/EDR device control) and validation, users can still connect USB devices and exploitation can continue. Policies are usually implemented after confirming the current state and deploying controls, and they are strengthened by monitoring and enforcement mechanisms.
Checking whether USB ports are enabled is the best first step because it establishes current exposure and enables immediate risk reduction through configuration changes. If ports are enabled, the analyst can quickly implement device control (disable USB storage, restrict device classes, allowlist approved devices) via centralized management. This directly mitigates the USB-based attack vector and aligns with vulnerability management priorities for known exploited issues.
Reviewing logs to see whether the vulnerability has already been exploited is useful for detection and incident response, but it does not itself mitigate the vulnerability. If the organization is currently exposed, exploitation could continue while the analyst investigates. In “mitigate first” scenarios, prioritize containment/prevention (reduce attack surface) and then perform log review and threat hunting to assess impact and scope.
Core concept: This question tests vulnerability mitigation and prioritization when a known, exploited vulnerability is tied to a USB-based attack vector (e.g., malicious HID/BadUSB-style devices, autorun-like behaviors, or device emulation that delivers payloads). In CySA+ terms, the first action is to implement or validate an effective technical control that reduces exposure immediately, then follow with administrative controls and detection/response activities. Why the answer is correct: The analyst should first check configurations to determine whether USB ports are enabled on company assets (Option C). Before writing policy or training users, you need to understand current exposure and whether a quick configuration change can reduce risk right now. If USB ports are enabled broadly, the organization remains vulnerable today. Verifying and adjusting endpoint controls (e.g., device control, port control, GPO/MDM restrictions, EDR device policies) is a direct mitigation step aligned with vulnerability management: reduce attack surface and prevent exploitation. Key features/best practices: Common mitigations include disabling USB mass storage and/or all removable media, allowing only approved device classes/VID-PID allowlists, enforcing read-only modes, blocking HID emulation where feasible, and using endpoint management (GPO, Intune/MDM, EDR) to apply consistent controls. This is consistent with defense-in-depth: technical prevention first, then administrative reinforcement (policy/training), and continuous monitoring. Common misconceptions: Option D (review logs) is valuable, but it is more aligned with incident response/detection. If the question is explicitly “mitigate … do first,” reducing exposure takes precedence over retrospective investigation. Options A and B (training/policy) are important but slower to implement and less reliable than technical enforcement; they do not immediately stop exploitation. 
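On Windows endpoints, the check described above often comes down to the `Start` value of the USBSTOR service, which removable-media GPOs commonly toggle (3 = manual, i.e. the driver loads when a USB storage device is plugged in; 4 = disabled). A minimal sketch of how an analyst script might interpret that value — the function name and message strings are illustrative, not from any specific tool:

```python
# Registry path of the Windows USB mass-storage driver service.
# GPO/MDM-based USB-storage lockdown typically sets its Start DWORD.
USBSTOR_KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"

def usb_storage_status(start_value: int) -> str:
    """Map the USBSTOR 'Start' value to an exposure assessment.

    Windows service start types: 3 = manual (driver loads on demand,
    so USB mass storage is usable), 4 = disabled.
    """
    if start_value == 4:
        return "USB mass storage disabled"
    if start_value == 3:
        return "USB mass storage ENABLED - exposure present"
    return f"unexpected Start value {start_value} - investigate"

# On a live Windows host the value would be read with winreg, e.g.:
#   import winreg
#   with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR_KEY) as k:
#       start, _ = winreg.QueryValueEx(k, "Start")
```

Checking this value across the fleet (via GPO reporting, MDM, or EDR queries) answers the "are USB ports enabled?" question at scale, and flipping it to 4 is one of the quick configuration changes the explanation refers to.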
Exam tips: For “known, exploited vulnerability” questions, prioritize actions that (1) immediately reduce exposure (disable/patch/isolate), (2) are enforceable and measurable (configuration/controls), and (3) can be implemented quickly. Then follow with detection (log review/hunting) and administrative measures (policy/training) to sustain the control.
A security analyst must review a suspicious email to determine its legitimacy. Which of the following should be performed? (Choose two.)
Spam Confidence Level (SCL) and Bulk Complaint Level (BCL) are mail-gateway-generated scoring fields that help analysts assess how the receiving system classified the message. These values are not cryptographic proof, but they are highly relevant when determining whether the email appears legitimate or suspicious from an operational security perspective. They can indicate whether the message resembles known spam, bulk mail, or potentially malicious content based on filtering heuristics and reputation data. On certification exams, these scoring fields are commonly treated as useful evidence during suspicious-email review.
Reviewing headers from a forwarded email is not the best approach because forwarding often alters, encapsulates, or strips important transport details. The analyst should instead obtain the original message headers or inspect the message directly in the mail system. A forwarded copy may hide the true Received chain and authentication results, leading to inaccurate conclusions. Therefore, this option is weaker than reviewing original-message authentication and scoring data.
The recipient address field may help determine who was targeted, but it does not establish whether the sender is legitimate. Attackers can send to any recipient and can manipulate visible addressing fields without affecting actual delivery. This field is useful for understanding campaign scope or social engineering context, not for validating authenticity. As a result, it is not one of the best choices for legitimacy determination.
The Content-Type header can reveal whether the email contains HTML, attachments, or multipart content, which is useful for malware or phishing-content analysis. However, it does not verify the sender’s identity or prove that the message originated from authorized infrastructure. Both legitimate and malicious emails can use common Content-Type values. This makes it less relevant than authentication and anti-spam metadata for determining legitimacy.
The HELO or EHLO string from the connecting email server can provide supporting context, but it is easily spoofed and is not a primary legitimacy control. Analysts may correlate it with reverse DNS, Received headers, and IP reputation during deeper investigations, but by itself it is not as authoritative as SPF, DKIM, and DMARC. Compared with gateway scoring fields and authentication results, HELO/EHLO is a secondary indicator. Therefore, it is not one of the best two answers for this question.
SPF, DKIM, and DMARC are the primary standards used to validate whether an email was sent by authorized infrastructure and whether the message aligns with the claimed domain. SPF checks whether the sending IP is permitted to send for the domain, DKIM validates a cryptographic signature on the message, and DMARC evaluates alignment and policy handling. Reviewing these fields from the original email is one of the most reliable ways to identify spoofing, impersonation, or failed sender validation. This is a core email-legitimacy analysis task and is the strongest answer in the set.
Core concept: The question is about validating whether a suspicious email is legitimate by examining trustworthy metadata added during mail processing and domain-authentication results from the original message. The strongest indicators are the receiving system’s anti-spam assessment and the sender-authentication controls that verify whether the sending infrastructure and message align with the claimed domain. Key features include reviewing original-message metadata rather than user-visible fields, checking authentication outcomes, and using mail-gateway-added headers to assess spoofing risk. A common misconception is that visible fields like recipient address or content type prove legitimacy; they do not. Exam tip: when asked to determine legitimacy, prioritize original headers and authentication/anti-spam metadata over superficial message content.
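The header review described above can be partly automated. The sketch below parses SPF/DKIM/DMARC verdicts from an `Authentication-Results` header (RFC 8601) and the Exchange-added SCL header using only the Python standard library; the sample message is fabricated for illustration, and real gateways may name or fold these headers differently:

```python
import email
import re

# Fabricated original-message headers for illustration only.
RAW_HEADERS = """\
Authentication-Results: mx.example.com;
 spf=pass smtp.mailfrom=vendor.com;
 dkim=pass header.d=vendor.com;
 dmarc=pass header.from=vendor.com
X-MS-Exchange-Organization-SCL: 1
From: billing@vendor.com
To: analyst@example.com
Subject: Invoice
"""

def auth_verdicts(raw: str) -> dict:
    """Extract SPF/DKIM/DMARC results and the gateway SCL score."""
    msg = email.message_from_string(raw)
    results = msg.get("Authentication-Results", "")
    verdicts = {}
    for mech in ("spf", "dkim", "dmarc"):
        m = re.search(rf"{mech}=(\w+)", results)
        verdicts[mech] = m.group(1) if m else "none"
    scl = msg.get("X-MS-Exchange-Organization-SCL")
    verdicts["scl"] = int(scl) if scl is not None else None
    return verdicts
```

A message showing `spf=pass`, `dkim=pass`, `dmarc=pass`, and a low SCL is consistent with legitimacy; failures or a high SCL/BCL point the other way. Crucially, this only works on the original headers — a forwarded copy would carry the forwarder's authentication results instead.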
A company is deploying new vulnerability scanning software to assess its systems. The current network is highly segmented, and the networking team wants to minimize the number of unique firewall rules. Which of the following scanning techniques would be most efficient to achieve the objective?
Agent-based scanning runs on each system and sends results to a central console, usually using a small, consistent set of outbound ports (commonly HTTPS). This minimizes the need for numerous inter-segment firewall rules because you avoid opening inbound scanner-to-target paths across segments. It also improves assessment depth (local patch/software inventory and configuration checks) compared to purely network-based probing.
A central non-credentialed scanner requires network reachability from the scanner to every target across segmented boundaries. That typically means many firewall rules (multiple subnets, ports, and protocols) to allow probing and service discovery. Non-credentialed scans also have lower accuracy and visibility, often missing local misconfigurations and patch state, making it inefficient for segmented environments.
A cloud-based scanner performing network scans still needs connectivity into each segmented network (VPN, tunnels, inbound allowances, or distributed connectors). This can increase firewall complexity and introduces additional considerations like egress control, routing, and trust boundaries. While cloud management can simplify operations, it does not inherently reduce the number of unique firewall rules needed for scanning segmented internal systems.
Placing a scanner sensor in every segment reduces cross-segment scanning traffic, but it increases deployment overhead and still requires firewall rules for each sensor to reach hosts within the segment and to communicate back to central management. Credentialed scans also require credential distribution and secure storage. This approach can be effective, but it is less efficient than agents for minimizing unique firewall rules overall.
Core concept: This question tests vulnerability scanning architecture choices in a highly segmented network, focusing on operational efficiency and firewall rule minimization. In segmented environments, traditional network-based scanning often requires many inter-segment firewall exceptions (multiple ports, protocols, and destinations) to reach targets and accurately enumerate services. Why A is correct: Agent-based scanning is typically the most efficient approach when the goal is to minimize unique firewall rules. Agents run locally on each endpoint/server and report results back to a management console (often over a small, consistent set of outbound ports such as HTTPS/TLS). Instead of opening inbound scanning paths from scanner-to-target across many segments, you standardize on a single egress rule per segment (or even none if existing outbound web access is allowed) to the scanner/manager. This aligns well with zero trust and segmentation principles: “scan locally, report centrally.” It also improves coverage because agents can assess local configuration, installed software, missing patches, and registry/file-system indicators that are difficult to infer from unauthenticated network probes. Key features/best practices: Use mutual TLS and agent authentication, restrict agent-to-manager communication to a fixed FQDN/IP and port, and ensure least privilege for agent operations. Centralize scheduling, policy, and reporting. Validate that the agent supports authenticated checks, local package inventory, and compliance baselines. This approach also reduces scan noise and avoids triggering IDS/IPS across segments. Common misconceptions: A central non-credentialed scanner (B) seems “simple,” but it usually requires broad network reach and many firewall rules across segments, and it provides weaker findings (limited to what is externally observable). 
Deploying sensors per segment (D) can reduce cross-segment rules, but it increases infrastructure footprint and still requires credential management and potentially multiple rule sets for each sensor’s reach and management traffic. Exam tips: When you see “highly segmented” plus “minimize firewall rules,” think agent-based scanning or very limited egress-only communication. Network scanning across segments generally increases firewall complexity. Also remember that credentialed/agent-based methods typically yield higher-fidelity vulnerability and configuration results than non-credentialed scans.
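The rule-count tradeoff above can be made concrete with rough arithmetic. The numbers here are illustrative assumptions, not from the question:

```python
# Sketch: compare unique firewall rules needed by a central network
# scanner versus agent-based scanning in a segmented network.

def central_scanner_rules(num_segments: int, rule_groups_per_segment: int) -> int:
    # A central scanner needs inbound allowances into every segment,
    # typically several port/protocol rule groups per segment to probe
    # and enumerate services.
    return num_segments * rule_groups_per_segment

def agent_rules(num_segments: int) -> int:
    # Agents need only one outbound rule per segment to the management
    # console (often zero new rules if outbound HTTPS is already allowed).
    return num_segments

print(central_scanner_rules(20, 5))  # → 100
print(agent_rules(20))               # → 20
```

Even with modest assumptions (20 segments, 5 rule groups each), the central scanner requires 100 rules versus at most 20 egress rules for agents, which is the efficiency argument behind answer A.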