
CompTIA
508+ free practice questions with AI-verified answers
Powered by AI
Every CS0-003: CompTIA CySA+ answer is cross-checked by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and an in-depth analysis of every question.
The Chief Executive Officer of an organization recently heard that exploitation of new attacks in the industry was happening approximately 45 days after a patch was released. Which of the following would best protect this organization?
A mean time to remediate of 30 days directly reduces the organization’s exposure window to known vulnerabilities. Since exploitation tends to occur around 45 days after patch release, remediating in 30 days means most systems will be patched before attackers commonly weaponize and widely exploit the issue. This is a proactive vulnerability management control aligned to patch SLAs and risk reduction.
A mean time to detect of 45 days is too slow and is also the wrong focus for the scenario. Detection measures how quickly you notice an issue or attack, not how quickly you eliminate the underlying vulnerability. If exploitation begins around day 45, detecting at day 45 means you may only notice compromise attempts as they start, while still being unpatched.
A mean time to respond of 15 days improves incident handling after detection, but it does not prevent exploitation of an unpatched vulnerability. You could respond quickly once an incident is identified, yet still suffer initial compromise because the vulnerable condition remains. The question asks what would best protect the organization given a patch-to-exploit timeline, which points to remediation speed.
Third-party application testing (e.g., pen testing, SAST/DAST, code review) can find weaknesses, especially in custom or externally supplied software, but it does not address the stated problem: attackers exploiting vulnerabilities after vendor patches are released. The immediate protective measure is deploying patches faster, not testing applications, which is longer-cycle and not tied to patch release timelines.
Core concept: This question tests vulnerability management metrics and how patching speed reduces exposure to “patch-to-exploit” timelines. Attackers often reverse-engineer patches or monitor advisories and then weaponize exploits shortly after release. The organization’s risk window is the time between patch availability and patch deployment across affected assets.

Why the answer is correct: If exploitation is commonly occurring about 45 days after a patch is released, the best protection is to ensure the organization remediates (patches/mitigates) faster than that window. A mean time to remediate (MTTR) of 30 days means, on average, vulnerabilities are fixed within 30 days, 15 days before the typical exploitation point, thereby shrinking the exposure window and reducing the likelihood that systems remain vulnerable when mass exploitation begins.

Key features / best practices: Effective remediation within 30 days typically requires a mature patch management program: asset inventory and criticality ranking, vulnerability scanning and prioritization (e.g., CVSS plus exploitability and asset context), defined SLAs by severity (e.g., critical in 7–14 days, high in 30 days), change management with emergency patch lanes, automated deployment (MDM/SCCM/Intune/WSUS, etc.), validation (post-patch scanning), and compensating controls when patching is delayed (WAF rules, IPS signatures, configuration hardening, segmentation).

Common misconceptions: It’s tempting to focus on detection/response metrics (MTTD/MTTRsp) because they are incident-response oriented, but those do not prevent exploitation of known unpatched vulnerabilities; they only help you notice and react after compromise attempts begin. Third-party application testing is valuable, but it does not directly address the stated industry pattern of exploitation after vendor patch release.
Exam tips: When a question provides a “time-to-exploit after patch release,” the best defensive metric is usually remediation/patching speed (MTTR for vulnerabilities) and patch SLAs. Choose the option that reduces the vulnerability exposure window below the attacker’s typical weaponization timeline.
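The arithmetic behind the answer can be sketched as a quick check. This is an illustrative helper, not a standard metric formula; the 45-day exploit timeline and the candidate MTTR values come from the question, while the function name is ours.

```python
# Sketch: check a candidate remediation metric against the observed
# patch-to-exploit timeline. Assumption: exposure ends once remediation
# completes; names and values are illustrative.

def exposure_gap_days(time_to_exploit: int, mean_time_to_remediate: int) -> int:
    """Days of margin between patching and typical exploitation (negative = still exposed)."""
    return time_to_exploit - mean_time_to_remediate

# Scenario from the question: exploitation begins ~45 days after patch release.
print(exposure_gap_days(45, mean_time_to_remediate=30))  # 15 -> patched with margin
print(exposure_gap_days(45, mean_time_to_remediate=60))  # -15 -> exposed for 15 days
```

Any remediation metric that keeps the gap positive closes the window before the typical weaponization point, which is exactly why the 30-day MTTR option wins.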
Want to practice every question anywhere?
Download Cloud Pass for free; it includes practice exams, progress tracking, and more.
A recent zero-day vulnerability is being actively exploited, requires no user interaction or privilege escalation, and has a significant impact to confidentiality and integrity but not to availability. Which of the following CVE metrics would be most accurate for this zero-day threat?
Option A is the closest match to the scenario because it uses AV:N, AC:L, PR:N, and UI:N, which align with a remotely exploitable vulnerability that requires no user interaction and no privileges. It also indicates high confidentiality impact with C:H and an unchanged scope with S:U, both of which are reasonable based on the stem. However, the vector is not perfectly formed because 'I:K' is not a valid CVSS v3.1 integrity value, and availability is listed as A:L rather than none. Despite those flaws, it is still the best available choice because the other options contradict the exploitability conditions much more severely.
Option B is inconsistent with the stem because it requires high privileges (PR:H) and user interaction (UI:R), directly contradicting the description of no privilege requirement and no user interaction. It also contains malformed CVSS notation such as AV:K, which is not a valid Attack Vector value. Although C:H and I:H would fit the impact portion, the exploitability metrics are fundamentally wrong, making B a poor match even before considering the notation issues.
Option C is incorrect because it indicates user interaction is required, which conflicts with the explicit statement that no user interaction is needed. Its impact metrics also do not fit the scenario: confidentiality is only low, integrity is none, and availability is high, while the stem says confidentiality and integrity are significantly affected but availability is not. Even though AV:N and AC:L are plausible, the rest of the vector misrepresents the vulnerability. Therefore, C does not accurately describe the threat.
Option D is wrong because it describes a local attack vector and requires privileges and user interaction, all of which contradict the scenario's highly exploitable no-interaction, no-privilege nature. It also assigns high availability impact, which the stem explicitly rules out. While confidentiality impact is high, the rest of the vector reflects a very different type of vulnerability. As a result, D is not an accurate CVSS representation of the described zero-day.
Core concept: This question tests how to map a vulnerability description to CVSS v3.1 base metrics, especially Attack Vector, Attack Complexity, Privileges Required, User Interaction, Scope, and Confidentiality/Integrity/Availability impact. The stem clearly indicates a remotely exploitable issue requiring no privileges and no user interaction, with high impact to confidentiality and integrity but little or no impact to availability. In CVSS terms, that points toward AV:N, AC:L, PR:N, UI:N, S:U, C:H, I:H, and A:N or at least not A:H.

A key exam point is that “zero-day” and “actively exploited” are threat-context indicators and do not directly change the base vector; they affect prioritization and temporal considerations rather than intrinsic base severity.

Common misconceptions include confusing privilege escalation with Scope and assuming active exploitation changes base metrics.

Exam tip: first map exploitability conditions from the stem, then map CIA impact, and finally choose the closest available option even if the distractors contain malformed or imperfect vectors.
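The mapping above can also be checked numerically against the published CVSS v3.1 base-score equations. A minimal sketch for Scope:Unchanged vectors, with metric weights taken from the specification (math.ceil stands in for the spec's Roundup helper, which is adequate for these values):

```python
import math

# CVSS v3.1 metric weights (Scope: Unchanged case only), per the FIRST specification.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},  # PR weights for Scope: Unchanged
    "UI": {"N": 0.85, "R": 0.62},
    "C":  {"H": 0.56, "L": 0.22, "N": 0.0},
    "I":  {"H": 0.56, "L": 0.22, "N": 0.0},
    "A":  {"H": 0.56, "L": 0.22, "N": 0.0},
}

def base_score(vector: str) -> float:
    """Compute a CVSS v3.1 base score for an S:U vector like 'AV:N/AC:L/.../A:N'."""
    m = dict(part.split(":") for part in vector.split("/"))
    exploitability = 8.22 * (WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]]
                             * WEIGHTS["PR"][m["PR"]] * WEIGHTS["UI"][m["UI"]])
    iss = 1 - (1 - WEIGHTS["C"][m["C"]]) * (1 - WEIGHTS["I"][m["I"]]) * (1 - WEIGHTS["A"][m["A"]])
    impact = 6.42 * iss
    if impact <= 0:
        return 0.0
    # CVSS "roundup": round up to one decimal place.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# Vector matching the stem: network, no privileges/interaction, C:H/I:H, no availability hit.
print(base_score("AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N"))  # 9.1
```

Scoring the stem's ideal vector at 9.1 (Critical) shows why every distractor that weakens the exploitability metrics lands far from the described threat.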
Which of the following tools would work best to prevent the exposure of PII outside of an organization?
PAM (Privileged Access Management) focuses on securing privileged accounts (admins, service accounts) through vaulting, rotation, just-in-time access, session recording, and least privilege. It reduces the risk of privileged misuse and lateral movement, but it is not primarily a content-aware control. PAM won’t reliably stop a user from emailing or uploading PII externally unless combined with other data controls.
IDS (Intrusion Detection System) monitors traffic or host activity to detect malicious behavior and policy violations. While it can alert on suspicious outbound connections or known exfiltration patterns, it typically lacks deep, business-context content inspection for PII and is often not deployed inline to block. IDS is more aligned with detection than prevention of PII exposure.
PKI (Public Key Infrastructure) provides certificates, authentication, digital signatures, and encryption (e.g., TLS, S/MIME). It can protect data in transit and help ensure only intended recipients can read encrypted messages, but it does not prevent users from sending PII to unauthorized external recipients. PKI is about trust and confidentiality, not policy enforcement on sensitive content.
DLP (Data Loss Prevention) is purpose-built to prevent sensitive data such as PII from leaving the organization. It inspects content and context across endpoints, networks, and cloud services, then enforces policies (block, quarantine, encrypt, redact, or alert). DLP directly addresses accidental leakage and intentional exfiltration by controlling outbound channels like email, web uploads, and removable media.
Core Concept: This question tests data loss prevention (DLP) controls used to stop sensitive data (like PII) from leaving an organization via email, web uploads, cloud apps, removable media, printing, or other exfiltration paths. In CySA+ terms, it’s about detective/preventive security controls that enforce data handling policies at endpoints, networks, and cloud services.

Why the Answer is Correct: DLP is specifically designed to prevent exposure of sensitive data outside the organization. DLP solutions identify PII using content inspection (pattern matching for SSNs, national IDs, credit card numbers), contextual signals (file labels, classification tags, destination domains), and sometimes exact data matching (EDM) against known sensitive datasets. Once detected, DLP can block, quarantine, encrypt, redact, or require user justification/manager approval before transmission, directly addressing “prevent exposure.”

Key Features / Best Practices: Effective DLP typically combines (1) data classification and labeling, (2) policies mapped to regulatory requirements (e.g., GDPR/CCPA/GLBA/HIPAA depending on context), (3) coverage across channels (endpoint DLP, network DLP, and cloud/CASB-integrated DLP), and (4) tuned rules to reduce false positives. Common controls include blocking uploads to personal webmail, preventing copy to USB, enforcing encryption when emailing PII externally, and alerting SOC workflows for attempted exfiltration.

Common Misconceptions: IDS can detect suspicious traffic but generally does not understand business data sensitivity well enough to reliably prevent PII leakage, and it is often not in-line to block. PKI supports encryption and identity (certificates) but does not enforce content-aware policies to stop PII from being sent. PAM restricts privileged access but does not directly prevent non-privileged users from accidentally or intentionally sending PII externally.
Exam Tips: When the goal is “prevent sensitive data/PII from leaving,” think DLP first. If the goal is “detect intrusions,” think IDS/IPS. If the goal is “encrypt/establish trust,” think PKI. If the goal is “control/administer privileged accounts,” think PAM. Also note that DLP can be preventive (blocking) and detective (alerting), which is why it fits exfiltration/PII exposure scenarios well.
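The content-inspection step described above can be sketched with simple pattern matching plus a checksum to cut false positives. Real DLP engines layer on context, labels, and exact data matching; the patterns and function names here are illustrative only.

```python
import re

# Illustrative PII detectors; production DLP uses far richer rules plus context.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def luhn_ok(number: str) -> bool:
    """Luhn checksum to reduce false positives on candidate card numbers."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                for i, d in enumerate(digits))
    return total % 10 == 0

def scan_outbound(text: str) -> list[str]:
    """Return PII categories found; a DLP policy would block/quarantine on a hit."""
    hits = []
    if PATTERNS["ssn"].search(text):
        hits.append("ssn")
    for candidate in PATTERNS["credit_card"].findall(text):
        if luhn_ok(candidate):
            hits.append("credit_card")
            break
    return hits

print(scan_outbound("card 4111 1111 1111 1111"))  # ['credit_card']
```

A non-empty result would trigger the enforcement actions the explanation lists (block, quarantine, encrypt, redact, or alert), which is the part IDS, PKI, and PAM do not provide.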
Which of the following items should be included in a vulnerability scan report? (Choose two.)
Lessons learned is typically part of an after-action report or post-incident review in incident response. It captures what went well, what failed, and process improvements. A vulnerability scan report focuses on discovered vulnerabilities and their context, not retrospective analysis of an event. While a vulnerability program may later produce lessons learned, it is not a standard scan report component.
A service-level agreement (SLA) defines expected service performance or response/remediation timelines (e.g., critical vulnerabilities remediated within X days). SLAs are governance artifacts and may be referenced by a vulnerability management policy, but they are not usually included as a required element inside the scan report output itself, which is primarily findings and prioritization data.
A playbook is an incident response or security operations guide that outlines step-by-step actions for specific scenarios (e.g., ransomware, phishing, web shell). Vulnerability scanning is a proactive assessment activity; its report documents vulnerabilities found. While playbooks may exist for remediation workflows, they are not a standard inclusion in the scan report.
Affected hosts are a core requirement of a vulnerability scan report because they identify exactly which systems are impacted. Reports typically list IPs/hostnames, asset IDs, and sometimes tags (environment, owner, criticality). This enables assignment, remediation, and verification. Without affected hosts, the findings are not actionable and cannot be tracked to closure.
A risk score (often CVSS or a tool-specific priority rating) is commonly included to support triage and remediation prioritization. It helps teams focus on the highest-impact issues first, especially when combined with exploitability and exposure context. CySA+ expects you to recognize that scan reports should include severity/risk information to drive decision-making.
An education plan is a training and awareness artifact used to improve staff skills or user behavior over time. It is not part of the standard output of a vulnerability scan. While scan trends might inform training needs (e.g., recurring misconfigurations), the scan report itself should focus on findings, affected assets, and prioritization/remediation details.
Core concept: A vulnerability scan report is a vulnerability management deliverable that documents what was scanned, what was found, and how severe the findings are so teams can prioritize remediation. Typical scan outputs map findings to assets (hosts) and provide severity/priority indicators (often a risk score such as CVSS, vendor severity, or an internal risk rating).

Why the answer is correct: D (Affected hosts) is essential because vulnerabilities are only actionable when tied to specific assets. A report must identify which IPs/hostnames/instances are impacted, often including asset identifiers, OS/app context, and evidence (ports, banners, plugin IDs). Without affected hosts, remediation cannot be assigned, validated, or tracked. E (Risk score) is also a standard component because organizations need prioritization. Risk scoring helps determine which vulnerabilities to address first based on severity and context. Many tools include CVSS base/temporal scores, exploitability indicators, known-exploited status, and/or a consolidated “risk” or “priority” score.

Key features / best practices: A strong vulnerability scan report typically includes: scope and time of scan, scanner/tool version, scan policy, credentialed vs non-credentialed status, affected hosts, vulnerability details (CVE, plugin ID, description), evidence, risk/severity score, and remediation guidance. In mature programs, findings are enriched with asset criticality and exposure (internet-facing vs internal) to refine prioritization. Reports should be reproducible and support validation (re-scan) and tracking (ticket references).

Common misconceptions: Items like lessons learned, playbooks, and education plans are associated with incident response or program improvement rather than the core output of a vulnerability scan. An SLA may exist for remediation timelines, but it is usually a governance document or policy reference, not a required element of the scan report itself.
Exam tips: For CySA+, distinguish between vulnerability management artifacts (scan results, affected assets, severity/risk, remediation recommendations) and incident response artifacts (playbooks, lessons learned, after-action reports). When asked what “should be included” in a scan report, pick items that make findings actionable and prioritizable: asset/host mapping and severity/risk scoring are the most universal.
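A minimal shape for the "actionable" report record described above might look like the following. Field names are illustrative, loosely modeled on common scanner exports; the CVE IDs and hosts are fabricated examples.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    description: str
    affected_hosts: list[str]  # the assets that make the finding actionable
    risk_score: float          # e.g., CVSS base score or a tool-specific priority
    remediation: str = ""

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Sort findings so the highest risk scores are remediated first."""
    return sorted(findings, key=lambda f: f.risk_score, reverse=True)

report = [
    Finding("CVE-2024-0001", "Example info leak", ["db02"], 5.3),
    Finding("CVE-2024-0002", "Example RCE", ["10.0.0.5", "web01"], 9.8),
]
print([f.cve for f in prioritize(report)])  # highest-risk CVE first
```

Note that the two exam answers map directly to the two required fields: affected_hosts makes a finding assignable, and risk_score makes the list sortable for triage.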
A security analyst recently joined the team and is trying to identify which scripting language a production script uses in order to determine whether it is malicious. Given the following script:
foreach ($user in Get-Content .\this.txt)
{
    Get-ADUser $user -Properties primaryGroupID | Select-Object primaryGroupID
    Add-ADGroupMember "Domain Users" -Members $user
    Set-ADUser $user -Replace @{primaryGroupID=513}
}
Which of the following scripting languages was used in the script?
PowerShell is correct because the script uses native PowerShell syntax throughout. The foreach ($user in Get-Content .\this.txt) structure, $user variable notation, and the pipeline to Select-Object are all characteristic of PowerShell. In addition, Get-ADUser, Add-ADGroupMember, and Set-ADUser are PowerShell cmdlets from the ActiveDirectory module, which strongly confirms the language. The hashtable syntax used with -Replace is another clear PowerShell-specific feature.
Ruby is incorrect. Ruby loops and variables look different (e.g., users.each do |user| ... end) and Ruby does not use Verb-Noun cmdlets or pipelines. While Ruby could interact with AD via libraries or system calls, the native syntax and the presence of Get-ADUser/Add-ADGroupMember strongly indicate PowerShell rather than Ruby.
Python is incorrect. Python would typically use “for user in ...:” with indentation and would rely on libraries (e.g., ldap3) or subprocess calls to manage AD. The script’s pipeline operator “|”, cmdlet naming, and hashtable replacement syntax are not Python constructs, making PowerShell the clear match.
Shell script is incorrect in the common sense of bash/sh. Bash uses constructs like “for user in $(cat file); do ...; done” and does not have PowerShell cmdlets or object pipelines. Although PowerShell is sometimes called a “shell,” the option “Shell script” here refers to Unix-like shell scripting, not PowerShell.
Core concept: This question tests recognition of PowerShell syntax and cmdlets. The script uses PowerShell-specific constructs such as $-prefixed variables, the foreach loop syntax, the pipeline operator, and Verb-Noun cmdlets. The presence of Active Directory cmdlets like Get-ADUser, Add-ADGroupMember, and Set-ADUser makes PowerShell the clear identification.

Why correct: PowerShell is the only option that matches both the syntax and the command structure shown. The script reads values from a text file, iterates through them, queries AD user properties, modifies group membership, and updates an AD attribute using native PowerShell cmdlets. These are standard indicators of a PowerShell administrative script, regardless of whether the script is benign or suspicious.

Key features: PowerShell uses object-oriented pipelines, so commands like Select-Object operate on returned objects rather than plain text. The hashtable syntax @{primaryGroupID=513} used with -Replace is also a strong PowerShell indicator. AD cmdlets are commonly available through the ActiveDirectory module in Windows environments.

Common misconceptions: Some learners confuse PowerShell with generic shell scripting because both automate tasks from a command line. However, traditional shell scripts do not use Verb-Noun cmdlets, object pipelines, or AD-specific commands like Get-ADUser. Python and Ruby also have very different loop and variable syntax.

Exam tips: Look for PowerShell fingerprints such as cmdlets named in Verb-Noun format, variables beginning with $, pipelines using |, and hashtables using @{ }. If you see Windows administration or AD management commands written this way, PowerShell is usually the correct answer.
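The fingerprints listed above (Verb-Noun cmdlets, $ variables, pipelines, @{ } hashtables) can be sketched as a simple heuristic check. The regexes and verb list here are illustrative, not an exhaustive PowerShell grammar:

```python
import re

# Illustrative PowerShell fingerprints drawn from the explanation above.
FINGERPRINTS = {
    "verb_noun_cmdlet": re.compile(r"\b(?:Get|Set|Add|Remove|New|Select)-[A-Z]\w+"),
    "dollar_variable": re.compile(r"\$\w+"),
    "pipeline": re.compile(r"\|"),
    "hashtable": re.compile(r"@\{[^}]*\}"),
}

def powershell_fingerprints(script: str) -> set[str]:
    """Return which PowerShell-specific constructs appear in the text."""
    return {name for name, rx in FINGERPRINTS.items() if rx.search(script)}

print(powershell_fingerprints('Get-ADUser $user | Select-Object name'))
print(powershell_fingerprints("for user in users:"))  # set() -> not PowerShell
```

A Python or bash loop trips none of these patterns, which mirrors how an analyst rules out the other answer choices at a glance.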
A company's user accounts have been compromised. Users are also reporting that the company's internal portal is sometimes only accessible through HTTP and at other times through HTTPS. Which of the following most likely describes the observed activity?
An SSL certificate or port 443 outage would typically make HTTPS consistently unavailable or generate clear browser errors (certificate expired, name mismatch, untrusted CA, TLS handshake failures). It does not inherently cause users to sometimes land on HTTP and sometimes on HTTPS unless there is a separate redirection misconfiguration. It also doesn’t strongly explain credential compromise compared to an on-path downgrade/stripping scenario.
An on-path attacker can force HTTP by performing SSL stripping/downgrade, intercepting redirects, or manipulating responses so the browser never upgrades to HTTPS. This enables credential and session theft, directly aligning with compromised accounts. The intermittent nature fits an attacker who is only on-path for some users/some times (e.g., ARP spoofing, rogue AP, compromised switch port, or malicious internal proxy).
Web servers generally do not forward users from HTTPS to HTTP because of load; they scale out, shed load, or fail. Redirecting to HTTP would be a major security anti-pattern and would likely be consistent if configured. While high TLS load can cause timeouts or handshake failures, it doesn’t naturally produce intermittent protocol switching and doesn’t directly explain account compromise as well as an on-path attack does.
BGP governs inter-domain routing between autonomous systems and is not a typical mechanism for causing an internal portal to alternate between HTTP and HTTPS. Internal routers usually use IGPs (OSPF/EIGRP/IS-IS) or static routes, and even if routing changed, it would affect reachability/latency rather than selectively downgrading application-layer encryption. This option doesn’t match the symptoms.
Core Concept: This scenario tests understanding of on-path (man-in-the-middle) attacks and SSL/TLS downgrade/stripping. When users intermittently reach a site over HTTP instead of HTTPS, it often indicates an attacker is manipulating traffic to remove or bypass encryption, enabling credential theft.

Why the Answer is Correct: Users report the internal portal is sometimes only accessible via HTTP and other times via HTTPS, and user accounts have been compromised. This combination strongly aligns with an on-path attacker performing SSL stripping or a downgrade attack. In SSL stripping, the attacker intercepts a user’s initial HTTP request (or redirects), prevents the upgrade to HTTPS, and proxies traffic so the user stays on HTTP while the attacker may communicate with the server over HTTPS. The user sees an unencrypted session, and credentials/cookies can be captured. The “someone with internal access” clue fits common enterprise realities: a compromised internal host, rogue AP, ARP spoofing, or malicious insider can position themselves on-path.

Key Features / Best Practices: To prevent this, enforce HTTPS everywhere: implement HSTS (HTTP Strict Transport Security), redirect HTTP to HTTPS at the server/load balancer, disable insecure ciphers/protocols, and ensure cookies are marked Secure and HttpOnly. Use network protections like DHCP snooping, Dynamic ARP Inspection, port security, and monitoring for ARP poisoning. From an IR perspective, validate portal configuration, inspect proxy/WAF/load balancer logs, and hunt for indicators of on-path behavior (duplicate IP/MAC, ARP cache anomalies, unexpected gateway MAC changes).

Common Misconceptions: A certificate/port 443 issue (Option A) could cause consistent HTTPS failures, but it doesn’t naturally explain intermittent HTTP availability paired with account compromise; users would more likely see certificate warnings or a consistent inability to negotiate TLS. Capacity issues (Option C) are not a normal design pattern: servers don’t “forward to port 80” due to TLS load; they scale or fail, and this would not directly explain credential compromise. BGP issues (Option D) are irrelevant for an internal portal’s HTTP/HTTPS switching and would more likely cause routing outages, not protocol downgrades.

Exam Tips: When you see “sometimes HTTP, sometimes HTTPS” plus credential compromise, think downgrade/SSL stripping and on-path attacks. Look for clues about internal network positioning (ARP spoofing, rogue Wi-Fi, malicious proxy). The best answer typically ties both symptoms together: loss of encryption leading to stolen credentials.
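Two of the defenses named above, HTTP-to-HTTPS redirection and HSTS, can be sketched as server-side response checks. The header names are standard; the portal hostname is fabricated, and real verification would inspect live responses rather than dictionaries.

```python
# Sketch of the two server-side properties that defeat SSL stripping
# for returning users; header names are standard, the host is fabricated.

def redirects_to_https(status: int, headers: dict[str, str]) -> bool:
    """The plain-HTTP listener should answer only with a redirect to HTTPS."""
    location = {k.lower(): v for k, v in headers.items()}.get("location", "")
    return status in (301, 307, 308) and location.startswith("https://")

def sends_hsts(headers: dict[str, str]) -> bool:
    """The HTTPS response should carry HSTS (browsers ignore HSTS sent over plain HTTP)."""
    return "strict-transport-security" in {k.lower() for k in headers}

print(redirects_to_https(301, {"Location": "https://portal.example.internal/"}))  # True
print(sends_hsts({"Strict-Transport-Security": "max-age=31536000; includeSubDomains"}))  # True
print(redirects_to_https(200, {"Content-Type": "text/html"}))  # False: possible stripping
```

Once a browser has cached the HSTS policy, it refuses plain-HTTP connections to the host outright, which removes the initial HTTP request that SSL stripping depends on.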
A security analyst is tasked with prioritizing vulnerabilities for remediation. The relevant company security policies are shown below: Security Policy 1006: Vulnerability Management
THOR.HAMMER is highly exploitable (AV:N/AC:L/PR:N/UI:N) and has High availability impact (A:H) but no confidentiality or integrity impact (C:N/I:N). It is also on an internal system, which the policy deprioritizes relative to external systems. Additionally, the policy states confidentiality is prioritized over availability when a choice must be made, so this ranks below confidentiality-impacting issues like B/D.
CAP.SHIELD is remotely exploitable with no privileges or user interaction required and has High confidentiality impact (C:H) with no integrity/availability impact. This aligns with the policy’s preference for confidentiality over availability. It is also on an external/publicly available system, which the policy prioritizes over internal assets. With identical exploitability to the other options, its confidentiality impact plus external exposure makes it the top priority.
LOKI.DAGGER is externally exposed and highly exploitable, which makes it important. However, its impact is only High availability (A:H) with no confidentiality/integrity impact (C:N/I:N). The policy explicitly prioritizes confidentiality over availability when choosing between them. Therefore, despite being external, it is lower priority than an external vulnerability with High confidentiality impact (B).
THANOS.GAUNTLET has the same strong exploitability and High confidentiality impact as option B, but it is on an internal system. The policy states publicly available (external) systems and services are prioritized over internal systems. Therefore, D would be a high priority, but it is second to B because external exposure is the tie-breaker when severity characteristics are otherwise equivalent.
Core concept: This question tests policy-driven vulnerability prioritization using CVSS v3.1 Base metrics (Exploitability + Impact) and organizational risk rules. CVSS Base vectors describe technical severity; the policy then adds business prioritization: (1) use Base metrics, (2) prefer confidentiality over availability when forced to choose, and (3) prioritize externally/publicly exposed systems over internal ones.

Why the answer is correct: Options B and D have the same CVSS vector: AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N. That indicates a remotely exploitable issue (Network), low complexity, no privileges, no user interaction, with High confidentiality impact and no integrity/availability impact. Per policy item #2, confidentiality-impacting issues outrank availability-only issues when a choice is required. Between B and D, policy item #3 breaks the tie: patch publicly available (external) systems before internal systems. Therefore CAP.SHIELD (B) is the highest priority.

Key features/best practices: CVSS Base scoring emphasizes exploitability (AV, AC, PR, UI, S) and impact (C, I, A). Here, all options are equally exploitable (AV:N, AC:L, PR:N, UI:N, S:U), so impact and exposure drive prioritization. High confidentiality impact on an internet-facing asset typically maps to data disclosure, credential leakage, or sensitive information exposure, often a rapid escalation path to broader compromise. Many programs also align this with NIST SP 800-40 (patch management) and NIST CSF “Protect”/“Respond” functions, where internet-exposed, high-impact vulnerabilities are prioritized.

Common misconceptions: Some may pick the highest availability impact (A or C) assuming outages are most urgent. However, the policy explicitly states confidentiality is prioritized over availability when choosing. Others may pick C because it is external, but it only affects availability (A:H) and not confidentiality.
Exam tips: When a question provides a policy, treat it as the primary decision rule—even over your personal intuition. Compare (1) exposure (external vs internal), (2) impact category emphasized by policy (C over A), then (3) CVSS exploitability/impact details. If vectors are identical, use business context/policy to break ties.
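The tie-breaking logic above can be expressed as a sort key: confidentiality impact outranks availability impact, and external exposure breaks remaining ties. The hostnames, exposure flags, and C/A impacts come from the question; the numeric encoding of the policy is our illustration.

```python
# Each entry: (name, external?, confidentiality impact, availability impact) per the stem.
vulns = [
    ("THOR.HAMMER",     False, "N", "H"),
    ("CAP.SHIELD",      True,  "H", "N"),
    ("LOKI.DAGGER",     True,  "N", "H"),
    ("THANOS.GAUNTLET", False, "H", "N"),
]

IMPACT = {"H": 2, "L": 1, "N": 0}

def policy_priority(v):
    name, external, conf, avail = v
    # Policy rule 2: confidentiality outranks availability; rule 3: external first.
    return (IMPACT[conf], external, IMPACT[avail])

ranked = sorted(vulns, key=policy_priority, reverse=True)
print([name for name, *_ in ranked])  # CAP.SHIELD first
```

Because all four vectors share identical exploitability metrics, the sort key never needs them; the policy's two business rules fully determine the order.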
A security analyst received a malicious binary file to analyze. Which of the following is the best technique to perform the analysis?
Code analysis generally refers to reviewing source code (manual review or SAST) to find malicious logic, insecure functions, or vulnerabilities. For a received malicious binary, the analyst typically does not have the original source code, making traditional code analysis impractical. While decompiled output can resemble code, that activity is better categorized as reverse engineering rather than straightforward code review.
Static analysis examines an artifact without executing it (hashes, strings, headers, imports, entropy, signatures). It is useful for quick triage and IOC extraction, but by itself may not reveal full intent, especially if the binary is packed, obfuscated, or uses runtime-resolved APIs. Static analysis is often a component of reverse engineering, but it is not as complete a technique for understanding a malicious binary’s logic.
Reverse engineering is the process of analyzing a compiled binary to understand its internal logic and behavior without source code. Using disassembly/decompilation and debugging, an analyst can identify functionality (persistence, C2, credential theft), extract IOCs, and understand anti-analysis techniques. For a malicious binary file, reverse engineering is the most appropriate and comprehensive technique to determine what the malware does and how to detect/contain it.
Fuzzing is a testing technique that sends malformed or unexpected inputs to a program to trigger crashes or anomalous behavior, commonly used to discover vulnerabilities. It is not primarily used to analyze what a malicious binary does; instead, it helps find weaknesses in software. While you could fuzz a malware sample’s parser to learn about it indirectly, that is not the best or most direct technique for malware binary analysis.
Core Concept: This question tests malware analysis techniques, specifically how to analyze a compiled malicious binary. In CySA+ terms, this falls under incident response activities such as triage, analysis, and containment planning. The key idea is choosing the technique that best fits a binary (machine code) rather than source code. Why the Answer is Correct: Reverse engineering is the best technique for analyzing a malicious binary because it focuses on understanding how a compiled executable works when you do not have the source code. Reverse engineering uses disassemblers and decompilers to translate machine instructions into assembly and higher-level pseudocode, allowing an analyst to identify capabilities (persistence, privilege escalation, C2 communication, encryption routines), indicators of compromise (domains, IPs, mutexes, registry keys), and logic (kill switches, anti-analysis checks). This is the most direct and complete approach to determine intent and functionality from a binary artifact. Key Features / Best Practices: Effective reverse engineering typically combines static reverse engineering (strings, imports, control flow, disassembly/decompilation) with controlled dynamic analysis (sandbox execution, debugging, API tracing) to validate hypotheses. Best practices include using isolated lab environments, snapshots, non-routable networks, and tooling such as Ghidra/IDA, x64dbg/WinDbg, and Sysmon/Procmon/Wireshark. Analysts also look for packing/obfuscation and may need to unpack before meaningful reverse engineering. Common Misconceptions: “Static analysis” is related and often part of reverse engineering, but it is broader and can be superficial (hashing, strings, metadata) without actually reconstructing program logic. “Code analysis” implies access to source code, which is usually not available for malware binaries. 
“Fuzzing” is primarily for finding vulnerabilities by feeding malformed inputs to a target program; it is not the primary method to understand what a malicious binary does. Exam Tips: When the artifact is explicitly a “malicious binary,” assume no source code and prioritize reverse engineering. If the question mentions “source code,” then code analysis fits. If it mentions “behavior in a sandbox,” think dynamic analysis. If it mentions “finding crashes/bugs,” think fuzzing. For CySA+, reverse engineering is the go-to for deep capability analysis of compiled malware.
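As a concrete starting point, the first static steps mentioned above (hashing a sample and pulling printable strings before any disassembly) can be sketched in a few lines of Python; the file paths and the minimum string length are illustrative choices, not part of any specific tool:

```python
import hashlib
import re

def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a sample in chunks so large binaries never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def extract_strings(data: bytes, min_len: int = 6) -> list[str]:
    """Pull printable ASCII runs out of binary data, like the Unix `strings`
    tool — a quick way to spot candidate IOCs (URLs, mutexes, registry keys)."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]
```

Findings from a pass like this (suspicious domains, packer names, missing imports) then guide where to focus the actual disassembly in Ghidra/IDA.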
A user downloads software that contains malware onto a computer that eventually infects numerous other systems. Which of the following has the user become?
Hacktivists are threat actors motivated by political or social causes who intentionally conduct attacks (e.g., defacement, DDoS, data leaks) to promote an agenda. In this scenario, there is no indication of ideology or deliberate activism—only a user downloading infected software. Therefore, “hacktivist” does not fit the described behavior or motivation.
An advanced persistent threat (APT) refers to a highly capable adversary (often state-sponsored or well-funded) that performs targeted, stealthy, long-term intrusion with persistence and careful operational security. While malware spreading to many systems can happen during APT activity, the scenario focuses on a user accidentally introducing malware, not a sophisticated, persistent external campaign.
An insider threat includes any risk originating from within the organization’s trusted boundary—employees, contractors, or partners—whether malicious, negligent, or compromised. Here, the user’s action (downloading malware) made them the internal entry point that enabled infection of other systems. Even without malicious intent, they are an unintentional/negligent insider threat.
A script kiddie is typically an external attacker with limited skills who uses existing scripts, exploit kits, or tools to compromise systems. The scenario does not describe the user actively attacking others or using hacking tools; it describes accidental installation of malware. Thus, “script kiddie” is not the correct classification for the user’s role.
Core Concept: This question tests understanding of threat actor categories, specifically how an otherwise legitimate user can become a security risk to the organization. In CySA+ terms, an “insider threat” includes not only malicious employees/contractors but also negligent or compromised insiders whose actions lead to security incidents.

Why the Answer is Correct: The user downloaded software containing malware, and that malware later spread to other systems. The key detail is that the initial action came from a trusted internal user account/device. Even if the user had no intent to cause harm, they became the initial infection vector from inside the organization. That fits the definition of an insider threat (often called an “unintentional” or “negligent” insider). From a defender’s perspective, the risk is driven by the user’s authorized access and proximity to internal resources, which can enable rapid lateral movement once malware executes.

Key Features / Best Practices: To reduce this risk, security operations commonly implement: least privilege and application allowlisting; endpoint protection/EDR with behavioral blocking; web/DNS filtering; email and download sandboxing; user awareness training; and network segmentation to limit propagation. Monitoring controls include SIEM correlation for unusual process execution, new persistence mechanisms, suspicious outbound connections, and lateral movement indicators (e.g., abnormal SMB/RDP/WinRM usage).

Common Misconceptions: Learners sometimes pick “script kiddie” because malware was involved, but script kiddies are attackers who use prebuilt tools to hack others—not ordinary users who accidentally install malware. “APT” is also tempting because the infection spread widely, but APT implies a well-resourced, targeted, long-term campaign by an external adversary. “Hacktivist” implies ideological motivation and deliberate action.

Exam Tips: On CySA+, classify the actor by intent and role.
If the person is a legitimate user whose actions (malicious or accidental) cause compromise, think “insider threat.” If the scenario describes a targeted, stealthy, long-duration campaign, think “APT.” If it describes an attacker using off-the-shelf tools with limited sophistication, think “script kiddie.”
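The monitoring controls mentioned above (flagging abnormal SMB/RDP/WinRM usage) can be approximated with a toy correlation rule; real SIEM rules baseline per host and per user, so the flat threshold and the (source, destination, port) event format here are simplifying assumptions:

```python
from collections import defaultdict

def flag_lateral_movement(events, max_targets=5):
    """Flag sources that open SMB/RDP/WinRM sessions to an unusual number of
    distinct hosts — a minimal stand-in for a SIEM lateral-movement rule.
    `events` is an iterable of (src_host, dst_host, dst_port) tuples."""
    LATERAL_PORTS = {445: "SMB", 3389: "RDP", 5985: "WinRM"}
    targets = defaultdict(set)
    for src, dst, port in events:
        if port in LATERAL_PORTS:
            targets[src].add(dst)
    # Report only sources whose fan-out exceeds the allowed baseline.
    return {src: sorted(hosts) for src, hosts in targets.items()
            if len(hosts) > max_targets}
```

A workstation suddenly touching SMB on six servers would be returned for analyst review, while normal two-host admin activity would not.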
During an extended holiday break, a company suffered a security incident. This information was properly relayed to appropriate personnel in a timely manner and the server was up to date and configured with appropriate auditing and logging. The Chief Information Security Officer wants to find out precisely what happened. Which of the following actions should the analyst take first?
Cloning the virtual server is the best first step for a precise investigation because it preserves evidence and enables forensic analysis without altering the original system. In a VM context, cloning/snapshotting supports repeatable analysis, hashing for integrity, and maintaining chain of custody. It also allows multiple analysts to work in parallel while keeping the original instance as pristine as possible.
Logging into the affected server to review logs seems efficient, but it is not the best first action in a forensic-minded incident. Interactive access changes the system state (new authentication events, file access timestamps, potential log rotation) and can overwrite artifacts. Best practice is to acquire a forensic copy first, then analyze logs and artifacts from the clone/image.
Restoring from a last known-good backup is a recovery/continuity action, not an investigation-first action. It can overwrite or remove evidence needed to determine initial access, persistence, lateral movement, and data impact. Unless the question prioritizes immediate service restoration over investigation, restoration should occur after evidence preservation and scoping.
Shutting down immediately can destroy volatile evidence (RAM contents, running processes, active network connections) and may also trigger disk consistency operations on reboot that alter artifacts. While isolation/containment is often necessary, an abrupt shutdown is typically not the first step when the goal is to determine exactly what happened; preserve evidence via imaging/cloning first.
Core concept: This question tests incident response fundamentals—specifically evidence preservation and forensic acquisition prior to analysis. In CySA+ terms, the first priority after notification/triage is to preserve volatile and non-volatile evidence while maintaining chain of custody and minimizing contamination of the system.

Why the answer is correct: Cloning the virtual server for forensic analysis creates a point-in-time copy (ideally a snapshot plus full disk clone, depending on platform and requirements) that allows investigators to examine artifacts without altering the original evidence. Logging into the affected server and “starting to look” changes file access times, creates new log entries, may rotate logs, and can overwrite valuable artifacts. A clone supports repeatable analysis, integrity verification (hashing), and parallel work by multiple analysts, which is critical when the CISO wants to know precisely what happened.

Key features/best practices: In virtual environments, a forensic workflow commonly includes isolating the VM (network containment as appropriate), capturing volatile data when feasible (memory, running processes, network connections), and then acquiring images (VM disk files, snapshots, hypervisor logs). The clone/image should be hashed (e.g., SHA-256) and stored securely with documented chain of custody. Analysis is performed on the copy using forensic tools, preserving the original for potential legal/regulatory needs.

Common misconceptions: Many responders jump straight to log review on the live host because logs are available and the system is “properly audited.” However, live interaction is evidence contamination. Another misconception is to restore from backup or shut down immediately; both can destroy volatile evidence and complicate root-cause determination.
Exam tips: When a question emphasizes “precisely what happened” and mentions good logging/auditing, think “forensics and evidence handling.” The first action is usually preserve/acquire evidence (image/clone/snapshot) before performing detailed analysis. Avoid choices that modify the system (interactive logins), destroy evidence (shutdown), or prioritize recovery over investigation (restore) unless the scenario explicitly states life/safety or critical availability requirements.
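To make the integrity and chain-of-custody steps concrete, here is a minimal Python sketch that hashes an acquired image and records a custody entry; SHA-256 follows the text above, while the record fields and function names are illustrative, not from any forensic toolkit:

```python
import hashlib
from datetime import datetime, timezone

def hash_image(path: str) -> str:
    """SHA-256 of a disk image, read in 1 MiB chunks to handle large files."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(1 << 20):
            h.update(chunk)
    return h.hexdigest()

def custody_entry(image_path: str, analyst: str, action: str) -> dict:
    """One chain-of-custody record: who touched the evidence, when, and the
    hash proving the image is unchanged since acquisition."""
    return {
        "image": image_path,
        "analyst": analyst,
        "action": action,
        "sha256": hash_image(image_path),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```

Re-hashing before each analysis session and comparing against the acquisition hash is what lets the copy stand in for the original if findings are later challenged.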
Which of the following will most likely ensure that mission-critical services are available in the event of an incident?
A Business Continuity Plan (BCP) is designed to keep critical business services operating during an incident. It prioritizes essential functions using a BIA and defines continuity strategies (redundancy, alternate sites, manual procedures, vendor support) to meet RTO/RPO targets. Because it focuses on maintaining availability of mission-critical services—not just restoring them later—it best matches the question’s goal.
A vulnerability management plan focuses on identifying, prioritizing, remediating, and tracking vulnerabilities (scanning, patching, risk ranking, exception handling). While it reduces the likelihood of incidents and can limit impact, it does not directly ensure service availability during an incident. It is a preventative and risk-reduction program rather than an operational continuity plan.
A Disaster Recovery Plan (DRP) details how to restore IT systems, applications, and data after a disruptive event (backups, rebuild procedures, failover/failback). DRP supports availability, but it is typically narrower than BCP and is oriented toward recovery after the incident rather than maintaining business operations throughout. It’s often a component of the broader BCP.
An asset management plan inventories and tracks hardware, software, data, and ownership to support lifecycle management, compliance, and risk decisions. It helps with incident response (knowing what exists and who owns it) and vulnerability management (knowing what to patch), but by itself it does not provide continuity mechanisms like redundancy, alternate processing, or recovery procedures.
Core Concept: This question tests resilience planning: ensuring mission-critical services remain available during and after an incident. In CySA+ terms, this is about operational resilience, continuity of operations, and maintaining essential business functions despite disruptions.

Why the Answer is Correct: A Business Continuity Plan (BCP) is the most likely to ensure mission-critical services are available during an incident because it focuses on keeping essential functions running (or rapidly sustaining them) through predefined strategies such as redundancy, alternate processing, manual workarounds, and continuity procedures. BCP is broader than IT recovery: it includes people, processes, facilities, third parties, communications, and prioritization of critical services based on business impact.

Key Features / Best Practices: A strong BCP is driven by a Business Impact Analysis (BIA), which identifies mission-critical services, defines Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs), and maps dependencies (identity services, DNS, network, cloud providers, SaaS, etc.). It includes continuity strategies (active-active/active-passive, alternate sites, failover procedures), roles and responsibilities, crisis communications, and regular testing (tabletop exercises, functional tests). Framework alignment commonly references ISO 22301 (Business Continuity Management Systems) and NIST SP 800-34 (Contingency Planning Guide for Information Systems).

Common Misconceptions: Many confuse BCP with Disaster Recovery Plan (DRP). DRP is critical, but it primarily addresses restoring IT systems after a disruptive event. BCP is the overarching plan to keep the business operating, which is exactly what “ensure mission-critical services are available” implies. Vulnerability management and asset management are important security programs, but they do not directly provide continuity during an incident.
Exam Tips: If the question emphasizes “mission-critical services available” or “continue operations,” choose BCP. If it emphasizes “restore systems/data after outage,” choose DRP. Look for keywords: BCP = continuity of business functions; DRP = recovery of IT infrastructure; IR plan = containment/eradication/lessons learned; vulnerability management = reduce exposure over time.
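The RTO/RPO targets described above can be checked mechanically after an outage; a minimal sketch, assuming all durations are in minutes and the numbers are purely illustrative:

```python
def meets_objectives(outage_minutes: int, minutes_since_last_backup: int,
                     rto_minutes: int, rpo_minutes: int) -> dict:
    """Compare an actual disruption against BIA-derived targets.
    RTO bounds how long the service may be down; RPO bounds how much data
    (expressed as time since the last good copy) may be lost."""
    return {
        "rto_met": outage_minutes <= rto_minutes,
        "rpo_met": minutes_since_last_backup <= rpo_minutes,
    }
```

For example, a 45-minute outage against a 60-minute RTO passes, but if the last backup was 30 minutes old against a 15-minute RPO, the data-loss target was missed, pointing to a continuity strategy gap.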
An incident response team receives an alert to start an investigation of an internet outage. The outage is preventing all users in multiple locations from accessing external SaaS resources. The team determines the organization was impacted by a DDoS attack. Which of the following logs should the team review first?
CDN logs are primarily useful when the organization is defending its own public-facing applications through a content delivery or DDoS protection provider. In this scenario, the problem is that internal users cannot access external SaaS resources, which are third-party services and generally not fronted by the organization's CDN. Reviewing CDN logs would not be the best first step unless the outage specifically involved the organization's hosted content or edge protection platform. Therefore, CDN logs are less directly relevant than DNS logs for this type of widespread outbound access issue.
Vulnerability scanner logs are used to track scanning activity, discovered weaknesses, and assessment results. They do not provide meaningful visibility into a live DDoS event affecting user access to external SaaS platforms. Even if a scanner generated excess traffic, it would not typically explain a distributed outage across multiple locations in the same way as a DNS-related disruption. As a result, these logs are not the correct first source to review.
DNS logs should be reviewed first because DNS is a critical dependency for accessing external SaaS resources. If a DDoS attack is affecting the organization's DNS resolvers or DNS provider, users in multiple locations may be unable to resolve the hostnames of cloud services, creating the appearance of a widespread internet outage. DNS logs can quickly show query failures, timeouts, abnormal spikes in requests, or resolver unavailability. This makes them the most direct and efficient source for confirming whether the outage is tied to name resolution disruption caused by the attack.
Web server logs are useful for investigating attacks against a specific web application or server, especially for application-layer events such as HTTP floods. However, the scenario describes users across multiple locations being unable to access external SaaS resources, not an issue with the organization's own web server. Web server logs would not explain why many different third-party services became unreachable at once. That makes them a lower-priority and less appropriate first review point than DNS logs.
Core concept: This question tests incident response triage during a DDoS-related internet outage affecting users in multiple locations who cannot access external SaaS resources. The first log source to review should be the one most directly tied to how users locate and reach external services, especially when the impact is broad and affects many different destinations. In this case, DNS logs are the most relevant starting point because failed or degraded name resolution can prevent access to all external SaaS platforms at once.

Why correct: DNS is a foundational dependency for reaching external SaaS resources. If a DDoS attack is impacting the organization's DNS resolvers, upstream DNS provider, or DNS traffic path, users across multiple sites may appear to have a general internet outage even though the root cause is name resolution failure. Reviewing DNS logs first helps confirm whether queries are timing out, being dropped, or overwhelmed, and whether the outage is tied to DNS-specific attack activity.

Key features: DNS logs show query volume, response failures, SERVFAIL/NXDOMAIN patterns, timeout behavior, resolver health, and whether requests to external domains are succeeding. They can also reveal whether internal clients across multiple locations are all failing at the same dependency point. This makes DNS a high-value first source during widespread SaaS access issues.

Common misconceptions: CDN logs are useful when investigating attacks against the organization's own internet-facing applications that sit behind a CDN or DDoS protection service. However, users trying to reach third-party SaaS platforms are not typically traversing the organization's CDN. Web server logs are also less relevant because the issue is not with a single hosted application, and vulnerability scanner logs do not help with real-time outage triage.

Exam tips: On CySA+ questions, start with the shared dependency that best explains the broadest impact.
If many users in many locations cannot access many external services, think of common services like DNS before focusing on logs tied to a specific hosted application or security tool.
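A first pass over resolver logs like the one described can be automated; this sketch assumes simplified log lines of the form "client qname rcode", which is an illustrative format rather than any specific resolver's output:

```python
from collections import Counter

def summarize_dns_failures(log_lines) -> dict:
    """Tally DNS response codes from simplified resolver log lines and
    compute the failure rate. A spike in SERVFAIL/timeout entries points
    at resolver or upstream provider disruption."""
    FAILURES = {"SERVFAIL", "TIMEOUT", "REFUSED"}
    rcodes = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) == 3:          # skip malformed lines
            rcodes[parts[2]] += 1
    total = sum(rcodes.values())
    failed = sum(n for code, n in rcodes.items() if code in FAILURES)
    return {"rcodes": dict(rcodes),
            "failure_rate": failed / total if total else 0.0}
```

A failure rate near 1.0 across clients in multiple sites supports the "shared DNS dependency" hypothesis before any deeper packet-level analysis.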
During security scanning, a security analyst regularly finds the same vulnerabilities in a critical application. Which of the following recommendations would best mitigate this problem if applied along the SDLC phase?
Regular red team exercises can uncover real-world attack paths and validate detection/response, but they are typically periodic and occur late (often against production). This does not address the root cause of recurring vulnerabilities being reintroduced during development. Red teaming is more about adversary emulation and assurance than continuous prevention across the SDLC.
Regularly checking implemented coding libraries is essentially software composition analysis and patch management for dependencies. This can reduce repeated findings related to known vulnerable components, but it won’t prevent recurring vulnerabilities in custom code (logic flaws, insecure input handling, auth issues). It’s a partial control, not the best broad SDLC-phase mitigation for repeated issues.
Integrating application security scanning into the CI/CD pipeline provides continuous, automated detection and enforcement. SAST/SCA (and DAST in staging) can run on every build, create immediate developer feedback, and enforce quality gates so vulnerable code cannot be promoted. This directly mitigates repeated vulnerabilities by preventing regressions and embedding security controls into the SDLC workflow.
Proper input validation is an important secure coding practice and can mitigate specific vulnerabilities like injection and some XSS scenarios. However, it is a single technical control and may not address the full range of recurring findings (e.g., vulnerable dependencies, insecure configurations, auth flaws). The question asks for an SDLC-phase recommendation that best mitigates recurrence, which is better solved by pipeline scanning and gating.
Core concept: This question tests “shift-left” vulnerability management within the SDLC—specifically integrating application security testing into CI/CD so vulnerabilities are detected and fixed before release. In CySA+ terms, this aligns with continuous assessment, secure SDLC practices, and reducing vulnerability recurrence through process controls.

Why the answer is correct: If the same vulnerabilities keep reappearing in a critical application, the issue is usually systemic: insecure code patterns are being reintroduced, fixes are not verified, or developers lack immediate feedback. Embedding application security scanning into the CI/CD pipeline (e.g., SAST, SCA, and possibly DAST in staging) creates automated, repeatable gates. Each commit/build is evaluated, findings are tracked, and builds can fail when critical issues are detected. This prevents vulnerable code from reaching production and reduces “regression” (previously fixed flaws returning).

Key features / best practices:
- Use SAST during build to catch coding flaws early (insecure functions, injection patterns, auth/crypto misuse).
- Use SCA to detect vulnerable third-party libraries and transitive dependencies.
- Add policy gates (severity thresholds, allowlists, exception workflows with expiration).
- Integrate results into ticketing (Jira/ADO) and enforce remediation SLAs.
- Run DAST in a staging environment and correlate with SBOM and versioning.
These practices align with DevSecOps and continuous vulnerability management.

Common misconceptions: Red teaming (A) is valuable but periodic and production-focused; it won’t stop repeated vulnerabilities from being introduced. Checking libraries (B) helps only for dependency-related issues and doesn’t address custom code flaws or enforce prevention. Input validation (D) is a specific control for certain classes (e.g., injection) but is too narrow; recurring findings likely span multiple categories and require a process-level SDLC control.
Exam tips: When you see “regularly finds the same vulnerabilities,” think “automation + prevention in the pipeline,” not ad hoc testing. For SDLC/CI/CD questions, the best answer is often integrating security testing (SAST/SCA/DAST) with build gates and feedback loops to stop vulnerabilities before deployment.
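A severity-threshold build gate like the one described can be sketched in a few lines; in practice the findings would be parsed from a scanner's SARIF/JSON output, which is assumed here to already be a list of dicts with illustrative field names:

```python
def build_gate(findings, fail_on=("critical", "high")):
    """Quality gate for a CI/CD pipeline: return the scan findings whose
    severity meets the fail threshold. A real pipeline step would exit
    non-zero when this list is non-empty, blocking promotion of the build."""
    blocked = {s.lower() for s in fail_on}
    return [f for f in findings if f["severity"].lower() in blocked]
```

Running this on every build is what turns scanning from periodic detection into prevention: a previously fixed injection flaw that reappears fails the build immediately instead of resurfacing in the next quarterly scan.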
An analyst is reviewing a vulnerability report and must make recommendations to the executive team. The analyst finds that most systems can be upgraded with a reboot resulting in a single downtime window. However, two of the critical systems cannot be upgraded due to a vendor appliance that the company does not have access to. Which of the following inhibitors to remediation do these systems and associated vulnerabilities best represent?
Proprietary systems are vendor-controlled or closed platforms where the customer lacks administrative access to patch, upgrade, or modify components (e.g., firmware, embedded OS, appliance software). The stem’s key clue is the “vendor appliance” the company “does not have access to,” which prevents remediation regardless of reboot/downtime planning. This commonly requires vendor escalation and compensating controls until a vendor-provided fix is available.
Legacy systems are older technologies that remain in use due to business dependency, compatibility constraints, or replacement cost. They may be difficult to patch because updates could break integrations or because hardware/software is outdated. However, the stem’s primary blocker is not age or compatibility—it is lack of access because the system is controlled by a vendor appliance, which aligns more directly with proprietary constraints.
Unsupported operating systems are end-of-life platforms where the vendor no longer provides security patches. This is a major remediation inhibitor because vulnerabilities cannot be fixed through normal patching. The question does not indicate the OS is out of support; instead, it indicates the organization cannot upgrade due to a vendor appliance they cannot access. That is a different constraint than EOL/unsupported status.
Lack of maintenance windows means the organization cannot schedule downtime to apply patches or upgrades, often due to 24/7 availability requirements. In the scenario, most systems can be upgraded with a reboot and a single downtime window, so maintenance windows are available. The issue is not scheduling downtime; it is the inability to perform the upgrade because the vendor appliance is not accessible to the company.
Core Concept: This question tests inhibitors to vulnerability remediation—specifically situations where an organization cannot patch or upgrade because the affected component is controlled by a third party or is closed/locked down. In CySA+ terms, these are common constraints that drive compensating controls, risk acceptance, or vendor engagement.

Why the Answer is Correct: The two critical systems cannot be upgraded because they rely on a vendor appliance that the company does not have access to. That is the hallmark of a proprietary system: the underlying platform, firmware, or software stack is vendor-controlled, not customer-serviceable, and may restrict administrative access, patching, or configuration changes. Even if the organization can reboot and schedule downtime, they still cannot apply the upgrade because the vendor controls the appliance and the update mechanism.

Key Features / Best Practices: When remediation is blocked by proprietary/vendor-controlled technology, typical actions include:
- Engage the vendor for a patch/firmware update, maintenance procedure, or supported upgrade path.
- Review contract/SLA language for security patch timelines and escalation.
- Implement compensating controls: network segmentation, strict ACLs, WAF/IPS signatures, virtual patching, application allowlisting, and enhanced monitoring.
- Perform risk analysis and document risk acceptance if business constraints prevent timely remediation.
- Consider architectural changes (replace appliance, move to supported platform, add redundancy) to reduce single points of vendor dependency.

Common Misconceptions: “Legacy systems” and “unsupported operating systems” are also patch inhibitors, but the stem does not say the OS is old or out of support; it says access is blocked by a vendor appliance. “Lack of maintenance windows” is explicitly not the issue because most systems can be upgraded with a single downtime window.
Exam Tips: Look for keywords like “vendor appliance,” “no access,” “closed system,” “cannot install patches,” or “only vendor can update.” These point to proprietary/vendor-controlled remediation constraints. If the question emphasizes end-of-life, obsolete hardware/software, or inability to obtain patches due to age, that points to legacy/unsupported OS instead. If the barrier is scheduling downtime, that points to maintenance windows.
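One way to operationalize the "document risk acceptance" step above is a risk-register entry with a forced re-review date, so the exception cannot silently become permanent; the field names and status value here are illustrative:

```python
from datetime import date, timedelta

def register_unpatchable(cve_id, asset, inhibitor, compensating_controls,
                         review_days=30):
    """Risk-register entry for a vulnerability the organization cannot
    remediate itself (e.g., a vendor-controlled appliance). Captures the
    inhibitor, the compensating controls applied, and a scheduled re-review
    so the risk acceptance is revisited rather than forgotten."""
    return {
        "cve": cve_id,
        "asset": asset,
        "inhibitor": inhibitor,
        "compensating_controls": list(compensating_controls),
        "status": "risk_accepted_pending_vendor_fix",
        "next_review": (date.today() + timedelta(days=review_days)).isoformat(),
    }
```

An entry like this gives the analyst something concrete to show the executive team: the blocker, the interim mitigations, and the date the decision gets re-examined against vendor progress.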
A security team conducts a lessons-learned meeting after struggling to determine who should conduct the next steps following a security event. Which of the following should the team create to address this issue?
A service-level agreement (SLA) defines measurable service expectations such as uptime, response times, and support commitments between parties. It is primarily focused on performance and availability targets rather than operational security response procedures. Although an SLA may state how quickly a provider must respond to an issue, it does not normally assign internal incident response roles or define who performs containment, investigation, or recovery tasks. Therefore, it would not solve the team's problem of clarifying who should take the next steps during a security event.
A change management plan is used to control how system changes are requested, reviewed, approved, implemented, and documented. Its purpose is to reduce risk when modifying production systems, not to coordinate security incident handling. While change management may become relevant when applying patches or remediation after an incident, it does not define incident ownership, escalation paths, or responder responsibilities. For that reason, it does not address the confusion described in the scenario.
An incident response plan (IRP) documents the process for handling security incidents, including roles and responsibilities, escalation paths, communication requirements, and phase-based actions (detect, analyze, contain, eradicate, recover, and lessons learned). Because the team was unsure who should conduct next steps after a security event, creating or updating an IRP directly addresses the gap by assigning ownership and defining the workflow.
A memorandum of understanding (MOU) is a high-level agreement that outlines cooperation or shared expectations between organizations or departments. It can help clarify external support relationships, such as coordination with law enforcement, vendors, or partner agencies. However, an MOU usually does not provide the detailed internal workflow, role assignments, and decision-making structure needed during incident response. The team needs an operational plan for handling incidents, which is why an IRP is the better choice.
Core Concept: This question tests incident response governance—specifically role clarity and ownership during an incident. In CySA+ terms, this maps to having documented procedures (IR policy/plan/playbooks) that define who does what, when, and how during each phase of incident handling.

Why the Answer is Correct: The team struggled to determine who should conduct next steps after a security event, which is a classic symptom of missing or inadequate incident response planning. An Incident Response Plan (IRP) defines roles and responsibilities (e.g., Incident Commander, communications lead, forensics lead, containment/eradication owner), escalation paths, decision authority, and handoffs between teams. It also typically includes RACI-style responsibility assignments, contact lists, severity classification, and required approvals—exactly what prevents confusion about “who acts next.” Lessons-learned meetings commonly produce updates to the IRP and related playbooks to fix gaps discovered during real events.

Key Features / Best Practices: A strong IRP aligns with NIST SP 800-61 (Computer Security Incident Handling Guide): preparation; detection/analysis; containment/eradication/recovery; post-incident activity. It should include:
- Defined roles, authority, and delegation (including after-hours coverage)
- Escalation criteria and severity levels
- Communication plan (internal/external, legal/PR, regulators)
- Evidence handling and chain of custody guidance
- Integration with ticketing/SOAR and runbooks/playbooks for common incidents

Common Misconceptions: SLA and MOU documents can define responsibilities between organizations, but they don’t provide the operational, step-by-step internal incident workflow needed during an event. A change management plan governs controlled changes to production systems, not incident command and response actions.
Exam Tips: When the problem is “confusion during/after an incident about next actions, ownership, escalation, or coordination,” the best answer is almost always an incident response plan (and sometimes supporting playbooks/runbooks). Look for keywords like roles/responsibilities, escalation, communication, and next steps after an event—these point directly to IR planning and management.
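The RACI-style role assignments an IRP formalizes can be sketched as a simple severity-to-roles lookup; the role names and tiers below are illustrative, not taken from any specific framework:

```python
def escalation_path(severity: str) -> list[str]:
    """Look up who owns the next steps for an incident of a given severity —
    the kind of explicit assignment that prevents the 'who acts next?'
    confusion described in the scenario. Unknown severities fall back to
    the SOC analyst for triage."""
    paths = {
        "low": ["soc_analyst"],
        "medium": ["soc_analyst", "ir_lead"],
        "high": ["ir_lead", "incident_commander", "communications_lead"],
        "critical": ["incident_commander", "ciso", "legal", "communications_lead"],
    }
    return paths.get(severity.lower(), ["soc_analyst"])
```

Even a table this small, agreed on in advance and written into the plan, answers the question the team could not: given this event's severity, who is accountable for the next action.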
A cybersecurity analyst notices unusual network scanning activity coming from a country that the company does not do business with. Which of the following is the best mitigation technique?
Geoblocking is the best mitigation when traffic originates from a geography with no business need. It provides a scalable, policy-based reduction of attack surface and is effective against distributed scanning that rotates IPs. Implement at the edge (NGFW/WAF/CDN) with an exception process for legitimate partners. It reduces noise and reconnaissance opportunities without chasing individual indicators.
Blocking an IP range can be effective if the scanning is tied to a stable, well-defined netblock (e.g., a specific hosting provider). However, scanners often rotate across many providers and ranges, and large range blocks can cause collateral damage. It’s typically a tactical, short-term control rather than the best strategic mitigation when geography is the key clue.
Historical trend analysis helps determine whether the activity is new, recurring, or correlated with other events, and it can improve detection and threat hunting. However, it does not stop the scanning. The question asks for the best mitigation technique, so an investigative/analytical action is not the correct choice even though it is a good follow-up activity.
Blocking a single IP address is the least robust option because scanning commonly comes from botnets, VPNs, or cloud infrastructure with rapidly changing IPs. It may provide brief relief but is easily bypassed and creates ongoing operational overhead. It’s appropriate only as an immediate, temporary containment step when the source is singular and stable.
Core Concept: This question tests network-based mitigation and access control decisions during active reconnaissance. Specifically, it focuses on using geolocation-based controls (geoblocking) to reduce exposure to unsolicited scanning from regions outside the organization’s business footprint. This aligns with defensive hardening and attack surface reduction in Security Operations.

Why the Answer is Correct: If scanning originates from a country the company does not do business with, geoblocking is the most effective and scalable mitigation. It blocks traffic at a broader policy level (country/region) rather than playing “whack-a-mole” with individual IPs that can rapidly change. Many scanning campaigns leverage large botnets, cloud providers, VPNs, and rotating infrastructure; blocking a single IP or even a discovered range often fails to stop the activity for long. Geoblocking reduces noise, lowers the chance of successful enumeration, and conserves SOC time by preventing repeated recon attempts from that geography.

Key Features / Best Practices: Geoblocking is commonly implemented on next-generation firewalls, WAFs, CDN/edge services, and SIEM/SOAR-driven enforcement. Best practice is to apply it where it has the most leverage (internet edge/WAF) and to scope it to inbound traffic types that are not required (e.g., block all inbound from that country except explicitly allowed services). Maintain an exception/allowlist process for legitimate third parties, and monitor for bypass via “allowed” geographies (e.g., attackers using domestic cloud IPs).

Common Misconceptions: Blocking a specific IP (D) feels precise but is usually ineffective against distributed scanning. Blocking an IP range (B) can be useful when the range is confidently attributed and stable, but ranges can be huge (risking collateral damage) or quickly change. Historical trend analysis (C) is valuable for investigation and detection tuning, but it is not a mitigation technique to stop current scanning.

Exam Tips: When a question highlights “a country we don’t do business with,” it is a strong cue for geolocation-based controls. Choose broad, policy-driven mitigations when the threat is likely distributed or rapidly changing. Reserve IP/range blocks for well-attributed, stable sources or as short-term tactical controls while longer-term policies are implemented.
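The geoblock-with-exceptions logic described above can be sketched as a simple policy check. This is a minimal illustration, not a real edge deployment: the country lookup is stubbed (production uses NGFW/WAF/CDN policy or a GeoIP database), and all IPs and country codes are placeholders:

```python
# Geoblocking sketch: allowlist exceptions are evaluated before the geo rule,
# so legitimate partners in a blocked geography still get through.
BLOCKED_COUNTRIES = {"XX", "YY"}       # placeholder ISO codes with no business need
PARTNER_ALLOWLIST = {"203.0.113.10"}   # documented exception for a legitimate partner

def geolocate(ip: str) -> str:
    """Stubbed country lookup; replace with a real GeoIP source in practice."""
    demo_map = {"198.51.100.7": "XX", "192.0.2.5": "US", "203.0.113.10": "XX"}
    return demo_map.get(ip, "UNKNOWN")

def allow_inbound(ip: str) -> bool:
    if ip in PARTNER_ALLOWLIST:
        return True
    return geolocate(ip) not in BLOCKED_COUNTRIES
```

Note the ordering choice: checking the allowlist first is what makes the exception process workable, since a partner's traffic never reaches the country-level deny rule.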
An analyst has received an IPS event notification from the SIEM stating an IP address, which is known to be malicious, has attempted to exploit a zero-day vulnerability on several web servers. The exploit contained the following snippet:
/wp-json/trx_addons/V2/get/sc_layout?sc=wp_insert_user&role=administrator
Which of the following controls would work best to mitigate the attack represented by this snippet?
Correct. The snippet attempts to call wp_insert_user and set role=administrator, indicating an unauthorized user-creation/privilege-escalation attempt through a REST endpoint. Restricting user creation (and especially assignment of privileged roles) to administrators via proper capability checks and authorization controls directly mitigates the attacker’s objective, even if the endpoint is reachable. This is the most targeted and effective control among the choices.
Incorrect. “Layout creation” is not the attacker’s goal; it is likely just the vulnerable route being abused (sc_layout) to reach a sensitive function. Restricting layout creation might reduce some functionality, but it does not directly prevent wp_insert_user from being invoked or stop user creation/role assignment if the vulnerability allows it. It’s an indirect control and less reliable as a mitigation.
Incorrect. Making the trx_addons directory read-only is a filesystem hardening step that can help prevent plugin file tampering or web shell drops, but it does not stop an exploit that triggers server-side code already present to create a user in the database. The attack shown is about abusing application logic/API parameters, not writing to plugin files.
Incorrect. Setting the V2 directory to read-only is similarly a filesystem permission change and does not address the core vulnerability: an API endpoint allowing unauthorized invocation of wp_insert_user with an administrator role. The REST API route can still be executed even if the directory is read-only, because the web server only needs read/execute access to run the existing code.
Core concept: This question tests understanding of web application/API abuse leading to privilege escalation, and which compensating control best mitigates the impact. The snippet targets a WordPress REST endpoint (/wp-json/...) and appears to abuse a plugin route (trx_addons) to invoke a sensitive WordPress function (wp_insert_user) while supplying role=administrator. That is a classic pattern of insecure direct function invocation / parameter tampering that results in unauthorized account creation with elevated privileges.

Why the answer is correct: The most effective mitigation among the options is to ensure that user creation (and especially assigning privileged roles) is restricted to administrators and protected by proper authorization checks. If the vulnerable endpoint is attempting to create users by calling wp_insert_user without enforcing capability checks (e.g., current_user_can('create_users') and current_user_can('promote_users')), then enforcing “limit user creation to administrators only” directly blocks the attacker’s goal: creating an admin account. This is a least-privilege and access-control control that reduces blast radius even if an exploit attempt reaches the application.

Key features / best practices: In WordPress terms, this means enforcing role/capability checks on any code path that can create users or set roles, requiring authenticated admin sessions, and ideally adding CSRF protections (nonces) and strong server-side validation (never trusting a role parameter from the client). In operations, this also aligns with compensating controls such as WAF rules to block suspicious REST calls, disabling unused REST routes, and rapid patching of vulnerable plugins.

Common misconceptions: Options about “layout creation” or making directories read-only sound like hardening, but they don’t address the core issue: an API endpoint is being abused to create an admin user. Read-only permissions can help prevent file modification or web shells, but they won’t stop a database-level action like creating a user via application logic.

Exam tips: When you see wp_insert_user plus role=administrator, think privilege escalation via unauthorized user creation. Choose controls that enforce authorization/capabilities on that action (access control), not generic filesystem permission changes. Also note that “zero-day” implies patch may not exist yet, so compensating controls that reduce impact are key.
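The capability-check pattern described above (WordPress enforces it in PHP via current_user_can) can be sketched in Python as a language-neutral analogue. Everything here is illustrative: the capability names mirror WordPress's create_users/promote_users, but the classes and role table are invented for the sketch:

```python
# Analogue of server-side capability checks on a user-creation code path:
# the caller's capabilities are verified on the server, and a client-supplied
# role is never trusted to grant elevated privileges.
class AuthorizationError(Exception):
    pass

CAPABILITIES = {
    "admin":      {"create_users", "promote_users"},
    "subscriber": set(),
}

def create_user(caller_role: str, new_username: str, requested_role: str) -> dict:
    caps = CAPABILITIES.get(caller_role, set())
    if "create_users" not in caps:
        raise AuthorizationError("caller may not create users")
    # Assigning anything above the default role requires a separate capability,
    # so role=administrator from an unauthenticated request is rejected.
    if requested_role != "subscriber" and "promote_users" not in caps:
        raise AuthorizationError("caller may not assign elevated roles")
    return {"username": new_username, "role": requested_role}
```

With these checks in place, the exploit's request would fail at the authorization step even if the vulnerable REST route remains reachable, which is exactly the "reduces blast radius" property the explanation describes.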
A penetration tester submitted data to a form in a web application, which enabled the penetration tester to retrieve user credentials. Which of the following should be recommended for remediation of this application vulnerability?
Implementing MFA on the server OS is an authentication hardening measure for administrative or interactive logons to the operating system. It does not remediate a web application vulnerability where crafted form input is used to extract credentials from the application/database. MFA may reduce the impact of stolen admin credentials, but it does not prevent the injection-style data retrieval described in the scenario.
Hashing user passwords is a critical best practice (with a strong adaptive algorithm like bcrypt/Argon2 and unique salts) to reduce the impact of credential theft. However, it does not address the root cause: the attacker can still exploit the form submission flaw to retrieve stored credential data (hashes or other sensitive fields). The vulnerability remains exploitable even if passwords are hashed.
Input validation (performed server-side) is a direct remediation for vulnerabilities where untrusted form data is used unsafely. By enforcing allow-listed formats, lengths, and characters, the application can reject malicious payloads commonly used in injection attacks. While parameterized queries are ideal for SQL injection, input validation is the best available option here to prevent crafted submissions from being processed.
Network segmentation between users and the web server is a compensating control that can reduce exposure and limit blast radius, but it does not fix the application logic flaw. Users must still reach the web server to use the application, and an attacker can still submit malicious form input through allowed paths. Segmentation is defense-in-depth, not primary remediation for injection.
Core Concept: This question targets web application injection vulnerabilities (most commonly SQL injection) caused by untrusted user input being sent to the server and interpreted as commands/queries. When a tester can submit crafted form data and retrieve user credentials, it strongly indicates the application is failing to treat input as data and is instead allowing it to alter backend logic (e.g., database queries).

Why the Answer is Correct: Performing input validation before allowing submission is a primary remediation control to prevent malicious payloads from being accepted and processed. Proper validation enforces expected formats (type, length, character set, range) and rejects or sanitizes unexpected input. In practice, this should be paired with server-side validation (not just client-side) so attackers cannot bypass it. For injection specifically, validation reduces the attack surface by disallowing characters/patterns that are not required for the business function.

Key Features / Best Practices: Effective remediation typically includes:
- Server-side allow-list validation (preferred over block-lists).
- Canonicalization/normalization before validation.
- Context-aware output encoding where applicable.
- Using parameterized queries/prepared statements and safe ORM patterns (often the strongest control against SQLi), plus least-privilege database accounts.

Even though parameterization is not listed, “input validation” is the closest option that directly addresses the root cause described: unsafe handling of form input.

Common Misconceptions:
- Hashing passwords is important, but it does not prevent the attacker from extracting data if injection exists; it only reduces the value of stolen passwords.
- MFA on the server OS does not fix a web app flaw that leaks credentials from the application/database.
- Network segmentation can limit lateral movement, but it does not stop the application from being exploited to read credentials.

Exam Tips: When you see “submitted data to a form” leading to “retrieved credentials,” think injection and broken input handling. Choose controls that prevent malicious input from being executed/interpreted (validation, parameterization, escaping). Controls like MFA, segmentation, and hashing are defense-in-depth but not the primary fix for the described vulnerability.
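The pairing recommended above, server-side allow-list validation plus parameterized queries, can be sketched in a few lines. This is a minimal illustration: the username pattern is an assumed business rule, and sqlite3 stands in for whatever database the application actually uses:

```python
import re
import sqlite3

# Allow-list validation: expected type, length, and character set only.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def lookup_user(conn: sqlite3.Connection, username: str):
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username format")
    # Placeholder binding keeps the input as data; it is never spliced into SQL.
    cur = conn.execute("SELECT id, username FROM users WHERE username = ?", (username,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
conn.execute("INSERT INTO users (username) VALUES ('alice')")
print(lookup_user(conn, "alice"))  # (1, 'alice')
```

A classic injection payload such as `' OR '1'='1` fails the allow-list check before any query runs, and even if validation were bypassed, the `?` placeholder would treat it as a literal string rather than SQL. That layering is why validation and parameterization are recommended together.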
A cybersecurity team lead is developing metrics to present in the weekly executive briefs. Executives are interested in knowing how long it takes to stop the spread of malware that enters the network. Which of the following metrics should the team lead include in the briefs?
Mean time between failures (MTBF) is a reliability/availability metric used to estimate the average time between inherent failures of a system or component. It is common in operations and engineering contexts (hardware, infrastructure, service uptime) rather than incident response. It does not measure how quickly a security team stops malware propagation, so it is not appropriate for the executive question asked.
Mean time to detect (MTTD) measures how long it takes to discover an incident after it begins (or after initial compromise). While detection speed is important, it does not answer the executive’s specific concern: “how long it takes to stop the spread.” You could detect quickly but still take too long to isolate affected systems, so MTTD alone is insufficient for this question.
Mean time to remediate (often aligned with MTTR in security contexts) measures how long it takes to fully fix the issue—eradicate malware, remove persistence, patch vulnerabilities, and restore systems to a secure state. Remediation typically occurs after containment and can take significantly longer. Because the question focuses on stopping spread (limiting propagation), remediation is not the best match.
Mean time to contain (MTTC) measures the average time required to limit an incident so it can no longer expand in scope—e.g., isolating endpoints, blocking IOCs, disabling accounts, segmenting networks, and stopping C2 traffic. This directly answers “how long it takes to stop the spread of malware that enters the network,” making it the most appropriate metric for executive briefs.
Core Concept: This question tests incident response metrics, specifically measurements that describe how quickly an organization can limit an incident’s impact. In malware outbreaks, the most executive-relevant timing metric for “stopping the spread” is containment—preventing lateral movement and further propagation.

Why the Answer is Correct: Mean time to contain (MTTC) measures the average time from when an incident is identified/declared (or sometimes from initial detection, depending on the organization’s definition) until the threat is contained so it can no longer spread. Containment actions include isolating infected hosts, blocking malicious indicators at network controls, disabling compromised accounts, segmenting networks, and stopping command-and-control communications. Because executives asked “how long it takes to stop the spread of malware that enters the network,” MTTC directly maps to that goal: limiting blast radius and halting propagation.

Key Features / Best Practices: MTTC is commonly tracked alongside MTTD (detect) and MTTR (remediate/recover). For executive briefs, MTTC is especially meaningful because it reflects operational readiness (EDR isolation speed, SOC triage efficiency, network segmentation maturity, playbook automation via SOAR, and escalation paths). Define MTTC clearly (start/stop timestamps), standardize severity categories, and report trends (week-over-week) plus outliers with brief root-cause notes.

Common Misconceptions: MTTR (remediate) is often confused with containment. Remediation focuses on eradication and restoring systems (patching, reimaging, removing persistence, closing root cause), which can take much longer than stopping spread. MTTD is about discovering the malware, not stopping it. MTBF is a reliability metric for equipment/process failures and is not incident-response focused.

Exam Tips: When you see wording like “stop the spread,” “limit impact,” “prevent lateral movement,” or “reduce blast radius,” think containment metrics (MTTC). When you see “find it,” think MTTD. When you see “fix it/eradicate/recover,” think MTTR/mean time to remediate. Always map the verb in the question (detect/contain/remediate) to the metric name.
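The "define start/stop timestamps" advice above matters because the metrics are just averages over those anchors. A minimal sketch with invented incident records; the field names and the choice to measure containment and remediation from detection are assumptions each organization must make explicit:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident log with the four timestamps the metrics anchor on.
incidents = [
    {"began": datetime(2024, 5, 1, 9, 0), "detected": datetime(2024, 5, 1, 11, 0),
     "contained": datetime(2024, 5, 1, 15, 0), "remediated": datetime(2024, 5, 3, 9, 0)},
    {"began": datetime(2024, 5, 8, 8, 0), "detected": datetime(2024, 5, 8, 9, 0),
     "contained": datetime(2024, 5, 8, 11, 0), "remediated": datetime(2024, 5, 9, 8, 0)},
]

def mean_hours(start_key: str, stop_key: str) -> float:
    """Average elapsed hours between two timestamp fields across all incidents."""
    return mean((i[stop_key] - i[start_key]).total_seconds() / 3600 for i in incidents)

mttd = mean_hours("began", "detected")       # how long to discover
mttc = mean_hours("detected", "contained")   # how long to stop the spread
mttr = mean_hours("detected", "remediated")  # how long to fully fix
```

With these two sample incidents, MTTC (3.0 hours) is far smaller than mean time to remediate (34.5 hours), which mirrors the explanation's point: containment and remediation are different questions, and the executives asked about containment.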
A company’s security team is updating a section of the reporting policy that pertains to inappropriate use of resources (e.g., an employee who installs cryptominers on workstations in the office). Besides the security team, which of the following groups should the issue be escalated to first in order to comply with industry best practices?
Help desk is primarily responsible for operational support: triage, ticketing, endpoint remediation, and user support. While they may assist by isolating a workstation, removing unauthorized software, or reimaging systems, they are not the appropriate first escalation group for updating reporting policy or ensuring the organization’s response aligns with legal/privacy and disciplinary requirements.
Law enforcement involvement is typically reserved for situations with clear criminal activity requiring external investigation, mandatory reporting, or significant impact. Escalating to law enforcement first is usually not best practice for internal misuse because it can disrupt internal fact-finding, create unnecessary exposure, and complicate evidence handling unless legal has determined it is warranted.
The legal department is the best available first escalation point among the listed options because employee misuse can create legal, regulatory, and evidentiary concerns. Legal can advise the security team on how to document the incident, preserve evidence, and proceed without violating privacy, employment, or monitoring requirements. Legal also helps determine whether the matter should remain internal, be coordinated with HR and management, or be referred externally. While HR is often involved in employee misconduct cases, it is not an option here, making Legal the strongest answer from the choices provided.
A board member is part of executive governance and oversight, not day-to-day policy drafting or incident escalation. Boards typically receive high-level risk and incident reporting after internal stakeholders (security, legal, HR, compliance) have assessed impact and response. Escalating to a board member first is inefficient and bypasses necessary legal/compliance review.
Core Concept: This question tests escalation and communication best practices for policy-driven reporting of inappropriate use of resources (an insider misuse scenario). In CySA+ terms, this sits at the intersection of reporting/communication, incident classification, and ensuring actions align with legal/regulatory obligations and evidence-handling requirements.

Why the Answer is Correct: The legal department should be the first escalation point beyond the security team when updating reporting policy for employee misuse (e.g., installing cryptominers). Industry best practice is to ensure policies, reporting language, investigative steps, monitoring/inspection notices, disciplinary procedures, and evidence collection/retention align with employment law, privacy requirements, labor agreements, and regulatory constraints. Legal also guides when to involve HR, how to preserve attorney-client privilege, and how to word acceptable use and consent-to-monitoring clauses to reduce organizational liability.

Key Features / Best Practices: Legal review commonly covers (1) acceptable use policy alignment and enforceability, (2) privacy and monitoring disclosures (e.g., consent banners, BYOD boundaries), (3) evidence preservation and chain of custody requirements, (4) thresholds for external reporting (regulators, law enforcement), and (5) coordination with HR for disciplinary actions. Framework-aligned programs (e.g., NIST incident handling guidance and common corporate governance practices) emphasize clear escalation paths and involving counsel early for insider cases to avoid mishandling evidence or violating privacy/employee rights.

Common Misconceptions: Help desk is operationally involved in remediation (reimaging, ticketing) but is not the correct first escalation for policy/reporting decisions. Law enforcement is not typically first unless there is an immediate threat to life/safety or a clear requirement to report; premature involvement can complicate internal investigations and evidence handling. A board member is too high-level and generally receives summarized reporting after legal/HR/security have assessed risk and response.

Exam Tips: For questions about “first escalation” for policy, compliance, and potential employee misconduct, think: Security/IR team identifies → Legal (and often HR) validates process and obligations → then consider external entities (law enforcement/regulators) only if required. If the scenario mentions policy updates, liability, privacy, or disciplinary language, legal is usually the best-practice next stop.