
A recent zero-day vulnerability is being actively exploited, requires no user interaction or privilege escalation, and has a significant impact to confidentiality and integrity but not to availability. Which of the following CVE metrics would be most accurate for this zero-day threat?
Option A is the closest match to the scenario because it uses AV:N, AC:L, PR:N, and UI:N, which align with a remotely exploitable vulnerability that requires no user interaction and no privileges. It also indicates high confidentiality impact with C:H and an unchanged scope with S:U, both of which are reasonable based on the stem. However, the vector is not perfectly formed because 'I:K' is not a valid CVSS v3.1 integrity value, and availability is listed as A:L rather than none. Despite those flaws, it is still the best available choice because the other options contradict the exploitability conditions much more severely.
Option B is inconsistent with the stem because it requires high privileges and user interaction, directly contradicting the description of no privilege requirement and no user interaction. It also uses malformed CVSS notation: AV:K is not a valid Attack Vector value, and although PR:H and UI:R are syntactically valid metrics, neither fits the scenario. Although C:H and I:H would fit the impact portion, the exploitability metrics are fundamentally wrong. This makes B a poor match even before considering the notation issues.
Option C is incorrect because it indicates user interaction is required, which conflicts with the explicit statement that no user interaction is needed. Its impact metrics also do not fit the scenario: confidentiality is only low, integrity is none, and availability is high, while the stem says confidentiality and integrity are significantly affected but availability is not. Even though AV:N and AC:L are plausible, the rest of the vector misrepresents the vulnerability. Therefore, C does not accurately describe the threat.
Option D is wrong because it describes a local attack vector and requires privileges and user interaction, all of which contradict the scenario's highly exploitable no-interaction, no-privilege nature. It also assigns high availability impact, which the stem explicitly rules out. While confidentiality impact is high, the rest of the vector reflects a very different type of vulnerability. As a result, D is not an accurate CVSS representation of the described zero-day.
Core concept: This question tests how to map a vulnerability description to CVSS v3.1 base metrics, especially Attack Vector, Attack Complexity, Privileges Required, User Interaction, Scope, and Confidentiality/Integrity/Availability impact. The stem clearly indicates a remotely exploitable issue requiring no privileges and no user interaction, with high impact to confidentiality and integrity but little or no impact to availability. In CVSS terms, that points toward AV:N, AC:L, PR:N, UI:N, S:U, C:H, I:H, and A:N or at least not A:H. A key exam point is that 'zero-day' and 'actively exploited' are threat-context indicators and do not directly change the base vector; they affect prioritization and temporal considerations rather than intrinsic base severity. Common misconceptions include confusing privilege escalation with Scope and assuming active exploitation changes base metrics. Exam tip: first map exploitability conditions from the stem, then map CIA impact, and finally choose the closest available option even if the distractors contain malformed or imperfect vectors.
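The mapping above can be checked numerically. Below is a minimal sketch of the CVSS v3.1 base score calculation for the Scope:Unchanged case, using the metric weights from the CVSS v3.1 specification; the function names are illustrative.

```python
# CVSS v3.1 metric weights (Scope: Unchanged)
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
PR = {"N": 0.85, "L": 0.62, "H": 0.27}   # Scope:Unchanged values
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

def roundup(value):
    """Roundup as defined in CVSS v3.1 Appendix A (ceiling to 1 decimal)."""
    i = int(round(value * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a):
    """Base score for a Scope:Unchanged vector."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# The vector matching the stem: AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N
print(base_score("N", "L", "N", "N", "H", "H", "N"))  # 9.1 (Critical)
```

Running the stem's vector through the formula yields 9.1, a Critical rating, which is consistent with a remotely exploitable, no-interaction flaw with high confidentiality and integrity impact.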
A security analyst is tasked with prioritizing vulnerabilities for remediation. The relevant company security policies are shown below: Security Policy 1006: Vulnerability Management
THOR.HAMMER is highly exploitable (AV:N/AC:L/PR:N/UI:N) and has High availability impact (A:H) but no confidentiality or integrity impact (C:N/I:N). It is also on an internal system, which the policy deprioritizes relative to external systems. Additionally, the policy states confidentiality is prioritized over availability when a choice must be made, so this ranks below confidentiality-impacting issues like B/D.
CAP.SHIELD is remotely exploitable with no privileges or user interaction required and has High confidentiality impact (C:H) with no integrity/availability impact. This aligns with the policy’s preference for confidentiality over availability. It is also on an external/publicly available system, which the policy prioritizes over internal assets. With identical exploitability to the other options, its confidentiality impact plus external exposure makes it the top priority.
LOKI.DAGGER is externally exposed and highly exploitable, which makes it important. However, its impact is only High availability (A:H) with no confidentiality/integrity impact (C:N/I:N). The policy explicitly prioritizes confidentiality over availability when choosing between them. Therefore, despite being external, it is lower priority than an external vulnerability with High confidentiality impact (B).
THANOS.GAUNTLET has the same strong exploitability and High confidentiality impact as option B, but it is on an internal system. The policy states publicly available (external) systems and services are prioritized over internal systems. Therefore, D would be a high priority, but it is second to B because external exposure is the tie-breaker when severity characteristics are otherwise equivalent.
Core concept: This question tests policy-driven vulnerability prioritization using CVSS v3.1 Base metrics (Exploitability + Impact) and organizational risk rules. CVSS Base vectors describe technical severity; the policy then adds business prioritization: (1) use Base metrics, (2) prefer confidentiality over availability when forced to choose, and (3) prioritize externally/publicly exposed systems over internal ones. Why the answer is correct: Options B and D have the same CVSS vector: AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N. That indicates a remotely exploitable issue (Network), low complexity, no privileges, no user interaction, with High confidentiality impact and no integrity/availability impact. Per policy item #2, confidentiality-impacting issues outrank availability-only issues when a choice is required. Between B/D, policy item #3 breaks the tie: patch publicly available (external) systems before internal systems. Therefore CAP.SHIELD (B) is the highest priority. Key features/best practices: CVSS Base scoring emphasizes exploitability (AV, AC, PR, UI, S) and impact (C, I, A). Here, all options are equally exploitable (AV:N, AC:L, PR:N, UI:N, S:U), so impact and exposure drive prioritization. High confidentiality impact on an internet-facing asset typically maps to data disclosure, credential leakage, or sensitive information exposure—often a rapid escalation path to broader compromise. Many programs also align this with NIST SP 800-40 (patch management) and NIST CSF “Protect”/“Respond” functions, where internet-exposed, high-impact vulnerabilities are prioritized. Common misconceptions: Some may pick the highest availability impact (A or C) assuming outages are most urgent. However, the policy explicitly states confidentiality is prioritized over availability when choosing. Others may pick C because it is external, but it only affects availability (A:H) and not confidentiality. 
Exam tips: When a question provides a policy, treat it as the primary decision rule—even over your personal intuition. Compare (1) exposure (external vs internal), (2) impact category emphasized by policy (C over A), then (3) CVSS exploitability/impact details. If vectors are identical, use business context/policy to break ties.
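The ranking logic above can be sketched as a sort key. The host entries below are parsed by hand from the scenario's vectors; the tuple ordering is an assumption that encodes the two policy rules (confidentiality over availability, external over internal).

```python
# Each entry: name, external exposure, confidentiality and availability
# impact, parsed from the CVSS vectors in the scenario.
vulns = [
    {"name": "THOR.HAMMER",     "external": False, "c": "N", "a": "H"},
    {"name": "CAP.SHIELD",      "external": True,  "c": "H", "a": "N"},
    {"name": "LOKI.DAGGER",     "external": True,  "c": "N", "a": "H"},
    {"name": "THANOS.GAUNTLET", "external": False, "c": "H", "a": "N"},
]

IMPACT = {"H": 2, "L": 1, "N": 0}

def policy_key(v):
    # Policy order: confidentiality outranks availability,
    # then external exposure breaks ties.  Sort descending.
    return (IMPACT[v["c"]], v["external"], IMPACT[v["a"]])

ranked = sorted(vulns, key=policy_key, reverse=True)
print([v["name"] for v in ranked])
# CAP.SHIELD first: High confidentiality impact on an external system
```

Since all four options share identical exploitability metrics, the key only needs impact category and exposure; the resulting order matches the explanation above (B, then D, then the availability-only findings).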
Which of the following concepts is using an API to insert bulk access requests from a file into an identity management system an example of?
Command and control (C2) refers to attacker infrastructure and communication channels used to remotely manage compromised hosts (e.g., beaconing, tasking, exfiltration coordination). It is associated with malware, botnets, and post-exploitation activity, not legitimate administrative integrations. Using an API to bulk load access requests into IAM is an internal operational workflow, so it does not match C2.
Data enrichment is the process of augmenting existing security data with additional context to improve detection and triage (e.g., adding threat intelligence reputation, asset owner, business criticality, vulnerability context, or geo-location). While enrichment often uses APIs, the described action is not adding context to records; it is performing bulk provisioning/requests, which is automation rather than enrichment.
Automation is correct because the scenario describes programmatically inserting many access requests using an API from a file, replacing manual entry and enabling consistent, repeatable processing at scale. This is a common security operations pattern: integrate systems via APIs to streamline IAM workflows, reduce human error, accelerate provisioning, and ensure actions are logged and governed through standardized processes.
Single sign-on (SSO) is an authentication/authorization architecture that allows users to authenticate once and access multiple applications using federation protocols (e.g., SAML, OpenID Connect) or centralized authentication (e.g., Kerberos). Bulk inserting access requests into an identity management system is about provisioning and workflow automation, not about federated login or session/token reuse across services.
Core Concept: This question tests understanding of automation and orchestration in security operations, especially around identity and access management (IAM). Using an API to ingest a file of access requests into an identity management system is a classic example of automating a repeatable administrative/security workflow. In CySA+ terms, this aligns with using scripts, APIs, and integrations to reduce manual effort, improve consistency, and speed up operational processes. Why the Answer is Correct: The key indicators are “using an API” and “insert bulk access requests from a file.” That describes programmatic execution of a task that would otherwise be performed manually (e.g., creating many access tickets, provisioning accounts, assigning roles/groups). Bulk operations via API are specifically meant to automate high-volume, repetitive actions. This is not about authentication federation (SSO) or threat actor infrastructure (C2); it’s about streamlining an operational process. Key Features / Best Practices: Automation in IAM via APIs typically includes: input validation (ensure the file format and fields are correct), least privilege for the API client/service account, strong authentication for API access (OAuth2, mTLS, signed tokens), logging/auditing of every change (who/what/when), change control/approvals (workflow gates for privileged access), and error handling/rollback. In mature environments, this is integrated with SOAR/ITSM (e.g., ServiceNow) and governed by policies such as joiner-mover-leaver processes and periodic access reviews. Common Misconceptions: “Data enrichment” can sound plausible because APIs are often used to add context to data, but enrichment means augmenting existing records with additional attributes (e.g., adding threat intel, geo-IP, asset criticality), not performing bulk provisioning actions. “Single sign-on” involves centralized authentication and token-based access across applications, not bulk inserting requests. 
“Command and control” is a malware/attacker concept, not an administrative integration. Exam Tips: When you see APIs used to execute tasks (create/update users, assign roles, open/close tickets, push configs, trigger playbooks), think automation/orchestration. When APIs are used to add context to alerts/logs, think enrichment. When the topic is federated authentication (SAML/OIDC/Kerberos), think SSO. When it’s attacker remote control channels, think C2.
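As a rough illustration of the automation pattern in the question, the sketch below reads access requests from a CSV file and submits them to an IAM API. The endpoint URL, token handling, and CSV columns are all hypothetical placeholders, not a real product's API.

```python
import csv
import json
import urllib.request

# Hypothetical IAM endpoint and token -- placeholders for illustration only.
IAM_URL = "https://iam.example.com/api/v1/access-requests"
API_TOKEN = "REPLACE_ME"

def load_requests(path):
    """Read bulk access requests from a CSV file (user,role,justification)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def submit(request):
    """POST a single access request to the IAM system."""
    body = json.dumps(request).encode()
    req = urllib.request.Request(
        IAM_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example usage (not executed here):
# for row in load_requests("access_requests.csv"):
#     print(row["user"], submit(row))
```

The point of the sketch is the shape of the workflow, file in, API calls out, replacing manual entry; in production the token would come from a vault and every submission would be logged and gated by approvals, as described above.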
After identifying a threat, a company has decided to implement a patch management program to remediate vulnerabilities. Which of the following risk management principles is the company exercising?
Transfer means shifting or sharing risk with another party, such as purchasing cyber insurance, using contractual indemnification, or outsourcing a service with defined SLAs. While a third party might help manage patches, the scenario specifically describes implementing a patch management program to remediate vulnerabilities, which is applying a control to reduce risk rather than transferring it.
Accept means acknowledging the risk and choosing not to implement additional controls, typically because the cost or operational impact of remediation outweighs the benefit. Acceptance is usually documented with risk sign-off and monitoring. Since the company is actively implementing patch management to remediate vulnerabilities, it is not accepting the risk; it is taking corrective action.
Mitigate (risk reduction) is correct because patch management directly reduces vulnerability exposure and lowers the likelihood of successful exploitation. By systematically identifying, testing, deploying, and verifying patches, the organization implements a security control that decreases risk to an acceptable level. This is a classic example of mitigation in vulnerability management and risk treatment.
Avoid means eliminating the risk by discontinuing the activity that creates it, such as removing the vulnerable application, decommissioning the system, disabling a risky feature, or blocking an entire service. Patch management does not eliminate the underlying business activity; it keeps the system in use while reducing exposure. Therefore, it is mitigation, not avoidance.
Core Concept: This question tests risk response (risk treatment) principles in the context of vulnerability remediation. In cybersecurity risk management, common responses are: avoid, mitigate (reduce), transfer (share), and accept. Patch management is a key vulnerability management control used to reduce the likelihood and/or impact of exploitation. Why the Answer is Correct: Implementing a patch management program is an example of risk mitigation. The company has identified a threat and is taking action to remediate vulnerabilities by applying patches, updating software, and systematically managing updates. This reduces the attack surface and lowers the probability that known vulnerabilities will be exploited. Mitigation does not eliminate all risk, but it reduces risk to an acceptable level through controls. Key Features / Best Practices: A patch management program typically includes: asset inventory and software baselines; vulnerability scanning and prioritization (e.g., CVSS, exploitability, asset criticality, exposure); testing patches in staging; change management and maintenance windows; deployment automation (WSUS/SCCM/Intune, Linux repos/config management); verification (post-deployment scans, compliance reporting); and exception handling with compensating controls (e.g., virtual patching via WAF/IPS) when patches cannot be applied quickly. These practices align with common guidance such as NIST risk management concepts (risk response) and vulnerability remediation lifecycle expectations. Common Misconceptions: Transfer can seem plausible because organizations sometimes use cyber insurance or outsource patching, but the act described is directly applying a control, not shifting liability. Avoid might sound right if “remediate” is interpreted as “eliminate,” but avoidance means stopping the risky activity entirely (e.g., decommissioning the vulnerable system). 
Accept is also tempting when patching is delayed, but acceptance means taking no additional action beyond acknowledging the risk. Exam Tips: When you see actions like patching, hardening, adding MFA, segmentation, EDR deployment, or tuning firewall rules, think “mitigate.” If the scenario mentions insurance, contracts, or outsourcing liability, think “transfer.” If it mentions shutting down a service, removing a feature, or decommissioning a system, think “avoid.” If it mentions documenting and living with the risk, think “accept.”
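The keyword heuristics in the exam tip can be condensed into a small lookup. The keyword lists below are illustrative and far from exhaustive; this is a study aid, not a real classifier.

```python
# Illustrative keyword -> risk response mapping based on the tips above.
RESPONSES = {
    "mitigate": ["patch", "harden", "mfa", "segment", "edr", "firewall"],
    "transfer": ["insurance", "contract", "outsourc"],
    "avoid":    ["decommission", "shut down", "remove feature", "discontinu"],
    "accept":   ["document the risk", "live with", "sign-off"],
}

def classify(scenario):
    """Return the risk response suggested by keywords in a scenario."""
    text = scenario.lower()
    for response, keywords in RESPONSES.items():
        if any(k in text for k in keywords):
            return response
    return "unknown"

print(classify("Implement a patch management program"))   # mitigate
print(classify("Purchase cyber insurance for breaches"))  # transfer
```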
A security administrator has been notified by the IT operations department that some vulnerability reports contain an incomplete list of findings. Which of the following methods should be used to resolve this issue?
A credentialed scan authenticates to the target and performs local enumeration of patches, installed software, configurations, and services. This significantly improves accuracy and completeness versus unauthenticated probing, reducing false negatives. It is the best method to address vulnerability reports that are missing findings, especially when the missing items are host-based, such as patch, configuration, or installed software issues. By allowing the scanner to inspect the system from the inside, it produces a more complete and authoritative list of vulnerabilities.
An external scan assesses systems from outside the network perimeter and is useful for understanding internet-facing exposure. However, it is typically unauthenticated and limited to what is reachable externally. It will not reliably identify internal patch or configuration issues and therefore will not resolve the problem of incomplete findings in vulnerability reports for internal assets. It improves perimeter visibility, not host-level scan completeness.
A differential scan compares results against a previous baseline and reports only changes, often to reduce scan time and reporting noise. If the baseline scan was incomplete due to lack of authentication or access issues, the differential scan will inherit that limitation and still miss vulnerabilities. It is useful for tracking changes over time, but it does not improve the depth or completeness of vulnerability discovery. Therefore, it is not the right method to fix incomplete findings.
A network scan focuses on discovering live hosts, open ports, and reachable services across the environment. While it can improve asset discovery and identify exposed services, it does not provide the same depth as authenticated host checks. If reports are incomplete because the scanner cannot inspect local vulnerabilities such as missing patches or insecure configurations, a network scan alone will not solve that problem. It helps map the environment, but not fully enumerate host-based weaknesses.
Core Concept: This question tests vulnerability scanning depth and accuracy. Many scanners produce incomplete findings when they can only perform unauthenticated (non-credentialed) checks. Without credentials, the scanner often relies on banner grabbing, limited remote probes, and inference, which can miss local vulnerabilities such as missing patches, insecure local configurations, vulnerable installed software, weak local permissions, or registry/file-level issues. Why the Answer is Correct: Using a credentialed scan (authenticated scan) resolves incomplete vulnerability reports because it allows the scanner to log into the target system (e.g., via SSH for Linux, WinRM/WMI/SMB for Windows) and enumerate the system from the inside. This provides authoritative data: exact OS build, installed packages, patch levels, running services, local configuration settings, and sometimes even compliance-relevant settings. As a result, the findings list becomes more complete and accurate, reducing false negatives and improving prioritization. Key Features / Best Practices: Credentialed scanning typically requires:
- Properly scoped service accounts with least privilege (often read-only where possible)
- Secure credential handling (vaulting, rotation, avoiding hardcoding)
- Network access to management ports (e.g., 22/SSH, 5985/5986 WinRM, 445 SMB, WMI/DCOM as applicable)
- Validation of scan coverage (asset inventory alignment, scan logs, and error reports for auth failures)
In practice, incomplete reports are frequently caused by authentication failures, blocked management ports, endpoint security blocking scanner activity, or insufficient privileges. Reviewing scanner logs for “login failed” or “insufficient privileges” is a common troubleshooting step. Common Misconceptions: External scans and network scans can sound like they improve visibility, but they still remain largely unauthenticated and therefore can miss host-level issues.
Differential scans reduce scan time by focusing on changes, but they do not fix missing baseline visibility; if the initial scan was incomplete, subsequent differentials will also be incomplete. Exam Tips: When you see “incomplete findings,” “missing vulnerabilities,” or “false negatives,” think authentication/privilege and choose credentialed scanning. Also remember to consider operational causes (credential failures, blocked ports) as the practical reason reports are incomplete.
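The troubleshooting step mentioned above, reviewing scanner logs for authentication failures, can be sketched as a simple filter. The log lines and failure strings are illustrative; real scanners each use their own wording.

```python
import re

# Markers commonly associated with credentialed-scan failures; the exact
# strings vary by scanner, so treat these as illustrative patterns.
AUTH_FAILURE_PATTERNS = [
    re.compile(r"login failed", re.IGNORECASE),
    re.compile(r"insufficient privileges", re.IGNORECASE),
    re.compile(r"authentication (?:failure|failed)", re.IGNORECASE),
]

def auth_failure_lines(log_lines):
    """Return log lines suggesting a credentialed check could not run."""
    return [
        line for line in log_lines
        if any(p.search(line) for p in AUTH_FAILURE_PATTERNS)
    ]

sample = [
    "10.0.0.5: credentialed checks OK",
    "10.0.0.9: Login failed for account svc-scan",
    "10.0.0.12: insufficient privileges to read registry",
]
print(auth_failure_lines(sample))  # flags the two failing hosts
```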
Which of the following best describes the process of requiring remediation of a known threat within a given time frame?
An SLA (Service Level Agreement) defines measurable service targets, including time-bound remediation requirements (e.g., patch critical vulnerabilities within 7 days). It is commonly used between security/IT teams or with third-party providers to enforce deadlines, track compliance, and drive escalation when targets are missed. In vulnerability management, SLAs are a primary way to formalize and measure remediation performance.
An MOU (Memorandum of Understanding) documents an understanding or intent to cooperate between parties. It is often less formal and may be non-binding, focusing on roles and coordination rather than enforceable performance metrics. While an MOU might mention desired timelines, it typically does not function as the primary mechanism to require remediation within a strict, measurable time frame.
Best-effort patching means attempting remediation when feasible without committing to a specific deadline or measurable target. This approach lacks enforceability and is generally considered weak from a security and audit perspective because it cannot guarantee risk reduction within required windows. The question specifically asks about requiring remediation within a given time frame, which best-effort explicitly does not ensure.
Organizational governance refers to the overall structure of policies, oversight, accountability, and decision-making within an organization. Governance can mandate that remediation SLAs exist and can define risk tolerance and escalation paths, but it is not the specific process/instrument that sets a concrete time-bound remediation requirement. The question is looking for the direct mechanism used to enforce timelines.
Core Concept: This question is testing how organizations formally enforce vulnerability remediation timelines. In vulnerability management, a key control is defining how quickly known threats (vulnerabilities, misconfigurations, exposed services) must be remediated based on severity and risk. These timelines are typically expressed as measurable commitments (e.g., Critical: 7 days, High: 14 days) and are tracked for compliance. Why the Answer is Correct: An SLA (Service Level Agreement) is the mechanism that best describes requiring remediation within a given time frame. SLAs define measurable performance targets and deadlines between parties (internal IT and the business, security and operations, or an external managed service provider). For remediation, an SLA sets expectations such as “patch critical vulnerabilities within X days,” includes how compliance is measured, and often defines escalation paths and consequences for missed targets. Key Features / Best Practices: Effective remediation SLAs are risk-based and aligned to vulnerability severity (CVSS), exploitability (KEV/EPSS), asset criticality, and exposure (internet-facing vs internal). They include:
- Defined time-to-remediate (TTR) targets by severity
- Scope (systems, applications, cloud resources)
- Exceptions process (risk acceptance, compensating controls)
- Reporting metrics (MTTR, SLA compliance rate)
- Escalation and accountability (ticketing workflows, management reporting)
These practices align with common governance and control frameworks (e.g., NIST vulnerability management guidance and IT service management practices). Common Misconceptions: “MOU” and “organizational governance” sound formal, so they can be tempting. However, an MOU is usually non-binding and focuses on cooperation, not enforceable performance targets.
“Organizational governance” is broader (policies, oversight, decision rights) and may mandate SLAs, but it is not itself the specific instrument that sets a remediation deadline. “Best-effort patching” is the opposite of a time-bound requirement. Exam Tips: On CySA+ questions, look for keywords like “within a given time frame,” “measurable,” “required,” and “commitment.” Those point to SLAs/OLAs and metrics-driven enforcement. If the question emphasizes cooperation without enforceability, think MOU. If it emphasizes broad oversight and policy, think governance. If it implies no guarantees, think best-effort.
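The severity-based TTR targets described above can be sketched as a small deadline calculator. The Critical and High windows match the example values in the text; the medium/low windows are assumed for completeness.

```python
from datetime import date, timedelta

# Illustrative time-to-remediate (TTR) targets; real SLAs vary by program.
TTR_DAYS = {"critical": 7, "high": 14, "medium": 30, "low": 90}

def remediation_due(found_on, severity):
    """Return the SLA deadline for a finding of the given severity."""
    return found_on + timedelta(days=TTR_DAYS[severity])

def sla_met(found_on, severity, fixed_on):
    """True if the fix landed on or before the SLA deadline."""
    return fixed_on <= remediation_due(found_on, severity)

found = date(2024, 6, 1)
print(remediation_due(found, "critical"))             # 2024-06-08
print(sla_met(found, "critical", date(2024, 6, 10)))  # False: 2 days late
```

A real program would feed these deadlines into ticketing workflows and report the SLA compliance rate and MTTR metrics listed above.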
An incident response analyst notices multiple emails traversing the network that target only the administrators of the company. The email contains a concealed URL that leads to an unknown website in another country. Which of the following best describes what is happening? (Choose two.)
Beaconing is periodic, automated communication from an infected host to a command-and-control (C2) server (e.g., regular HTTP/DNS callbacks). The prompt describes emails with concealed URLs targeting administrators, which is indicative of phishing attempts, not an established compromised endpoint calling out repeatedly. Beaconing would be supported by logs showing consistent outbound connections at fixed intervals.
DNS hijacking involves manipulating DNS resolution (poisoning caches, altering resolver settings, or compromising authoritative records) so users are redirected to attacker-controlled destinations. The question does not mention DNS anomalies, incorrect domain resolution, or users being redirected despite typing correct URLs. It specifically highlights a concealed URL in an email, which points more directly to obfuscation and phishing.
A social engineering attack fits because the emails are selectively targeting administrators (high-value accounts) to influence behavior—typically to click a link, open content, or provide credentials. This is classic spear phishing/whaling tradecraft aimed at privileged access. The attacker leverages trust and urgency rather than exploiting a technical network weakness described in the prompt.
An on-path (man-in-the-middle) attack occurs when an attacker intercepts and possibly alters traffic between two parties. Indicators include certificate warnings, unexpected TLS changes, ARP/DNS anomalies, or traffic routing through suspicious intermediaries. The prompt describes email content with a hidden link to an external site, not evidence of traffic interception or modification in transit.
Obfuscated links are URLs intentionally disguised to hide the true destination, such as shortened URLs, encoded strings, misleading anchor text, nested redirects, or lookalike domains. The prompt explicitly states the email contains a concealed URL leading to an unknown foreign website, which is a hallmark of link obfuscation used to bypass filters and trick recipients.
ARP poisoning is a local network technique to redirect traffic by corrupting ARP tables, often enabling on-path attacks within the same broadcast domain. The scenario does not mention ARP anomalies, duplicate IP/MAC mappings, or local traffic interception. The described behavior is email-based targeting with a concealed URL, which is not explained by ARP poisoning.
Core concept: This scenario tests recognition of targeted phishing (a social engineering technique) combined with URL obfuscation. In CySA+ terms, it’s about identifying attacker tradecraft in email-based initial access: selecting high-value recipients (administrators) and hiding a malicious destination (concealed/obfuscated URL) that leads to an external, unknown site. Why the answers are correct: The emails “target only the administrators,” which strongly indicates a targeted campaign (spear phishing/whaling) rather than broad spam. The goal is typically credential theft, malware delivery, or establishing initial foothold by tricking privileged users into clicking a link or entering credentials. That is best described as a social engineering attack (C). The email also contains a “concealed URL,” which aligns with obfuscated links (E): techniques like URL shorteners, mismatched anchor text vs. actual href, homoglyph domains, excessive URL encoding, embedded redirects, or using HTML/CSS to hide the true destination. Key features / best practices: Analysts should inspect full headers, sender authentication results (SPF/DKIM/DMARC), and the actual URL after decoding/expanding redirects in a sandbox. Controls include secure email gateways, URL rewriting/detonation, disabling automatic link following, and user training focused on privileged accounts. For admins, enforce phishing-resistant MFA (FIDO2/WebAuthn), conditional access, and least privilege to reduce impact if credentials are harvested. Common misconceptions: “Unknown website in another country” can tempt choices like on-path attack, DNS hijacking, or ARP poisoning, but those are network-layer/path manipulation techniques and are not evidenced by the prompt. “Beaconing” refers to periodic outbound callbacks from an already-compromised host; here, the activity described is inbound email with a hidden link, which is earlier in the kill chain. 
Exam tips: When you see (1) a specific high-value audience (admins/executives) and (2) a link designed to hide its destination, think spear phishing/social engineering plus obfuscated links. Reserve DNS/ARP/on-path answers for scenarios describing traffic redirection, poisoned caches, altered routes, or man-in-the-middle symptoms.
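One obfuscation pattern mentioned above, anchor text that displays a different URL than the actual href, can be detected with a short stdlib-only sketch; the class name and sample HTML are illustrative.

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Flag anchors whose visible text looks like a URL but does not
    match the actual href -- a common link-obfuscation pattern."""

    def __init__(self):
        super().__init__()
        self.href = None
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")

    def handle_data(self, data):
        text = data.strip()
        if self.href and text.startswith("http") and text != self.href:
            self.flagged.append((text, self.href))

    def handle_endtag(self, tag):
        if tag == "a":
            self.href = None

auditor = LinkAuditor()
auditor.feed('<a href="http://203.0.113.9/login">https://portal.example.com</a>')
print(auditor.flagged)  # visible text and true destination disagree
```

This only covers one technique; shorteners, homoglyph domains, and nested redirects require URL expansion and reputation checks in a sandbox, as noted above.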
A company is in the process of implementing a vulnerability management program, and there are concerns about granting the security team access to sensitive data. Which of the following scanning methods can be implemented to reduce the access to systems while providing the most accurate vulnerability scan results?
Credentialed network scanning logs into targets (e.g., via SSH/WinRM/WMI) to enumerate patches, configs, and software, producing very accurate results and fewer false positives than unauthenticated scans. However, it requires providing privileged credentials to the scanning process and often implies broader access to sensitive systems. This conflicts with the question’s goal of reducing access while maintaining accuracy.
Passive scanning monitors network traffic (SPAN/TAP) and uses observed behavior and fingerprints to infer assets and potential vulnerabilities. It is low impact and requires minimal access to endpoints, which is attractive for sensitive environments. However, it is generally less accurate and less complete than credentialed or agent-based methods because it cannot reliably confirm patch/configuration state and may miss dormant services.
Agent-based scanning installs a local agent on endpoints/servers to collect vulnerability and configuration data from within the system and report it to a central platform. It provides high accuracy (often comparable to credentialed scans) while reducing the need for the security team to have direct login access or to manage privileged scanning credentials across the environment. This best matches the requirements in the question.
Dynamic scanning typically refers to Dynamic Application Security Testing (DAST), which tests running web applications by sending requests and analyzing responses for vulnerabilities (e.g., injection, XSS). It is valuable for application security but does not provide comprehensive host-level vulnerability assessment for OS patches and system configurations. It also doesn’t directly address the access-to-systems concern for infrastructure vulnerability management.
Core concept: This question tests vulnerability scanning approaches and the tradeoff between scan accuracy and the level of access/privilege required. In vulnerability management, the most accurate results typically come from “inside” the endpoint (local inventory, patch levels, installed software, configuration state). The concern here is limiting the security team’s direct access to sensitive systems/data while still getting high-fidelity findings.

Why the answer is correct: Agent-based scanning provides highly accurate vulnerability data without requiring the security team to have interactive credentials or broad remote access to target systems. An agent runs locally on the endpoint/server, collects vulnerability-relevant telemetry (OS/build, installed packages, missing patches, insecure configurations, running services), and reports results back to a central console. This reduces the need to grant the security team privileged domain/service accounts or remote login capability, addressing the “concerns about granting access to sensitive data” while still producing results comparable to (and often better than) credentialed remote scans.

Key features / best practices:
- Least privilege: agents can run with only the permissions needed to inventory and assess; access is mediated by the agent/management plane rather than human logins.
- Reduced credential risk: avoids storing/rotating privileged scan credentials and reduces lateral movement risk if a scanner is compromised.
- Better coverage: works across network segmentation, VPN/offline endpoints, and cloud workloads where inbound scanning is restricted.
- Operational controls: use mutual TLS, signed agents, centralized policy, and role-based access control (RBAC) on the console to limit who can view sensitive findings.
Common misconceptions: Credentialed network scanning is also accurate, but it explicitly requires providing credentials with elevated access to the scanner/security team (or at least to the scanning platform), which is exactly the concern described. Passive scanning reduces access but is not “most accurate” because it infers vulnerabilities from observed traffic and fingerprints. “Dynamic scanning” is typically associated with DAST for web applications, not broad infrastructure vulnerability assessment.

Exam tips: When you see “most accurate,” think credentialed or agent-based. When you also see “reduce access to systems/credentials,” agent-based is the best fit because it minimizes privileged credential distribution and interactive access while maintaining high accuracy. Map the method to the environment: infrastructure VM vs. application testing (DAST/SAST).
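The agent-side workflow described above, collect local inventory, match it against advisories, report only findings to the console, can be sketched in miniature. Everything here is invented for illustration: the advisory list, the CVE placeholders, and the `assess` function are not from any real product; an actual agent would read the local package database and pull signed advisories from its management plane.

```python
# Hypothetical advisory feed: (package, first fixed version, placeholder CVE id).
ADVISORIES = [
    ("openssl", (3, 0, 12), "CVE-2023-XXXX"),
    ("sudo",    (1, 9, 15), "CVE-2023-YYYY"),
]

def parse_version(v):
    """Turn a dotted version string into a comparable tuple, e.g. '3.0.10' -> (3, 0, 10)."""
    return tuple(int(part) for part in v.split("."))

def assess(inventory):
    """Return findings for installed packages older than the fixed version.

    Because this matching runs on the host itself, only the findings leave
    the machine -- the security team never needs interactive credentials.
    """
    findings = []
    for pkg, installed in inventory.items():
        for name, fixed, cve in ADVISORIES:
            if pkg == name and parse_version(installed) < fixed:
                findings.append({"package": pkg, "installed": installed, "cve": cve})
    return findings

# Pretend local inventory; a real agent would query rpm/dpkg/MSI data.
local_inventory = {"openssl": "3.0.10", "sudo": "1.9.15"}
print(assess(local_inventory))
```

The design point matches the answer rationale: the privileged read of system state happens inside the agent, and the console only ever sees the resulting findings, mediated by RBAC.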
A security analyst is reviewing a packet capture in Wireshark that contains an FTP session from a potentially compromised machine. The analyst sets the following display filter: ftp. The analyst can see there are several RETR requests with 226 Transfer complete responses, but the packet list pane is not showing the packets containing the file transfer itself. Which of the following can the analyst perform to see the entire contents of the downloaded files?
The ftp.active.port field relates to active-mode FTP negotiation details and is not a filter that reveals the transferred file contents by itself. It would only help in understanding how the data connection was negotiated in active mode, not in reconstructing the payload. It also fails entirely if the session used passive mode, which is very common. Therefore, it does not address the analyst's need to view the downloaded file contents.
Filtering on tcp.port==20 assumes the FTP data transfer is using TCP port 20, which is generally true only for active FTP, not for passive FTP. Many FTP sessions use ephemeral ports for the data channel, so this filter can miss the transfer completely. Even if some packets appear, the filter is too narrow and based on an outdated assumption about FTP behavior. It is not a reliable way to locate and inspect the file contents.
Changing the display filter to ftp-data shows the packets that carry the actual file contents rather than just the FTP commands and responses. Using Follow TCP Stream on those packets lets Wireshark reassemble the transferred bytes into the full file content in sequence. This directly solves the problem described, because the analyst wants to see the contents of the downloaded files that are not visible under the ftp control-channel filter. It is also the most protocol-appropriate method for examining the transfer itself inside the capture.
Export Objects -> FTP is useful for extracting transferred files from a capture, but it is not the best answer to the specific problem described. The question asks how to see the packets containing the file transfer and inspect the entire contents after noticing they are absent from the packet list under the ftp filter. Export Objects is a recovery/export feature rather than the direct method for displaying the missing data-channel traffic in the analysis view. The more precise action is to filter on ftp-data and follow the relevant TCP streams.
Core concept: FTP uses two separate connections: the control channel for commands and responses, and the data channel for the actual file contents. A display filter of ftp shows the control conversation, such as RETR and 226 messages, but not the packets carrying the transferred bytes. To view the actual downloaded content in Wireshark, the analyst must examine the FTP data channel rather than only the control channel.

Why correct: The ftp-data display filter isolates the packets that contain the file transfer payload. Following the TCP streams on those data connections allows Wireshark to reassemble the transferred bytes so the analyst can inspect the full contents of the downloaded files. This directly addresses why the file-transfer packets are not visible when filtering only on ftp.

Key features: Wireshark distinguishes FTP control traffic from FTP data traffic, and ftp-data is the dissector/filter used for the latter. Follow TCP Stream reassembles the payload in order, which is especially useful when the file spans many packets. This method works regardless of whether the transfer uses active or passive FTP, as long as Wireshark identifies the data stream.

Common misconceptions: A common mistake is assuming FTP file transfers always occur on TCP port 20, which is only typical for active mode and not reliable in modern environments. Another misconception is that seeing RETR and 226 in the control channel means the file bytes should also appear under the ftp filter. Exporting objects can recover files, but it is not the primary action for making the missing transfer packets visible in the packet list.

Exam tips: When a question mentions FTP commands are visible but the actual transfer packets are not, think control channel versus data channel. In Wireshark, use ftp-data to locate the payload-bearing packets and Follow TCP Stream to reconstruct the file contents.
Be cautious of answers that rely on fixed ports for FTP, because passive mode often uses ephemeral ports.
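What Follow TCP Stream does, reorder segments by sequence number, drop duplicated retransmission bytes, and concatenate the payloads, can be illustrated with a toy reassembler. The segment data below is invented, the function assumes a gap-free capture, and real stream reassembly (including the tshark equivalent, `tshark -r capture.pcap -q -z follow,tcp,raw,<stream#>`) handles many more edge cases.

```python
def reassemble(segments):
    """Reorder TCP segments by sequence number and concatenate their payloads,
    mimicking in miniature what Wireshark's Follow TCP Stream does for an
    ftp-data connection. Assumes no gaps (every byte was captured).
    """
    stream = b""
    expected = min(seq for seq, _ in segments)  # first byte of the stream
    for seq, payload in sorted(segments):
        if seq + len(payload) <= expected:
            continue  # fully duplicated retransmission; nothing new
        # Keep only the bytes we have not already emitted (partial overlap).
        stream += payload[max(0, expected - seq):]
        expected = seq + len(payload)
    return stream

# Segments "captured" out of order, including one retransmission:
segments = [(6, b"world"), (0, b"hello "), (6, b"world")]
print(reassemble(segments))  # -> b'hello world'
```

This is also why the ftp-data approach works for both active and passive mode: once Wireshark identifies the data connection, reassembly depends only on TCP sequence numbers, not on which port the transfer happened to use.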
Which of the following would a security analyst most likely use to compare TTPs between different known adversaries of an organization?
MITRE ATT&CK is the best choice because it is a standardized knowledge base and matrix for adversary TTPs. Analysts can map observed behaviors to techniques/sub-techniques and compare those mappings across known threat groups. ATT&CK also documents which techniques are associated with specific adversaries, enabling direct comparison and supporting detection engineering, coverage analysis, and threat hunting.
The Cyber Kill Chain describes the stages of an intrusion, such as reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on objectives. While it helps analysts understand the progression of an attack and identify points for disruption, it is not granular enough to compare detailed techniques and procedures across multiple adversaries. MITRE ATT&CK is better suited for TTP comparison because it organizes adversary behavior into specific tactics and techniques rather than broad attack phases.
OWASP is focused on application security, especially web apps (e.g., OWASP Top 10). It provides guidance on common vulnerability classes and secure development/testing practices. It does not serve as a framework for comparing adversary TTPs across threat actors, so it is not the best tool for analyzing and contrasting known adversaries’ behaviors.
STIX/TAXII supports structured threat intelligence representation (STIX) and transport/sharing (TAXII). It’s excellent for exchanging indicators, observables, and threat intel objects between organizations and tools. However, it is not primarily a behavioral TTP comparison framework; it can carry ATT&CK-related data, but ATT&CK is the framework used to compare TTPs.
Core Concept: This question tests knowledge of threat intelligence frameworks used to describe and compare adversary behavior using TTPs (tactics, techniques, and procedures). In CySA+ terms, it’s about operationalizing threat intel to understand how different threat actors behave and to map detections and mitigations accordingly.

Why the Answer is Correct: MITRE ATT&CK is specifically designed to catalog and compare adversary TTPs in a standardized, behavior-based matrix. Security analysts use ATT&CK to map observed activity (e.g., PowerShell abuse, credential dumping, lateral movement methods) to techniques and sub-techniques, then compare those mappings across multiple known adversary groups. ATT&CK also includes curated “Groups” and “Campaigns” knowledge that links adversaries to the techniques they commonly use, enabling direct comparison between different adversaries targeting an organization.

Key Features / Best Practices: ATT&CK provides a common taxonomy (tactics like Initial Access, Execution, Persistence; techniques like Phishing, Scheduled Task, LSASS dumping) that supports consistent reporting and correlation across tools. It is widely integrated into SIEM/SOAR, EDR, and threat intel platforms, and is used for detection engineering (mapping alerts to ATT&CK), gap analysis (coverage by technique), purple teaming, and adversary emulation. Analysts often build ATT&CK heatmaps to visualize which techniques are most relevant to their threat landscape.

Common Misconceptions: Cyber Kill Chain can seem relevant because it describes phases of an attack, but it is more linear and higher-level; it’s not as granular for comparing specific techniques across adversaries. STIX/TAXII is also tempting because it’s used for threat intel sharing, but it’s a transport/data format rather than a behavioral comparison framework. OWASP focuses on web application security risks and testing guidance, not adversary TTP comparison.
Exam Tips: If the question mentions “compare TTPs,” “map techniques,” “adversary groups,” or “behavior-based matrix,” think MITRE ATT&CK. If it mentions “sharing threat intel feeds” or “structured indicators,” think STIX/TAXII. If it mentions “phases of attack,” think Kill Chain. If it mentions “web app vulnerabilities,” think OWASP.
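The comparison workflow described above, map each group to a set of technique IDs, then look at overlaps and differences, is essentially set algebra. In this sketch the technique IDs are real ATT&CK identifiers (T1566 Phishing, T1059.001 PowerShell, T1003.001 LSASS Memory, T1021.001 RDP, T1486 Data Encrypted for Impact, T1041 Exfiltration Over C2 Channel), but the group names and their technique mappings are invented; real mappings come from the ATT&CK Groups pages or MITRE's published STIX bundles.

```python
# Hypothetical group-to-technique mappings for two adversaries of interest.
groups = {
    "GroupA": {"T1566", "T1059.001", "T1003.001", "T1021.001"},
    "GroupB": {"T1566", "T1059.001", "T1486", "T1041"},
}

def compare(a, b):
    """Compare two groups' technique sets: shared techniques drive shared
    detections; group-unique techniques highlight coverage gaps."""
    return {
        "shared": sorted(groups[a] & groups[b]),
        a + "_only": sorted(groups[a] - groups[b]),
        b + "_only": sorted(groups[b] - groups[a]),
    }

result = compare("GroupA", "GroupB")
print(result["shared"])  # techniques both groups use -> highest-value detections
```

The "shared" bucket is what an ATT&CK heatmap highlights: techniques used by multiple adversaries targeting the organization are the highest-priority candidates for detection engineering.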