
CompTIA
192+ free practice questions with AI-verified answers
Powered by AI
Every PT0-003: CompTIA PenTest+ answer is verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.
A penetration tester wants to send a specific network packet with custom flags and sequence numbers to a vulnerable target. Which of the following should the tester use?
tcprelay is typically associated with relaying TCP connections between endpoints (a proxy/bridge behavior), often used in tooling ecosystems to forward traffic from one interface/host to another. It is not designed for low-level packet crafting where you set arbitrary TCP flags, sequence numbers, or header options. It operates at the stream/connection level rather than constructing custom packets field-by-field.
Bluecrack is associated with Bluetooth-related attacks (e.g., cracking Bluetooth PINs/keys in certain tool contexts). It is not a general-purpose network packet crafting tool and would not be used to generate custom TCP packets with specific flags and sequence numbers against an IP-based target. The scenario is TCP/IP packet manipulation, not Bluetooth exploitation.
Scapy is a packet crafting and manipulation framework for Python that allows a tester to build packets with precise control over header fields such as TCP flags, sequence numbers, acknowledgment numbers, window sizes, and options. It can send packets, receive responses, and script complex interactions, making it ideal for testing vulnerabilities that require specific packet structures or unusual protocol behavior.
tcpdump is a packet capture and analysis tool used to sniff network traffic and display packet headers/payloads based on filters. While it is excellent for verifying what was sent/received and collecting evidence, it does not provide the capability to craft and transmit arbitrary packets with custom flags and sequence numbers. It complements crafting tools but does not replace them.
Core Concept: This question tests packet crafting: creating and sending custom network packets with precise control over protocol fields (e.g., TCP flags, sequence/ack numbers, options, payload). Packet crafting is commonly used in exploit development, protocol fuzzing, firewall/IDS evasion testing, and validating vulnerabilities that depend on specific TCP state behavior.

Why the Answer is Correct: Scapy is a Python-based interactive packet manipulation tool/library designed specifically for crafting, sending, receiving, and dissecting packets across many protocols. With Scapy, a tester can explicitly set TCP flags (SYN, ACK, RST, FIN, PSH, URG), sequence numbers, acknowledgment numbers, window sizes, TCP options (MSS, SACK, timestamps), and even malformed or unusual combinations. This makes it ideal when a vulnerability requires a very specific packet structure or timing that standard tools cannot easily produce.

Key Features / Best Practices: Scapy supports building packets layer by layer (e.g., IP()/TCP()/Raw()), sending at L2 or L3, sniffing responses, and scripting repeatable tests. In real engagements, testers often script Scapy to reproduce a crash, validate a state-machine bug, or test how a target handles out-of-window sequence numbers. Best practice is to document crafted fields, capture traffic (e.g., with tcpdump/Wireshark) for evidence, and coordinate with stakeholders because crafted packets can destabilize services.

Common Misconceptions: Many confuse tcpdump with “packet tools.” tcpdump captures and displays packets; it does not craft and transmit custom packets with arbitrary header fields. Others may think any “tcp*” tool can set flags/sequence numbers, but most are for relaying, proxying, or troubleshooting rather than low-level crafting.

Exam Tips: When you see wording like “custom flags,” “custom sequence numbers,” “craft a specific packet,” or “build malformed packets,” think Scapy (or similar crafting tools like hping in other question sets). When you see “capture,” “sniff,” or “analyze traffic,” think tcpdump/Wireshark. When you see “relay/proxy,” think tcprelay. Match the verb (craft vs. capture vs. relay) to the tool’s primary function.
Want to practice anywhere?
Download Cloud Pass for free; it includes practice tests, progress tracking, and more.
A penetration tester needs to test a very large number of URLs for public access. Given the following code snippet:
1 import requests
2 import pathlib
3
4 for url in pathlib.Path("urls.txt").read_text().split("\n"):
5     response = requests.get(url)
6     if response.status == 401:
7         print("URL accessible")
Which of the following changes is required?
Correct. The logic on line 6 is wrong for two reasons: `requests` uses `response.status_code` (not `response.status`) to expose the HTTP code, and 401 means “Unauthorized,” i.e., authentication is required, so the URL is not publicly accessible. The condition should be updated to use `status_code` and to test for accessible outcomes (e.g., 200) rather than 401.
Incorrect. `requests.get(url)` is a valid method call for issuing an HTTP GET request and is appropriate for checking whether a URL is reachable and what status code it returns. While you might enhance it with headers, timeouts, or session reuse for scale, the method itself is not the required change to fix the core correctness issue in the snippet.
Incorrect. `import requests` is correct and required to use the requests library. The bug is not caused by a missing or incorrect import. If anything, additional imports (e.g., for concurrency) could improve performance for a very large list, but the question asks what change is required for correctness, which is in the status check logic.
Incorrect. The delimiter in `split("\n")` is reasonable for a newline-separated file and is not the primary issue. A more robust approach might use `splitlines()` to handle different newline formats and avoid trailing empty entries, but the script’s fundamental problem is mis-checking the HTTP status (wrong attribute and wrong interpretation of 401).
Core concept: This question tests understanding of HTTP response handling during reconnaissance/enumeration. When checking whether URLs are “publicly accessible,” the key signal is the HTTP status code returned by the server (e.g., 200 OK vs. 401 Unauthorized/403 Forbidden). In Python’s requests library, status codes are accessed via response.status_code, not response.status.

Why the answer is correct: Line 6 uses `if response.status == 401:`. Two issues exist: (1) `requests.Response` does not expose a `status` attribute for HTTP codes; the correct attribute is `status_code`. (2) A 401 status means the resource is NOT publicly accessible; it indicates authentication is required. If the tester’s goal is to identify public access, the condition should check for success codes (typically 200) or, more generally, “not requiring auth” (i.e., not 401/403). Therefore, the required change is to the condition on line 6.

Key features / best practices:
- Use `response.status_code` to evaluate HTTP results.
- Interpret codes correctly: 200/204/3xx often indicate reachable content; 401 indicates authentication required; 403 indicates forbidden (authorization failure or blocked); 404 indicates not found.
- For large URL lists, consider timeouts, exception handling, and concurrency (e.g., requests with timeouts, retry logic, or async), but those are enhancements rather than the specific required fix.

Common misconceptions:
- Confusing 401 with “accessible.” In security testing, 401 is evidence of access control being enforced, not public access.
- Assuming `response.status` is a valid requests property because other frameworks use similar naming.

Exam tips: For PenTest+ questions involving scripting and enumeration, focus on (1) correct library usage (attribute/method names) and (2) correct security interpretation of HTTP status codes. If the question says “public access,” think “200 OK” (or at least “not 401/403”) and ensure you’re checking the correct response field.
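A corrected version of the snippet, folding in the required fix (status_code, success-code check) plus the robustness enhancements the explanation mentions (splitlines(), timeouts, exception handling), as a sketch:

```python
import pathlib

import requests


def is_publicly_accessible(status_code: int) -> bool:
    # 200 indicates public content; 401/403 mean access control is enforced
    return status_code == 200


def check_urls(path: str = "urls.txt") -> None:
    for url in pathlib.Path(path).read_text().splitlines():
        if not url.strip():
            continue  # splitlines() plus this guard avoids empty trailing entries
        try:
            response = requests.get(url, timeout=5)
        except requests.RequestException:
            continue  # unreachable or erroring URLs are not publicly accessible
        if is_publicly_accessible(response.status_code):
            print(f"URL accessible: {url}")
```

Isolating the status interpretation in its own function keeps the security logic (what counts as “public”) separate from the transport details.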
During a penetration test, a tester captures information about an SPN account. Which of the following attacks requires this information as a prerequisite to proceed?
Golden Ticket attacks involve forging Kerberos Ticket Granting Tickets (TGTs). The key prerequisite is obtaining the krbtgt account’s secret (NTLM hash/AES key) from the domain, typically after domain admin-level compromise. SPN information is not required to create a Golden Ticket; the attacker can mint TGTs and then request access to services broadly.
Kerberoasting requires identifying an account with an SPN (a Kerberos service account). The attacker requests a service ticket (TGS) for that SPN and then cracks the ticket offline to recover the service account password/key. Capturing SPN account information is therefore a direct prerequisite because it provides the target service identity needed to request the roastable ticket.
DCShadow is an Active Directory attack where an adversary registers a rogue domain controller and pushes malicious changes via replication mechanisms. It typically requires high privileges (e.g., Domain Admin, Enterprise Admin, or specific replication-related rights). It does not rely on SPN enumeration; the core prerequisite is replication capability and elevated permissions, not service account identifiers.
LSASS dumping is a post-exploitation technique to extract credentials (NTLM hashes, Kerberos tickets, plaintext creds) from the Local Security Authority Subsystem Service process memory on a Windows host. The prerequisite is local admin/SYSTEM-level access (or equivalent) on the target machine, not SPN account information. SPNs are unrelated to dumping LSASS.
Core concept: This question tests knowledge of Active Directory Kerberos authentication and how Service Principal Names (SPNs) are used to identify service accounts. In AD, an SPN maps a service instance (for example, MSSQLSvc/server:1433) to an account, often a user or computer account, that runs that service. Kerberos uses SPNs when issuing service tickets (TGS), and those tickets are encrypted using the service account’s long-term key.

Why the answer is correct: Kerberoasting specifically depends on SPN-associated service accounts. An attacker enumerates accounts with SPNs, requests a TGS for one of those services from the KDC, and then attempts offline cracking of the encrypted ticket material to recover the service account password or key. Without SPN information, there is no valid Kerberoasting target because there is no service ticket to request for that service account.

Key features, configurations, and best practices: Kerberoasting is often possible with only a low-privileged domain account because authenticated users can usually request service tickets for SPN-registered services. Risk increases when service accounts have weak or rarely rotated passwords, or when those accounts are highly privileged. Mitigations include using strong random passwords or gMSAs for service accounts, rotating credentials regularly, enforcing least privilege, and monitoring for unusual TGS requests such as Event ID 4769.

Common misconceptions: Golden Ticket is also a Kerberos attack, but it requires the krbtgt account hash or key rather than SPN information. DCShadow is an AD replication abuse technique involving a rogue domain controller and elevated privileges, not SPN enumeration. LSASS dumping extracts credentials from process memory and likewise does not depend on SPNs.

Exam tips: Associate “SPN + request TGS + offline cracking” with Kerberoasting. If the prerequisite is the krbtgt hash or key, think Golden Ticket. If the attack involves pushing malicious directory changes by impersonating a domain controller, think DCShadow. If it involves extracting credentials from memory on a compromised host, think LSASS dumping.
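As a sketch of the SPN-enumeration step that precedes Kerberoasting: any authenticated user can usually run this LDAP query. The host, credentials, and base DN are placeholders; ldap3 is one common third-party library for this, and its import is deferred so the snippet loads without it.

```python
# LDAP filter matching user accounts that have a registered SPN
SPN_FILTER = "(&(objectClass=user)(servicePrincipalName=*))"


def enumerate_spns(dc_host: str, domain_user: str, password: str, base_dn: str):
    # Requires the third-party ldap3 package (imported lazily here).
    from ldap3 import Server, Connection, SUBTREE

    conn = Connection(Server(dc_host), user=domain_user,
                      password=password, auto_bind=True)
    conn.search(
        base_dn,
        SPN_FILTER,
        search_scope=SUBTREE,
        attributes=["sAMAccountName", "servicePrincipalName"],
    )
    # Each hit is a roastable target: request a TGS for its SPN,
    # then crack the ticket material offline.
    return [(e.sAMAccountName.value, e.servicePrincipalName.values)
            for e in conn.entries]
```

The filter string is the essential part; tools like GetUserSPNs automate the same query plus the TGS request.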
While performing an internal assessment, a tester uses the following command: crackmapexec smb 192.168.1.0/24 -u user.txt -p Summer123@ Which of the following is the main purpose of the command?
Incorrect. Pass-the-hash over SMB would use NTLM hashes rather than a plaintext password. In CME, PtH is typically performed by supplying hashes (e.g., --hashes or similar) instead of -p with a cleartext string. Since the command uses -p Summer123@, it is attempting standard authentication with a password, not replaying a captured hash.
Incorrect. Common protocol scanning refers to discovering open ports/services across hosts (e.g., using nmap or masscan). CrackMapExec is primarily for SMB/AD credential testing, enumeration, and post-exploitation workflows, not broad multi-protocol scanning. This command specifically targets SMB and attempts logons; it is not enumerating multiple protocols or doing port discovery.
Correct. The command attempts SMB authentication across 192.168.1.0/24 using many usernames from user.txt with a single password (Summer123@). That is the defining pattern of password spraying: testing a commonly used password against a large set of accounts (often across many hosts) to find valid credentials while reducing the chance of account lockouts compared to brute forcing.
Incorrect. Remote command execution with CME requires additional options such as -x (run a command) or -X (run PowerShell), and typically successful credentials with sufficient privileges. This command only supplies targets and credentials for authentication attempts; it does not include any execution flags or payloads, so it is not executing commands on endpoints.
Core concept: This question tests understanding of CrackMapExec (CME) usage against SMB in an internal network. CME is a post-exploitation/attack framework commonly used for credential validation, password spraying, enumeration, and (when specified) remote execution over SMB/WinRM.

Why the answer is correct: The command is:

crackmapexec smb 192.168.1.0/24 -u user.txt -p Summer123@

This targets SMB on every host in the /24 subnet and attempts authentication using a list of usernames (user.txt) with a single password (Summer123@). That pattern, many users and one password across many hosts, is classic password spraying. The goal is to find accounts that reuse a common/weak password without triggering lockouts as quickly as a brute-force attack would (which is many passwords against one account).

Key features and details:
- Module: smb indicates CME will attempt SMB authentication (typically to ports 445/139 depending on configuration).
- Target scope: 192.168.1.0/24 means multiple endpoints are tested.
- Credential strategy: -u user.txt supplies multiple usernames; -p Summer123@ supplies one candidate password.
- Output: CME will report successful logons, local admin status, SMB signing, OS info, domain/workgroup, etc., but the primary intent here is credential validation at scale.

Common misconceptions:
- Pass-the-hash (PtH) requires providing an NTLM hash (often with --hashes or -H, depending on tool/version) rather than a plaintext password. Here, -p is clearly a plaintext password.
- “Protocol scanning” is more like nmap/masscan; CME is not a general port scanner, even though it touches SMB.
- Remote command execution in CME typically requires flags like -x (execute command) or -X (PowerShell), plus valid creds and often admin rights. None are present.

Exam tips:
- Identify spraying vs. brute force: spraying = one/few passwords across many accounts; brute force = many passwords against one account.
- For CME, remember: -u/-p for plaintext auth attempts; hashes require explicit hash options; execution requires -x/-X; enumeration often uses --shares, --sessions, --users, etc.
- When you see a subnet plus a username list and a single password, default to “password spraying.”
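The spraying pattern itself, one password tested across many accounts, can be sketched independently of CME; try_login below is a hypothetical stand-in for the actual SMB authentication attempt:

```python
from typing import Callable, Iterable


def password_spray(
    users: Iterable[str],
    password: str,
    try_login: Callable[[str, str], bool],
) -> list[str]:
    # Spraying: one candidate password tested across many accounts,
    # unlike brute force (many passwords against one account).
    return [user for user in users if try_login(user, password)]


# Stubbed demonstration: only "bob" reuses the sprayed password.
hits = password_spray(["alice", "bob", "carol"], "Summer123@",
                      lambda user, pw: user == "bob")
print(hits)  # ['bob']
```

In a real engagement try_login would wrap an SMB logon attempt and the user list would come from user.txt, with delays between rounds to stay under lockout thresholds.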
A tester gains initial access to a server and needs to enumerate all corporate domain DNS records. Which of the following commands should the tester use?
“dig +short A AAAA local.domain” performs targeted lookups for A and AAAA records for a single name (local.domain). It may return only the zone apex IPs (or nothing) and will not enumerate subdomains or other record types. Useful for quick resolution checks, but it cannot retrieve “all corporate domain DNS records.”
“nslookup local.domain” resolves one name using the system’s default resolver. It provides limited information (typically A/AAAA) and does not attempt a zone transfer or enumerate the namespace. It’s a basic troubleshooting query, not a comprehensive DNS enumeration technique.
This option is clearly intended to represent a DNS zone transfer using dig, which is the correct technique for enumerating an entire DNS zone. A successful AXFR request can return the full set of records for the domain, including host, mail, name server, and service records. Although the option contains a typo (`afxr` instead of `axfr`), it is still the only choice that corresponds to the proper enumeration method. In correct form, the command would be `dig AXFR local.domain @local.dns.server`, assuming the server is authoritative and permits transfers.
“nslookup -server local.dns.server local.domain *” resembles an attempt to query a wildcard, but DNS does not support listing all records via “*”. A wildcard record only answers for non-existent names; it does not enumerate existing hostnames. Even if it returns something, it won’t provide a complete zone listing like AXFR.
Core concept: The question is testing DNS enumeration of an entire corporate domain. The standard method to retrieve all records from a DNS zone in one operation is a DNS zone transfer (AXFR). Zone transfers are intended for replication between authoritative DNS servers, but if misconfigured to allow transfers from unauthorized hosts, they can expose the full zone contents for reconnaissance.

Why correct: The intended correct choice is the dig AXFR zone transfer against a specified DNS server. In proper syntax, this would be `dig AXFR local.domain @local.dns.server`, which requests a full zone transfer for `local.domain` from `local.dns.server`. If permitted, it can return A, AAAA, CNAME, MX, NS, TXT, SRV, and other records, which best matches the requirement to enumerate all corporate domain DNS records.

Key features: AXFR is used between primary and secondary DNS servers for full zone replication. Secure deployments restrict zone transfers to approved secondary servers, often with ACLs and TSIG authentication. From an internal foothold, a successful AXFR can reveal internal-only hostnames and services that are not externally visible.

Common misconceptions: Standard `nslookup` or `dig` queries only resolve specific names or record types and do not enumerate an entire zone. Querying `*` is not a mechanism for listing all DNS records; wildcard DNS records only affect how non-existent names are answered. Also, syntax matters: `AXFR` is the correct transfer type, while `afxr` is not a valid dig query type.

Exam tips: When a question asks to enumerate all DNS records or dump a DNS zone, think of a zone transfer using AXFR. Remember the dig pattern as `dig AXFR <zone> @<server>`. On exams, if one option clearly intends AXFR but contains a minor typo, it is often still the best answer when all other options are plainly non-enumeration queries.
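Scripted, the transfer attempt is just the `dig AXFR <zone> @<server>` invocation; this sketch builds the argv list and optionally runs it (zone and server names are placeholders):

```python
import subprocess


def build_axfr_command(zone: str, server: str) -> list[str]:
    # dig AXFR <zone> @<server>
    return ["dig", "AXFR", zone, f"@{server}"]


def request_zone_transfer(zone: str, server: str) -> str:
    # Returns dig's stdout; a server that restricts transfers will
    # refuse the request instead of returning the zone's records.
    result = subprocess.run(
        build_axfr_command(zone, server),
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout
```

Keeping the command construction in its own function makes the `dig AXFR <zone> @<server>` pattern explicit and easy to reuse against multiple internal name servers.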
A penetration tester is performing network reconnaissance. The tester wants to gather information about the network without causing detection mechanisms to flag the reconnaissance activities. Which of the following techniques should the tester use?
Sniffing is passive reconnaissance: capturing existing network traffic rather than generating new probes. When performed via a network tap or SPAN/mirror port (or in limited cases on the same segment), it can reveal hosts, services, protocols, and naming information with minimal additional network footprint. Because it doesn’t inherently create scan-like traffic patterns, it is less likely to trigger IDS/IPS alerts than active discovery methods.
Banner grabbing is an active technique that connects to a service (e.g., HTTP, SSH, SMTP) to read identifying strings and version details. Even though it may involve only a small number of connections, it still generates traffic to the target and is commonly logged by the service and monitored by IDS/IPS or SIEM rules. It is therefore more detectable than passive sniffing.
TCP/UDP scanning is active reconnaissance that sends crafted packets to many ports/hosts to determine open services and firewall behavior. This produces distinctive patterns (SYN scans, UDP probes, retransmissions, port sweeps) that network security tools frequently detect and alert on. While rate limiting and evasion exist, scanning remains inherently noisy compared to passive collection.
Ping sweeps are active host discovery, typically using ICMP echo requests (and sometimes ARP or TCP-based pings) across an address range. This can quickly identify live hosts, but it generates bursts of ICMP or other probe traffic that is easy to spot in logs and is often blocked or alerted on. It is more likely to be flagged than passive sniffing.
Core Concept: This question tests the difference between passive and active reconnaissance. Passive recon collects information without directly interacting with target hosts/services in a way that generates noticeable traffic or log entries. Active recon (scans, probes, and service queries) touches systems and is more likely to trigger IDS/IPS, EDR, firewall logs, or SIEM correlation rules.

Why the Answer is Correct: Sniffing is a passive technique where the tester captures and analyzes network traffic (e.g., via SPAN/mirror ports, network taps, or being on the same broadcast domain for certain traffic). Because it does not require sending probes to target hosts, it generally produces minimal to no additional network noise attributable to the tester. This makes it the best choice when the goal is to gather network information while reducing the chance of detection mechanisms flagging the activity.

Key Features / What You Can Learn via Sniffing: Sniffing can reveal IP ranges in use, active hosts, DNS queries, internal domain names, protocols and ports in use, authentication methods, service discovery chatter (LLMNR, NBNS, mDNS), routing and ARP behavior, and sometimes credentials or session tokens if insecure protocols are present. Best practices include using lawful access methods (tap/SPAN), filtering to reduce capture volume, and focusing on metadata when content inspection is unnecessary.

Common Misconceptions: Banner grabbing can feel “lightweight,” but it is still active: it connects to services and often gets logged. TCP/UDP scanning and ping sweeps are classic discovery methods, but they generate recognizable patterns (sequential probes, unusual rates, ICMP bursts, UDP to many ports) that IDS/IPS and firewalls commonly detect. Even “slow” scans can be correlated over time.

Exam Tips: On PenTest+ questions, “without causing detection mechanisms to flag” strongly hints at passive recon. Choose techniques like sniffing, OSINT, and log review (when authorized) over active probing. If the option list includes one clearly passive method (sniffing) and others that transmit probes (scans/sweeps/banner grabs), the passive method is typically correct.
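A sketch of passive host discovery along these lines (the interface name is a placeholder; capturing requires root privileges and the scapy package, which is imported lazily so the snippet loads without it):

```python
seen_hosts: set[str] = set()


def record_pair(src: str, dst: str) -> None:
    # Pure bookkeeping: note which IP addresses have been observed talking.
    seen_hosts.update({src, dst})


def passive_sniff(iface: str, count: int = 200) -> set[str]:
    # Listens only; transmits nothing that could trip IDS/IPS signatures.
    from scapy.all import sniff, IP

    def handle(pkt):
        if IP in pkt:
            record_pair(pkt[IP].src, pkt[IP].dst)

    sniff(iface=iface, prn=handle, store=False, count=count)
    return seen_hosts
```

Extending the handler to DNS queries or LLMNR/NBNS chatter would recover internal names and services with the same zero-probe footprint.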
During a security assessment, a penetration tester uses a tool to capture plaintext log-in credentials on the communication between a user and an authentication system. The tester wants to use this information for further unauthorized access. Which of the following tools is the tester using?
Burp Suite is an intercepting web proxy used for testing web applications by capturing and modifying HTTP/S requests and responses. It can reveal credentials if the tester routes a browser through Burp or performs TLS interception, but it is not primarily a packet sniffer for general network communications. The scenario describes capturing plaintext credentials from network communication, which more directly matches a packet analyzer like Wireshark.
Wireshark is a network protocol analyzer that captures packets and decodes protocols to reveal application-layer data. Using features like display filters and “Follow TCP Stream,” a tester can reconstruct sessions and view plaintext usernames/passwords when insecure protocols are used. This directly matches capturing plaintext login credentials between a user and an authentication system for later unauthorized access.
Zed Attack Proxy (OWASP ZAP) is also an intercepting proxy focused on web application testing. Like Burp, it captures and manipulates web requests when the client is configured to use the proxy, and it can help find auth/session issues. However, it is not the typical tool for passive packet capture of arbitrary network traffic between two systems unless you are explicitly proxying that traffic.
Metasploit is an exploitation and post-exploitation framework used to deliver payloads, run exploits, and manage sessions (e.g., Meterpreter). While it has auxiliary modules and can support some sniffing in certain contexts after compromise, it is not the standard tool for capturing plaintext credentials from network traffic during a general assessment. The described activity aligns better with Wireshark.
Core concept: This question tests credential interception via network traffic capture (packet sniffing). If an application or authentication exchange uses plaintext protocols (e.g., HTTP Basic, Telnet, FTP, POP3 without TLS), credentials can be recovered by capturing packets and reconstructing the session.

Why the answer is correct: Wireshark is a packet capture and protocol analysis tool used to observe and dissect network communications. During an assessment, a tester can capture traffic on a network segment (or via a SPAN/mirror port, TAP, or on the local host) and then use Wireshark’s protocol dissectors and “Follow TCP Stream” feature to reassemble application-layer conversations. If the login is transmitted in cleartext, Wireshark will reveal the username/password directly in the payload, enabling subsequent unauthorized access (credential reuse), which aligns exactly with the scenario.

Key features / configurations / best practices: Wireshark supports live capture and offline analysis (PCAP). Filters (capture and display) help isolate authentication traffic (e.g., tcp.port==80, ftp, telnet, http.authbasic). “Follow TCP/UDP Stream” reconstructs sessions; protocol-specific views can decode HTTP headers, FTP USER/PASS commands, etc. In real engagements, testers often need proper positioning (same broadcast domain, ARP spoofing, or mirrored ports) to see the traffic. Defensively, the best practice is to enforce TLS (HTTPS, SSH, FTPS, STARTTLS), use HSTS, disable plaintext services, and implement network segmentation.

Common misconceptions: Burp Suite and ZAP can also reveal credentials, but typically by acting as an intercepting proxy for web traffic where the tester controls the client’s proxy settings or performs SSL interception. The question emphasizes “capture plaintext credentials on the communication between a user and an authentication system,” which is classic packet sniffing rather than web proxy interception. Metasploit is primarily for exploitation and post-exploitation, not general packet capture/analysis.

Exam tips: When you see “capture traffic,” “packet capture,” “sniffing,” “plaintext credentials,” or “follow stream,” think Wireshark/tcpdump. When you see “intercept/modify HTTP requests,” “proxy,” or “replay web requests,” think Burp Suite or ZAP. Map the tool to where it sits: on-path packet analyzer (Wireshark) vs. application-layer proxy (Burp/ZAP) vs. exploitation framework (Metasploit).
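What “Follow TCP Stream” recovers from a plaintext protocol can be illustrated on a reassembled FTP control channel; the parser below is a simplified sketch:

```python
def extract_ftp_credentials(stream: bytes) -> dict[str, str]:
    # FTP sends USER and PASS commands in cleartext; once the TCP
    # stream is reassembled, the credentials are directly readable.
    creds: dict[str, str] = {}
    for line in stream.decode(errors="ignore").splitlines():
        upper = line.upper()
        if upper.startswith("USER "):
            creds["username"] = line[5:].strip()
        elif upper.startswith("PASS "):
            creds["password"] = line[5:].strip()
    return creds


print(extract_ftp_credentials(b"USER alice\r\nPASS hunter2\r\n"))
# {'username': 'alice', 'password': 'hunter2'}
```

Wireshark does this reassembly and decoding automatically; the point of the sketch is that no cracking is involved when the protocol itself is plaintext.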
A penetration tester established an initial compromise on a host. The tester wants to pivot to other targets and set up an appropriate relay. The tester needs to enumerate through the compromised host as a relay from the tester's machine. Which of the following commands should the tester use to do this task from the tester's host?
Incorrect. This command runs nmap locally against <target_cidr> and then pipes nmap’s textual output into nc to port 22 on the compromised host. That does not cause the scan packets to traverse the compromised host, so it is not pivoting. It also attempts to send arbitrary text to an SSH port, which is not a meaningful “relay” for enumeration traffic.
Incorrect. This is a classic named-pipe netcat relay pattern, but as written it’s malformed (mknod syntax is wrong) and conceptually it creates a bidirectional relay for a single TCP flow, not a scalable pivot for enumerating an entire CIDR. It also mixes listening on 8000 with connecting to <target_cidr> on port 80, which doesn’t implement a general proxy for tools like nmap.
Incorrect. This attempts to chain netcat listeners/connectors and then scan 127.0.0.1:8000, but it doesn’t actually establish a pivot through the compromised host. There’s no step that places a proxy on the compromised host or forwards traffic through it. Additionally, nmap syntax is off (scanning “127.0.0.1 8000” is not a correct way to specify a port without -p).
Correct. proxychains forces supported applications (including nmap connect scans) to send their TCP connections through a configured proxy (commonly a SOCKS proxy established via the compromised host). This is exactly how a tester enumerates internal networks from their own machine while using the compromised host as a relay/pivot point. Pair it with nmap -sT for best compatibility.
Core concept: This question tests pivoting via a compromised host using a relay/proxy so the tester can enumerate internal targets “through” that host. In practice, this is commonly implemented as a SOCKS proxy (e.g., SSH dynamic port forwarding, chisel, Metasploit SOCKS, etc.) and then using a tool like proxychains to force scanning traffic through that proxy.

Why the answer is correct: Option D uses proxychains to run nmap against <target_cidr>. Proxychains intercepts outbound connect() calls and routes them through a configured proxy (typically SOCKS4/5) that terminates on the compromised host (or on a pivot agent reachable via the compromised host). This matches the requirement: enumerate from the tester’s machine while using the compromised host as the relay. It’s a standard workflow: establish pivot (SOCKS) -> configure /etc/proxychains.conf -> run tools (nmap, curl, smbclient) through proxychains.

Key features / best practices:
- Works best with TCP connect scans (-sT) because proxychains can proxy full TCP connections; raw packet scans (e.g., -sS SYN scan) generally won’t work through SOCKS.
- Requires a pre-established proxy path (e.g., ssh -D 1080 user@compromised_host, or a pivot tool) and correct proxychains configuration.
- For nmap via proxychains, keep expectations realistic: service detection and some NSE scripts may be limited/slow; consider targeted port lists and timing.

Common misconceptions:
- Trying to “pipe” nmap output into netcat (Option A) does not forward the scan traffic; it only forwards text output.
- Netcat listener/pipe tricks (Options B/C) can create simple relays for single TCP streams, but they are not a general-purpose pivot for scanning an entire CIDR from the attacker host.

Exam tips:
- When you see “pivot,” “relay,” “enumerate through compromised host,” think SOCKS proxy + proxychains.
- Remember: proxychains + nmap typically implies -sT (connect scan), not -sS.
- Distinguish between forwarding scan results (text) vs forwarding scan traffic (network connections).
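Under the hood, a SOCKS-based pivot relays complete TCP connections, which is why proxychains pairs with -sT. A minimal sketch of the SOCKS5 CONNECT request (RFC 1928) that a proxychains-style wrapper issues for each intercepted connection; the hostname and port below are hypothetical:

```python
import struct

# Greeting sent first: version 5, offering one auth method ("no auth").
SOCKS5_GREETING = b"\x05\x01\x00"

def socks5_connect_request(host: str, port: int) -> bytes:
    """Build a SOCKS5 CONNECT request (RFC 1928) for a named target.

    A proxychains-style wrapper performs this exchange for every
    intercepted connect() call; raw SYN probes (nmap -sS) never pass
    through this layer, which is why -sT is required.
    """
    name = host.encode()
    return (b"\x05\x01\x00"              # VER=5, CMD=CONNECT, RSV=0
            + b"\x03"                    # ATYP=3: domain name follows
            + bytes([len(name)]) + name  # length-prefixed hostname
            + struct.pack(">H", port))   # destination port, big-endian

req = socks5_connect_request("fileserver.internal", 445)
print(req.hex())
```

The proxy resolves the hostname and opens the real connection itself, which is also why DNS lookups can be proxied ("proxy_dns" in proxychains.conf).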
A penetration tester is unable to identify the Wi-Fi SSID on a client's cell phone. Which of the following techniques would be most effective to troubleshoot this issue?
Sidecar scanning is not a standard wireless reconnaissance or troubleshooting term in the PenTest+ context. It does not describe a recognized method for identifying whether an SSID is being broadcast on a particular RF channel. Because the problem is specifically about Wi-Fi network visibility, the tester needs a technique tied to 802.11 channel and beacon analysis. That makes this option a distractor rather than a practical troubleshooting approach.
Channel scanning is correct because Wi-Fi access points transmit management traffic on specific channels, and a client must scan those channels to discover the network. By checking each channel, a tester can determine whether the AP is active, what band it is using, and whether the phone supports that channel range. This is especially useful for troubleshooting issues such as unsupported 5 GHz channels, DFS channels, or simple channel misconfiguration. It is the most direct and technically relevant method for diagnosing why an SSID is not visible on a mobile device.
Stealth scanning focuses on reducing the likelihood of detection during reconnaissance rather than improving the ability to find a wireless network. Even if a scan is performed quietly or passively, the core troubleshooting need is still to inspect the relevant wireless channels for beacon or probe-response traffic. The question asks for the most effective troubleshooting technique, not the least detectable one. Therefore, stealth scanning does not best address the SSID visibility problem.
Static analysis scanning is used to inspect source code, binaries, or applications without executing them. It is relevant to software security testing, not to wireless signal discovery or 802.11 management frame analysis. A missing SSID on a phone is a radio-frequency and wireless configuration issue, not a code-analysis problem. For that reason, static analysis scanning is clearly unrelated to the scenario.
Core concept: This question tests wireless reconnaissance and troubleshooting fundamentals related to SSID discovery. Wi-Fi networks are found by listening for beacon frames or probe responses on the channel where the access point is operating. If a cell phone cannot identify a Wi-Fi network, the most effective troubleshooting step is to determine which channel the AP is using and whether the client device can scan that band/channel.

Why correct: Channel scanning is the best answer because it systematically checks wireless channels for 802.11 management traffic. This allows the tester to verify whether the access point is present, whether it is broadcasting on 2.4 GHz or 5 GHz, and whether the phone supports that frequency and channel. It is the most direct way to troubleshoot why an SSID is not appearing on a client device.

Key features: Channel scanning helps identify channel mismatch, unsupported bands, weak signal, and interference. It can also detect the presence of an AP even if the SSID is hidden, because the BSSID and channel are still observable in management traffic. This makes it useful for determining whether the issue is with the AP configuration or the client device.

Common misconceptions: Stealth scanning is about avoiding detection, not improving wireless troubleshooting. Static analysis scanning applies to software or code review, not RF discovery. Sidecar scanning is not a standard wireless troubleshooting technique in this context.

Exam tips: When a question asks why a device cannot see a Wi-Fi network, think about wireless channels, bands, beacon frames, and client compatibility. The most practical troubleshooting method is to scan channels and confirm where the AP is operating.
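As an illustration of why band and channel support matter, the 2.4 GHz channel plan follows a simple spacing rule (channels 1-13 sit 5 MHz apart starting at 2412 MHz; channel 14 is offset at 2484 MHz). A small sketch of the frequency list a channel scan would walk:

```python
def channel_to_freq_24ghz(channel: int) -> int:
    """Return the center frequency (MHz) for a 2.4 GHz Wi-Fi channel.

    Channels 1-13 are spaced 5 MHz apart starting at 2412 MHz;
    channel 14 (Japan, 802.11b only) sits apart at 2484 MHz.
    """
    if channel == 14:
        return 2484
    if 1 <= channel <= 13:
        return 2407 + 5 * channel
    raise ValueError(f"not a 2.4 GHz channel: {channel}")

# A channel scan dwells on each frequency in turn, listening for
# beacon frames before hopping to the next channel.
scan_plan = {ch: channel_to_freq_24ghz(ch) for ch in range(1, 15)}
print(scan_plan[1], scan_plan[6], scan_plan[11])  # 2412 2437 2462
```

A phone whose radio or regulatory domain omits some of these channels (or the 5 GHz/DFS ranges, which follow their own plan) will simply never hear the AP's beacons.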
A client recently hired a penetration testing firm to conduct an assessment of their consumer-facing web application. Several days into the assessment, the client's networking team observes a substantial increase in DNS traffic. Which of the following would most likely explain the increase in DNS traffic?
Covert data exfiltration via DNS tunneling commonly causes large increases in DNS queries. Data is encoded into subdomains (or TXT queries) and sent to an attacker-controlled domain, producing many unique lookups and often long query names. This is especially plausible “several days into” an assessment, when testers may simulate post-compromise actions and attempt stealthy outbound channels that bypass strict HTTP egress controls.
URL spidering (crawling) maps a web application by following links and discovering endpoints. While it may trigger some DNS lookups initially (e.g., for new hostnames), most crawling stays within the same domain and primarily increases HTTP/HTTPS requests, not sustained high DNS volume. A major DNS spike is less consistent with spidering than with DNS tunneling behavior.
HTML scraping extracts content from web pages and APIs. Like spidering, it mainly drives HTTP/HTTPS traffic and application-layer load. DNS usage typically remains stable because the scraper repeatedly accesses the same hostnames after initial resolution. Scraping might increase bandwidth and web server logs, but it is unlikely to produce a substantial, ongoing increase in DNS traffic.
A DoS attack can increase DNS traffic if the DNS infrastructure is targeted (e.g., query floods, amplification). However, the scenario is a consumer-facing web application assessment and the observation is a DNS spike without mention of service outage. In PenTest+ context, a sustained DNS increase during an assessment more commonly indicates DNS tunneling/exfiltration than an overt DoS, which is often out of scope.
Core concept: DNS is not just for name resolution; it can be abused as a transport channel. Attackers (or testers simulating attackers) can generate unusually high DNS query volume by encoding data into subdomain labels and sending it to an attacker-controlled authoritative DNS server (DNS tunneling). This is commonly used for covert command-and-control (C2) and/or data exfiltration because DNS is often allowed through firewalls and monitored less rigorously than HTTP/S.

Why the answer is correct: A substantial increase in DNS traffic several days into a web application assessment strongly suggests activity beyond basic browsing—specifically, a covert channel. In DNS exfiltration, each chunk of data is base32/base64/hex encoded into a series of queries like <encoded-data>.exfil.example.com. The victim’s resolver forwards these queries outward, creating a high volume of DNS requests and responses. The “several days in” timing aligns with post-compromise behavior: after gaining a foothold (e.g., via web app vulnerability leading to server-side execution), the tester may attempt to move data out in a stealthy way.

Key features and best practices: Indicators include many unique subdomains, long label lengths near DNS limits, high NXDOMAIN rates (if using non-existent labels), and queries to unusual domains. Defensive controls include DNS logging/analytics, egress filtering to approved resolvers, blocking known tunneling patterns, limiting TXT record usage, response rate limiting (RRL), and DLP/IDS signatures for tunneling tools. From a testing perspective, this should be explicitly authorized in the rules of engagement because it can resemble real attacker behavior.

Common misconceptions: Recon activities like spidering and scraping primarily increase HTTP(S) traffic, not DNS, because they reuse the same hostnames and rely on existing connections. A DoS could target DNS, but the scenario is a web application assessment and the question asks what would most likely explain increased DNS traffic—covert exfiltration is a classic reason for sustained, abnormal DNS volume.

Exam tips: When you see “unexpected spike in DNS,” think DNS tunneling/exfiltration/C2 first. Look for wording implying stealth, persistence, or post-exploitation (e.g., “several days into the assessment”). If the question instead described service unavailability or resolver saturation, then DNS-focused DoS would be more plausible.
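A rough sketch of why tunneling inflates query counts: the payload is encoded and split across 63-character DNS labels (the RFC 1035 per-label limit), so even a small file fans out into many unique lookups. The domain below is a placeholder:

```python
import base64  # base32 keeps labels safe for case-insensitive DNS

MAX_LABEL = 63  # RFC 1035 limit per DNS label

def encode_chunks(data: bytes, domain: str = "exfil.example.com"):
    """Split data into DNS-safe base32 labels and yield query names.

    Each query carries one chunk as its leftmost label; the resulting
    stream of unique lookups is the traffic spike defenders observe.
    """
    encoded = base64.b32encode(data).decode().rstrip("=")
    for i in range(0, len(encoded), MAX_LABEL):
        yield f"{encoded[i:i + MAX_LABEL]}.{domain}"

queries = list(encode_chunks(b"confidential: db password = hunter2"))
for q in queries:
    print(q)
```

The attacker-controlled authoritative server for the placeholder domain simply logs the labels and reassembles them, which is why egress filtering and DNS analytics are the matching defensive controls.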
A penetration tester performs a service enumeration process and receives the following result after scanning a server using the Nmap tool:

PORT     STATE    SERVICE
22/tcp   open     ssh
25/tcp   filtered smtp
111/tcp  open     rpcbind
2049/tcp open     nfs

Based on the output, which of the following services provides the best target for launching an attack?
Database is not supported by the scan results. Common database ports (e.g., 1433 MSSQL, 3306 MySQL, 5432 PostgreSQL, 1521 Oracle) are not shown as open. While databases can be excellent targets, you should base your selection on observed services/ports. Here, the evidence points to SSH, RPC/NFS, and filtered SMTP—not a database service.
Remote access maps to SSH on 22/tcp. SSH can be a target if weak passwords, exposed keys, outdated daemons, or misconfigurations exist, but it typically requires authentication and is often well-hardened. Without additional findings (version vulnerabilities, credential reuse, default creds), SSH is usually less immediately fruitful than an exposed file-sharing service like NFS.
Email maps to SMTP on 25/tcp, but Nmap reports it as filtered. Filtered indicates a firewall/ACL is blocking probes or limiting connectivity, which reduces immediate attack options. Even when open, SMTP attacks often depend on specific server misconfigurations (open relay, vulnerable MTA versions, user enumeration). In this output, SMTP is not the best initial target.
File sharing maps to NFS on 2049/tcp (with rpcbind on 111/tcp). This is often a high-value target because misconfigured exports can allow unauthenticated access to sensitive data or writable shares. Attackers can enumerate exports, mount them, and potentially obtain credentials, keys, configs, or escalate privileges (e.g., via no_root_squash or writable paths). This makes file sharing the best target here.
Core concept: Interpreting Nmap service enumeration results to identify the most promising attack surface. The scan shows SSH (22/tcp open), SMTP (25/tcp filtered), rpcbind (111/tcp open), and NFS (2049/tcp open). The question asks which service provides the best target for launching an attack, which typically means the service most likely to yield exploitable misconfigurations, weak access controls, or direct data exposure.

Why the answer is correct: NFS (Network File System) on 2049/tcp, especially when paired with rpcbind on 111/tcp, is a classic high-value target because it often exposes file shares that may be misconfigured (e.g., exported to broad networks, weak host-based restrictions, or permissive export options). If an export is accessible, an attacker can enumerate exports (showmount -e), mount them, and potentially read sensitive files (configs, keys, backups) or even write to directories. Misconfigurations like no_root_squash can allow remote root on the mounted share to act as root on the server’s filesystem for that export, enabling straightforward privilege escalation or persistence (e.g., dropping SSH authorized_keys, modifying scripts, or planting cron jobs if writable paths are exposed).

Key features and best practices: NFS security relies heavily on correct export configuration (/etc/exports), network scoping, and proper options (root_squash, ro where possible, restricting by IP/subnet, and using NFSv4 with stronger controls/Kerberos where appropriate). rpcbind/portmapper (111) facilitates discovery of RPC services and can aid attackers in mapping NFS-related RPC programs and versions.

Common misconceptions: SSH is “remote access,” but it is often hardened and requires credentials; without a known vulnerability or weak credentials, it may not be the best initial target. SMTP is filtered, meaning a firewall/ACL is likely blocking or limiting access, reducing immediate attackability. “Database” is not indicated by any open port here.

Exam tips: Prioritize services that commonly expose data or allow unauthenticated/weakly authenticated access. In Nmap output, combinations like rpcbind + nfs are strong indicators to test file-sharing exposure first (enumerate exports, attempt mounts, check permissions, and look for sensitive files or writable paths). Also note state: “open” is generally more actionable than “filtered.”
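The triage described above can be sketched as a small parser plus a priority table. The scoring weights are illustrative, not an official rubric:

```python
def parse_nmap_table(text: str):
    """Parse the PORT/STATE/SERVICE columns of plain Nmap output."""
    results = []
    for line in text.strip().splitlines():
        parts = line.split()
        if len(parts) == 3 and "/" in parts[0]:  # skips the header row
            port = int(parts[0].split("/")[0])
            results.append({"port": port, "state": parts[1], "service": parts[2]})
    return results

# Illustrative triage weights: "open" beats "filtered", and data-exposing
# file-sharing services rank above authenticated remote access.
PRIORITY = {"nfs": 3, "rpcbind": 2, "ssh": 1, "smtp": 1}

scan = """\
PORT STATE SERVICE
22/tcp open ssh
25/tcp filtered smtp
111/tcp open rpcbind
2049/tcp open nfs"""

open_services = [r for r in parse_nmap_table(scan) if r["state"] == "open"]
best = max(open_services, key=lambda r: PRIORITY.get(r["service"], 0))
print(best["service"], best["port"])  # nfs 2049
```

Filtering on state first mirrors the exam tip: the filtered SMTP port never even enters the candidate list.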
Which of the following explains the reason a tester would opt to use DREAD over PTES during the planning phase of a penetration test?
Incorrect. PTES can be used for web application tests just as well as for network or internal tests; it describes phases and deliverables, not a web-specific approach. DREAD is also not inherently web-specific. The deciding factor between DREAD and PTES is whether you need threat/risk scoring (DREAD) versus a full testing methodology (PTES).
Incorrect. Mobile application assessments may use specialized techniques (reverse engineering, device security checks, API testing), but that does not make DREAD preferable to PTES. PTES remains a general engagement framework. DREAD would only be chosen here if the goal is to score and prioritize mobile threats, not simply because the target is mobile.
Incorrect. Thick client testing involves different attack surfaces (local storage, IPC, update mechanisms, client-server protocols), but PTES still applies as the overarching process. DREAD is not a thick-client methodology; it is a threat rating model. Platform type does not determine DREAD vs PTES; the need for threat prioritization does.
Correct. DREAD is specifically used to create and prioritize a threat model by assigning scores to threats across five categories. In the planning phase, this helps determine which threats deserve the most testing effort and communicates risk to stakeholders. PTES includes a threat modeling phase, but it is a process standard, not a scoring model—so DREAD is the better choice when the task is threat modeling and prioritization.
Core concept: DREAD and PTES serve different purposes in the planning phase. PTES (Penetration Testing Execution Standard) is a methodology/framework describing phases and activities of a penetration test (pre-engagement, intelligence gathering, threat modeling, vulnerability analysis, exploitation, post-exploitation, reporting). DREAD is a risk-rating model used to score and prioritize threats (Damage potential, Reproducibility, Exploitability, Affected users, Discoverability).

Why the answer is correct: A tester would choose DREAD over PTES specifically when the task is to create or support a threat model by quantifying and prioritizing identified threats. During planning, stakeholders often need a defensible way to rank risks and decide what to test first, what depth is required, and where to allocate time. DREAD provides a structured scoring approach that can translate technical findings into business-relevant prioritization. PTES, by contrast, is not a scoring system; it is a process standard that tells you what phases to perform, not how to numerically rank threats.

Key features/best practices: DREAD is commonly used alongside threat modeling approaches (e.g., STRIDE for categorizing threats, then DREAD for scoring). In planning, you can use DREAD to prioritize test cases, define success criteria, and justify scope focus (e.g., high DREAD items get deeper testing). PTES is best used to ensure engagement governance: rules of engagement, scope, communications, legal authorization, and a consistent workflow through reporting.

Common misconceptions: Options about web, mobile, or thick client testing can seem plausible because different targets may require different tools and techniques. However, PTES is target-agnostic and can be applied to web, mobile, internal, external, and application testing. DREAD is also target-agnostic; it is chosen based on the need for threat prioritization, not the platform under test.

Exam tips: If the question contrasts a “model” (DREAD) with a “standard/methodology” (PTES), look for wording about risk scoring, prioritization, or threat modeling. Choose PTES when the question is about the overall penetration testing lifecycle and engagement structure; choose DREAD when the question is about ranking threats by impact/likelihood-style factors.
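A minimal sketch of DREAD scoring as it might be used during planning. The 1-10 per-category scale and the sample threats are assumptions, since organizations calibrate DREAD scales differently:

```python
from statistics import mean

def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average the five DREAD categories (each rated here on 1-10).

    Scales vary by organization; a 1-10 rating per category with a
    simple mean is one common convention.
    """
    return mean([damage, reproducibility, exploitability,
                 affected_users, discoverability])

# Hypothetical threats for a planning-phase prioritization exercise.
threats = {
    "SQL injection in login form": dread_score(9, 9, 8, 10, 8),
    "Verbose error pages":         dread_score(3, 10, 6, 4, 9),
}
for name, score in sorted(threats.items(), key=lambda t: -t[1]):
    print(f"{score:.1f}  {name}")
```

The sorted output is the planning artifact: high scorers get deeper testing effort, exactly the prioritization PTES alone does not provide.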
A penetration tester is performing a security review of a web application. Which of the following should the tester leverage to identify the presence of vulnerable open-source libraries?
VM (Vulnerability Management) refers to an organizational process and supporting tools for identifying, prioritizing, remediating, and tracking vulnerabilities across systems. While VM platforms may ingest scan results and sometimes include application findings, they are not purpose-built to enumerate application dependencies and map them to open-source CVEs. VM is broader asset-centric management, not focused dependency composition analysis.
IAST (Interactive Application Security Testing) instruments the application (agents/sensors) to observe code execution during functional testing and detect issues like injection, insecure deserialization, and weak crypto usage. Although IAST may reveal some runtime component details, it is not the primary or most reliable approach for identifying all vulnerable open-source libraries and their versions, especially transitive dependencies and build-time packages.
DAST (Dynamic Application Security Testing) tests a running application from the outside (black-box) by sending requests and analyzing responses. It excels at finding runtime vulnerabilities such as XSS, SQLi, auth/session issues, and misconfigurations. However, DAST generally cannot accurately inventory all embedded open-source libraries and versions; it may only infer some components via headers or fingerprints, which is incomplete for dependency vulnerability detection.
SCA (Software Composition Analysis) is specifically designed to detect and assess third-party and open-source components used in an application. It parses dependency manifests/lockfiles and/or build artifacts, identifies direct and transitive dependencies, and correlates versions with known vulnerabilities (CVEs) and advisories. This directly answers the requirement to identify the presence of vulnerable open-source libraries during a web application security review.
Core Concept: This question tests knowledge of how to detect vulnerable third-party components in a web application. Modern apps heavily depend on open-source packages (npm, Maven, PyPI, NuGet, RubyGems) and bundled client-side libraries (jQuery, React, etc.). The security practice focused on identifying these dependencies, their versions, licenses, and known CVEs is Software Composition Analysis (SCA).

Why the Answer is Correct: SCA is specifically designed to inventory open-source and third-party libraries and correlate them with vulnerability databases (e.g., NVD, vendor advisories, GitHub Security Advisories). It can detect direct and transitive dependencies, flag outdated or vulnerable versions, and often recommend fixed versions. In a security review, leveraging SCA is the most direct and reliable way to identify the presence of vulnerable open-source libraries because it analyzes the application’s composition (manifests, lock files, build artifacts, SBOMs) rather than only runtime behavior.

Key Features / Best Practices: SCA tools commonly:
- Parse dependency manifests and lockfiles (package-lock.json, pom.xml, requirements.txt, Gemfile.lock)
- Identify transitive dependencies and dependency confusion risks
- Generate/consume SBOMs (CycloneDX, SPDX)
- Map components to CVEs and provide remediation guidance
- Integrate into CI/CD (shift-left) and enforce policies (block builds on critical CVEs)

Common Misconceptions: DAST and IAST are often associated with “application security testing,” so they can seem plausible. However, DAST primarily finds runtime/external issues (injection, auth flaws) and typically cannot reliably enumerate all embedded libraries and versions. IAST can sometimes observe loaded components at runtime, but its primary purpose is instrumented runtime vulnerability detection, not comprehensive dependency inventory. “VM” (vulnerability management) is a program/process and toolset for tracking vulnerabilities across assets, not a purpose-built method for open-source dependency analysis.

Exam Tips: When you see wording like “open-source libraries,” “third-party components,” “dependencies,” “SBOM,” or “supply chain,” think SCA. When the question emphasizes black-box web testing, think DAST; when it emphasizes instrumented runtime testing during QA, think IAST. For PenTest+, SCA is the clearest match for identifying vulnerable libraries and frameworks used by an application.
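Conceptually, the core of an SCA check is matching a dependency inventory against advisory data. A toy sketch of that matching step; the library names and the advisory table are placeholders, and real tools query feeds such as OSV or the NVD and resolve transitive dependencies too:

```python
# Placeholder advisory table; real SCA tools pull from vulnerability
# feeds (OSV, NVD, GitHub Security Advisories) instead.
KNOWN_VULNS = {
    ("examplelib", "1.0.0"): ["CVE-XXXX-0001 (placeholder)"],
}

def check_manifest(requirements: str):
    """Match pinned name==version lines against the advisory table."""
    findings = []
    for line in requirements.strip().splitlines():
        if "==" not in line:
            continue  # unpinned entries need dependency resolution first
        name, version = line.strip().split("==")
        for advisory in KNOWN_VULNS.get((name.lower(), version), []):
            findings.append((name, version, advisory))
    return findings

manifest = "examplelib==1.0.0\nsafelib==2.4.1"
print(check_manifest(manifest))
```

The same idea generalizes from requirements.txt to lockfiles and SBOMs: produce a (name, version) inventory, then join it against advisories.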
A penetration tester finds that an application responds with the contents of the /etc/passwd file when the following payload is sent:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE foo [<!ENTITY foo SYSTEM "file:///etc/passwd"> ]>
<foo>&foo;</foo>
Which of the following should the tester recommend in the report to best prevent this type of vulnerability?
Reducing file permissions is not the best prevention for XXE. /etc/passwd is typically world-readable and changing its permissions can break system functionality. More importantly, XXE is a parser configuration flaw; even if file reads are restricted, attackers may still exploit XXE for SSRF (accessing internal web services/metadata endpoints) or other data exposures. Least privilege helps limit impact but does not remove the vulnerability.
Frequent log review is a detection/monitoring control, not a preventive control. It may help identify exploitation attempts after the fact, but it does not stop the XML parser from resolving external entities and disclosing data. For exam questions asking how to best prevent the vulnerability type, choose a control that removes the root cause (unsafe XML parsing) rather than an operational process.
Disabling external entities (and ideally DTD processing entirely) is the primary remediation for XXE. The attack relies on defining and expanding an external entity to read local files like /etc/passwd or to make outbound/internal requests. Secure parser configuration (disallow DOCTYPE, disable external general/parameter entities, enable secure processing) prevents entity expansion and stops the vulnerability at its source.
A WAF can sometimes detect and block obvious XXE payloads, but it is not the best prevention. XXE can be obfuscated or encoded to bypass signatures, and WAF rules vary widely. The underlying issue remains an insecure XML parser configuration. WAFs are best treated as defense-in-depth, not the primary fix for XXE.
Core Concept: This payload indicates an XML External Entity (XXE) vulnerability. XXE occurs when an XML parser is configured to resolve external entities (e.g., SYSTEM identifiers pointing to local files or remote URLs). An attacker defines an entity that references a sensitive resource (like file:///etc/passwd) and then triggers expansion (e.g., &foo;), causing the application to disclose file contents or perform server-side requests (SSRF).

Why the Answer is Correct: The best preventative recommendation is to disable the use of external entities (and DTD processing) in the XML parser. The observed behavior—returning /etc/passwd contents—strongly implies the parser is expanding an external entity. Preventing entity resolution directly addresses the root cause at the parser level, stopping file disclosure, SSRF, and related XXE impacts regardless of payload variations.

Key Features / Best Practices:
- Configure XML parsers to disallow DTDs and external entity resolution (often called “secure processing”).
- Use hardened libraries and safe defaults (e.g., disable DOCTYPE declarations, disallow external general/parameter entities).
- Prefer simpler data formats (JSON) when possible, or use streaming parsers with strict schemas.
- Apply defense-in-depth: input validation, least privilege for the service account, and egress controls to reduce SSRF blast radius.

This guidance aligns with OWASP XXE Prevention recommendations and common secure parser configuration practices.

Common Misconceptions:
- File permissions (chmod) can reduce impact but do not fix XXE; many sensitive files are world-readable by design (e.g., /etc/passwd). Also, XXE can target other resources (internal HTTP endpoints) even without file reads.
- Log review is detective, not preventive.
- A WAF may block known patterns but is bypassable (encoding, alternate entity tricks) and does not reliably eliminate the underlying parser weakness.

Exam Tips: When you see XML payloads with DOCTYPE/ENTITY and a response containing local file contents (especially /etc/passwd), think XXE. The primary remediation is to disable external entity resolution/DTDs in the XML parser. Secondary controls (WAF, least privilege, monitoring) are helpful but not the “best” single prevention in exam terms.
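One concrete way to apply the recommendation, using Python's standard-library SAX interface (dedicated hardened wrappers such as defusedxml are also commonly recommended for this purpose):

```python
import io
import xml.sax
from xml.sax.handler import (ContentHandler, feature_external_ges,
                             feature_external_pes)

class TagCollector(ContentHandler):
    """Trivial handler that records element names as they are parsed."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def startElement(self, name, attrs):
        self.tags.append(name)

def make_hardened_parser(handler):
    """SAX parser with external general/parameter entities disabled."""
    parser = xml.sax.make_parser()
    parser.setFeature(feature_external_ges, False)  # no SYSTEM entity fetches
    parser.setFeature(feature_external_pes, False)  # no parameter entities
    parser.setContentHandler(handler)
    return parser

handler = TagCollector()
parser = make_hardened_parser(handler)
parser.parse(io.StringIO("<doc><item/></doc>"))
print(handler.tags)  # ['doc', 'item']
```

With these features off, a SYSTEM entity like the one in the payload is simply never resolved, so the file read and SSRF vectors disappear at the parser level rather than being filtered downstream.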
A penetration tester is working on an engagement in which a main objective is to collect confidential information that could be used to exfiltrate data and perform a ransomware attack. During the engagement, the tester is able to obtain an internal foothold on the target network. Which of the following is the next task the tester should complete to accomplish the objective?
Initiating a social engineering campaign can help gain initial access or additional credentials, but it is generally less direct once the tester already has an internal foothold. Post-compromise, internal techniques (credential dumping, token impersonation, Kerberoasting, etc.) are typically faster and more reliable for escalating privileges and reaching sensitive systems needed for exfiltration and ransomware objectives.
Credential dumping is the key post-exploitation step that enables privilege escalation and lateral movement. By extracting hashes, cached credentials, or Kerberos tickets, the tester can access file shares, databases, backups, and administrative tooling. This directly supports both confidential data collection (exfiltration) and ransomware-style impact by enabling broad deployment and control over security/backup systems.
Compromising an endpoint is usually part of gaining the initial foothold. The prompt already indicates the tester has obtained internal foothold, implying at least one endpoint is compromised. The next task should build on that access to reach higher-value targets and broader control; credential dumping is a more appropriate immediate next step than simply compromising another endpoint.
Share enumeration (discovering accessible SMB/NFS shares and their contents) is valuable for locating sensitive data, but it is often constrained by current permissions. Without elevated or additional credentials, enumeration may not reveal or allow access to truly confidential repositories. In ransomware/exfiltration tradecraft, credential acquisition typically precedes deep discovery to maximize access and impact.
Core Concept: This question tests post-exploitation decision-making after gaining an internal foothold. Once inside, the attacker’s priority shifts from initial access to privilege escalation, credential access, and lateral movement—steps that enable access to sensitive data and the ability to deploy ransomware broadly.

Why the Answer is Correct: With an internal foothold, the most direct “next task” to support data exfiltration and ransomware objectives is credential dumping. Dumping credentials (e.g., local hashes, cached domain creds, Kerberos tickets) enables the tester to escalate privileges and pivot to high-value systems such as file servers, database servers, backup infrastructure, and domain controllers. Ransomware operators specifically seek privileged credentials to disable security controls, access backups, and push payloads via centralized management (GPO, SCCM, PSRemoting). Similarly, exfiltration requires access to repositories where confidential data resides, which is often gated by domain or service accounts.

Key Features / Best Practices: Credential access commonly targets LSASS memory, SAM/SECURITY hives, DPAPI secrets, browser credential stores, and Kerberos artifacts (tickets). In enterprise environments, obtaining domain admin or equivalent is a major milestone because it enables broad access and rapid propagation. From a PenTest+ perspective, credential dumping is a classic post-exploitation technique that bridges initial compromise to impactful objectives.

Common Misconceptions: “Compromise an endpoint” may sound like the next step, but the scenario already states an internal foothold (i.e., an endpoint is already compromised). “Initiate social engineering” is typically a pre-access or parallel access path, not the most efficient next step once internal access exists. “Share enumeration” is useful, but it is usually performed after or alongside credential acquisition; without better credentials, access to sensitive shares may be limited.

Exam Tips: When the objective includes exfiltration and ransomware, think: gain privileged credentials → expand access/lateral movement → locate/collect sensitive data → impact operations. After foothold, credential dumping is often the fastest route to privilege escalation and enterprise-wide reach.
A penetration testing team needs to determine whether it is possible to disrupt the wireless communications for PCs deployed in the client's offices. Which of the following techniques should the penetration tester leverage?
Port mirroring (SPAN) is a wired switch feature that copies traffic from one or more ports/VLANs to a monitoring port for packet capture. It helps with visibility and troubleshooting on Ethernet networks, not with assessing RF channel usage or the ability to disrupt Wi-Fi communications. It does not directly enable wireless disruption testing and typically requires switch access/authorization.
Sidecar scanning generally refers to using an auxiliary device or sensor (a “sidecar”) to perform scanning/monitoring without impacting the primary system, or to gain an alternate vantage point. While it can be used for reconnaissance, it is not the specific technique for determining which Wi-Fi channels are in use or evaluating channel-based disruption/jamming feasibility.
ARP poisoning (ARP spoofing) is a Layer 2 man-in-the-middle technique used to redirect traffic on a local network by corrupting ARP tables. It can disrupt communications by causing misrouting, but it requires the attacker to be on the same broadcast domain and typically already associated to the network. It does not directly address wireless RF disruption and is not the best fit for “disrupt wireless communications” in the channel/interference sense.
Channel scanning is used to enumerate and analyze active Wi-Fi channels, APs, and RF conditions (signal, noise, utilization). This is the correct technique to determine whether wireless communications can be disrupted because it identifies the specific channel(s) and channel widths in use and reveals congestion/overlap. It provides the necessary intelligence to assess and plan disruption tests against the WLAN.
Core Concept: This question is about assessing whether wireless communications can be disrupted (i.e., susceptibility to wireless interference/jamming or channel-related denial of service). In Wi-Fi (802.11), clients and access points communicate on specific RF channels within 2.4GHz/5GHz/6GHz bands. Understanding which channels are in use, how crowded they are, and whether clients can roam/fail over to other channels is foundational to evaluating disruption risk. Why the Answer is Correct: Channel scanning is the technique used to identify which wireless channels are active, which SSIDs/BSSIDs are present on each channel, signal strength, noise/interference, and channel utilization. A penetration tester would leverage channel scanning to map the RF environment and determine the most effective way to disrupt communications (e.g., targeting the specific channel the office WLAN uses, identifying overlapping APs, or finding that the network is using a narrow set of channels). Without knowing the channel plan and what is actually in use, you cannot reliably test disruption scenarios. Key Features / What to Look For: Channel scanning is commonly performed with tools like airodump-ng, Kismet, Wireshark in monitor mode, or vendor survey tools. Key data includes: channel number, channel width (20/40/80/160 MHz), band, RSSI, beacon rate, and whether multiple APs share the same channel. This supports testing resilience: whether APs auto-channel, whether clients roam to alternate APs, and whether the environment is already congested (which can cause “natural” disruption). Common Misconceptions: Testers sometimes jump straight to active attacks (e.g., deauth frames) without first enumerating channels and APs. Also, some confuse wired-layer disruption techniques (ARP poisoning) with wireless-layer disruption. Port mirroring is a switch feature for visibility, not RF disruption. 
Sidecar scanning is a reconnaissance approach (often via an adjacent/auxiliary device) but does not specifically address channel-based disruption.

Exam Tips: For wireless questions, first ask: "Do I need to discover SSIDs/APs/channels (recon), or perform an active attack?" If the goal is to determine the feasibility of disrupting wireless communications, you must first identify the channel(s) in use; channel scanning is the correct foundational technique. Remember: ARP poisoning targets Layer 2 on wired/wireless networks after association; channel scanning targets RF/802.11 environment discovery and supports DoS feasibility analysis.
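The channel-plan analysis above boils down to simple RF arithmetic. As a minimal sketch (the class and method names are illustrative, not from airodump-ng, Kismet, or any other tool), the 2.4 GHz center-frequency formula and a simplified overlap rule can be expressed as:

```java
// Illustrative 2.4 GHz channel-plan math used when assessing congestion/overlap.
// Class and method names are our own; this is not part of any scanning tool.
public class ChannelPlan {
    // 2.4 GHz: channel n (1-13) is centered at 2407 + 5*n MHz; channel 14 is 2484 MHz.
    public static int centerFrequencyMhz(int channel) {
        if (channel < 1 || channel > 14) {
            throw new IllegalArgumentException("2.4 GHz channels are 1-14");
        }
        return channel == 14 ? 2484 : 2407 + 5 * channel;
    }

    // Simplified rule for 20 MHz-wide channels: centers closer than 25 MHz
    // are treated as overlapping, which reproduces the classic 1/6/11 plan.
    public static boolean overlaps(int channelA, int channelB) {
        return Math.abs(centerFrequencyMhz(channelA) - centerFrequencyMhz(channelB)) < 25;
    }
}
```

Under this simplified rule, channels 1, 6, and 11 come out non-overlapping (centers 25 MHz apart), while any closer pair is flagged as contending for the same spectrum, which is exactly the congestion signal a tester looks for in scan output.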
A penetration tester completed a report for a new client. Prior to sharing the report with the client, which of the following should the penetration tester request to complete a review?
A generative AI assistant is generally not appropriate for reviewing a client penetration test report unless the engagement explicitly authorizes its use and the tool is approved for sensitive data. Uploading findings, evidence, or client identifiers can violate NDAs and data handling requirements. While AI can help with wording, the exam typically emphasizes confidentiality and controlled handling of client information.
The customer’s designated contact is usually the recipient for final delivery, coordination, and follow-up discussions (e.g., retest planning, remediation validation). However, requesting the client to review before internal QA is risky: it can expose unvetted findings, cause confusion if corrections are needed, and may disclose sensitive details prematurely. Internal review should occur first.
A cybersecurity industry peer implies an external party. Even if technically skilled, involving someone outside the authorized engagement team can breach confidentiality and contractual obligations (NDA, rules of engagement). External review is only acceptable if the contract explicitly permits it and the peer is formally included/authorized. For the exam, this is generally incorrect.
A team member is the best choice because internal peer review/QA is a standard step before releasing a penetration test report. A teammate can validate technical accuracy, ensure evidence supports conclusions, confirm scope alignment, check severity ratings, and verify that sensitive data is properly redacted. This reduces reputational and legal risk and improves report quality before client delivery.
Core Concept: This question tests report handling and quality assurance within the rules of engagement. Before delivering a penetration test report, testers should perform an internal peer/technical review to ensure accuracy, completeness, consistency with evidence, and compliance with contractual and legal constraints (e.g., NDA, data handling, scope limitations). This is part of engagement management and professional reporting practices.

Why the Answer is Correct: A team member is the appropriate party to request a review from prior to sharing the report with the client. Internal review reduces the risk of factual errors (wrong IPs/hosts, incorrect CVSS scoring, inaccurate reproduction steps), inconsistent findings, missing screenshots/logs, or recommendations that don't align with the client's environment. It also helps ensure sensitive data is minimized/redacted and that the report matches the agreed scope and objectives. This aligns with common best practice in consulting firms: a second set of eyes from within the engagement team (or internal QA) validates technical content and executive messaging before external release.

Key Features / Best Practices:
- Internal QA/peer review: validate evidence, exploit chain descriptions, and impact statements.
- Consistency checks: scope boundaries, timestamps, asset identifiers, and severity methodology.
- Sanitization: remove unnecessary secrets (tokens, passwords), PII, and excessive internal details.
- Professionalism: grammar, clarity, and actionable remediation guidance.
- Chain of custody and confidentiality: keep drafts within authorized internal personnel until approved for release.

Common Misconceptions:
- Involving the customer's designated contact can seem logical, but that role is typically for final delivery, deconfliction, or factual validation after internal QA, not for pre-release internal review. Premature sharing can leak sensitive findings or create confusion if the report still contains errors.
- Using an industry peer (external) may sound like "peer review," but it can violate NDAs and confidentiality unless explicitly authorized.
- Using a generative AI assistant may help with grammar, but it introduces data leakage and confidentiality risks unless the tool is approved, isolated, and contractually permitted.

Exam Tips: For PenTest+ questions about reporting, default to internal review first (team/QA), then controlled client delivery via the agreed point of contact. Also remember confidentiality: avoid sharing client data with external parties or unapproved tools.
Which of the following tasks would ensure the key outputs from a penetration test are not lost as part of the cleanup and restoration activities?
Preserving artifacts means collecting and securely storing the evidence and outputs produced during the test (logs, screenshots, tool output, packet captures, recovered files, hashes, notes). This directly prevents losing proof and details when cleanup removes tools, temporary files, persistence, or modified settings. It supports accurate reporting, reproducibility, and defensibility of findings.
Reverting configuration changes is a restoration task to return systems to their pre-test state (undo firewall rules, remove test accounts, revert settings). While necessary, it can actually eliminate evidence or the vulnerable condition if done before evidence collection. It focuses on environment stability, not on retaining penetration test outputs.
Keeping chain of custody documents who handled evidence, when, and how, preserving integrity and traceability (often for legal or regulatory needs). It helps prove evidence wasn’t altered, but it does not inherently prevent artifacts from being deleted during cleanup. You still must preserve/collect the artifacts first for chain of custody to matter.
Exporting credential data preserves a specific subset of outputs (captured passwords/hashes/tokens), but it is not comprehensive and may violate scope or data-handling requirements if done improperly. It also doesn’t address other critical outputs like exploit proof, logs, screenshots, and command history that are commonly lost during cleanup.
Core Concept: This question tests cleanup and restoration activities in a penetration test and how to prevent the loss of key deliverables. During a test, "outputs" include evidence and artifacts such as logs, screenshots, packet captures, tool output, recovered files, hashes, timelines, and notes that support findings and enable reporting, validation, and potential legal defensibility.

Why the Answer is Correct: Preserving artifacts ensures that the evidence and results generated during the engagement are retained before cleanup actions remove them. Cleanup often involves deleting dropped tools, removing persistence mechanisms, clearing temporary files, reverting test accounts, and restoring services. If artifacts are not preserved first, the tester may accidentally delete proof of exploitation, lose reproduction steps, or remove indicators needed to substantiate risk and impact. Artifact preservation is therefore the task that directly prevents key penetration test outputs from being lost.

Key Features / Best Practices: Artifact preservation typically includes exporting tool results to a secure repository; collecting and hashing evidence (for integrity); maintaining organized timestamps and context; storing screenshots and console transcripts; capturing relevant logs (host, application, SIEM) before they rotate; and documenting the exact commands, payloads, and configurations used. Best practice is to follow a defined evidence-handling process (often aligned with incident response/forensics practices) and to store artifacts in an access-controlled location with backups.

Common Misconceptions: Reverting configuration changes is important for restoring the client environment, but it can remove the very conditions that demonstrate the vulnerability. Keeping chain of custody is about evidentiary integrity and traceability, not about ensuring outputs aren't deleted during cleanup.
Exporting credential data may preserve one type of output, but it is narrow and can be inappropriate or prohibited by the rules of engagement; it does not address the broader set of test artifacts.

Exam Tips: When you see "cleanup/restoration" paired with "don't lose outputs/evidence," think "preserve artifacts" (collect, export, store, hash, document) before you revert or remove anything. Also remember that engagement management includes handling deliverables, evidence retention, and documentation practices that support the final report and any retesting.
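The "collect and hash evidence" step can be sketched in a few lines. This is an illustrative example only (the class name and manifest format are ours, not a standard), assuming each artifact's SHA-256 digest is recorded alongside a timestamp and label so integrity can be verified later:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative artifact-preservation helper: hash each piece of evidence so
// its integrity can be proven after cleanup. Names/format are hypothetical.
public class EvidenceManifest {
    // Returns the lowercase hex SHA-256 digest of the artifact's bytes.
    public static String sha256Hex(byte[] artifact) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest(artifact)) {
                hex.append(String.format("%02x", b & 0xff));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            // SHA-256 is mandated by the Java security spec, so this is fatal.
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }

    // One manifest line: UTC timestamp, digest, and a label for the artifact.
    public static String manifestLine(String timestampUtc, String label, byte[] artifact) {
        return timestampUtc + "  " + sha256Hex(artifact) + "  " + label;
    }
}
```

Storing such a manifest (itself hashed and access-controlled) alongside the raw artifacts is what later makes chain-of-custody documentation meaningful.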
A penetration tester is conducting reconnaissance for an upcoming assessment of a large corporate client. The client authorized spear phishing in the rules of engagement. Which of the following should the tester do first when developing the phishing campaign?
Shoulder surfing is a physical social-engineering technique used to observe credentials or sensitive information directly from a victim’s screen/keyboard. It is not typically the first step in developing a spear-phishing campaign because it requires physical proximity, increases risk of detection, and is more aligned with onsite assessments. Phishing campaigns usually begin with passive OSINT to build pretexts and target lists.
Recon-ng is a reconnaissance framework that automates OSINT collection (domains, contacts, hosts, breaches, etc.). While useful during recon, it’s not the best “first” action when developing a spear-phishing campaign because you first need to identify targets and context. Social media OSINT often provides the initial targeting and pretext details that then feed tools like Recon-ng.
Social media is the best first step because spear phishing depends on personalization. Platforms like LinkedIn and other public sources reveal roles, relationships, current initiatives, writing tone, and likely business workflows. This enables realistic pretexts and accurate target selection with minimal footprint. Starting here aligns with passive reconnaissance best practices and improves campaign effectiveness and safety.
Password dumps are collections of credentials typically obtained from breaches, dark web sources, or post-exploitation. They are not a first step for creating a phishing campaign and may be out of scope unless explicitly authorized. Even when allowed, they are usually used later for credential stuffing or validation, not for initial pretext development.
Core concept: This question tests OSINT-driven reconnaissance as the first step in building an effective spear-phishing campaign. Spear phishing is targeted social engineering, so the campaign must start with gathering accurate, contextual information about specific employees, roles, relationships, and corporate processes.

Why the answer is correct: Social media is typically the best first source for spear-phishing development because it provides high-signal, low-cost, legally accessible intelligence (OSINT) about targets: job titles, reporting lines, projects, travel, vendors, writing style, interests, and recent events. This information enables pretext creation (e.g., "HR benefits update," "invoice from vendor," "Teams document share," "conference agenda") and improves credibility, timing, and personalization, which are the key success factors in spear phishing. Starting with social media also helps identify high-value targets (executives, finance, HR, IT help desk) and likely trust paths (assistants, peers, external partners).

Key features / best practices: In PenTest+ terms, begin with passive recon and OSINT before active techniques. Use platforms such as LinkedIn, X, Facebook, and GitHub, plus company press releases, to map the org structure and technology hints (email formats, tools mentioned, cloud services). Correlate findings with domain/WHOIS data, job postings, and breach data only if authorized. Ensure the rules of engagement cover phishing scope, allowed lures, data handling, and reporting requirements. Maintain operational security and minimize collection of unnecessary personal data.

Common misconceptions: Tools like Recon-ng are excellent, but they are a means to collect OSINT, not the "first thing" conceptually. Shoulder surfing is physical, high-risk, and not a typical starting point for a phishing campaign. Password dumps are post-compromise artifacts or breach-derived data; using them may be out of scope or unethical unless explicitly authorized and handled carefully.
Exam tips: For spear phishing questions, think “targeted OSINT first.” Start with passive information gathering (social media/OSINT) to craft a believable pretext, then move to tooling (Recon-ng, Maltego), infrastructure setup, and finally delivery and tracking—always within the ROE and legal constraints.
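As one concrete example of turning social-media OSINT into a target list: suppose recon surfaced employee names and one published contact address that revealed the organization's email pattern. A minimal sketch (the pattern tokens, names, and example.com domain are all hypothetical) of generating candidate addresses might look like this:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Hypothetical helper: derive candidate addresses from harvested names once
// the organization's address pattern is known. All names/domains are made up.
public class TargetList {
    // Supported pattern tokens: {first}, {last}, {f} (first initial).
    // {f} is replaced last so it cannot clobber the "{f}" inside "{first}".
    public static String candidate(String first, String last, String pattern, String domain) {
        String local = pattern
                .replace("{first}", first.toLowerCase(Locale.ROOT))
                .replace("{last}", last.toLowerCase(Locale.ROOT))
                .replace("{f}", first.substring(0, 1).toLowerCase(Locale.ROOT));
        return local + "@" + domain;
    }

    // Each entry in names is a {firstName, lastName} pair.
    public static List<String> candidates(List<String[]> names, String pattern, String domain) {
        List<String> out = new ArrayList<>();
        for (String[] n : names) {
            out.add(candidate(n[0], n[1], pattern, domain));
        }
        return out;
    }
}
```

Candidate lists like this are only ever used within the authorized scope; verification (e.g., against mail-server responses) is an active step that must itself be covered by the ROE.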
A penetration tester reviews a SAST vulnerability scan report. The following lines of code have been reported as vulnerable:
Issue 40 of 126
Language: Java
Severity: Medium
Call:
try {
    // ...
} catch (SomeException e) {
    e.printStackTrace();
}
Which of the following is the best method to remediate this vulnerability?
Correct. Implement a logging framework and replace e.printStackTrace() with structured, configurable logging (e.g., logger.error("message", e)). This prevents uncontrolled disclosure of stack traces, supports log levels and secure destinations, and enables different verbosity by environment (dev vs prod). It also aligns with secure error-handling practices: generic user messages, detailed info only in protected logs.
Incorrect. Removing the reported lines (or the entire try/catch) is not a sound remediation because exceptions still need to be handled. Eliminating error handling can cause crashes, undefined behavior, or loss of auditability. The goal is not to remove exception handling, but to handle it securely by controlling what is exposed and where diagnostic details are recorded.
Incorrect. A secure coding-awareness program is a good long-term preventive control, but it does not directly remediate the vulnerable code identified by the SAST report. Exam questions asking for the “best method to remediate” typically expect a concrete code/configuration change that fixes the issue now, not an organizational initiative alone.
Incorrect. This is not a false positive in most contexts: printStackTrace() commonly leaks internal implementation details and can expose sensitive information in logs or responses. SAST tools flag it because it is a recognized insecure error-handling pattern. The appropriate action is to implement controlled logging and safe error responses, not to dismiss the finding.
Core concept: This finding is about information disclosure and insecure error handling. Calling e.printStackTrace() prints internal exception details (class names, file names, line numbers, stack frames, sometimes sensitive values) to standard error or application logs in an uncontrolled way. In web apps, this output can also end up in HTTP responses or centralized log stores, aiding attackers with reconnaissance and exploit development.

Why the answer is correct: The best remediation is to replace printStackTrace() with a proper logging framework and a secure logging pattern. A logging framework (e.g., SLF4J with Logback/Log4j2, or java.util.logging) allows consistent severity levels, controlled destinations, formatting, and redaction. It also supports environment-based configuration (dev vs. prod), so detailed stack traces can be available to developers in non-production while production logs remain minimal and safe. The secure approach is typically: show users a generic, high-level message, record technical details only in protected logs, and avoid leaking sensitive data.

Key features / best practices: Use parameterized logging (avoid string concatenation), set appropriate log levels (WARN/ERROR), restrict log access, and ensure logs don't include secrets (tokens, passwords, PII). In production, return generic error messages to clients and correlate them with an error ID. Many organizations align this with OWASP guidance (e.g., Error Handling and Logging best practices) and secure SDLC policies.

Common misconceptions: It may seem easiest to delete the code (option B), but exceptions must still be handled; removing the catch block or the logging can hide failures and reduce incident visibility. Training (option C) is valuable but does not remediate the specific vulnerable code. Marking the finding as a false positive (option D) is incorrect because printStackTrace() is a well-known anti-pattern flagged by SAST tools for valid reasons.
Exam tips: On PenTest+ questions, stack traces and verbose errors usually map to “information disclosure.” The best fix is controlled logging plus safe user-facing error handling, not simply suppressing errors. When you see printStackTrace(), think: replace with a logging framework and ensure production-safe configuration.
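A minimal sketch of the remediated pattern, using java.util.logging to stay dependency-free (SLF4J/Logback would look very similar); the class name, method, and error-ID scheme are illustrative assumptions, not part of the original finding:

```java
import java.util.UUID;
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative remediation of the printStackTrace() finding: details go to a
// controlled logger, callers get only a generic message plus a reference ID.
public class OrderService {
    private static final Logger LOG = Logger.getLogger(OrderService.class.getName());

    // Instead of e.printStackTrace(): log the full exception (including stack
    // trace) at ERROR level to the protected log, keyed by a correlation ID.
    public static String handleFailure(Exception e) {
        String errorId = UUID.randomUUID().toString();
        LOG.log(Level.SEVERE, "Order processing failed, errorId=" + errorId, e);
        return "An internal error occurred. Reference: " + errorId;
    }
}
```

The caller (or HTTP response) sees only the generic message and reference ID; the stack trace lands wherever the logger is configured to write, which in production should be an access-controlled destination.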



