
A penetration tester needs to test a very large number of URLs for public access. Given the following code snippet:
1 import requests
2 import pathlib
3
4 for url in pathlib.Path("urls.txt").read_text().split("\n"):
5     response = requests.get(url)
6     if response.status == 401:
7         print("URL accessible")
Which of the following changes is required?
Correct. The logic on line 6 is wrong for two reasons: `requests` uses `response.status_code` (not `response.status`) to expose the HTTP code, and 401 means “Unauthorized,” i.e., authentication is required, so the URL is not publicly accessible. The condition should be updated to use `status_code` and to test for accessible outcomes (e.g., 200) rather than 401.
Incorrect. `requests.get(url)` is a valid method call for issuing an HTTP GET request and is appropriate for checking whether a URL is reachable and what status code it returns. While you might enhance it with headers, timeouts, or session reuse for scale, the method itself is not the required change to fix the core correctness issue in the snippet.
Incorrect. `import requests` is correct and required to use the requests library. The bug is not caused by a missing or incorrect import. If anything, additional imports (e.g., for concurrency) could improve performance for a very large list, but the question asks what change is required for correctness, which is in the status check logic.
Incorrect. The delimiter in `split("\n")` is reasonable for a newline-separated file and is not the primary issue. A more robust approach might use `splitlines()` to handle different newline formats and avoid trailing empty entries, but the script’s fundamental problem is mis-checking the HTTP status (wrong attribute and wrong interpretation of 401).
Core concept: This question tests understanding of HTTP response handling during reconnaissance/enumeration. When checking whether URLs are “publicly accessible,” the key signal is the HTTP status code returned by the server (e.g., 200 OK vs. 401 Unauthorized/403 Forbidden). In Python’s requests library, status codes are accessed via `response.status_code`, not `response.status`.

Why the answer is correct: Line 6 uses `if response.status == 401:`. Two issues exist: (1) `requests.Response` does not expose a `status` attribute for HTTP codes; the correct attribute is `status_code`. (2) A 401 status means the resource is NOT publicly accessible; it indicates authentication is required. If the tester’s goal is to identify public access, the condition should check for success codes (typically 200) or, more generally, “not requiring auth” (i.e., not 401/403). Therefore, the required change is to the condition on line 6.

Key features / best practices:
- Use `response.status_code` to evaluate HTTP results.
- Interpret codes correctly: 200/204/3xx often indicate reachable content; 401 indicates authentication required; 403 indicates forbidden (authorization failure or blocked); 404 indicates not found.
- For large URL lists, consider timeouts, exception handling, and concurrency (e.g., requests with timeouts, retry logic, or async), but those are enhancements rather than the specific required fix.

Common misconceptions:
- Confusing 401 with “accessible.” In security testing, 401 is evidence of access control being enforced, not public access.
- Assuming `response.status` is a valid requests property because other frameworks use similar naming.

Exam tips: For PenTest+ questions involving scripting and enumeration, focus on (1) correct library usage (attribute/method names) and (2) correct security interpretation of HTTP status codes. If the question says “public access,” think “200 OK” (or at least “not 401/403”) and ensure you’re checking the correct response field.
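The corrected logic can be sketched as follows. The `fetch` parameter is an illustrative indirection (not part of the original snippet) so the status check can be shown without live requests; the commented usage shows the intended `requests` call.

```python
def is_public(status_code):
    # 200 means the server returned the resource without demanding
    # authentication; 401 ("Unauthorized") means it is NOT public.
    return status_code == 200

def check_public_urls(urls, fetch):
    # fetch is injected so the logic can be exercised without network I/O;
    # blank lines (from trailing newlines in the file) are skipped
    return [url for url in urls if url.strip() and is_public(fetch(url))]

# Intended live usage (the requests library, as in the snippet):
#   import pathlib, requests
#   urls = pathlib.Path("urls.txt").read_text().splitlines()
#   public = check_public_urls(urls, lambda u: requests.get(u, timeout=5).status_code)
```

Note the use of `splitlines()` in the usage comment: it avoids the empty trailing entry that `split("\n")` produces.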
While performing an internal assessment, a tester uses the following command:

crackmapexec smb 192.168.1.0/24 -u user.txt -p Summer123@

Which of the following is the main purpose of the command?
Incorrect. Pass-the-hash over SMB would use NTLM hashes rather than a plaintext password. In CME, PtH is typically performed by supplying hashes (e.g., --hashes or similar) instead of -p with a cleartext string. Since the command uses -p Summer123@, it is attempting standard authentication with a password, not replaying a captured hash.
Incorrect. Common protocol scanning refers to discovering open ports/services across hosts (e.g., using nmap or masscan). CrackMapExec is primarily for SMB/AD credential testing, enumeration, and post-exploitation workflows, not broad multi-protocol scanning. This command specifically targets SMB and attempts logons; it is not enumerating multiple protocols or doing port discovery.
Correct. The command attempts SMB authentication across 192.168.1.0/24 using many usernames from user.txt with a single password (Summer123@). That is the defining pattern of password spraying: testing a commonly used password against a large set of accounts (often across many hosts) to find valid credentials while reducing the chance of account lockouts compared to brute forcing.
Incorrect. Remote command execution with CME requires additional options such as -x (run a command) or -X (run PowerShell), and typically successful credentials with sufficient privileges. This command only supplies targets and credentials for authentication attempts; it does not include any execution flags or payloads, so it is not executing commands on endpoints.
Core concept: This question tests understanding of CrackMapExec (CME) usage against SMB in an internal network. CME is a post-exploitation/attack framework commonly used for credential validation, password spraying, enumeration, and (when specified) remote execution over SMB/WinRM.

Why the answer is correct: The command is:

crackmapexec smb 192.168.1.0/24 -u user.txt -p Summer123@

This targets SMB on every host in the /24 subnet and attempts authentication using a list of usernames (user.txt) with a single password (Summer123@). That pattern—many users, one password across many hosts—is classic password spraying. The goal is to find accounts that reuse a common/weak password without triggering lockouts as quickly as a brute-force attack would (which is many passwords against one account).

Key features and details:
- Module: smb indicates CME will attempt SMB authentication (typically to ports 445/139 depending on configuration).
- Target scope: 192.168.1.0/24 means multiple endpoints are tested.
- Credential strategy: -u user.txt supplies multiple usernames; -p Summer123@ supplies one candidate password.
- Output: CME will report successful logons, local admin status, SMB signing, OS info, domain/workgroup, etc., but the primary intent here is credential validation at scale.

Common misconceptions:
- Pass-the-hash (PtH) requires providing an NTLM hash (often with --hashes or -H, depending on tool/version) rather than a plaintext password. Here, -p is clearly a plaintext password.
- “Protocol scanning” is more like nmap/masscan; CME is not a general port scanner, even though it touches SMB.
- Remote command execution in CME typically requires flags like -x (execute command) or -X (PowerShell), plus valid creds and often admin rights. None are present.

Exam tips:
- Identify spraying vs brute force: spraying = one/few passwords across many accounts; brute force = many passwords against one account.
- For CME, remember: -u/-p for plaintext auth attempts; hashes require explicit hash options; execution requires -x/-X; enumeration often uses --shares, --sessions, --users, etc.
- When you see a subnet plus a username list and a single password, default to “password spraying.”
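The spraying pattern the command implements can be sketched in Python. `try_smb_login` is a hypothetical placeholder for an SMB authentication attempt (in practice a library such as Impacket, or CME itself, performs that step):

```python
def spray(hosts, usernames, password, try_smb_login):
    """One password, many users, many hosts: the spraying pattern."""
    hits = []
    for host in hosts:
        for user in usernames:
            # one attempt per account keeps lockout counters low,
            # unlike brute force (many passwords against one account)
            if try_smb_login(host, user, password):
                hits.append((host, user))
    return hits

# Usage shape mirroring the command in the question:
#   from ipaddress import ip_network
#   hosts = [str(h) for h in ip_network("192.168.1.0/24").hosts()]
#   users = open("user.txt").read().splitlines()
#   spray(hosts, users, "Summer123@", try_smb_login)
```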
A penetration tester established an initial compromise on a host. The tester wants to pivot to other targets and set up an appropriate relay. The tester needs to enumerate through the compromised host as a relay from the tester's machine. Which of the following commands should the tester use to do this task from the tester's host?
Incorrect. This command runs nmap locally against <target_cidr> and then pipes nmap’s textual output into nc to port 22 on the compromised host. That does not cause the scan packets to traverse the compromised host, so it is not pivoting. It also attempts to send arbitrary text to an SSH port, which is not a meaningful “relay” for enumeration traffic.
Incorrect. This is a classic named-pipe netcat relay pattern, but as written it’s malformed (mknod syntax is wrong) and conceptually it creates a bidirectional relay for a single TCP flow, not a scalable pivot for enumerating an entire CIDR. It also mixes listening on 8000 with connecting to <target_cidr> on port 80, which doesn’t implement a general proxy for tools like nmap.
Incorrect. This attempts to chain netcat listeners/connectors and then scan 127.0.0.1:8000, but it doesn’t actually establish a pivot through the compromised host. There’s no step that places a proxy on the compromised host or forwards traffic through it. Additionally, nmap syntax is off (scanning “127.0.0.1 8000” is not a correct way to specify a port without -p).
Correct. proxychains forces supported applications (including nmap connect scans) to send their TCP connections through a configured proxy (commonly a SOCKS proxy established via the compromised host). This is exactly how a tester enumerates internal networks from their own machine while using the compromised host as a relay/pivot point. Pair it with nmap -sT for best compatibility.
Core concept: This question tests pivoting via a compromised host using a relay/proxy so the tester can enumerate internal targets “through” that host. In practice, this is commonly implemented as a SOCKS proxy (e.g., SSH dynamic port forwarding, chisel, Metasploit SOCKS, etc.) and then using a tool like proxychains to force scanning traffic through that proxy.

Why the answer is correct: Option D uses proxychains to run nmap against <target_cidr>. Proxychains intercepts outbound connect() calls and routes them through a configured proxy (typically SOCKS4/5) that terminates on the compromised host (or on a pivot agent reachable via the compromised host). This matches the requirement: enumerate from the tester’s machine while using the compromised host as the relay. It’s a standard workflow: establish pivot (SOCKS) -> configure /etc/proxychains.conf -> run tools (nmap, curl, smbclient) through proxychains.

Key features / best practices:
- Works best with TCP connect scans (-sT) because proxychains can proxy full TCP connections; raw packet scans (e.g., -sS SYN scan) generally won’t work through SOCKS.
- Requires a pre-established proxy path (e.g., ssh -D 1080 user@compromised_host, or a pivot tool) and correct proxychains configuration.
- For nmap via proxychains, keep expectations realistic: service detection and some NSE scripts may be limited/slow; consider targeted port lists and timing.

Common misconceptions:
- Trying to “pipe” nmap output into netcat (Option A) does not forward the scan traffic; it only forwards text output.
- Netcat listener/pipe tricks (Options B/C) can create simple relays for single TCP streams, but they are not a general-purpose pivot for scanning an entire CIDR from the attacker host.

Exam tips:
- When you see “pivot,” “relay,” “enumerate through compromised host,” think SOCKS proxy + proxychains.
- Remember: proxychains + nmap typically implies -sT (connect scan), not -sS.
- Distinguish between forwarding scan results (text) vs forwarding scan traffic (network connections).
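The -sT distinction matters because proxychains intercepts full TCP connect() calls. A minimal connect-scan sketch (essentially what nmap -sT does, one full connection attempt per port) illustrates the kind of traffic proxychains can relay, while raw SYN packets cannot be:

```python
import socket

def connect_scan(host, ports, timeout=2.0):
    # A full three-way handshake per port: this is the connect() call
    # that proxychains can hook and route through a SOCKS pivot.
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # connection refused, filtered, or unreachable
    return open_ports
```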
A penetration tester is unable to identify the Wi-Fi SSID on a client's cell phone. Which of the following techniques would be most effective to troubleshoot this issue?
Sidecar scanning is not a standard wireless reconnaissance or troubleshooting term in the PenTest+ context. It does not describe a recognized method for identifying whether an SSID is being broadcast on a particular RF channel. Because the problem is specifically about Wi-Fi network visibility, the tester needs a technique tied to 802.11 channel and beacon analysis. That makes this option a distractor rather than a practical troubleshooting approach.
Channel scanning is correct because Wi-Fi access points transmit management traffic on specific channels, and a client must scan those channels to discover the network. By checking each channel, a tester can determine whether the AP is active, what band it is using, and whether the phone supports that channel range. This is especially useful for troubleshooting issues such as unsupported 5 GHz channels, DFS channels, or simple channel misconfiguration. It is the most direct and technically relevant method for diagnosing why an SSID is not visible on a mobile device.
Stealth scanning focuses on reducing the likelihood of detection during reconnaissance rather than improving the ability to find a wireless network. Even if a scan is performed quietly or passively, the core troubleshooting need is still to inspect the relevant wireless channels for beacon or probe-response traffic. The question asks for the most effective troubleshooting technique, not the least detectable one. Therefore, stealth scanning does not best address the SSID visibility problem.
Static analysis scanning is used to inspect source code, binaries, or applications without executing them. It is relevant to software security testing, not to wireless signal discovery or 802.11 management frame analysis. A missing SSID on a phone is a radio-frequency and wireless configuration issue, not a code-analysis problem. For that reason, static analysis scanning is clearly unrelated to the scenario.
Core concept: This question tests wireless reconnaissance and troubleshooting fundamentals related to SSID discovery. Wi-Fi networks are found by listening for beacon frames or probe responses on the channel where the access point is operating. If a cell phone cannot identify a Wi-Fi network, the most effective troubleshooting step is to determine which channel the AP is using and whether the client device can scan that band/channel.

Why correct: Channel scanning is the best answer because it systematically checks wireless channels for 802.11 management traffic. This allows the tester to verify whether the access point is present, whether it is broadcasting on 2.4 GHz or 5 GHz, and whether the phone supports that frequency and channel. It is the most direct way to troubleshoot why an SSID is not appearing on a client device.

Key features: Channel scanning helps identify channel mismatch, unsupported bands, weak signal, and interference. It can also detect the presence of an AP even if the SSID is hidden, because the BSSID and channel are still observable in management traffic. This makes it useful for determining whether the issue is with the AP configuration or the client device.

Common misconceptions: Stealth scanning is about avoiding detection, not improving wireless troubleshooting. Static analysis scanning applies to software or code review, not RF discovery. Sidecar scanning is not a standard wireless troubleshooting technique in this context.

Exam tips: When a question asks why a device cannot see a Wi-Fi network, think about wireless channels, bands, beacon frames, and client compatibility. The most practical troubleshooting method is to scan channels and confirm where the AP is operating.
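The channel-to-frequency arithmetic behind channel scanning can be sketched directly. These mappings follow the 802.11 channelization scheme: 2.4 GHz channels 1-13 sit at 2407 + 5 x channel MHz, channel 14 is a special case at 2484 MHz, and 5 GHz channels sit at 5000 + 5 x channel MHz.

```python
def channel_to_mhz(channel, band="2.4"):
    """Center frequency in MHz for an 802.11 channel number."""
    if band == "2.4":
        if channel == 14:
            return 2484  # Japan-only channel, offset from the 5 MHz pattern
        if 1 <= channel <= 13:
            return 2407 + 5 * channel
    elif band == "5":
        return 5000 + 5 * channel  # e.g., channel 36 -> 5180 MHz
    raise ValueError(f"unknown channel {channel} for band {band}")
```

This is why a phone limited to the 2.4 GHz band will never see an SSID broadcast only on, say, channel 36: the beacon frames are 2.7 GHz away from anything it scans.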
A client recently hired a penetration testing firm to conduct an assessment of their consumer-facing web application. Several days into the assessment, the client's networking team observes a substantial increase in DNS traffic. Which of the following would most likely explain the increase in DNS traffic?
Covert data exfiltration via DNS tunneling commonly causes large increases in DNS queries. Data is encoded into subdomains (or TXT queries) and sent to an attacker-controlled domain, producing many unique lookups and often long query names. This is especially plausible “several days into” an assessment, when testers may simulate post-compromise actions and attempt stealthy outbound channels that bypass strict HTTP egress controls.
URL spidering (crawling) maps a web application by following links and discovering endpoints. While it may trigger some DNS lookups initially (e.g., for new hostnames), most crawling stays within the same domain and primarily increases HTTP/HTTPS requests, not sustained high DNS volume. A major DNS spike is less consistent with spidering than with DNS tunneling behavior.
HTML scraping extracts content from web pages and APIs. Like spidering, it mainly drives HTTP/HTTPS traffic and application-layer load. DNS usage typically remains stable because the scraper repeatedly accesses the same hostnames after initial resolution. Scraping might increase bandwidth and web server logs, but it is unlikely to produce a substantial, ongoing increase in DNS traffic.
A DoS attack can increase DNS traffic if the DNS infrastructure is targeted (e.g., query floods, amplification). However, the scenario is a consumer-facing web application assessment and the observation is a DNS spike without mention of service outage. In PenTest+ context, a sustained DNS increase during an assessment more commonly indicates DNS tunneling/exfiltration than an overt DoS, which is often out of scope.
Core concept: DNS is not just for name resolution; it can be abused as a transport channel. Attackers (or testers simulating attackers) can generate unusually high DNS query volume by encoding data into subdomain labels and sending it to an attacker-controlled authoritative DNS server (DNS tunneling). This is commonly used for covert command-and-control (C2) and/or data exfiltration because DNS is often allowed through firewalls and monitored less rigorously than HTTP/S.

Why the answer is correct: A substantial increase in DNS traffic several days into a web application assessment strongly suggests activity beyond basic browsing—specifically, a covert channel. In DNS exfiltration, each chunk of data is base32/base64/hex encoded into a series of queries like <encoded-data>.exfil.example.com. The victim’s resolver forwards these queries outward, creating a high volume of DNS requests and responses. The “several days in” timing aligns with post-compromise behavior: after gaining a foothold (e.g., via web app vulnerability leading to server-side execution), the tester may attempt to move data out in a stealthy way.

Key features and best practices: Indicators include many unique subdomains, long label lengths near DNS limits, high NXDOMAIN rates (if using non-existent labels), and queries to unusual domains. Defensive controls include DNS logging/analytics, egress filtering to approved resolvers, blocking known tunneling patterns, limiting TXT record usage, response rate limiting (RRL), and DLP/IDS signatures for tunneling tools. From a testing perspective, this should be explicitly authorized in the rules of engagement because it can resemble real attacker behavior.

Common misconceptions: Recon activities like spidering and scraping primarily increase HTTP(S) traffic, not DNS, because they reuse the same hostnames and rely on existing connections. A DoS could target DNS, but the scenario is a web application assessment and the question asks what would most likely explain increased DNS traffic—covert exfiltration is a classic reason for sustained, abnormal DNS volume.

Exam tips: When you see “unexpected spike in DNS,” think DNS tunneling/exfiltration/C2 first. Look for wording implying stealth, persistence, or post-exploitation (e.g., “several days into the assessment”). If the question instead described service unavailability or resolver saturation, then DNS-focused DoS would be more plausible.
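The encoding step described above can be sketched as follows. The domain is a placeholder, and the base32/chunking choices reflect one common convention among tunneling tools rather than a fixed standard (base32 is popular because DNS names are case-insensitive, which breaks base64):

```python
import base64

MAX_LABEL = 63  # DNS limits each label to 63 bytes

def encode_queries(data: bytes, domain="exfil.example.com"):
    """Split data into DNS-sized labels under an attacker-controlled domain."""
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    chunks = [encoded[i:i + MAX_LABEL] for i in range(0, len(encoded), MAX_LABEL)]
    # one query per chunk; a sequence-number label keeps ordering on reassembly
    return [f"{i}.{chunk}.{domain}" for i, chunk in enumerate(chunks)]
```

Even this tiny sketch shows the defender-visible signature: many unique, long, high-entropy subdomains under one unusual domain.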
A penetration tester performs a service enumeration process and receives the following result after scanning a server using the Nmap tool:

PORT     STATE    SERVICE
22/tcp   open     ssh
25/tcp   filtered smtp
111/tcp  open     rpcbind
2049/tcp open     nfs

Based on the output, which of the following services provides the best target for launching an attack?
Database is not supported by the scan results. Common database ports (e.g., 1433 MSSQL, 3306 MySQL, 5432 PostgreSQL, 1521 Oracle) are not shown as open. While databases can be excellent targets, you should base your selection on observed services/ports. Here, the evidence points to SSH, RPC/NFS, and filtered SMTP—not a database service.
Remote access maps to SSH on 22/tcp. SSH can be a target if weak passwords, exposed keys, outdated daemons, or misconfigurations exist, but it typically requires authentication and is often well-hardened. Without additional findings (version vulnerabilities, credential reuse, default creds), SSH is usually less immediately fruitful than an exposed file-sharing service like NFS.
Email maps to SMTP on 25/tcp, but Nmap reports it as filtered. Filtered indicates a firewall/ACL is blocking probes or limiting connectivity, which reduces immediate attack options. Even when open, SMTP attacks often depend on specific server misconfigurations (open relay, vulnerable MTA versions, user enumeration). In this output, SMTP is not the best initial target.
File sharing maps to NFS on 2049/tcp (with rpcbind on 111/tcp). This is often a high-value target because misconfigured exports can allow unauthenticated access to sensitive data or writable shares. Attackers can enumerate exports, mount them, and potentially obtain credentials, keys, configs, or escalate privileges (e.g., via no_root_squash or writable paths). This makes file sharing the best target here.
Core concept: Interpreting Nmap service enumeration results to identify the most promising attack surface. The scan shows SSH (22/tcp open), SMTP (25/tcp filtered), rpcbind (111/tcp open), and NFS (2049/tcp open). The question asks which service provides the best target for launching an attack, which typically means the service most likely to yield exploitable misconfigurations, weak access controls, or direct data exposure.

Why the answer is correct: NFS (Network File System) on 2049/tcp, especially when paired with rpcbind on 111/tcp, is a classic high-value target because it often exposes file shares that may be misconfigured (e.g., exported to broad networks, weak host-based restrictions, or permissive export options). If an export is accessible, an attacker can enumerate exports (showmount -e), mount them, and potentially read sensitive files (configs, keys, backups) or even write to directories. Misconfigurations like no_root_squash can allow remote root on the mounted share to act as root on the server’s filesystem for that export, enabling straightforward privilege escalation or persistence (e.g., dropping SSH authorized_keys, modifying scripts, or planting cron jobs if writable paths are exposed).

Key features and best practices: NFS security relies heavily on correct export configuration (/etc/exports), network scoping, and proper options (root_squash, ro where possible, restricting by IP/subnet, and using NFSv4 with stronger controls/Kerberos where appropriate). rpcbind/portmapper (111) facilitates discovery of RPC services and can aid attackers in mapping NFS-related RPC programs and versions.

Common misconceptions: SSH is “remote access,” but it is often hardened and requires credentials; without a known vulnerability or weak credentials, it may not be the best initial target. SMTP is filtered, meaning a firewall/ACL is likely blocking or limiting access, reducing immediate attackability. “Database” is not indicated by any open port here.

Exam tips: Prioritize services that commonly expose data or allow unauthenticated/weakly authenticated access. In Nmap output, combinations like rpcbind + nfs are strong indicators to test file-sharing exposure first (enumerate exports, attempt mounts, check permissions, and look for sensitive files or writable paths). Also note state: “open” is generally more actionable than “filtered.”
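The triage reasoning can be sketched against the scan output from the question. The priority ordering below is an assumption for illustration, reflecting the discussion above (exposed file sharing first, filtered services last) rather than any official ranking:

```python
# Plain-text Nmap table from the question
NMAP_OUTPUT = """\
PORT     STATE    SERVICE
22/tcp   open     ssh
25/tcp   filtered smtp
111/tcp  open     rpcbind
2049/tcp open     nfs
"""

PRIORITY = ["nfs", "rpcbind", "ssh", "smtp"]  # assumed triage order

def triage(output):
    # skip the header row, keep only open services, sort by triage priority
    rows = [line.split() for line in output.splitlines()[1:]]
    open_services = [svc for port, state, svc in rows if state == "open"]
    return sorted(open_services, key=PRIORITY.index)
```

Note that the filtered SMTP service drops out entirely: "open" is actionable, "filtered" usually is not.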
Which of the following explains the reason a tester would opt to use DREAD over PTES during the planning phase of a penetration test?
Incorrect. PTES can be used for web application tests just as well as for network or internal tests; it describes phases and deliverables, not a web-specific approach. DREAD is also not inherently web-specific. The deciding factor between DREAD and PTES is whether you need threat/risk scoring (DREAD) versus a full testing methodology (PTES).
Incorrect. Mobile application assessments may use specialized techniques (reverse engineering, device security checks, API testing), but that does not make DREAD preferable to PTES. PTES remains a general engagement framework. DREAD would only be chosen here if the goal is to score and prioritize mobile threats, not simply because the target is mobile.
Incorrect. Thick client testing involves different attack surfaces (local storage, IPC, update mechanisms, client-server protocols), but PTES still applies as the overarching process. DREAD is not a thick-client methodology; it is a threat rating model. Platform type does not determine DREAD vs PTES; the need for threat prioritization does.
Correct. DREAD is specifically used to create and prioritize a threat model by assigning scores to threats across five categories. In the planning phase, this helps determine which threats deserve the most testing effort and communicates risk to stakeholders. PTES includes a threat modeling phase, but it is a process standard, not a scoring model—so DREAD is the better choice when the task is threat modeling and prioritization.
Core concept: DREAD and PTES serve different purposes in the planning phase. PTES (Penetration Testing Execution Standard) is a methodology/framework describing phases and activities of a penetration test (pre-engagement, intelligence gathering, threat modeling, vulnerability analysis, exploitation, post-exploitation, reporting). DREAD is a risk-rating model used to score and prioritize threats (Damage potential, Reproducibility, Exploitability, Affected users, Discoverability).

Why the answer is correct: A tester would choose DREAD over PTES specifically when the task is to create or support a threat model by quantifying and prioritizing identified threats. During planning, stakeholders often need a defensible way to rank risks and decide what to test first, what depth is required, and where to allocate time. DREAD provides a structured scoring approach that can translate technical findings into business-relevant prioritization. PTES, by contrast, is not a scoring system; it is a process standard that tells you what phases to perform, not how to numerically rank threats.

Key features/best practices: DREAD is commonly used alongside threat modeling approaches (e.g., STRIDE for categorizing threats, then DREAD for scoring). In planning, you can use DREAD to prioritize test cases, define success criteria, and justify scope focus (e.g., high DREAD items get deeper testing). PTES is best used to ensure engagement governance: rules of engagement, scope, communications, legal authorization, and a consistent workflow through reporting.

Common misconceptions: Options about web, mobile, or thick client testing can seem plausible because different targets may require different tools and techniques. However, PTES is target-agnostic and can be applied to web, mobile, internal, external, and application testing. DREAD is also target-agnostic; it is chosen based on the need for threat prioritization, not the platform under test.

Exam tips: If the question contrasts a “model” (DREAD) with a “standard/methodology” (PTES), look for wording about risk scoring, prioritization, or threat modeling. Choose PTES when the question is about the overall penetration testing lifecycle and engagement structure; choose DREAD when the question is about ranking threats by impact/likelihood-style factors.
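The classic DREAD calculation (rate each threat 0-10 in the five categories, then average) can be sketched as follows. The example threats and scores in the test are invented for illustration:

```python
CATEGORIES = ("damage", "reproducibility", "exploitability",
              "affected_users", "discoverability")

def dread_score(ratings: dict) -> float:
    # classic DREAD: the mean of the five category ratings (0-10 each)
    return sum(ratings[c] for c in CATEGORIES) / len(CATEGORIES)

def prioritize(threats: dict) -> list:
    # highest-scoring threats first: these get the deepest testing effort
    return sorted(threats, key=lambda name: dread_score(threats[name]), reverse=True)
```

This is exactly the planning-phase artifact the question describes: a ranked list that justifies where testing time goes.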
A penetration tester finds that an application responds with the contents of the /etc/passwd file when the following payload is sent:
<?xml version="1.0"?>
<!DOCTYPE data [<!ENTITY foo SYSTEM "file:///etc/passwd">]>
<data>&foo;</data>
Which of the following should the tester recommend in the report to best prevent this type of vulnerability?
Reducing file permissions is not the best prevention for XXE. /etc/passwd is typically world-readable and changing its permissions can break system functionality. More importantly, XXE is a parser configuration flaw; even if file reads are restricted, attackers may still exploit XXE for SSRF (accessing internal web services/metadata endpoints) or other data exposures. Least privilege helps limit impact but does not remove the vulnerability.
Frequent log review is a detection/monitoring control, not a preventive control. It may help identify exploitation attempts after the fact, but it does not stop the XML parser from resolving external entities and disclosing data. For exam questions asking how to best prevent the vulnerability type, choose a control that removes the root cause (unsafe XML parsing) rather than an operational process.
Disabling external entities (and ideally DTD processing entirely) is the primary remediation for XXE. The attack relies on defining and expanding an external entity to read local files like /etc/passwd or to make outbound/internal requests. Secure parser configuration (disallow DOCTYPE, disable external general/parameter entities, enable secure processing) prevents entity expansion and stops the vulnerability at its source.
A WAF can sometimes detect and block obvious XXE payloads, but it is not the best prevention. XXE can be obfuscated or encoded to bypass signatures, and WAF rules vary widely. The underlying issue remains an insecure XML parser configuration. WAFs are best treated as defense-in-depth, not the primary fix for XXE.
Core Concept: This payload indicates an XML External Entity (XXE) vulnerability. XXE occurs when an XML parser is configured to resolve external entities (e.g., SYSTEM identifiers pointing to local files or remote URLs). An attacker defines an entity that references a sensitive resource (like file:///etc/passwd) and then triggers expansion (e.g., &foo;), causing the application to disclose file contents or perform server-side requests (SSRF). Why the Answer is Correct: The best preventative recommendation is to disable the use of external entities (and DTD processing) in the XML parser. The observed behavior—returning /etc/passwd contents—strongly implies the parser is expanding an external entity. Preventing entity resolution directly addresses the root cause at the parser level, stopping file disclosure, SSRF, and related XXE impacts regardless of payload variations. Key Features / Best Practices: - Configure XML parsers to disallow DTDs and external entity resolution (often called “secure processing”). - Use hardened libraries and safe defaults (e.g., disable DOCTYPE declarations, disallow external general/parameter entities). - Prefer simpler data formats (JSON) when possible, or use streaming parsers with strict schemas. - Apply defense-in-depth: input validation, least privilege for the service account, and egress controls to reduce SSRF blast radius. Guidance aligns with OWASP XXE Prevention recommendations and common secure parser configuration practices. Common Misconceptions: - File permissions (chmod) can reduce impact but does not fix XXE; many sensitive files are world-readable by design (e.g., /etc/passwd). Also, XXE can target other resources (internal HTTP endpoints) even without file reads. - Log review is detective, not preventive. - A WAF may block known patterns but is bypassable (encoding, alternate entity tricks) and does not reliably eliminate the underlying parser weakness. 
Exam Tips: When you see XML payloads with DOCTYPE/ENTITY and a response containing local file contents (especially /etc/passwd), think XXE. The primary remediation is to disable external entity resolution/DTDs in the XML parser. Secondary controls (WAF, least privilege, monitoring) are helpful but not the “best” single prevention in exam terms.
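The parser-level fix can be illustrated with Python's standard library. The following is a minimal sketch using stdlib `xml.sax`; the payload string is illustrative, and in production a hardened library such as defusedxml is generally preferred over hand-configured parsers:

```python
import io
import xml.sax
from xml.sax.handler import feature_external_ges, feature_external_pes

# Classic XXE probe: an external entity pointing at a local file
PAYLOAD = """<?xml version="1.0"?>
<!DOCTYPE foo [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<foo>&xxe;</foo>"""

class TextCollector(xml.sax.ContentHandler):
    """Collects all character data seen by the parser."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def characters(self, content):
        self.chunks.append(content)

handler = TextCollector()
parser = xml.sax.make_parser()
# Explicitly disable resolution of external general and parameter entities
parser.setFeature(feature_external_ges, False)
parser.setFeature(feature_external_pes, False)
parser.setContentHandler(handler)
parser.parse(io.StringIO(PAYLOAD))

leaked = "".join(handler.chunks)
print(repr(leaked))  # the external entity is skipped, so no file contents appear
```

With the features disabled, the `&xxe;` reference is simply skipped rather than expanded, so the document parses without disclosing `/etc/passwd`. The same principle applies in other ecosystems (e.g., enabling FEATURE_SECURE_PROCESSING and disallowing DOCTYPE declarations in Java parsers).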
A penetration testing team needs to determine whether it is possible to disrupt the wireless communications for PCs deployed in the client's offices. Which of the following techniques should the penetration tester leverage?
Port mirroring (SPAN) is a wired switch feature that copies traffic from one or more ports/VLANs to a monitoring port for packet capture. It helps with visibility and troubleshooting on Ethernet networks, not with assessing RF channel usage or the ability to disrupt Wi-Fi communications. It does not directly enable wireless disruption testing and typically requires switch access/authorization.
Sidecar scanning generally refers to using an auxiliary device or sensor (a “sidecar”) to perform scanning/monitoring without impacting the primary system, or to gain an alternate vantage point. While it can be used for reconnaissance, it is not the specific technique for determining which Wi-Fi channels are in use or evaluating channel-based disruption/jamming feasibility.
ARP poisoning (ARP spoofing) is a Layer 2 man-in-the-middle technique used to redirect traffic on a local network by corrupting ARP tables. It can disrupt communications by causing misrouting, but it requires the attacker to be on the same broadcast domain and typically already associated to the network. It does not directly address wireless RF disruption and is not the best fit for “disrupt wireless communications” in the channel/interference sense.
Channel scanning is used to enumerate and analyze active Wi-Fi channels, APs, and RF conditions (signal, noise, utilization). This is the correct technique to determine whether wireless communications can be disrupted because it identifies the specific channel(s) and channel widths in use and reveals congestion/overlap. It provides the necessary intelligence to assess and plan disruption tests against the WLAN.
Core Concept: This question is about assessing whether wireless communications can be disrupted (i.e., susceptibility to wireless interference/jamming or channel-related denial of service). In Wi-Fi (802.11), clients and access points communicate on specific RF channels within the 2.4 GHz, 5 GHz, and 6 GHz bands. Understanding which channels are in use, how crowded they are, and whether clients can roam/fail over to other channels is foundational to evaluating disruption risk.

Why the Answer is Correct: Channel scanning is the technique used to identify which wireless channels are active, which SSIDs/BSSIDs are present on each channel, signal strength, noise/interference, and channel utilization. A penetration tester would leverage channel scanning to map the RF environment and determine the most effective way to disrupt communications (e.g., targeting the specific channel the office WLAN uses, identifying overlapping APs, or finding that the network is using a narrow set of channels). Without knowing the channel plan and what is actually in use, you cannot reliably test disruption scenarios.

Key Features / What to Look For: Channel scanning is commonly performed with tools like airodump-ng, Kismet, Wireshark in monitor mode, or vendor survey tools. Key data includes: channel number, channel width (20/40/80/160 MHz), band, RSSI, beacon rate, and whether multiple APs share the same channel. This supports testing resilience: whether APs auto-channel, whether clients roam to alternate APs, and whether the environment is already congested (which can cause "natural" disruption).

Common Misconceptions: Testers sometimes jump straight to active attacks (e.g., deauth frames) without first enumerating channels and APs. Also, some confuse wired-layer disruption techniques (ARP poisoning) with wireless-layer disruption. Port mirroring is a switch feature for visibility, not RF disruption.
Sidecar scanning is a reconnaissance approach (often via an adjacent/auxiliary device) but does not specifically address channel-based disruption.

Exam Tips: For wireless questions, first think: "Do I need to discover SSIDs/APs/channels (recon) or perform an active attack?" If the goal is to determine the feasibility of disrupting wireless communications, you must first identify the channel(s) in use: channel scanning is the correct foundational technique. Remember: ARP poisoning targets Layer 2 on wired/wireless networks after association; channel scanning targets RF/802.11 environment discovery and supports DoS feasibility analysis.
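Once scan output is captured, tallying channel usage takes only a few lines. Here is a minimal sketch that parses text in the format produced by `iwlist`; the BSSIDs, SSIDs, and channels below are sample data rather than a live capture, and a real engagement would gather this with airodump-ng or Kismet in monitor mode:

```python
import re
from collections import Counter

# Sample output shaped like `iwlist wlan0 scan` (illustrative data only)
SCAN_OUTPUT = """\
Cell 01 - Address: AA:BB:CC:DD:EE:01
          Channel:6
          ESSID:"CorpWiFi"
Cell 02 - Address: AA:BB:CC:DD:EE:02
          Channel:6
          ESSID:"CorpWiFi"
Cell 03 - Address: AA:BB:CC:DD:EE:03
          Channel:11
          ESSID:"GuestWiFi"
"""

def channel_usage(scan_text: str) -> Counter:
    """Tally how many BSSIDs advertise on each channel."""
    return Counter(int(ch) for ch in re.findall(r"Channel:(\d+)", scan_text))

usage = channel_usage(SCAN_OUTPUT)
# The busiest channel is the prime candidate when planning disruption tests
print(usage.most_common(1))  # -> [(6, 2)]
```

A narrow channel plan (here, most APs on channel 6) tells the tester where disruption attempts would have the greatest effect and whether clients have alternate channels to fail over to.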
A penetration tester is conducting reconnaissance for an upcoming assessment of a large corporate client. The client authorized spear phishing in the rules of engagement. Which of the following should the tester do first when developing the phishing campaign?
Shoulder surfing is a physical social-engineering technique used to observe credentials or sensitive information directly from a victim’s screen/keyboard. It is not typically the first step in developing a spear-phishing campaign because it requires physical proximity, increases risk of detection, and is more aligned with onsite assessments. Phishing campaigns usually begin with passive OSINT to build pretexts and target lists.
Recon-ng is a reconnaissance framework that automates OSINT collection (domains, contacts, hosts, breaches, etc.). While useful during recon, it’s not the best “first” action when developing a spear-phishing campaign because you first need to identify targets and context. Social media OSINT often provides the initial targeting and pretext details that then feed tools like Recon-ng.
Social media is the best first step because spear phishing depends on personalization. Platforms like LinkedIn and other public sources reveal roles, relationships, current initiatives, writing tone, and likely business workflows. This enables realistic pretexts and accurate target selection with minimal footprint. Starting here aligns with passive reconnaissance best practices and improves campaign effectiveness and safety.
Password dumps are collections of credentials typically obtained from breaches, dark web sources, or post-exploitation. They are not a first step for creating a phishing campaign and may be out of scope unless explicitly authorized. Even when allowed, they are usually used later for credential stuffing or validation, not for initial pretext development.
Core concept: This question tests OSINT-driven reconnaissance as the first step in building an effective spear-phishing campaign. Spear phishing is targeted social engineering, so the campaign must start with gathering accurate, contextual information about specific employees, roles, relationships, and corporate processes.

Why the answer is correct: Social media is typically the best first source for spear-phishing development because it provides high-signal, low-cost, legally accessible intelligence (OSINT) about targets: job titles, reporting lines, projects, travel, vendors, writing style, interests, and recent events. This information enables pretext creation (e.g., "HR benefits update," "invoice from vendor," "Teams document share," "conference agenda") and improves credibility, timing, and personalization, the key success factors in spear phishing. Starting with social media also helps identify high-value targets (executives, finance, HR, IT help desk) and likely trust paths (assistants, peers, external partners).

Key features / best practices: In PenTest+ terms, begin with passive recon and OSINT before active techniques. Use platforms like LinkedIn, X, Facebook, GitHub, and company press releases to map org structure and technology hints (email formats, tools mentioned, cloud services). Correlate findings with domain/WHOIS, job postings, and breach data only if authorized. Ensure the rules of engagement cover phishing scope, allowed lures, data handling, and reporting requirements. Maintain operational security and minimize collection of unnecessary personal data.

Common misconceptions: Tools like Recon-ng are excellent, but they are a means to collect OSINT, not the "first thing" conceptually. Shoulder surfing is physical, high-risk, and not a typical starting point for a phishing campaign. Password dumps are post-compromise artifacts or breach-derived data; using them may be out of scope or unethical unless explicitly authorized and handled carefully.
Exam tips: For spear phishing questions, think “targeted OSINT first.” Start with passive information gathering (social media/OSINT) to craft a believable pretext, then move to tooling (Recon-ng, Maltego), infrastructure setup, and finally delivery and tracking—always within the ROE and legal constraints.
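As a small illustration of turning social media OSINT into a phishing target list, testers commonly permute gathered names into candidate addresses once the company's email format is hinted (e.g., by WHOIS records or published contacts). This sketch uses hypothetical employee names and a placeholder domain:

```python
# Hypothetical names harvested from public profiles (illustrative data)
EMPLOYEES = [("Jordan", "Rivera"), ("Casey", "Nguyen")]
DOMAIN = "example.com"  # placeholder target domain

def candidate_emails(first: str, last: str, domain: str) -> list[str]:
    """Generate common corporate address formats for one person."""
    f, l = first.lower(), last.lower()
    locals_parts = [
        f"{f}.{l}",   # jordan.rivera
        f"{f}{l}",    # jordanrivera
        f"{f[0]}{l}", # jrivera
        f"{f}.{l[0]}",# jordan.r
        f"{l}.{f}",   # rivera.jordan
    ]
    return [f"{p}@{domain}" for p in locals_parts]

for first, last in EMPLOYEES:
    print(candidate_emails(first, last, DOMAIN))
```

In practice the candidate list would be validated against the format actually observed in public sources before any lures are sent, and all of this must stay within the authorized rules of engagement.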

