AI-Powered
Every SY0-701: CompTIA Security+ answer is cross-validated across three leading AI models to ensure the highest accuracy. Detailed per-option explanations and in-depth question analysis are provided.
Which of the following threat actors is the most likely to be hired by a foreign government to attack critical systems located in other countries?
Hacktivists are driven primarily by ideology or political/social causes and often seek publicity (defacements, DDoS, data leaks) to advance a message. While they can target government or critical services, they are not typically “hired” by governments because their motivations are not primarily financial and their actions can be unpredictable and attention-seeking, which is undesirable for covert state objectives.
A whistleblower is usually an insider (employee/contractor) who exposes unethical or illegal activity to authorities or the public. The intent is disclosure, not conducting attacks on foreign critical systems. While whistleblowers may leak sensitive data, they are not generally contracted by a foreign government to perform offensive cyber operations, making this a poor fit for the scenario.
Organized crime groups are financially motivated, capable, and often operate as service providers (malware-as-a-service, ransomware affiliates, initial access brokers). Because they can be paid and directed, they are plausible proxies for foreign governments seeking deniability. Their resources and operational maturity make them more likely to successfully target critical systems than less capable actors listed here.
Unskilled attackers (script kiddies) rely on publicly available tools and exploits with limited understanding. They typically pursue easy targets and opportunistic attacks rather than complex, strategic operations against critical infrastructure. A foreign government would be unlikely to hire an unskilled actor for high-stakes attacks because the probability of failure, exposure, and poor operational security is high.
Core concept: This question tests your ability to map attacker “types” to their typical motivations, resources, and relationships. In Security+, foreign governments commonly use either their own intelligence/military cyber units (APT) or proxies/contractors to create deniability. The key phrase is “hired by a foreign government” to attack “critical systems” in other countries—this implies a paid, task-oriented relationship and potentially high capability.

Why the answer is correct: Organized crime is the most likely of the listed options to be hired as a proxy. Mature criminal groups have established infrastructure (botnets, malware development, exploit acquisition, initial access brokerage), operational security, and the ability to execute targeted intrusions or disruptive campaigns for payment. Governments may contract or indirectly task criminal groups to conduct operations that align with state interests while maintaining plausible deniability. Critical infrastructure attacks often require specialized skills, persistence, and coordination—traits more consistent with organized crime than with unskilled attackers, and more “for-hire” than ideologically driven hacktivists.

Key features / best-practice context: From a defender perspective, state-sponsored or state-aligned operations often resemble APT tradecraft: long dwell time, stealthy lateral movement, credential theft, living-off-the-land techniques, and targeting of OT/ICS environments. Best practices include strong segmentation between IT/OT, MFA and PAM, continuous monitoring (SIEM/SOAR), threat intelligence integration, incident response playbooks, and supply-chain risk management. Frameworks like MITRE ATT&CK and NIST guidance help map observed behaviors to likely actor profiles.
Common misconceptions: Hacktivists can attack critical systems, but their motivation is typically ideological and public-facing rather than “hired.” Whistleblowers are insiders exposing wrongdoing, not external attackers contracted for offensive operations. Unskilled attackers (script kiddies) generally lack the capability and discipline for high-impact, cross-border critical infrastructure targeting.

Exam tips: When you see “hired,” think financially motivated actors (organized crime, mercenaries/contractors). When you see “foreign government,” think APT/state-sponsored—if APT isn’t an option, choose the closest proxy with resources and professionalism: organized crime.
Want to work through every question on the go?
Download Cloud Pass for free — practice exams, study progress tracking, and more.
Which of the following is used to add extra complexity before using a one-way data transformation algorithm?
Key stretching increases the computational cost of deriving a key or password hash by using many iterations and/or memory-hard functions (e.g., PBKDF2 iterations, bcrypt cost factor, scrypt/Argon2). It helps resist brute-force attacks by slowing each guess. However, it is not primarily about adding random extra data to the input; that description more directly matches salting.
Data masking obfuscates sensitive data (e.g., showing only last 4 digits of a credit card, replacing characters with Xs) to reduce exposure in logs, UIs, or non-production environments. It does not involve one-way transformations for password storage and does not add randomness prior to hashing. It is a confidentiality/control measure, not a hashing-hardening technique.
Steganography hides data within other data (e.g., embedding a message in an image or audio file) to conceal the existence of the information. It is not related to password hashing or one-way transformations. Steganography does not add complexity to a hash input; it is a covert communication/data-hiding technique.
Salting adds a unique, random value to each password before applying a one-way hash. This ensures identical passwords do not produce identical hashes and makes precomputed attacks like rainbow tables impractical because attackers cannot reuse a single table across many hashes. The salt is stored with the hash and is typically not secret; its purpose is uniqueness and anti-precomputation.
Core Concept: This question tests password hashing protections used with one-way data transformation algorithms (cryptographic hash functions such as SHA-256, bcrypt, scrypt, Argon2). Because hashes are one-way, systems store hashes instead of plaintext passwords. Attackers can still crack hashes using precomputed tables (rainbow tables) or high-speed brute force, so additional measures are applied before hashing.

Why the Answer is Correct: Salting is the practice of adding a unique, random value (the salt) to each password before hashing. The salt is stored alongside the resulting hash. This “extra complexity” ensures that identical passwords produce different hashes across users and across systems, and it defeats rainbow tables because the attacker would need a separate precomputed table for every possible salt value. Salting also prevents attackers from quickly spotting reused passwords by comparing hashes.

Key Features / Best Practices:
1) Use a unique, cryptographically random salt per password (not a single global salt).
2) Store the salt with the hash; it is not required to be secret.
3) Combine salting with a slow, password-specific hashing/KDF algorithm (bcrypt, scrypt, Argon2) and appropriate work factors.
4) Consider adding a secret “pepper” (stored separately, e.g., in an HSM) for additional protection, but pepper is not the same as salt.

Common Misconceptions: Key stretching sounds similar because it also increases cracking difficulty, but it refers to repeatedly applying a hash/KDF to increase computational cost, not specifically “adding extra complexity” (random data) to the input. Data masking and steganography are unrelated to password hashing.

Exam Tips: If the question mentions “random value added to a password before hashing” or “defeats rainbow tables,” choose salting. If it emphasizes “making hashing slower by increasing iterations/work factor,” that points to key stretching (often implemented by PBKDF2/bcrypt/scrypt/Argon2).
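Both techniques discussed above can be sketched with Python's standard library: the random salt provides per-password uniqueness, while the PBKDF2 iteration count provides key stretching. This is a minimal illustration, not a production configuration — the iteration count is kept low here, and real systems should prefer a tuned work factor or a memory-hard KDF such as Argon2.

```python
import hashlib
import hmac
import os

# Low iteration count for illustration only; production systems should use a
# much higher work factor (or bcrypt/scrypt/Argon2).
ITERATIONS = 50_000

def hash_password(password, salt=None):
    """Return (salt, hash): a unique random salt plus the stretched hash."""
    if salt is None:
        salt = os.urandom(16)  # unique, cryptographically random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected):
    """Recompute with the stored salt and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(digest, expected)

# Identical passwords yield different hashes because each gets a fresh salt --
# exactly the property that defeats precomputed rainbow tables.
s1, h1 = hash_password("Tr0ub4dor&3")
s2, h2 = hash_password("Tr0ub4dor&3")
```

Note that the salt is stored alongside the hash and is not secret; its job is uniqueness, not confidentiality.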
A data administrator is configuring authentication for a SaaS application and would like to reduce the number of credentials employees need to maintain. The company prefers to use domain credentials to access new SaaS applications. Which of the following methods would allow this functionality?
SSO (Single Sign-On) enables users to authenticate with one set of enterprise credentials (often AD/domain-backed) and access multiple applications, including SaaS. It is commonly implemented via federation using SAML or OIDC, where an IdP authenticates the user and issues a trusted token/assertion to the SaaS provider. This directly reduces credential sprawl and centralizes access control and policy enforcement.
LEAP (Lightweight Extensible Authentication Protocol) is a Cisco-proprietary EAP method historically used for 802.1X wireless authentication. It is considered weak compared to modern methods and is not used to provide SaaS application access using domain credentials. LEAP addresses network access authentication, not federated identity or application SSO for cloud services.
MFA (Multi-Factor Authentication) requires two or more authentication factors (something you know/have/are) to strengthen login security. While MFA can be integrated with SSO at the IdP, MFA alone does not reduce the number of credentials employees maintain for multiple SaaS apps. Users could still have separate usernames/passwords per SaaS application.
PEAP (Protected Extensible Authentication Protocol) is an EAP method used mainly for secure network authentication (e.g., Wi-Fi with 802.1X), typically tunneling credentials inside TLS. Like LEAP, it is focused on network access control rather than SaaS application authentication consolidation. It does not provide the federated trust model needed for domain-credential-based access to SaaS apps.
Core concept: This question tests identity and access management (IAM) for SaaS, specifically reducing credential sprawl by using enterprise (domain) credentials to access cloud applications. The key technology is Single Sign-On (SSO), typically implemented via federation using standards such as SAML 2.0, OpenID Connect (OIDC), or OAuth 2.0, with an Identity Provider (IdP) like Active Directory Federation Services (AD FS), Azure AD/Entra ID, Okta, Ping, etc.

Why the answer is correct: SSO allows employees to authenticate once using their corporate identity (often backed by Active Directory/domain credentials) and then access multiple SaaS applications without creating separate usernames/passwords for each service. In a common SaaS scenario, the SaaS app (Service Provider) trusts assertions/tokens issued by the company’s IdP. This meets both requirements: fewer credentials to maintain and preference for domain credentials.

Key features and best practices: SSO is commonly deployed with federated identity, where authentication occurs at the IdP and the SaaS relies on signed tokens (SAML assertions or OIDC ID tokens). Best practices include enforcing strong authentication at the IdP (often adding MFA there), using conditional access policies, least privilege via role/attribute-based access (RBAC/ABAC), proper certificate/key management for signing, and lifecycle automation (provisioning/deprovisioning) via SCIM to ensure access is removed when employees leave.

Common misconceptions: MFA improves security but does not inherently reduce the number of credentials—users could still have separate SaaS passwords. LEAP and PEAP are EAP methods used primarily for network authentication (e.g., Wi-Fi/802.1X) and are not the standard mechanism for SaaS app login consolidation.

Exam tips: When you see “reduce number of credentials,” “use domain credentials,” and “SaaS,” think SSO/federation (SAML/OIDC) with an IdP.
If the question emphasized “additional factor,” that would point to MFA. If it emphasized “wireless authentication/EAP,” that would point to PEAP/LEAP.
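The federation trust model described above can be reduced to a toy sketch: the IdP authenticates the user and issues a signed assertion, and the SaaS service provider accepts that signature instead of keeping its own password database. This is an assumption-laden illustration — real SAML/OIDC federation uses the IdP's asymmetric signing certificate (or a JWKS endpoint), not a shared HMAC secret; HMAC is used here only to keep the sketch self-contained.

```python
import base64
import hashlib
import hmac
import json

# Demo-only shared secret; real federation trusts the IdP's public signing key.
IDP_SIGNING_KEY = b"demo-only-shared-secret"

def idp_issue_assertion(username):
    """IdP: authenticate the user against the domain, then issue a signed token."""
    payload = json.dumps({"sub": username}).encode()
    sig = hmac.new(IDP_SIGNING_KEY, payload, hashlib.sha256).digest()
    return base64.b64encode(payload).decode() + "." + base64.b64encode(sig).decode()

def saas_validate(token):
    """SaaS app (service provider): verify the IdP's signature; no local password."""
    b64_payload, b64_sig = token.split(".")
    payload = base64.b64decode(b64_payload)
    expected = hmac.new(IDP_SIGNING_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.b64decode(b64_sig)):
        return None  # untrusted or tampered assertion is rejected
    return json.loads(payload)["sub"]

token = idp_issue_assertion("alice@corp.example")
```

The point of the sketch is structural: credential verification happens once, at the IdP, and every relying application only checks a signature.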
Which of the following scenarios describes a possible business email compromise attack?
This matches a common BEC/CEO fraud scenario: an attacker impersonates an executive (often via display-name spoofing or a look-alike domain) and pressures an employee to buy gift cards or send money. The focus is business process manipulation and financial loss, typically without malware. Verifying the sender address and using out-of-band confirmation are key mitigations.
This describes ransomware delivered via an email attachment: opening the attachment leads to file encryption and a ransom demand to regain access. While email may be the initial vector, the defining characteristic is malware-based extortion, not impersonation-driven financial fraud. This is not typically categorized as BEC on the exam.
This is a phishing/social engineering attempt to obtain credentials (requesting a cloud administrator login). It could be part of a broader BEC campaign, but the scenario itself is primarily credential harvesting and violates security policy (no legitimate HR director should request passwords). BEC questions usually emphasize payment diversion, invoice fraud, or executive impersonation for funds.
This is classic credential phishing: a link leads to a fake portal designed to steal usernames and passwords. It’s a common precursor to account takeover, which can later enable BEC, but the described activity is specifically phishing via a spoofed login page rather than the business-payment fraud focus of BEC.
Core concept: Business Email Compromise (BEC) is a targeted social engineering attack that abuses email trust to induce an employee to perform an unauthorized business action (often wire transfers, gift cards, invoice changes, or sensitive data release). Unlike broad phishing, BEC commonly uses impersonation (spoofed display name, look-alike domains, or compromised executive/vendor mailboxes) and focuses on process manipulation rather than malware delivery.

Why A is correct: A gift card request that shows an executive’s name in the display field is a classic BEC pattern (often called “CEO fraud”). Attackers rely on urgency, authority, and confidentiality (“I’m in a meeting—buy gift cards now”) to bypass normal approval workflows. The key indicator is impersonation of a trusted executive combined with a financial request that can be quickly monetized and is hard to reverse.

Key features and best practices: Defenses include enforcing out-of-band verification for payment/gift card requests, dual approval for financial transactions, and training users to check the actual sender address (not just the display name). Technical controls include SPF/DKIM/DMARC to reduce spoofing, banner warnings for external senders, and monitoring for anomalous mailbox rules or suspicious login activity (if the account is compromised). Strong MFA and conditional access reduce the chance of account takeover, which is another common BEC method.

Common misconceptions: Many learners equate any phishing email with BEC. BEC is specifically about business process fraud and impersonation, often without links/attachments. Ransomware (B) and credential-harvesting phishing (D) are serious but are different attack categories. Credential requests (C) are phishing/social engineering, but the scenario is more directly “credential harvesting” than the hallmark BEC financial/invoice redirection theme.
Exam tips: For Security+ SY0-701, look for cues like executive/vendor impersonation, payment/invoice/gift card urgency, and requests to change banking details. If the scenario centers on encrypting files and demanding payment, it’s ransomware; if it centers on a fake login page, it’s credential phishing; if it centers on manipulating business payments via trusted email context, it’s BEC.
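The “check the actual sender address, not just the display name” control can be illustrated as a trivial mail-filter heuristic. The trusted domain and executive keyword list below are hypothetical, and real gateways combine this kind of check with SPF/DKIM/DMARC results rather than relying on it alone.

```python
from email.utils import parseaddr

# Hypothetical policy: mail whose display name claims an executive title but
# whose actual sending domain is not the corporate domain gets flagged.
TRUSTED_DOMAINS = {"corp.example"}
EXEC_KEYWORDS = ("ceo", "chief", "cfo")

def flags_display_name_spoof(from_header):
    """True if the display name claims an exec but the real domain is untrusted."""
    display, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower()
    claims_exec = any(k in display.lower() for k in EXEC_KEYWORDS)
    return claims_exec and domain not in TRUSTED_DOMAINS
```

This captures the core BEC indicator from the scenario: the display field says “CEO,” but the address behind it does not belong to the organization.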
Which of the following has been implemented when a host-based firewall on a legacy Linux system allows connections from only specific internal IP addresses?
A compensating control is an alternative safeguard used when the preferred control can’t be implemented, often due to legacy limitations. Restricting inbound connections on a legacy Linux host to only specific internal IPs reduces exposure and compensates for the inability to fully modernize, patch, or deploy stronger controls on that system. It mitigates risk without eliminating the underlying legacy issue.
Network segmentation is the architectural practice of dividing networks into separate zones (VLANs/subnets) and controlling traffic between them using network devices and policies. While segmentation can limit access to a legacy host, the scenario describes a host-based firewall rule on the system itself, not a redesign of network boundaries or inter-segment controls.
Transfer of risk means shifting financial or operational impact to another party, such as through cyber insurance, outsourcing, or contractual agreements. A host-based firewall allowlist does not transfer responsibility or impact; it directly reduces the likelihood of unauthorized access. Therefore, it is risk mitigation, not risk transfer.
SNMP traps are asynchronous alert messages sent from managed devices to an SNMP manager to report events (e.g., interface down, high CPU). They provide monitoring and visibility but do not enforce access control. Allowing connections only from specific internal IP addresses is an access restriction, not an SNMP-based alerting mechanism.
Core Concept: This question tests compensating controls and host-based access restrictions. A compensating control is an alternative security measure used when a primary/desired control cannot be implemented (often due to legacy constraints, cost, or operational limitations). In Security+ terms, it’s a risk mitigation technique that provides comparable protection when the ideal control isn’t feasible.

Why the Answer is Correct: A legacy Linux system often cannot support modern security requirements (e.g., current endpoint agents, strong authentication modules, or timely patching). If the organization cannot fully remediate the underlying weakness (legacy OS/app constraints), implementing a host-based firewall rule that only allows connections from specific internal IP addresses reduces the attack surface and limits who can reach the host. This is a classic compensating control: it does not “fix” the legacy risk, but it compensates by adding a restrictive barrier to reduce likelihood/impact of compromise.

Key Features / Best Practices: Host-based firewalls (e.g., iptables/nftables/firewalld) can enforce allowlists (source IP restrictions), limit exposed ports, and apply default-deny policies. Best practice is “deny by default, allow by exception,” combined with least privilege networking (only required ports, only required sources). This is frequently paired with additional controls such as logging, IDS/IPS monitoring, and jump hosts/bastions for administrative access.

Common Misconceptions: Network segmentation (B) is related but typically refers to separating networks using VLANs/subnets and network devices (switches/routers/firewalls) to control traffic between segments. Here, the control is explicitly host-based on the legacy system, not a network architecture change. Transfer of risk (C) involves shifting risk to a third party (insurance, outsourcing, contracts), not reducing exposure via firewall rules.
SNMP traps (D) are monitoring/alerting messages and do not enforce access restrictions.

Exam Tips: When you see “legacy system” plus “can’t implement the ideal security solution,” look for compensating controls (additional restrictions, isolation, allowlisting, jump boxes). If the question emphasizes host-level rules restricting who can connect, that’s a compensating control rather than segmentation. Segmentation is more about network design boundaries; host firewalls are endpoint controls that can compensate for weak/unsupported systems.
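The deny-by-default allowlist logic that an iptables/nftables rule set enforces on the host can be modeled in a few lines. The addresses below are illustrative, and a real host firewall also matches protocol, port, and connection state.

```python
import ipaddress

# Hypothetical allowlist for the legacy host: deny by default, allow by exception.
ALLOWED_SOURCES = [
    ipaddress.ip_network("10.20.30.5/32"),   # e.g., a management jump host
    ipaddress.ip_network("10.20.30.0/28"),   # e.g., an admin subnet
]

def host_firewall_permits(src_ip):
    """Permit only connections whose source falls inside an allowlisted network."""
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in ALLOWED_SOURCES)
```

Any source not explicitly listed is denied, which is the “deny by default, allow by exception” posture the explanation recommends.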
An enterprise is trying to limit outbound DNS traffic originating from its internal network. Outbound DNS requests will only be allowed from one device with the IP address 10.50.10.25. Which of the following firewall ACLs will accomplish this goal?
This is incorrect because the first rule permits DNS from any source to any destination on port 53, which already allows all outbound DNS. The second rule then denies DNS from 10.50.10.25, which is the opposite of the requirement. Additionally, due to top-down processing, the initial broad permit would match first and the deny would never effectively restrict traffic as intended.
This is incorrect because it reverses the meaning of the destination. It permits traffic from any source to destination 10.50.10.25 on port 53, which would be relevant for inbound DNS to an internal DNS server, not outbound DNS originating from that host. The subsequent deny blocks all DNS to any destination, which would still not correctly allow 10.50.10.25 to query external DNS.
This is incorrect because, like option A, it begins with a broad permit allowing DNS from any source to any destination on port 53, which defeats the goal of limiting outbound DNS. The deny rule also targets traffic destined to 10.50.10.25, not sourced from it, so it does not implement “only this internal host may originate DNS.”
This is correct because it permits outbound DNS only when the source is 10.50.10.25/32 and the destination is any (0.0.0.0/0) on port 53. The next rule denies all other outbound DNS (any source to any destination on port 53). With first-match ACL processing, the specific permit is evaluated before the general deny, enforcing the stated requirement.
Core Concept: This question tests firewall ACL logic for egress (outbound) filtering of DNS. DNS queries typically use destination port 53 (UDP/53 primarily; TCP/53 for zone transfers and large responses). An outbound ACL should restrict which internal source hosts are allowed to send DNS traffic to external resolvers.

Why the Answer is Correct: The requirement is: only one internal device (10.50.10.25) may originate outbound DNS requests. Therefore, the ACL must (1) permit DNS traffic with source 10.50.10.25 to any destination on port 53, and then (2) deny DNS traffic from all other sources to any destination on port 53. Option D does exactly this:
- permit 10.50.10.25/32 -> 0.0.0.0/0 port 53
- deny 0.0.0.0/0 -> 0.0.0.0/0 port 53
Because ACLs are processed top-down with first-match behavior, the specific permit for the allowed host must appear before the broader deny.

Key Features / Best Practices:
- Order matters: place specific permits before general denies.
- Use /32 for a single host.
- In real implementations, specify protocol (udp/tcp) and direction/interface (egress on the internal-to-external interface). Many environments also allow TCP/53 in addition to UDP/53.
- Consider logging the deny rule to detect policy violations or malware attempting DNS tunneling.

Common Misconceptions: A frequent mistake is reversing source and destination fields, accidentally permitting traffic to the internal host rather than from it. Another is placing a broad permit first, which would allow everyone and render later denies ineffective.

Exam Tips: For “only X is allowed,” look for: (1) a permit for X, then (2) a deny for everyone else, matching the same service/port. Also verify the correct directionality: internal host should be the source for outbound traffic, and the destination is typically “any” (0.0.0.0/0).
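The top-down, first-match processing that makes option D work can be simulated directly. The rule tuples below are simplified (real ACLs also match protocol and direction), but the ordering behavior is the point: the specific permit is evaluated before the broad deny, and anything unmatched falls through to an implicit deny.

```python
import ipaddress

# The correct ruleset (option D): a specific permit, then a broad deny.
ACL = [
    ("permit", "10.50.10.25/32", "0.0.0.0/0", 53),
    ("deny",   "0.0.0.0/0",      "0.0.0.0/0", 53),
]

def acl_decision(src_ip, dst_ip, dst_port):
    """Top-down, first-match processing with an implicit deny at the end."""
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    for action, src_net, dst_net, port in ACL:
        if (src in ipaddress.ip_network(src_net)
                and dst in ipaddress.ip_network(dst_net)
                and dst_port == port):
            return action  # first match wins; later rules are never evaluated
    return "deny"
```

Reversing the two rules (a broad permit first, as in options A and C) would make every outbound DNS query match the permit, so the deny would never take effect.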
An employee clicked a link in an email from a payment website that asked the employee to update contact information. The employee entered the log-in information but received a “page not found” error message. Which of the following types of social engineering attacks occurred?
Brand impersonation is when an attacker pretends to be a trusted company (logos, sender name, look-and-feel) to gain credibility. It commonly appears as a component of phishing, but by itself it’s broader and doesn’t necessarily involve credential harvesting. In this question, the key behavior is an email link leading to credential entry, which more directly maps to phishing.
Pretexting involves creating a believable fabricated scenario (a “pretext”) to manipulate a target into providing information or performing actions, often through interactive communication (calls, chats, repeated exchanges). While the email claims to be from a payment site, the scenario is not an extended story or interactive persuasion; it’s a straightforward credential-harvesting lure typical of phishing.
Typosquatting relies on registering a domain name that is a close misspelling or variation of a legitimate domain (e.g., rnicrosoft.com vs. microsoft.com) to trick users who don’t notice the difference. The question provides no evidence of a look-alike URL or misspelled domain—only that a link was clicked and credentials were entered—so typosquatting cannot be concluded.
Phishing is the use of fraudulent messages (commonly email) to trick users into clicking malicious links, opening attachments, or entering credentials on a fake site. The employee clicked an email link, entered login information, and then received an error page—consistent with credential harvesting followed by a redirect/404 to reduce suspicion. This matches phishing most precisely.
Core Concept: This scenario tests recognition of phishing as a social engineering technique. Phishing uses fraudulent messages (often email) to trick users into revealing credentials or other sensitive data, typically by sending them to a fake login page that mimics a legitimate service.

Why the Answer is Correct: The employee received an email “from a payment website” asking to update contact information, clicked a link, and entered login credentials. Immediately afterward, the user saw a “page not found” error. This is a common phishing pattern: the attacker’s goal is credential harvesting, not providing a working site. After capturing the username/password, the attacker may redirect to an error page (404) or a benign page to reduce suspicion and avoid giving the victim a chance to notice inconsistencies on the fake site.

Key Features / Best Practices: Phishing indicators include unsolicited urgency (“update your information”), embedded links, mismatched sender/display names vs. actual domains, and unexpected login prompts. Defensive controls include user awareness training, email security gateways, URL rewriting/sandboxing, DMARC/DKIM/SPF to reduce spoofing, MFA to mitigate stolen passwords, and reporting workflows (e.g., a “Report Phish” button). Incident response steps include resetting the user’s password, revoking sessions/tokens, checking for mailbox rules/forwarding, and reviewing logs for suspicious logins.

Common Misconceptions: Brand impersonation often occurs within phishing, but it’s not the best single label here because the defining action is credential capture via an email lure and link. Typosquatting requires evidence of a look-alike domain (e.g., paypa1.com), which the question does not provide. Pretexting involves a fabricated story and interactive persuasion (often over phone or ongoing conversation), whereas this is a classic email-and-link credential harvest.
Exam Tips: On Security+ questions, if you see an email with a link leading to a login page where credentials are entered, the most likely answer is phishing (or spear phishing if targeted). A subsequent error page does not negate phishing; it can be part of the attacker’s workflow to hide the theft. Look for explicit clues of domain misspellings for typosquatting, and for a sustained fabricated scenario for pretexting.
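The typosquatting indicator mentioned above (a near-miss domain like paypa1.com vs. paypal.com) can be detected with a simple edit-distance heuristic. This is a toy sketch; production detection also handles homoglyphs, confusable Unicode, and newly registered look-alike domains.

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(domain, legit_domains, max_dist=2):
    """A near-miss of a known-good domain, but not the domain itself."""
    return any(0 < edit_distance(domain, legit) <= max_dist
               for legit in legit_domains)
```

On the exam, though, remember the distinction this question draws: without evidence of a look-alike URL, the scenario is plain phishing, not typosquatting.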
An organization’s internet-facing website was compromised when an attacker exploited a buffer overflow. Which of the following should the organization deploy to best protect against similar attacks in the future?
NGFWs add capabilities beyond traditional firewalls (application identification, IPS, URL filtering, SSL inspection). They can help reduce risk from known exploit traffic, but they are not purpose-built for detailed web application parameter validation and virtual patching. For a compromise of a website via crafted HTTP input, a WAF is typically the more precise and effective control at the application layer.
A WAF is designed to protect web applications by inspecting and filtering HTTP(S) traffic at Layer 7. It can block exploit payloads delivered through web requests using signatures, anomaly detection, protocol enforcement, and request size/format constraints. WAFs also support “virtual patching,” providing immediate protection for known vulnerabilities (including those that could be exploited via malformed input) while developers remediate the underlying code.
TLS encrypts data in transit and provides server (and optionally client) authentication, preventing eavesdropping and tampering between client and server. However, TLS does not stop an attacker from sending malicious but properly encrypted requests to the application. A buffer overflow is an application vulnerability; encryption does not validate input safety or prevent memory corruption exploits.
SD-WAN is a networking technology for managing WAN connectivity using centralized control, dynamic path selection, and improved performance/reliability across multiple links. While it may include security features in some implementations, its primary purpose is connectivity and traffic engineering, not specialized protection against web application exploits like buffer overflows.
Core concept: This question tests web-application attack mitigation controls for an internet-facing website. A buffer overflow is a software vulnerability that can be triggered by malformed or oversized input, potentially leading to crashes or remote code execution. For public web apps, the security control most directly designed to detect and block malicious HTTP(S) requests is a Web Application Firewall (WAF).

Why the answer is correct: A WAF sits in front of the website (reverse proxy or inline) and inspects Layer 7 traffic (HTTP methods, headers, parameters, cookies, bodies). Many real-world buffer overflow exploits are delivered via crafted web requests (e.g., long parameters, unusual encodings, protocol anomalies). A WAF can block these patterns using signatures (e.g., OWASP Core Rule Set), protocol validation, request size limits, and anomaly scoring—reducing the likelihood that exploit payloads reach the vulnerable application.

Key features / best practices: Deploy the WAF in blocking mode after tuning (start in detect/monitor). Enable positive security controls (allow-listing expected URLs, methods, and parameter formats) where feasible. Configure maximum request/body/header sizes, normalize and decode inputs, and enable virtual patching rules for known CVEs while the application is being fixed. Integrate WAF logs with SIEM and set alerting for repeated exploit attempts. Note: the most complete fix is secure coding and patching; the WAF is the best “deployable” protective layer for similar web-delivered attacks.

Common misconceptions: An NGFW provides strong network controls and some application awareness, but it is not as specialized as a WAF for deep HTTP parameter inspection and virtual patching of web app vulnerabilities. TLS protects confidentiality/integrity in transit, not the application’s memory safety. SD-WAN optimizes connectivity and routing, not exploit prevention.
Exam tips: When the target is an internet-facing website and the question mentions web exploits (injection, XSS, request anomalies, app-layer attacks), the best control is typically a WAF. If the scenario were general network exploitation or lateral movement, NGFW/IPS might be better. Always map the control to the layer where the attack occurs (Layer 7 for web apps).
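The positive-security controls described above (allow-listed parameters, expected formats, request size limits) can be sketched as a tiny request filter. The parameter names, patterns, and size limit are hypothetical policy choices, not a real WAF's rule set.

```python
import re

# Hypothetical positive-security policy: only known parameters in expected
# formats pass, and oversized values -- the classic overflow payload shape --
# are rejected before they reach the vulnerable application.
MAX_PARAM_LEN = 256
PARAM_PATTERNS = {
    "user_id": re.compile(r"\d{1,10}"),   # numeric ID only
    "lang":    re.compile(r"[a-z]{2}"),   # two-letter language code only
}

def waf_allows(params):
    """Allow the request only if every parameter is known, sized, and well-formed."""
    for name, value in params.items():
        if len(value) > MAX_PARAM_LEN:
            return False  # request-size limit blocks oversized exploit input
        pattern = PARAM_PATTERNS.get(name)
        if pattern is None or not pattern.fullmatch(value):
            return False  # unknown parameter or unexpected format
    return True
```

Allow-listing expected input like this is the opposite of signature matching: instead of enumerating bad payloads, it rejects anything that does not look like legitimate traffic.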
Several employees received a fraudulent text message from someone claiming to be the Chief Executive Officer (CEO). The message stated: “I’m in an airport right now with no access to email. I need you to buy gift cards for employee recognition awards. Please send the gift cards to following email address.” Which of the following are the best responses to this situation? (Choose two).
Canceling current employee recognition gift cards is not directly responsive to the smishing attempt. The scam is about fraudulent purchasing and exfiltration of gift card codes, not compromise of an existing gift card program. Canceling legitimate cards may disrupt business operations and does not address the root cause (social engineering). A better response is warning users and reinforcing verification procedures.
Adding a smishing exercise to annual training is a strong corrective/preventive control. It targets the root issue: users being manipulated via SMS impersonation and urgency. Simulations and updated training improve recognition of red flags (gift cards, urgency, unusual payment methods) and reinforce out-of-band verification and reporting. This is a best-practice continuous improvement action after a real-world attempt.
Issuing a general email warning is an immediate containment/awareness step. It reduces the likelihood that any employee will comply, helps uncover the full scope (who received it), and provides instructions to report the message and not engage. This is a common incident communications action for widespread social engineering campaigns and supports rapid organizational response.
Having the CEO change phone numbers is usually unnecessary and ineffective. Smishing/impersonation often uses spoofed numbers or lookalike identities; changing the CEO’s number does not prevent attackers from claiming to be the CEO again. It also creates operational disruption. The better approach is verification procedures and awareness rather than changing identifiers.
Conducting a forensic investigation on the CEO’s phone is not the best initial response given the facts. The scenario indicates impersonation via SMS, not confirmed compromise of the CEO’s device. Forensics is appropriate when there are indicators of device compromise (malware, suspicious logins, SIM swap evidence). Here, immediate user warning and training updates are higher-value responses.
Implementing mobile device management can improve mobile security posture (policy enforcement, app control, device compliance), but it is not the best direct response to an SMS impersonation/gift-card scam. MDM typically won’t prevent an attacker from sending fraudulent texts to employees. It’s a broader architectural control and may be beneficial long-term, but it’s not the top response for this specific incident.
Core concept: This scenario is smishing (SMS-based phishing) combined with impersonation/CEO fraud (a form of social engineering and BEC-style pretexting). The question asks for the best responses, which in Security+ typically means immediate user-focused containment/notification plus longer-term awareness and process improvement.

Why the answers are correct: C (issue a general email warning) is an immediate incident-response communication control. When multiple employees are targeted, rapid internal notification reduces the chance someone will comply, helps identify additional recipients, and encourages reporting of related indicators (phone number, message content, requested email address). This aligns with security awareness and incident communications best practices: notify, instruct users not to engage, and provide a reporting path. B (add a smishing exercise to annual training) is a programmatic corrective action. After an event, organizations should update security awareness training to address the observed tactic. Adding a smishing simulation/tabletop or phishing-style exercise improves user detection and reporting rates and reinforces verification procedures for unusual requests (gift cards, urgency, out-of-band contact). This fits Security Program Management and Oversight: continuous improvement of training based on real incidents.

Key features / best practices:
- Establish and reinforce an out-of-band verification policy for executive requests (call known numbers from directory, use internal approval workflows).
- Encourage “stop, verify, report” behavior; provide a single reporting channel (security mailbox, ticket, or hotline).
- Capture indicators (sender number, requested email, timestamps) for blocking and threat intel.
- Incorporate smishing into awareness content and simulations, since many programs over-focus on email phishing.

Common misconceptions: It’s tempting to choose technical controls like MDM (F) or a forensic investigation (E).
While potentially useful, the prompt doesn’t indicate the CEO’s device is compromised; the attacker can spoof identity without accessing the CEO’s phone. Also, MDM is a broader control that won’t directly stop SMS impersonation and is not the most immediate “best response” to this specific event. Exam tips: For Security+ incident questions, prioritize: (1) immediate risk reduction via user notification and reporting, (2) longer-term prevention via training and policy/process updates. Gift-card scams are classic social engineering; the best defenses are verification procedures and awareness, not drastic actions like changing numbers or canceling legitimate programs.
A company is required to use certified hardware when building networks. Which of the following best addresses the risks associated with procuring counterfeit hardware?
A thorough analysis of the supply chain directly targets how counterfeit hardware enters organizations: through unauthorized distributors, weak logistics controls, or lack of provenance. It emphasizes vendor vetting, authorized sourcing, traceability, chain-of-custody, and receiving inspection/authenticity checks. These controls reduce the probability of procuring counterfeit devices and align with supply chain risk management best practices.
A legally enforceable corporate acquisition policy improves internal governance (who can buy, from where, required approvals), but by itself it doesn’t validate authenticity or provenance. Policies can be ignored, misapplied, or fail to address upstream risks like sub-tier suppliers and logistics substitution. It’s supportive, but not the best single control for counterfeit hardware risk.
A right to audit clause can help verify a vendor’s processes and compliance, and it can deter poor practices. However, audits are periodic, may be limited by scope, and can be difficult to execute across multi-tier supply chains. It’s a useful contractual safeguard, but it is not as directly preventative as comprehensive supply chain analysis and validation.
Penetration testing suppliers and vendors evaluates their security posture and potential vulnerabilities in their networks or applications. It does not reliably detect counterfeit hardware, unauthorized component substitution, or tampering during manufacturing/shipping. Counterfeit risk is primarily a procurement/provenance and chain-of-custody problem, not a problem solved by offensive security testing.
Core Concept: This question tests supply chain risk management (SCRM) and secure procurement controls used to prevent counterfeit or tampered hardware from entering an organization’s environment. Counterfeit hardware is a major risk because it can be unreliable, fail compliance requirements (e.g., “certified hardware”), and may include malicious modifications (backdoors, altered firmware, or weakened components).

Why the Answer is Correct: A thorough analysis of the supply chain best addresses counterfeit hardware risk because counterfeits typically enter through complex, multi-tier sourcing: unauthorized distributors, gray-market resellers, substituted parts, or weak chain-of-custody controls. Supply chain analysis focuses on validating provenance and integrity from manufacturer to end customer. It includes assessing vendor legitimacy, authorized channel usage, logistics handling, and traceability—controls that directly reduce the likelihood of counterfeit procurement.

Key Features / Best Practices: Effective supply chain analysis includes: approved vendor lists and authorized reseller requirements; verification of manufacturer certifications and serial numbers; chain-of-custody documentation; tamper-evident packaging checks; receiving/inspection processes; hardware authenticity validation (e.g., vendor authenticity tools, cryptographic attestation where available); and ongoing vendor risk assessments. Framework-aligned approaches (e.g., NIST SCRM concepts) emphasize understanding upstream dependencies and implementing controls across sourcing, delivery, and lifecycle management.

Common Misconceptions: A corporate acquisition policy (B) sounds strong but is only internal governance; it doesn’t validate upstream authenticity. A right-to-audit clause (C) is useful but is a contractual mechanism that may be difficult to exercise and doesn’t replace continuous supply chain validation.
Penetration testing suppliers (D) addresses cybersecurity posture, not counterfeit risk; it won’t reliably detect fake components or unauthorized substitution in logistics. Exam Tips: For Security+ questions about counterfeit hardware, “supply chain” keywords usually point to SCRM activities: supplier vetting, authorized sourcing, provenance/traceability, chain-of-custody, and inspection/validation at receiving. Choose options that prevent counterfeit entry rather than options that only provide legal leverage or test network security after the fact.
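The receiving/inspection controls listed above can be illustrated with a minimal sketch. The distributor name and serial numbers are invented for the example; real programs would query vendor authenticity tooling rather than a static list:

```python
# Minimal receiving-inspection sketch: accept hardware only when provenance
# (authorized channel) and authenticity (vendor-recognized serial) both check out.
# All names and serials below are hypothetical.
AUTHORIZED_DISTRIBUTORS = {"Acme Authorized Distribution"}
VENDOR_VALID_SERIALS = {"SN-1001", "SN-1002", "SN-1003"}

def accept_shipment(distributor, serial):
    """Both checks must pass; failing either flags possible counterfeit entry."""
    from_authorized_channel = distributor in AUTHORIZED_DISTRIBUTORS
    serial_recognized = serial in VENDOR_VALID_SERIALS
    return from_authorized_channel and serial_recognized
```

The point of the sketch is that provenance and authenticity are separate checks: a genuine serial arriving through a gray-market channel still fails receiving inspection.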
An enterprise has been experiencing attacks focused on exploiting vulnerabilities in older browser versions with well-known exploits. Which of the following security solutions should be configured to best provide the ability to monitor and block these known signature-based attacks?
ACLs (Access Control Lists) filter traffic based on simple criteria such as source/destination IP, port, and protocol. While an ACL can block traffic from known bad IPs or restrict risky ports, it does not inspect payloads to match exploit signatures (e.g., specific browser exploit patterns in HTTP). Therefore, it is not the best tool for monitoring and blocking known signature-based browser exploit attacks.
DLP (Data Loss Prevention) is designed to detect and prevent unauthorized disclosure of sensitive data (PII, PHI, PCI, intellectual property) via email, web uploads, removable media, or cloud services. It is not primarily used to stop inbound exploit attempts against browsers, nor does it typically rely on exploit signatures for prevention. DLP might help after compromise (exfiltration control), but it doesn’t best meet the stated need.
An IDS (Intrusion Detection System) uses signatures and/or anomaly detection to monitor traffic and generate alerts when malicious patterns are observed. However, IDS is typically deployed out-of-band (SPAN/TAP) and does not block traffic by default. It provides visibility and detection, but the question explicitly requires the ability to both monitor and block known signature-based attacks, which is more aligned with IPS.
An IPS (Intrusion Prevention System) is an in-line control that inspects traffic using signature-based and other detection methods, then actively blocks or mitigates attacks (drop packets, reset sessions, quarantine flows). For well-known exploits targeting older browser versions, an IPS can apply updated signatures to identify exploit traffic and prevent delivery to endpoints. This directly satisfies “monitor and block” for known signature-based attacks.
Core concept: This question tests knowledge of signature-based network security controls and the difference between detection (IDS) and prevention (IPS). Signature-based systems use known patterns (signatures) of malicious traffic—such as exploit strings, shellcode fragments, or protocol anomalies tied to specific CVEs—to identify attacks targeting older browser versions.

Why the answer is correct: An Intrusion Prevention System (IPS) is designed to both monitor traffic and actively block malicious activity in-line. Because the enterprise is seeing attacks that use well-known exploits against outdated browsers, signatures for these attacks likely already exist in commercial/open-source IPS rule sets. Configured in-line at key choke points (internet edge, between user VLANs and egress, or in front of web proxies), an IPS can detect the exploit attempt and drop/reset the connection, preventing compromise. This directly matches the requirement to “monitor and block” known signature-based attacks.

Key features / best practices: Deploy the IPS in-line (not just SPAN/TAP) so it can enforce blocks. Keep signature/rule feeds updated (vendor updates, Emerging Threats, etc.) and tune policies to reduce false positives. Enable relevant protocol decoders (HTTP/HTTPS inspection where feasible, often via TLS inspection on a proxy/NGFW) because many browser exploits are delivered over web traffic. Use alerting to SIEM, baseline normal traffic, and implement exception handling for business-critical apps.

Common misconceptions: IDS is often chosen because it is strongly associated with “signatures,” but IDS is primarily passive—alerting rather than blocking. ACLs can block by IP/port but are not content-aware and cannot match exploit signatures. DLP focuses on preventing sensitive data exfiltration, not stopping inbound exploit attempts.

Exam tips: If the question includes “block/prevent” in addition to “monitor/detect,” lean toward IPS (or WAF for web apps).
If it only says “detect/alert,” IDS is usually the better fit. Also watch for “signature-based” wording—this commonly maps to IDS/IPS rather than ACL/DLP.
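The monitor-versus-block distinction can be made concrete with a toy model: both systems match the same signatures, but only the in-line IPS drops traffic. The signature strings here are placeholders, not real exploit content:

```python
# Toy model of the IDS/IPS distinction. Signatures are hypothetical placeholders.
SIGNATURES = [b"fake-browser-exploit", b"\x90" * 32]

def matches_signature(payload):
    return any(sig in payload for sig in SIGNATURES)

def ids_process(payload, alerts):
    """Out-of-band IDS: alert on a match, but the traffic is forwarded regardless."""
    if matches_signature(payload):
        alerts.append("IDS alert: signature match")
    return payload

def ips_process(payload, alerts):
    """In-line IPS: alert on a match and drop the packet (return None)."""
    if matches_signature(payload):
        alerts.append("IPS alert: signature match, packet dropped")
        return None
    return payload
```

Both functions "monitor" (they append alerts), but only `ips_process` satisfies the question's "monitor and block" requirement.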
Which of the following is required for an organization to properly manage its restore process in the event of system failure?
IRP (Incident Response Plan) guides how to detect, respond to, contain, eradicate, and recover from security incidents (e.g., malware, intrusion). While it may include limited recovery actions, its primary focus is incident handling and evidence/forensics, not comprehensive system restoration after general system failure or disaster. Choose IRP when the question centers on cyberattacks and response phases.
DRP (Disaster Recovery Plan) is specifically required to manage the restore process after system failure. It documents recovery procedures, restoration order, roles, communications, and validation steps to bring IT services back online. DRP operationalizes backup usage and rebuild processes and is tested to ensure recovery meets business needs. This is the best match for “properly manage its restore process.”
RPO (Recovery Point Objective) defines the maximum acceptable amount of data loss measured in time (e.g., “no more than 15 minutes of data”). It influences backup frequency and replication design, but it is not the plan or procedure for restoring systems. RPO is a requirement/metric used within a DRP/BCP, not a standalone mechanism to manage restoration.
SDLC (Software Development Life Cycle) is a structured approach to designing, building, testing, deploying, and maintaining software. It can improve reliability and reduce failures through secure coding and change control, but it does not provide the operational runbooks and recovery procedures needed to restore systems after a failure. SDLC is about development governance, not disaster recovery execution.
Core concept: This question tests business continuity and disaster recovery planning—specifically what an organization needs to manage the restore process after a system failure. “Restore process” implies recovering systems, data, and services to an operational state, which is the primary purpose of a Disaster Recovery Plan (DRP).

Why the answer is correct: A DRP is the documented, tested set of procedures and resources used to recover IT infrastructure and resume critical services after an outage, disaster, or major system failure. It defines how backups are used, how systems are rebuilt (bare-metal restore, image restore, infrastructure-as-code), the order of restoration (prioritization of critical services), roles and responsibilities, communication paths, vendor contacts, and validation steps to confirm systems are functioning correctly. Without a DRP, restores may be ad hoc, inconsistent, and too slow to meet business requirements.

Key features/best practices: A strong DRP includes recovery strategies (hot/warm/cold sites; cloud DR), runbooks, dependency mapping (e.g., identity services before applications), backup/replication methods, and testing (tabletop exercises and full failover tests). It aligns with business requirements expressed as RTO (how fast to restore) and RPO (how much data loss is acceptable). Framework-wise, DRP practices map well to NIST SP 800-34 (Contingency Planning Guide for Federal Information Systems) and ISO 22301 (business continuity management).

Common misconceptions: RPO is important for restore planning, but it is a metric/requirement, not the plan/process itself. An IRP focuses on handling security incidents (containment/eradication) rather than restoring business services after failure. SDLC governs how software is built and maintained, not how to recover operations after an outage.

Exam tips: When you see “restore,” “recover,” “failover,” “backup restoration,” or “resuming operations after outage,” think DRP/BCP.
If the scenario emphasizes “security incident response steps,” think IRP. If it asks for “maximum tolerable data loss,” that’s RPO; for “maximum tolerable downtime,” that’s RTO.
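The RPO metric discussed above lends itself to a small worked example. The 15-minute figure mirrors the hypothetical value used earlier in this section:

```python
from datetime import datetime, timedelta

# Sketch of checking an RPO requirement: data lost in a failure is everything
# created since the last good backup, and that window must not exceed the RPO.
RPO = timedelta(minutes=15)  # hypothetical requirement from the text's example

def rpo_satisfied(last_backup, failure_time, rpo=RPO):
    """True when the gap between the last backup and the failure is within the RPO."""
    return (failure_time - last_backup) <= rpo
```

This is why the RPO drives backup frequency (a 15-minute RPO implies backups or replication at least that often), while the DRP is the plan that describes how those backups are actually restored.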
Which of the following vulnerabilities is associated with installing software outside of a manufacturer’s approved software repository?
Jailbreaking is the process of removing manufacturer or OS restrictions (commonly on iOS) to gain elevated privileges and install unauthorized software or tweaks. It weakens built-in security controls like sandboxing and code-signing enforcement. While jailbreaking can enable installing apps outside the official store, the question specifically describes the act of installing from outside the approved repository, which is sideloading.
Memory injection is an exploitation technique where an attacker inserts and executes malicious code in the memory space of a running process (e.g., DLL injection, process hollowing). It is associated with malware execution and evasion, not with how software is obtained or installed. It does not describe installing software outside an approved repository.
Resource reuse refers to insecure reuse of resources such as memory, sessions, tokens, file handles, or identifiers without proper reinitialization or access control. This can lead to data leakage or privilege issues, but it is unrelated to application distribution channels or installing software from outside a manufacturer’s repository.
Side loading is installing software (often mobile apps) from outside the manufacturer’s approved repository/app store, such as downloading an APK from a website or using a third-party store. This bypasses many repository protections (vetting, scanning, reputation, revocation) and increases the risk of trojanized or vulnerable apps. This directly matches the scenario described.
Core concept: This question tests mobile/endpoint application security and software supply chain risk—specifically the security implications of installing apps from outside a vendor-controlled, approved repository (e.g., Apple App Store, Google Play, Microsoft Store, managed enterprise catalog). Approved repositories typically provide signing requirements, malware scanning, policy enforcement, and a trusted distribution channel.

Why the answer is correct: Side loading is the act of installing software (commonly mobile apps) from outside the manufacturer’s or platform owner’s official app store/repository. Examples include installing an Android APK from a website, using third-party app stores, or manually deploying an app package that bypasses standard store vetting. This increases exposure to trojanized apps, repackaged legitimate apps with malicious code, outdated/vulnerable versions, and reduced ability for the platform to enforce integrity checks and revocation.

Key features and best practices: Official repositories typically enforce developer identity checks, code signing, automated/static analysis, reputation systems, and rapid takedown/revocation. Enterprises mitigate sideloading risk via MDM/UEM controls (disable “Unknown sources,” restrict installation sources, allowlist apps, enforce signed apps), application control (whitelisting), and user awareness. From a supply chain perspective, using trusted repositories and verifying signatures/hashes reduces the risk of malicious or tampered packages.

Common misconceptions: Jailbreaking is often associated with “getting apps outside the store,” but it is specifically the act of removing platform restrictions (most commonly iOS) to gain elevated access and bypass security controls. Sideloading can occur without jailbreaking (especially on Android or in enterprise deployment scenarios). Memory injection is a runtime exploitation technique, not a software installation method.
Resource reuse relates to reusing objects/resources insecurely (e.g., sessions, memory, identifiers), not app distribution. Exam tips: When you see “installing apps outside the official store/repository,” think “sideloading.” When you see “removing OS restrictions/rooting to bypass manufacturer controls,” think “jailbreaking/rooting.” Tie both to increased malware risk and weakened platform security controls, but keep the terms distinct for exam precision.
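One concrete safeguard that official repositories automate and sideloading bypasses is integrity verification of the package. A minimal sketch, assuming a vendor-published SHA-256 digest is available to compare against (real platforms verify cryptographic code signatures, which is stronger than a bare hash comparison):

```python
import hashlib

# Sketch of pre-install integrity checking. A hash comparison is the simplest
# form; app stores additionally verify developer code signatures.

def safe_to_install(package_bytes, published_sha256):
    """Refuse installation when the package hash does not match the vendor's digest."""
    actual = hashlib.sha256(package_bytes).hexdigest()
    return actual == published_sha256
```

A sideloaded APK downloaded from an arbitrary website skips this kind of check entirely, which is exactly why trojanized or repackaged apps are the headline risk.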
An analyst is evaluating the implementation of Zero Trust principles within the data plane. Which of the following would be most relevant for the analyst to evaluate?
Secured zones align with Zero Trust data-plane implementation because they describe how resources are segmented and how traffic is controlled between segments. Evaluating secured zones includes checking microsegmentation, default-deny policies, enforcement points in the traffic path, and restrictions on east-west movement. This is a concrete architectural control that directly affects how data flows and is protected in the data plane.
Subject role is an identity and access management concept used to determine authorization (e.g., RBAC/ABAC). While roles influence policy decisions, they are typically evaluated in the control plane (policy decision logic) rather than the data plane (traffic enforcement and segmentation). Roles can feed enforcement rules, but “subject role” itself is not a data-plane implementation element.
Adaptive identity refers to risk-based or context-aware authentication/authorization (device posture, location, behavior, step-up MFA). This is strongly associated with identity systems and policy decision processes (control plane). The data plane may enforce the resulting decision, but adaptive identity is not primarily about how traffic is segmented or routed; it’s about how trust is evaluated before allowing access.
Threat scope reduction is a desired outcome of Zero Trust (limiting blast radius and lateral movement). However, it is not a specific data-plane component to evaluate. The analyst would instead assess the mechanisms that achieve scope reduction—such as secured zones, microsegmentation, and enforcement points—rather than the abstract goal itself.
Core concept: Zero Trust Architecture (ZTA) separates concerns into planes (control/management vs data plane). The data plane is where actual traffic flows and where enforcement happens: segmentation, policy enforcement points (PEPs), microperimeters, and protected communication paths. In Security+ terms, this maps strongly to network/security architecture controls that limit lateral movement and constrain access paths.

Why the answer is correct: “Secured zones” are directly relevant to evaluating Zero Trust in the data plane because they represent how the organization segments and isolates resources and enforces access between segments. In ZTA, you assume breach and design the data plane so that workloads, applications, and data are placed into tightly controlled zones (often microsegments) with explicit allow rules, continuous verification, and strong inspection. Evaluating secured zones means checking whether traffic between zones is mediated by enforcement points (e.g., firewalls, microsegmentation agents, service mesh policies), whether east-west traffic is restricted, and whether access is granted per-session/per-request rather than broad network trust.

Key features / what to evaluate:
- Microsegmentation and zoning strategy (workload/app/data tiers separated; least-privilege flows)
- Enforcement points in the data path (NGFW, host-based firewall, SDN policies, service mesh mTLS + authorization)
- Default-deny between zones, explicit allow lists, and tight egress controls
- Continuous monitoring/telemetry for zone-to-zone flows and policy violations
- Minimizing implicit trust based on network location (no “trusted internal network”)

Common misconceptions: Options like “adaptive identity” and “subject role” sound Zero Trust-related, but they primarily belong to identity/control-plane decisioning (who you are, what role you have, risk-based authentication).
“Threat scope reduction” is a goal/outcome of ZTA, not a concrete data-plane implementation element to evaluate. Exam tips: When you see “data plane” in Zero Trust questions, think “where traffic is enforced and segmented.” Look for answers tied to segmentation, zones, microperimeters, and enforcement points. When you see identity attributes, roles, or adaptive authentication, those are typically control-plane decision inputs rather than data-plane constructs.
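The default-deny, explicit-allow enforcement described for secured zones can be sketched as a toy policy table. Zone names, ports, and flows below are hypothetical:

```python
# Toy data-plane enforcement between secured zones: default deny, with an
# explicit allow list of (source zone, destination zone, destination port).
# Zone names and ports are hypothetical examples.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
}

def permit(src_zone, dst_zone, dst_port):
    """Only explicitly listed east-west flows pass; everything else is denied."""
    return (src_zone, dst_zone, dst_port) in ALLOWED_FLOWS
```

Note how the table encodes threat scope reduction as an outcome: a compromised web tier cannot reach the database tier directly, because no such flow is listed.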
A company’s web filter is configured to scan the URL for strings and deny access when matches are found. Which of the following search strings should an analyst employ to prohibit access to non-encrypted websites?
"encryption=off" is not a standard or universal indicator of non-encrypted web traffic. It might appear as a custom query-string parameter on a specific website (e.g., ?encryption=off), but most HTTP sites will not contain this text in the URL. Using this string would miss the vast majority of non-encrypted websites and is therefore ineffective for enforcing HTTPS-only browsing.
"http://" directly identifies the HTTP scheme, which is non-encrypted plaintext web traffic. Because the web filter scans URLs for strings, matching and denying URLs that begin with or contain "http://" is the most straightforward way to block access to non-encrypted websites. This aligns with the fundamental distinction between HTTP (no TLS) and HTTPS (TLS-encrypted).
"www.*.com" is not an encryption-related indicator; it is an attempt at a wildcard domain pattern. It would also be overly broad and could block many legitimate sites regardless of whether they use HTTPS. Additionally, many valid websites do not use "www" and many are not in the .com TLD, so it is both inaccurate and unrelated to the goal of blocking non-encrypted access.
":443" refers to the common TCP port for HTTPS, but it is not a reliable string to detect encryption in a URL. Most users do not explicitly specify ":443" in URLs, so the filter would miss most HTTPS/HTTP cases. Also, port numbers do not guarantee encryption—services can run on 443 without TLS, and HTTPS can be served on nonstandard ports.
Core concept: This question tests recognizing encrypted vs. non-encrypted web traffic by URL scheme and how a basic URL-string-matching web filter can enforce secure browsing. In web URLs, the scheme (also called protocol) indicates how the browser should connect: HTTP is plaintext, while HTTPS uses TLS to encrypt data in transit.

Why the answer is correct: To prohibit access to non-encrypted websites, the analyst should block URLs that use the non-encrypted scheme. The clearest string to match is "http://" because it explicitly indicates an HTTP URL, which does not provide confidentiality or integrity protections. A filter that denies access when it finds "http://" will prevent users from visiting sites via plaintext HTTP.

Key features and best practices: Blocking "http://" is a simple control that aligns with the broader best practice of enforcing TLS for web browsing (often paired with redirecting HTTP to HTTPS, HSTS, and TLS inspection where appropriate). In enterprise environments, web proxies/secure web gateways commonly enforce HTTPS-only policies, block downgrade attempts, and may also block known-bad categories. However, because this question states the filter scans the URL for strings, the most reliable indicator available at the URL level is the scheme prefix.

Common misconceptions: Many people associate encryption with port 443, but the presence or absence of ":443" in a URL is not a reliable indicator of encryption. HTTPS commonly uses 443, but users typically do not include the port in the URL, and other services can run on 443 without being HTTPS. Similarly, "encryption=off" is not a standard URL component and would only match a specific query parameter if a site happened to use it. "www.*.com" is a wildcard-like pattern that would overblock and is unrelated to encryption.

Exam tips: For Security+ questions about encrypted web access, remember: HTTP (port 80) is plaintext; HTTPS (port 443) uses TLS.
When the control is URL string matching, look for the scheme (http:// vs https://) rather than ports or nonstandard parameters. Also note that blocking HTTP does not guarantee the destination is trustworthy—only that the transport is encrypted.
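The scheme-based match can be shown in a few lines. Matching on the "http://" prefix is safe here because the string "https://" never contains "http://" as a substring, so HTTPS URLs pass through:

```python
# Sketch of the question's URL-string filter: deny any URL using the
# plaintext HTTP scheme, allow HTTPS.
def deny_url(url):
    """True when the URL should be blocked (non-encrypted HTTP scheme)."""
    return url.lower().startswith("http://")
```

The lowercase normalization matters in practice, since URL schemes are case-insensitive ("HTTP://" is still plaintext HTTP).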
During a security incident, the security operations team identified sustained network traffic from a malicious IP address: 10.1.4.9. A security analyst is creating an inbound firewall rule to block the IP address from accessing the organization’s network. Which of the following fulfills this request?
This denies inbound traffic from any source (0.0.0.0/0) to destination 10.1.4.9/32. That would block traffic headed to 10.1.4.9, effectively protecting that specific destination host (if it is inside your network). It does not block traffic originating from 10.1.4.9, so it fails the requirement to stop the malicious IP from accessing the organization.
This denies inbound traffic with source 10.1.4.9/32 to any destination (0.0.0.0/0). That matches the requirement: block the malicious IP from reaching any internal system. Using /32 targets only that single host, and using any destination ensures the attacker cannot access any address behind the firewall.
This is the opposite of what is needed: it permits inbound traffic from source 10.1.4.9/32 to any destination. During an incident, a permit rule would explicitly allow the malicious host to continue communicating with internal targets, increasing risk and undermining containment efforts.
This permits inbound traffic from any source to destination 10.1.4.9/32. It not only fails to block the malicious IP, but it also explicitly allows traffic to 10.1.4.9. Like option A, it focuses on the destination being 10.1.4.9 rather than blocking 10.1.4.9 as the source of malicious inbound traffic.
Core Concept: This question tests firewall/ACL logic: direction (inbound), action (deny), and correct placement of IPs in source vs. destination fields. Inbound rules evaluate traffic entering the organization from external sources, so the malicious host must be matched as the source address.

Why the Answer is Correct: To block a malicious IP (10.1.4.9) from accessing the organization’s network, the inbound rule must deny packets whose source is 10.1.4.9, regardless of which internal destination they target. Option B does exactly that: it denies inbound IP traffic with source 10.1.4.9/32 to destination 0.0.0.0/0 (any). In practical terms, this prevents that host from initiating connections to any address reachable behind the firewall.

Key Features / Best Practices:
- Use a /32 mask for a single host block.
- Place the malicious IP in the source field for inbound filtering.
- Use “any” destination (0.0.0.0/0) when you want to block access to all internal targets.
- Ensure rule ordering: in many ACL implementations, rules are processed top-down, first match wins. A broader “permit any” above the deny would negate the block.
- Consider logging on the deny rule during an incident to support detection/forensics, but be mindful of log volume.

Common Misconceptions: A common mistake is swapping source and destination. Option A denies traffic destined to 10.1.4.9, which would protect that host (if it were internal) rather than block it as an attacker. Another trap is choosing “permit” rules (C or D), which would explicitly allow the malicious traffic.

Exam Tips:
- For inbound rules: attacker is typically the source; your network is the destination.
- “Block an IP from accessing us” usually means deny where source = attacker.
- Read CIDR carefully: /32 = single IP; /0 = any.
- Always sanity-check: does the rule stop traffic coming from the bad IP, or does it stop traffic going to it?
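The top-down, first-match-wins processing described above can be sketched with Python's standard ipaddress module. The rule list mirrors option B's deny rule; the trailing permit-any is added only to make the illustration complete:

```python
import ipaddress

# First-match ACL evaluation. The first rule mirrors option B: deny inbound
# traffic with source 10.1.4.9/32 to any destination. The trailing permit-any
# is for illustration only.
RULES = [
    ("deny",   "10.1.4.9/32", "0.0.0.0/0"),
    ("permit", "0.0.0.0/0",   "0.0.0.0/0"),
]

def evaluate(src_ip, dst_ip):
    """Process rules top-down; the first rule matching both fields decides."""
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    for action, src_net, dst_net in RULES:
        if src in ipaddress.ip_network(src_net) and dst in ipaddress.ip_network(dst_net):
            return action
    return "deny"  # implicit deny if nothing matches
```

Swapping the two rules would reproduce the "permit any above the deny" ordering mistake called out above: every packet would match the permit rule first and the block would never fire.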
A technician is deploying a new security camera. Which of the following should the technician do?
Configuring the correct VLAN is a strong security architecture practice (segmentation of IoT/cameras, limiting lateral movement, applying ACLs). However, it’s not the most fundamental first step for “deploying a new security camera.” You typically perform a site survey first to determine placement and connectivity needs, then implement the VLAN and firewall rules once you know the camera’s network path and recording architecture.
A vulnerability scan can help identify outdated firmware, weak services, or misconfigurations after the camera is deployed. It is not usually the primary action during initial deployment because embedded/IoT devices can be sensitive to scanning, and scanning doesn’t solve the core deployment challenge of ensuring proper coverage, lighting, and placement. Scanning is better categorized as validation/operations after installation.
Disabling unnecessary ports/services (e.g., Telnet, UPnP, unused management interfaces) is a best practice for hardening cameras and reducing attack surface. Still, the question focuses on deploying a new camera, where the most critical initial activity is ensuring the camera will meet surveillance objectives in its physical environment. Hardening is important but typically follows placement and connectivity planning.
A site survey is the correct step because it ensures the camera will provide effective coverage and usable footage. It evaluates field of view, blind spots, lighting, mounting location, tamper risks, and practical needs like power/PoE, cable routes, and wireless signal. In Security+ terms, it’s part of designing and implementing physical security controls to meet security requirements before final configuration.
Core Concept: Deploying a physical security control (a security camera) requires planning for placement, coverage, lighting, power, mounting, and network connectivity. In Security+ terms, this aligns with physical security design and secure architecture considerations, where you validate that the control will meet the security objective (visibility/deterrence/forensics) before and during installation.

Why the Answer is Correct: Conducting a site survey is the most appropriate action when deploying a new security camera because it determines the optimal location and configuration to achieve required coverage and image quality. A site survey evaluates line-of-sight, field of view, potential obstructions, lighting conditions (day/night, glare, backlighting), mounting height/angle, environmental exposure (weather, vibration), and tamper risks. It also confirms practical requirements such as cable runs, PoE availability, wireless signal strength (if applicable), and whether the camera placement complies with policy and privacy requirements.

Key Features / Best Practices: A proper camera site survey includes verifying coverage of critical assets and entry/egress points, avoiding blind spots, ensuring sufficient illumination or IR capability, confirming retention and resolution requirements for identification, and planning secure network placement (e.g., camera VLAN, ACLs, NVR placement). It also considers physical hardening (tamper-resistant housings, protected conduit) and operational needs (maintenance access, cleaning, signage).

Common Misconceptions: Configuring a VLAN and disabling ports are valid hardening steps, but they come after you know where and how the camera will be installed and connected. A vulnerability scan is not the first step in “deploying” a camera; it’s typically part of ongoing security assessment once the device is installed and reachable, and it may be limited by vendor support and risk of disrupting embedded devices.
Exam Tips: For questions about installing physical security devices (cameras, badge readers, sensors), look for planning actions like “site survey” or “walkthrough” as the first and best step. Network segmentation and device hardening are important, but the exam often tests sequencing: validate physical placement and requirements first, then implement network/security configurations and monitoring.
A company has begun labeling all laptops with asset inventory stickers and associating them with employee IDs. Which of the following security benefits do these actions provide? (Choose two.)
Correct. Associating an asset tag with an employee ID creates accountability and traceability. If logs or alerts indicate suspicious activity from a specific laptop, the security team can quickly identify the assigned custodian and notify the correct person for immediate containment actions. This also supports investigations by tying device events to an accountable owner (even if attribution to a specific user session may require additional logging).
Incorrect. User awareness training is typically delivered to users (email/LMS) or enforced via policy, not “to a device.” While an inventory system could help identify who has a laptop, the sticker itself doesn’t enable training delivery or ensure the right endpoint receives it. This is more of a security program/training administration function than a direct benefit of asset tagging.
Incorrect. Mapping users to devices for software MFA tokens generally depends on identity provider enrollment, device registration, MDM/UEM records, or device certificates. A physical asset sticker is not a trusted technical identifier for MFA provisioning and is not used by MFA systems to bind tokens. Asset tagging may help administratively, but it’s not the security benefit being tested.
Incorrect. User-based firewall policies are usually targeted using user identity (directory groups), device identity (hostnames, certificates), or MDM compliance state. A sticker does not integrate with firewall policy engines and cannot be reliably used for enforcement. Proper targeting requires technical controls like NAC, endpoint agents, or directory-based policy mapping.
Incorrect. Penetration testing targets are defined by scope (IP ranges, hostnames, applications) and discovered through technical inventory and scanning, not by physical stickers. Asset tags might help correlate a discovered host to a physical device after the fact, but they don’t materially enable targeting the “desired laptops” during a test.
Correct. Asset tags tied to employee IDs support offboarding and data governance by identifying which specific device must be returned, inspected, and sanitized. This helps ensure company data is accounted for (e.g., local files, cached credentials, encryption keys) and that the device is properly wiped/reimaged before reassignment or disposal, reducing data leakage risk.
Core concept: This question tests asset management and accountability controls: physically labeling endpoints (asset tags) and logically associating them to an owner/custodian (employee ID). In Security+ terms, this supports inventory/asset tracking, chain of custody, and governance processes across the asset lifecycle (procurement, assignment, use, incident handling, and offboarding).

Why the answers are correct: A is correct because linking a laptop’s asset tag/serial to an employee establishes clear device-to-user accountability. During incident response (malware infection, policy violation, lost device, suspicious log activity), the security team can quickly identify the responsible custodian and notify the correct employee for containment steps (disconnect, bring device in, confirm activity). This reduces time-to-triage and improves auditability. F is correct because asset tagging plus employee association supports offboarding and data accountability. When an employee leaves, the organization can verify which specific laptop (and therefore which corporate data repositories, local caches, and encryption keys) were assigned, ensuring return of the device, secure wipe/reimage, and confirmation that company data is recovered or properly disposed of. This aligns with governance and data lifecycle management.

Key features / best practices: Asset inventory should include asset tag, serial number, model, assigned user, department, location, and status (in service, repair, retired). Best practice is to integrate physical inventory with a CMDB/asset management system and to tie it to HR identity records for joiner/mover/leaver workflows. This enables audits, loss/theft reporting, and consistent enforcement of policies (e.g., encryption required, patch compliance reporting).

Common misconceptions: Several options describe security controls that are not directly enabled by a sticker-to-employee mapping. MFA token assignment and firewall policy targeting typically rely on directory identities, device certificates, MDM enrollment, or endpoint management identifiers—not a physical sticker. Pen testing targeting and awareness training delivery are also not primary benefits of asset stickers.

Exam tips: When you see “asset tags,” think inventory, ownership, accountability, audits, and lifecycle/offboarding. If the question asks for “security benefits,” prioritize incident response traceability and governance outcomes over operational conveniences that require additional technical controls (MDM, NAC, certificates, directory integration).
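The tag-to-custodian mapping described above can be sketched as a simple lookup. The inventory structure, field names, and records below are all hypothetical, invented for illustration rather than taken from any CMDB product:

```python
# Minimal asset-inventory sketch: asset tag -> record tying a device to a custodian.
# All field names and data are illustrative.
INVENTORY = {
    "AST-1042": {"serial": "5CD1234XYZ", "model": "Latitude 5440",
                 "employee_id": "E7731", "status": "in service"},
    "AST-1043": {"serial": "5CD5678ABC", "model": "Latitude 5440",
                 "employee_id": "E8102", "status": "in service"},
}

def custodian_for(asset_tag: str):
    """Incident response: map a flagged laptop's asset tag to the accountable employee."""
    record = INVENTORY.get(asset_tag)
    return record["employee_id"] if record else None

def offboarding_checklist(employee_id: str):
    """Offboarding: list every asset tag assigned to a departing employee."""
    return [tag for tag, rec in INVENTORY.items() if rec["employee_id"] == employee_id]

print(custodian_for("AST-1042"))       # the custodian to notify during containment
print(offboarding_checklist("E8102"))  # devices to recover and sanitize at offboarding
```

The two helpers correspond to the two correct answers: incident-response traceability (device to accountable person) and offboarding/data governance (person to devices that must be returned and wiped).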
A company needs to provide administrative access to internal resources while minimizing the traffic allowed through the security boundary. Which of the following methods is most secure?
A bastion host is a hardened jump point used for administrative access into internal networks. It reduces the attack surface by limiting inbound firewall rules to a single system and centralizes authentication, logging, and monitoring. Admins connect to the bastion (often via VPN/SSH/RDP with MFA) and then access internal resources from there, minimizing exposed management ports at the boundary.
A perimeter network (DMZ) is a segmented network placed between the internet and the internal network, typically used to host public-facing services. While it improves segmentation, it does not by itself provide the most secure method for administrative access or minimize boundary traffic. You would still need a controlled admin entry point (like a bastion) to avoid opening management access broadly.
A WAF (Web Application Firewall) protects web applications by filtering and monitoring HTTP/HTTPS traffic, mitigating attacks like SQL injection and XSS. It is not designed to provide administrative access to internal resources or reduce management-plane exposure. Even with a WAF, you would still need a secure remote administration approach for non-web internal systems.
Single sign-on (SSO) centralizes authentication and can improve security with stronger identity controls and reduced password sprawl. However, SSO does not inherently minimize the network traffic allowed through a security boundary or reduce exposed ports/services. It addresses who can authenticate, not how many firewall openings are required for administrative connectivity.
Core Concept: This question tests secure administrative access design at a network/security boundary. The goal is to minimize allowed inbound traffic while still enabling admins to manage internal systems. This aligns with least privilege, reduced attack surface, and controlled management-plane access.

Why the Answer is Correct: A bastion host (also called a jump host/jump box) is the most secure method here because it concentrates administrative entry into a single hardened, tightly monitored system. Instead of opening multiple firewall rules to many internal servers (e.g., SSH/RDP/WinRM to each host), you allow management traffic only to the bastion host, and then admins pivot from that controlled point to internal resources. This minimizes exposed services at the boundary and provides a single choke point for authentication, logging, and session control.

Key Features / Best Practices: A secure bastion host is typically placed in a controlled segment (often a DMZ/perimeter subnet) with strict inbound rules (e.g., only VPN-to-bastion, or only SSH from a management network), strict outbound rules to only required internal management ports, and strong hardening (patching, minimal services, host firewall). Use MFA, privileged access management (PAM), short-lived credentials, and session recording where possible. Centralize logs (SIEM), enable command auditing, and restrict admin tools to the bastion to prevent direct admin access from unmanaged endpoints.

Common Misconceptions: A perimeter network (DMZ) sounds similar, but it’s a broader architecture for hosting exposed services; it doesn’t inherently minimize admin traffic unless paired with a bastion. A WAF protects web applications (HTTP/HTTPS) and is not a general administrative access solution. Single sign-on improves authentication usability and can strengthen identity controls, but it does not reduce the number of network paths/ports exposed through the boundary.
Exam Tips: When you see “administrative access” plus “minimizing traffic through the boundary,” think “jump/bastion host” or “management plane isolation.” Look for answers that reduce exposed ports and centralize control, monitoring, and auditing. Bastion hosts are a common Security+ pattern for secure remote administration and segmentation.
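The jump-host pattern above is commonly implemented on the client side with OpenSSH's ProxyJump option. A minimal configuration sketch, with hypothetical hostnames and addressing (the firewall would permit SSH inbound only to the bastion):

```
# ~/.ssh/config (hostnames and subnet are illustrative)
Host bastion
    HostName bastion.example.com
    User admin

# Internal hosts are reached only through the bastion; no direct
# management ports are exposed at the boundary.
Host 10.0.10.*
    ProxyJump bastion
```

With this in place, `ssh 10.0.10.5` tunnels through the bastion in one step (equivalent to `ssh -J bastion 10.0.10.5`), so only a single inbound rule to the bastion is required at the security boundary.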
A security analyst is reviewing alerts in the SIEM related to potential malicious network traffic coming from an employee’s corporate laptop. The security analyst has determined that additional data about the executable running on the machine is necessary to continue the investigation. Which of the following logs should the analyst use as a data source?
Application logs record events generated by a specific application (e.g., web server access logs, database logs, email server logs). They can help if the suspicious activity is clearly tied to that application’s own logging, but they generally do not provide comprehensive OS-level details about arbitrary executables, parent/child processes, or file hashes across the endpoint.
IPS/IDS logs provide detections based on network signatures, anomalies, or policy violations. They are excellent for identifying potentially malicious network traffic and indicators of compromise on the wire, but they typically cannot attribute the traffic to a specific executable on the host without additional endpoint telemetry or advanced network-to-host correlation data.
Network logs (e.g., NetFlow, firewall logs, proxy logs, DNS logs) show source/destination IPs, ports, protocols, domains, and sometimes URLs. They help confirm what the laptop communicated with and when, but they usually do not identify the exact process/executable responsible for generating the traffic on the endpoint.
Endpoint logs (EDR/agent logs, Sysmon, OS security auditing, auditd) capture host-based telemetry such as process creation, executable path, hashes, command-line arguments, user context, and process trees. This is the most direct data source to identify and investigate the executable running on the corporate laptop that is associated with the suspicious network activity.
Core concept: This question tests selecting the correct log source to obtain process/executable-level telemetry from a specific host. In Security+ terms, that is endpoint visibility (EDR/agent logs, Sysmon, OS auditing), not purely network or perimeter detection.

Why the answer is correct: The analyst needs “additional data about the executable running on the machine.” Details about an executable (process name, full path, hash, parent/child process tree, command-line arguments, user context, signature status, loaded modules, persistence mechanisms) are collected on the endpoint. Endpoint logs (from EDR tools like Microsoft Defender for Endpoint/CrowdStrike, or Windows event logs enhanced by Sysmon, or Linux auditd) provide the necessary host-based evidence to correlate the SIEM network alert to a specific process generating the traffic.

Key features/best practices: Endpoint telemetry commonly includes process creation events, network connection events mapped to process IDs, file creation/modification, registry changes, and reputation/hash lookups. Best practice is to forward high-value endpoint events to the SIEM and ensure time synchronization (NTP) so process events align with network alerts. For deeper investigation, analysts often pivot on file hash (SHA-256), command line, and parent process to identify initial execution vectors (phishing, drive-by, LOLBins).

Common misconceptions: “Network” logs can show suspicious connections but usually cannot reliably identify the exact executable on a host without endpoint correlation. IDS/IPS alerts indicate malicious patterns/signatures on traffic but also lack definitive process attribution. “Application” logs are typically produced by a specific application (e.g., web server, database) and may not capture OS-level process execution details unless the application itself logs that information.
Exam tips: When a question asks for information about what is running on a device (process, executable, hash, command line, user context), choose endpoint/EDR/host logs. When it asks about traffic patterns, flows, or packets, choose network/IDS/IPS. If it asks about a specific service’s internal errors or transactions, choose application logs.
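The pivot from a SIEM network alert to the responsible executable can be sketched as a simple join on endpoint telemetry. The event shape below loosely mirrors Sysmon network-connection fields (image path, PID, destination IP, hash), but the records, values, and helper name are illustrative only; a real investigation would also match on timestamps with synchronized clocks:

```python
# Illustrative endpoint network-connection events (loosely modeled on
# Sysmon Event ID 3 fields). Records are invented for the example.
ENDPOINT_EVENTS = [
    {"image": r"C:\Windows\System32\svchost.exe", "pid": 912,
     "dest_ip": "23.216.77.10", "sha256": "aa11..."},
    {"image": r"C:\Users\jdoe\AppData\Local\Temp\update.exe", "pid": 4180,
     "dest_ip": "198.51.100.23", "sha256": "de4d..."},
]

def executables_for_alert(dest_ip: str):
    """Given the remote IP from a SIEM network alert, return matching
    process records (image path, PID, hash) for analyst pivoting."""
    return [e for e in ENDPOINT_EVENTS if e["dest_ip"] == dest_ip]

for hit in executables_for_alert("198.51.100.23"):
    # The image path and hash are exactly the "data about the executable"
    # that network/IDS logs alone cannot supply.
    print(hit["image"], hit["pid"], hit["sha256"])
```

Once a process record is found, the analyst can pivot on the SHA-256 hash and parent process in the endpoint telemetry to trace the initial execution vector, as described above.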




