
Simulate the real exam experience with 90 questions and a 90-minute time limit. Practice with AI-verified answers and detailed explanations.
Powered by AI
Every answer is verified by 3 state-of-the-art AI models to ensure maximum accuracy. Get detailed explanations for each option and in-depth question analysis.
Which of the following threat actors is the most likely to be hired by a foreign government to attack critical systems located in other countries?
Hacktivists are driven primarily by ideology or political/social causes and often seek publicity (defacements, DDoS, data leaks) to advance a message. While they can target government or critical services, they are not typically “hired” by governments because their motivations are not primarily financial and their actions can be unpredictable and attention-seeking, which is undesirable for covert state objectives.
A whistleblower is usually an insider (employee/contractor) who exposes unethical or illegal activity to authorities or the public. The intent is disclosure, not conducting attacks on foreign critical systems. While whistleblowers may leak sensitive data, they are not generally contracted by a foreign government to perform offensive cyber operations, making this a poor fit for the scenario.
Organized crime groups are financially motivated, capable, and often operate as service providers (malware-as-a-service, ransomware affiliates, initial access brokers). Because they can be paid and directed, they are plausible proxies for foreign governments seeking deniability. Their resources and operational maturity make them more likely to successfully target critical systems than less capable actors listed here.
Unskilled attackers (script kiddies) rely on publicly available tools and exploits with limited understanding. They typically pursue easy targets and opportunistic attacks rather than complex, strategic operations against critical infrastructure. A foreign government would be unlikely to hire an unskilled actor for high-stakes attacks because the probability of failure, exposure, and poor operational security is high.
Core concept: This question tests your ability to map attacker “types” to their typical motivations, resources, and relationships. In Security+, foreign governments commonly use either their own intelligence/military cyber units (APT) or proxies/contractors to create deniability. The key phrase is “hired by a foreign government” to attack “critical systems” in other countries—this implies a paid, task-oriented relationship and potentially high capability.
Why the answer is correct: Organized crime is the most likely of the listed options to be hired as a proxy. Mature criminal groups have established infrastructure (botnets, malware development, exploit acquisition, initial access brokerage), operational security, and the ability to execute targeted intrusions or disruptive campaigns for payment. Governments may contract or indirectly task criminal groups to conduct operations that align with state interests while maintaining plausible deniability. Critical infrastructure attacks often require specialized skills, persistence, and coordination—traits more consistent with organized crime than with unskilled attackers, and more “for-hire” than ideologically driven hacktivists.
Key features / best-practice context: From a defender perspective, state-sponsored or state-aligned operations often resemble APT tradecraft: long dwell time, stealthy lateral movement, credential theft, living-off-the-land techniques, and targeting of OT/ICS environments. Best practices include strong segmentation between IT/OT, MFA and PAM, continuous monitoring (SIEM/SOAR), threat intelligence integration, incident response playbooks, and supply-chain risk management. Frameworks like MITRE ATT&CK and NIST guidance help map observed behaviors to likely actor profiles.
Common misconceptions: Hacktivists can attack critical systems, but their motivation is typically ideological and public-facing rather than “hired.” Whistleblowers are insiders exposing wrongdoing, not external attackers contracted for offensive operations. Unskilled attackers (script kiddies) generally lack the capability and discipline for high-impact, cross-border critical infrastructure targeting.
Exam tips: When you see “hired,” think financially motivated actors (organized crime, mercenaries/contractors). When you see “foreign government,” think APT/state-sponsored—if APT isn’t an option, choose the closest proxy with resources and professionalism: organized crime.
Which of the following scenarios describes a possible business email compromise attack?
This matches a common BEC/CEO fraud scenario: an attacker impersonates an executive (often via display-name spoofing or a look-alike domain) and pressures an employee to buy gift cards or send money. The focus is business process manipulation and financial loss, typically without malware. Verifying the sender address and using out-of-band confirmation are key mitigations.
This describes ransomware delivered via an email attachment: opening the attachment leads to file encryption and a ransom demand to regain access. While email may be the initial vector, the defining characteristic is malware-based extortion, not impersonation-driven financial fraud. This is not typically categorized as BEC on the exam.
This is a phishing/social engineering attempt to obtain credentials (requesting a cloud administrator login). It could be part of a broader BEC campaign, but the scenario itself is primarily credential harvesting and violates security policy (no legitimate HR director should request passwords). BEC questions usually emphasize payment diversion, invoice fraud, or executive impersonation for funds.
This is classic credential phishing: a link leads to a fake portal designed to steal usernames and passwords. It’s a common precursor to account takeover, which can later enable BEC, but the described activity is specifically phishing via a spoofed login page rather than the business-payment fraud focus of BEC.
Core concept: Business Email Compromise (BEC) is a targeted social engineering attack that abuses email trust to induce an employee to perform an unauthorized business action (often wire transfers, gift cards, invoice changes, or sensitive data release). Unlike broad phishing, BEC commonly uses impersonation (spoofed display name, look-alike domains, or compromised executive/vendor mailboxes) and focuses on process manipulation rather than malware delivery.
Why A is correct: A gift card request that shows an executive’s name in the display field is a classic BEC pattern (often called “CEO fraud”). Attackers rely on urgency, authority, and confidentiality (“I’m in a meeting—buy gift cards now”) to bypass normal approval workflows. The key indicator is impersonation of a trusted executive combined with a financial request that can be quickly monetized and is hard to reverse.
Key features and best practices: Defenses include enforcing out-of-band verification for payment/gift card requests, dual approval for financial transactions, and training users to check the actual sender address (not just the display name). Technical controls include SPF/DKIM/DMARC to reduce spoofing, banner warnings for external senders, and monitoring for anomalous mailbox rules or suspicious login activity (if the account is compromised). Strong MFA and conditional access reduce the chance of account takeover, which is another common BEC method.
Common misconceptions: Many learners equate any phishing email with BEC. BEC is specifically about business process fraud and impersonation, often without links/attachments. Ransomware (B) and credential-harvesting phishing (D) are serious but are different attack categories. Credential requests (C) are phishing/social engineering, but the scenario is more directly “credential harvesting” than the hallmark BEC financial/invoice redirection theme.
Exam tips: For Security+ SY0-701, look for cues like executive/vendor impersonation, payment/invoice/gift card urgency, and requests to change banking details. If the scenario centers on encrypting files and demanding payment, it’s ransomware; if it centers on a fake login page, it’s credential phishing; if it centers on manipulating business payments via trusted email context, it’s BEC.
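To see the “check the actual sender address, not just the display name” advice in code form, here is a minimal Python sketch. The executive names, the corporate domain, and the look-alike address are illustrative assumptions, not values from the question.

```python
from email.utils import parseaddr

# Hypothetical values for illustration; a real deployment would pull these
# from a directory service and mail gateway policy.
EXECUTIVE_NAMES = {"jane doe", "john smith"}   # known executive display names
CORPORATE_DOMAIN = "example.com"

def flag_possible_bec(from_header: str) -> bool:
    """Flag mail whose display name matches an executive but whose
    actual sending address is outside the corporate domain."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    name_matches_exec = display_name.strip().lower() in EXECUTIVE_NAMES
    return name_matches_exec and domain != CORPORATE_DOMAIN

# Example: display name says "Jane Doe" but the address is a look-alike domain.
print(flag_possible_bec('"Jane Doe" <jane.doe@examp1e-corp.net>'))  # True
print(flag_possible_bec('"Jane Doe" <jane.doe@example.com>'))       # False
```

A mail gateway applies this kind of logic automatically; the point for the exam is that the display name alone proves nothing about the true sender.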
Several employees received a fraudulent text message from someone claiming to be the Chief Executive Officer (CEO). The message stated: “I’m in an airport right now with no access to email. I need you to buy gift cards for employee recognition awards. Please send the gift cards to the following email address.” Which of the following are the best responses to this situation? (Choose two.)
Canceling current employee recognition gift cards is not directly responsive to the smishing attempt. The scam is about fraudulent purchasing and exfiltration of gift card codes, not compromise of an existing gift card program. Canceling legitimate cards may disrupt business operations and does not address the root cause (social engineering). A better response is warning users and reinforcing verification procedures.
Adding a smishing exercise to annual training is a strong corrective/preventive control. It targets the root issue: users being manipulated via SMS impersonation and urgency. Simulations and updated training improve recognition of red flags (gift cards, urgency, unusual payment methods) and reinforce out-of-band verification and reporting. This is a best-practice continuous improvement action after a real-world attempt.
Issuing a general email warning is an immediate containment/awareness step. It reduces the likelihood that any employee will comply, helps uncover the full scope (who received it), and provides instructions to report the message and not engage. This is a common incident communications action for widespread social engineering campaigns and supports rapid organizational response.
Having the CEO change phone numbers is usually unnecessary and ineffective. Smishing/impersonation often uses spoofed numbers or lookalike identities; changing the CEO’s number does not prevent attackers from claiming to be the CEO again. It also creates operational disruption. The better approach is verification procedures and awareness rather than changing identifiers.
Conducting a forensic investigation on the CEO’s phone is not the best initial response given the facts. The scenario indicates impersonation via SMS, not confirmed compromise of the CEO’s device. Forensics is appropriate when there are indicators of device compromise (malware, suspicious logins, SIM swap evidence). Here, immediate user warning and training updates are higher-value responses.
Implementing mobile device management can improve mobile security posture (policy enforcement, app control, device compliance), but it is not the best direct response to an SMS impersonation/gift-card scam. MDM typically won’t prevent an attacker from sending fraudulent texts to employees. It’s a broader architectural control and may be beneficial long-term, but it’s not the top response for this specific incident.
Core concept: This scenario is smishing (SMS-based phishing) combined with impersonation/CEO fraud (a form of social engineering and BEC-style pretexting). The question asks for the best responses, which in Security+ typically means immediate user-focused containment/notification plus longer-term awareness and process improvement.
Why the answers are correct: C (issue a general email warning) is an immediate incident-response communication control. When multiple employees are targeted, rapid internal notification reduces the chance someone will comply, helps identify additional recipients, and encourages reporting of related indicators (phone number, message content, requested email address). This aligns with security awareness and incident communications best practices: notify, instruct users not to engage, and provide a reporting path.
B (add a smishing exercise to annual training) is a programmatic corrective action. After an event, organizations should update security awareness training to address the observed tactic. Adding a smishing simulation/tabletop or phishing-style exercise improves user detection and reporting rates and reinforces verification procedures for unusual requests (gift cards, urgency, out-of-band contact). This fits Security Program Management and Oversight: continuous improvement of training based on real incidents.
Key features / best practices:
- Establish and reinforce an out-of-band verification policy for executive requests (call known numbers from directory, use internal approval workflows).
- Encourage “stop, verify, report” behavior; provide a single reporting channel (security mailbox, ticket, or hotline).
- Capture indicators (sender number, requested email, timestamps) for blocking and threat intel.
- Incorporate smishing into awareness content and simulations, since many programs over-focus on email phishing.
Common misconceptions: It’s tempting to choose technical controls like MDM (F) or a forensic investigation (E). While potentially useful, the prompt doesn’t indicate the CEO’s device is compromised; the attacker can spoof identity without accessing the CEO’s phone. Also, MDM is a broader control that won’t directly stop SMS impersonation and is not the most immediate “best response” to this specific event.
Exam tips: For Security+ incident questions, prioritize: (1) immediate risk reduction via user notification and reporting, (2) longer-term prevention via training and policy/process updates. Gift-card scams are classic social engineering; the best defenses are verification procedures and awareness, not drastic actions like changing numbers or canceling legitimate programs.
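As a rough illustration of the indicator-capture step, here is a minimal Python sketch. The red-flag keyword list, the report fields, and the sample number/address are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Red-flag keywords drawn from the scenario; a real list would be tuned
# to the organization and kept alongside awareness content.
RED_FLAGS = ("gift card", "urgent", "right now", "can't talk", "no access to email")

@dataclass
class SmishingReport:
    """Indicators captured from a reported text, per the practices above."""
    sender_number: str
    requested_email: str
    message_text: str
    reported_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def red_flag_hits(report: SmishingReport) -> list[str]:
    """Return which red-flag phrases appear in the reported message."""
    text = report.message_text.lower()
    return [kw for kw in RED_FLAGS if kw in text]

report = SmishingReport(
    sender_number="+1-555-0100",         # illustrative number
    requested_email="awards@mail.test",  # illustrative address
    message_text="I'm in an airport right now with no access to email. "
                 "I need you to buy gift cards for employee recognition awards.",
)
print(red_flag_hits(report))  # ['gift card', 'right now', 'no access to email']
```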
A company has begun labeling all laptops with asset inventory stickers and associating them with employee IDs. Which of the following security benefits do these actions provide? (Choose two.)
Correct. Associating an asset tag with an employee ID creates accountability and traceability. If logs or alerts indicate suspicious activity from a specific laptop, the security team can quickly identify the assigned custodian and notify the correct person for immediate containment actions. This also supports investigations by tying device events to an accountable owner (even if attribution to a specific user session may require additional logging).
Incorrect. User awareness training is typically delivered to users (email/LMS) or enforced via policy, not “to a device.” While an inventory system could help identify who has a laptop, the sticker itself doesn’t enable training delivery or ensure the right endpoint receives it. This is more of a security program/training administration function than a direct benefit of asset tagging.
Incorrect. Mapping users to devices for software MFA tokens generally depends on identity provider enrollment, device registration, MDM/UEM records, or device certificates. A physical asset sticker is not a trusted technical identifier for MFA provisioning and is not used by MFA systems to bind tokens. Asset tagging may help administratively, but it’s not the security benefit being tested.
Incorrect. User-based firewall policies are usually targeted using user identity (directory groups), device identity (hostnames, certificates), or MDM compliance state. A sticker does not integrate with firewall policy engines and cannot be reliably used for enforcement. Proper targeting requires technical controls like NAC, endpoint agents, or directory-based policy mapping.
Incorrect. Penetration testing targets are defined by scope (IP ranges, hostnames, applications) and discovered through technical inventory and scanning, not by physical stickers. Asset tags might help correlate a discovered host to a physical device after the fact, but they don’t materially enable targeting the “desired laptops” during a test.
Correct. Asset tags tied to employee IDs support offboarding and data governance by identifying which specific device must be returned, inspected, and sanitized. This helps ensure company data is accounted for (e.g., local files, cached credentials, encryption keys) and that the device is properly wiped/reimaged before reassignment or disposal, reducing data leakage risk.
Core concept: This question tests asset management and accountability controls: physically labeling endpoints (asset tags) and logically associating them to an owner/custodian (employee ID). In Security+ terms, this supports inventory/asset tracking, chain of custody, and governance processes across the asset lifecycle (procurement, assignment, use, incident handling, and offboarding).
Why the answers are correct: A is correct because linking a laptop’s asset tag/serial to an employee establishes clear device-to-user accountability. During incident response (malware infection, policy violation, lost device, suspicious log activity), the security team can quickly identify the responsible custodian and notify the correct employee for containment steps (disconnect, bring device in, confirm activity). This reduces time-to-triage and improves auditability.
F is correct because asset tagging plus employee association supports offboarding and data accountability. When an employee leaves, the organization can verify which specific laptop (and therefore which corporate data repositories, local caches, and encryption keys) were assigned, ensuring return of the device, secure wipe/reimage, and confirmation that company data is recovered or properly disposed of. This aligns with governance and data lifecycle management.
Key features / best practices: Asset inventory should include asset tag, serial number, model, assigned user, department, location, and status (in service, repair, retired). Best practice is to integrate physical inventory with a CMDB/asset management system and to tie it to HR identity records for joiner/mover/leaver workflows. This enables audits, loss/theft reporting, and consistent enforcement of policies (e.g., encryption required, patch compliance reporting).
Common misconceptions: Several options describe security controls that are not directly enabled by a sticker-to-employee mapping. MFA token assignment and firewall policy targeting typically rely on directory identities, device certificates, MDM enrollment, or endpoint management identifiers—not a physical sticker. Pen testing targeting and awareness training delivery are also not primary benefits of asset stickers.
Exam tips: When you see “asset tags,” think inventory, ownership, accountability, audits, and lifecycle/offboarding. If the question asks for “security benefits,” prioritize incident response traceability and governance outcomes over operational conveniences that require additional technical controls (MDM, NAC, certificates, directory integration).
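To make the inventory-to-custodian mapping concrete, here is a minimal Python sketch of an asset record and an offboarding lookup. The field names and sample values are illustrative, not from any specific CMDB product.

```python
from dataclasses import dataclass

@dataclass
class AssetRecord:
    """One inventory row tying a physical asset tag to a custodian,
    mirroring the fields suggested above (tag, serial, owner, status)."""
    asset_tag: str
    serial_number: str
    model: str
    employee_id: str
    department: str
    status: str  # e.g., "in service", "repair", "retired"

def devices_for_offboarding(inventory: list[AssetRecord], employee_id: str) -> list[AssetRecord]:
    """Return every in-service device assigned to a departing employee,
    so each one can be returned, wiped, and reimaged."""
    return [a for a in inventory if a.employee_id == employee_id and a.status == "in service"]

inventory = [
    AssetRecord("AT-0001", "SN123", "Laptop X1", "E1001", "Sales", "in service"),
    AssetRecord("AT-0002", "SN456", "Laptop X1", "E1002", "IT", "in service"),
]
print(devices_for_offboarding(inventory, "E1001"))  # the one device assigned to E1001
```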
An IT manager informs the entire help desk staff that only the IT manager and the help desk lead will have access to the administrator console of the help desk software. Which of the following security techniques is the IT manager setting up?
Hardening is the process of reducing a system’s attack surface by applying secure configurations such as disabling unnecessary services, closing unused ports, removing default accounts, applying patches, and enforcing secure baselines. Restricting admin console access can be part of an overall hardening strategy, but the scenario is specifically about limiting permissions to certain users, which maps more directly to least privilege than to general system hardening.
Employee monitoring involves observing and recording user activities (e.g., session recording, keystroke logging, web filtering logs, DLP alerts, or SIEM correlation) to detect misuse, policy violations, or insider threats. The scenario does not describe monitoring or auditing behavior; it describes restricting access to an administrative function. Therefore, employee monitoring is not the technique being set up here.
Configuration enforcement ensures systems and applications maintain required settings and do not drift from an approved baseline. Examples include Group Policy, MDM profiles, security configuration management tools, and compliance checks (CIS benchmarks). While an admin console might be used to enforce configurations, the manager’s action is about limiting who can access that console, not about enforcing a configuration standard across endpoints or servers.
Least privilege means granting users only the minimum access needed to perform their job tasks. Limiting administrator console access to only the IT manager and help desk lead reduces the number of privileged accounts, minimizes the risk of accidental or malicious changes, and limits damage if a standard help desk account is compromised. This is a classic least-privilege/RBAC scenario involving privileged administrative capabilities.
Core Concept: This question tests the principle of least privilege (PoLP), a foundational access control concept. PoLP means users are granted only the minimum permissions necessary to perform their job functions, and no more. It is commonly implemented through role-based access control (RBAC), privileged access management (PAM), and administrative role separation.
Why the Answer is Correct: By stating that only the IT manager and the help desk lead will have access to the administrator console, the IT manager is restricting high-impact administrative capabilities to a very small set of authorized personnel. The administrator console typically allows actions like changing configurations, viewing sensitive tickets, managing user accounts/roles, exporting data, or altering audit settings. Limiting this access reduces the attack surface, lowers the risk of accidental misconfiguration, and mitigates insider threat and credential compromise impact. This is a direct application of least privilege: most help desk staff do not need admin-console rights to resolve tickets.
Key Features / Best Practices: Least privilege is often enforced by:
- RBAC: assign “agent” vs “admin” roles with distinct permissions.
- Separation of duties: keep administrative functions with designated leads/managers.
- Just-in-time (JIT) elevation: grant temporary admin rights only when needed.
- Strong authentication for privileged roles (MFA) and logging/auditing of admin actions.
- Periodic access reviews to ensure only appropriate staff retain privileged access.
Common Misconceptions: Hardening (A) is broader and refers to securing systems by reducing vulnerabilities (patching, disabling services, secure baselines). While limiting admin access contributes to security, the technique described is specifically about permission scoping, not system configuration reduction. Configuration enforcement (C) is about ensuring systems adhere to a defined configuration baseline (e.g., via GPO/MDM/SCM tools). Employee monitoring (B) is about observing user activity, not restricting privileges.
Exam Tips: When a scenario focuses on “who is allowed to access” sensitive functions (admin consoles, root access, security settings), think least privilege and RBAC. If it focuses on “locking down settings” or “removing unnecessary services,” think hardening. If it focuses on “ensuring settings stay compliant,” think configuration enforcement. If it focuses on “watching what employees do,” think monitoring.
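Here is a minimal Python sketch of the “agent” vs “admin” RBAC split described above. The role names, permission strings, and user IDs are illustrative assumptions, not from any specific help desk product.

```python
# Role-to-permission mapping for a hypothetical help desk tool.
ROLE_PERMISSIONS = {
    "agent": {"view_tickets", "update_tickets"},
    "admin": {"view_tickets", "update_tickets", "manage_users",
              "change_settings", "export_data"},
}

# Only the IT manager and help desk lead hold the admin role, per the scenario.
USER_ROLES = {
    "it_manager": "admin",
    "helpdesk_lead": "admin",
    "agent_01": "agent",
}

def is_allowed(user: str, permission: str) -> bool:
    """Least privilege in practice: a user gets only the permissions
    attached to their assigned role."""
    role = USER_ROLES.get(user)
    return role is not None and permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("agent_01", "change_settings"))       # False: agents lack admin rights
print(is_allowed("helpdesk_lead", "change_settings"))  # True: lead holds the admin role
```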
Which of the following is the most likely to be used to document risks, responsible parties, and thresholds?
Risk tolerance describes the amount of risk an organization is willing to accept and can be expressed as thresholds (e.g., maximum acceptable residual risk). However, it is not typically the document used to list each risk and assign responsible parties. Tolerance is usually defined in policy or governance statements and then applied within tools like the risk register for escalation and acceptance decisions.
Risk transfer is a risk response strategy where the financial or operational impact is shifted to a third party (e.g., cyber insurance, outsourcing with contractual SLAs, indemnification). While transfer decisions may be recorded somewhere, “risk transfer” itself is not the primary artifact for documenting all risks, owners, and thresholds across the organization.
A risk register is the central repository used to document and track identified risks, including descriptions, likelihood/impact, inherent and residual risk, treatment plans, and status. It commonly includes the risk owner (responsible party) and can capture thresholds/criteria for escalation or acceptance by referencing risk tolerance/appetite and defining scoring cutoffs. This best matches documenting risks, responsible parties, and thresholds.
Risk analysis is the process of evaluating risk—often by determining likelihood and impact and calculating a risk rating (qualitative or quantitative). It produces inputs that may be recorded in a risk register, but it is not usually the ongoing tracking document that assigns owners, tracks remediation dates, and records acceptance/escalation thresholds for each risk.
Core concept: This question is testing governance documentation used in risk management. In mature security programs, risks are tracked formally so leadership can see what the risk is, who owns it, what the organization’s acceptance criteria are, and what actions are planned.
Why the answer is correct: A risk register is the primary artifact used to document and track risks over time. It typically includes the risk description, likelihood and impact ratings, inherent vs. residual risk, current controls, planned treatments, due dates, status, and—critically—risk owner/responsible party. Many organizations also record risk thresholds/criteria (e.g., what score requires escalation, what residual risk is acceptable, or which risks exceed the organization’s tolerance) either directly in the register fields or via references to the risk appetite/tolerance statements and escalation rules. Because the question asks for a tool to document risks, responsible parties, and thresholds, the risk register is the best match.
Key features/best practices: Risk registers support accountability (named owners), traceability (linking risks to assets, controls, and business processes), and governance (approval/acceptance sign-off and review cadence). They often align with frameworks like NIST RMF/800-30 (risk assessment and communication) and ISO 27005 (risk management), where documenting and communicating risk decisions is essential. Registers are living documents updated as threats, controls, and business context change.
Common misconceptions: “Risk tolerance” sounds like it relates to thresholds, but it is a concept/statement (how much risk is acceptable), not the tracking document that lists individual risks and owners. “Risk analysis” is an activity/process that produces risk ratings, but it doesn’t inherently serve as the ongoing repository with owners and thresholds. “Risk transfer” is a treatment strategy (e.g., insurance, contracts), not a documentation mechanism.
Exam tips: When you see wording like “document/track risks,” “risk owner,” “status,” “mitigation plan,” or “due dates,” think risk register. When you see “how much risk is acceptable,” think risk appetite/tolerance. When you see “evaluate likelihood/impact,” think risk assessment/analysis. When you see “shift liability,” think risk transfer.
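Here is a minimal Python sketch of one risk register entry with an owner and an escalation threshold. The 1-to-5 scoring scale and the threshold value are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass

@dataclass
class RiskRegisterEntry:
    """One register row with the fields described above."""
    risk_id: str
    description: str
    owner: str       # responsible party
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)
    treatment: str
    status: str

ESCALATION_THRESHOLD = 15  # example cutoff: likelihood x impact above this escalates

def needs_escalation(entry: RiskRegisterEntry) -> bool:
    """Apply the register's documented threshold to one risk."""
    return entry.likelihood * entry.impact > ESCALATION_THRESHOLD

entry = RiskRegisterEntry(
    risk_id="R-042",
    description="Unpatched internet-facing VPN appliance",
    owner="Infrastructure Manager",
    likelihood=4,
    impact=5,
    treatment="Patch within 7 days; add virtual patching rule",
    status="open",
)
print(needs_escalation(entry))  # True (4 x 5 = 20 > 15)
```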
Security controls in a data center are being reviewed to ensure data is properly protected and that human life considerations are included. Which of the following best describes how the controls should be set up?
Remote access points failing closed can be a good security practice (e.g., if an authentication service is unavailable, deny access). However, the question emphasizes “human life considerations” in a data center. Remote access behavior does not directly address life safety requirements like emergency egress. Therefore, while plausible from a security standpoint, it is not the best match to the scenario’s stated priority.
Logging controls should not “fail open” as a design goal. If logging fails, you lose visibility, detection, and forensic capability, which weakens security operations and compliance. In practice, some high-assurance environments may even fail closed (stop processing) if audit logging cannot be performed. Regardless, logging behavior is not the primary control tied to human life considerations, so this option is not the best answer.
Safety controls should fail open (fail safe) to protect human life. Examples include doors unlocking on fire alarm/power loss, emergency exits that always allow egress, and systems that default to a safe state during faults. This aligns with life-safety principles and typical building/fire code requirements. Security can be maintained with compensating controls (guards, cameras, alarms) rather than risking trapping people during an emergency.
Logical security controls should generally fail closed to preserve confidentiality and prevent unauthorized access when a dependency fails (e.g., deny access if an auth server is down). This is a correct principle, but it does not address the question’s key requirement: ensuring human life considerations are included. In a data center review that explicitly calls out life safety, the best description is that safety controls fail open.
Core concept: This question tests “fail-safe” vs. “fail-secure” design in data centers. Controls that protect confidentiality/integrity (logical/physical access) typically should fail secure (fail closed). Controls that protect human life (life safety) must prioritize safe egress and hazard mitigation, which generally means failing open/unlocked or otherwise moving to a safe state.
Why the answer is correct: Safety controls (e.g., fire exits, emergency door releases, fire suppression interlocks, emergency power-off considerations, alarm-triggered door releases) are designed so that during a fault condition—power loss, controller failure, or emergency event—people can still evacuate and responders can act. In many facilities, access-controlled doors automatically unlock on fire alarm or loss of power. This is a core life-safety principle reflected in building/fire codes and safety engineering: when safety is at stake, the system should default to a state that reduces risk to human life, even if that temporarily reduces security.
Key features / best practices:
- Use fail-safe locks (unlock on power loss) for egress paths and emergency exits.
- Integrate access control with fire alarm systems so doors release during alarms.
- Use compensating controls to manage the security tradeoff: mantraps for non-egress areas, CCTV coverage, guards, and post-event investigations.
- Clearly separate “security zones” from “life-safety egress routes” in architecture.
Common misconceptions: Many candidates assume “everything should fail closed” because it sounds more secure. That is true for many logical security controls, but it can be dangerous for life-safety systems. Another trap is focusing on a specific control type (e.g., remote access) rather than the question’s explicit requirement: “human life considerations are included.”
Exam tips:
- If the question mentions safety, evacuation, fire, or human life: think fail open/fail safe.
- If it mentions protecting data, preventing unauthorized access, or maintaining confidentiality: think fail closed/fail secure.
- When both are present, prioritize life safety and use compensating controls to maintain security.
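The contrast between the two failure modes can be shown in a few lines of Python. This is a conceptual sketch, not a real door-controller or authentication implementation.

```python
def egress_door_state(power_ok: bool, fire_alarm: bool) -> str:
    """Life-safety control: the egress door fails open (fail safe) so people
    can always evacuate on power loss or during a fire alarm."""
    if not power_ok or fire_alarm:
        return "unlocked"
    return "locked"

def grant_data_access(auth_server_reachable: bool, credentials_valid: bool) -> bool:
    """Logical control: access fails closed (fail secure). If the
    authentication dependency is down, deny rather than guess."""
    if not auth_server_reachable:
        return False
    return credentials_valid

print(egress_door_state(power_ok=False, fire_alarm=False))  # unlocked (fail open)
print(grant_data_access(auth_server_reachable=False, credentials_valid=True))  # False (fail closed)
```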
A technician is opening ports on a firewall for a new system being deployed and supported by a SaaS provider. Which of the following is a risk in the new system?
Default credentials are a common deployment risk for devices and self-managed applications (e.g., routers, IoT, on-prem apps). However, the scenario’s distinguishing detail is that the system is supported by a SaaS provider, not that it ships with default passwords or that the technician is configuring accounts. While still possible, it is not the primary risk being tested here.
A non-segmented network is a security architecture concern: flat networks increase lateral movement and blast radius. The question, however, focuses on opening firewall ports for a SaaS-supported system. Segmentation might be a good design recommendation, but it is not the key risk introduced by using a SaaS provider and exposing connectivity for it.
A supply chain vendor risk is the primary concern when adopting SaaS: you rely on the provider’s security controls, patching, availability, and their own upstream suppliers. Compromise, malicious updates, misconfiguration, insider threat at the provider, or outages can directly affect your organization. Opening firewall ports enables the integration, but the risk highlighted is third-party dependency.
Vulnerable software is always a concern, but SaaS typically shifts patching responsibility to the provider under the shared responsibility model. The question does not mention outdated versions, missing patches, or known CVEs. The more specific and differentiating risk in the prompt is third-party/supply chain exposure rather than a generic software vulnerability.
Core concept: This question tests third-party/SaaS risk and how opening firewall ports to support a cloud-hosted service changes an organization’s exposure. In Security+ terms, this maps to supply chain/third-party risk management, vendor dependencies, and externally exposed services.
Why the answer is correct: The system is “deployed and supported by a SaaS provider,” meaning a third party will host, operate, patch, and potentially administer parts of the solution. That introduces supply chain vendor risk: your security now depends on the provider’s controls (secure SDLC, patching cadence, access controls, logging, incident response, subcontractors, and availability). Opening firewall ports is the enabling action that creates connectivity, but the key risk described is reliance on an external vendor and their ecosystem. If the SaaS provider is compromised, has a breach, pushes a malicious update, misconfigures tenant isolation, or suffers an outage, your organization is impacted even if your internal network is well secured.
Key features/best practices: Manage this risk with vendor due diligence and contractual controls (SLAs, right-to-audit, breach notification timelines), security attestations (SOC 2 Type II, ISO 27001), data protection requirements (encryption, key management, data residency), and integration hardening (least-privilege API scopes, IP allowlisting, MFA, conditional access). Also ensure monitoring (CASB/SSE where applicable), logging integration, and a clear shared responsibility model.
Common misconceptions: Candidates may focus on “opening ports” and assume the risk is “vulnerable software” or “default credentials.” Those are real risks, but the prompt specifically highlights SaaS support, which points to third-party dependency and supply chain exposure rather than a local configuration flaw.
Exam tips: When a question mentions SaaS/provider-managed systems, think third-party risk, shared responsibility, and supply chain. If the scenario emphasizes vendor hosting/support/updates, the best answer is often vendor/supply chain risk, even if other technical risks could exist in general.
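As a sketch of the “IP allowlisting and least privilege” hardening mentioned above, the following Python checks that a proposed firewall rule stays scoped to the provider. The provider range (a TEST-NET documentation block) and the required port are placeholder assumptions; real values would come from the vendor’s published documentation.

```python
import ipaddress

# Assumed values for illustration only.
PROVIDER_RANGES = [ipaddress.ip_network("203.0.113.0/24")]  # stand-in for vendor ranges
REQUIRED_PORT = 443

def rule_is_least_privilege(dest_cidr: str, port: int) -> bool:
    """Accept a proposed firewall rule only if it is scoped to the
    provider's published ranges and the one required port, never any/any."""
    dest = ipaddress.ip_network(dest_cidr)
    scoped = any(dest.subnet_of(net) for net in PROVIDER_RANGES)
    return scoped and port == REQUIRED_PORT

print(rule_is_least_privilege("203.0.113.0/25", 443))  # True: inside the provider range
print(rule_is_least_privilege("0.0.0.0/0", 443))       # False: open to everything
```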
A security analyst reviews domain activity logs and notices the following:
UserID jsmith, password authentication: succeeded, MFA: failed (invalid code)
UserID jsmith, password authentication: succeeded, MFA: failed (invalid code)
UserID jsmith, password authentication: succeeded, MFA: failed (invalid code)
UserID jsmith, password authentication: succeeded, MFA: failed (invalid code)
Which of the following is the best explanation for what the security analyst has discovered?
Incorrect. An account lockout usually results in log entries indicating the account is locked/disabled or that authentication is denied at the password stage. Here, the password authentication continues to succeed repeatedly, which is inconsistent with a lockout condition. If lockout were occurring, you would expect subsequent attempts to fail immediately rather than proceed to MFA validation.
Incorrect as the best explanation. A keylogger could be one way the attacker obtained the correct password, but the logs do not provide evidence of a keylogger specifically. The observable behavior is repeated MFA failures with a successful password, which more directly indicates someone is attempting to guess or bypass the second factor rather than proving malware on the workstation.
Correct. Repeated successful password authentications followed by MFA failures (invalid code) indicate the password is correct but the second factor is not. Multiple consecutive attempts strongly suggest an attacker is attempting to brute force or repeatedly try MFA codes after obtaining the password (e.g., via phishing or credential reuse). This is a classic sign of an ongoing account compromise attempt blocked by MFA.
Incorrect. Ransomware deployment would typically generate endpoint detection alerts, unusual process execution, mass file modifications/encryption, shadow copy deletion, or lateral movement indicators. Authentication logs showing repeated invalid MFA codes do not align with ransomware activity. While attackers may use compromised accounts for initial access, this log pattern alone points to an authentication attack, not ransomware execution.
Core concept: This log pattern tests authentication telemetry interpretation, specifically MFA behavior during credential attacks. In many environments, authentication is a two-step process: (1) primary factor (password) validation, then (2) secondary factor (OTP/push/FIDO) validation. Logs that show “password authentication: succeeded” followed by repeated “MFA: failed (invalid code)” indicate the correct password is being presented, but the second factor is not.
Why the answer is correct: Multiple consecutive attempts where the password succeeds but MFA fails strongly suggests an attacker has obtained jsmith’s password (via phishing, credential stuffing, prior breach reuse, etc.) and is now trying to guess or replay MFA codes. Because the MFA failures are specifically “invalid code,” it aligns with brute forcing or repeatedly trying incorrect one-time passwords. This is a common real-world pattern when attackers have the first factor and are blocked by MFA.
Key features/best practices: MFA should be paired with controls such as rate limiting, step-up challenges, lockouts or temporary throttling on repeated MFA failures, impossible travel/anomalous login detection, and conditional access policies (device compliance, geo/IP restrictions). Monitoring should correlate source IP, user agent, and time window to confirm automation. If using TOTP, “invalid code” can also occur due to time drift, but repeated failures after password success is still suspicious and should trigger investigation.
Common misconceptions: Account lockout (A) would typically show “account locked/disabled” or “authentication failed” rather than repeated password successes. A keylogger (B) could explain password compromise, but the log evidence specifically indicates repeated MFA code failures, which is more directly explained by an attacker attempting to bypass the second factor. Ransomware (D) is unrelated; ransomware indicators would be file encryption events, endpoint alerts, or unusual SMB activity, not MFA invalid code loops.
Exam tips: When you see “password success + MFA failure” repeated, think “password is known, MFA is the barrier.” Repetition indicates automated guessing/brute force against the second factor or repeated login attempts by an adversary. Always map the log message to the stage of the authentication flow to identify what the attacker has and what they are missing.
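A minimal Python sketch of the monitoring idea above: count “password succeeded but MFA failed” events per user from logs in the format shown, and flag repeat offenders. The alert threshold is an arbitrary illustrative choice.

```python
import re
from collections import Counter

# Matches the log format shown in the scenario.
LINE_RE = re.compile(
    r"UserID (?P<user>\S+), password authentication: succeeded, "
    r"MFA: failed \(invalid code\)"
)
ALERT_THRESHOLD = 3  # illustrative cutoff for flagging a user

def users_to_investigate(log_lines: list[str]) -> list[str]:
    """Count password-success/MFA-failure events per user and flag anyone
    at or over the threshold (the pattern discussed above)."""
    counts = Counter(
        m.group("user") for line in log_lines if (m := LINE_RE.search(line))
    )
    return [user for user, n in counts.items() if n >= ALERT_THRESHOLD]

logs = ["UserID jsmith, password authentication: succeeded, MFA: failed (invalid code)"] * 4
print(users_to_investigate(logs))  # ['jsmith']
```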
Which of the following is the most common data loss path for an air-gapped network?
A bastion host is a hardened, exposed system used to access a protected network segment (often in a DMZ). If you truly have an air gap, you generally do not have a network-connected bastion providing a route in/out. Bastion hosts are relevant to segmented networks with controlled connectivity, not to fully isolated environments. Therefore, it is not the most common data loss path for an air-gapped network.
Unsecured Bluetooth can be a data loss path if Bluetooth radios are present and enabled, but in properly designed air-gapped environments, wireless interfaces are typically prohibited or physically removed/disabled. While Bluetooth-based exfiltration is possible, it is less common than removable media because it requires compatible hardware, proximity, and often violates standard air-gap operational controls.
An unpatched OS is a vulnerability, not a “path” by itself. In an air-gapped network, attackers still need a delivery mechanism to exploit the unpatched system (e.g., infected removable media, compromised supply chain, or an insider). Unpatched systems increase risk and impact, but the most common way data actually leaves an air-gapped environment is through physical transfer methods.
Removable devices are the classic and most common bridge across an air gap. They are routinely used for legitimate operational needs (patching, updates, data import/export), which makes them prevalent and difficult to eliminate. This creates a high-likelihood channel for accidental leakage, insider exfiltration, and malware introduction. Strong media controls, scanning, encryption, and strict procedures are essential mitigations.
Core concept: Air-gapped networks are intentionally isolated from other networks (especially the internet) to reduce remote attack paths and limit data exfiltration. Because there is no direct network connectivity, the most common data loss path becomes “sneakernet” transfer—data moving in and out via physical media.
Why the answer is correct: Removable devices (USB flash drives, external HDDs/SSDs, SD cards, even writable CDs/DVDs) are the most common and realistic way data leaves an air-gapped environment. Users and administrators still need to import patches, signatures, configuration files, and export logs or reports. That operational necessity creates a frequent, repeatable pathway for both accidental leakage (copying sensitive files) and deliberate exfiltration (insider threat) as well as malware introduction (infected USB). Many high-profile incidents in segmented/isolated environments have involved removable media as the bridging mechanism.
Key features / best practices: Control removable media through policy and technical enforcement: disable USB mass storage where possible; use device control/DLP; require encryption (e.g., FIPS-validated) and asset tagging; implement strict media handling procedures (check-in/out, chain of custody); scan media on dedicated “kiosk” systems; use signed updates and allowlisting; and consider one-way transfer mechanisms (data diodes) for specific use cases. Also train users—human behavior is central to this risk.
Common misconceptions: People often assume “air-gapped” means “safe from exfiltration,” but isolation mainly removes network-based paths; it does not remove physical transfer. Another misconception is focusing on software flaws (unpatched OS) as the primary path; vulnerabilities matter, but they typically require a delivery mechanism—removable media is the common bridge.
Exam tips: For Security+ questions about air-gapped or highly segmented networks, look for the most practical bridging vector. If the scenario emphasizes “no network connectivity,” the likely answer is physical media (removable devices) or other out-of-band transfer methods, not typical internet-facing controls.
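One concrete, commonly used control is disabling the Windows USB mass storage driver (the USBSTOR service, Start value 4). The following Windows-only Python sketch audits that setting via the registry; in practice the control would be enforced centrally through GPO or MDM rather than checked ad hoc on each host.

```python
# Windows-only illustration: when the USBSTOR service's Start value is 4
# (SERVICE_DISABLED), USB mass storage devices cannot load their driver.
import winreg

USBSTOR_KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"
DISABLED = 4  # SERVICE_DISABLED

def usb_mass_storage_disabled() -> bool:
    """Read the USBSTOR Start value and report whether the driver is disabled."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR_KEY) as key:
        start_value, _ = winreg.QueryValueEx(key, "Start")
    return start_value == DISABLED

if __name__ == "__main__":
    print("USB mass storage disabled:", usb_mass_storage_disabled())
```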