
Simulate the real exam experience with 90 questions and a 90-minute time limit. Practice with AI-verified answers and detailed explanations.
Powered by AI
Every answer is verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth analysis of each question.
Which of the following is used to add extra complexity before using a one-way data transformation algorithm?
Key stretching increases the computational cost of deriving a key or password hash by using many iterations and/or memory-hard functions (e.g., PBKDF2 iterations, bcrypt cost factor, scrypt/Argon2). It helps resist brute-force attacks by slowing each guess. However, it is not primarily about adding random extra data to the input; that description more directly matches salting.
Data masking obfuscates sensitive data (e.g., showing only last 4 digits of a credit card, replacing characters with Xs) to reduce exposure in logs, UIs, or non-production environments. It does not involve one-way transformations for password storage and does not add randomness prior to hashing. It is a confidentiality/control measure, not a hashing-hardening technique.
Steganography hides data within other data (e.g., embedding a message in an image or audio file) to conceal the existence of the information. It is not related to password hashing or one-way transformations. Steganography does not add complexity to a hash input; it is a covert communication/data-hiding technique.
Salting adds a unique, random value to each password before applying a one-way hash. This ensures identical passwords do not produce identical hashes and makes precomputed attacks like rainbow tables impractical because attackers cannot reuse a single table across many hashes. The salt is stored with the hash and is typically not secret; its purpose is uniqueness and anti-precomputation.
Core Concept: This question tests password hashing protections used with one-way data transformation algorithms (cryptographic hash functions such as SHA-256, and password-hashing KDFs such as bcrypt, scrypt, and Argon2). Because hashes are one-way, systems store hashes instead of plaintext passwords. Attackers can still crack hashes using precomputed tables (rainbow tables) or high-speed brute force, so additional measures are applied before hashing.

Why the Answer is Correct: Salting is the practice of adding a unique, random value (the salt) to each password before hashing. The salt is stored alongside the resulting hash. This “extra complexity” ensures that identical passwords produce different hashes across users and across systems, and it defeats rainbow tables because the attacker would need a separate precomputed table for every possible salt value. Salting also prevents attackers from quickly spotting reused passwords by comparing hashes.

Key Features / Best Practices:
1) Use a unique, cryptographically random salt per password (not a single global salt).
2) Store the salt with the hash; it is not required to be secret.
3) Combine salting with a slow, password-specific hashing/KDF algorithm (bcrypt, scrypt, Argon2) and appropriate work factors.
4) Consider adding a secret “pepper” (stored separately, e.g., in an HSM) for additional protection, but a pepper is not the same as a salt.

Common Misconceptions: Key stretching sounds similar because it also increases cracking difficulty, but it refers to repeatedly applying a hash/KDF to increase computational cost, not specifically “adding extra complexity” (random data) to the input. Data masking and steganography are unrelated to password hashing.

Exam Tips: If the question mentions “random value added to a password before hashing” or “defeats rainbow tables,” choose salting. If it emphasizes “making hashing slower by increasing iterations/work factor,” that points to key stretching (often implemented by PBKDF2/bcrypt/scrypt/Argon2).
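The salt-versus-stretching distinction can be sketched in a few lines of standard-library Python: the random salt provides uniqueness, while the PBKDF2 iteration count provides the key stretching. The 600,000-iteration count is an illustrative work factor, not a mandated value.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # key stretching: illustrative work factor, tune per deployment

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Salting (uniqueness) + PBKDF2 iterations (key stretching)."""
    salt = os.urandom(16)  # unique, cryptographically random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest    # the salt is stored with the hash; it is not secret

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

# Identical passwords produce different hashes because each salt is unique,
# which is exactly what defeats precomputed rainbow tables.
s1, h1 = hash_password("hunter2")
s2, h2 = hash_password("hunter2")
assert h1 != h2
assert verify_password("hunter2", s1, h1)
assert not verify_password("wrong-guess", s1, h1)
```

In production you would normally reach for a dedicated password-hashing KDF (bcrypt, scrypt, Argon2) rather than hand-rolling PBKDF2 parameters, but the mechanics shown here are the same.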
Which of the following has been implemented when a host-based firewall on a legacy Linux system allows connections from only specific internal IP addresses?
A compensating control is an alternative safeguard used when the preferred control can’t be implemented, often due to legacy limitations. Restricting inbound connections on a legacy Linux host to only specific internal IPs reduces exposure and compensates for the inability to fully modernize, patch, or deploy stronger controls on that system. It mitigates risk without eliminating the underlying legacy issue.
Network segmentation is the architectural practice of dividing networks into separate zones (VLANs/subnets) and controlling traffic between them using network devices and policies. While segmentation can limit access to a legacy host, the scenario describes a host-based firewall rule on the system itself, not a redesign of network boundaries or inter-segment controls.
Transfer of risk means shifting financial or operational impact to another party, such as through cyber insurance, outsourcing, or contractual agreements. A host-based firewall allowlist does not transfer responsibility or impact; it directly reduces the likelihood of unauthorized access. Therefore, it is risk mitigation, not risk transfer.
SNMP traps are asynchronous alert messages sent from managed devices to an SNMP manager to report events (e.g., interface down, high CPU). They provide monitoring and visibility but do not enforce access control. Allowing connections only from specific internal IP addresses is an access restriction, not an SNMP-based alerting mechanism.
Core Concept: This question tests compensating controls and host-based access restrictions. A compensating control is an alternative security measure used when a primary/desired control cannot be implemented (often due to legacy constraints, cost, or operational limitations). In Security+ terms, it’s a risk mitigation technique that provides comparable protection when the ideal control isn’t feasible.

Why the Answer is Correct: A legacy Linux system often cannot support modern security requirements (e.g., current endpoint agents, strong authentication modules, or timely patching). If the organization cannot fully remediate the underlying weakness (legacy OS/app constraints), implementing a host-based firewall rule that only allows connections from specific internal IP addresses reduces the attack surface and limits who can reach the host. This is a classic compensating control: it does not “fix” the legacy risk, but it compensates by adding a restrictive barrier to reduce the likelihood/impact of compromise.

Key Features / Best Practices: Host-based firewalls (e.g., iptables/nftables/firewalld) can enforce allowlists (source IP restrictions), limit exposed ports, and apply default-deny policies. Best practice is “deny by default, allow by exception,” combined with least-privilege networking (only required ports, only required sources). This is frequently paired with additional controls such as logging, IDS/IPS monitoring, and jump hosts/bastions for administrative access.

Common Misconceptions: Network segmentation (B) is related but typically refers to separating networks using VLANs/subnets and network devices (switches/routers/firewalls) to control traffic between segments. Here, the control is explicitly host-based on the legacy system, not a network architecture change. Transfer of risk (C) involves shifting risk to a third party (insurance, outsourcing, contracts), not reducing exposure via firewall rules. SNMP traps (D) are monitoring/alerting messages and do not enforce access restrictions.

Exam Tips: When you see “legacy system” plus “can’t implement the ideal security solution,” look for compensating controls (additional restrictions, isolation, allowlisting, jump boxes). If the question emphasizes host-level rules restricting who can connect, that’s a compensating control rather than segmentation. Segmentation is more about network design boundaries; host firewalls are endpoint controls that can compensate for weak/unsupported systems.
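The “deny by default, allow by exception” allowlist logic that a host-based firewall enforces can be sketched with Python’s standard `ipaddress` module. The subnets and addresses here are hypothetical placeholders, not values from the question.

```python
import ipaddress

# Hypothetical internal sources permitted to reach the legacy host.
ALLOWED_SOURCES = [
    ipaddress.ip_network("10.0.5.0/28"),   # management subnet (illustrative)
    ipaddress.ip_network("10.0.9.12/32"),  # jump host (illustrative)
]

def is_allowed(source_ip: str) -> bool:
    """Default-deny: permit a connection only if its source matches an allowlist entry."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_SOURCES)

assert is_allowed("10.0.9.12")          # explicitly allowlisted jump host
assert is_allowed("10.0.5.3")           # inside the management subnet
assert not is_allowed("198.51.100.7")   # anything else is denied by default
```

A real iptables/nftables policy evaluates packets the same way: no match against an allow rule means the default-deny policy applies.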
An enterprise is trying to limit outbound DNS traffic originating from its internal network. Outbound DNS requests will only be allowed from one device with the IP address 10.50.10.25. Which of the following firewall ACLs will accomplish this goal?
This is incorrect because the first rule permits DNS from any source to any destination on port 53, which already allows all outbound DNS. The second rule then denies DNS from 10.50.10.25, which is the opposite of the requirement. Additionally, due to top-down processing, the initial broad permit would match first and the deny would never effectively restrict traffic as intended.
This is incorrect because it reverses the meaning of the destination. It permits traffic from any source to destination 10.50.10.25 on port 53, which would be relevant for inbound DNS to an internal DNS server, not outbound DNS originating from that host. The subsequent deny blocks all DNS to any destination, which would still not correctly allow 10.50.10.25 to query external DNS.
This is incorrect because, like option A, it begins with a broad permit allowing DNS from any source to any destination on port 53, which defeats the goal of limiting outbound DNS. The deny rule also targets traffic destined to 10.50.10.25, not sourced from it, so it does not implement “only this internal host may originate DNS.”
This is correct because it permits outbound DNS only when the source is 10.50.10.25/32 and the destination is any (0.0.0.0/0) on port 53. The next rule denies all other outbound DNS (any source to any destination on port 53). With first-match ACL processing, the specific permit is evaluated before the general deny, enforcing the stated requirement.
Core Concept: This question tests firewall ACL logic for egress (outbound) filtering of DNS. DNS queries typically use destination port 53 (UDP/53 primarily; TCP/53 for zone transfers and large responses). An outbound ACL should restrict which internal source hosts are allowed to send DNS traffic to external resolvers.

Why the Answer is Correct: The requirement is: only one internal device (10.50.10.25) may originate outbound DNS requests. Therefore, the ACL must (1) permit DNS traffic with source 10.50.10.25 to any destination on port 53, and then (2) deny DNS traffic from all other sources to any destination on port 53. Option D does exactly this:
- permit 10.50.10.25/32 -> 0.0.0.0/0 port 53
- deny 0.0.0.0/0 -> 0.0.0.0/0 port 53
Because ACLs are processed top-down with first-match behavior, the specific permit for the allowed host must appear before the broader deny.

Key Features / Best Practices:
- Order matters: place specific permits before general denies.
- Use /32 for a single host.
- In real implementations, specify protocol (udp/tcp) and direction/interface (egress on the internal-to-external interface). Many environments also allow TCP/53 in addition to UDP/53.
- Consider logging the deny rule to detect policy violations or malware attempting DNS tunneling.

Common Misconceptions: A frequent mistake is reversing source and destination fields, accidentally permitting traffic to the internal host rather than from it. Another is placing a broad permit first, which would allow everyone and render later denies ineffective.

Exam Tips: For “only X is allowed,” look for: (1) a permit for X, then (2) a deny for everyone else, matching the same service/port. Also verify the correct directionality: the internal host should be the source for outbound traffic, and the destination is typically “any” (0.0.0.0/0).
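The first-match semantics that make option D correct can be demonstrated with a small Python model of an ACL evaluator. This is a teaching sketch of top-down rule processing, not any vendor’s actual firewall syntax.

```python
import ipaddress
from typing import NamedTuple

class Rule(NamedTuple):
    action: str                      # "permit" or "deny"
    src: ipaddress.IPv4Network
    dst: ipaddress.IPv4Network
    port: int

# Option D: the specific permit precedes the general deny.
ACL = [
    Rule("permit", ipaddress.ip_network("10.50.10.25/32"),
         ipaddress.ip_network("0.0.0.0/0"), 53),
    Rule("deny", ipaddress.ip_network("0.0.0.0/0"),
         ipaddress.ip_network("0.0.0.0/0"), 53),
]

def evaluate(src: str, dst: str, port: int) -> str:
    """Top-down, first-match processing with an implicit deny at the end."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for rule in ACL:
        if s in rule.src and d in rule.dst and port == rule.port:
            return rule.action
    return "deny"  # implicit deny for anything no rule matched

assert evaluate("10.50.10.25", "8.8.8.8", 53) == "permit"  # the one allowed host
assert evaluate("10.50.10.30", "8.8.8.8", 53) == "deny"    # any other source
```

Swapping the two rules in `ACL` reproduces the flaw in options A and C: the broad permit matches first, so the deny never takes effect.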
An analyst is evaluating the implementation of Zero Trust principles within the data plane. Which of the following would be most relevant for the analyst to evaluate?
Secured zones align with Zero Trust data-plane implementation because they describe how resources are segmented and how traffic is controlled between segments. Evaluating secured zones includes checking microsegmentation, default-deny policies, enforcement points in the traffic path, and restrictions on east-west movement. This is a concrete architectural control that directly affects how data flows and is protected in the data plane.
Subject role is an identity and access management concept used to determine authorization (e.g., RBAC/ABAC). While roles influence policy decisions, they are typically evaluated in the control plane (policy decision logic) rather than the data plane (traffic enforcement and segmentation). Roles can feed enforcement rules, but “subject role” itself is not a data-plane implementation element.
Adaptive identity refers to risk-based or context-aware authentication/authorization (device posture, location, behavior, step-up MFA). This is strongly associated with identity systems and policy decision processes (control plane). The data plane may enforce the resulting decision, but adaptive identity is not primarily about how traffic is segmented or routed; it’s about how trust is evaluated before allowing access.
Threat scope reduction is a desired outcome of Zero Trust (limiting blast radius and lateral movement). However, it is not a specific data-plane component to evaluate. The analyst would instead assess the mechanisms that achieve scope reduction—such as secured zones, microsegmentation, and enforcement points—rather than the abstract goal itself.
Core concept: Zero Trust Architecture (ZTA) separates concerns into planes (control/management vs data plane). The data plane is where actual traffic flows and where enforcement happens: segmentation, policy enforcement points (PEPs), microperimeters, and protected communication paths. In Security+ terms, this maps strongly to network/security architecture controls that limit lateral movement and constrain access paths.

Why the answer is correct: “Secured zones” are directly relevant to evaluating Zero Trust in the data plane because they represent how the organization segments and isolates resources and enforces access between segments. In ZTA, you assume breach and design the data plane so that workloads, applications, and data are placed into tightly controlled zones (often microsegments) with explicit allow rules, continuous verification, and strong inspection. Evaluating secured zones means checking whether traffic between zones is mediated by enforcement points (e.g., firewalls, microsegmentation agents, service mesh policies), whether east-west traffic is restricted, and whether access is granted per-session/per-request rather than by broad network trust.

Key features / what to evaluate:
- Microsegmentation and zoning strategy (workload/app/data tiers separated; least-privilege flows)
- Enforcement points in the data path (NGFW, host-based firewall, SDN policies, service mesh mTLS + authorization)
- Default-deny between zones, explicit allow lists, and tight egress controls
- Continuous monitoring/telemetry for zone-to-zone flows and policy violations
- Minimizing implicit trust based on network location (no “trusted internal network”)

Common misconceptions: Options like “adaptive identity” and “subject role” sound Zero Trust-related, but they primarily belong to identity/control-plane decisioning (who you are, what role you have, risk-based authentication). “Threat scope reduction” is a goal/outcome of ZTA, not a concrete data-plane implementation element to evaluate.

Exam tips: When you see “data plane” in Zero Trust questions, think “where traffic is enforced and segmented.” Look for answers tied to segmentation, zones, microperimeters, and enforcement points. When you see identity attributes, roles, or adaptive authentication, those are typically control-plane decision inputs rather than data-plane constructs.
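A default-deny policy between secured zones can be modeled in a few lines of Python. The zone names and ports below are hypothetical; the point is that only explicitly allowed zone-to-zone flows pass, so there is no implicit “trusted internal network.”

```python
# Hypothetical zone-to-zone policy: explicit allows only; everything else denied.
ALLOWED_FLOWS = {
    ("web", "app", 8443),  # web tier may call the app tier's API port
    ("app", "db", 5432),   # app tier may reach the database
}

def zone_policy(src_zone: str, dst_zone: str, port: int) -> bool:
    """Default-deny between secured zones; only explicit flows are permitted."""
    return (src_zone, dst_zone, port) in ALLOWED_FLOWS

assert zone_policy("web", "app", 8443)
assert not zone_policy("web", "db", 5432)   # no direct web-to-db east-west path
assert not zone_policy("app", "web", 8443)  # flows are directional, not mutual
```

An analyst evaluating secured zones is essentially asking whether the real enforcement points (firewalls, microsegmentation agents, service mesh policies) implement this shape of policy for east-west traffic.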
A company needs to provide administrative access to internal resources while minimizing the traffic allowed through the security boundary. Which of the following methods is most secure?
A bastion host is a hardened jump point used for administrative access into internal networks. It reduces the attack surface by limiting inbound firewall rules to a single system and centralizes authentication, logging, and monitoring. Admins connect to the bastion (often via VPN/SSH/RDP with MFA) and then access internal resources from there, minimizing exposed management ports at the boundary.
A perimeter network (DMZ) is a segmented network placed between the internet and the internal network, typically used to host public-facing services. While it improves segmentation, it does not by itself provide the most secure method for administrative access or minimize boundary traffic. You would still need a controlled admin entry point (like a bastion) to avoid opening management access broadly.
A WAF (Web Application Firewall) protects web applications by filtering and monitoring HTTP/HTTPS traffic, mitigating attacks like SQL injection and XSS. It is not designed to provide administrative access to internal resources or reduce management-plane exposure. Even with a WAF, you would still need a secure remote administration approach for non-web internal systems.
Single sign-on (SSO) centralizes authentication and can improve security with stronger identity controls and reduced password sprawl. However, SSO does not inherently minimize the network traffic allowed through a security boundary or reduce exposed ports/services. It addresses who can authenticate, not how many firewall openings are required for administrative connectivity.
Core Concept: This question tests secure administrative access design at a network/security boundary. The goal is to minimize allowed inbound traffic while still enabling admins to manage internal systems. This aligns with least privilege, reduced attack surface, and controlled management-plane access.

Why the Answer is Correct: A bastion host (also called a jump host/jump box) is the most secure method here because it concentrates administrative entry into a single hardened, tightly monitored system. Instead of opening multiple firewall rules to many internal servers (e.g., SSH/RDP/WinRM to each host), you allow management traffic only to the bastion host, and then admins pivot from that controlled point to internal resources. This minimizes exposed services at the boundary and provides a single choke point for authentication, logging, and session control.

Key Features / Best Practices: A secure bastion host is typically placed in a controlled segment (often a DMZ/perimeter subnet) with strict inbound rules (e.g., only VPN-to-bastion, or only SSH from a management network), strict outbound rules to only required internal management ports, and strong hardening (patching, minimal services, host firewall). Use MFA, privileged access management (PAM), short-lived credentials, and session recording where possible. Centralize logs (SIEM), enable command auditing, and restrict admin tools to the bastion to prevent direct admin access from unmanaged endpoints.

Common Misconceptions: A perimeter network (DMZ) sounds similar, but it’s a broader architecture for hosting exposed services; it doesn’t inherently minimize admin traffic unless paired with a bastion. A WAF protects web applications (HTTP/HTTPS) and is not a general administrative access solution. Single sign-on improves authentication usability and can strengthen identity controls, but it does not reduce the number of network paths/ports exposed through the boundary.

Exam Tips: When you see “administrative access” plus “minimizing traffic through the boundary,” think “jump/bastion host” or “management plane isolation.” Look for answers that reduce exposed ports and centralize control, monitoring, and auditing. Bastion hosts are a common Security+ pattern for secure remote administration and segmentation.
A newly appointed board member with cybersecurity knowledge wants the board of directors to receive a quarterly report detailing the number of incidents that impacted the organization. The systems administrator is creating a way to present the data to the board of directors. Which of the following should the systems administrator use?
Packet captures (PCAPs) are detailed network traffic recordings used for troubleshooting, threat hunting, and forensic analysis (e.g., reconstructing sessions, identifying C2 traffic). They are not appropriate for quarterly board reporting because they are highly technical, large in volume, and not summarized into business-relevant metrics like incident counts and trends.
Vulnerability scans identify missing patches, misconfigurations, and known CVEs to measure exposure and support remediation prioritization. While scan results can feed metrics (e.g., critical vulnerabilities over time), they do not directly answer “number of incidents that impacted the organization.” Incidents are realized events; vulnerabilities are potential weaknesses.
Metadata is data about data (timestamps, source, tags, severity labels, asset identifiers). Incident metadata can be used to build reports, but metadata alone is not a reporting or visualization tool. The board needs a consumable presentation format; metadata still requires aggregation and visualization to become an executive report.
A dashboard is the best choice for presenting quarterly incident metrics to the board. It can aggregate incident data from multiple sources and display KPIs/KRIs such as incident counts, severity distribution, trends, and business impact. Dashboards are designed for leadership communication and can be scheduled/exported for recurring board reports.
Core Concept: This question tests security reporting and executive-level communication. Boards need high-level, trend-focused metrics (KPIs/KRIs) presented in a clear, repeatable format. In Security+ terms, this aligns with governance, metrics, and reporting: turning operational security data (incidents) into management-ready information.

Why the Answer is Correct: A dashboard is designed to summarize and visualize key metrics, such as the number of incidents impacting the organization, over a defined period (quarterly) and present them in an executive-friendly way. Dashboards can show totals, trends over time, severity breakdowns, business unit impacts, mean time to detect/respond, and comparisons to previous quarters. This is exactly what the board member is requesting: a periodic report that communicates outcomes and risk posture without requiring deep technical artifacts.

Key Features / Best Practices: Effective security dashboards typically:
- Aggregate data from incident response platforms, SIEM/SOAR, ticketing systems, and EDR tools.
- Provide trend charts (QoQ), severity categories, and business impact summaries.
- Use consistent definitions (what counts as an “incident,” what counts as “impact”) to avoid metric drift.
- Support role-based views: board-level (strategic) vs. SOC-level (tactical).
- Enable export/scheduled reporting for quarterly board packets.
These practices align with governance and oversight expectations (e.g., communicating risk and performance metrics to leadership).

Common Misconceptions: Packet captures and vulnerability scans are valuable security operations tools, but they are too technical or measure different things (network traffic evidence vs. exposure). “Metadata” can support reporting, but it is not itself a presentation mechanism; it’s raw descriptive data that still needs to be summarized and visualized.

Exam Tips: When the audience is executives/board, choose solutions that emphasize clarity, trends, and decision support (dashboards, scorecards, metrics reports). When the task is investigation/forensics, think packet captures/logs. When the task is finding weaknesses, think vulnerability scans. When the term is about descriptive attributes of data, think metadata, not reporting outputs.
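The aggregation step behind such a dashboard is simple to sketch: raw incident records from a ticketing system are rolled up into the counts a board slide would show. The records below are invented sample data, not real metrics.

```python
from collections import Counter

# Hypothetical incident records, as a ticketing-system export might provide them.
incidents = [
    {"quarter": "Q1", "severity": "high"},
    {"quarter": "Q1", "severity": "low"},
    {"quarter": "Q2", "severity": "high"},
    {"quarter": "Q2", "severity": "high"},
    {"quarter": "Q2", "severity": "medium"},
]

# Board-level summary: totals and trends, not raw technical artifacts.
per_quarter = Counter(i["quarter"] for i in incidents)
per_severity = Counter(i["severity"] for i in incidents)

assert per_quarter == Counter({"Q2": 3, "Q1": 2})  # quarter-over-quarter trend
assert per_severity["high"] == 3                   # severity distribution
```

A dashboard tool performs exactly this kind of roll-up continuously and renders it as trend charts and severity breakdowns for a quarterly board packet.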
An administrator notices that several users are logging in from suspicious IP addresses. After speaking with the users, the administrator determines that the employees were not logging in from those IP addresses and resets the affected users’ passwords. Which of the following should the administrator implement to prevent this type of attack from succeeding in the future?
Multifactor authentication is the best control to stop attacks that rely on stolen passwords. MFA requires at least two different factor types (know/have/are), so a compromised password alone cannot be used to log in. For Security+, recognize MFA as the primary mitigation for phishing, credential stuffing, and password reuse. Phishing-resistant MFA (FIDO2/WebAuthn) is strongest, but any MFA is better than passwords alone.
Permissions assignment (authorization) determines what an authenticated user can access (least privilege, RBAC, ACLs). It does not prevent an attacker from successfully logging in as a user if the attacker has the user’s credentials. While proper permissions can limit damage after compromise, it does not address the core issue in the scenario: unauthorized authentication from suspicious IP addresses.
Access management is a broad term covering identity lifecycle, provisioning/deprovisioning, SSO, and policy enforcement. While it can include MFA, the option is too generic compared to the direct, specific control needed here. The question asks what to implement to prevent this type of attack (credential-based unauthorized logins) from succeeding; MFA is the precise mechanism that blocks password-only authentication.
Password complexity increases resistance to brute-force guessing, but it does not stop common real-world credential compromise methods such as phishing, keylogging, database leaks, or credential stuffing. Attackers logging in from suspicious IPs typically indicates they already have valid credentials. Complexity rules also often lead to predictable patterns and reuse. MFA is the stronger, targeted mitigation for this scenario.
Core Concept: This scenario describes account compromise via stolen credentials (e.g., phishing, credential stuffing, password reuse, malware). The key control to prevent a stolen password from being sufficient to authenticate is strong authentication, most commonly multifactor authentication (MFA).

Why the Answer is Correct: Users are logging in from suspicious IP addresses and deny the activity, indicating an attacker successfully authenticated as them. Resetting passwords is a short-term containment step, but it does not address the root issue: passwords alone are a single factor and can be captured, guessed, reused, or replayed. Implementing MFA adds an additional factor (something you have/are), so even if an attacker knows the password, they still cannot complete the login without the second factor. This directly prevents the same attack from succeeding in the future.

Key Features / Best Practices: Effective MFA implementations include phishing-resistant methods (FIDO2/WebAuthn security keys, passkeys) or app-based push/TOTP as a common baseline. Pair MFA with conditional access policies (step-up MFA for risky sign-ins, impossible travel, new device, or unfamiliar geo/IP), and ensure legacy authentication protocols (e.g., basic auth/IMAP/POP without MFA) are disabled to avoid bypass. For remote access, integrate MFA with VPN/SSO and enforce device posture checks where possible.

Common Misconceptions: Password complexity may seem helpful, but complex passwords can still be phished or reused across sites; credential stuffing defeats complexity if the password is already known. Permissions assignment and access management are important IAM practices, but they primarily control what an authenticated user can do, not whether an attacker can authenticate using stolen credentials.

Exam Tips: When a question indicates unauthorized logins using valid accounts (especially from unusual IPs/locations), think “credential compromise.” The best preventive control is MFA, ideally phishing-resistant. Password resets are remediation; MFA is prevention. Also remember to consider disabling legacy auth and using risk-based/conditional access as supporting measures.
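The TOTP codes behind app-based MFA are defined by open standards (RFC 4226 HOTP and RFC 6238 TOTP) and can be implemented with the Python standard library. This is a minimal sketch for understanding the mechanism, not a production authenticator.

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over an 8-byte counter, then dynamic truncation."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # low nibble picks the offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(key: bytes, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP keyed by the current 30-second time window."""
    return hotp(key, int(time.time()) // step)

# RFC 4226 test vectors for the ASCII key "12345678901234567890".
assert hotp(b"12345678901234567890", 0) == "755224"
assert hotp(b"12345678901234567890", 1) == "287082"
```

Because the code changes every 30 seconds and derives from a shared secret the attacker never sees, a stolen password alone is no longer enough to authenticate.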
A company's marketing department collects, modifies, and stores sensitive customer data. The infrastructure team is responsible for securing the data while in transit and at rest. Which of the following data roles describes the customer?
Processor is the role/entity that performs processing activities on data (collecting, transforming, analyzing, or storing) typically on behalf of the data owner/controller. In many privacy frameworks, processors are often third parties (e.g., SaaS analytics vendors) acting under contract. The customer is not processing their own record here; they are the person the data is about.
Custodian is usually IT/infrastructure/security operations that implements and maintains the controls that protect data (encryption, access controls, backups, patching, logging). In the scenario, the infrastructure team securing data in transit and at rest fits custodian responsibilities. The customer is not responsible for safeguarding the organization’s stored data, so they are not the custodian.
Subject is the individual to whom the data pertains—the person described by the record. Since the company collects and stores sensitive customer data, the customer is the data subject. This aligns with common governance and privacy terminology: the subject is the natural person whose PII/PHI is being processed and protected.
Owner is the role accountable for the data set: classification, defining handling requirements, approving access, and ensuring compliance with policy. In many organizations, a business unit like Marketing is the data owner for customer marketing data. Although customers may have legal rights over their personal data, in Security+ role terminology they are not the organizational data owner.
Core Concept: This question tests data governance roles (data owner, custodian, processor, subject) and how they map to people and teams interacting with sensitive data. These roles are common in security programs and privacy frameworks (e.g., GDPR terminology and general data classification/handling models).

Why the Answer is Correct: The customer is the individual whose personal/sensitive data is being collected, modified, and stored. That individual is the data subject. A data subject is the person the data is about (PII/PHI/customer records). The scenario explicitly says “sensitive customer data,” so the customer is the subject of the data.

Key Features / Role Responsibilities:
- Data subject: Has rights/expectations around privacy and appropriate use; in many regulations can request access, correction, deletion, etc.
- Data owner (often a business unit like Marketing): Accountable for the data’s classification, acceptable use, retention requirements, and who should have access.
- Data custodian (often IT/Infrastructure/Security): Implements and operates the controls the owner requires: encryption at rest/in transit, backups, access control enforcement, key management, logging, and secure storage.
- Data processor: An entity that processes data on behalf of the controller/owner (often a third party, or sometimes an internal function in privacy contexts). Processing includes collecting, transforming, analyzing, or storing under instruction.

Common Misconceptions: Many learners confuse “processor” with “subject” because the marketing department “processes” the data. However, the question asks which role describes the customer, not the department. Others may pick “owner” because the customer ‘owns’ their data in a moral sense, but in Security+ governance terminology the owner is the organizational role accountable for the dataset.

Exam Tips: When a question asks about the person the data describes (customer/patient/employee), think “data subject.” When it asks who is accountable for classification and access decisions, think “data owner.” When it asks who implements protections like encryption and backups, think “custodian.” When it asks who handles data on behalf of another party (often third-party services), think “processor.”
Which of the following describes the reason root cause analysis should be conducted as part of incident response?
Gathering IoCs (hashes, domains, IPs, file paths, registry keys) is part of incident investigation and detection engineering, primarily during identification and analysis. RCA may use IoCs as evidence, but collecting IoCs is not the main reason RCA is performed. RCA focuses on underlying causes and control failures, not just observable indicators.
Discovering which systems are affected is scoping, a key incident response activity during identification/analysis and containment planning. RCA can leverage scoping results, but scoping answers “what is impacted,” while RCA answers “why it happened” and “what must change to prevent recurrence.” Therefore this is not the best reason for RCA.
Eradicating malware is a tactical incident response phase (eradication and recovery). While RCA findings can guide eradication (e.g., remove persistence mechanisms, close initial access vector), the purpose of RCA is broader than cleanup. You can eradicate malware and still be vulnerable to the same attack if the root cause remains unaddressed.
RCA is conducted to identify the underlying cause(s) and contributing factors so the organization can implement corrective actions and prevent similar incidents from happening again. This aligns with the lessons learned/continuous improvement portion of incident response and drives long-term fixes such as patching, configuration hardening, process changes, and improved monitoring.
Core Concept: Root cause analysis (RCA) is a post-incident activity within the incident response lifecycle (often aligned to NIST SP 800-61). Its purpose is to determine the underlying cause(s) of an incident (e.g., exploited vulnerability, misconfiguration, weak process controls) so the organization can implement corrective and preventive actions.

Why the Answer is Correct: The primary reason to conduct RCA is to prevent recurrence, i.e., to prevent future incidents of the same nature. RCA goes beyond identifying what happened and which indicators were observed; it focuses on why it happened and what systemic changes are needed (patching, hardening, policy/process updates, training, architectural changes). Without RCA, teams may restore services and remove malware but leave the enabling conditions in place, leading to repeat compromise.

Key Features / Best Practices: RCA typically includes timeline reconstruction, attack path analysis, control gap analysis, and validation of assumptions (e.g., which control failed: MFA not enforced, logging gaps, unpatched CVE, overly permissive IAM). Outputs often include lessons learned, updated playbooks, improved detections, and long-term remediation items tracked to closure. Effective RCA also feeds risk management and continuous improvement (e.g., updating threat models, security baselines, and change management).

Common Misconceptions: Many confuse RCA with investigation tasks like collecting indicators of compromise (IoCs) or scoping affected systems. Those are crucial during identification/analysis, but they are not the main purpose of RCA. Others equate RCA with eradication (removing malware). Eradication is a containment/eradication/recovery activity; RCA informs eradication priorities but is broader and focused on preventing recurrence.

Exam Tips: On Security+ questions, look for wording like “prevent recurrence,” “lessons learned,” “process improvement,” or “address underlying cause.” Those cues point to RCA.
If an option describes immediate tactical response (IoC collection, scoping, eradication), it’s likely not the best answer when the question asks specifically why RCA is conducted.
A security administrator needs a method to secure data in an environment that includes some form of checks to track any changes. Which of the following should the administrator set up to achieve this goal?
SPF (Sender Policy Framework) is an email authentication mechanism that helps prevent sender address spoofing by publishing authorized sending hosts in DNS. It is used to improve email trust and reduce phishing/spam. SPF does not provide file/data integrity checks or change tracking for stored data, so it does not meet the requirement to monitor and detect changes in an environment.
A GPO (Group Policy Object) is used in Windows environments to centrally enforce configuration settings (password policies, security options, software restrictions, firewall rules, etc.). While GPO can harden systems and reduce unauthorized changes by limiting permissions, it does not inherently perform integrity checking or provide continuous monitoring/alerting when files or configurations change.
NAC (Network Access Control) enforces security posture before allowing devices onto a network (e.g., checking AV status, patch levels, certificates, device compliance). NAC is about controlling and segmenting network access, not monitoring the integrity of data at rest. It may reduce risk by limiting who can connect, but it does not provide checks to track changes to files/data.
FIM (File Integrity Monitoring) is specifically designed to detect and report unauthorized changes to files, directories, registry keys, and critical system configurations. It works by creating a baseline of known-good states, often using cryptographic hashes, and then comparing current values against that baseline on a scheduled or real-time basis. This directly matches the requirement for a method that includes checks to track any changes in the environment. FIM is commonly used to support integrity monitoring, compliance requirements, and incident response by showing what changed, when it changed, and potentially who initiated the change.
Core concept: The question is testing integrity monitoring and change detection controls. In Security+ terms, this aligns with File Integrity Monitoring (FIM), which establishes a known-good baseline (often via cryptographic hashes) and then continuously or periodically checks for changes to files, configurations, or critical system objects.

Why the answer is correct: The administrator needs to “secure data” with “checks to track any changes.” That requirement maps directly to integrity assurance and auditing of modifications. FIM tools compute hashes (e.g., SHA-256) for protected files/registries/configs and alert when a file is modified, deleted, created, or its permissions change. This provides evidence of tampering and supports incident response by showing what changed and when.

Key features and best practices: FIM typically includes baselining, scheduled/real-time monitoring, alerting, and reporting. It is commonly deployed on servers, endpoints, and critical infrastructure to detect unauthorized changes (web shells, altered binaries, modified configuration files, persistence mechanisms). Best practices include: protecting the FIM database/baselines (so attackers can’t update the “known good” state), monitoring high-value paths (system binaries, application directories, authentication/authorization configs), integrating alerts with a SIEM, and tuning to reduce noise (approved change windows, allowlists).

Common misconceptions: Some may think GPO “secures data” by enforcing policies, but it doesn’t inherently track file changes. NAC controls network access posture, not data integrity. SPF relates to email sender validation and has nothing to do with tracking changes to stored data.

Exam tips: When you see wording like “detect changes,” “tamper detection,” “integrity checking,” “baseline and alert,” or “track modifications,” think FIM (and related concepts like hashing, checksums, and integrity monitoring).
If the question is about preventing modification, look for access controls; if it’s about detecting/recording modification, FIM is the best match.
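The FIM workflow described above (baseline via cryptographic hashes, then compare to detect modified, deleted, or created files) can be sketched in a few lines. This is a minimal illustration, not a real FIM product: the function names and dictionary layout are my own, and production tools add real-time monitoring, alerting, and baseline protection.

```python
import hashlib
from pathlib import Path


def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def build_baseline(root: Path) -> dict[str, str]:
    """Record a known-good hash for every file under root."""
    return {str(p): hash_file(p) for p in root.rglob("*") if p.is_file()}


def detect_changes(baseline: dict[str, str], root: Path) -> dict[str, list[str]]:
    """Compare the current state against the baseline and classify differences."""
    current = build_baseline(root)
    return {
        "modified": [p for p in baseline if p in current and current[p] != baseline[p]],
        "deleted": [p for p in baseline if p not in current],
        "created": [p for p in current if p not in baseline],
    }
```

In practice the baseline itself must be stored where an attacker cannot rewrite it (otherwise a tampered file can simply be re-baselined as “known good”), which is why the best practices above call for protecting the FIM database.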

