
Simulate the real exam experience with 90 questions and a 165-minute time limit. Practice with AI-verified answers and detailed explanations.
AI-Powered
Every answer is cross-checked by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and an in-depth analysis of each question.
A software development company wants to ensure that users can confirm the software is legitimate when installing it. Which of the following is the best way for the company to achieve this security objective?
Code signing uses a digital signature to bind the software to the publisher’s identity and to detect tampering. The vendor signs the installer/binary with a private key, and users/OS verify it using the public key and certificate chain to a trusted CA. This provides integrity and authenticity at install time and is the standard control for establishing software legitimacy.
Non-repudiation is a security property where a party cannot credibly deny an action (often achieved via digital signatures, logging, and timestamps). While code signing can support non-repudiation, the question asks how users confirm software is legitimate during installation. The practical mechanism is code signing, not the abstract concept of non-repudiation.
Key escrow is the practice of storing encryption keys with a trusted third party or internal escrow service to enable recovery (e.g., when users leave or keys are lost). It is used for data availability and compliance, not for proving that an installer is authentic. Escrow does not provide end users with a verification mechanism for software legitimacy.
Private keys are required to create digital signatures, but simply having private keys does not allow users to verify legitimacy. Verification depends on the signed artifact, a corresponding public key, and a trusted certificate chain (PKI) plus revocation checking. The correct control is the process of code signing and certificate management, not the key alone.
Core concept: This question tests software authenticity and integrity controls during distribution/installation. The primary mechanism is digital signatures applied to executables, installers, scripts, or packages using a publisher’s private key and validated with the corresponding public key via a trusted certificate chain (PKI).

Why the answer is correct: Code signing is the best way for a software company to let users confirm software is legitimate at install time. When the vendor signs the software, the installer (or OS/package manager) can verify: (1) the software has not been modified since signing (integrity) and (2) the signer’s identity is tied to a certificate issued by a trusted CA (authenticity). If malware tampers with the binary, signature verification fails. If the certificate is untrusted or revoked, users receive warnings or installation is blocked, depending on policy.

Key features / best practices:
- Use Authenticode (Windows), Apple Developer ID signing/notarization (macOS), and package signing (e.g., RPM/DEB, container image signing) as applicable.
- Protect signing private keys with HSMs, strong access controls, MFA, and separation of duties; treat signing as a high-risk operation.
- Use timestamping so signatures remain valid after certificate expiration.
- Implement certificate lifecycle management: renewal, revocation (CRL/OCSP), and monitoring for misuse.
- Integrate signing into CI/CD with controlled release pipelines and reproducible builds where possible.

Common misconceptions: Non-repudiation is a property provided by digital signatures, but it is not the concrete control users rely on during installation; code signing is the practical implementation. Key escrow relates to recovering encryption keys (often for data access), not proving software legitimacy. “Private keys” alone do nothing unless used in a signing process with verifiable certificates.
Exam tips: When you see “users can confirm software is legitimate” or “verify publisher and integrity,” think “digital signature/code signing.” If the question emphasizes “cannot deny having performed an action,” that points to non-repudiation. If it emphasizes “recover encrypted data keys,” that points to key escrow.
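The tamper-detection half of this flow can be illustrated with a short sketch. This is a simplified model using hash comparison only; real code signing additionally signs the digest with the publisher's private key and verifies it against a certificate chain, which this sketch does not attempt:

```python
import hashlib

def publish(artifact: bytes) -> str:
    # Vendor side: compute a SHA-256 digest of the release artifact.
    # In real code signing, this digest is then signed with the
    # publisher's private key and shipped with the certificate.
    return hashlib.sha256(artifact).hexdigest()

def verify(artifact: bytes, published_digest: str) -> bool:
    # User side: recompute the digest and compare. Any modification
    # to the binary after signing changes the digest and fails here.
    return hashlib.sha256(artifact).hexdigest() == published_digest

installer = b"original installer bytes"
digest = publish(installer)

assert verify(installer, digest)                    # untouched binary passes
assert not verify(installer + b"malware", digest)   # tampered binary fails
```

The asymmetric signature adds what the bare hash cannot: proof that the digest itself came from the publisher and not from whoever tampered with the file.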
A senior cybersecurity engineer is solving a digital certificate issue in which the CA denied certificate issuance due to failed subject identity validation. At which of the following steps within the PKI enrollment process would the denial have occurred?
The RA (Registration Authority) is the PKI enrollment component responsible for validating the subject's identity, checking authorization, and ensuring the request complies with certificate policy before the CA issues anything. If subject identity validation fails, the request is typically rejected at this stage and never proceeds to signing. This matches the scenario exactly because the denial reason is tied to identity proofing rather than cryptographic issuance.
The CA is the component that ultimately signs and issues certificates, and it can reject requests based on policy or workflow outcomes. However, the specific task of validating the subject's identity is classically associated with the Registration Authority or an RA-like approval process. In PKI role separation, the RA performs identity proofing while the CA performs certificate generation and signing. Therefore, the denial would have occurred at the identity-validation step before the CA issued the certificate.
An IdP provides authentication and identity assertions for access management systems such as SAML, OAuth, or OpenID Connect. Although an IdP may authenticate a user accessing an enrollment portal, it is not the standard PKI component that performs certificate subject validation and approval. PKI enrollment decisions are governed by certificate policy and RA/CA workflow, not by the IdP itself. Therefore, an IdP is not the step where certificate issuance would be denied for failed subject identity validation.
Core concept: This question tests understanding of the PKI certificate enrollment workflow and the roles involved in identity proofing and authorization. In many enterprise PKI designs, a Registration Authority (RA), whether a standalone component or an RA function built into the CA platform, performs subject identity validation and approves or rejects certificate requests before the CA issues a certificate.

Why the answer is correct: A denial due to failed subject identity validation occurs at the identity proofing/validation gate. That gate is typically the RA (or RA function), which validates the requester’s identity and entitlement (e.g., verifying a person’s identity, verifying device ownership, checking HR records, confirming domain control, or ensuring the subject DN/SAN matches policy). If validation fails, the request is rejected and never proceeds to certificate issuance. While the CA ultimately issues or refuses issuance, the specific reason given (failed subject identity validation) maps to the RA step in the enrollment process.

Key features / best practices: RA responsibilities commonly include: verifying identity per Certificate Policy (CP) and Certification Practice Statement (CPS), enforcing naming rules (DN/SAN constraints), checking authorization (who is allowed which certificate types), validating proof-of-possession (depending on protocol), and approving requests (manual or automated). In Microsoft AD CS, for example, a CA manager or an RA-like approval workflow can be required for certain templates; in many commercial/public CAs, the RA function is the validation team/process. Best practice is separation of duties: the RA validates identity; the CA signs and issues certificates.
Common misconceptions: It’s tempting to choose “CA” because the CA is the entity that “denied issuance.” However, the question ties the denial to “failed subject identity validation,” which is characteristically an RA function. OCSP is about revocation status checking after issuance, not enrollment validation. An IdP is used for federated authentication (SAML/OIDC) and is not a standard PKI enrollment validation component.

Exam tips: Associate RA with identity proofing/approval, CA with signing/issuing, OCSP with revocation checking, and IdP with federated login. When a question mentions validation of the subject/requester before issuance, think RA (or RA workflow) in the enrollment process.
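The RA-then-CA separation of duties can be modeled in a short sketch. This is a conceptual illustration only; the names and the approved-subject list are made up and do not reflect any real PKI software:

```python
# Conceptual model of the RA -> CA enrollment gate. All names here
# are illustrative, not a real PKI API.

APPROVED_SUBJECTS = {"alice@example.com", "server01.example.com"}

def ra_validate(subject: str) -> bool:
    # RA step: identity proofing / authorization against policy.
    return subject in APPROVED_SUBJECTS

def enroll(subject: str) -> str:
    # The request never reaches the CA if RA validation fails.
    if not ra_validate(subject):
        return "DENIED: failed subject identity validation (RA step)"
    # CA step: signing/issuance happens only after RA approval.
    return f"ISSUED: certificate for {subject}"

print(enroll("alice@example.com"))
print(enroll("mallory@evil.test"))
```

The point of the model is ordering: a denial for failed identity validation happens at `ra_validate`, before any signing logic runs, which is exactly the distinction the question tests.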
A company that provides services to clients who work with highly sensitive data would like to provide assurance that the data's confidentiality is maintained in a dynamic, low-risk environment. Which of the following would best achieve this goal? (Choose two.)
Installing SOAR on all endpoints is generally not how SOAR is deployed. SOAR is a centralized orchestration/automation platform that integrates with EDR, SIEM, DLP, email gateways, and cloud controls. While endpoints can run agents (EDR/DLP), “SOAR on endpoints” is not a standard confidentiality control and would not directly ensure sensitive data remains confidential.
Hashing all files supports integrity verification (detecting changes) and can help with deduplication or malware identification, but it does not provide confidentiality. Anyone with access to the file can still read it. Hashes are one-way digests; they do not encrypt content. For confidentiality requirements, encryption and access controls are the primary mechanisms.
A SIEM within a SOC improves centralized logging, correlation, alerting, and incident response. This is valuable for detection and investigation, but it is primarily a detective/monitoring control rather than a direct confidentiality control. SIEM does not inherently prevent data exposure or make data unreadable; it helps you notice issues after or during events.
Encrypting all data at rest, in transit, and in use is the strongest direct confidentiality control across the full data lifecycle. It protects stored data, data moving across networks, and (with confidential computing/TEEs) data being processed in memory. Combined with strong key management (KMS/HSM, rotation, least privilege), it provides high assurance that unauthorized parties cannot access plaintext.
Configuring SOAR to monitor and intercept files/data leaving the network provides automated enforcement against exfiltration. In practice this is achieved by orchestrating DLP/CASB/SWG/email security/EDR actions: blocking uploads, quarantining attachments, terminating sessions, or requiring approvals. This is well-suited to dynamic environments because playbooks apply policy consistently and quickly, reducing the window of exposure.
File integrity monitoring (FIM) detects unauthorized changes to files and critical system objects, supporting integrity and compliance (e.g., baselines, change control). However, it does not prevent unauthorized reading or disclosure of data. FIM may alert on suspicious modifications, but confidentiality is better addressed through encryption and exfiltration controls.
Core concept: The question is about maintaining confidentiality for highly sensitive data in a dynamic, low-risk environment. The best controls are those that (1) make data unreadable to unauthorized parties across its full lifecycle and (2) prevent or stop unauthorized exfiltration. This maps to data protection (encryption) plus data loss prevention-style monitoring and response.

Why the answers are correct: D is correct because encrypting data at rest, in transit, and in use provides end-to-end confidentiality. At-rest encryption (e.g., full-disk/database/object storage encryption) protects against lost media, stolen disks, and unauthorized storage access. In-transit encryption (e.g., TLS, IPsec, mTLS) protects against interception and man-in-the-middle attacks. Encryption “in use” (e.g., confidential computing/TEEs, memory encryption, enclave-based processing) addresses modern threats such as memory scraping and hypervisor/host compromise, and reduces exposure during processing, which matters in “dynamic” environments like cloud and containerized workloads. E is correct because configuring SOAR to monitor and intercept data leaving the network is effectively implementing automated detection and response for potential data exfiltration. In practice, this integrates signals from DLP/CASB/secure web gateway/email security/EDR and triggers playbooks to quarantine files, block transfers, revoke tokens, or require step-up authentication. This reduces risk by enforcing policy consistently and quickly, which is crucial when users, endpoints, and workloads change frequently.

Key features / best practices: Use strong cryptography and key management (KMS/HSM), separation of duties, key rotation, and least-privilege access to keys. Ensure TLS everywhere with modern ciphers and certificate lifecycle management. For “in use,” consider TEEs (e.g., Intel SGX/TDX, AMD SEV, cloud confidential VMs) and protect secrets in memory. For SOAR interception, integrate with DLP classifiers, content inspection, UEBA, and ticketing; implement well-tested playbooks with approvals for high-impact actions.

Common misconceptions: Hashing and file integrity monitoring help detect tampering (integrity), not confidentiality. SIEM/SOC improves visibility and response but does not inherently keep data confidential. Installing SOAR on endpoints is not typical; SOAR orchestrates tools rather than acting as an endpoint control.

Exam tips: When the requirement is confidentiality, prioritize encryption across the data lifecycle and controls that prevent exfiltration (DLP/CASB/SWG) with automation. If the question emphasizes “assurance” and “low-risk,” look for preventative controls plus rapid enforcement and response rather than purely detective logging.
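The SOAR enforcement pattern described above can be sketched as a simple playbook decision function. This is a toy model; the event fields, classification names, and action labels are illustrative assumptions, not a real SOAR product API:

```python
# Toy model of a SOAR exfiltration playbook: decide an automated
# action for a file leaving the network based on DLP classification.
# Field names and actions are illustrative assumptions.

def playbook_action(event: dict) -> str:
    classification = event.get("classification", "unknown")
    destination = event.get("destination", "")
    if classification in ("confidential", "restricted"):
        # High-sensitivity data leaving the network: block and quarantine.
        return "block_and_quarantine"
    if classification == "internal" and not destination.endswith(".example.com"):
        # Internal data headed to an external destination: hold for approval.
        return "hold_for_approval"
    return "allow"

print(playbook_action({"classification": "confidential",
                       "destination": "files.dropbox.com"}))
print(playbook_action({"classification": "public",
                       "destination": "cdn.example.com"}))
```

In a real deployment the same decision logic would be expressed as a playbook that calls DLP/CASB/EDR APIs to quarantine files or terminate sessions, rather than returning a string.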
An organization wants to implement an access control system based on its data classification policy that includes the following data types:
- Confidential
- Restricted
- Internal
- Public
- Flag for Review

The access control system should support SSO federation to map users into groups. Each group should only access systems that process and store data at the classification assigned to the group. Which of the following should the organization implement to enforce its requirements with a minimal impact to systems and resources?
Correct. Tagging resources with classification labels creates consistent, machine-readable metadata. ABAC uses those attributes plus identity attributes from federated SSO (claims/groups) to make centralized authorization decisions. This minimizes changes to each system because enforcement can occur at shared layers (IAM, gateways, proxies) and policies can be updated centrally without redesigning application roles.
Incorrect. RBAC tied to HR roles is good for job-function access, but it doesn’t naturally bind access to the classification of each system/resource unless every system is redesigned to implement the same role model. It also struggles with exceptions like “Flag for Review” and with resources whose classification changes over time, leading to role explosion or inconsistent enforcement.
Incorrect. Microsegmentation and NAC can restrict network paths, but they don’t enforce object-level or application-level authorization to ensure users only access data at a given classification. Many systems process multiple classifications within the same network zone, and NAC decisions based on user principal won’t reliably map to specific data stores/objects without additional application/IAM controls.
Incorrect. Managing rule-based access per system via SSO/LDAP increases administrative overhead and risks inconsistent policy implementation across environments. It also has higher operational impact because each system needs its own rule set and integration nuances. This approach doesn’t scale well for enterprise-wide classification enforcement compared to centralized ABAC with resource tags.
Core concept: This question tests how to enforce data-classification-driven access control across many systems with minimal changes, while using SSO federation to map users into groups. The best architectural fit is centralized policy enforcement using attributes (ABAC) derived from identity (user/group) and resource metadata (classification tags).

Why the answer is correct: Option A combines (1) a consistent tagging strategy on resources (e.g., Confidential/Restricted/Internal/Public/Flag for Review) and (2) an attribute-based access control engine that evaluates access requests based on those tags and identity attributes from the federated SSO. This approach minimizes per-application rework because the authorization decision can be made by a centralized policy decision point (PDP) and enforced by a policy enforcement point (PEP) at common control layers (SSO proxy, API gateway, reverse proxy, service mesh, cloud IAM, or storage front doors). Systems only need to expose resource tags/metadata and integrate with the enforcement layer, rather than implementing custom logic everywhere.

Key features / best practices:
- Standardize classification tags and ensure they are mandatory on resources (infrastructure, storage buckets, databases, SaaS objects where possible).
- Use SSO federation claims (SAML/OIDC) to pass user attributes (group, department, clearance level) to the PDP.
- Centralize policies such as: allow if user.classification_clearance >= resource.classification AND resource.review_flag != true (or require an additional workflow for “Flag for Review”).
- Automate tag enforcement with IaC guardrails and continuous compliance checks.

Common misconceptions: RBAC (option B) seems natural because “groups” are mentioned, but mapping data types to HR roles doesn’t ensure resources are consistently protected, and it often requires application-by-application role modeling.
Network-only controls (option C) can’t reliably enforce access to specific data objects within systems. Rule-based per-system policies (option D) increase operational overhead and inconsistency.

Exam tips: When you see “data classification,” “minimal impact,” and “many systems,” think “resource tagging + ABAC/policy-as-code.” RBAC is best for coarse job functions; ABAC scales better for data-centric controls and heterogeneous environments.
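The centralized ABAC policy described above can be sketched as a small decision function. The attribute names mirror the example policy (clearance vs. classification, plus a review flag); they are illustrative, not a real PDP API:

```python
# Sketch of a centralized ABAC policy decision point (PDP).
# Classification levels and attribute names are illustrative.

LEVELS = {"public": 0, "internal": 1, "restricted": 2, "confidential": 3}

def pdp_allow(user: dict, resource: dict) -> bool:
    # Identity attributes would arrive via federated SSO claims
    # (SAML/OIDC); resource attributes come from classification tags.
    clearance = LEVELS[user["clearance"]]
    classification = LEVELS[resource["classification"]]
    # "Flag for Review" resources are denied here; a real deployment
    # would route them to an additional approval workflow instead.
    if resource.get("review_flag"):
        return False
    return clearance >= classification

alice = {"clearance": "restricted"}
print(pdp_allow(alice, {"classification": "internal"}))      # clearance covers it
print(pdp_allow(alice, {"classification": "confidential"}))  # clearance too low
```

Because the policy lives in one place, adding a new system means tagging its resources and pointing its enforcement layer at the PDP, not re-modeling roles per application.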
An analyst needs to evaluate all images and documents that are publicly shared on a website. Which of the following would be the best tool to evaluate the metadata of these files?
OllyDbg is a Windows debugger primarily used for analyzing and reverse engineering executable binaries at runtime (breakpoints, stepping through code, inspecting registers/memory). While it can help understand how a program behaves, it is not designed to extract or report metadata from images and documents in bulk. It’s the wrong tool for a website-wide metadata review of public files.
ExifTool is a dedicated metadata extraction and manipulation utility that supports a broad range of file formats, including images and many document types. It can read EXIF/XMP/IPTC and application-specific properties (author, software, timestamps, GPS). It’s ideal for batch analysis of publicly shared website files to identify sensitive metadata disclosures and produce structured reports.
Volatility is a memory forensics framework used to analyze RAM captures (processes, DLLs, network connections, injected code, registry artifacts in memory). It is excellent for incident response involving memory dumps, but it does not target file metadata extraction from images/documents hosted on a website. The artifact type in the question is files, not memory.
Ghidra is a reverse engineering suite used for disassembling/decompiling binaries and analyzing compiled code. It’s appropriate for malware analysis and vulnerability research in executables, not for extracting EXIF/XMP or document properties from large sets of web-hosted images and documents. It would be inefficient and misaligned with the stated goal.
Core Concept: This question tests knowledge of metadata analysis for publicly accessible files (images/documents) as part of security reconnaissance and investigations. Metadata can reveal sensitive information such as usernames, hostnames, internal paths, GPS coordinates, software versions, document authorship, and timestamps, which is useful for OSINT, incident response, and data leakage assessments.

Why the Answer is Correct: ExifTool is the best tool listed for evaluating metadata across a wide range of file types, including common image formats (JPG/PNG/TIFF) and many document formats (PDF, Office files, etc.). An analyst tasked with reviewing “all images and documents publicly shared on a website” needs a tool that can reliably extract, normalize, and report metadata at scale. ExifTool is purpose-built for reading/writing metadata, supports hundreds of formats, and is commonly used in forensic and security workflows to identify unintended disclosures.

Key Features / Best Practices: ExifTool can recursively scan directories, output results in structured formats (e.g., JSON/CSV), and extract EXIF, XMP, IPTC, and file header metadata. It can also identify embedded thumbnails, GPS tags, and application-specific fields (e.g., Microsoft Office properties). Best practice is to automate collection (download public files, then batch-run ExifTool), preserve originals (read-only analysis), and focus on high-risk fields (Author, Creator, LastSavedBy, Company, GPSLatitude/Longitude, Software, Producer, PDF Creator, internal file paths).

Common Misconceptions: Students may confuse “metadata” with “memory artifacts” or “reverse engineering.” Volatility is for RAM analysis, not file metadata. OllyDbg and Ghidra are for debugging/disassembly of binaries; they do not efficiently extract EXIF/XMP/Office/PDF metadata from large sets of web-hosted documents.
Exam Tips: When you see keywords like “images,” “documents,” “publicly shared,” and “evaluate metadata,” think ExifTool (or similar metadata parsers). If the prompt mentions memory dumps, think Volatility. If it mentions malware reversing, binaries, or assembly, think Ghidra/OllyDbg. Map the tool to the artifact type and the analyst’s goal (data leakage/OSINT vs. reverse engineering vs. memory forensics).
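A typical workflow is to batch-run ExifTool with JSON output (e.g., `exiftool -r -json <dir>`) and then triage the results for high-risk fields. The triage step can be sketched as follows; the sample record below is fabricated for illustration, and the field list is an assumption about what an analyst would flag:

```python
import json

# Fields commonly reviewed for unintended disclosure (illustrative list).
HIGH_RISK_FIELDS = {"Author", "Creator", "LastSavedBy", "Company",
                    "GPSLatitude", "GPSLongitude", "Software", "Producer"}

def triage(exiftool_json: str) -> dict:
    # Parse ExifTool's -json output and collect high-risk metadata
    # per file for analyst review.
    findings = {}
    for record in json.loads(exiftool_json):
        hits = {k: v for k, v in record.items() if k in HIGH_RISK_FIELDS}
        if hits:
            findings[record.get("SourceFile", "?")] = hits
    return findings

# Fabricated sample of the kind of list `exiftool -r -json` emits.
sample = json.dumps([
    {"SourceFile": "report.pdf", "Author": "jdoe", "Producer": "LibreOffice"},
    {"SourceFile": "logo.png"},
])
print(triage(sample))
```

Running against real output is the same call: capture `exiftool -r -json` on the downloaded files and feed the text to `triage`, keeping the originals read-only.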
An organization's board of directors has asked the Chief Information Security Officer to build a third-party management program. Which of the following best explains a reason for this request?
Risk transference is not the best answer because a third-party management program is primarily established to identify, assess, and monitor risks introduced by vendors and suppliers, not simply to shift liability through contracts or insurance. While contracts, indemnification clauses, and cyber insurance can allocate some financial consequences, the organization still retains accountability for regulatory compliance, data protection, and operational resilience. In practice, boards request these programs to gain oversight into third-party exposure and dependencies across the supply chain. Therefore, risk transference may be one component of vendor governance, but it does not best explain the board’s reason for requesting a full third-party management program.
Supply chain visibility is the best reason: organizations need to know which third parties exist, what they access, how critical they are, and what risks they introduce (including fourth-party dependencies). A third-party management program creates governance processes for inventory, tiering, due diligence, and ongoing monitoring, enabling leadership to make informed risk decisions about the extended enterprise.
Support availability focuses on whether a vendor can provide adequate help desk, maintenance, and uptime commitments (SLAs). While SLAs are included in vendor contracts, they are not the primary driver for a third-party risk management program. TPRM is broader, covering security controls, compliance, data handling, incident notification, and systemic supply chain risk.
Vulnerability management is primarily an internal security operations function involving asset discovery, scanning, prioritization, patching, and remediation tracking. Third-party management may assess a vendor’s vulnerability management maturity, but it is not itself a vulnerability management program. The board’s concern is typically oversight of external risk exposure rather than internal patching processes.
Core concept: A third-party management program (also called third-party risk management/TPRM or vendor risk management) is a governance and risk function used to identify, assess, monitor, and control risks introduced by external parties (vendors, suppliers, MSPs, SaaS providers, contractors). It focuses on due diligence, ongoing monitoring, contract controls, and assurance activities across the vendor lifecycle.

Why the answer is correct: The board’s request is best explained by the need for supply chain visibility. Modern organizations rely heavily on third parties for critical services and data processing, and those relationships create “extended enterprise” risk. A TPRM program provides structured visibility into who the third parties are, what data/systems they touch, their security posture, their subcontractors (fourth parties), and how disruptions or breaches could impact the organization. This visibility supports governance oversight, regulatory expectations, and informed risk decisions.

Key features / best practices: A mature program includes: vendor inventory and tiering (criticality based on data sensitivity and business impact), due diligence (security questionnaires, SOC 2/ISO 27001 evidence, financial/BCP reviews), contract requirements (SLAs, right-to-audit, breach notification timelines, data handling, encryption, subcontractor controls), continuous monitoring (threat intel, security ratings, periodic reassessments), and offboarding controls (data return/destruction). Framework alignment often maps to NIST SP 800-161 (supply chain risk management) and NIST CSF/800-53 control families for oversight.

Common misconceptions: “Risk transference” is tempting because contracts and cyber insurance can shift some financial liability, but you cannot fully transfer accountability for data protection and operational resilience; regulators and customers still hold the organization responsible. “Vulnerability management” is an internal operational program (scanning/patching) and only partially overlaps with vendor assessments. “Support availability” (vendor support/SLAs) is a procurement/operations concern and is only one small component of third-party risk.

Exam tips: When you see board-level requests for vendor/third-party programs, think governance, oversight, and supply chain/extended enterprise risk. Look for keywords like vendor inventory, due diligence, ongoing monitoring, fourth parties, and regulatory expectations: these point to supply chain visibility as the primary driver.
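The inventory-and-tiering step described above can be sketched as a simple scoring function. The criteria weights and thresholds are illustrative assumptions, not a standard:

```python
# Toy vendor-tiering model: criticality derived from data sensitivity
# and business impact, as described above. Weights and thresholds
# are illustrative assumptions.

def vendor_tier(handles_sensitive_data: bool, business_critical: bool,
                has_subcontractors: bool) -> str:
    score = (2 * handles_sensitive_data
             + 2 * business_critical
             + 1 * has_subcontractors)
    if score >= 4:
        return "tier-1"   # full due diligence + continuous monitoring
    if score >= 2:
        return "tier-2"   # periodic reassessment
    return "tier-3"       # baseline contract controls

print(vendor_tier(True, True, True))     # tier-1
print(vendor_tier(False, True, False))   # tier-2
print(vendor_tier(False, False, False))  # tier-3
```

The tier then drives the depth of due diligence and monitoring, which is how a program scales oversight across hundreds of vendors without assessing every one equally.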
A company is rewriting a vulnerable application and adding the mprotect() system call in multiple parts of the application's code that was being leveraged by a recent exploitation tool. Which of the following should be enabled to ensure the application can leverage the new system call against similar attacks in the future?
TPM (Trusted Platform Module) provides hardware-backed key storage, measured boot, and attestation. It helps ensure system integrity and protects secrets (e.g., BitLocker keys), but it does not control whether memory pages are executable at runtime. TPM would not make mprotect() more effective against memory-corruption exploits; it addresses a different threat class (boot/rootkit and credential/key protection).
Secure boot ensures only trusted, signed bootloaders/kernels/drivers are loaded during startup, preventing bootkits and some rootkits. While it improves platform integrity, it does not enforce per-page execute permissions or prevent injected code from running in a vulnerable process. Secure boot is about pre-OS and early OS trust, not runtime exploit mitigation within an application’s memory space.
NX bit (DEP) is the correct control because it enables hardware-enforced non-executable memory pages. With NX enabled, the OS can honor mprotect() permission changes and prevent execution from stack/heap/data pages, blocking many code-injection techniques and supporting W^X. This is a foundational mitigation for memory-corruption exploits and directly aligns with the purpose of adding mprotect() calls.
An HSM (Hardware Security Module) protects cryptographic keys and performs crypto operations in tamper-resistant hardware. It is used for PKI, code signing, TLS key protection, and regulated environments. It does not affect memory page permissions, process execution, or exploit mitigations like DEP/NX. Therefore, it would not help the application leverage mprotect() against similar attacks.
Core Concept: This question tests exploit-mitigation controls related to memory protection. The mprotect() system call is used by applications to change memory page permissions at runtime (e.g., making a region non-executable or read-only). A common class of attacks (buffer overflows, code injection, ROP staging) relies on executing injected code from writable memory regions such as the stack or heap.

Why the Answer is Correct: Enabling the NX bit (No-eXecute), also called DEP (Data Execution Prevention) on many platforms, allows the CPU/MMU to enforce per-page execute permissions. When NX is enabled, pages marked as data (stack/heap) cannot be executed. This directly complements mprotect(): the application can explicitly set pages to non-executable after use (W^X policy: writable XOR executable), and the OS/hardware will actually enforce it. Without NX support, marking pages non-executable is ineffective because the processor cannot prevent execution from those pages.

Key Features / Best Practices:
- NX/DEP is a hardware-backed control (page table “execute disable” bit) used by the OS kernel.
- Works best with ASLR and stack canaries; together they raise the cost of exploitation.
- Supports W^X: memory should never be writable and executable at the same time. Applications may temporarily mprotect() to add execute permission for JIT code, then remove write permission.
- Ensure the OS and CPU support NX, and that the kernel/boot configuration enables it (e.g., DEP “AlwaysOn” on Windows; NX/exec-shield settings on Linux).

Common Misconceptions: Secure boot and TPM are often associated with “security hardening,” but they protect boot integrity and key storage/attestation, not runtime memory execution permissions. HSMs protect cryptographic keys and operations, unrelated to preventing code execution from data pages.

Exam Tips: When you see mprotect(), mmap(), stack/heap execution, buffer overflow, or “mark memory non-executable,” think NX/DEP.
If the question is about ensuring the system call’s memory-permission changes are enforceable, the required underlying control is the CPU/OS execute-disable feature (NX bit).
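The W^X policy that NX enforcement makes meaningful can be modeled conceptually. This is a pure-Python illustration of the permission rules only; it does not call the real mprotect(), and the constants simply mirror the POSIX PROT_* flag values:

```python
# Conceptual model of W^X page permissions. In a real process,
# mprotect() requests these permission changes and an NX/DEP-capable
# MMU enforces them; this sketch only models the policy check.

PROT_READ, PROT_WRITE, PROT_EXEC = 1, 2, 4  # mirrors POSIX flag values

def violates_wx(prot: int) -> bool:
    # W^X: a page must never be writable and executable at once.
    return bool(prot & PROT_WRITE) and bool(prot & PROT_EXEC)

def mprotect_model(page: dict, new_prot: int) -> bool:
    # Reject permission combinations that break W^X; otherwise apply.
    if violates_wx(new_prot):
        return False
    page["prot"] = new_prot
    return True

jit_page = {"prot": PROT_READ | PROT_WRITE}            # writable while code is emitted
assert mprotect_model(jit_page, PROT_READ | PROT_EXEC)  # flip to read+execute: allowed
assert not mprotect_model(jit_page, PROT_WRITE | PROT_EXEC)  # W+X at once: rejected
```

The JIT pattern in the comments matches the best-practice note above: write the code into a writable page, then mprotect() it to executable-but-not-writable before running it.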
Which of the following items should be included when crafting a disaster recovery plan?
Redundancy is an important consideration when crafting a disaster recovery plan because it provides alternate components or paths that can be used when primary systems fail. Examples include redundant network links, replicated storage, clustered servers, and secondary recovery sites. These design choices directly affect how quickly services can be restored and whether recovery objectives can be met. While redundancy alone is not the entire DRP, it is a legitimate and common element of disaster recovery strategy.
Testing exercises are a core DRP component because they validate that documented procedures, staffing, access, backups, and failover/failback steps actually work. Exercises uncover hidden dependencies, outdated assumptions, and operational gaps that would otherwise remain unnoticed until a real disaster occurs. Common test types include tabletop exercises, simulations, parallel tests, and full interruption tests. A mature DRP includes test frequency, scope, success criteria, and after-action improvements.
Autoscaling is primarily a cloud elasticity feature that adjusts resources based on workload demand. It helps with performance and capacity management, but it does not by itself define how an organization will recover systems, data, or operations after a disaster. Although it may contribute to resilience in some architectures, it is not a standard DRP element in the same way as redundancy or testing. The exam expects candidates to distinguish operational recovery planning from routine scaling features.
Competitor locations are not relevant to disaster recovery planning. DR site selection is based on your organization’s own risk analysis, geographic separation, regional threats, regulatory requirements, and technical recovery needs. Knowing where competitors operate does not help restore your systems or protect your data during a disaster. This option is a distractor because it sounds location-related but has no practical role in a DRP.
Core Concept: A disaster recovery plan (DRP) defines how an organization will restore critical systems and operations after a disruptive event. When crafting a DRP, organizations should include both preventive/resilience measures, such as redundancy, and validation activities, such as testing exercises. A complete DRP is not just a document of recovery steps; it also accounts for the infrastructure and processes needed to make recovery possible.

Why the Answer is Correct: Redundancy should be considered in a DRP because alternate systems, network paths, storage, and sites are common mechanisms used to maintain or restore service after a failure. Testing exercises must also be included because they validate that the recovery procedures, personnel, dependencies, and technologies actually work under realistic conditions. Together, these elements support both preparedness and recoverability.

Key Features / Best Practices: A strong DRP includes recovery strategies, backup and restoration procedures, alternate processing capabilities, redundant components where appropriate, communication plans, roles and responsibilities, and a schedule for regular testing. Testing may include tabletop exercises, simulations, and failover drills. Redundancy may involve duplicate hardware, clustered services, secondary sites, or replicated data depending on business requirements and recovery objectives.

Common Misconceptions: Some candidates confuse DRP content with only procedural documentation and overlook technical recovery strategies like redundancy. Others assume autoscaling is a DR requirement, but it is mainly a cloud elasticity feature for handling workload demand rather than disaster recovery. Competitor locations are irrelevant because DR planning is based on your own operational, geographic, and risk considerations.

Exam Tips: On CAS-005, distinguish between features that support recovery and items that are unrelated to recovery planning. If an option directly improves recoverability or validates the plan, it is likely part of DR planning. Be cautious of distractors that describe performance optimization or irrelevant business intelligence rather than continuity or recovery capabilities.
A security architect wants to ensure a remote host's identity and decides that pinning the X.509 certificate to the device is the most effective solution. Which of the following must happen first?
DER is just a binary encoding format for ASN.1 objects (including X.509 certificates). While certificates can be stored/transmitted in DER or PEM, the encoding choice is not the prerequisite for pinning. Pinning relies on a stable identifier (fingerprint/SPKI hash) and a trusted acquisition process, not a specific encoding rule.
You do not extract a private key from a certificate. Standard X.509 server certificates contain the public key and identity attributes; the private key is generated and stored securely on the server (or HSM) and should never be distributed to clients. Pinning validates the server’s public identity, so private key extraction is both incorrect and insecure.
Correct. The first step in effective certificate pinning is securely obtaining the legitimate certificate (or public key/SPKI hash) via an out-of-band trusted channel. This prevents an attacker from intercepting the initial retrieval and causing the device to pin a malicious certificate, which would permanently trust the attacker until remediation.
Comparing the retrieved certificate to an embedded/pinned certificate is part of the runtime validation step after pinning is established. However, it cannot happen first because you need a trustworthy pinned value already on the device. The question asks what must happen first to make pinning effective: secure initial acquisition (out-of-band).
Core concept: This question tests certificate pinning and trust bootstrapping. Certificate pinning means a client/device stores (“pins”) an expected server X.509 certificate (or public key/SPKI hash) and, on future connections, rejects servers that present anything else—even if a public CA would otherwise validate it. Pinning is used to mitigate CA compromise, rogue/intercepting proxies, and some man-in-the-middle (MITM) scenarios.

Why the answer is correct: Before you can pin a remote host’s certificate to a device, you must obtain the correct certificate (or its public key hash) in a way that is not itself vulnerable to MITM. If you fetch the certificate over the same untrusted network you are trying to secure, an attacker could present a fraudulent certificate and you would “pin” the attacker’s identity. Therefore, the first required step is to obtain the certificate via an out-of-band trusted method (e.g., preloading during manufacturing/MDM enrollment, scanning a QR code from a trusted admin console, using a secure internal distribution channel, or retrieving it from a known-good repository with independent verification).

Key features / best practices: Pinning typically stores a certificate fingerprint (SHA-256) or SPKI pin rather than the whole certificate to reduce brittleness. Operationally, you should plan for certificate rotation (pin multiple valid keys/certs, include backup pins, and define update mechanisms). In enterprise settings, pinning is often implemented during device provisioning (gold image, MDM profile) so the trust anchor is established before the device is exposed to hostile networks.

Common misconceptions: A common trap is thinking the “first” step is comparing the retrieved certificate to the embedded one. That comparison is essential, but it presumes you already have a trustworthy embedded/pinned certificate. Another misconception is focusing on certificate encoding (DER) or private key handling; pinning validates the server’s presented public identity and never involves extracting private keys.

Exam tips: Look for wording like “must happen first” and “ensure identity.” For pinning, the critical idea is secure initial acquisition (trust-on-first-use is risky unless the first use is trusted). If the question emphasizes preventing MITM during setup, the correct choice is usually an out-of-band or pre-provisioned trust method.
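The runtime comparison step can be sketched as follows. This is a minimal, hypothetical illustration: the pinned SHA-256 fingerprint is assumed to have been obtained out-of-band during provisioning, and der_cert_bytes is a placeholder standing in for the DER certificate the peer presents during the TLS handshake (in real Python code, SSLSocket.getpeercert(binary_form=True) returns those DER bytes).

```python
# Sketch of certificate-pin verification against an out-of-band fingerprint.
import hashlib
import hmac  # hmac.compare_digest gives a constant-time comparison

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate, hex-encoded."""
    return hashlib.sha256(der_cert).hexdigest()

def is_pinned(der_cert: bytes, pinned_fingerprint: str) -> bool:
    """Accept the peer only if its certificate matches the pinned value."""
    return hmac.compare_digest(fingerprint(der_cert), pinned_fingerprint)

# Placeholder bytes for illustration only; not a real certificate.
der_cert_bytes = b"placeholder-der-certificate-bytes"

# The pinned value is computed once from the known-good certificate and
# distributed to devices over a trusted channel, never fetched in-band.
pinned = fingerprint(der_cert_bytes)

print(is_pinned(der_cert_bytes, pinned))    # legitimate cert: True
print(is_pinned(b"attacker-cert", pinned))  # anything else: False
```

Note how the ordering falls out of the code: is_pinned() is useless until pinned holds a trustworthy value, which is exactly why secure out-of-band acquisition must happen first.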
A security analyst is using data provided from a recent penetration test to calculate CVSS scores to prioritize remediation. Which of the following metric groups would the analyst need to determine to get the overall scores? (Choose three.)
Temporal is one of the three CVSS metric groups. It refines the Base score using time-dependent factors such as Exploit Code Maturity, Remediation Level, and Report Confidence (CVSS v3.1). These values change as exploits become public, patches are released, or confidence in the vulnerability details improves. Temporal is required (along with Base and Environmental) to produce an overall score beyond Base alone.
Availability is not a CVSS metric group. It is part of the impact metrics (Availability Impact) within the Base group (and can be modified in Environmental as Modified Availability Impact). While it influences the score, it is only one component and does not represent a full group needed to compute the overall CVSS score.
Integrity is not a CVSS metric group. It is an impact metric (Integrity Impact) within the Base group and can be adjusted in Environmental as Modified Integrity Impact. It helps quantify the consequence of exploitation, but by itself it is not one of the three top-level groups (Base, Temporal, Environmental) used to calculate the overall score.
Confidentiality is not a CVSS metric group. It is an impact metric (Confidentiality Impact) within the Base group and may be modified in Environmental as Modified Confidentiality Impact. It contributes to the impact sub-score but does not represent a full metric group required for the overall CVSS calculation.
Base is a required CVSS metric group and forms the foundation of scoring. It captures intrinsic exploitability and impact characteristics that do not change across time or organizations. In CVSS v3.1, Base includes metrics like Attack Vector, Attack Complexity, Privileges Required, User Interaction, Scope, and the CIA impacts. Without Base, no CVSS score can be computed.
Environmental is one of the three CVSS metric groups and is used to tailor scoring to a specific organization. It accounts for the importance of the affected system (security requirements) and allows modified base metrics to reflect local conditions and compensating controls. Environmental scoring is essential when the question asks for “overall scores” used for remediation prioritization in a specific environment.
Impact is not a CVSS metric group. It is a concept and a sub-score component within the Base group (impact sub-score derived from Confidentiality, Integrity, and Availability impacts, plus Scope considerations). Because the question asks for metric groups, the correct selections are Base, Temporal, and Environmental—not the impact component itself.
Attack vector is not a CVSS metric group; it is a single Base metric (AV) that describes how remotely or locally an attacker can exploit the vulnerability (e.g., Network, Adjacent, Local, Physical in CVSS v3.1). It is important for scoring but is only one input within the Base group, not a top-level group needed for the overall score.
Core concept: This question tests knowledge of the CVSS (Common Vulnerability Scoring System) metric groups used to compute an overall vulnerability score for prioritization. CVSS (commonly v3.1 in many programs, with v4.0 increasingly referenced) organizes metrics into groups that roll up into a final score.

Why the answer is correct: To calculate the overall CVSS score, an analyst must determine the three CVSS metric groups: Base, Temporal, and Environmental.
- Base metrics represent intrinsic characteristics of the vulnerability that are constant over time and across environments (e.g., how it can be exploited and the inherent impact).
- Temporal metrics adjust the score based on factors that change over time, such as exploit maturity and availability of fixes.
- Environmental metrics tailor the score to a specific organization by incorporating the importance of impacted systems and the presence of compensating controls.
Together, these three groups produce the overall score used for remediation prioritization.

Key features / best practices:
- Base is mandatory and should be derived from pen test findings and technical validation (attack vector, complexity, privileges required, user interaction, scope, and CIA impacts).
- Temporal should be updated as exploit code becomes available or as vendor patches are released.
- Environmental should reflect business context (asset criticality, modified impact metrics, and control strength) and is often the differentiator between “patch now” and “patch in next cycle.”

Common misconceptions: Options like Confidentiality/Integrity/Availability, Attack vector, and Impact are real CVSS components, but they are not metric groups; they are individual metrics (or subcomponents) within the Base (and sometimes modified within Environmental). Selecting them confuses “metric groups” with “metrics.”

Exam tips: Remember the three CVSS metric groups as “BTE”: Base (intrinsic), Temporal (time-sensitive), Environmental (org-specific). If the question asks for “overall score,” it’s pointing to these groups, not individual metrics like AV or CIA. Also note that many tools default to Base score only unless Temporal/Environmental inputs are provided—so the question explicitly asking for overall scores is a clue that all three groups are needed.
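The roll-up from individual metrics to metric groups can be made concrete with a short calculation. This is a minimal sketch of the CVSS v3.1 arithmetic for the scope-unchanged case only, using the metric weights and Roundup function from the FIRST v3.1 specification; Environmental scoring is omitted for brevity, and Temporal is shown with its default “Not Defined” multipliers of 1.0.

```python
# CVSS v3.1 sketch (scope unchanged): Base metrics roll up into a Base
# score, which the Temporal group then refines. Weights per the v3.1 spec.
import math

AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges (scope U)
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest one-decimal value >= x."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

def temporal_score(base, e=1.0, rl=1.0, rc=1.0):
    # Exploit Code Maturity, Remediation Level, Report Confidence.
    # The 1.0 defaults ("Not Defined") leave the Base score unchanged.
    return roundup(base * e * rl * rc)

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (a classic critical vector)
b = base_score("N", "L", "N", "N", "H", "H", "H")
print(b)  # 9.8
# Functional exploit (E=0.97) and a temporary fix (RL=0.96) lower it:
print(temporal_score(b, e=0.97, rl=0.96))  # 9.2
```

This makes the exam point visible in code: AV, AC, and the CIA values are single inputs inside the Base group, while Base, Temporal, and (in a fuller implementation) Environmental are the groups that actually produce the overall score.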