
Simulate the real exam experience with 100 questions and a 120-minute time limit. Practice with AI-verified answers and detailed explanations.
AI-Powered
Every answer is cross-checked by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and an in-depth analysis of every question.
What provides visibility and awareness into what is currently occurring on the network?
CMX (Cisco Connected Mobile Experiences) is primarily a wireless location and analytics solution. It provides visibility into Wi-Fi client presence, movement, and engagement (e.g., location-based services), not comprehensive awareness of overall network operations and security events across routing, switching, and security infrastructure. It’s valuable for wireless analytics use cases, but it is not the general mechanism for “what is currently occurring on the network.”
WMI (Windows Management Instrumentation) is a Microsoft framework for managing and monitoring Windows endpoints and servers. It can provide visibility into host processes, services, and system metrics, but it is not a network visibility technology for Cisco network devices. In SCOR context, WMI is more aligned with endpoint/host management rather than network-wide operational awareness.
Cisco Prime Infrastructure is a network management platform (especially for enterprise wired/wireless) that can monitor devices, configurations, and performance. While it can provide dashboards and reports, it is a specific product and often relies on underlying data collection methods (SNMP, syslog, NetFlow, telemetry). The question asks for what provides visibility conceptually; telemetry is the more direct and general answer.
Telemetry is the mechanism for exporting operational and security data from network devices to collectors/analytics tools, often in a streaming, near-real-time fashion (model-driven telemetry using YANG/gRPC/gNMI, or related exports). This continuous data feed enables rapid visibility and awareness of current network conditions, anomalies, and events, making it the best match for the question’s intent.
Core Concept: This question tests network visibility and situational awareness—how security and operations teams understand what is happening on the network in near real time. In Cisco security architectures, this is commonly achieved through streaming data (metrics, events, flow records, logs) from network devices to collectors/analytics platforms.

Why the Answer is Correct: Telemetry provides visibility and awareness into what is currently occurring on the network by continuously exporting operational and security-relevant data from devices (routers, switches, firewalls, wireless controllers) to monitoring and analytics systems. Unlike periodic polling, modern telemetry is typically model-driven and streaming (for example, using YANG models over gRPC/gNMI), enabling faster detection of anomalies, performance issues, and security events. This “what is happening now” aspect aligns directly with telemetry’s purpose: timely, high-fidelity observability.

Key Features / Best Practices: Telemetry can include interface statistics, CPU/memory, routing state, NetFlow/IPFIX flow data, security events, and application performance indicators. Model-driven telemetry reduces overhead versus frequent SNMP polling and supports structured data. Best practices include selecting the right data sources (flows + device health + security logs), tuning export intervals, ensuring secure transport (TLS), and integrating with SIEM/NDR tools (e.g., Secure Network Analytics/Stealthwatch, Splunk) for correlation and alerting.

Common Misconceptions: Cisco Prime Infrastructure and similar management platforms provide monitoring and reporting, but they are products that may consume telemetry rather than being the foundational concept. CMX is focused on location analytics for wireless clients, not broad network-wide operational awareness. WMI is a Windows management interface, relevant to endpoint monitoring, not network device visibility.

Exam Tips: When you see wording like “visibility and awareness of what is currently occurring,” think “observability” and “streaming/real-time data.” On SCOR, telemetry is a key enabler for visibility (often paired with NetFlow/IPFIX, syslog, and analytics). If the option list includes a general concept (Telemetry) versus specific tools (Prime, CMX), the concept is usually the best match unless the question explicitly names a platform.
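To make the streaming model concrete, here is a minimal Python sketch of push-based telemetry versus polling. The subscription shape mirrors a model-driven telemetry subscription (a YANG sensor path plus a sample interval), but the path name, record fields, and values are illustrative stand-ins, not a real gRPC/gNMI session:

```python
import random
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class Subscription:
    sensor_path: str          # YANG-style sensor path (illustrative name below)
    sample_interval_ms: int   # push cadence from the device

def stream(sub: Subscription, on_record: Callable[[dict], None], samples: int) -> None:
    """Device side: push structured records to the collector on a fixed cadence."""
    for _ in range(samples):
        record = {
            "path": sub.sensor_path,
            "timestamp": time.time(),
            "cpu_utilization": random.randint(5, 95),  # stand-in for real counters
        }
        on_record(record)  # collector callback fires as each record arrives
        time.sleep(sub.sample_interval_ms / 1000)

def on_record(record: dict) -> None:
    """Collector side: evaluate each record immediately instead of waiting for a poll."""
    if record["cpu_utilization"] > 90:
        print(f"ALERT {record['path']}: CPU {record['cpu_utilization']}%")

sub = Subscription("Cisco-IOS-XE-process-cpu-oper:cpu-usage", sample_interval_ms=500)
stream(sub, on_record, samples=10)
```

The point of the sketch is the inversion of control: the collector's callback fires as each record arrives, so an anomaly can be flagged within one sample interval instead of waiting for the next SNMP polling cycle.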
What are the two most commonly used authentication factors in multifactor authentication? (Choose two.)
Biometric factor is a legitimate authentication factor category representing something you are, such as a fingerprint or facial scan. However, the question specifically asks for the two most commonly used factors in MFA, and in practice the most common pair is knowledge plus possession, not knowledge plus biometric. Because possession is not offered as an option, selecting biometric reflects the flawed option set rather than the true industry-standard answer. Therefore this option is valid as a factor type but not as one of the two most commonly used factors in the usual MFA context.
Time factor is not a recognized authentication factor category in standard MFA models. Time may be used as an input to generate a one-time password in TOTP systems, but the factor there is possession of the token or authenticator application, not time itself. Time can also be used in access policy decisions such as restricting logins to business hours. It does not independently prove identity as a factor.
Confidentiality factor is not an authentication factor category. Confidentiality is one of the core security objectives in the CIA triad and refers to preventing unauthorized disclosure of information. It is achieved through controls such as encryption, access restrictions, and data classification. None of those make it a factor used to authenticate a user.
Knowledge factor is a valid authentication factor category and represents something the user knows, such as a password, PIN, or passphrase. It is one of the most widely deployed factors because it is easy to implement and integrates with nearly every identity system. In real-world MFA, it is commonly paired with a possession factor like a token or authenticator app. Among the listed options, this is unquestionably one of the standard factor types.
Encryption factor is not an authentication factor category. Encryption is a security mechanism used to protect data at rest or in transit and may support authentication protocols, but it is not itself evidence of identity. Authentication factors are based on what a user knows, has, or is. This option confuses a cryptographic control with an identity-verification category.
Core concept: Multifactor authentication uses two or more different categories of evidence to verify identity. The standard factor categories are knowledge (something you know), possession (something you have), and inherence/biometric (something you are).

Why correct: Of the options provided, only knowledge factor and biometric factor are legitimate authentication factor categories, so they are the intended selections—even though the most commonly used pair in real-world MFA is knowledge plus possession, and possession is not offered here.

Key features: Knowledge includes passwords and PINs, biometric includes fingerprints and facial recognition, while possession includes tokens, smart cards, and authenticator apps.

Common misconceptions: Time, confidentiality, and encryption are not authentication factor categories; and biometric is not generally considered one of the two most commonly used MFA factors compared with possession.

Exam tips: On certification exams, if asked for the most common MFA factors, expect knowledge plus possession unless the available options force selection of the only valid factor categories listed.
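The point about time being an input rather than a factor can be shown with a TOTP sketch per RFC 6238: the current time only selects the counter window, while the proof of identity is possession of the shared secret held by the token or authenticator app. A minimal standard-library Python version (the secret is a throwaway example value):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC over a time-derived counter, truncated to N digits."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # time selects the window...
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()  # ...the secret proves possession
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example shared secret, base32-encoded
```

Without the shared secret, knowing the time yields nothing—which is why the factor here is possession, not time.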
Which two features of Cisco DNA Center are used in a Software Defined Network solution? (Choose two.)
Accounting is an AAA function (tracking/logging user/device activity) typically handled by RADIUS/TACACS+ infrastructure such as Cisco ISE or AAA servers. Cisco DNA Center can integrate with identity systems and display operational data, but “accounting” is not a primary DNA Center feature pillar in SD-Access. On exams, accounting aligns more with AAA services than with SDN controller capabilities.
Assurance is a core Cisco DNA Center feature used in SDN/SD-Access operations. It provides end-to-end visibility using telemetry and analytics to produce health scores, client and application experience insights, and guided troubleshooting (e.g., path trace, client 360). In an intent-based SDN, assurance validates that the network is delivering the intended policy and performance and helps rapidly isolate issues across the fabric.
Automation is a foundational Cisco DNA Center feature in SDN solutions. It enables intent-based provisioning and lifecycle management such as Plug and Play onboarding, configuration templating, software image management, and SD-Access fabric workflows (building underlay/overlay, adding fabric nodes, deploying virtual networks and policies). Automation is central to SDN because it replaces manual CLI-driven changes with controller-driven, consistent deployment at scale.
Authentication is the process of verifying identity (users/devices/admins) and is typically provided by Cisco ISE (802.1X/MAB with RADIUS) or TACACS+ servers for device administration. While Cisco DNA Center integrates with ISE to apply group-based policy and facilitate SD-Access segmentation, authentication itself is not a primary DNA Center feature category. Exams often separate controller functions from AAA services.
Encryption protects data confidentiality and integrity (e.g., MACsec on links, IPsec tunnels, TLS for management APIs). Cisco DNA Center uses secure channels for management and can help orchestrate configurations, but encryption is not one of its main SDN feature pillars. In SD-Access, the overlay is VXLAN and control-plane uses LISP; these are not synonymous with encryption, and encryption is handled by separate mechanisms.
Core Concept: Cisco DNA Center (now commonly referred to as Cisco Catalyst Center) is the controller and management platform used in Cisco SD-Access (Software-Defined Access), Cisco’s campus SDN solution. In SDN, the controller provides centralized policy, automation, and operational visibility across the fabric.

Why the Answer is Correct: The two Cisco DNA Center feature areas most directly used in an SDN solution are Automation and Assurance. Automation is fundamental to SDN because it enables controller-driven, intent-based provisioning of the underlay and overlay, including device onboarding, fabric bring-up, and policy deployment at scale. Assurance is equally important because SDN environments rely on continuous telemetry and analytics to validate that the intended state matches the actual state, and to rapidly troubleshoot issues across endpoints, users, and applications.

Key Features / How They Map to SD-Access:
- Automation: Plug and Play (PnP) onboarding, software image management (SWIM), template-based configuration, and SD-Access fabric workflows (creating sites, building the underlay, enabling the fabric, adding edge/border/control-plane nodes, and pushing virtual networks/SGT-based policy). Automation reduces human error and ensures consistent configuration.
- Assurance: Collection of streaming telemetry, SNMP, syslog, NetFlow (where applicable), and client telemetry to provide health scores, path trace, client 360 views, and proactive issue detection. Assurance helps validate fabric connectivity, policy enforcement, and user experience.

Common Misconceptions: Options like authentication, accounting, and encryption are security functions, but they are not “Cisco DNA Center features” in the SDN sense. Authentication/accounting are typically provided by Cisco ISE (AAA, 802.1X, TACACS+) and encryption is provided by protocols and platforms (e.g., MACsec, IPsec, TLS) rather than being a primary DNA Center feature category. DNA Center integrates with these systems (especially ISE) to orchestrate policy, but it is not the core provider of AAA or encryption.

Exam Tips: For SDN/controller questions, look for controller-centric operational pillars: provisioning/automation and assurance/analytics. In Cisco SD-Access, remember the common architecture trio: DNA Center (intent + automation + assurance), ISE (identity and policy/SGT), and the fabric (VXLAN/LISP). If the option is a classic AAA or crypto term, it’s usually not the controller feature being tested.
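As a hedged illustration of how Automation and Assurance surface through the controller's northbound interface, the sketch below calls two commonly documented DNA Center Intent API endpoints with Python requests. The controller hostname and credentials are placeholders, and the endpoint paths and response keys should be verified against your release:

```python
import requests  # third-party: pip install requests
from requests.auth import HTTPBasicAuth

DNAC = "https://dnac.example.com"  # hypothetical controller address

# Obtain a session token (Intent API authentication endpoint).
token = requests.post(
    f"{DNAC}/dna/system/api/v1/auth/token",
    auth=HTTPBasicAuth("admin", "password"),  # placeholder credentials
    verify=False,  # lab-only shortcut; use proper certificates in production
).json()["Token"]

headers = {"X-Auth-Token": token}

# Automation-side view: the inventory the controller provisions and manages.
devices = requests.get(
    f"{DNAC}/dna/intent/api/v1/network-device", headers=headers, verify=False
).json()["response"]

# Assurance-side view: telemetry-derived health scores for the network.
health = requests.get(
    f"{DNAC}/dna/intent/api/v1/network-health", headers=headers, verify=False
).json()

print(f"{len(devices)} devices in inventory")
print(health)
```

The inventory call is the raw material of automation workflows (PnP, SWIM, fabric provisioning), while the health call returns the analytics data Assurance dashboards are built on.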
An engineer configured a new network identity in Cisco Umbrella but must verify that traffic is being routed through the Cisco Umbrella network. Which action tests the routing?
Pointing clients to on-premises DNS servers is not a routing test. It only confirms where clients send DNS queries first. Unless those DNS servers are configured to forward to Umbrella (or use a Virtual Appliance), traffic may never reach Umbrella. This option is a partial prerequisite in some designs, but it does not validate that Umbrella is actually receiving and enforcing policy for the new identity.
Intelligent Proxy in Umbrella is used to proxy certain web requests for deeper inspection and enforcement (for example, to apply more granular controls beyond DNS-only decisions). Enabling it does not validate that DNS traffic is being routed through Umbrella or that the correct network identity is in use. It’s a policy/enforcement feature, not a primary connectivity/identity verification tool.
Adding the public IP address to a Core Identity is a configuration step that allows Umbrella to map DNS requests to the correct network identity when queries come from that egress IP. However, it does not test routing by itself. You could add the IP and still have clients using a different DNS path, NAT, or resolver, resulting in no Umbrella visibility or incorrect identity attribution.
Browsing to http://welcome.umbrella.com/ is the standard test to validate Umbrella protection and confirm that DNS requests are being resolved through Umbrella. If the client is correctly routed/forwarded to Umbrella resolvers and the identity is recognized, the page indicates the system is protected (and often reflects the organization/identity context). This is the most direct functional verification action.
Core Concept: This question tests how to validate that DNS traffic from a newly configured Cisco Umbrella Network Identity is actually reaching Umbrella and being associated with the correct identity. In Umbrella, “routing traffic through Umbrella” in the context of a Network Identity typically means DNS queries are being sent to Umbrella resolvers (direct-to-Umbrella, via a virtual appliance, or via an on-prem DNS forwarder) and Umbrella can identify the source network (usually by public IP or via VA/internal mapping).

Why the Answer is Correct: Browsing to http://welcome.umbrella.com/ is the standard functional test to confirm that a client’s DNS requests are being handled by Umbrella. The page reports whether you are “Protected” and often indicates which Umbrella organization/identity is in effect. This validates end-to-end: the client is using a DNS path that ultimately resolves through Umbrella, and Umbrella recognizes the request as coming from a configured identity.

Key Features / Best Practices:
- Use welcome.umbrella.com as the quick validation step after configuring a Network Identity, VA, or DNS forwarding.
- Also verify in the Umbrella dashboard (Activity Search / Reporting) that DNS requests appear under the expected identity.
- Ensure the correct egress public IP is registered in Umbrella (for direct-to-Umbrella) or that the Virtual Appliance is correctly mapping internal networks.

Common Misconceptions:
- Simply pointing clients to on-prem DNS does not prove Umbrella is in the path; the on-prem DNS might not forward to Umbrella.
- Intelligent Proxy is for proxying/inspection of web traffic (often after DNS decisioning), not the primary validation method for DNS routing.
- Adding a public IP to a Core Identity is configuration, not a test; it enables identification but doesn’t confirm live traffic flow.

Exam Tips: For Umbrella verification questions, remember: configuration steps (adding IPs, enabling features) are different from validation steps. The canonical “am I protected?” validation is welcome.umbrella.com, complemented by checking Umbrella logs for the correct identity attribution.
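A quick scripted version of the same validation, assuming Python with the third-party requests library is available on the client being tested (the "protected" string match is a heuristic for illustration, not a documented contract of the page):

```python
import socket

import requests  # third-party: pip install requests

def check_umbrella() -> None:
    # Step 1: confirm the test hostname resolves (i.e., a DNS path exists).
    ip = socket.gethostbyname("welcome.umbrella.com")
    print(f"welcome.umbrella.com resolves to {ip}")

    # Step 2: fetch the validation page, just as a browser would.
    resp = requests.get("http://welcome.umbrella.com/", timeout=10)
    if resp.ok and "protected" in resp.text.lower():
        print("Page indicates this client resolves through Umbrella")
    else:
        print(f"Unexpected result (HTTP {resp.status_code}); "
              "check DNS forwarding and identity configuration")

check_umbrella()
```

As the explanation notes, pair this client-side check with Activity Search in the dashboard to confirm the requests are attributed to the expected identity.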
What is a commonality between DMVPN and FlexVPN technologies?
Incorrect. FlexVPN is explicitly based on IKEv2, but DMVPN is not defined by IKEv2 and has historically been deployed extensively with IKEv1. Although some DMVPN deployments can use IKEv2, that does not make IKEv2 a universal commonality of both technologies. The option is therefore too absolute to be correct. On Cisco exams, wording like 'use IKEv2' usually points to FlexVPN specifically, not DMVPN as a whole.
Incorrect. IS-IS is not a required routing protocol for either DMVPN or FlexVPN. Both technologies can carry multiple routing protocols such as EIGRP, OSPF, BGP, or even static routing depending on the design. Routing protocol choice is independent of the VPN framework itself. Therefore, IS-IS is not a defining shared characteristic.
Correct. Cisco IOS routers use the same NHRP code base for DMVPN and FlexVPN, which is a well-known implementation commonality between the two technologies. DMVPN relies on NHRP as a foundational component for next-hop resolution and dynamic spoke-to-spoke tunnels over mGRE. FlexVPN is built around IKEv2 and VTIs, but in scalable hub-and-spoke deployments it can also use NHRP for registration and resolution functions. This makes shared NHRP implementation the most specific and accurate commonality among the options.
Incorrect. Both technologies can indeed use similar hashing or integrity algorithms because both rely on IPsec-related cryptographic functions, but this is too generic and not the specific commonality typically tested between DMVPN and FlexVPN. Many unrelated VPN technologies also support the same hashing algorithms, so this does not distinguish a meaningful architectural overlap. Cisco exam questions on this topic usually target the shared NHRP implementation rather than broad crypto support. As a result, D is less precise and not the best answer.
Core concept: This question asks for a specific architectural commonality between DMVPN and FlexVPN on Cisco IOS. DMVPN is built from mGRE, NHRP, and IPsec, while FlexVPN is an IKEv2-based framework that can support hub-and-spoke and spoke-to-spoke designs.

Why correct: A notable Cisco implementation commonality is that both technologies use the same NHRP code on IOS when NHRP-based spoke registration and resolution are used.

Key features: DMVPN depends on NHRP for dynamic tunnel endpoint discovery, and FlexVPN can also leverage NHRP for scalable spoke-to-spoke communication in certain designs, even though its control framework is centered on IKEv2 and VTIs.

Common misconceptions: It is easy to choose IKEv2 because FlexVPN is defined by it, but DMVPN is not inherently IKEv2-only. It is also tempting to choose hashing algorithms, but that is a broad IPsec trait rather than the specific shared implementation detail the exam is targeting.

Exam tips: When Cisco asks about DMVPN versus FlexVPN commonality, look for platform and control-plane implementation details rather than generic cryptographic capabilities shared by many VPN solutions.
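To illustrate the NHRP role both technologies share, here is a toy Python model of the registration/resolution cycle: spokes register their NBMA (public) addresses with the hub acting as Next Hop Server, and later resolve each other to build direct spoke-to-spoke tunnels. This is purely conceptual, not a wire-accurate NHRP implementation:

```python
class NextHopServer:
    """The hub (NHS): keeps the tunnel-IP -> NBMA-IP cache that registrations build."""

    def __init__(self) -> None:
        self.cache: dict[str, str] = {}

    def register(self, tunnel_ip: str, nbma_ip: str) -> None:
        # Models an NHRP Registration Request from a spoke coming online.
        self.cache[tunnel_ip] = nbma_ip

    def resolve(self, tunnel_ip: str) -> str | None:
        # Models an NHRP Resolution Reply to a querying spoke.
        return self.cache.get(tunnel_ip)

hub = NextHopServer()
hub.register("10.0.0.2", "198.51.100.2")  # spoke A registers its public address
hub.register("10.0.0.3", "203.0.113.3")   # spoke B registers its public address

# Spoke A wants to reach spoke B directly: ask the NHS, then build a shortcut
# tunnel to the resolved NBMA address instead of hair-pinning through the hub.
peer_nbma = hub.resolve("10.0.0.3")
print(f"spoke-to-spoke tunnel endpoint: {peer_nbma}")
```

Whether the overlay is DMVPN's mGRE or FlexVPN's IKEv2/VTI framework, this register-then-resolve cycle is the shared NHRP behavior the question targets.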
What is an attribute of the DevSecOps process?
DevSecOps commonly uses automated security scanning throughout the CI/CD pipeline (SAST, SCA, container/IaC scanning, secret scanning). Many findings are “potential” or “theoretical” until validated and prioritized in context, which matches real DevSecOps workflows: continuous detection, triage, and rapid remediation rather than one-time, late-stage testing.
“Development security” is vague and not a recognized defining attribute by itself. DevSecOps is broader than securing development; it integrates security practices across development, operations, and delivery pipelines with automation and shared responsibility. This option lacks the key idea of continuous, pipeline-driven security integration.
An isolated security team is the opposite of DevSecOps. DevSecOps promotes collaboration and shared ownership where developers, operations, and security work together, with security embedded into tooling and processes. Siloed security typically leads to late reviews, slower releases, and weaker feedback loops.
Mandated controls and checklists describe traditional compliance-heavy approaches and can be largely manual and inflexible. While DevSecOps can include required controls, the hallmark is automating and codifying them (policy as code) and integrating them into CI/CD. A checklist-only mindset is not a defining DevSecOps attribute.
Core Concept: DevSecOps extends DevOps by integrating security into every phase of the software delivery lifecycle (plan, code, build, test, release, deploy, operate). A defining attribute is “shift-left” security: automated security checks embedded into CI/CD pipelines so issues are found early and continuously, rather than late and manually.

Why the Answer is Correct: Option A best reflects a key DevSecOps attribute: continuous security scanning to detect vulnerabilities as code changes. In practice this includes SAST (static analysis), SCA (dependency/package scanning), secret scanning, IaC scanning, container image scanning, and DAST in test environments. The wording “theoretical vulnerabilities” aligns with how many tools report potential issues based on patterns, CVEs, or heuristics that must be triaged and validated (risk-based prioritization) rather than treated as guaranteed exploitable findings.

Key Features / Best Practices: DevSecOps emphasizes automation, repeatability, and fast feedback loops. Typical controls include:
- Automated scans on pull requests/commits and during builds
- Policy-as-code (guardrails) and security gates with risk thresholds
- Continuous monitoring and vulnerability management post-deploy
- Collaboration/shared responsibility between dev, ops, and security
- Rapid remediation via backlog items and pipeline re-runs
These align with modern guidance such as NIST Secure Software Development Framework (SSDF) and common CI/CD security practices.

Common Misconceptions: Some confuse DevSecOps with simply “adding a security team review” or enforcing rigid checklists. DevSecOps is not about isolating security or slowing delivery; it’s about embedding security into workflows with automation and shared ownership. Also, scan results are not always definitive; they require tuning, context, and prioritization.

Exam Tips: For SCOR-style questions, look for keywords like “integrated,” “automated,” “continuous,” “shift-left,” “CI/CD pipeline,” and “policy as code.” Answers implying siloed security teams or purely manual compliance checklists usually contradict DevSecOps principles.
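A minimal Python sketch of the "security gate with risk thresholds" idea mentioned above. The scanner findings, severity weights, and risk budget are all fabricated placeholders; a real pipeline would consume SARIF or tool-native reports and run this as a build step:

```python
import sys

SEVERITY_WEIGHT = {"critical": 10, "high": 5, "medium": 2, "low": 1}
RISK_BUDGET = 10  # hypothetical policy-as-code threshold for this pipeline stage

def gate(findings: list[dict]) -> int:
    """Return a process exit code: 0 lets the pipeline continue, 1 blocks it."""
    score = sum(SEVERITY_WEIGHT[f["severity"]] for f in findings)
    for f in findings:
        print(f"[{f['tool']}] {f['severity']}: {f['title']}")
    print(f"risk score {score} / budget {RISK_BUDGET}")
    return 0 if score <= RISK_BUDGET else 1

# Example findings shaped like SAST/SCA output (fabricated for illustration).
findings = [
    {"tool": "sast", "severity": "high", "title": "possible SQL injection"},
    {"tool": "sca", "severity": "critical", "title": "vulnerable dependency (known CVE)"},
]
sys.exit(gate(findings))
```

The design choice worth noting is the threshold: rather than failing on any finding, the gate codifies risk-based prioritization, which matches the "theoretical vulnerabilities must be triaged" point above.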
After a recent breach, an organization determined that phishing was used to gain initial access to the network before establishing persistence. The information gained from the phishing attack was a result of users visiting known malicious websites. What must be done in order to prevent this from happening in the future?
Modifying web proxy settings can ensure web traffic is forwarded to an inspection point (proxy) and can improve control coverage. However, proxy configuration alone does not inherently block known malicious websites unless paired with explicit URL/category/reputation policies. The question asks what must be done to prevent users from visiting known malicious sites; that prevention is achieved by enforcement rules, not merely changing proxy settings.
Outbound malware scanning policies focus on detecting and blocking malicious files or payloads leaving the network (exfiltration) or sometimes inspecting downloads. Phishing sites often harvest credentials or redirect users without delivering a detectable malware file, so malware scanning may not stop the initial web visit. This option is more aligned with data loss/exfiltration or file-based malware controls than URL reputation blocking.
Identification profiles (for example, user identity mapping via ISE/AD/agent-based identity) enable user- or group-based policy decisions and better attribution in logs. They are important for visibility and for applying different rules to different users, but they do not directly prevent access to known malicious websites unless used within an access policy rule that blocks those destinations.
Modifying an access policy is the correct action because access policies (such as an FTD Access Control Policy) define allow/deny decisions and can incorporate URL filtering, category/reputation blocks, and Security Intelligence feeds. By adding or tightening rules to block known malicious/phishing categories and bad reputation domains/IPs, the organization can prevent users from reaching the malicious websites that enabled the phishing-based initial access.
Core Concept: This question tests secure web access controls in Cisco security platforms (commonly Cisco Secure Firewall/FTD with URL filtering or Cisco Secure Web Appliance/Umbrella concepts). The scenario is phishing leading users to known malicious websites, so the control needed is web/URL access enforcement to block those destinations.

Why the Answer is Correct: To prevent users from visiting known malicious websites, you must enforce a policy decision that blocks traffic based on URL reputation/category (and often DNS/IP reputation). In Cisco Secure Firewall (FTD), this is accomplished by modifying the Access Control Policy (ACP) to include URL filtering/reputation rules (or Security Intelligence feeds) that deny connections to known bad sites. An access policy is where you define “allow/deny” decisions, URL category blocks, and reputation-based controls that directly stop the browsing event that enabled the phishing chain.

Key Features / Best Practices:
1) URL Filtering: Block “Malware,” “Phishing,” “Newly Seen Domains,” and other high-risk categories; tune exceptions carefully.
2) Security Intelligence (SI): Use SI feeds (Talos) to block known malicious IPs/domains before deeper inspection.
3) TLS/SSL Decryption (where appropriate): Many malicious sites are HTTPS; without decryption, URL/path visibility may be limited. Use decryption selectively with privacy/legal considerations.
4) Logging and User Awareness: Enable logging on URL blocks to identify targeted users and measure effectiveness.
5) Defense-in-depth: Combine URL filtering with DNS-layer protection (Umbrella), email security, and endpoint controls.

Common Misconceptions: It’s tempting to focus on “malware scanning” (B), but scanning is reactive and may miss drive-by credential harvesting or phishing pages that deliver no malware. “Web proxy settings” (A) can help route traffic through controls, but the question asks what must be done to prevent access to known malicious websites—policy enforcement is the direct control. “Identification profiles” (C) help map users to IPs for user-based rules, but they don’t block anything by themselves.

Exam Tips: When the stem says users visited known malicious websites, think URL/DNS reputation enforcement. In Cisco Secure Firewall/FTD, that typically means adjusting the Access Control Policy (and associated URL filtering/Security Intelligence settings). If the question emphasizes user-based exceptions, then identity policies become relevant—but blocking known bad destinations is fundamentally an access policy decision.
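In pseudocode terms, the access-policy rule being described reduces to a deny decision keyed on intel feeds and URL categories. A hedged Python sketch (the feed entries and category names are fabricated; real platforms consult Talos or an equivalent reputation service):

```python
BLOCKED_CATEGORIES = {"phishing", "malware", "newly-seen-domains"}
INTEL_FEED = {"evil.example.net", "credential-harvest.example.org"}  # fabricated

def categorize(host: str) -> str:
    """Stand-in for a URL-category/reputation lookup."""
    return "phishing" if host in INTEL_FEED else "business"

def access_decision(host: str) -> str:
    """Order mirrors the explanation: intel feed first, then category rules."""
    if host in INTEL_FEED:                       # Security Intelligence match
        return "BLOCK (intel feed)"
    if categorize(host) in BLOCKED_CATEGORIES:   # URL category/reputation rule
        return "BLOCK (category)"
    return "ALLOW"

for host in ("evil.example.net", "intranet.example.com"):
    print(host, "->", access_decision(host))
```

The takeaway maps back to the answer: the proxy path, scanning, and identity mapping all feed into this decision, but only the access policy itself issues the block.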
An engineer needs a cloud solution that will monitor traffic, create incidents based on events, and integrate with other cloud solutions via an API. Which solution should be used to accomplish this goal?
A CASB is primarily used to provide visibility and control over the use of cloud applications, especially SaaS services. Its strengths are enforcing security policies, discovering shadow IT, and protecting data in cloud apps rather than serving as a centralized incident-generation platform across many telemetry sources. Although a CASB can raise alerts, it is not the best match for broad event correlation and incident management requirements. The question's emphasis on creating incidents from events and integrating broadly via API points more clearly to SIEM.
Cisco Cloudlock is a CASB offering focused on SaaS security use cases such as user activity monitoring, DLP, and governance for cloud applications. It can identify risky behavior in supported cloud apps, but it is not a general-purpose SIEM designed to aggregate diverse security events and create incidents across the environment. Its scope is narrower and centered on cloud application security rather than centralized security operations. Therefore it does not best satisfy the full set of requirements in the question.
Adaptive MFA is an access control technology that changes authentication requirements based on contextual risk such as device posture, location, or user behavior. Its purpose is to strengthen identity verification, not to collect and correlate security events from multiple systems or generate incidents for analysts. It also is not the primary platform used for broad security monitoring and operational integrations. Because the requirement is about monitoring events and creating incidents, Adaptive MFA is not the correct choice.
A SIEM is specifically built to ingest security logs and telemetry from many sources, analyze and correlate those events, and generate alerts or incidents when suspicious activity is detected. It commonly integrates with other cloud and security platforms through APIs, connectors, or webhooks so that data can be shared and response workflows can be automated. This aligns directly with the stated need to monitor events, create incidents based on those events, and integrate with other cloud solutions. While some SIEMs can also consume network traffic metadata, their defining role is centralized event management and incident generation.
Core concept: The question is asking which security solution is designed to collect and analyze security events, generate incidents from those events, and integrate with other platforms through APIs. Those are core characteristics of a SIEM platform. A SIEM centralizes logs and telemetry from multiple sources, correlates them, and turns notable activity into alerts or incidents for investigation.

Key features: The key features here are event monitoring, incident creation, and broad integration capabilities rather than SaaS access control or authentication.

Common misconceptions: A common misconception is to confuse CASB products with SIEM because both can produce alerts, but CASB focuses on cloud application visibility and policy enforcement, not enterprise-wide event correlation.

Exam tip: When you see wording like logs/events, incident generation, correlation, and API integration, think SIEM.
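The correlate-then-create-incident pattern can be sketched in a few lines of Python. The event shape, rule, and notification stub are invented for illustration; a real SIEM ingests from many sources and pushes incidents to a SOAR or ticketing API:

```python
from collections import defaultdict

FAILED_LOGIN_THRESHOLD = 5  # hypothetical correlation-rule threshold

def correlate(events: list[dict]) -> list[dict]:
    """Simple rule: N failed logins for the same user -> open one incident."""
    failures: dict[str, int] = defaultdict(int)
    incidents = []
    for e in events:
        if e["type"] == "auth_failure":
            failures[e["user"]] += 1
            if failures[e["user"]] == FAILED_LOGIN_THRESHOLD:
                incidents.append(
                    {"rule": "brute-force", "user": e["user"], "severity": "high"}
                )
    return incidents

def notify(incident: dict) -> None:
    # In a real deployment this would be an API/webhook call into another
    # cloud platform; printed here instead.
    print("INCIDENT:", incident)

events = [{"type": "auth_failure", "user": "alice"} for _ in range(6)]
for inc in correlate(events):
    notify(inc)
```

Note how the three requirements from the stem each appear: events in, a correlation rule creating incidents, and an integration hook out.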
Which PKI enrollment method allows the user to separate authentication and enrollment actions and also provides an option to specify HTTP/TFTP commands to perform file retrieval from the server?
terminal enrollment is a manual method where the device generates a CSR and the administrator copies/pastes the request to a CA and then pastes the issued certificate back into the device. While it can be performed in discrete steps, it does not provide an option to specify HTTP/TFTP commands for retrieving files from a server, which is the key requirement in the question.
selfsigned is not a CA enrollment method. It instructs the device to generate its own self-signed identity certificate locally, typically used for temporary testing, bootstrap, or when no external PKI is available. Because there is no external server interaction, it cannot separate CA authentication from enrollment or specify HTTP/TFTP retrieval commands.
url enrollment points the trustpoint to a CA/RA URL and supports workflows where CA authentication (retrieving/accepting CA certificates) and enrollment (CSR submission and certificate retrieval) are distinct actions. It also provides the capability to specify HTTP/TFTP-based retrieval behavior for obtaining certificates or related files from a server, matching both clues in the question.
profile enrollment uses an enrollment profile (a predefined set of enrollment parameters) to streamline certificate requests. It is aimed at simplifying and standardizing enrollment settings, not at specifying HTTP/TFTP commands for file retrieval. The question’s emphasis on separate authentication/enrollment actions plus HTTP/TFTP retrieval options aligns more directly with enrollment url.
Core Concept: This question tests Cisco IOS/IOS XE PKI enrollment methods (trustpoints) and how a router/switch obtains an identity certificate from a CA. Different enrollment methods control how the device authenticates the CA, how the certificate request is generated, and how the resulting certificate/CA chain is retrieved.

Why the Answer is Correct: The enrollment method that separates authentication and enrollment actions and provides an option to specify HTTP/TFTP commands for file retrieval is enrollment url. With enrollment url, the device can be pointed at a CA/RA URL and can perform distinct steps: authenticate the CA (for example, by retrieving and accepting the CA certificate/chain) and then enroll (generate/send the CSR and retrieve the issued certificate). Additionally, enrollment url supports specifying how the device should retrieve required files (commonly via HTTP or TFTP), which is particularly useful when the CA/RA publishes certificates or enrollment artifacts that must be fetched from a server rather than being returned inline.

Key Features / How It’s Used:
- Configured under a trustpoint (crypto pki trustpoint NAME) with “enrollment url <URL>”.
- Supports workflows where the CA certificate retrieval/acceptance (authentication) is performed separately from the actual enrollment request.
- Can leverage HTTP/TFTP retrieval mechanisms (commands/options) to pull CA certificates, issued certificates, or related enrollment files from a server.
- Common in environments using SCEP-like or web-based enrollment endpoints, or where an RA fronts the CA.

Common Misconceptions:
- “terminal” is often associated with manual cut-and-paste enrollment and can feel like “separate steps,” but it does not provide the HTTP/TFTP file retrieval command options described.
- “selfsigned” is not an enrollment method to a CA; it creates a local self-signed certificate.
- “profile” relates to enrollment profiles (often for automated enrollment parameters), but the question’s key clue is explicit HTTP/TFTP retrieval command capability, which aligns with enrollment url.

Exam Tips:
- Memorize trustpoint enrollment keywords: terminal (manual), url (web/RA/CA URL with retrieval options), selfsigned (local cert), and profile (uses a predefined enrollment profile).
- When you see wording about HTTP/TFTP retrieval commands or fetching files from a server, think enrollment url.
- Distinguish “authenticate CA” (obtaining/verifying CA cert/chain) from “enroll” (CSR/issuance). The question explicitly calls out that separation.
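The two distinct actions that enrollment url separates can be mimicked in Python with the requests and cryptography libraries: first retrieve the CA certificate (authentication), then generate and submit a CSR and fetch the issued certificate (enrollment). The CA/RA URLs below are hypothetical; on IOS the equivalent steps are crypto pki authenticate and crypto pki enroll against the trustpoint:

```python
import requests  # third-party: pip install requests
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

CA_URL = "http://ca.example.com"  # hypothetical CA/RA endpoint

# Step 1 - authenticate the CA: retrieve (then verify and accept) its certificate.
ca_cert_pem = requests.get(f"{CA_URL}/ca.pem", timeout=10).content

# Step 2 - enroll: generate a key pair and CSR, submit it, retrieve the identity cert.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(
        x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "router1.example.com")])
    )
    .sign(key, hashes.SHA256())
)
resp = requests.post(
    f"{CA_URL}/enroll",
    data=csr.public_bytes(serialization.Encoding.PEM),
    timeout=10,
)
identity_cert_pem = resp.content  # issued certificate fetched back over HTTP
```

The sketch makes the exam distinction tangible: step 1 establishes trust in the CA, step 2 is the certificate request, and both involve file retrieval from a server rather than inline cut-and-paste.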
Which two behavioral patterns characterize a ping of death attack? (Choose two.)
Incorrect. IP fragmentation does not use 16-octet units for the Fragment Offset. The Fragment Offset field in IPv4 is expressed in 8-byte (8 octet) blocks. While fragments may be various sizes based on MTU, the protocol’s offset granularity is 8 bytes, not 16, so this does not characterize Ping of Death behavior.
Correct. In IPv4, the Fragment Offset is measured in 8-octet blocks. Ping of Death commonly involves sending fragments that look valid individually, but when reassembled they form an oversized/invalid packet. The 8-octet grouping aligns with how fragmentation offsets are represented and is a key protocol detail often tested in exams.
Incorrect. “Short synchronized bursts of traffic to disrupt TCP connections” describes TCP-focused DoS behaviors (for example SYN floods, RST injection, or bursty volumetric attacks affecting TCP state). Ping of Death is primarily an ICMP/IP fragmentation and reassembly vulnerability exploit, not a TCP connection disruption technique.
Correct. Ping of Death uses malformed/oversized packets (often only becoming malformed after fragment reassembly) to exploit weaknesses in a target’s IP stack. Vulnerable systems may crash, hang, or reboot due to buffer overflows or improper handling of packets exceeding the maximum IP packet size.
Incorrect. Using publicly accessible DNS servers is characteristic of DNS reflection/amplification DDoS attacks, where spoofed requests generate large responses toward a victim. Ping of Death does not rely on third-party reflectors; it relies on oversized ICMP/IP packets and fragmentation/reassembly issues at the target.
Core Concept: A Ping of Death is a classic denial-of-service technique that abuses ICMP Echo (ping) by sending an oversized IP packet that exceeds the maximum IP packet size (65,535 bytes). The key is that the packet is often transmitted as fragments that appear valid individually, but when the target reassembles them, the resulting packet becomes oversized or otherwise invalid, triggering buffer overflows, memory corruption, or crashes in vulnerable IP stack implementations.

Why the Answer is Correct: Option B is correct because IP fragmentation commonly uses 8-octet (64-bit) units for the Fragment Offset field. The fragment offset is measured in 8-byte blocks, which is why many descriptions of Ping of Death reference fragmentation into 8-byte groups/units. While real fragmentation sizes can vary based on MTU, the protocol’s offset granularity is 8 bytes, making “groups of 8 octets” the accurate behavioral/protocol characteristic. Option D is correct because Ping of Death relies on malformed/oversized packets (often malformed only after reassembly) to crash or destabilize systems. The attack’s impact is typically a system crash, hang, or reboot due to poor handling of reassembled oversized packets.

Key Features / Best Practices: Modern systems largely mitigate Ping of Death via robust IP stack implementations, strict reassembly checks, and dropping invalid fragments. Network defenses include ACLs/ICMP rate limiting (CoPP/CPPr), IPS signatures, and anti-fragmentation evasion features (normalization) on firewalls/NGFW/IPS. Monitoring for abnormal fragmentation patterns and reassembly failures is also useful.

Common Misconceptions: Option C sounds like a DoS pattern but describes TCP disruption (more aligned with SYN floods, RST attacks, or bursty volumetric DoS), not ICMP oversized reassembly. Option E describes DNS amplification/reflection, a different DDoS technique. Option A is incorrect because “16 octets” is not the IP fragmentation offset unit.

Exam Tips: For SCOR, tie Ping of Death to ICMP + fragmentation + oversized reassembly leading to malformed packets and crashes. Remember: IP fragment offset is in 8-byte units, and Ping of Death is not a reflection/amplification attack (DNS) nor a TCP synchronization/burst tactic.
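The arithmetic behind the two correct options is easy to verify: the IPv4 Fragment Offset field is 13 bits wide and counts 8-octet blocks, so a final fragment placed at the maximum offset overflows the 65,535-byte IPv4 limit on reassembly. A short Python check:

```python
OFFSET_UNIT = 8                 # fragment offset is measured in 8-octet blocks
MAX_OFFSET_FIELD = 2**13 - 1    # 13-bit field -> maximum value 8191
IPV4_MAX_PACKET = 65_535        # maximum IPv4 packet size in bytes

max_offset_bytes = MAX_OFFSET_FIELD * OFFSET_UNIT  # 8191 * 8 = 65,528 bytes
final_fragment_payload = 100                       # anything over 7 bytes overflows

reassembled = max_offset_bytes + final_fragment_payload
print(f"last fragment starts at byte {max_offset_bytes}")
print(f"reassembled size: {reassembled} bytes "
      f"({'OVERSIZED' if reassembled > IPV4_MAX_PACKET else 'ok'})")
```

Each fragment is individually legal, yet the reassembled total exceeds the protocol maximum—exactly the malformed-after-reassembly behavior that crashes vulnerable IP stacks.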