
Cisco
878+ Free Practice Questions with AI-Verified Answers
Powered by AI
Every answer for Cisco 350-401: Implementing and Operating Cisco Enterprise Network Core Technologies (ENCOR) is verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.
Which feature does Cisco TrustSec use to provide scalable, secure communication throughout a network?
Incorrect. TrustSec does not achieve scalability by assigning an ACL to each switch port. Per-port ACL deployment is the traditional approach that becomes difficult to manage as the network grows. In TrustSec, SGACLs are derived from group-based policy and enforced using SGT context, rather than manually attaching unique ACLs everywhere. Therefore this option describes the opposite of the scalable TrustSec design goal.
Incorrect. TrustSec can assign an SGT to an authenticated user or device session, but this option is too narrow and inaccurately frames the tag as something assigned to each user on a switch. The question asks which feature TrustSec uses to provide scalable secure communication throughout the network, and that feature is the SGT mechanism itself, not per-user assignment on a particular switch. TrustSec also classifies devices, services, and traffic broadly, not just users. Because of that imprecise wording, this is not the best answer.
Correct. Cisco TrustSec scales by using Security Group Tags as numeric classifications that represent security groups across the network. These tags allow policy decisions to be made based on identity or role rather than on IP addresses and interface-specific ACLs. Although the wording is not ideal, this option is the closest to the core TrustSec mechanism because it identifies the SGT number itself as the enabling feature. The other options incorrectly focus on ACL assignment rather than the tag-based model that provides scalability.
Incorrect. TrustSec does not rely on assigning an SGT ACL to each router in the network. Routers may participate in enforcement, but the scalable architecture comes from propagating SGT information and applying centralized group-based policy. Assigning ACLs router by router would create the same operational burden that TrustSec is designed to reduce. This option confuses the enforcement point with the underlying TrustSec feature.
Core concept: Cisco TrustSec provides scalable, secure communication by using Security Group Tags (SGTs) to classify traffic and endpoints into security groups. These tags allow policy to be enforced based on group identity instead of relying on large numbers of IP-based ACLs tied to interfaces or subnets.

Why correct: The feature that makes TrustSec scalable is the use of an SGT number, not an ACL assigned per device or port. The SGT is the classification element used throughout the network so that enforcement devices can apply policy consistently based on group membership.

Key features:
- SGTs identify the security group of a user, device, or traffic flow.
- SGACLs are then applied between source and destination SGTs.
- TrustSec supports propagation of SGT information across the network for consistent policy enforcement.
- This reduces operational complexity compared to per-port or per-router ACL administration.

Common misconceptions: A common mistake is confusing the SGT itself with the SGACL. The SGT is the scalable classification mechanism, while the SGACL is the policy applied based on those tags. Another misconception is thinking TrustSec is fundamentally based on assigning ACLs to ports or routers, which is the older, less scalable model.

Exam tips: When a TrustSec question asks about scalability, focus on SGT-based classification and group-based access control. Eliminate answers that describe traditional ACL placement on ports or routers. If the options are imperfect, choose the one that refers to the SGT number rather than ACL assignment.
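The group-based model above can be sketched as a policy matrix keyed by source and destination SGT numbers instead of IP addresses. The SGT values, group names, and default-deny behavior below are illustrative assumptions, not taken from any real deployment:

```python
# Minimal sketch of SGT-based (group-based) policy lookup.
# Policy is keyed by (source SGT, destination SGT), not by IP address or port,
# which is what makes the model scale: adding a host only means tagging it.

SGACL_MATRIX = {
    (10, 20): "permit",   # hypothetical: Employees (SGT 10) -> Servers (SGT 20)
    (30, 20): "deny",     # hypothetical: Guests (SGT 30)    -> Servers (SGT 20)
}

def enforce(src_sgt: int, dst_sgt: int) -> str:
    """Return the SGACL action for a flow; default-deny for unknown pairs."""
    return SGACL_MATRIX.get((src_sgt, dst_sgt), "deny")

print(enforce(10, 20))  # permit
print(enforce(30, 20))  # deny
print(enforce(99, 20))  # deny (no matching policy, default action)
```

The point of the sketch is that the table stays the same size as the number of groups, not the number of endpoints.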
A network administrator is implementing a routing configuration change and enables routing debugs to track routing behavior during the change. The logging output on the terminal is interrupting the command typing process. Which two actions can the network administrator take to minimize the possibility of typing commands incorrectly? (Choose two.)
Incorrect. "logging synchronous" is not a global configuration command in Cisco IOS; it is entered under line configuration mode such as "line console 0" or "line vty 0 4". Because the option explicitly says global configuration command, it is technically invalid. Even though the feature itself is relevant, the command context given in the option makes this answer wrong. Cisco certification questions often test exact command mode syntax, so this distinction matters.
Correct. The command "logging synchronous" is configured under line configuration mode, including VTY lines used for SSH or Telnet access. It causes unsolicited log and debug messages to be displayed in a way that preserves usability by restoring the prompt and partially entered command after the message appears. This directly reduces the chance of mistyping commands during a routing change when debug output is active. Because administrators commonly work over remote sessions, applying it under the VTY is a standard and effective mitigation.
Incorrect. The command "terminal length" changes the number of lines displayed before paging occurs with a --More-- prompt. It affects the presentation of command output such as show commands, but it does not control asynchronous debug or syslog messages arriving while the user is typing. Therefore it does not reduce command-line interruption in the way the question asks. It is useful for output navigation, not for protecting interactive input.
Incorrect. The logging delimiter feature may improve readability by visually separating messages, but it does not stop those messages from interrupting the command line. The administrator can still have keystrokes mixed with unsolicited output, which is the real operational problem here. It is a formatting aid rather than a mechanism for preserving typed commands. As a result, it does not directly minimize the possibility of typing commands incorrectly.
Correct. Pressing the TAB key can help reprint or complete the partially typed command after the line has been visually disrupted by debug output. While it is primarily known for command completion, in practice it also helps the administrator recover the current input line and continue typing more accurately. This makes it a useful immediate action during an active troubleshooting session. On Cisco exams, TAB is often recognized as a practical CLI aid when terminal output interrupts command entry.
Core concept: This question is about minimizing CLI input disruption caused by asynchronous debug and syslog messages during an IOS session. When unsolicited log messages appear while an administrator is typing, they can break up the command line and increase the chance of entering an incorrect command.

Why correct: The best remedies are to enable synchronous logging on the active terminal line and to use the TAB key to help redraw or complete the interrupted command.

Key features: The command "logging synchronous" is configured under line configuration mode, such as console or VTY lines, and causes IOS to display log messages on a new line before restoring the prompt and partially typed input.

Common misconceptions: It is not a global configuration command, and "terminal length" does not affect asynchronous logging behavior.

Exam tips: If a Cisco exam question asks how to prevent debug output from interrupting typing, think of line-level "logging synchronous" first, and remember that TAB can help recover the command line during interactive entry.
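As a sketch, the line-mode placement described above looks like the following IOS fragment; the console and VTY ranges shown are common defaults and may differ by platform:

```
! Applied under line configuration mode, not global configuration.
line console 0
 logging synchronous
line vty 0 4
 logging synchronous
```

Entering "logging synchronous" at the global configuration prompt would be rejected, which is exactly the distinction the incorrect option trips over.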
Which two pieces of information are necessary to compute SNR? (Choose two.)
Transmit power is the RF output level at the transmitter (often in dBm or mW). While increasing transmit power can improve the received signal (RSSI) under some conditions, it is not directly used to compute SNR. SNR is calculated at the receiver using the received signal level and the measured noise floor, regardless of what transmit power was configured.
Noise floor is the baseline RF energy level present at the receiver when no desired signal is being decoded, typically expressed in dBm (for example, -95 dBm). It is one of the two required values for SNR calculation because SNR measures how far above this noise baseline the received signal sits. Higher noise floor reduces SNR even if RSSI remains unchanged.
EIRP (Effective Isotropic Radiated Power) represents the transmitter’s effective radiated power after accounting for antenna gain and losses, and is important for coverage planning and regulatory compliance. However, EIRP is not required to compute SNR. SNR is derived from what the receiver measures (RSSI and noise floor), not from transmitter-side calculated values.
RSSI is the received signal strength at the receiver, commonly displayed in dBm in enterprise Wi-Fi tools. It is required to compute SNR because SNR compares the received signal level to the noise floor. For example, if RSSI is -60 dBm and noise floor is -90 dBm, SNR is 30 dB, indicating a strong, clean signal.
Antenna gain describes how an antenna focuses energy in certain directions relative to an isotropic radiator, affecting coverage and potentially the received signal level. While antenna gain influences link budget and can indirectly affect RSSI, it is not an input needed to compute SNR once RSSI and noise floor are known. SNR is strictly based on receiver measurements.
Core Concept: Signal-to-Noise Ratio (SNR) is a key RF performance metric used heavily in Wi-Fi design and troubleshooting. It expresses how much stronger the received signal is compared to the background noise. In 802.11 networks, SNR strongly influences modulation and coding scheme (MCS) selection, data rates, retransmissions, and overall reliability.

Why the Answer is Correct: SNR is computed as SNR (dB) = Received Signal Level (dBm) − Noise Floor (dBm). Therefore, you need (1) the received signal level and (2) the noise floor. In Cisco/Wi-Fi terminology, the received signal level is commonly represented as RSSI (Received Signal Strength Indicator) and is typically shown in dBm on enterprise WLAN platforms. The noise floor is the measured ambient RF noise at the receiver, also in dBm. Subtracting noise floor from RSSI yields SNR in dB.

Key Features / Best Practices:
- Typical design targets: ~25 dB SNR for high-throughput/voice, ~20 dB for general data; lower SNR may still work but with reduced rates and higher retries.
- Noise floor rises with interference (non-802.11 sources, co-channel contention, poor cabling/shielding, etc.), reducing SNR even if RSSI stays constant.
- RSSI alone is not sufficient: a "strong" RSSI can still perform poorly if the noise floor is high.

Common Misconceptions:
- Transmit power, antenna gain, and EIRP affect what RSSI might become at the receiver, but they are not required inputs to calculate SNR once RSSI and noise floor are known.
- EIRP is often confused as a "signal strength at the client," but it is a transmitter-side regulatory/engineering value, not the received level.

Exam Tips: For ENCOR, memorize the simple relationship: SNR = RSSI − Noise Floor. If the question asks what you need to compute SNR, look for "received signal" (RSSI) and "noise floor." If it asks how to improve SNR, then transmit power/antenna placement/channel planning may become relevant, but they are not part of the calculation itself.
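The subtraction above is simple enough to express directly; this minimal sketch reuses the -60 dBm / -90 dBm example from the explanation:

```python
# SNR is computed entirely from receiver-side measurements:
#   SNR (dB) = RSSI (dBm) - noise floor (dBm)

def snr_db(rssi_dbm: float, noise_floor_dbm: float) -> float:
    """Signal-to-noise ratio in dB from RSSI and noise floor (both in dBm)."""
    return rssi_dbm - noise_floor_dbm

# Example from the explanation above: -60 dBm signal over a -90 dBm noise floor.
print(snr_db(-60, -90))  # 30 -> strong, clean signal
print(snr_db(-60, -80))  # 20 -> same RSSI, but a raised noise floor cuts SNR
```

Note that the second call shows why RSSI alone is not sufficient: the signal level is unchanged, yet SNR drops because the noise floor rose.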
Which two steps are required for a complete Cisco DNA Center upgrade? (Choose two.)
Automation backup is a recommended pre-upgrade precaution to protect configuration, inventory, and automation artifacts. However, it is not inherently a required step to perform the upgrade itself. Cisco DNA Center upgrades can proceed without an automation backup (though it is strongly advised), so it does not match the question’s “required for a complete upgrade” phrasing.
System update is required because it upgrades the underlying Cisco DNA Center platform components (base OS/platform services/cluster infrastructure). Without the system update, you may not meet prerequisites for the target release, and the deployment would not be fully upgraded. This is one of the two core phases that make the upgrade “complete.”
Golden image selection relates to Cisco DNA Center SWIM (Software Image Management) for network devices, where you choose a “golden” IOS-XE image for compliance and distribution. It is not part of upgrading the Cisco DNA Center appliance/cluster itself, so it is not required for a Cisco DNA Center upgrade.
Proxy configuration is only necessary if the Cisco DNA Center cluster requires a proxy to reach Cisco cloud resources or repositories to download upgrade packages. Many deployments use direct internet access or offline/manual package upload. Because it is environment-dependent, it is not a universally required step for a complete upgrade.
Application updates are required because they upgrade the Cisco DNA Center application layer (the actual DNA services and features). Even if the system/platform is updated, you have not fully upgraded Cisco DNA Center functionality until the application bundle is updated. Along with the system update, this completes the end-to-end upgrade process.
Core Concept: A Cisco DNA Center upgrade is typically a two-part process: upgrading the underlying platform (the "system") and upgrading the Cisco DNA Center application services that run on top of it. The exam is testing recognition that a "complete" upgrade involves both the base system software and the application layer.

Why the Answer is Correct: A complete upgrade requires (1) a system update and (2) application updates. The system update upgrades the appliance/cluster base software components (OS, platform services, cluster framework, and other foundational packages). Application updates then upgrade the Cisco DNA Center application bundle itself (assurance, automation, design, policy, integrations, etc.). In practice, Cisco packages these as distinct steps in the upgrade workflow, and both must be performed to fully move the deployment to the target release.

Key Features / Best Practices:
- Follow the built-in upgrade workflow and pre-checks (cluster health, disk space, services status, compatibility).
- Plan maintenance windows: system updates can require node reboots and service restarts; application updates can take additional time for database migrations and service redeployments.
- Validate post-upgrade: verify cluster health, critical services, device connectivity, and assurance data ingestion.
- Always take backups (system and/or application data) before upgrades, but backups are a best practice rather than a required "upgrade step" in the question's wording.

Common Misconceptions:
- "Automation backup" sounds mandatory because backups are strongly recommended. However, it is not one of the two canonical steps that constitute the upgrade itself.
- "Golden image selection" is part of SWIM/image management for network devices, not the Cisco DNA Center platform upgrade.
- "Proxy configuration" may be needed in some environments to download updates, but it is conditional, not universally required.
Exam Tips: When ENCOR questions say “complete Cisco DNA Center upgrade,” think in layers: platform/system first, then the application bundle. Separate operational best practices (backups, proxy, image selection) from the actual upgrade phases (system update + application updates).
What are two common sources of interference for Wi-Fi networks? (Choose two.)
LED lights can emit electromagnetic noise in some cases, especially if they use poor-quality drivers or switching power supplies, but they are not typically identified as one of the most common Wi-Fi interference sources in Cisco certification questions. They are more of an edge-case environmental issue than a standard exam answer. When compared with radar and rogue APs, LED lights are the less defensible choice. For exam purposes, they are not usually considered a primary common interferer.
Radar is a well-known source of interference for 5 GHz Wi-Fi networks operating on DFS channels. When an access point detects radar energy, it must stop using that channel and move clients elsewhere to comply with regulations. This causes service disruption, channel changes, and intermittent connectivity symptoms that are commonly tested in Cisco wireless exams. Because DFS and radar avoidance are fundamental parts of 5 GHz WLAN operation, radar is clearly a correct answer.
Fire alarm systems are not generally considered common sources of Wi-Fi interference. While any electronic system could theoretically emit noise if malfunctioning, fire alarms are not standard examples of devices that disrupt 2.4 GHz or 5 GHz WLANs. Cisco exams usually focus on more established interferers such as radar, microwave ovens, Bluetooth devices, or other access points. Therefore this option is not a correct choice.
A conventional oven is not a common Wi-Fi interference source. The classic appliance associated with Wi-Fi disruption is a microwave oven, which leaks energy around 2.45 GHz and can interfere with 2.4 GHz WLANs. A conventional oven does not operate using the same RF mechanism and is not typically cited in wireless troubleshooting references. This makes it an incorrect option in the context of common Wi-Fi interferers.
A rogue AP is a common source of Wi-Fi interference because it transmits in the same unlicensed spectrum as authorized WLAN infrastructure. Even if it is not malicious, it can create co-channel interference, adjacent-channel interference, and increased airtime contention for nearby clients and APs. In enterprise environments, rogue APs are frequently monitored not just for security reasons but also for their RF impact. Cisco exam objectives commonly treat unauthorized APs as both a security concern and an operational interference source.
Core Concept: This question tests recognition of common Wi-Fi interference sources, including both non-802.11 RF interferers and other Wi-Fi devices that disrupt normal channel use. In enterprise WLANs, interference is not limited to external electronics; unauthorized or unmanaged Wi-Fi devices can also degrade service by consuming airtime and creating co-channel or adjacent-channel interference.

Why the Answer is Correct: Radar (B) is a classic source of interference for 5 GHz Wi-Fi, especially on DFS channels. Rogue APs (E) are also a common real-world source of interference because they transmit in the same unlicensed spectrum as the production WLAN, causing contention and channel overlap. Both are widely recognized in Cisco wireless design and troubleshooting contexts.

Key Features / Best Practices:
- Use spectrum analysis and controller/AP tools to identify radar events, DFS channel changes, and non-Wi-Fi noise.
- Continuously scan for rogue APs and classify them as interfering, neighboring, or malicious devices.
- Design channel plans carefully to reduce co-channel and adjacent-channel interference.
- Prefer proper RF monitoring and security policies to detect unauthorized wireless devices early.

Common Misconceptions: A rogue AP is not only a security issue; it is also an RF issue because it actively transmits and competes for airtime. LED lights may generate some electromagnetic noise in certain environments, but they are not typically listed as a common Wi-Fi interference source in certification exam questions. Conventional ovens are also not the classic appliance interferer; microwave ovens are.

Exam Tips: For Cisco exams, remember that interference can come from both non-802.11 sources like radar and from other 802.11 devices such as neighboring or rogue APs. If you see radar in a Wi-Fi interference question, it is usually a strong choice because of DFS behavior in 5 GHz.
Also distinguish between a conventional oven and a microwave oven, since only the latter is a classic 2.4 GHz interferer.
Which behavior can be expected when the HSRP version is changed from 1 to 2?
Incorrect. Upgrading the standby router first does not make the transition nondisruptive because HSRP version 1 and version 2 are not interoperable as a single stable group during the change. The routers will not maintain seamless adjacency across the version mismatch, so some reinitialization behavior is expected. Therefore, saying that no changes occur is technically wrong. The order of upgrade may help operational planning, but it does not eliminate the protocol impact.
Incorrect. HSRP version 1 and version 2 do not use the same virtual MAC OUI and format. HSRP v1 uses the 0000.0c07.acxx pattern, while HSRP v2 uses 0000.0c9f.fxxx. Since the virtual MAC changes, hosts and switches must relearn the gateway MAC information. That means a version change does not occur transparently.
Correct. HSRP version 2 uses a different virtual MAC address format than version 1, so changing the version changes the virtual gateway MAC associated with each HSRP group. Because the active gateway identity on the segment changes, the HSRP group reinitializes and devices must relearn the new MAC through ARP and switching table updates. This is the expected operational behavior when migrating an HSRP group from v1 to v2. Cisco exam questions commonly test this distinction by tying the disruption directly to the virtual MAC change.
Incorrect, though it contains a true fact. HSRP v1 and v2 do use different multicast addresses for hello packets, and mixed versions will not exchange hellos normally. However, the question asks what behavior can be expected when the version is changed, and the standard expected behavior is that each group reinitializes because the virtual MAC address changes. The multicast difference explains the lack of interoperability during a mismatch, but it is not the best match for the behavior being tested here. On Cisco exams, the virtual MAC change is the more direct and canonical reason tied to HSRP group reinitialization.
Core concept: This question tests the operational differences between HSRP version 1 and version 2, especially what happens to an existing HSRP group when the version is changed. HSRP v2 introduces a different virtual MAC address format from HSRP v1, so the group identity on the LAN changes when the version changes.

Why correct: Because hosts and switches see a different virtual gateway MAC, the HSRP group must reinitialize and reestablish forwarding using the new virtual MAC.

Key features: HSRP v1 uses virtual MAC addresses in the 0000.0c07.acxx range, while HSRP v2 uses 0000.0c9f.fxxx; v2 also supports a larger group range and different packet handling.

Common misconceptions: Many candidates focus on the multicast hello address difference, but the expected behavior asked here is the group reinitialization caused by the virtual MAC change.

Exam tips: Remember that HSRP version changes are disruptive, and if asked what specifically changes operationally, the virtual MAC change is the classic trigger for reinitialization.
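The two virtual MAC formats above differ only in how the group number is embedded, which this small sketch makes concrete (the helper function name is mine, not a Cisco API):

```python
# HSRP virtual MAC formats (well-known Cisco OUI patterns):
#   v1: 0000.0c07.acXX   (XX  = group number 0-255 in hex)
#   v2: 0000.0c9f.fXXX   (XXX = group number 0-4095 in hex)

def hsrp_virtual_mac(group: int, version: int = 1) -> str:
    """Return the virtual gateway MAC for an HSRP group and version."""
    if version == 1:
        if not 0 <= group <= 255:
            raise ValueError("HSRPv1 groups are 0-255")
        return f"0000.0c07.ac{group:02x}"
    if not 0 <= group <= 4095:
        raise ValueError("HSRPv2 groups are 0-4095")
    return f"0000.0c9f.f{group:03x}"

print(hsrp_virtual_mac(10, 1))  # 0000.0c07.ac0a
print(hsrp_virtual_mac(10, 2))  # 0000.0c9f.f00a
```

Because the two results differ for the same group number, hosts and switches must relearn the gateway MAC after a version change, which is exactly the reinitialization behavior the question tests.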
Which IP SLA operation requires the IP SLA responder to be configured on the remote end?
UDP jitter is the classic IP SLA operation that requires (or is intended to use) an IP SLA responder on the destination. The responder listens on the specified UDP port, timestamps/acknowledges probe packets, and enables accurate calculation of jitter, loss, and delay metrics. This is why UDP jitter is commonly used for VoIP/WAN performance validation and is associated with two-ended measurement behavior.
ICMP jitter is based on ICMP echo-style probing and can be performed without configuring an IP SLA responder because ICMP Echo Reply is a standard network function on most IP devices. While it can provide delay variation estimates, it does not require a specialized responder process on the far end in the way UDP jitter does for structured measurement and reporting.
TCP connect measures the time to establish a TCP session (SYN, SYN-ACK, ACK) to a destination port. Because it uses the normal TCP stack behavior of the remote host (a service listening on that port), it does not require an IP SLA responder. The remote endpoint simply needs to allow/accept the TCP connection attempt for meaningful results.
ICMP echo is a one-ended IP SLA operation that sends ICMP Echo Requests and measures the time until ICMP Echo Replies are received. Since ICMP echo/reply is a standard capability on routers, switches, and hosts (unless blocked), no IP SLA responder configuration is required on the remote end. This makes it simple but less feature-rich than responder-based tests.
Core concept: This question tests Cisco IP SLA operation types and when an IP SLA responder is required. IP SLA can run "one-ended" tests (the source device sends probes and relies on standard protocol behavior) or "two-ended" tests (a responder on the target cooperates to timestamp/reflect traffic in a controlled way). Two-ended operations are typically used for accurate delay/jitter/loss measurements and to traverse devices that might treat test traffic differently.

Why the answer is correct: UDP jitter (often called "IP SLA UDP jitter" or "VoIP jitter") is designed to measure one-way and round-trip delay, jitter (delay variation), packet loss, and out-of-sequence packets using UDP probe streams. For accurate results, the far end should run the IP SLA responder so it can receive the UDP probes on a known port, timestamp them, and send structured responses back. Without the responder, the operation either fails or degrades into less reliable behavior (for example, relying on generic UDP port behavior), and you lose the precision and metrics that make UDP jitter valuable.

Key features / configuration notes:
- The responder is enabled on the target with the global command "ip sla responder".
- The UDP jitter operation specifies a destination IP and UDP port; the responder listens and replies on the configured port.
- Common use cases: validating VoIP readiness (jitter/loss), WAN SLA verification, and tracking performance for routing decisions (e.g., with object tracking).
- Best practice: permit the UDP ports and responder traffic in ACLs/firewalls; ensure time synchronization (NTP) if you are interpreting one-way measurements.

Common misconceptions:
- Many assume any "jitter" operation needs a responder. In reality, ICMP-based operations can be one-ended because ICMP Echo/Reply is natively supported by most devices.
- TCP connect is also one-ended; it measures connection setup time using standard TCP handshake behavior and does not require a special responder.
Exam tips: Remember: operations that rely on standard protocol replies (ICMP echo, TCP connect) generally do not require a responder. Operations that emulate application streams and need detailed metrics (notably UDP jitter) are the classic responder-required choice in ENCOR-style questions.
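The two-ended setup described above can be sketched as the following IOS configuration fragment; the destination IP, operation number, and UDP port are placeholders, and exact syntax can vary by platform and release:

```
! On the responder (destination) device:
ip sla responder

! On the source device:
ip sla 10
 udp-jitter 10.1.1.1 16384
ip sla schedule 10 life forever start-time now
```

Notice the asymmetry: only the responder side needs the "ip sla responder" command, while one-ended operations such as ICMP echo require nothing on the far end.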
Which statement about TLS is accurate when using RESTCONF to write configurations on network devices?
Incorrect. TLS is not used for plain HTTP; HTTP is unencrypted and has no TLS handshake. TLS is used to secure HTTP by creating HTTPS (HTTP over TLS). RESTCONF can technically run over HTTP, but secure configuration operations in enterprise networks should use HTTPS, not HTTP.
Correct. TLS uses X.509 certificates to authenticate endpoints, at minimum the server certificate presented by the network device when clients connect via HTTPS for RESTCONF. Clients validate the certificate chain and identity before sending credentials and configuration payloads. Mutual TLS can also require client certificates, but server certificates are fundamental.
Incorrect. RESTCONF on Cisco devices does not require NGINX or any external reverse proxy to provide TLS. While a proxy could be used in some architectures (API gateways, load balancers), Cisco platforms commonly terminate TLS directly on the device’s HTTPS/RESTCONF service.
Incorrect. Cisco enterprise devices that support RESTCONF typically support HTTPS/TLS as well. RESTCONF is designed to operate over HTTP/HTTPS, and Cisco implementations commonly recommend HTTPS for secure management. Therefore, stating TLS is not supported on Cisco devices is false.
Core concept: RESTCONF is a RESTful management protocol that uses HTTP methods (GET/POST/PUT/PATCH/DELETE) to operate on YANG-modeled data. When you "write configurations" with RESTCONF, you are typically using HTTPS so the session is protected by TLS. TLS provides confidentiality (encryption), integrity, and endpoint authentication.

Why the answer is correct: TLS relies on X.509 certificates to authenticate the server (and optionally the client). In RESTCONF deployments on Cisco enterprise devices, HTTPS is enabled and the device presents a server certificate during the TLS handshake. The client validates that certificate (trust chain, validity, hostname/SAN match) before sending credentials and configuration payloads. Optionally, mutual TLS (mTLS) can be used where the client also presents a certificate, but even in one-way TLS the server certificate is required for proper TLS authentication.

Key features / best practices:
- Use HTTPS (RESTCONF over TLS) for configuration changes; avoid cleartext HTTP.
- Install a CA-signed certificate on the device or distribute the device's self-signed certificate to clients as a trusted anchor.
- Ensure correct certificate attributes (CN/SAN matching the device FQDN/IP as used by clients) to prevent validation failures.
- Combine TLS with strong AAA (local/AAA) and role-based authorization; TLS secures the transport, while AAA controls who can change configuration.

Common misconceptions:
- Confusing "TLS is used for HTTP and HTTPS" (TLS is for HTTPS; HTTP is cleartext).
- Thinking TLS is "provided by NGINX proxy" (not required; devices can terminate TLS themselves).
- Believing Cisco devices don't support TLS for RESTCONF (they do; RESTCONF commonly runs over HTTPS).

Exam tips: For ENCOR, remember: RESTCONF uses HTTP semantics, but secure RESTCONF is HTTPS, which implies TLS and certificates.
If a question mentions secure RESTCONF or writing configs, assume HTTPS/TLS and focus on certificate-based server authentication and trust validation steps.
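A minimal Python sketch of a RESTCONF write over HTTPS, using only the standard library; the device hostname, credentials, and YANG path are placeholders, and the request is only built (not sent), which is enough to show where TLS and certificate validation come in:

```python
# Build (but do not send) a RESTCONF PATCH over HTTPS using the stdlib.
# Hostname, interface name, and payload below are hypothetical examples.
import json
import ssl
import urllib.request

url = ("https://router.example.com/restconf/data/"
       "ietf-interfaces:interfaces/interface=GigabitEthernet1")
body = json.dumps(
    {"ietf-interfaces:interface": {"description": "uplink"}}
).encode()

req = urllib.request.Request(
    url,
    data=body,
    method="PATCH",
    headers={
        "Content-Type": "application/yang-data+json",
        "Accept": "application/yang-data+json",
    },
)

# Sending this with urllib.request.urlopen(req, context=ctx) would perform the
# TLS handshake and validate the device's X.509 server certificate chain and
# hostname before any credentials or configuration data cross the wire.
ctx = ssl.create_default_context()
print(req.type)          # https -> TLS-protected transport
print(req.get_method())  # PATCH
```

The default SSL context requires and verifies the server certificate, which is the certificate-based server authentication the correct option describes; mutual TLS would additionally load a client certificate into the context.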
A customer has several small branches and wants to deploy a Wi-Fi solution with local management using CAPWAP. Which deployment model meets this requirement?
Local mode is the classic lightweight AP mode where APs join a Cisco WLC using CAPWAP. It provides centralized control, RF management, and policy enforcement, but typically the controller is a dedicated WLC (often centralized). It can be used for branches, yet it doesn’t inherently imply local management at each small branch unless you deploy a WLC per site.
Autonomous APs are managed locally on each AP and do not require a controller. However, they do not use CAPWAP; CAPWAP is specific to lightweight/controller-based deployments. Autonomous is therefore incompatible with the requirement to use CAPWAP, even though it matches the “local management” idea at a high level.
SD-Access wireless integrates wireless into the SD-Access fabric, leveraging DNA Center and controller-based wireless (CAPWAP to WLC) with fabric concepts like VXLAN and SGT-based policy. It is aimed at campus/enterprise architectures rather than small-branch local management. It adds complexity and dependencies that don’t match the stated small-branch CAPWAP local-management requirement.
Mobility Express (often referred to as EWC on AP) is built for small deployments where one AP runs the controller function locally and other APs join it via CAPWAP. This provides local management and controller-based operation without dedicated WLC hardware, making it ideal for multiple small branches needing CAPWAP with on-site control.
Core concept: This question tests Cisco wireless deployment models and where CAPWAP-based control/management resides. CAPWAP implies a controller-based architecture (APs join a controller over CAPWAP), but the requirement adds “local management” for several small branches—meaning the customer wants controller functionality on-site at each branch without relying on a centralized WLC. Why the answer is correct: Mobility Express is designed specifically for small/remote sites that need a locally managed WLAN solution while still using CAPWAP. In Mobility Express, one AP (the “controller AP”) runs the embedded wireless LAN controller (EWC) function, and other APs at the site join it using CAPWAP—providing centralized configuration, RF management, and client services locally at the branch. This meets both requirements: CAPWAP-based AP-to-controller operation and local management suitable for small branches. Key features / best practices: Mobility Express (EWC on AP) provides a single management point per site, supports multiple APs per branch, and avoids the cost/complexity of dedicated WLC hardware. It is commonly used where WAN links are limited or where local survivability is desired. Operationally, you plan for controller AP redundancy (if supported in the chosen platform/version), ensure consistent software versions across APs, and size the solution based on supported AP/client limits. Common misconceptions: “Local mode” is CAPWAP-based, but it assumes a separate WLC (often centralized). That can be used for branches, but it does not inherently satisfy “local management” unless you deploy a WLC at every branch, which is not the typical small-branch model. “Autonomous” APs are locally managed but do not use CAPWAP (they are standalone). “SD-Access wireless” is an enterprise fabric architecture and still relies on controller-based operation; it’s not the small-branch local-management CAPWAP model described. Exam tips: When you see CAPWAP, eliminate autonomous. 
Then decide where the controller lives: centralized WLC (local mode) vs controller-on-AP for small sites (Mobility Express/EWC). Keywords like “small branches,” “local management,” and “no dedicated controller” strongly point to Mobility Express.
Refer to the exhibit.
R1# sh run | begin line con
line con 0
exec-timeout 0 0
privilege level 15
logging synchronous
stopbits 1
line aux 0
exec-timeout 0 0
privilege level 15
logging synchronous
stopbits 1
line vty 0 4
password 7 045802150C2E
login
line vty 5 15
password 7 045802150C2E
login
!
end
R1# sh run | include aaa | enable
no aaa new-model
R1#
Which privilege level is assigned to VTY users?
Privilege level 1 is the default for a user who logs in on VTY using legacy line password authentication (login/password) when no per-line "privilege level" is configured and AAA is disabled. The user enters user EXEC mode and must use the enable command to attempt to reach privileged EXEC.
Privilege level 7 is not a default for VTY access. It could be assigned only if explicitly configured (for example, "line vty ... privilege level 7") or via AAA/username privilege settings. Nothing in the provided configuration assigns level 7 to VTY sessions.
Privilege level 13 is also not a default. Higher privilege levels (2–14) are typically used for custom role/command sets and must be explicitly assigned via line configuration or AAA authorization/username privilege. The exhibit shows no such configuration for VTY lines.
Privilege level 15 would apply if the VTY lines had "privilege level 15" configured, or if AAA exec authorization/username privilege granted it. In the exhibit, only console and AUX lines have privilege level 15 configured; VTY lines do not, so they default to level 1.
Core Concept: This question tests Cisco IOS user privilege behavior on VTY lines when AAA is not enabled. Privilege levels control what commands a user can execute after logging in. The key is understanding the default privilege level assigned to a user who authenticates via a line password (login) without AAA and without an explicit per-line privilege configuration.

Why the Answer is Correct: The configuration shows VTY lines 0–4 and 5–15 with only:
- password 7 ...
- login

There is no "privilege level X" configured under either VTY line. The output also shows "no aaa new-model", meaning AAA is disabled and the device is using legacy line-based authentication. In this legacy mode, a successful VTY login using the line password places the user into user EXEC mode, which is privilege level 1 by default. To reach privileged EXEC (level 15), the user must still use the enable command and provide the enable secret/password (if configured).

Key Features / Configuration Notes:
- "privilege level 15" is explicitly configured on the console and AUX lines, so those sessions start at level 15 immediately after login.
- VTY lines do not inherit console/AUX privilege settings; each line type can have its own privilege level.
- With AAA enabled, privilege can be assigned via local usernames (username ... privilege X) or via authorization (aaa authorization exec ...), but AAA is explicitly disabled here.

Common Misconceptions:
- Seeing "privilege level 15" on console/AUX can mislead candidates into thinking all access methods default to 15. They do not.
- The presence of an encrypted password (type 7) on VTY lines does not imply privileged access; it only controls authentication to get a shell.
- Some assume VTY users get level 15 if enable is not configured; in reality they still start at level 1, and enable behavior is separate.
Exam Tips: For ENCOR, always check three things for remote access privilege: (1) line vty privilege level, (2) AAA new-model and any exec authorization, and (3) local usernames with privilege levels. If none are present, assume VTY logins land in user EXEC (level 1) and require enable to elevate.
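The resolution logic above can be sketched as a small helper. This is a hypothetical model of my own (the function name and data layout are not IOS internals): an explicit per-line "privilege level X" wins; otherwise the session starts in user EXEC, level 1.

```python
# Hypothetical model of the initial privilege level on a legacy
# (no aaa new-model) IOS device: explicit per-line config wins,
# otherwise the session lands in user EXEC (level 1).
def initial_privilege(line_config: list[str]) -> int:
    for cmd in line_config:
        cmd = cmd.strip()
        if cmd.startswith("privilege level "):
            return int(cmd.split()[-1])
    return 1  # default: user EXEC

# Line configs taken from the exhibit.
con0 = ["exec-timeout 0 0", "privilege level 15", "logging synchronous"]
vty = ["password 7 045802150C2E", "login"]

print(initial_privilege(con0))  # 15 -> console lands directly in privileged EXEC
print(initial_privilege(vty))   # 1  -> VTY users must still run "enable"
```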
Refer to the exhibit.
PYTHON CODE:
import requests
import json
url='http://YOURIP/ins'
switchuser='USERID'
switchpassword='PASSWORD'
myheaders={'content-type':'application/json'}
payload={
"ins_api":{
"version":"1.0",
"type":"cli_show",
"chunk":"0",
"sid":"1",
"input":"show version",
"output_format":"json"
}
}
response = requests.post(url,data=json.dumps(payload), headers=myheaders,auth=(switchuser,switchpassword)).json()
print(response['ins_api']['outputs']['output']['body']['kickstart_ver_str'])
HTTP JSON Response:
{
"ins_api": {
"type": "cli_show",
"version": "1.0",
"sid": "eoc",
"outputs": {
"output": {
"input": "show version",
"msg": "Success",
"code": "200",
"body": {
"bios_ver_str": "07.61",
"kickstart_ver_str": "7.0(3)I7(4)",
"bios_cmpl_time": "04/06/2017",
"kick_file_name": "bootflash:///nxos.7.0.3.I7.4.bin",
"kick_cmpl_time": "6/14/1970 2:00:00",
"kick_tmstmp": "06/14/1970 09:49:04",
"chassis_id": "Nexus9000 93180YC-EX chassis",
"cpu_name": "Intel(R) Xeon(R) CPU @ 1.80GHz",
"memory": 24633488,
"mem_type": "kB",
"rr_usecs": 134703,
"rr_ctime": "Sun Mar 10 15:41:46 2019",
"rr_reason": "Reset Requested by CLI command reload",
"rr_sys_ver": "7.0(3)I7(4)",
"rr_service": "",
"manufacturer": "Cisco Systems, Inc.",
"TABLE_package_list": {
"ROW_package_list": {
"package_id": {}
}
}
}
}
}
}
}
Which HTTP JSON response does the Python code output give?
Correct. The Python code prints response['ins_api']['outputs']['output']['body']['kickstart_ver_str'], and that exact key path is present in the provided JSON response. The value stored at that location is 7.0(3)I7(4), which is the NX-OS kickstart version string. Since the path is valid and the json module is imported, the script successfully prints that value rather than raising an exception.
Incorrect. The value 7.61 corresponds to bios_ver_str, not kickstart_ver_str. The code does not reference the BIOS version field anywhere in the print statement. Therefore, this value would only be printed if the code accessed response['ins_api']['outputs']['output']['body']['bios_ver_str'] instead.
Incorrect. The script explicitly includes import json at the top, so json.dumps(payload) is valid. A NameError for json would occur only if the module were not imported or if the name had been overwritten later in the script, neither of which is shown. Therefore, this option is not supported by the exhibit.
Incorrect. The key kickstart_ver_str is present exactly under ins_api -> outputs -> output -> body in the provided JSON. Because the dictionary path in the code matches the response structure precisely, Python does not raise a KeyError. This distractor relies on a common concern about NX-API structural variations, but the exhibit itself shows a valid direct match.
Core concept: This question tests Python JSON parsing of a Cisco NX-API response. The code sends a POST request, converts the HTTP response body into a Python dictionary with .json(), and then accesses a nested key path to print a specific field from the returned JSON. Why correct: The key path response['ins_api']['outputs']['output']['body']['kickstart_ver_str'] exists exactly as shown in the provided HTTP JSON response, and its value is the NX-OS kickstart version string. Key features: The script correctly imports both requests and json, uses json.dumps(payload), and references a valid nested dictionary path. Common misconceptions: Candidates may confuse bios_ver_str with kickstart_ver_str, or assume a KeyError due to list-versus-dict variations in other NX-API examples, but this exhibit clearly shows output as a dictionary. Exam tips: For automation questions, trace the JSON hierarchy literally from the exhibit and avoid assuming alternate response formats unless they are explicitly shown.
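The key-path traversal can be traced offline against an abbreviated copy of the exhibit's JSON. The defensive `.get()` variant below is my own illustration of how to avoid a KeyError when NX-API structures vary; it is not part of the original script.

```python
# The exhibit's response, abbreviated to the keys the script touches.
response = {
    "ins_api": {
        "outputs": {
            "output": {
                "body": {
                    "bios_ver_str": "07.61",
                    "kickstart_ver_str": "7.0(3)I7(4)",
                }
            }
        }
    }
}

# Literal path from the script -- succeeds because every key exists:
print(response['ins_api']['outputs']['output']['body']['kickstart_ver_str'])

# Defensive variant for responses whose structure may vary (for example,
# when "output" can become a list if multiple commands are sent): chained
# .get() calls return a default instead of raising KeyError.
body = (response.get("ins_api", {})
                .get("outputs", {})
                .get("output", {})
                .get("body", {}))
print(body.get("kickstart_ver_str"))  # 7.0(3)I7(4)
print(body.get("no_such_key"))        # None
```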
Which access control list allows only TCP traffic with a destination port range of 22-443, excluding port 80?
This option first denies TCP traffic with destination port 80, which correctly removes the excluded port before any broader permit is evaluated. The second statement permits TCP traffic with destination ports greater than 21 and less than 444, which corresponds exactly to ports 22 through 443. Because ACLs are processed top-down, packets for port 80 match the deny first and are dropped, while packets for ports 22-79 and 81-443 are permitted by the second line. Traffic outside that range is not matched by the permit and is denied by the implicit deny, so the logic satisfies the requirement.
This option places the permit for the full range 22 through 443 before the deny for port 80, which breaks the intended logic. Because port 80 falls inside the permitted range, packets to port 80 match the first statement and are allowed immediately. The later deny is never evaluated for those packets due to first-match ACL behavior. As a result, port 80 is not excluded, so the ACL does not meet the requirement.
This option permits only TCP traffic with destination port 80, which is the exact opposite of the requirement to exclude port 80. It does not allow the broader destination port range of 22 through 443, so valid traffic to ports such as 22, 443, or 3389 within the intended range would not be handled correctly. Because of the implicit deny at the end of the ACL, all other traffic would be blocked. Therefore this option clearly fails to implement the requested access policy.
This option is correct because it explicitly denies TCP destination port 80 and then permits the inclusive destination port range 22 through 443. Since ACLs stop processing after the first match, traffic to port 80 is blocked by the first line and never reaches the permit statement. Traffic to other ports in the 22-443 range is allowed by the second line, and traffic outside that range is denied implicitly. This is the clearest and most direct expression of the requested policy.
Core concept: This question tests extended ACL processing order and TCP destination port matching using operators such as eq, gt, lt, and range. The requirement is to permit only TCP traffic whose destination port is between 22 and 443 while excluding port 80, which means the ACL must deny port 80 before permitting the broader allowed set. Both a deny-then-range approach and a deny-then-gt/lt approach can satisfy the requirement because ACLs are evaluated top-down and stop at the first match. A common misconception is that only the range keyword is valid for expressing the allowed ports, but gt 21 lt 444 is functionally equivalent to ports 22 through 443. Exam tip: always verify both the logic and the order of ACL entries, and remember that anything not explicitly permitted is blocked by the implicit deny at the end.
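The first-match behavior can be simulated in a few lines. The helper below is a sketch of my own (not a Cisco tool); it evaluates entries top-down and applies the implicit deny, showing why ordering decides whether port 80 is excluded.

```python
# First-match simulation of the two candidate ACL orderings (hypothetical helper).
def acl_permits(acl, port):
    """Return the action of the first matching entry; implicit deny otherwise."""
    for action, match in acl:
        if match(port):
            return action == "permit"
    return False  # implicit deny at the end of every ACL

# Correct order: deny port 80 first, then permit the 22-443 range.
good = [
    ("deny",   lambda p: p == 80),
    ("permit", lambda p: 22 <= p <= 443),   # equivalent to "gt 21 lt 444"
]

# Broken order: the broad permit shadows the deny, so port 80 slips through.
bad = [
    ("permit", lambda p: 22 <= p <= 443),
    ("deny",   lambda p: p == 80),
]

print([acl_permits(good, p) for p in (21, 22, 80, 443, 444)])
# [False, True, False, True, False]
print(acl_permits(bad, 80))  # True -- requirement violated
```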
In which part of the HTTP message is the content type specified?
HTTP method (GET, POST, PUT, etc.) is part of the request start-line and indicates the action to perform on a resource. It does not specify the media type of any payload. While some methods commonly include a body (POST/PUT/PATCH), the method itself never declares whether the body is JSON, XML, or another format; that is done via the Content-Type header.
The body contains the actual payload (for example, JSON text, HTML, or binary data). It does not inherently include standardized metadata fields like Content-Type. Although an application could embed type information inside the payload, HTTP’s defined mechanism for declaring the payload’s media type is the Content-Type header, which is separate from the body and precedes it.
The HTTP header section is where Content-Type is specified (for example, "Content-Type: application/json"). Headers carry metadata about the message and its payload, including type, length, caching directives, authentication, and more. Both HTTP requests and responses use Content-Type to tell the receiver how to interpret the body, making the header the correct location.
The URI identifies the resource (path and optional query) being accessed, such as /api/items?id=10. While URIs sometimes include file extensions that hint at content (like .html), that is not the authoritative HTTP mechanism for payload typing. The formal declaration of the body’s media type is done with the Content-Type header, not the URI.
Core Concept: This question tests basic HTTP message structure and where metadata about the payload is carried. HTTP is a text-based application-layer protocol that separates message metadata (start-line and headers) from the message payload (body). Content typing is part of that metadata.

Why the Answer is Correct: The content type is specified in the HTTP headers using the "Content-Type" header field. In HTTP requests, Content-Type describes the media type of the request body being sent to the server (for example, JSON, XML, form data). In HTTP responses, Content-Type describes the media type of the response body returned to the client (for example, text/html, application/json). Because it is a header field, it belongs to the header section of the HTTP message, not the method, URI, or body.

Key Features / Details: An HTTP message is generally:
1) Start-line: request line (METHOD SP URI SP VERSION) or status line (VERSION SP STATUS SP REASON)
2) Headers: key-value fields such as Host, User-Agent, Accept, Content-Type, Content-Length
3) Blank line (CRLF) separating headers from body
4) Optional body (payload)

Common Content-Type examples include:
- application/json
- application/xml
- application/x-www-form-urlencoded
- multipart/form-data

In networking/security contexts (relevant to ENCOR), Content-Type is often used by proxies, firewalls, and NBAR/application recognition to help classify traffic and enforce policy, though it can be spoofed and should not be the only trust signal.

Common Misconceptions: Learners sometimes think the body "contains" the content type because the body contains the content itself. However, the body is raw payload; the type is declared in headers. Others confuse Content-Type with the URI extension (like .html) or with the HTTP method (POST/GET), but those do not formally define the media type of the payload.
Exam Tips: Remember: anything describing the payload (type, length, encoding) is typically in headers: Content-Type, Content-Length, Content-Encoding. Also distinguish Content-Type (what the body is) from Accept (what the client can receive). For quick elimination, method and URI are part of the request line, not where payload metadata is declared.
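The message structure can be made concrete by splitting a raw HTTP request at the blank line. The sample message below is invented for illustration; the parse shows that Content-Type sits in the header section, separate from both the start-line and the body.

```python
# Minimal parse of a raw HTTP request (sample message invented for illustration).
raw = (
    "POST /api/items HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Content-Type: application/json\r\n"
    "Content-Length: 16\r\n"
    "\r\n"                      # blank line (CRLF) separating headers from body
    '{"name": "item"}'
)

head, _, body = raw.partition("\r\n\r\n")
start_line, *header_lines = head.split("\r\n")
headers = dict(line.split(": ", 1) for line in header_lines)

print(start_line)               # POST /api/items HTTP/1.1  (method + URI + version)
print(headers["Content-Type"])  # application/json          (metadata, in the headers)
print(body)                     # {"name": "item"}          (raw payload, untyped by itself)
```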
A network is being migrated from IPv4 to IPv6 using a dual-stack approach. Network management is already 100% IPv6 enabled. In a dual-stack network with two dual-stack NetFlow collectors, how many flow exporters are needed per network device in the flexible NetFlow configuration?
One exporter would only be sufficient if there were a single collector destination or some external abstraction explicitly presented, such as a load balancer or anycast VIP acting as one logical collector target. The question states there are two dual-stack NetFlow collectors, which in standard Flexible NetFlow design means two export destinations. Dual-stack capability does not merge two separate collectors into one exporter definition. Therefore, 1 is too few.
Two exporters are needed because the device must send flow data to two separate NetFlow collectors, and each exporter defines one collector destination with its associated transport settings. In Flexible NetFlow, exporter configuration is tied to where the data is sent, not to whether the observed traffic is IPv4 or IPv6. Since the collectors are dual-stack and management is IPv6 enabled, the exporters can use IPv6 transport, but there is still one exporter per collector. Therefore, each network device requires two exporters in this scenario.
Four exporters would incorrectly assume that exporter count must be doubled for IPv4 and IPv6 traffic in addition to the two collectors. Flexible NetFlow does not require separate exporters for IPv4 flows and IPv6 flows when the same collector destination is used. The distinction between IPv4 and IPv6 is handled in the flow records and monitors, while exporters remain destination-based. As a result, 4 overstates the requirement.
Eight exporters is far beyond what is needed and reflects an incorrect multiplication of collectors, protocol families, or other factors such as interfaces or directions. Exporters are not created per interface, per direction, or per address family unless the design explicitly requires different destinations or transport settings. In this case, there are only two collector endpoints to send data to. Therefore, 8 is not justified by the scenario.
Core concept: Flexible NetFlow separates flow records, flow monitors, and flow exporters. A flow exporter defines the destination collector and transport parameters used to send flow data, while records and monitors define what traffic is measured. Why correct: because there are two NetFlow collectors, each device needs two exporter definitions—one for each collector—even in a dual-stack network. Key features: dual-stack refers to the observed traffic and possibly the transport used to reach the collectors, but exporter count is driven by the number of collector destinations, not by IPv4 versus IPv6 flows. Common misconceptions: many candidates incorrectly multiply exporters by protocol family, or assume dual-stack collectors allow one exporter to cover multiple distinct collector endpoints. Exam tips: remember that IPv4 and IPv6 flows may require separate records/monitors, but exporter count normally maps to collector destinations.
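The destination-driven rule can be sketched as a tiny generator: one exporter definition per collector, regardless of address family. The collector addresses and exporter names below are invented placeholders, not part of the question.

```python
# Sketch: Flexible NetFlow exporter definitions are destination-driven,
# one per collector (collector addresses invented for illustration).
collectors = ["2001:db8::10", "2001:db8::20"]  # two dual-stack collectors, reached over IPv6

exporters = [
    {"name": f"EXPORTER-{i}", "destination": dst, "transport": "udp 2055"}
    for i, dst in enumerate(collectors, start=1)
]

# IPv4 vs IPv6 traffic is distinguished by flow records/monitors, not by
# extra exporters, so the count tracks the number of collector destinations.
print(len(exporters))  # 2
for e in exporters:
    print(e["name"], "->", e["destination"])
```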
Refer to the exhibit.
R1#show ip bgp
BGP table version is 32, local router ID is 192.168.101.5
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
x best-external, a additional-path, c RIB-compressed,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found
   Network          Next Hop            Metric LocPrf Weight Path
*  192.168.102.0    192.168.101.18          80             0 64517i
*                   192.168.101.14          80     80      0 64516i
*                   192.168.101.10                         0 64515 64515i
*>                  192.168.101.2                      32768 64513i
*                   192.168.101.6           80             0 64514 64514i
Which IP address becomes the active next hop for 192.168.102.0/24 when 192.168.101.2 fails?
192.168.101.10 is not selected because its AS-path is longer than the path through 192.168.101.18. The path attribute shown is `64515 64515i`, which indicates AS 64515 has been prepended, giving it an AS-path length of 2. After the failed best path is removed, BGP compares the remaining eBGP candidates and prefers the shorter AS-path. Therefore, 192.168.101.18 is preferred over 192.168.101.10.
192.168.101.14 is not selected because it is an iBGP-learned route, as indicated by the Local Preference value shown in the table. Its Local Preference of 80 is lower than the default of 100 that applies to the paths displaying no value, and Cisco best-path selection additionally prefers eBGP paths over iBGP paths. The remaining eBGP routes are therefore preferred ahead of this iBGP route. As a result, 192.168.101.14 does not become the active next hop.
192.168.101.6 is not selected because its AS-path is longer than the path through 192.168.101.18. The path shown is `64514 64514i`, which is an AS-path length of 2 because of prepending. While it also shows a MED of 80, MED is considered later and does not help it beat a shorter eBGP AS-path. Therefore, 192.168.101.18 remains the best remaining route.
192.168.101.18 becomes the active next hop after 192.168.101.2 is withdrawn. Among the remaining candidates, 192.168.101.14 is an iBGP-learned route because it shows a Local Preference value, while 192.168.101.18, 192.168.101.10, and 192.168.101.6 are eBGP-learned routes. BGP prefers eBGP over iBGP when earlier decisive attributes do not produce a winner, so the iBGP path via 192.168.101.14 is not selected. Between the remaining eBGP paths, 192.168.101.18 has the shortest AS-path length of 1 compared with 192.168.101.6 and 192.168.101.10, which each show AS-path length 2 due to AS prepending, so 192.168.101.18 wins.
Core concept: This question tests Cisco BGP best-path selection after the current best route is withdrawn. When the path via 192.168.101.2 fails, R1 reevaluates the remaining valid paths for 192.168.102.0/24 using the normal BGP decision process. Key attributes to compare here are Local Preference, AS-path length, origin, MED, and the eBGP-versus-iBGP preference. A common misconception is to assume that the displayed LocPrf of 80 automatically beats all other paths; in fact, 80 is lower than the default of 100 that applies to paths showing no value, and the path via 192.168.101.14 is iBGP while several others are eBGP. Exam tip: after eliminating the failed best path, identify which remaining routes are eBGP versus iBGP, and remember that eBGP is preferred over iBGP before later tiebreakers such as IGP metric.
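The comparison among the remaining candidates can be sketched with a simplified model. This is my own reduction of the decision process to the attributes that matter in this exhibit (Local Preference, AS-path length, eBGP-over-iBGP); the dictionaries and ordering key are assumptions, not IOS internals.

```python
# Remaining candidates for 192.168.102.0/24 after 192.168.101.2 is withdrawn.
# LocPrf 100 is assumed as the default for paths that display no value.
paths = [
    {"next_hop": "192.168.101.18", "locprf": 100, "ebgp": True,  "as_len": 1},
    {"next_hop": "192.168.101.14", "locprf": 80,  "ebgp": False, "as_len": 1},  # iBGP
    {"next_hop": "192.168.101.10", "locprf": 100, "ebgp": True,  "as_len": 2},  # prepended
    {"next_hop": "192.168.101.6",  "locprf": 100, "ebgp": True,  "as_len": 2},  # prepended
]

# Simplified ordering: higher Local Preference first, then shorter AS-path,
# then eBGP over iBGP -- a subset of the real best-path algorithm.
best = max(paths, key=lambda p: (p["locprf"], -p["as_len"], p["ebgp"]))
print(best["next_hop"])  # 192.168.101.18
```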
An engineer configures a WLAN with fast transition enabled. Some legacy clients fail to connect to this WLAN. Which feature allows the legacy clients to connect while still allowing other clients to use fast transition based on their OUIs?
“Over the DS” is one of the two 802.11r FT methods (FT over-the-air vs FT over-the-distribution-system). It affects how FT authentication frames are exchanged during roaming, not whether a non-FT client can associate to an SSID advertising FT. Legacy clients that fail due to FT IEs/AKMs will typically still fail regardless of over-the-DS selection.
802.11k provides radio resource measurements and neighbor reports to help clients discover nearby APs and roam more efficiently. It does not change the authentication/keying method and does not address clients that cannot connect because they do not support or mis-handle 802.11r FT. It’s complementary to 11r but not a compatibility mechanism.
Adaptive R (Adaptive 802.11r) is a Cisco feature that allows a single WLAN/SSID to support both FT-capable and legacy clients. The controller can selectively allow fast transition for specific client types (often identified by OUI/vendor) while permitting other clients to connect using non-FT security methods. This preserves fast roaming benefits without breaking older endpoints.
802.11v adds network-assisted roaming features such as BSS Transition Management (steering) and other management enhancements. Like 802.11k, it can improve roaming decisions but does not modify the security handshake in a way that enables legacy clients to connect when 802.11r causes association/authentication issues. It’s not the selective FT-by-OUI feature.
Core concept: This question tests IEEE 802.11r Fast Transition (FT) behavior and Cisco WLAN features that preserve FT benefits while maintaining compatibility for legacy/non-FT clients. 802.11r changes the key management/roaming handshake (FT authentication) to reduce roam time. Some older clients mis-handle FT information elements (IEs) in beacons/probe responses or do not support FT AKM suites, causing association/authentication failures when FT is enabled.

Why the answer is correct: Cisco "Adaptive 802.11r" (often shown as Adaptive R) allows a single SSID to support both FT-capable clients and legacy clients. The controller uses client identification (commonly via OUI/vendor) and/or client capability detection to decide whether to advertise/offer FT to that client. FT-capable clients can use 802.11r and get fast roaming, while legacy clients can connect using standard WPA2/802.1X or PSK without being forced into FT. This directly matches the requirement: "still allowing other clients to use fast transition based on their OUIs."

Key features / configuration notes:
- 802.11r can be enabled in different modes; some environments require mixed support.
- Adaptive 11r is a Cisco feature that selectively enables FT for known-good client types (by OUI/vendor) while keeping the SSID usable for others.
- Best practice: validate client compatibility lists (especially for voice devices) and test FT with your endpoint mix; enable adaptive/mixed approaches when you have heterogeneous clients.

Common misconceptions:
- "Over the DS" is an 802.11r method (FT over-the-DS vs over-the-air), but it does not solve legacy-client association failures; it only changes how FT messages are transported.
- 802.11k and 802.11v are roaming-assist standards (neighbor reports, BSS transition management) and do not provide the selective FT compatibility mechanism.

Exam tips:
- If the question mentions legacy clients failing when 802.11r is enabled and asks for a way to keep FT for some clients (often referencing OUI/vendor), think "Adaptive 802.11r."
- Remember: 11k/11v help clients roam smarter; 11r changes the security handshake to roam faster. Compatibility issues are specifically tied to 11r IEs/AKMs, so the fix is an adaptive/mixed 11r feature, not 11k/11v.
Based on this interface configuration, what is the expected state of OSPF adjacency?
R1:
interface GigabitEthernet0/1
ip address 192.0.2.1 255.255.255.252
ip ospf 1 area 0
ip ospf hello-interval 2
ip ospf cost 1
end
R2:
interface GigabitEthernet0/1
ip address 192.0.2.2 255.255.255.252
ip ospf 1 area 0
ip ospf cost 500
end
2WAY/DROTHER is a broadcast multiaccess behavior where routers may stop at 2-Way with non-DR/BDR neighbors. However, this requires that neighbors at least successfully exchange matching hellos and reach 2-Way. Here, the hello/dead timers are mismatched (2s vs default 10s), so the routers will not form a stable neighbor relationship to reach 2-Way.
Correct. OSPF neighbors must have matching hello and dead intervals. R1 sets hello to 2 seconds; R2 uses the default (10 seconds on Ethernet). This mismatch prevents the routers from accepting each other’s hellos as valid, so the adjacency will not be established (it will not reach FULL). The cost mismatch is irrelevant to adjacency formation.
FULL adjacency would occur on a point-to-point link or between DR/BDR and others on a broadcast segment, but only after successful neighbor parameter negotiation. Because the hello/dead timers do not match, the routers will not become neighbors and cannot reach FULL. Different interface costs do not prevent FULL; timer mismatch does.
DR/BDR roles apply to broadcast and NBMA network types, and the state would be FULL/DR and FULL/BDR (not FULL/BDR on both). More importantly, DR/BDR election and FULL state require a working neighbor relationship first. With mismatched hello/dead timers, the routers will not establish adjacency, so DR/BDR states are not reached.
Core concept: This question tests OSPF neighbor formation requirements versus parameters that only influence path selection. For an OSPF adjacency to form, key interface parameters must match between neighbors (same area, same network type, same authentication, same hello/dead timers, same stub flags, and compatible MTU behavior). Separately, OSPF cost affects route calculation (SPF) and does not need to match.

Why the answer is correct: R1 explicitly sets ip ospf hello-interval 2 on the interface. R2 does not, so it uses the default hello interval for the interface network type. On an Ethernet/broadcast interface, the default OSPF hello interval is 10 seconds (dead interval 40 seconds). R1 will therefore send hellos every 2 seconds and (unless also changed) will use a dead interval derived from the hello interval (commonly 4x hello, i.e., 8 seconds). R2 will send hellos every 10 seconds and expect a 40-second dead interval. Because OSPF requires the hello and dead intervals to match exactly between neighbors, the routers will not progress to FULL; they will typically remain stuck in INIT/EXSTART or fail to become neighbors at all, resulting in no established adjacency.

Key features / best practices:
- Matching requirements: hello/dead timers must match; area must match; authentication must match; network type must match.
- Non-matching but allowed: interface cost can differ (it is local and used for outbound metric calculation).
- If you tune hello intervals for fast convergence, tune both sides consistently (and consider BFD as an alternative).

Common misconceptions: Many assume a large cost difference (1 vs 500) prevents adjacency. It does not; it only changes which path is preferred. Another trap is thinking only the hello interval matters; in practice, hello and dead must both match, and changing hello often implicitly changes dead unless explicitly set.
Exam tips: When you see OSPF adjacency questions, immediately check: area, timers, authentication, network type, and MTU. Treat cost differences as routing-policy/metric issues, not neighbor-formation blockers. If only one side changes hello/dead, expect “not established.”
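The timer check can be modeled in a few lines. This is a simplified sketch of my own (the function names and dictionaries are not IOS constructs): it derives each side's hello/dead pair from the exhibit's configs and ignores cost, which never blocks neighbor formation.

```python
# Simplified model of the OSPF hello/dead comparison described above.
ETHERNET_DEFAULT_HELLO = 10  # seconds on broadcast networks; dead defaults to 4x hello

def ospf_timers(cfg: dict) -> tuple[int, int]:
    hello = cfg.get("hello", ETHERNET_DEFAULT_HELLO)
    dead = cfg.get("dead", 4 * hello)
    return hello, dead

def adjacency_possible(r1: dict, r2: dict) -> bool:
    # Hello/dead must match exactly; interface cost is intentionally ignored
    # because cost only influences SPF, never neighbor formation.
    return ospf_timers(r1) == ospf_timers(r2)

r1 = {"hello": 2, "cost": 1}    # ip ospf hello-interval 2, ip ospf cost 1
r2 = {"cost": 500}              # defaults: hello 10, dead 40

print(ospf_timers(r1))              # (2, 8)
print(ospf_timers(r2))              # (10, 40)
print(adjacency_possible(r1, r2))   # False -- no adjacency forms
```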
Refer to the exhibit.
access-list 1 permit 10.1.1.0 0.0.0.31
ip nat pool CISCO 209.165.201.1 209.165.201.30 netmask 255.255.255.224
ip nat inside source list 1 pool CISCO
What are two effects of this configuration? (Choose two.)
Incorrect. A true one-to-one NAT translation in the exam sense usually implies a fixed, permanent mapping (static NAT), configured with "ip nat inside source static". This configuration is dynamic NAT using a pool; mappings are allocated as needed and can change over time. While each active inside host may temporarily get one global address, it is not a guaranteed permanent one-to-one mapping.
Incorrect. The 209.165.201.0/27 network in the pool represents inside global addresses (public addresses used to represent inside hosts). “Outside local” refers to how an outside host’s address is represented inside the network (often only relevant with overlapping addressing or special NAT cases). Nothing here defines outside local addresses or an outside address range.
Correct. The standard ACL 1 permits 10.1.1.0 0.0.0.31, which matches 32 addresses (10.1.1.0–10.1.1.31), i.e., 10.1.1.0/27. In NAT terms, these are the inside local addresses (the private addresses assigned to inside hosts) that are eligible to be translated when traffic is sent toward the outside.
Correct. The NAT pool CISCO provides addresses 209.165.201.1–209.165.201.30 with a /27 mask, meaning they are part of the 209.165.201.0/27 subnet. The command "ip nat inside source list 1 pool CISCO" translates matched inside source addresses to addresses from this pool, making them inside global addresses visible to the outside.
Incorrect. 10.1.1.0/27 is not the inside global range; it is the inside local range (private addresses). The inside global range is the public address space used to represent inside hosts externally, which in this configuration is the NAT pool 209.165.201.1–209.165.201.30 (within 209.165.201.0/27).
Core concept: This configuration is testing dynamic NAT using a NAT pool and a standard ACL to match inside source addresses. In Cisco NAT terminology, the inside local address is the private (RFC 1918) address of an inside host, and the inside global address is the public address that represents that host to the outside.

Why the answer is correct: The ACL statement "access-list 1 permit 10.1.1.0 0.0.0.31" matches the inside source subnet 10.1.1.0/27 (wildcard 0.0.0.31 corresponds to a /27). That means hosts in 10.1.1.0 through 10.1.1.31 are eligible for NAT. The NAT pool "CISCO" defines a public range from 209.165.201.1 to 209.165.201.30 with netmask 255.255.255.224 (/27). The command "ip nat inside source list 1 pool CISCO" ties them together: any packet sourced from an address matched by ACL 1 (inside local) will be translated to an address from the pool (inside global). Therefore, (C) the inside local addresses are 10.1.1.0/27, and (D) those inside source addresses are translated to the 209.165.201.0/27 public subnet (specifically .1-.30 from that /27).

Key features / best practices: This is dynamic NAT (not PAT) because no "overload" keyword is present. Each active translation consumes one address from the pool; when the pool is exhausted, new translations fail until existing ones time out. Also remember NAT requires interface roles (ip nat inside / ip nat outside) on the relevant interfaces, though they are not shown in the snippet.

Common misconceptions: Many candidates confuse dynamic NAT with one-to-one static NAT. Dynamic NAT can be one-to-one per active session/host, but it is not a fixed mapping. Another common confusion is mixing up "outside local/global" with "inside local/global"; the pool defines inside global addresses, not outside local.

Exam tips: If you see "ip nat inside source list <acl> pool <name>" without "overload", think dynamic NAT with a pool. Convert wildcard masks to prefix lengths quickly (0.0.0.31 = /27). Then identify: ACL = inside local match; pool = inside global range.
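For context, a complete dynamic-NAT configuration adds the interface roles that the exhibit omits; the interface names and the inside/outside assignment below are assumptions for illustration:

```
access-list 1 permit 10.1.1.0 0.0.0.31
ip nat pool CISCO 209.165.201.1 209.165.201.30 netmask 255.255.255.224
ip nat inside source list 1 pool CISCO
!
interface GigabitEthernet0/0
 ip nat inside      ! faces the 10.1.1.0/27 hosts (inside local)
!
interface GigabitEthernet0/1
 ip nat outside     ! faces the public network (inside global pool)
```

Verify active mappings with show ip nat translations; appending the overload keyword to the ip nat inside source command would turn this into PAT.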
Which OSPF network types are compatible and allow communication through the two peering devices?
Point-to-multipoint and nonbroadcast are not a compatible pair because they use different OSPF operational models. Point-to-multipoint treats each neighbor as an individual point-to-point relationship and does not elect a DR or BDR. Nonbroadcast is a multiaccess type that does use DR/BDR election and usually requires manually configured neighbors. Because their adjacency expectations differ, this pairing is not the standard compatible combination.
Broadcast and nonbroadcast are compatible because both are OSPF multiaccess network types and both use DR/BDR election. Their Hello packets can be accepted as long as key parameters such as area ID, timers, authentication, and subnet match. The NBMA side typically needs manual neighbor configuration because it does not rely on multicast discovery in the same way as broadcast. Once neighbors are defined appropriately, the routers can form adjacency and communicate through the peering devices.
Point-to-multipoint and broadcast are not considered compatible because point-to-multipoint does not use DR/BDR election, while broadcast does. Broadcast assumes a shared multiaccess segment with designated router behavior, but point-to-multipoint models the network as separate logical point-to-point links. These different assumptions can prevent proper adjacency formation. Therefore this is not the correct compatible pair.
Broadcast and point-to-point are not the correct compatible pair in OSPF network type matching. Broadcast networks expect DR/BDR election on a multiaccess segment, while point-to-point explicitly disables DR/BDR and assumes exactly one neighbor on the link. This mismatch in Hello semantics and adjacency behavior makes them unsuitable as the best answer. Cisco exam questions on network type compatibility typically expect matching multiaccess types together, which points to broadcast and nonbroadcast instead.
Core concept: This question tests OSPF interface network type compatibility during neighbor discovery and adjacency formation. OSPF network types determine whether Hellos are sent via multicast or unicast, whether DR/BDR election occurs, and what the routers expect from the segment. For adjacency to form, the two sides must have compatible expectations for neighbor discovery and multiaccess behavior.

Why correct: Broadcast and nonbroadcast (NBMA) are considered compatible because both are multiaccess network types and both use DR/BDR logic. The main operational difference is neighbor discovery: broadcast uses multicast Hellos, while NBMA typically requires manually defined neighbors and unicast Hellos. With proper neighbor configuration on the NBMA side, and with the hello/dead timers matched (the defaults differ: 10/40 seconds for broadcast versus 30/120 seconds for nonbroadcast), the two routers can communicate and form adjacency through the peering devices.

Key features: Broadcast networks support dynamic discovery with multicast and elect a DR/BDR. Nonbroadcast networks also support DR/BDR election but usually require static neighbor statements because multicast may not be available. Point-to-point and point-to-multipoint do not use DR/BDR, so they are less compatible with multiaccess types that expect DR/BDR behavior.

Common misconceptions: A frequent mistake is assuming point-to-point can interoperate with broadcast simply because there are only two routers on the link. In OSPF, the network type affects Hello contents and adjacency expectations, so differing DR/BDR behavior can cause incompatibility. Another misconception is that NBMA is always incompatible with broadcast, when in fact they share the same multiaccess model and can interoperate if neighbors are configured correctly.

Exam tips: Remember that broadcast and NBMA are the two OSPF multiaccess network types that elect a DR/BDR, making them the most naturally compatible pair. Point-to-point and point-to-multipoint are non-DR network types and generally should match each other rather than multiaccess types. On Cisco exams, when asked about compatibility, focus on whether the two types share the same adjacency model and DR/BDR expectations.
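A minimal IOS sketch of a broadcast-to-NBMA peering (addresses and interface names are hypothetical): the NBMA side defines its neighbor statically, and its hello interval is matched to the broadcast side because the defaults differ (10 s for broadcast, 30 s for nonbroadcast):

```
! R1 - broadcast (the default on Ethernet): multicast hellos, DR/BDR elected
interface GigabitEthernet0/0
 ip address 10.0.12.1 255.255.255.0
 ip ospf network broadcast
 ip ospf 1 area 0

! R2 - nonbroadcast: DR/BDR still elected, but hellos are unicast to
! manually configured neighbors; hello timer matched to the broadcast side
interface GigabitEthernet0/0
 ip address 10.0.12.2 255.255.255.0
 ip ospf network non-broadcast
 ip ospf hello-interval 10
 ip ospf 1 area 0
router ospf 1
 neighbor 10.0.12.1
```

With the neighbor statement in place and timers aligned, both routers participate in the same DR/BDR election and reach FULL with the DR.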
Which two mechanisms are available to secure NTP? (Choose two.)
IPsec can encrypt and authenticate IP traffic, and in theory could protect NTP if you built tunnels between peers. However, this is not a typical or primary Cisco IOS NTP hardening mechanism and is rarely used solely for NTP. ENCOR-style questions usually focus on native NTP authentication plus traffic restriction rather than overlay security like IPsec.
IP prefix lists are used to match and filter IP routes (for example, in BGP/OSPF route filtering and redistribution policy). They do not filter or secure application traffic like NTP (UDP/123). While the term “prefix” relates to IP networks, it is not an NTP security control on Cisco devices.
Encrypted authentication refers to NTP authentication using configured keys so the device can validate that NTP messages are from a trusted source. On Cisco IOS/IOS XE, you define NTP keys and enable NTP authentication so only servers/peers using the correct key can influence time. This mitigates spoofed NTP responses and malicious time manipulation.
TACACS+ is an AAA protocol used for authenticating and authorizing administrative access to network devices (login, exec, command authorization, accounting). It does not authenticate NTP servers or protect NTP time synchronization. It may be confused with “authentication” in general, but it is unrelated to NTP packet validation.
IP access list-based security restricts NTP communication to known, trusted sources by filtering UDP port 123 traffic and/or using NTP access control features. This reduces attack surface (spoofing, reflection/amplification, unauthorized queries) and ensures only approved NTP servers/peers can interact with the device. ACLs are a standard, widely tested NTP hardening method.
Core concept: This question tests how to secure Network Time Protocol (NTP) on Cisco IOS/IOS XE devices. NTP is critical because accurate time underpins logs, NetFlow, PKI certificate validation, Kerberos/802.1X, and incident response. If an attacker can spoof or manipulate NTP, they can disrupt operations or obscure forensic timelines.

Why the answers are correct: Two commonly available and exam-relevant mechanisms are (1) NTP authentication and (2) restricting who can talk NTP to the device.
- Encrypted authentication (NTP authentication) uses shared keys so the client can verify that NTP updates are coming from a trusted source. On Cisco, this is typically configured with NTP authentication keys (for example, MD5-based keying in classic NTP). This prevents unauthorized time sources from influencing the clock.
- IP access list-based restriction limits which IPs can send NTP traffic or which peers/servers the device will accept. On Cisco, this is commonly done with interface ACLs and/or NTP-specific access control (for example, restricting NTP queries/updates/peering). This reduces exposure to spoofing, reflection/amplification abuse, and unauthorized time servers.

Key features / best practices: Use authenticated NTP with trusted internal time sources, and apply ACLs so only those sources can reach UDP/123. Prefer an internal hierarchy (stratum design) and avoid exposing NTP to untrusted networks. Ensure consistent key IDs and key strings across peers, and monitor for time jumps.

Common misconceptions: IPsec can secure traffic, but it is not a standard, commonly tested "NTP security mechanism" in Cisco enterprise core contexts and is rarely deployed just to protect NTP. Prefix lists are for routing policy, not for filtering NTP packets. TACACS+ is for device administration (AAA), not for authenticating NTP time updates.
Exam tips: For ENCOR, think in layers: (1) authenticate the NTP source (NTP authentication) and (2) limit who can exchange NTP with you (ACL/NTP access control). If an option mentions TACACS/RADIUS or prefix lists, it’s likely a distractor for NTP security questions.
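Both layers can be sketched in IOS as follows; the server address, key ID, and key string are placeholders, not values from the question:

```
! Layer 1 - NTP authentication: accept time only from sources signing with key 1
ntp authentication-key 1 md5 MySecretKey
ntp authenticate
ntp trusted-key 1
ntp server 192.0.2.10 key 1

! Layer 2 - ACL-based restriction: only the trusted server may exchange NTP
access-list 10 permit host 192.0.2.10
ntp access-group peer 10
```

Check the result with show ntp associations and show ntp status; an authenticated, synchronized association confirms both the key exchange and the access policy are working.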




