
Which two pieces of information are necessary to compute SNR? (Choose two.)
Transmit power is the RF output level at the transmitter (often in dBm or mW). While increasing transmit power can improve the received signal (RSSI) under some conditions, it is not directly used to compute SNR. SNR is calculated at the receiver using the received signal level and the measured noise floor, regardless of what transmit power was configured.
Noise floor is the baseline RF energy level present at the receiver when no desired signal is being decoded, typically expressed in dBm (for example, -95 dBm). It is one of the two required values for SNR calculation because SNR measures how far above this noise baseline the received signal sits. Higher noise floor reduces SNR even if RSSI remains unchanged.
EIRP (Effective Isotropic Radiated Power) represents the transmitter’s effective radiated power after accounting for antenna gain and losses, and is important for coverage planning and regulatory compliance. However, EIRP is not required to compute SNR. SNR is derived from what the receiver measures (RSSI and noise floor), not from transmitter-side calculated values.
RSSI is the received signal strength at the receiver, commonly displayed in dBm in enterprise Wi-Fi tools. It is required to compute SNR because SNR compares the received signal level to the noise floor. For example, if RSSI is -60 dBm and noise floor is -90 dBm, SNR is 30 dB, indicating a strong, clean signal.
Antenna gain describes how an antenna focuses energy in certain directions relative to an isotropic radiator, affecting coverage and potentially the received signal level. While antenna gain influences link budget and can indirectly affect RSSI, it is not an input needed to compute SNR once RSSI and noise floor are known. SNR is strictly based on receiver measurements.
Core Concept: Signal-to-Noise Ratio (SNR) is a key RF performance metric used heavily in Wi-Fi design and troubleshooting. It expresses how much stronger the received signal is than the background noise. In 802.11 networks, SNR strongly influences modulation and coding scheme (MCS) selection, data rates, retransmissions, and overall reliability.

Why the Answer is Correct: SNR is computed as SNR (dB) = Received Signal Level (dBm) − Noise Floor (dBm). Therefore, you need (1) the received signal level and (2) the noise floor. In Cisco/Wi-Fi terminology, the received signal level is commonly represented as RSSI (Received Signal Strength Indicator) and is typically shown in dBm on enterprise WLAN platforms. The noise floor is the measured ambient RF noise at the receiver, also in dBm. Subtracting the noise floor from RSSI yields SNR in dB.

Key Features / Best Practices:
- Typical design targets: ~25 dB SNR for high-throughput/voice, ~20 dB for general data; lower SNR may still work but with reduced rates and higher retries.
- The noise floor rises with interference (non-802.11 sources, co-channel contention, poor cabling/shielding, etc.), reducing SNR even if RSSI stays constant.
- RSSI alone is not sufficient: a "strong" RSSI can still perform poorly if the noise floor is high.

Common Misconceptions:
- Transmit power, antenna gain, and EIRP affect what RSSI might become at the receiver, but they are not required inputs to calculate SNR once RSSI and noise floor are known.
- EIRP is often confused with "signal strength at the client," but it is a transmitter-side regulatory/engineering value, not the received level.

Exam Tips: For ENCOR, memorize the simple relationship: SNR = RSSI − Noise Floor. If the question asks what you need to compute SNR, look for "received signal" (RSSI) and "noise floor." If it asks how to improve SNR, then transmit power, antenna placement, and channel planning may become relevant, but they are not part of the calculation itself.
Which behavior can be expected when the HSRP version is changed from 1 to 2?
Incorrect. Upgrading the standby router first does not make the transition nondisruptive because HSRP version 1 and version 2 are not interoperable as a single stable group during the change. The routers will not maintain seamless adjacency across the version mismatch, so some reinitialization behavior is expected. Therefore, saying that no changes occur is technically wrong. The order of upgrade may help operational planning, but it does not eliminate the protocol impact.
Incorrect. HSRP version 1 and version 2 do not use the same virtual MAC OUI and format. HSRP v1 uses the 0000.0c07.acxx pattern, while HSRP v2 uses 0000.0c9f.fxxx. Since the virtual MAC changes, hosts and switches must relearn the gateway MAC information. That means a version change does not occur transparently.
Correct. HSRP version 2 uses a different virtual MAC address format than version 1, so changing the version changes the virtual gateway MAC associated with each HSRP group. Because the active gateway identity on the segment changes, the HSRP group reinitializes and devices must relearn the new MAC through ARP and switching table updates. This is the expected operational behavior when migrating an HSRP group from v1 to v2. Cisco exam questions commonly test this distinction by tying the disruption directly to the virtual MAC change.
Incorrect as the best answer. It is true that HSRP v1 and v2 use different multicast addresses for hello packets, and mixed versions will not exchange hellos normally. However, the question asks what behavior can be expected when the version is changed, and the standard expected behavior is that each group reinitializes because the virtual MAC address changes. The multicast difference explains lack of interoperability during mismatch, but it is not the best match to the behavior being tested here. On Cisco exams, the virtual MAC change is the more direct and canonical reason tied to HSRP group reinitialization.
Core concept: This question tests the operational differences between HSRP version 1 and version 2, especially what happens to an existing HSRP group when the version is changed. HSRP v2 introduces a different virtual MAC address format from HSRP v1, so the group identity on the LAN changes when the version changes.

Why correct: Because hosts and switches see a different virtual gateway MAC, the HSRP group must reinitialize and reestablish forwarding using the new virtual MAC.

Key features: HSRP v1 uses virtual MAC addresses in the 0000.0c07.acxx range, while HSRP v2 uses 0000.0c9f.fxxx; v2 also supports a larger group range and different packet handling.

Common misconceptions: Many candidates focus on the multicast hello address difference, but the expected behavior asked here is the group reinitialization caused by the virtual MAC change.

Exam tips: Remember that HSRP version changes are disruptive, and if asked what specifically changes operationally, the virtual MAC change is the classic trigger for reinitialization.
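As a minimal sketch (the interface, addressing, and group number below are hypothetical), the version change is a single interface-level command, and applying it is what causes the group to re-form with the new v2 virtual MAC:

```
interface Vlan10
 ip address 10.10.10.2 255.255.255.0
 standby version 2          ! group reinitializes; virtual MAC changes to 0000.0c9f.fxxx
 standby 10 ip 10.10.10.1   ! the virtual gateway IP itself is unchanged
 standby 10 priority 110
 standby 10 preempt
```

Both routers in the group should be moved to version 2, since v1 and v2 do not interoperate; note also that v1 supports group numbers 0-255 while v2 extends the range to 0-4095.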
Based on this interface configuration, what is the expected state of OSPF adjacency?
R1:
interface GigabitEthernet0/1
ip address 192.0.2.1 255.255.255.252
ip ospf 1 area 0
ip ospf hello-interval 2
ip ospf cost 1
end
R2:
interface GigabitEthernet0/1
ip address 192.0.2.2 255.255.255.252
ip ospf 1 area 0
ip ospf cost 500
end
2WAY/DROTHER is a broadcast multiaccess behavior where routers may stop at 2-Way with non-DR/BDR neighbors. However, this requires that neighbors at least successfully exchange matching hellos and reach 2-Way. Here, the hello/dead timers are mismatched (2s vs default 10s), so the routers will not form a stable neighbor relationship to reach 2-Way.
Correct. OSPF neighbors must have matching hello and dead intervals. R1 sets hello to 2 seconds; R2 uses the default (10 seconds on Ethernet). This mismatch prevents the routers from accepting each other’s hellos as valid, so the adjacency will not be established (it will not reach FULL). The cost mismatch is irrelevant to adjacency formation.
FULL adjacency would occur on a point-to-point link or between DR/BDR and others on a broadcast segment, but only after successful neighbor parameter negotiation. Because the hello/dead timers do not match, the routers will not become neighbors and cannot reach FULL. Different interface costs do not prevent FULL; timer mismatch does.
DR/BDR roles apply to broadcast and NBMA network types, and the state would be FULL/DR and FULL/BDR (not FULL/BDR on both). More importantly, DR/BDR election and FULL state require a working neighbor relationship first. With mismatched hello/dead timers, the routers will not establish adjacency, so DR/BDR states are not reached.
Core concept: This question tests OSPF neighbor formation requirements versus parameters that only influence path selection. For an OSPF adjacency to form, key interface parameters must match between neighbors: same area, same network type, same authentication, same hello/dead timers, same stub flags, and compatible MTU behavior. Separately, OSPF cost affects route calculation (SPF) and does not need to match.

Why the answer is correct: R1 explicitly sets ip ospf hello-interval 2 on the interface. R2 does not, so it uses the default hello interval for the interface network type. On an Ethernet/broadcast interface, the default OSPF hello interval is 10 seconds (dead interval 40 seconds). R1 will therefore send hellos every 2 seconds and (unless also changed) will derive its dead interval from the hello interval (4x hello, i.e., 8 seconds). R2 will send hellos every 10 seconds and expect a 40-second dead interval. Because OSPF requires the hello and dead intervals to match exactly, each router discards the other's hellos, so the routers never become neighbors and no adjacency is established.

Key features / best practices:
- Matching requirements: hello/dead timers, area, authentication, and network type must match.
- Non-matching but allowed: interface cost can differ (it is local and used for outbound metric calculation).
- If you tune hello intervals for fast convergence, tune both sides consistently (and consider BFD as an alternative).

Common misconceptions: Many assume a large cost difference (1 vs 500) prevents adjacency. It does not; it only changes which path is preferred. Another trap is thinking only the hello interval matters; in practice, hello and dead must both match, and changing the hello interval implicitly changes the dead interval unless it is explicitly set.

Exam tips: When you see OSPF adjacency questions, immediately check: area, timers, authentication, network type, and MTU. Treat cost differences as routing-policy/metric issues, not neighbor-formation blockers. If only one side changes hello/dead, expect "not established."
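A minimal fix, sketched using the interfaces from the exhibit, is to configure the same hello interval on R2 so both routers derive matching hello/dead timers:

```
R2(config)# interface GigabitEthernet0/1
R2(config-if)# ip ospf hello-interval 2
! The dead interval follows automatically (4 x hello = 8 s) unless set explicitly.
! Verify the active timers on each side with: show ip ospf interface GigabitEthernet0/1
```

The cost values (1 vs 500) can be left as they are; they influence SPF path selection only, not neighbor formation.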
Refer to the exhibit. The WLC WLAN configuration (Security > Layer 2 tab) shows:

- Fast Transition: ☐ (unchecked)
- Protected Management Frame (PMF): Disabled
- WPA+WPA2 Parameters: WPA Policy ☐; WPA2 Policy-AES ☑
- Authentication Key Management: 802.1X ☐; CCKM ☐; PSK ☑; FT 802.1X ☐; FT PSK ☐
- PSK Format: ASCII

Based on the configuration in this WLAN security setting, which method can a client use to authenticate to the network?
Correct. PSK is enabled and the PSK format is ASCII, which means clients authenticate using a shared passphrase (a text string). In WPA2-Personal, the client proves knowledge of the PSK during the 4-way handshake; there is no per-user identity validation by RADIUS. This matches the exhibit exactly: WPA2-AES with PSK enabled.
Incorrect. Username and password authentication implies 802.1X/EAP (WPA2-Enterprise) with an AAA/RADIUS backend (for example PEAP/MSCHAPv2). In the exhibit, 802.1X is not enabled under Authentication Key Management, so the WLAN is not configured for enterprise authentication. A client will not be prompted for per-user credentials in this configuration.
Incorrect. A RADIUS token (such as OTP) is also an 802.1X/EAP enterprise method that requires AAA servers and 802.1X enabled on the WLAN. Since only PSK is enabled, the WLC will not perform EAP exchanges with a RADIUS server for client authentication. Tokens are not used with WPA2-Personal PSK networks.
Incorrect. Certificate-based authentication (for example EAP-TLS) requires WPA2-Enterprise with 802.1X enabled and a RADIUS server to validate the client certificate chain. The exhibit shows 802.1X unchecked and PSK checked, so there is no certificate-based EAP method in use. Certificates are unrelated to PSK authentication.
Core Concept: This question tests WLAN security authentication methods on a Cisco wireless LAN controller (WLC) and how the selected Authentication Key Management (AKM) options map to what a client must present to join the SSID. In Wi-Fi security, the AKM choice (PSK vs 802.1X/EAP) determines whether authentication is based on a shared secret (pre-shared key) or on per-user/per-device credentials validated by an AAA server.

Why the Answer is Correct: In the exhibit, WPA2 Policy-AES is enabled (this selects the encryption suite; the AKM setting determines Personal vs Enterprise), and under Authentication Key Management, only PSK is enabled. 802.1X is not enabled, CCKM is not enabled, and Fast Transition (FT 802.1X / FT PSK) is not enabled. With PSK enabled, the WLAN operates as WPA2-Personal (also called WPA2-PSK): clients authenticate by proving knowledge of the pre-shared key during the 4-way handshake. The PSK format shown is ASCII, meaning the key is entered as a human-readable text passphrase rather than a 64-hex-character key. Therefore, the client authenticates using a text string (the passphrase).

Key Features / Configuration Notes:
- WPA2 Policy-AES indicates WPA2 with AES-CCMP encryption.
- PSK enabled means no external AAA (RADIUS) server is required for client authentication.
- ASCII PSK format means the credential is a passphrase (text string), typically 8-63 characters.
- PMF is disabled; this affects management frame protection, not the primary authentication method.

Common Misconceptions:
- Seeing "WPA2" can mislead candidates into thinking 802.1X (Enterprise) is used. WPA2 can be Personal (PSK) or Enterprise (802.1X).
- The existence of an "AAA Servers" tab does not imply AAA is active; 802.1X must be enabled to use RADIUS-based authentication.

Exam Tips: Always key off the AKM section: PSK = WPA/WPA2-Personal (passphrase); 802.1X = WPA/WPA2-Enterprise (EAP with username/password, certificates, or tokens via RADIUS). FT options only matter if fast roaming is enabled and the corresponding FT AKM is checked.
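The exhibit shows a GUI configuration; as an illustrative sketch only (profile name, WLAN ID, and passphrase below are hypothetical), a roughly equivalent WPA2-Personal WLAN on a Catalyst 9800 CLI might look like this:

```
wlan PSK-WLAN 1 PSK-WLAN
 no security wpa akm dot1x                       ! disable 802.1X (Enterprise) key management
 security wpa akm psk                            ! enable PSK key management (WPA2-Personal)
 security wpa psk set-key ascii 0 MySecretPass   ! ASCII passphrase (8-63 characters)
 no shutdown
```

The key point mirrored from the exhibit: dot1x AKM is off and psk AKM is on, so no RADIUS server is consulted for client authentication.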
Which method creates an EEM applet policy that is registered with EEM and runs on demand or manually?
Correct. “event none” defines an EEM applet with no automatic trigger. The applet is still registered with EEM and can be invoked manually/on demand (for example, with an EEM run command). This is the standard IOS/IOS XE method to create an on-demand applet while keeping the configuration syntactically complete (event + actions).
Incorrect. It omits the required event detector line. EEM applets must include an “event …” statement to define the trigger (even if it is “event none”). Without an event line, the applet configuration is incomplete and will not be properly registered/usable as intended.
Incorrect. “event register” is not a valid EEM event detector keyword for creating an on-demand applet. EEM uses specific event detectors (syslog, timer, snmp, track, etc.), and for manual/on-demand behavior the correct detector is “event none,” not “event register.” This option is a distractor using plausible wording.
Incorrect. “event manual” is not the typical/valid EEM event detector syntax on IOS/IOS XE for creating a manually run applet. The accepted method is “event none,” which prevents automatic triggering while still registering the applet. This option is designed to mislead by matching the question’s wording (“manual”).
Core Concept: This question tests Cisco Embedded Event Manager (EEM) applets and, specifically, how to create an applet that is registered with EEM but does not run automatically from a syslog/timer/interface trigger. Instead, it must be invoked on demand (manually). In EEM, the "event" statement defines the trigger; if you want manual execution, you use an event type that never triggers by itself.

Why the Answer is Correct: Option A uses "event manager applet ondemand" with "event none". The "event none" keyword creates an applet that is registered/loaded into EEM but has no automatic event detector. That is exactly how you build an on-demand EEM applet: it exists and can be run manually (for example, via the EEM CLI run command), but it will not fire due to any system event.

Key Features / How it Works:
- EEM applets require an event detector line (event <type>) to be considered complete and registered.
- "event none" is the canonical way to define an applet that has no trigger.
- Such applets are typically executed manually using an EEM invocation command (platform-dependent, commonly "event manager run <applet-name>").
- Actions (like "action 1.0 syslog ...") define what happens when the applet is executed.

Common Misconceptions: Many assume "event manual" exists as a trigger type; on IOS/IOS XE, the standard approach is "event none" for manual/on-demand execution. Another common mistake is omitting the event line entirely; without an event statement, the applet definition is incomplete and won't be properly registered.

Exam Tips: For ENCOR, remember: EEM applet = name + event detector + actions. If the question says "runs on demand/manually," look for "event none." If you see syslog/timer/interface/track/snmp triggers, those are automatic. Also watch for syntactically invalid keywords (like "event register" or "event manual") that may appear as distractors.
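The pattern described above can be sketched as a complete applet (the applet name and syslog message are illustrative):

```
event manager applet ONDEMAND
 event none                           ! no automatic trigger; registered but idle
 action 1.0 syslog msg "ONDEMAND applet was run manually"
```

It is then invoked from privileged EXEC with "event manager run ONDEMAND", which executes the actions once; without that command, the applet never fires.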
What are two device roles in Cisco SD-Access fabric? (Choose two.)
Edge node is a valid Cisco SD-Access fabric role. It is the fabric switch where endpoints connect (wired or wireless via APs). The edge node provides the first-hop gateway (often anycast), enforces segmentation and policy (SGT/TrustSec), and performs VXLAN encapsulation/decapsulation to carry endpoint traffic across the fabric.
vBond controller is not part of Cisco SD-Access. vBond is a Cisco SD-WAN (Viptela) component used for orchestration and secure control-plane onboarding of WAN edge devices. Because the question is specifically about SD-Access fabric roles, SD-WAN controllers are a common distractor and should be eliminated.
Access switch is a traditional campus design term (access/distribution/core) rather than an SD-Access fabric role. In many deployments, an SD-Access edge node may physically be an access-layer switch, which makes this option tempting. However, the exam expects SD-Access role terminology (edge, border, control-plane, intermediate), not classic hierarchical labels.
Core switch is also a traditional campus hierarchical role and not an SD-Access fabric role. In SD-Access, the equivalent concept for transit within the fabric is an intermediate node, which provides IP underlay forwarding for VXLAN traffic but does not typically host endpoints or perform fabric border functions.
Border node is a valid Cisco SD-Access fabric role. It connects the SD-Access fabric to external networks (campus non-fabric, data center, Internet, WAN). It performs routing between fabric virtual networks (VNs) and outside domains/VRFs, and is commonly used for shared services and integration with fusion devices for segmentation handoff.
Core Concept: Cisco SD-Access (Software-Defined Access) uses a fabric architecture based on VXLAN data-plane encapsulation and LISP control-plane mapping. Within the fabric, devices take on specific fabric roles that define how endpoints attach, how traffic is forwarded inside the fabric, and how the fabric connects to external networks.

Why the Answer is Correct: Two primary device roles in an SD-Access fabric are the edge node and the border node. The edge node is where endpoints (users, IoT, APs) connect to the fabric; it performs endpoint onboarding, applies policy (SGT-based segmentation via Cisco TrustSec), and encapsulates/decapsulates traffic into VXLAN for transport across the fabric. The border node is the fabric's gateway to external networks (campus non-fabric, WAN, Internet, data center). It handles routing between the fabric virtual networks (VNs) and outside routing domains, and is the typical insertion point for shared services and external connectivity.

Key Features / Best Practices:
- Edge nodes provide an anycast default gateway for fabric subnets (often via SVIs) and register endpoint-to-location mappings with the fabric control plane node.
- Border nodes can be internal or external borders, and commonly integrate with fusion routers for segmentation handoff between fabric VNs and external VRFs.
- Policy is centrally defined in Cisco DNA Center and enforced at the fabric edge (and sometimes border) using SGTs and scalable group access control.

Common Misconceptions: Options like "access switch" and "core switch" are traditional campus roles, not SD-Access fabric roles. While an edge node is often physically an access-layer switch, the exam is testing SD-Access-specific terminology. "vBond controller" belongs to Cisco SD-WAN (Viptela), not SD-Access.

Exam Tips: For ENCOR, memorize the SD-Access fabric roles: edge node, border node, control plane node, and intermediate node. If you see SD-WAN controllers (vBond/vSmart/vManage), eliminate them for SD-Access questions. Also distinguish physical topology terms (access/core) from fabric roles (edge/border/intermediate/control-plane).
How does Cisco TrustSec enable more flexible access controls for dynamic networking environments and data centers?
Flexible NetFlow provides traffic visibility and telemetry (who is talking to whom, how much, and what protocols). While it can support security monitoring and troubleshooting, it does not enforce access control policy. TrustSec is about identity-based segmentation and authorization, not flow export/analytics, so NetFlow is not the mechanism that enables flexible access controls.
Assigning a VLAN to an endpoint is a traditional segmentation approach (often via 802.1X dynamic VLAN assignment). It can separate traffic, but policies remain tied to network location and VLAN/subnet design. In dynamic data centers and highly mobile environments, VLAN-based controls become complex and do not inherently provide group-to-group identity-based policy that follows the endpoint like TrustSec does.
Classifying traffic based on advanced application recognition aligns with technologies like NBAR2 and DPI, commonly used for QoS, application visibility, and sometimes security controls. TrustSec’s primary value is not application-based classification; it is identity/group-based segmentation using SGTs and SGACLs. Application recognition may complement security strategy but is not what TrustSec uses to enable flexible access controls.
TrustSec classifies and enforces policy based on contextual identity by assigning Security Group Tags (SGTs) to users/devices and applying Security Group ACLs (SGACLs) between groups. This decouples policy from IP addressing and VLANs, making it ideal for dynamic environments where endpoints move or change IPs. The policy follows the identity, enabling scalable, consistent access control across campus and data center networks.
Core Concept: Cisco TrustSec is an identity-based access control architecture. Instead of tying policy to changing network attributes (IP addresses, subnets, VLANs), it uses contextual identity and assigns a Security Group Tag (SGT) to users/devices. Enforcement devices then apply Security Group Access Control Lists (SGACLs) based on source/destination SGTs.

Why the Answer is Correct: Option D is correct because TrustSec enables flexible access controls by classifying and enforcing policy based on the contextual identity of the endpoint (who/what it is) rather than its IP address (where it is). In dynamic environments and data centers, where workloads move, IPs change (DHCP), and virtualization/containers cause frequent topology changes, IP-based ACLs become brittle. With TrustSec, the policy follows the identity via SGT propagation, so access decisions remain consistent even as endpoints move across switches, wireless, or between data center segments.

Key Features / How it Works:
- SGT assignment: typically from Cisco ISE (via RADIUS) based on user, device posture, group membership, or other context.
- SGT propagation: carried inline using Cisco proprietary tagging (e.g., over TrustSec-capable links) or via SXP (Security Group Tag Exchange Protocol) to share IP-to-SGT bindings when inline tagging isn't available.
- Policy model: SGACLs define "who can talk to whom" using group-to-group rules (e.g., Finance-to-DB allowed, Guest-to-Internal denied), reducing rule sprawl compared to per-subnet ACLs.
- Scalable group enforcement: centralized policy definition with distributed enforcement at switches/routers/firewalls.

Common Misconceptions:
- VLAN assignment (option B) can provide segmentation, but it is location-based and operationally heavy in dynamic environments; it doesn't inherently provide identity-based policy that follows endpoints.
- Application recognition (option C) relates more to NBAR2/DPI and QoS/security classification, not TrustSec's core identity/group-based enforcement.
- NetFlow (option A) is for telemetry/visibility, not access control.

Exam Tips: For ENCOR, remember: TrustSec = identity-based segmentation using SGT/SGACL. If the question contrasts IP/VLAN-based controls with "context/identity," the TrustSec answer is almost always SGT-based classification and enforcement (option D).
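A minimal device-side sketch of the propagation and enforcement pieces (the peer IP and password are hypothetical; SGTs and the SGACL matrix are normally defined centrally in ISE and pushed to devices):

```
cts role-based enforcement                 ! enforce SGACLs on this switch
cts sxp enable                             ! turn on SXP on this device
cts sxp default password MySxpSecret       ! hypothetical shared SXP password
cts sxp connection peer 10.0.0.1 password default mode local listener
! Listener role: receive IP-to-SGT bindings from the SXP speaker at 10.0.0.1,
! so traffic can be classified by SGT even without inline tagging on the path.
```

This illustrates the key architectural point: the switch learns identity-to-address bindings and enforces group-based policy, rather than maintaining per-subnet ACLs.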
Refer to the exhibit.
R1#debug ip ospf hello
R1#debug condition interface Fa0/1
Condition 1 Set
Which statement about the OSPF debug output is true?
Correct. debug ip ospf hello limits output to OSPF Hello packets only, and debug condition interface Fa0/1 further filters the debug so that only Hello messages associated with Fa0/1 are displayed. The output therefore includes Hellos that R1 sends and receives on Fa0/1, which is exactly what the combined commands are intended to show.
Incorrect. The command used is debug ip ospf hello, not a debug that covers all OSPF message types. Additionally, debug condition interface Fa0/1 restricts output to a single interface, not all interfaces. This option is wrong on both dimensions: packet-type scope and interface scope.
Incorrect. While the interface condition does restrict output to Fa0/1, the debug command itself is only for Hello packets. “All OSPF messages” would require a broader debug such as debug ip ospf packet (and even then, you’d want to confirm the exact IOS behavior and filtering). This option overstates what will be displayed.
Incorrect. debug ip ospf hello does not include LSACK messages; LSACKs are a different OSPF packet type related to LSA flooding acknowledgment. To see LSACKs you would need a more general OSPF packet debug. The interface condition also does not change which OSPF packet types are being debugged—only where they are seen.
Core concept: This question tests Cisco IOS debugging scope control using conditional debugging. Specifically, it combines an OSPF-specific debug command (debug ip ospf hello) with a debug condition tied to an interface (debug condition interface Fa0/1). In ENCOR, this falls under Network Assurance because it focuses on operational troubleshooting and controlling debug output.

Why the answer is correct: The command debug ip ospf hello enables debug output only for OSPF Hello packets (not all OSPF packet types). By default, that debug would show Hello activity across all interfaces. However, debug condition interface Fa0/1 sets a conditional filter so that subsequent debug output is displayed only when the packets/events match the condition, in this case traffic associated with interface FastEthernet0/1. Therefore, the resulting output will display only the OSPF Hello messages that R1 sends or receives on Fa0/1.

Key features / best practices:
- debug ip ospf hello is narrowly scoped to Hello packets (neighbor discovery/maintenance).
- debug condition interface <int> is a global conditional debugging feature that filters debug output to a specific interface, reducing noise and CPU impact.
- Best practice: always use conditional debugging (and terminal monitor, used carefully) in production to avoid excessive CPU utilization and log flooding, and remember to disable debugging afterward (undebug all).

Common misconceptions:
- Confusing the "hello" debug with "all OSPF messages." Only Hello packets are shown; DBD, LSR, LSU, and LSAck packets are not included.
- Assuming the interface condition is automatically applied to all previously enabled debugs in every context. On IOS, the condition filters debug output, but the key exam takeaway is that it limits the displayed output to the specified interface.

Exam tips:
- Memorize what each OSPF debug covers: "hello" is only Hellos; "adj" focuses on adjacency events; "packet" is broader.
- When you see debug condition interface, think "filtered to that interface," not "all interfaces."
- Read options carefully for scope (hello vs all OSPF packets) and direction (sent/received).
Which PAgP mode combination prevents an EtherChannel from forming?
auto/desirable will form an EtherChannel with PAgP because desirable actively initiates negotiation and auto passively responds. As long as the physical and Layer 2 settings (trunking, VLANs, speed/duplex, etc.) match, the desirable side will send PAgP packets and the auto side will reply, allowing the bundle to come up.
desirable/desirable will form an EtherChannel because both sides actively send PAgP negotiation packets. This is a robust dynamic configuration since either side can initiate and maintain negotiation. It is commonly used when you want dynamic formation but still want both ends to be proactive in establishing the Port-Channel.
desirable/auto will form an EtherChannel for the same reason as auto/desirable: the desirable side initiates PAgP negotiation and the auto side responds. The order doesn’t matter; what matters is that at least one side is desirable. With matching interface parameters, the Port-Channel should successfully establish.
auto/auto prevents an EtherChannel from forming with PAgP because both sides are passive and neither initiates negotiation. Since no PAgP negotiation packets are sent to start the process, the links remain as individual interfaces rather than bundling into a Port-Channel, even if all other settings match.
Core Concept: This question tests EtherChannel negotiation using PAgP (Port Aggregation Protocol). PAgP is Cisco-proprietary and dynamically negotiates bundling of physical links into a single logical Port-Channel. The key is understanding which PAgP modes actively initiate negotiation versus which only respond.
Why the Answer is Correct: PAgP has two negotiation modes: desirable and auto (plus "on," which forces a channel without negotiation). Desirable actively sends PAgP negotiation packets to form an EtherChannel. Auto is passive: it listens and responds to PAgP packets but does not initiate them. If both ends are configured as auto/auto, neither side initiates PAgP negotiation, so no PAgP packets are sent to start the process. As a result, the EtherChannel does not form.
Key Features / Best Practices:
- PAgP desirable: actively negotiates; will form with desirable or auto on the neighbor.
- PAgP auto: passive; forms only if the neighbor is desirable.
- Consistency requirements still apply: same speed/duplex (where relevant), trunk/access mode, allowed VLANs, native VLAN, and other interface settings must match across member links, or the bundle will be suspended/mis-bundled.
- In modern enterprise designs, LACP (802.3ad/802.1AX) is often preferred for multivendor interoperability, but PAgP is still tested and encountered.
Common Misconceptions: Many assume "auto" means it will automatically form a channel with another "auto." In PAgP, auto does not initiate; it only responds. Therefore, auto/auto is the one combination that prevents formation (assuming all other parameters match and PAgP is the chosen protocol).
Exam Tips: Memorize the negotiation matrix: PAgP desirable forms with desirable or auto; PAgP auto forms only with desirable; auto/auto fails. Also remember that "on" bypasses negotiation and can create misconfigurations if the far end isn't forced similarly. For ENCOR, be ready to distinguish PAgP vs LACP behaviors and identify which combinations fail to establish a Port-Channel.
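The negotiation matrix above can be captured in a small Python sketch (a hypothetical study helper, not Cisco tooling):

```python
# Models the PAgP negotiation matrix: a Port-Channel forms only if at
# least one side actively initiates (desirable). "on" is included for
# completeness: it bypasses negotiation and bundles only against "on".

def pagp_channel_forms(local: str, remote: str) -> bool:
    """Return True if a Port-Channel forms for the given PAgP mode pair."""
    if local == "on" or remote == "on":
        # "on" sends no PAgP packets; it only works against another "on".
        return local == "on" and remote == "on"
    # With PAgP negotiation, at least one side must be desirable.
    return "desirable" in (local, remote)

print(pagp_channel_forms("desirable", "auto"))  # True
print(pagp_channel_forms("auto", "auto"))       # False
```

Running through all the pairs reproduces the exam-tip matrix: desirable/desirable and desirable/auto form, auto/auto fails, and "on" bundles only with "on".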
Which two actions provide controlled Layer 2 network connectivity between virtual machines running on the same hypervisor? (Choose two.)
A hypervisor-provided virtual switch is the standard mechanism for Layer 2 communication between VMs on the same host. It switches Ethernet frames locally inside the hypervisor and allows administrators to apply controls such as VLAN membership, port groups, security policies, and traffic monitoring. This is the most direct and canonical answer for controlled intra-host VM connectivity. In Cisco and general virtualization contexts, the vSwitch is the foundational component for VM networking.
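The "controlled" part of vSwitch connectivity can be sketched with a toy Python model (a simplified illustration, not a real hypervisor API): frames forward between local VMs only when their ports sit in the same VLAN.

```python
# Minimal model of VLAN-based control on a hypervisor virtual switch:
# each VM port is assigned a VLAN (like a port-group assignment), and
# Layer 2 forwarding is permitted only within a single VLAN.

class VSwitch:
    def __init__(self):
        self.ports = {}  # vm name -> VLAN ID

    def connect(self, vm: str, vlan: int) -> None:
        """Attach a VM's virtual NIC to a VLAN on this vSwitch."""
        self.ports[vm] = vlan

    def can_forward(self, src: str, dst: str) -> bool:
        """Frames forward locally only between ports in the same VLAN."""
        return (
            src in self.ports
            and dst in self.ports
            and self.ports[src] == self.ports[dst]
        )

vsw = VSwitch()
vsw.connect("web-vm", 10)
vsw.connect("app-vm", 10)
vsw.connect("db-vm", 20)
print(vsw.can_forward("web-vm", "app-vm"))  # True: both in VLAN 10
print(vsw.can_forward("web-vm", "db-vm"))   # False: VLAN 10 vs VLAN 20
```

The VM names and VLAN IDs are illustrative; the point is that the vSwitch itself enforces the segmentation policy, which is what makes the connectivity "controlled" rather than a flat bridge.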
A virtual switch running as a separate virtual machine is not the normal or expected answer in Cisco ENCOR-style virtualization questions. While specialized virtual appliances can bridge traffic in some designs, they are not the standard mechanism used by a hypervisor to provide basic Layer 2 connectivity between local VMs. The question asks for actions that provide controlled Layer 2 connectivity in the virtualization platform, which points to the native vSwitch and physical switch uplinks. Treating a separate VM as the switch is too niche and not the intended architectural model here.
VXLAN does provide Layer 2 extension over a Layer 3 underlay, but it is not implemented by installing tunneling drivers inside each VM in standard enterprise virtualization designs. VXLAN encapsulation is typically handled by the hypervisor, virtual switch, or hardware VTEP, not by the guest operating systems. Also, the question focuses on VMs on the same hypervisor, where local switching or VLAN trunking is the simpler and expected solution. Therefore this option is technically and architecturally incorrect as written.
A single routed link to an external router-on-a-stick is used for Layer 3 inter-VLAN routing, not for providing Layer 2 switching between VMs. Router-on-a-stick requires traffic to be sent to a router subinterface for routing decisions, which changes the problem from Layer 2 adjacency to Layer 3 forwarding. For VMs on the same hypervisor, Layer 2 connectivity is normally handled by the virtual switch without involving an external router. This option does not match the requirement for controlled Layer 2 connectivity.
A single trunk link to an external Layer 2 switch is a valid way to provide controlled Layer 2 connectivity by extending VLANs from the hypervisor to the physical network. The hypervisor can map VM interfaces or port groups to VLANs and send tagged traffic over the trunk, allowing those VMs to participate in broader Layer 2 domains. This design is common when VMs need Layer 2 connectivity not only locally but also to devices or VMs on other hosts. The external switch provides centralized VLAN-based control and integration with the campus or data center switching environment.
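On the physical switch side, the uplink described above is typically configured as an 802.1Q trunk. A minimal IOS sketch (port number and VLAN list are assumptions for illustration):

```
! Physical switch port facing the hypervisor's uplink NIC
interface GigabitEthernet1/0/10
 description Uplink to hypervisor vSwitch
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
```

The hypervisor tags VM traffic per port group, and the trunk carries those VLANs into the campus or data center switching environment.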
Core concept: This question tests how Layer 2 connectivity for virtual machines is provided in a virtualized environment. On a hypervisor, VMs commonly connect to a virtual switch for local switching, and that virtual switch can also uplink through a trunk to an external Layer 2 switch to extend VLAN-based connectivity beyond the host.
Why correct: A built-in virtual switch is the native mechanism for same-host VM Layer 2 communication, while a trunk to an external Layer 2 switch is a valid way to provide controlled VLAN-based Layer 2 connectivity to and from those VMs.
Key features: virtual switches support local frame forwarding, VLAN tagging, port groups, and policy enforcement; trunk links extend multiple VLANs to the physical switching infrastructure.
Common misconceptions: a separate VM acting as a switch is not the typical hypervisor networking model tested on ENCOR, and router-on-a-stick is Layer 3 routing rather than Layer 2 switching.
Exam tips: when asked about VM Layer 2 connectivity, first think of the hypervisor vSwitch and VLAN trunks to external switches; avoid choosing options that imply guest-installed tunneling or unnecessary Layer 3 devices.