
Simulate the real exam experience with 100 questions and a 120-minute time limit. Practice with AI-verified answers and detailed explanations.
Powered by AI
Each answer is verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.
What are two common sources of interference for Wi-Fi networks? (Choose two.)
LED lights can emit electromagnetic noise in some cases, especially if they use poor-quality drivers or switching power supplies, but they are not typically identified as one of the most common Wi-Fi interference sources in Cisco certification questions. They are more of an edge-case environmental issue than a standard exam answer. When compared with radar and rogue APs, LED lights are the less defensible choice. For exam purposes, they are not usually considered a primary common interferer.
Radar is a well-known source of interference for 5 GHz Wi-Fi networks operating on DFS channels. When an access point detects radar energy, it must stop using that channel and move clients elsewhere to comply with regulations. This causes service disruption, channel changes, and intermittent connectivity symptoms that are commonly tested in Cisco wireless exams. Because DFS and radar avoidance are fundamental parts of 5 GHz WLAN operation, radar is clearly a correct answer.
Fire alarm systems are not generally considered common sources of Wi-Fi interference. While any electronic system could theoretically emit noise if malfunctioning, fire alarms are not standard examples of devices that disrupt 2.4 GHz or 5 GHz WLANs. Cisco exams usually focus on more established interferers such as radar, microwave ovens, Bluetooth devices, or other access points. Therefore this option is not a correct choice.
A conventional oven is not a common Wi-Fi interference source. The classic appliance associated with Wi-Fi disruption is a microwave oven, which leaks energy around 2.45 GHz and can interfere with 2.4 GHz WLANs. A conventional oven does not operate using the same RF mechanism and is not typically cited in wireless troubleshooting references. This makes it an incorrect option in the context of common Wi-Fi interferers.
A rogue AP is a common source of Wi-Fi interference because it transmits in the same unlicensed spectrum as authorized WLAN infrastructure. Even if it is not malicious, it can create co-channel interference, adjacent-channel interference, and increased airtime contention for nearby clients and APs. In enterprise environments, rogue APs are frequently monitored not just for security reasons but also for their RF impact. Cisco exam objectives commonly treat unauthorized APs as both a security concern and an operational interference source.
Core Concept: This question tests recognition of common Wi-Fi interference sources, including both non-802.11 RF interferers and other Wi-Fi devices that disrupt normal channel use. In enterprise WLANs, interference is not limited to external electronics; unauthorized or unmanaged Wi-Fi devices can also degrade service by consuming airtime and creating co-channel or adjacent-channel interference.

Why the Answer is Correct: Radar (B) is a classic source of interference for 5 GHz Wi-Fi, especially on DFS channels. Rogue APs (E) are also a common real-world source of interference because they transmit in the same unlicensed spectrum as the production WLAN, causing contention and channel overlap. Both are widely recognized in Cisco wireless design and troubleshooting contexts.

Key Features / Best Practices:
- Use spectrum analysis and controller/AP tools to identify radar events, DFS channel changes, and non-Wi-Fi noise.
- Continuously scan for rogue APs and classify them as interfering, neighboring, or malicious devices.
- Design channel plans carefully to reduce co-channel and adjacent-channel interference.
- Use proper RF monitoring and security policies to detect unauthorized wireless devices early.

Common Misconceptions: A rogue AP is not only a security issue; it is also an RF issue because it actively transmits and competes for airtime. LED lights may generate some electromagnetic noise in certain environments, but they are not typically listed as a common Wi-Fi interference source in certification exam questions. Conventional ovens are also not the classic appliance interferer; microwave ovens are.

Exam Tips: For Cisco exams, remember that interference can come from both non-802.11 sources like radar and from other 802.11 devices such as neighboring or rogue APs. If you see radar in a Wi-Fi interference question, it is usually a strong choice because of DFS behavior in 5 GHz. Also distinguish between a conventional oven and a microwave oven, since only the latter is a classic 2.4 GHz interferer.
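Interference and DFS-driven channel changes can often be confirmed from the controller CLI. A minimal sketch using AireOS-style commands (exact command names and output vary by platform and software release; `AP-NAME` is a placeholder):

```
show ap auto-rf 802.11a AP-NAME
show advanced 802.11a channel
```

The first command displays per-AP RF statistics such as noise and interference per channel; the second shows RRM channel assignment activity, where DFS radar events typically surface as channel changes.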


Download Cloud Pass to access all practice questions for Cisco 350-401: Implementing and Operating Cisco Enterprise Network Core Technologies (ENCOR) for free.
Which action is the vSmart controller responsible for in an SD-WAN deployment?
Incorrect. Onboarding and initial orchestration of WAN edge devices into the fabric is primarily the role of vBond (orchestrator). vBond helps with initial authentication, NAT traversal, and directing edges to the appropriate controllers. vSmart becomes relevant after control connections are established, but it is not the component responsible for onboarding.
Incorrect. Telemetry collection/monitoring is mainly associated with vManage (management plane), which aggregates statistics, logs, and operational state for visualization and troubleshooting. While edges send operational data into the system, vSmart’s primary job is control-plane route/policy distribution, not acting as the telemetry collector.
Correct. vSmart is the SD-WAN control-plane controller. It distributes control-plane information (via OMP) including reachability (TLOCs/routes) and policy attributes that enable WAN edges to discover peers and establish the secure SD-WAN overlay (IPsec tunnels). This is the best match to distributing security/tunnel-establishment-related information between vEdge routers.
Incorrect. Managing, maintaining, and gathering configuration and status for nodes is the responsibility of vManage. vManage provides centralized configuration templates, device inventory, software upgrades, and operational dashboards. vSmart does maintain control-plane state, but it is not the management-plane system of record for configuration and status.
Core concept: Cisco SD-WAN (Viptela) uses a controller-plane architecture with three primary components: vManage (management plane), vSmart (control plane), and vBond (orchestrator/onboarding). vEdge/cEdge routers form the data plane. This question tests which controller function belongs to vSmart.

Why the answer is correct: vSmart is the centralized control-plane element responsible for distributing routing, policy, and security information that allows vEdge/cEdge devices to build the SD-WAN overlay. A key part of this is distributing the information needed for secure tunnel establishment (IPsec) between WAN edges. While the actual key exchange is done using certificates and DTLS/TLS control connections, vSmart advertises overlay reachability (TLOCs/OMP routes) and associated attributes so edges know which peers to form tunnels to and how to treat traffic. In exam terms, "security information for tunnel establishment between vEdge routers" maps to vSmart's role in orchestrating secure overlay connectivity via the control plane.

Key features/best practices: vSmart runs OMP (Overlay Management Protocol) to exchange routes, TLOCs, and policy attributes with WAN edges. It distributes centralized policies (control, data, app-aware routing) and can influence which tunnels are formed/used by manipulating TLOCs and route attributes. vSmart also participates in the SD-WAN security model by leveraging PKI (certificates) and secure control connections, enabling authenticated, encrypted control-plane communications that drive IPsec overlay formation.

Common misconceptions: Many confuse vManage and vSmart. vManage is the "single pane of glass" for configuration, monitoring, and lifecycle management, not the control-plane distribution point. vBond is the orchestrator used for initial bring-up and NAT traversal, often described as "onboarding," which can mislead test-takers into picking option A.

Exam tips: Memorize the three-controller split:
- vBond = orchestrator/onboarding (initial authentication, NAT traversal)
- vSmart = control plane (OMP routes, policy, overlay/tunnel-related distribution)
- vManage = management plane (configuration, monitoring/telemetry visualization, alarms)
When an option mentions routing/policy distribution or overlay/tunnel establishment logic, think vSmart.
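The control-plane roles can be verified from a WAN edge router. A minimal sketch of Viptela-OS (vEdge) commands; on IOS-XE (cEdge) platforms the equivalents are typically prefixed with `show sdwan`:

```
show control connections
show omp routes
show omp tlocs
```

The first command confirms DTLS/TLS control sessions to vBond, vSmart, and vManage; the OMP commands show the routes and TLOCs learned from vSmart that drive overlay tunnel formation.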
Which statement about route targets is true when using VRF-Lite?
Incorrect. Route targets do control route import and export in MPLS L3VPN environments where MP-BGP distributes VPN routes between PE devices. However, the question asks specifically about VRF-Lite, which does not rely on route targets for route exchange or VRF membership. In VRF-Lite, routes are typically exchanged with per-VRF static routing or standard routing protocols, so this statement is not true in that context.
Incorrect. Route targets are not transmitted as BGP standard communities; they are carried as BGP extended communities. In addition, VRF-Lite does not inherently depend on MP-BGP VPN route exchange, so this statement is doubly misleading for the scenario given. The wording is technically wrong even outside VRF-Lite because the community type is incorrect.
Correct. VRF-Lite allows different customers or departments to use overlapping IP address space because each VRF has its own independent routing and forwarding tables. The same prefix can exist in multiple VRFs without ambiguity because lookups are performed within the context of the specific VRF. This capability comes from VRF isolation itself, not from route targets or MPLS VPN signaling.
Incorrect. Route targets do not uniquely identify a customer routing table. In MPLS L3VPN, route distinguishers are used to make otherwise identical prefixes unique in VPNv4/VPNv6 advertisements, while route targets define import/export policy. Multiple VRFs can share the same route target, so an RT cannot serve as a unique identifier for a specific customer routing table.
Core concept: This question tests the distinction between VRF-Lite and MPLS L3VPN features. VRF-Lite provides multiple isolated routing tables on a device without MPLS VPN signaling or MP-BGP VPNv4/VPNv6 route exchange. Route targets are an MPLS L3VPN concept, while VRF separation itself is what allows overlapping customer address spaces.

Why correct: The true statement in the context of VRF-Lite is that customers can be assigned overlapping addresses because each VRF maintains an independent routing table and forwarding instance. This isolation allows identical prefixes to exist in different VRFs without conflict. Route targets are not what make this possible in VRF-Lite.

Key features:
- VRF-Lite uses separate VRFs locally on routers/switches without requiring MPLS.
- Overlapping IP space is supported because each VRF has its own RIB/FIB.
- Route targets are BGP extended communities used in MPLS L3VPN, not a core mechanism of VRF-Lite.
- Route distinguishers and route targets are commonly associated with MPLS VPNs, not basic VRF-Lite operation.

Common misconceptions:
- Confusing VRF-Lite with MPLS L3VPN and assuming route targets are required for VRF operation.
- Mixing up route targets and route distinguishers: RTs control VPN membership in MPLS L3VPN, while RDs make VPN routes unique in MP-BGP.
- Assuming route targets are transmitted as standard communities; they are extended communities.

Exam tips: When a question explicitly says VRF-Lite, focus on local VRF separation rather than MPLS VPN control-plane features. If the option mentions import/export policy with route targets, think MPLS L3VPN. If the option mentions overlapping addresses due to separate VRFs, that aligns with VRF-Lite behavior.
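The overlapping-address behavior can be illustrated with a minimal VRF-Lite sketch (VRF names, addresses, and interface numbers are hypothetical and platform-dependent). Both interfaces carry the same 10.1.1.0/24 prefix, but each lookup happens inside its own VRF:

```
vrf definition CUST-A
 address-family ipv4
 exit-address-family
!
vrf definition CUST-B
 address-family ipv4
 exit-address-family
!
interface GigabitEthernet0/0
 vrf forwarding CUST-A
 ip address 10.1.1.1 255.255.255.0
!
interface GigabitEthernet0/1
 vrf forwarding CUST-B
 ip address 10.1.1.1 255.255.255.0
```

No route targets, route distinguishers, or MP-BGP are needed here; per-VRF RIB/FIB isolation alone makes the duplicate prefix unambiguous.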
Which LISP device is responsible for publishing EID-to-RLOC mappings for a site?
ETR (Egress Tunnel Router) is the device that registers/publishes EID-to-RLOC mappings for the EID prefixes located behind it. It sends Map-Register messages to the Map-Server, advertising which EIDs it can deliver to and which RLOCs should be used. This is the authoritative source of the site’s EID reachability information in the LISP mapping system.
MR (Map-Resolver) is used by ITRs to resolve EID-to-RLOC mappings. It receives Map-Requests (from ITRs) and helps return mapping information, often by querying the mapping system. The MR facilitates lookup, not publication. It does not originate or register a site’s EID prefixes into the mapping database.
ITR (Ingress Tunnel Router) encapsulates traffic from non-LISP or LISP sites toward the destination RLOC after learning the mapping. It triggers mapping lookups by sending Map-Requests (typically to an MR). The ITR consumes mappings to forward traffic; it does not publish/register EID-to-RLOC mappings for a site.
MS (Map-Server) is the control-plane component that receives Map-Register messages from ETRs, authenticates them, and stores the EID-to-RLOC mappings in its database. While it is central to making mappings available, it is not the device that publishes them on behalf of a site; the ETR is the one that originates the registrations.
Core Concept: Locator/ID Separation Protocol (LISP) separates endpoint identity (EID) from routing location (RLOC). Devices in a LISP site must advertise (register) which EID prefixes exist behind the site and which RLOCs can be used to reach them. This is done through the LISP mapping system (Map-Server/Map-Resolver).

Why the Answer is Correct: The Egress Tunnel Router (ETR) is responsible for publishing EID-to-RLOC mappings for a site. Concretely, the ETR performs EID registration by sending Map-Register messages to a Map-Server (MS). The Map-Register contains the EID prefix(es) reachable behind the ETR and the associated RLOC(s) (often the ETR's own RLOC, or a set of RLOCs for redundancy). This is the act of "publishing" the mapping into the mapping system so that remote sites can discover how to reach the EIDs.

Key Features / How it Works:
- The ETR learns local EID prefixes (from connected interfaces, VRFs, or routing) and registers them.
- The Map-Server authenticates/accepts registrations (often using shared keys) and stores them in its mapping database.
- Remote ITRs query via a Map-Resolver (MR), which returns the mapping (EID-to-RLOC) so the ITR can encapsulate traffic to the correct RLOC.
- Best practice: deploy redundant ETRs and MS/MR pairs; ensure consistent authentication keys and correct EID-prefix ownership to avoid registration conflicts.

Common Misconceptions:
- Many confuse "publishing" with "answering queries." The MS stores mappings, but it does not originate them; the ETR registers them.
- The ITR is involved in requesting mappings (Map-Request), not publishing them.
- The MR resolves requests (like a DNS resolver role) but does not create the authoritative mapping.

Exam Tips: Remember the directional roles: ETR = registers/publishes EIDs (Map-Register to MS). ITR = requests mappings (Map-Request via MR) and encapsulates to RLOC. MS = database/authority that accepts registrations. MR = query/lookup function for ITRs. If the question says "publishing/registering EID-to-RLOC for a site," think ETR.
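A minimal ETR-side sketch (classic IOS-XE-style LISP configuration; the EID prefix, RLOC, map-server address, and key are all hypothetical). The `database-mapping` entry combined with the map-server definition is what drives the Map-Register publication:

```
router lisp
 database-mapping 10.1.0.0/16 203.0.113.1 priority 1 weight 100
 ipv4 itr map-resolver 192.0.2.10
 ipv4 etr map-server 192.0.2.10 key LISP-KEY
 ipv4 itr
 ipv4 etr
```

Here 10.1.0.0/16 is the local EID prefix and 203.0.113.1 is the RLOC the ETR advertises for it; the shared key must match the Map-Server's configuration for the registration to be accepted.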
Which NGFW mode blocks flows crossing the firewall?
Tap mode is out-of-band monitoring. The NGFW receives a copy of traffic from a network TAP device or SPAN session, analyzes it, and generates alerts/telemetry. Because it is not in the forwarding path, it cannot directly drop packets or reset sessions for the original flow. Tap is chosen for visibility, baselining, and low-risk deployments where blocking is not required.
Inline mode places the NGFW directly in the traffic path (Layer 2 transparent or Layer 3 routed). All flows must traverse the firewall, so it can enforce security policy by allowing, denying, dropping, or resetting connections. This is the standard mode for true prevention and segmentation use cases (edge firewalling, inter-zone controls, IPS blocking).
Passive mode is another term commonly used for out-of-band monitoring/IDS-style deployment. The device inspects mirrored traffic and provides detection and reporting, but it does not sit inline and therefore cannot block flows crossing the network. Passive deployments are useful for evaluation, troubleshooting, and environments where introducing an inline device is not acceptable.
Inline tap is a hybrid term that can be confusing. It typically indicates the device is physically inline but configured to operate like a tap (monitor-only), forwarding traffic without enforcing policy drops. In many vendor contexts, it is used to reduce risk during initial rollout (visibility first), but it is not the mode associated with blocking flows.
Core concept: This question tests NGFW (Next-Generation Firewall) deployment modes and, specifically, which mode can actively enforce policy by stopping traffic. The key distinction is whether the firewall is in the forwarding path (can block) or only observing a copy of traffic (cannot block).

Why the answer is correct: Inline mode places the NGFW directly in the traffic path (Layer 2 transparent/bridge or Layer 3 routed). Because every packet/flow must traverse the firewall, the NGFW can enforce access control, application control, IPS, URL filtering, malware inspection, and other security policies by dropping packets, resetting sessions, or denying new connections. Therefore, inline mode is the mode that blocks flows crossing the firewall.

Key features, configurations, and best practices: In inline deployments, the firewall becomes a control point for traffic. This enables stateful inspection and full enforcement actions (allow/deny, rate-limit, TCP reset, quarantine). Inline designs must consider high availability (active/standby or clustering), bypass behavior (fail-open vs fail-close depending on platform and risk tolerance), sizing/performance (throughput, concurrent sessions, SSL decryption impact), and routing/segmentation (inside/outside zones, VRFs, VLANs). Inline is typically used at the internet edge, between user and data center segments, or between critical zones where prevention is required.

Common misconceptions: "Tap" and "passive" modes sound like they might still block because the NGFW can detect threats; however, they only receive mirrored/copied traffic (SPAN/TAP) and are not in the forwarding path, so they cannot stop the original flow. At best they can alert or trigger external actions (for example, via integrations) that may indirectly mitigate later. "Inline tap" can be confusing: it often refers to a hybrid where the device is physically inline but configured to behave like a tap (monitoring-only), meaning it does not enforce blocking.
Exam tips: For ENCOR, memorize the simple rule: “Inline = in-path = can block.” “Tap/Passive = out-of-band = cannot block original traffic.” If the question asks about prevention/enforcement, choose inline. If it asks about visibility/monitoring with minimal risk of outage, choose tap/passive.
To increase total throughput and redundancy on the links between the wireless controller and switch, the customer enabled LAG on the wireless controller. Which EtherChannel mode must be configured on the switch to allow the WLC to connect?
Active is an LACP mode that actively sends LACPDU packets to negotiate an EtherChannel. Cisco WLC LAG does not use LACP negotiation, so configuring the switch for active will not establish the expected bundle with the controller. This option is a common distractor because many candidates equate LAG with LACP. In this scenario, active is incorrect because the WLC expects a static port-channel instead.
Passive is also an LACP mode, but it only responds to LACP packets rather than initiating them. Since the WLC does not negotiate EtherChannel with LACP, passive will not successfully form the required port-channel. Even in normal LACP operation, passive depends on the other side being active, which is not the case here. Therefore passive is not valid for connecting a WLC with LAG enabled.
On is correct because Cisco WLC LAG requires the switch to form a static EtherChannel without any negotiation protocol. The WLC does not participate in LACP or PAgP exchanges, so the switch ports must be forced into the port-channel with `mode on`. This allows the controller to use multiple physical links as one logical uplink for redundancy and increased aggregate throughput. In Cisco exam context, WLC LAG is a classic case where static EtherChannel is required on the switch.
Auto is a PAgP mode, which is Cisco proprietary and used for PAgP-based EtherChannel negotiation. Cisco WLC LAG does not use PAgP, so auto cannot establish the bundle. This option is incorrect both because it uses the wrong protocol and because the WLC requires a static EtherChannel. On exam questions, auto should be eliminated whenever the peer does not support PAgP.
Core concept: This question tests how Cisco Wireless LAN Controllers implement Link Aggregation (LAG) toward a switch. On Cisco WLCs, enabling LAG causes the controller to present its uplinks as a single logical interface, but the switch side must be configured as a static EtherChannel rather than using a negotiation protocol.

Why correct: The correct switch EtherChannel mode is **on** because Cisco WLC LAG does not use PAgP or LACP negotiation with the switch. The switch ports must be manually bundled into a static port-channel so the WLC can forward traffic across the aggregated links as one logical connection.

Key features:
- WLC LAG uses one logical interface and one MAC address across the bundled links.
- The switch must use a static EtherChannel with `channel-group <id> mode on`.
- All member interfaces must have identical Layer 2 settings such as trunking, VLAN allowance, native VLAN, speed, and duplex.
- LAG on the WLC improves redundancy and aggregate bandwidth, but load balancing is still determined by the switch hashing algorithm.

Common misconceptions:
- Many engineers associate the term LAG with LACP, but on Cisco WLC platforms, LAG is not negotiated with LACP.
- Choosing active or passive is incorrect because those are LACP modes.
- Choosing auto is incorrect because it is a PAgP mode.

Exam tips: For Cisco WLC exam questions, remember that WLC LAG typically maps to a **static EtherChannel** on the connected switch. If the options are active, passive, on, and auto, the expected answer is usually **on**.
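A minimal switch-side sketch for the static bundle (interface numbers and the channel-group ID are arbitrary; trunk and VLAN settings must match your design and be identical on all member ports):

```
interface range GigabitEthernet1/0/1 - 2
 switchport mode trunk
 channel-group 1 mode on
!
interface Port-channel1
 switchport mode trunk
```

Because `mode on` performs no negotiation, a mismatch between the member ports (or against the WLC's expectations) will not be caught by a protocol, so consistent configuration across all members is critical.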
What is the difference between the enable password and the enable secret password when service password encryption is enabled on an IOS device?
Correct. The enable secret is stored using a stronger, one-way cryptographic hash (historically type 5 MD5; on many modern platforms type 8 PBKDF2-SHA256 or type 9 scrypt). This is fundamentally stronger than the enable password, which is either plaintext or type 7 reversible obfuscation when service password-encryption is enabled.
Incorrect. The enable password can be decrypted if it is stored as type 7 (which is what service password-encryption produces). Type 7 is reversible and widely crackable. Only the enable secret is intended to be non-reversible (one-way hash), assuming a strong algorithm and password.
Incorrect. The enable password does not get a stronger encryption method than enable secret. With service password-encryption, enable password becomes type 7, which is weaker than enable secret hashing. The enable secret remains the stronger mechanism and is the recommended configuration for privileged access.
Incorrect. They are not encrypted identically. Enable password with service password-encryption uses type 7 reversible encoding, while enable secret uses a one-way hash (type 5/8/9). Additionally, when both are configured, IOS uses enable secret for authentication, underscoring their functional and security differences.
Core concept: This question tests how Cisco IOS stores and protects privileged EXEC (enable) credentials, and what "service password-encryption" actually does. IOS can be configured with both "enable password" and "enable secret". They are not equivalent in strength, and "service password-encryption" does not upgrade weak password storage into strong cryptography.

Why the answer is correct: The enable secret is protected with a stronger cryptographic mechanism than the enable password. Historically, "enable secret" used an MD5-based one-way hash (type 5). Newer IOS versions can store secrets using stronger algorithms (for example, type 8 PBKDF2-SHA256 and type 9 scrypt, depending on platform/software). In contrast, "enable password" is either stored in cleartext (type 0) or, if "service password-encryption" is enabled, obfuscated using Cisco type 7 (a reversible encoding). Therefore, even with service password-encryption enabled, enable secret remains significantly more secure.

Key features / configuration notes:
- "service password-encryption" encrypts (obfuscates) certain plaintext passwords in the running/startup config (console, vty, username password, enable password) using type 7.
- "enable secret" is always stored as a one-way hash and takes precedence over "enable password" when both are configured.
- Best practice: always configure "enable secret" (preferably with modern type 8/9 where supported) and avoid relying on type 7 for security; treat type 7 as only preventing casual shoulder-surfing.

Common misconceptions: Many assume enabling "service password-encryption" makes all passwords "secure" or "non-decryptable." In reality, type 7 is easily reversible with widely available tools, so it is not strong encryption. Another misconception is that enable password becomes stronger than enable secret when encrypted; this is false.
Exam tips: Remember these quick rules: (1) enable secret overrides enable password, (2) service password-encryption = type 7 reversible obfuscation, (3) enable secret uses one-way hashing (type 5/8/9) and is the recommended method. If you see “stronger cryptography” vs “no difference,” choose the option that highlights enable secret’s stronger protection.
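A minimal sketch of the difference in configuration (the passwords shown are placeholders; `algorithm-type` support depends on the IOS release):

```
service password-encryption
!
! Stored as type 7 (reversible obfuscation)
enable password WeakPass1
!
! Stored as a one-way hash (type 5 MD5 by default on older releases)
enable secret StrongSecret9
!
! Where supported, request a stronger hash explicitly (type 9 scrypt)
enable algorithm-type scrypt secret StrongSecret9
```

In `show running-config`, the enable password appears as a type 7 string that can be trivially reversed, while the enable secret appears only as its hash; with both configured, IOS authenticates against the secret.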
What is the result of applying this access control list?

ip access-list extended STATEFUL
 10 permit tcp any any established
 20 deny ip any any
Incorrect. The URG bit is a TCP flag used to indicate urgent data and is unrelated to the IOS ACL “established” keyword. “Established” does not match on URG; it matches on ACK or RST. Even if a packet has URG set, it would still need ACK or RST set to match this ACL line.
Incorrect. SYN is used to initiate a TCP connection (first step of the three-way handshake). The “established” keyword is specifically intended to block new inbound connection attempts, so SYN-only packets (typical initial connection attempts) do not match and will be denied by the explicit “deny ip any any.”
Correct. In Cisco IOS extended ACLs, the “established” keyword matches TCP segments with the ACK or RST bit set. Most packets in an existing TCP session (including the server’s responses) have ACK set, so they are permitted by line 10. All other traffic is denied by line 20.
Incorrect. DF (Don’t Fragment) is an IP header bit, not a TCP flag. The ACL line uses “permit tcp ... established,” which evaluates TCP flags (ACK/RST) only. This ACL does not include any match criteria for IP DF, so DF being set does not cause a permit.
Core Concept: This ACL tests understanding of the Cisco IOS "established" keyword in extended ACLs. It is a simple, stateless approximation of stateful filtering for TCP: it permits only TCP segments that appear to be part of an already-established TCP session.

Why the Answer is Correct: The entry "permit tcp any any established" matches TCP packets that have the ACK or RST control bits set. In practice, this means return traffic for a TCP connection (after the initial SYN) is allowed because those packets normally carry the ACK bit. The next line "deny ip any any" blocks everything else (including new TCP connection attempts and all UDP/ICMP). Therefore, the result is that TCP traffic with the ACK bit set is allowed (and also RST, though not offered as an option).

Key Features / Behavior:
- "established" does not track sessions like a firewall; it only checks TCP flags.
- It matches packets with ACK=1 and/or RST=1.
- It is commonly used inbound on an interface to allow return traffic from outbound TCP sessions while blocking unsolicited inbound connection attempts.
- Because it is not truly stateful, it can be bypassed in some scenarios (e.g., crafted packets with ACK set) and does not help with UDP.

Common Misconceptions: Many assume "established" means "SYN-ACK" or "any packet after the handshake," but IOS implements it as a simple flag check (ACK/RST). Another trap is confusing TCP flags (SYN/ACK/URG/RST/FIN/PSH) with IP header bits like DF (Don't Fragment), which ACL "established" does not evaluate.

Exam Tips: Remember: in Cisco extended ACLs, "established" = TCP ACK or RST set. If you see "deny ip any any" after it, only those matching TCP packets pass; everything else is dropped. Also note direction: applied inbound, it typically permits return traffic; applied outbound, it is usually not meaningful for this purpose.
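A minimal sketch of how such an ACL is typically applied inbound on an outside-facing interface (the interface name is arbitrary):

```
ip access-list extended STATEFUL
 permit tcp any any established
 deny ip any any
!
interface GigabitEthernet0/1
 ip access-group STATEFUL in
```

With this in place, inbound TCP segments carrying ACK or RST (normal return traffic for sessions initiated from inside) are permitted, while inbound SYN-only connection attempts and all non-TCP traffic hit the explicit deny.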
What is a fact about Cisco EAP-FAST?
Incorrect. Requiring a client certificate is characteristic of EAP-TLS, which uses mutual certificate-based authentication. EAP-FAST is designed to avoid mandatory client certificates by using a PAC to establish the protected tunnel and then performing inner authentication (often password-based) inside the tunnel. While certificates can be used in some designs, they are not required for EAP-FAST.
Incorrect. EAP-FAST originated as a Cisco method and is not an IETF standards-track EAP type in the same way as broadly standardized approaches. It is documented in RFCs (for example, RFC 4851) but not as a universally adopted IETF standard method like many candidates assume. On exams, treat EAP-FAST as Cisco-developed rather than an IETF standard.
Correct. A defining fact about EAP-FAST is that it does not require a RADIUS server certificate because it can establish the secure tunnel using a PAC (Protected Access Credential). This can simplify deployments where PKI and certificate distribution/validation are challenging. However, administrators must manage PAC provisioning securely to avoid weakening trust during initial provisioning.
Incorrect. “Transparent mode” is not a defining operational mode of EAP methods. Transparent mode is more commonly associated with certain network devices (for example, firewalls or bridging behaviors), not with EAP-FAST. EAP-FAST is an authentication method used within 802.1X/EAP exchanges between supplicant, authenticator (switch/AP), and authentication server (RADIUS).
Core Concept: EAP-FAST (Extensible Authentication Protocol – Flexible Authentication via Secure Tunneling) is a Cisco-developed EAP method used with 802.1X (wired/wireless) to provide strong user authentication by creating a protected tunnel (similar in goal to PEAP/EAP-TTLS) but using a PAC (Protected Access Credential) mechanism rather than relying strictly on server certificates.

Why the Answer is Correct: A key fact about EAP-FAST is that it can be deployed without requiring a RADIUS server certificate, which is the distinguishing point compared to PEAP and EAP-TLS deployments that typically depend on validating a server certificate to establish the TLS tunnel. In EAP-FAST, the tunnel is established using a PAC (a shared-secret-like credential) that the client presents to the server. Because the PAC can bootstrap trust, EAP-FAST can operate in environments where deploying and managing a full PKI for server certificates is difficult.

Key Features / How it Works: EAP-FAST uses a two-phase approach:
1) Phase 1 establishes a secure tunnel using the PAC.
2) Phase 2 performs inner authentication (often username/password via MSCHAPv2 or other EAP methods) inside that tunnel.
PAC provisioning can be manual (out-of-band) or automatic (in-band). Automatic provisioning is convenient but must be controlled to avoid "trust on first use" risks. In Cisco enterprise designs, EAP-FAST is commonly associated with Cisco ISE/ACS and Cisco AnyConnect/NAM or other supplicants that support PAC handling.

Common Misconceptions: Many candidates confuse EAP-FAST with EAP-TLS and assume certificates are mandatory. EAP-FAST can use certificates, but its hallmark is that it does not require them. Another trap is thinking it is an IETF standard; EAP-FAST is Cisco-proposed and documented in RFC 4851 as informational rather than a broadly adopted standards-track method.
Exam Tips: For ENCOR, remember the "certificate requirement" differences:
- EAP-TLS: client certificate required (strongest, mutual certificate authentication).
- PEAP/EAP-TTLS: server certificate typically required.
- EAP-FAST: PAC-based; can avoid the server certificate requirement.
If you see "PAC" or "no server cert required," think EAP-FAST.
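The 802.1X roles discussed above (supplicant, authenticator, RADIUS server) can be sketched in switch configuration. This is a minimal, hypothetical IOS example; the server name, address, shared secret, and interface are placeholders, and the actual EAP method (EAP-FAST, PEAP, EAP-TLS) is negotiated between the supplicant and the RADIUS server, not configured on the switch.

```
! Hypothetical access-switch 802.1X (authenticator) configuration.
! The EAP method itself is chosen by the supplicant and RADIUS server;
! the switch only relays EAP frames between them.
aaa new-model
aaa authentication dot1x default group radius
!
radius server ISE-1
 address ipv4 10.10.10.10 auth-port 1812 acct-port 1813
 key ExampleSharedSecret
!
dot1x system-auth-control
!
interface GigabitEthernet1/0/10
 switchport mode access
 authentication port-control auto
 dot1x pae authenticator
```

On the server side (for example, Cisco ISE), EAP-FAST and its PAC provisioning policy are enabled in the allowed-protocols configuration; no PAC-related commands appear on the switch.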
Which two network problems indicate a need to implement QoS in a campus network? (Choose two.)
Port flapping is a Layer 1/2 stability problem (bad cable/SFP, duplex mismatch, STP issues, interface errors) causing link up/down events and reconvergence. QoS does not stabilize a physical link or prevent interface resets; traffic will be disrupted regardless of priority. The correct fix is to troubleshoot the interface, cabling, transceiver, and switchport configuration.
Excess jitter indicates variable queuing delay, commonly caused by congestion and contention on an egress interface. Real-time applications are especially sensitive to jitter. Implementing QoS (classification/marking plus priority queuing and bandwidth guarantees) reduces delay variation for voice/video and makes performance more predictable under load.
Misrouted packets point to routing/control-plane issues (incorrect routes, redistribution mistakes, asymmetric routing, policy-based routing errors) or possibly L2 loops. QoS does not correct forwarding decisions; it only influences scheduling and drop behavior once packets are already on the correct path. Fix routing tables, adjacencies, and policies instead.
Duplicate IP addresses are an IP management and Layer 2/ARP problem that can cause intermittent connectivity, ARP instability, and traffic blackholing. QoS cannot resolve address conflicts because the issue is not congestion-based. The solution is proper IPAM, DHCP snooping/DAI where appropriate, and correcting host/static addressing.
Bandwidth-related packet loss is a hallmark of congestion: queues overflow and packets are tail-dropped when the offered load exceeds available bandwidth. QoS helps by allocating bandwidth per class, prioritizing critical traffic, and using congestion avoidance (e.g., WRED) and shaping/policing strategies to control bursts and protect important applications.
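The two correct symptoms map directly onto standard MQC tools: LLQ bounds queuing delay (and therefore jitter) for voice, while CBWFQ bandwidth guarantees and WRED address congestion loss. A hypothetical egress policy sketch follows; class names, DSCP values, percentages, and the interface are illustrative, not taken from the question.

```
! Hypothetical campus egress queuing policy.
class-map match-any VOICE
 match dscp ef
class-map match-any CRITICAL-DATA
 match dscp af31
!
policy-map CAMPUS-EGRESS
 ! LLQ: strict priority gives voice low, bounded delay (low jitter)
 class VOICE
  priority percent 10
 ! CBWFQ guarantee plus WRED to drop early instead of tail-dropping
 class CRITICAL-DATA
  bandwidth percent 30
  random-detect dscp-based
 class class-default
  fair-queue
  random-detect
!
interface GigabitEthernet0/1
 service-policy output CAMPUS-EGRESS
```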
Core Concept: This question tests when Quality of Service (QoS) is required in a campus network. QoS manages congestion by classifying traffic, marking it (DSCP/CoS), and applying queuing/scheduling, shaping, and policing so that delay- and loss-sensitive applications (voice/video/real-time) remain usable when links become oversubscribed.
Why the Answer is Correct: Excess jitter (B) is a classic symptom of congestion and inconsistent queuing delay. Real-time traffic (VoIP, interactive video) is highly sensitive to variation in packet delay; even if average latency is acceptable, jitter causes choppy audio/video and buffer underruns. QoS addresses this by prioritizing real-time traffic (e.g., LLQ/priority queue), reducing contention, and ensuring predictable forwarding. Bandwidth-related packet loss (E) indicates tail drops due to queue overflow during congestion. When egress interfaces are oversubscribed, buffers fill and packets are dropped, which severely impacts TCP throughput and can break UDP-based real-time streams. QoS mitigates this by allocating bandwidth per class, using congestion management (CBWFQ), congestion avoidance (WRED for TCP-friendly behavior), and, where appropriate, shaping to smooth bursts.
Key Features / Best Practices: In campus designs, QoS is typically end-to-end: classify and mark at the access edge (the trust boundary), preserve markings across the distribution/core, and apply queuing on egress where congestion occurs. Common tools include DSCP marking, priority queuing for voice, bandwidth guarantees for critical applications, and WRED for scalable drop behavior. The campus core is often engineered to be non-blocking, but oversubscription at access uplinks, WAN edges, or internet edges still makes QoS necessary.
Common Misconceptions: Many issues look like "performance problems" but are not QoS problems.
Layer 1/2 instability (port flapping) and addressing/routing faults (duplicate IPs, misrouting) cause loss and disruption regardless of QoS policy. QoS cannot fix incorrect forwarding or physical/link errors; it only manages traffic under congestion.
Exam Tips: On ENCOR, associate QoS needs with congestion symptoms: jitter, delay, and drops due to oversubscription. Eliminate options that are clearly physical, routing, or IP management issues. If the problem statement implies queueing/buffering behavior (drops, jitter), think QoS; if it implies correctness (wrong path, duplicate IP), think routing/addressing troubleshooting instead.
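The "classify and mark at the access edge" best practice can likewise be sketched as an ingress marking policy at the trust boundary. This is a hypothetical example: the ACL name, UDP port range (the classic Cisco voice RTP range), and interface are placeholders chosen for illustration.

```
! Hypothetical access-edge trust boundary: mark voice EF on ingress so
! upstream distribution/core devices can queue on DSCP alone.
ip access-list extended VOICE-RTP
 permit udp any any range 16384 32767
!
class-map match-all MARK-VOICE
 match access-group name VOICE-RTP
!
policy-map ACCESS-EDGE-IN
 class MARK-VOICE
  set dscp ef
 class class-default
  set dscp default
!
interface GigabitEthernet1/0/5
 service-policy input ACCESS-EDGE-IN
```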