
Which access control list allows only TCP traffic with a destination port range of 22-443, excluding port 80?
This option first denies TCP traffic with destination port 80, which correctly removes the excluded port before any broader permit is evaluated. The second statement permits TCP traffic with destination ports greater than 21 and less than 444, which corresponds exactly to ports 22 through 443. Because ACLs are processed top-down, packets for port 80 match the deny first and are dropped, while packets for ports 22-79 and 81-443 are permitted by the second line. Traffic outside that range is not matched by the permit and is denied by the implicit deny, so the logic satisfies the requirement.
This option places the permit for the full range 22 through 443 before the deny for port 80, which breaks the intended logic. Because port 80 falls inside the permitted range, packets to port 80 match the first statement and are allowed immediately. The later deny is never evaluated for those packets due to first-match ACL behavior. As a result, port 80 is not excluded, so the ACL does not meet the requirement.
This option permits only TCP traffic with destination port 80, which is the exact opposite of the requirement to exclude port 80. It does not allow the broader destination port range of 22 through 443, so valid traffic to ports such as 22, 443, or 3389 within the intended range would not be handled correctly. Because of the implicit deny at the end of the ACL, all other traffic would be blocked. Therefore this option clearly fails to implement the requested access policy.
This option is correct because it explicitly denies TCP destination port 80 and then permits the inclusive destination port range 22 through 443. Since ACLs stop processing after the first match, traffic to port 80 is blocked by the first line and never reaches the permit statement. Traffic to other ports in the 22-443 range is allowed by the second line, and traffic outside that range is denied implicitly. This is the clearest and most direct expression of the requested policy.
Core concept: This question tests extended ACL processing order and TCP destination port matching using operators such as eq, gt, lt, and range. The requirement is to permit only TCP traffic whose destination port is between 22 and 443 while excluding port 80, which means the ACL must deny port 80 before permitting the broader allowed set. Both a deny-then-range approach and a deny-then-gt/lt approach can satisfy the requirement because ACLs are evaluated top-down and stop at the first match.

A common misconception is that only the range keyword is valid for expressing the allowed ports, but gt 21 lt 444 is functionally equivalent to ports 22 through 443.

Exam tip: always verify both the logic and the order of ACL entries, and remember that anything not explicitly permitted is blocked by the implicit deny at the end.
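The first-match behavior described above can be sketched in a few lines of Python. This is an illustrative model, not Cisco syntax: each entry is a hypothetical (action, predicate) pair, evaluation stops at the first match, and an implicit deny applies when nothing matches.

```python
def evaluate_acl(acl, dst_port):
    """Return the action of the first matching entry, or 'deny' (implicit)."""
    for action, matches in acl:
        if matches(dst_port):
            return action
    return "deny"  # implicit deny at the end of every ACL

# deny-then-permit ordering: port 80 is removed before the broader permit
correct_acl = [
    ("deny",   lambda p: p == 80),         # deny tcp ... eq 80
    ("permit", lambda p: 22 <= p <= 443),  # permit tcp ... range 22 443
]

# permit-then-deny ordering: the deny is shadowed and never evaluated
broken_acl = [
    ("permit", lambda p: 22 <= p <= 443),
    ("deny",   lambda p: p == 80),
]
```

With this model, `evaluate_acl(correct_acl, 80)` returns "deny" while `evaluate_acl(broken_acl, 80)` returns "permit", which is exactly why the ordering of the two statements decides whether the policy is met.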
Refer to the exhibit.
R1#show ip bgp
BGP table version is 32, local router ID is 192.168.101.5
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
x best-external, a additional-path, c RIB-compressed,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found
   Network          Next Hop         Metric LocPrf Weight Path
*  192.168.102.0    192.168.101.18       80             0 64517 i
*                   192.168.101.14       80     80      0 64516 i
*                   192.168.101.10                      0 64515 64515 i
*>                  192.168.101.2                   32768 64513 i
*                   192.168.101.6        80             0 64514 64514 i
Which IP address becomes the active next hop for 192.168.102.0/24 when 192.168.101.2 fails?
192.168.101.10 is not selected because its AS-path is longer than the path through 192.168.101.18. The path attribute shown is `64515 64515i`, which indicates AS 64515 has been prepended, giving it an AS-path length of 2. After the failed best path is removed, BGP compares the remaining eBGP candidates and prefers the shorter AS-path. Therefore, 192.168.101.18 is preferred over 192.168.101.10.
192.168.101.14 is not selected because it is an iBGP-learned route, indicated by the presence of a Local Preference value in the table. Its LocPrf of 80 is also lower than the default of 100 that Cisco assigns to paths without an explicit value, so it loses the Local Preference comparison; even at a tie, Cisco best-path selection prefers an eBGP path over an iBGP path. The remaining eBGP routes are therefore preferred before this iBGP route. As a result, 192.168.101.14 does not become the active next hop.
192.168.101.6 is not selected because its AS-path is longer than the path through 192.168.101.18. The path shown is `64514 64514i`, which is an AS-path length of 2 because of prepending. While it also shows a MED of 80, MED is considered later and does not help it beat a shorter eBGP AS-path. Therefore, 192.168.101.18 remains the best remaining route.
192.168.101.18 becomes the active next hop after 192.168.101.2 is withdrawn. Among the remaining candidates, 192.168.101.14 is an iBGP-learned route because it shows a Local Preference value, while 192.168.101.18, 192.168.101.10, and 192.168.101.6 are eBGP-learned routes. BGP prefers eBGP over iBGP when earlier decisive attributes do not produce a winner, so the iBGP path via 192.168.101.14 is not selected. Between the remaining eBGP paths, 192.168.101.18 has the shortest AS-path length of 1 compared with 192.168.101.6 and 192.168.101.10, which each show AS-path length 2 due to AS prepending, so 192.168.101.18 wins.
Core concept: This question tests Cisco BGP best-path selection after the current best route is withdrawn. When the path via 192.168.101.2 fails, R1 reevaluates the remaining valid paths for 192.168.102.0/24 using the normal BGP decision process. Key attributes to compare here are Local Preference, AS-path length, origin, MED, and especially the eBGP-versus-iBGP preference.

A common misconception is to assume that a displayed LocPrf of 80 automatically beats the other paths. In fact, 80 is lower than the default of 100 that Cisco assigns to paths with no explicit Local Preference value, so the path via 192.168.101.14 loses that comparison; it is also the only iBGP path in this table, while the others are eBGP.

Exam tip: after eliminating the failed best path, identify which remaining routes are eBGP versus iBGP and remember that eBGP is preferred over iBGP before later tiebreakers like IGP metric.
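The comparison walked through above can be sketched as a simplified tiebreaker chain. This is a toy model covering only the steps relevant to this table (weight, Local Preference, AS-path length, eBGP-over-iBGP); the real Cisco decision process has many more steps, and the path dictionaries are hypothetical.

```python
DEFAULT_LOCAL_PREF = 100  # assumed when no LocPrf is displayed

def best_path(paths):
    """Pick the winner using a simplified subset of the BGP decision process."""
    return max(
        paths,
        key=lambda p: (
            p.get("weight", 0),                       # higher weight wins
            p.get("local_pref", DEFAULT_LOCAL_PREF),  # higher LocPrf wins
            -len(p["as_path"]),                       # shorter AS path wins
            p["type"] == "ebgp",                      # eBGP beats iBGP
        ),
    )

# remaining candidates after 192.168.101.2 is withdrawn
remaining = [
    {"next_hop": "192.168.101.18", "type": "ebgp", "as_path": [64517]},
    {"next_hop": "192.168.101.14", "type": "ibgp", "as_path": [64516],
     "local_pref": 80},
    {"next_hop": "192.168.101.10", "type": "ebgp", "as_path": [64515, 64515]},
    {"next_hop": "192.168.101.6",  "type": "ebgp", "as_path": [64514, 64514]},
]
```

Running `best_path(remaining)` selects the path via 192.168.101.18: the iBGP path loses on Local Preference (80 < 100) and the two prepended eBGP paths lose on AS-path length.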
Refer to the exhibit.
access-list 1 permit 10.1.1.0 0.0.0.31
ip nat pool CISCO 209.165.201.1 209.165.201.30 netmask 255.255.255.224
ip nat inside source list 1 pool CISCO
What are two effects of this configuration? (Choose two.)
Incorrect. A true one-to-one NAT translation in the exam sense usually implies a fixed, permanent mapping (static NAT), configured with "ip nat inside source static". This configuration is dynamic NAT using a pool; mappings are allocated as needed and can change over time. While each active inside host may temporarily get one global address, it is not a guaranteed permanent one-to-one mapping.
Incorrect. The 209.165.201.0/27 network in the pool represents inside global addresses (public addresses used to represent inside hosts). “Outside local” refers to how an outside host’s address is represented inside the network (often only relevant with overlapping addressing or special NAT cases). Nothing here defines outside local addresses or an outside address range.
Correct. The standard ACL 1 permits 10.1.1.0 0.0.0.31, which matches 32 addresses (10.1.1.0–10.1.1.31), i.e., 10.1.1.0/27. In NAT terms, these are the inside local addresses (the private addresses assigned to inside hosts) that are eligible to be translated when traffic is sent toward the outside.
Correct. The NAT pool CISCO provides addresses 209.165.201.1–209.165.201.30 with a /27 mask, meaning they are part of the 209.165.201.0/27 subnet. The command "ip nat inside source list 1 pool CISCO" translates matched inside source addresses to addresses from this pool, making them inside global addresses visible to the outside.
Incorrect. 10.1.1.0/27 is not the inside global range; it is the inside local range (private addresses). The inside global range is the public address space used to represent inside hosts externally, which in this configuration is the NAT pool 209.165.201.1–209.165.201.30 (within 209.165.201.0/27).
Core concept: This configuration is testing dynamic NAT using a NAT pool and a standard ACL to match inside source addresses. In Cisco NAT terminology, the inside local address is the private (RFC 1918) address of an inside host, and the inside global address is the public address that represents that host to the outside.

Why the answer is correct: The ACL statement "access-list 1 permit 10.1.1.0 0.0.0.31" matches the inside source subnet 10.1.1.0/27 (wildcard 0.0.0.31 corresponds to a /27), so hosts in 10.1.1.0 through 10.1.1.31 are eligible for NAT. The NAT pool "CISCO" defines a public range from 209.165.201.1 to 209.165.201.30 with netmask 255.255.255.224 (/27). The command "ip nat inside source list 1 pool CISCO" ties them together: any packet sourced from an address matched by ACL 1 (inside local) is translated to an address from the pool (inside global). Therefore, (C) the inside local addresses are 10.1.1.0/27, and (D) those inside source addresses are translated to the 209.165.201.0/27 public subnet (specifically .1-.30 from that /27).

Key features / best practices: This is dynamic NAT (not PAT) because no "overload" keyword is present. Each active translation consumes one address from the pool; when the pool is exhausted, new translations fail until existing ones time out. Also remember NAT requires interface roles (ip nat inside / ip nat outside) on the relevant interfaces, though they are not shown in the snippet.

Common misconceptions: Many candidates confuse dynamic NAT with one-to-one static NAT. Dynamic NAT can be one-to-one per active session/host, but it is not a fixed mapping. Another common confusion is mixing up "outside local/global" with "inside local/global"; the pool defines inside global addresses, not outside local.

Exam tips: If you see "ip nat inside source list <acl> pool <name>" without "overload", think dynamic NAT with a pool. Convert wildcard masks to prefix lengths quickly (0.0.0.31 = /27). Then identify: ACL = inside local match; pool = inside global range.
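The wildcard-mask arithmetic above can be checked with Python's standard ipaddress module. The helper names are hypothetical; the math (wildcard = bitwise inverse of the netmask, matched range = base address plus the wildcard span) is the same calculation done by hand on the exam.

```python
import ipaddress

def wildcard_to_prefixlen(wildcard):
    """0.0.0.31 -> 27 (the wildcard is the bitwise inverse of the netmask)."""
    inverse = int(ipaddress.IPv4Address(wildcard))
    netmask = ipaddress.IPv4Address(0xFFFFFFFF ^ inverse)
    return ipaddress.IPv4Network(f"0.0.0.0/{netmask}").prefixlen

def matched_range(network, wildcard):
    """First and last address matched by 'network wildcard' in an ACL."""
    base = int(ipaddress.IPv4Address(network))
    span = int(ipaddress.IPv4Address(wildcard))
    return (str(ipaddress.IPv4Address(base)),
            str(ipaddress.IPv4Address(base + span)))
```

For this question, `wildcard_to_prefixlen("0.0.0.31")` yields 27 and `matched_range("10.1.1.0", "0.0.0.31")` yields ("10.1.1.0", "10.1.1.31"), confirming the inside local range.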
What is the difference between a RIB and a FIB?
Correct. The FIB (CEF table) is programmed from the RIB’s selected best routes. The RIB collects routes from connected, static, and dynamic protocols and performs best-path selection. The resulting active routes are installed into the FIB for fast data-plane forwarding, along with adjacency information to rewrite Layer 2 headers toward the next hop.
Incorrect. The RIB does not maintain a mirror image of the FIB. The RIB can contain multiple routes to the same prefix from different sources (or multiple BGP paths), including routes that are not currently used for forwarding. The FIB is typically a subset containing the best/installed routes used for actual packet forwarding.
Incorrect. Standard IP routing/CEF forwarding uses destination prefix longest-prefix-match, not source prefix-based switching. While policy-based routing (PBR) can influence forwarding based on source or other fields, that is not the default RIB function. The RIB’s role is route selection and maintaining routing information, not source-prefix switching decisions.
Incorrect. The FIB is not where all IP routing information is stored; that is the role of the RIB. The FIB is an optimized forwarding structure used by the data plane and generally contains only the active/best routes needed to forward packets efficiently. Full routing information and candidate routes remain in the RIB.
Core Concept: This question tests the distinction between the Routing Information Base (RIB) and the Forwarding Information Base (FIB). In Cisco IOS/IOS XE, the RIB is the control-plane routing table (built by routing protocols and static routes), while the FIB is the data-plane forwarding table used for high-speed packet forwarding (Cisco Express Forwarding, CEF).

Why the Answer is Correct: Option A is correct because the FIB is derived from the RIB's best routes. Routing protocols (OSPF, EIGRP, BGP), connected routes, and static routes are installed into the RIB. The router then selects the best route per prefix (based on administrative distance, metric, and other selection rules). Those best routes are programmed into the FIB, along with adjacency information (next-hop Layer 2 rewrite details) so packets can be forwarded quickly without re-running routing lookups for every packet.

Key Features / Best Practices:
- RIB (control plane): stores all learned routes (including candidates), runs best-path selection, supports multiple sources, and is used for routing decisions and redistribution.
- FIB (data plane): optimized structure for fast longest-prefix-match lookups; typically contains only the active/best routes used for forwarding.
- Adjacency table: paired with the FIB in CEF; contains Layer 2 next-hop rewrite information (MAC, encapsulation).
- Troubleshooting: use "show ip route" for the RIB and "show ip cef" (and adjacency outputs) for FIB/CEF behavior.

Common Misconceptions: A frequent trap is thinking the FIB is "where all routing info is stored" (it is not; it is a forwarding structure). Another misconception is that the RIB mirrors the FIB; in reality, the RIB can contain many more routes (including non-selected paths), while the FIB typically contains only the installed best paths for forwarding.

Exam Tips: Remember: RIB = routing table (control plane, route learning/selection). FIB = forwarding table (data plane, packet switching). If you see "CEF," think FIB/adjacencies. If you see "routing protocols, AD/metric, best path," think RIB. Also note that forwarding decisions are destination-prefix based (longest match), not source-prefix based in normal routing.
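The RIB-to-FIB relationship described above can be shown with a toy model: the RIB holds every candidate route per prefix from multiple sources, and the FIB is programmed with only the selected best route per prefix. The data structures and sample routes are hypothetical.

```python
# RIB: prefix -> list of (source, administrative_distance, next_hop)
rib = {
    "10.0.0.0/24": [("ospf", 110, "192.0.2.1"), ("static", 1, "192.0.2.9")],
    "10.0.1.0/24": [("eigrp", 90, "192.0.2.5")],
}

def build_fib(rib):
    """Install only the lowest-AD (best) candidate per prefix into the FIB."""
    return {prefix: min(routes, key=lambda r: r[1])[2]
            for prefix, routes in rib.items()}

fib = build_fib(rib)
```

Note how the RIB keeps both the OSPF and static candidates for 10.0.0.0/24, but the FIB contains a single forwarding entry (the static route's next hop, since AD 1 beats AD 110), mirroring why the FIB is a subset of the RIB rather than a mirror image.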
What is the structure of a JSON web token?
Correct. A standard signed JWT (JWS) is composed of three Base64URL-encoded segments separated by dots: header, payload, and signature. The header states token type and algorithm, the payload carries claims, and the signature provides integrity/authenticity by signing header+payload. This is the canonical structure tested on exams and used in HTTP Authorization: Bearer tokens.
Incorrect. JWTs do not have a “version” segment as part of their dot-separated structure. Versioning is handled by standards (RFC) and sometimes by claims or application logic, not by adding a token segment. The three segments are specifically header, payload, and signature for JWS. Introducing “version” is a distractor that sounds plausible but is not part of JWT format.
Incorrect. While header and payload are two of the three JWT segments and are easily decoded, a proper JWT used for trust is typically signed, requiring the third segment: signature. Without the signature, you cannot verify integrity or authenticity, and the token is effectively untrusted. Some systems may use unsecured JWTs, but that is not the standard or best practice.
Incorrect. Payload and signature alone are insufficient because the header is required to indicate how the signature was produced (algorithm, key ID/kid, type). The signature validation process depends on header parameters. Omitting the header breaks the defined JWT/JWS structure and prevents interoperable verification. This option is a common trap for those focusing only on claims and integrity.
Core Concept: A JSON Web Token (JWT) is a compact, URL-safe token format defined by RFC 7519, commonly used for authentication and authorization in APIs (including network automation controllers and identity integrations). It is a signed (and optionally encrypted) object that carries claims about a subject (user/service) and context.

Why the Answer is Correct: A JWT in its common form (JWS: JSON Web Signature) has three Base64URL-encoded parts separated by dots: header.payload.signature. The header describes the token type and cryptographic algorithm (for example, typ=JWT, alg=RS256). The payload contains the claims (registered claims like iss, sub, aud, exp; and custom claims). The signature is computed over the encoded header and payload using the algorithm in the header, providing integrity and authenticity.

Key Features / Best Practices:
- Structure: header.payload.signature (three dot-separated segments).
- Base64URL encoding (not standard Base64) to be safe in URLs/HTTP headers.
- The signature validates integrity; it does not automatically provide confidentiality. Anyone can decode the header/payload, so do not place secrets in the payload.
- Validate exp/nbf/iat, issuer (iss), audience (aud), and algorithm expectations (avoid accepting "none" or an unexpected alg).
- Use JWE (encrypted JWT) when confidentiality is required; JWE has a different 5-part structure.

Common Misconceptions: Some confuse a JWT with "just JSON" or think it has only two parts (header+payload) because those are the readable sections. Others invent fields like "version" as a segment. The signature is essential for trust; without it, the token is just an unsigned blob of claims.

Exam Tips: For ENCOR security/automation topics, remember: JWT = three segments for signed tokens (JWS). If you see five segments, that is typically JWE (encrypted). Also, decoding a JWT is not the same as validating it; validation requires signature verification and claim checks.
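The header.payload.signature structure can be built with nothing but the Python standard library. This is a teaching sketch of an HS256 JWS (payload values are made up); production code should use a vetted JWT library that also performs validation.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64URL encoding with padding stripped, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt_hs256(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"typ": "JWT", "alg": "HS256"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    # signature covers the encoded header and payload
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

token = make_jwt_hs256({"sub": "user1", "iss": "example"}, b"demo-secret")
```

The resulting string contains exactly two dots separating the three Base64URL segments, and anyone can decode the first two segments, which is why secrets must never go in the payload.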
Why is an AP joining a different WLC than the one specified through option 43?
Correct. A primed AP (or an AP with primary/secondary/tertiary controller configured) will attempt to join its stored/configured WLC before relying on DHCP option 43. This is a common scenario when APs are redeployed or when controller preferences were set for HA. If that primed WLC is reachable, the AP may ignore option 43 and join the primed controller.
Incorrect. If L2 broadcast traffic cannot reach the WLC, that mainly impacts local subnet broadcast-based discovery, not DHCP option 43 (which is delivered via DHCP). Also, a failure of L2 broadcast does not inherently explain why the AP would join a different WLC; it would more likely reduce discovery options and potentially cause join failure unless another method succeeds.
Incorrect. CAPWAP discovery does not require multicast to reach a WLC in typical deployments; multicast is not the primary mechanism for controller discovery across Layer 3. If multicast were blocked, it would not usually cause the AP to join an unexpected controller; it would more likely have no effect or only affect specific discovery modes that are not commonly relied upon.
Incorrect. A software version mismatch can prevent an AP from joining a controller (or trigger an AP image download/upgrade if supported), but it does not explain joining a different WLC than specified by option 43. Version differences influence compatibility and upgrade behavior, not the selection precedence among discovered controllers.
Core Concept: This question tests Cisco AP discovery and join behavior, specifically the precedence order of controller discovery methods. In Cisco enterprise WLANs, an AP can discover WLCs via multiple mechanisms (DHCP option 43, DNS "cisco-capwap-controller", the locally stored controller list/priming, broadcast on the local subnet, and others). The AP does not blindly follow option 43 if a higher-priority or already-known controller is available.

Why the Answer is Correct: If an AP has previously joined a controller, it can store that controller information (often referred to as being "primed" to a WLC, or having a primary/secondary/tertiary controller configured). On reboot, the AP preferentially attempts to join its configured/stored controller(s) before using DHCP option 43. Therefore, even if option 43 points to WLC-X, the AP may join WLC-Y because WLC-Y is the primed/primary controller and is reachable.

Key Features / Behaviors:
- Discovery precedence: the stored/primed controller (and configured primary/secondary/tertiary) typically takes precedence over DHCP option 43.
- AP high availability design: primary/secondary/tertiary controller settings are used to control deterministic failover and avoid random joins.
- Operational reality: APs moved between sites/VLANs often keep their previous controller in NVRAM unless cleared (e.g., "clear capwap private-config" on the AP) or re-primed.

Common Misconceptions: Many engineers assume option 43 is authoritative. It is not; it is one of several discovery inputs. Also, L2 broadcast or L3 multicast reachability issues can affect discovery, but those issues would more commonly prevent joining the intended WLC rather than cause joining a different one, unless another discovery method succeeds.

Exam Tips: For ENCOR, memorize discovery/join precedence and remember that "primed" (stored) controller information can override DHCP option 43. If an AP consistently joins an unexpected WLC, check the AP primary/secondary/tertiary settings, the AP's stored controller list, and whether the AP was previously deployed elsewhere. Clearing the AP's private config or re-priming is a common remediation.
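The "primed controller wins over option 43" behavior can be sketched as a simple precedence walk. The ordering list and method names below are illustrative only, not an exhaustive or authoritative statement of Cisco's discovery algorithm.

```python
# Illustrative precedence: stored/primed controller is consulted before
# DHCP option 43, then other discovery methods.
DISCOVERY_PRECEDENCE = ["primed_controller", "dhcp_option_43", "dns", "broadcast"]

def choose_wlc(candidates):
    """candidates: dict of discovery method -> reachable WLC name (or absent)."""
    for method in DISCOVERY_PRECEDENCE:
        wlc = candidates.get(method)
        if wlc is not None:
            return method, wlc
    return None, None  # no controller discovered; AP keeps retrying

# AP primed to WLC-Y joins it even though option 43 points at WLC-X
method, wlc = choose_wlc({"primed_controller": "WLC-Y",
                          "dhcp_option_43": "WLC-X"})
```

Only when no primed controller is stored (or it is unreachable and absent from the candidate set) does the walk fall through to option 43, which matches the troubleshooting advice above.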
Which algorithms are used to secure a REST API from brute-force attacks and minimize their impact?
SHA-512 and SHA-384 are SHA-2 family hash functions used for integrity (e.g., HMAC, digital signatures) and are designed to be fast. Fast hashes are poor choices for password storage because attackers can brute-force them extremely quickly with GPUs/ASICs. They do not provide adaptive cost factors or memory hardness, so they do not effectively minimize brute-force impact if credentials are leaked.
MD5 is obsolete and broken for collision resistance and is extremely fast, making it unsuitable for password hashing. Pairing it with SHA-384 does not solve the core issue: both are general-purpose hashes, not adaptive password KDFs. This option may look appealing because it lists “algorithms,” but it does not address brute-force resistance for stored credentials and includes an explicitly insecure legacy hash (MD5).
SHA-1 is deprecated for many security uses due to known collision attacks, and SHA-256/SHA-512 are fast general-purpose hashes. Even though SHA-256 and SHA-512 are strong for integrity, they are not designed to slow password guessing. Without an adaptive work factor and proper salting/KDF construction, using SHA directly for passwords leaves REST API credentials vulnerable to rapid offline brute-force if the password database is compromised.
PBKDF2, bcrypt, and scrypt are standard password hashing/key-derivation functions intended to resist brute-force attacks. They use salts and configurable cost parameters to slow each guess; scrypt also adds memory hardness to reduce GPU/ASIC efficiency. These properties directly minimize the impact of credential theft by making offline cracking far more expensive and time-consuming, aligning with best practices for securing API authentication.
Core Concept: This question tests secure credential storage and resistance to brute-force/password-guessing attacks in the context of REST APIs. While REST APIs are protected with controls like rate limiting, lockout, MFA, and WAF rules, the algorithms that specifically "minimize the impact" of brute force (especially if a password database is stolen) are adaptive password hashing/key-derivation functions.

Why the Answer is Correct: PBKDF2, bcrypt, and scrypt are purpose-built password hashing/KDF algorithms designed to slow down guessing attempts by making each password verification computationally expensive (and, for scrypt, also memory expensive). They incorporate a salt to prevent rainbow-table attacks and enable configurable work factors (iterations/cost) so defenders can increase difficulty as hardware improves. This directly reduces the feasibility and speed of offline brute-force attacks and limits damage from credential compromise.

Key Features / Best Practices:
1) Salted hashing: a unique per-password salt prevents precomputed attacks.
2) Work factor: PBKDF2 iterations, bcrypt cost factor, scrypt N/r/p parameters.
3) Memory hardness (scrypt): increases GPU/ASIC resistance.
4) Operational guidance: store the salt and parameters with the hash; periodically raise the cost; prefer bcrypt/scrypt/Argon2 (Argon2 is modern but not listed).
5) REST API context: combine strong password hashing with online protections (rate limiting, exponential backoff, account lockout, IP reputation, and MFA) to address both online and offline brute force.

Common Misconceptions: Many assume "stronger SHA" (SHA-512, SHA-384, etc.) equals better password protection. General-purpose hashes are fast by design, which is the opposite of what you want for password storage; attackers can try billions of SHA hashes per second on GPUs. MD5 and SHA-1 are also cryptographically weak and fast, making them even worse.

Exam Tips: For ENCOR-style security questions, remember: use fast hashes (SHA-2) for integrity (HMAC, signatures) and use slow, salted, adaptive KDFs (PBKDF2/bcrypt/scrypt/Argon2) for passwords. If the question mentions brute force and minimizing the impact of credential theft, look for password hashing/KDF algorithms, not generic hashing algorithms.
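Of the three correct algorithms, PBKDF2 is available directly in Python's standard library, so the salt-plus-work-factor pattern can be shown without third-party packages (bcrypt and scrypt generally need extra dependencies or an OpenSSL build that includes scrypt). The helper names and iteration count below are illustrative.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000):
    """Derive a PBKDF2-HMAC-SHA256 hash with a unique per-password salt."""
    salt = os.urandom(16)  # random salt defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password: str, salt: bytes, iterations: int, digest: bytes):
    """Re-derive with the stored salt/iterations and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, iters, digest = hash_password("S3cret!")
```

The iteration count is the tunable work factor: every guess an attacker makes must pay the same cost, which is exactly what a fast hash like plain SHA-512 fails to provide.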
Which NTP Stratum level is a server that is connected directly to an authoritative time source?
Stratum 0 represents the authoritative reference clock itself (GPS receiver, atomic clock, radio clock). These devices typically do not act as NTP servers on the network; they provide time to a Stratum 1 NTP server via a hardware interface. Therefore, a “server connected directly to an authoritative time source” is not Stratum 0; it is Stratum 1.
Stratum 1 is the correct choice because it describes an NTP server that is directly connected to a Stratum 0 reference clock (authoritative time source). The Stratum 1 server then distributes time to downstream NTP clients and servers. This is the standard NTP hierarchy definition and a common ENCOR exam point.
Stratum 14 is a very high stratum level, meaning the device is many hops away from the reference clock (e.g., Stratum 1 -> 2 -> ...). While it may still be synchronized, it is far from the authoritative source and generally less preferred than lower stratum sources. It is not directly connected to an authoritative time source.
Stratum 15 is the highest stratum value typically considered usable for synchronization in NTP (with Stratum 16 meaning unsynchronized). A Stratum 15 server is very far removed from the reference clock and is not directly connected to an authoritative time source. This option can mislead test-takers who think higher numbers imply higher authority.
Core Concept: This question tests Network Time Protocol (NTP) hierarchy and the meaning of stratum levels. NTP organizes time sources in layers, where lower stratum numbers indicate closer proximity to an authoritative reference clock.

Why the Answer is Correct: A server connected directly to an authoritative time source, such as a GPS receiver or atomic clock, is a Stratum 1 NTP server. The authoritative clock itself is Stratum 0. Stratum 0 devices are reference clocks and are not typically network NTP servers; instead, they feed time to a Stratum 1 server.

Key Features:
- Stratum 0: reference clocks such as GPS, atomic, or radio clocks.
- Stratum 1: NTP servers directly attached to Stratum 0 reference clocks.
- Stratum 2 and below: NTP servers or clients synchronized from upstream NTP servers.
- Stratum 15 is the highest usable stratum, while Stratum 16 indicates an unsynchronized source.
- In Cisco enterprise networks, using multiple reliable NTP sources and authentication improves resiliency and security.

Common Misconceptions: A common trap is choosing Stratum 0 because it is the authoritative source. However, the question asks for the server connected directly to that source, which is Stratum 1. Another misconception is thinking a higher stratum number means a better or more authoritative source, when the opposite is true.

Exam Tips: Memorize the mapping: Stratum 0 is the reference clock, and Stratum 1 is the server directly connected to it. If the wording mentions a server attached to a GPS, atomic, or radio time source, the answer is Stratum 1. Also remember that Stratum 16 means unsynchronized, which is a frequent exam distractor.
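The stratum arithmetic is simple enough to encode directly: each hop below an upstream server adds one to the stratum, and 16 means unsynchronized. A tiny sketch, with illustrative names:

```python
UNSYNCHRONIZED = 16  # stratum 16 = not synchronized

def downstream_stratum(upstream_stratum):
    """A device one hop below an NTP source runs at upstream stratum + 1."""
    s = upstream_stratum + 1
    return s if s < UNSYNCHRONIZED else UNSYNCHRONIZED

reference_clock = 0                                  # GPS/atomic/radio clock
direct_server = downstream_stratum(reference_clock)  # the exam answer: 1
second_tier = downstream_stratum(direct_server)      # clients of that server
```

This also shows why a Stratum 15 server is the end of the usable chain: anything synchronizing from it would land at 16, the unsynchronized value.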
A network administrator applies the following configuration to an IOS device:

aaa new-model
aaa authentication login default local group tacacs+

What is the process of password checks when a login attempt is made to the device?
Incorrect because the configured order is not TACACS+ first. The method list is “local group tacacs+,” so local is attempted before TACACS+. Option A matches what would happen with “aaa authentication login default group tacacs+ local,” not with the given configuration. Also, there is no mention of RADIUS, so only local and TACACS+ are relevant.
Incorrect because RADIUS is not part of the configured method list. The command includes only “local” and “group tacacs+.” Even if RADIUS servers exist on the device, IOS will not use them unless “group radius” (or a named RADIUS server group) is explicitly included in the authentication method list.
Incorrect because it introduces RADIUS and also implies a three-step sequence not present in the configuration. The given method list has only two methods. While it correctly starts with local, it incorrectly adds RADIUS as a third step and misrepresents the actual configured behavior.
Correct because IOS processes AAA authentication methods in the order listed: first “local,” then “group tacacs+.” If local authentication cannot be completed (for example, user not found locally), IOS proceeds to TACACS+. There is no RADIUS step. Note that an explicit local reject typically stops the process rather than falling through.
Core Concept: This question tests AAA method lists and the order IOS uses to authenticate logins. With aaa new-model enabled, authentication is controlled by method lists. The keyword “default” applies to all lines (console/VTY/aux) unless a line-specific method list is configured. Why the Answer is Correct: The command “aaa authentication login default local group tacacs+” defines the authentication methods in the exact order they are tried. IOS processes methods left-to-right. Therefore, the device first checks the local user database (“local”). If local authentication cannot be performed (for example, no matching username exists locally), IOS then tries the next method, which is the TACACS+ server group (“group tacacs+”). There is no RADIUS referenced, so it is never consulted. Key Features / Details: 1. Method list order matters: IOS does not “prefer” TACACS+ just because it is centralized; it follows the configured sequence. 2. “local” uses the local username database (username <name> secret <pw>) and/or local password mechanisms depending on context. 3. “group tacacs+” uses the TACACS+ server group (configured via tacacs server / tacacs-server host and aaa group server tacacs+ ... depending on IOS version). 4. Important nuance: fallback behavior depends on failure type. If the first method returns an explicit reject (bad password for an existing local user), IOS typically stops and does not try the next method. Fallback generally occurs when the method is unavailable or cannot be completed (e.g., local user not found, TACACS+ servers unreachable/timeouts). Common Misconceptions: Many engineers assume AAA always tries TACACS+ first for device administration. That is only true if the method list is “group tacacs+ local” (or similar). Another trap is thinking “if that check fails” means “any failure.” In AAA, an authentication “fail” (reject) is different from an “error” (no response/unavailable), and only certain conditions trigger moving to the next method. 
Exam Tips: Always read the method list left-to-right and map each token to a specific database/service. If you see “group tacacs+” but no “radius,” eliminate any option mentioning RADIUS. Also, remember that “default” applies broadly unless overridden on the line with “login authentication <list-name>.”
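The configuration described above can be sketched as follows. This is a minimal illustration, not a complete hardening template: the username, shared key, and server address (a documentation-range IP) are placeholders, and the built-in “tacacs+” group is assumed to include all configured TACACS+ servers, which can vary by IOS version.

```
! Enable the AAA subsystem (required before method lists take effect)
aaa new-model
!
! Local database entry tried by the "local" method (placeholder credentials)
username admin secret LocalPass123
!
! TACACS+ server definition (modern syntax; address and key are hypothetical)
tacacs server TAC-SRV1
 address ipv4 192.0.2.10
 key SharedSecret123
!
! Methods are tried left to right: local database first, then TACACS+
aaa authentication login default local group tacacs+
```

To override this on specific lines, a named method list could be applied instead with “login authentication <list-name>” under the line configuration, as noted in the exam tip.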
A local router shows an EBGP neighbor in the Active state. Which statement is true about the local router?
Correct. In the Active state, the local router is attempting to establish the BGP peering by opening a TCP session to the neighbor on port 179 (and/or retrying after failures). This state commonly appears when the TCP handshake cannot complete due to reachability problems, ACL/firewall blocks, incorrect neighbor IP, or eBGP multihop/TTL issues.
Incorrect. Receiving prefixes and placing them into the Adj-RIB-In (RIB-IN) requires the BGP session to be in the Established state, where UPDATE messages are exchanged. In Active, the session is not up, so no BGP UPDATEs are being received and no prefixes can be stored in RIB-IN.
Incorrect. Having active prefixes in the forwarding table (FIB) implies routes have been learned, selected as best paths, installed into the routing table (RIB), and then programmed into the FIB. That only happens after a successful BGP adjacency (Established) and after best-path selection, not while the neighbor is stuck in Active.
Incorrect. Cisco IOS BGP does not use a typical “passive mode” neighbor setting in the same way as protocols like EIGRP or OSPF. While BGP always listens for incoming TCP connections and may initiate them, the Active state specifically indicates repeated attempts to establish the TCP/BGP session, not that the router is configured to be passive.
Core Concept: This question tests the BGP neighbor finite state machine (FSM) states and what each state implies operationally. For ENCOR, you must recognize how BGP progresses from Idle to Established and which failures keep a session from forming. For eBGP, the session is built over TCP port 179, and the FSM state indicates where the router is in establishing that TCP/BGP relationship.

Why the Answer is Correct: When an eBGP neighbor is in the Active state, the local router is actively trying to establish the BGP session by initiating a TCP connection to the peer (TCP 179). “Active” does not mean “actively exchanging routes”; it means the router has moved past Idle/Connect and is repeatedly attempting to complete the TCP three-way handshake (or is failing and retrying). In practice, Active often indicates the TCP connection cannot be completed (peer down, ACL/firewall blocking, wrong IP, wrong update-source, TTL/hop issue, etc.).

Key Features / Operational Details:
- BGP FSM basics: Idle (waiting), Connect (attempting TCP), Active (retrying TCP / listening and initiating), OpenSent/OpenConfirm (BGP OPEN negotiation), Established (routes can be exchanged).
- For eBGP, common requirements include reachability to the neighbor IP, a correct neighbor statement, and typically TTL=1 unless eBGP multihop is used.
- Useful troubleshooting commands: show ip bgp summary, show tcp brief | include 179, ping/traceroute to the neighbor, and checking ACLs, VRFs, and interface status.

Common Misconceptions: “Active” is frequently misread as “working” or “actively receiving routes.” Route exchange and Adj-RIB-In population occur only after Established. Also, “passive mode” is not a standard BGP neighbor feature in Cisco IOS the way it is in some other protocols; BGP listens on TCP 179 and may also initiate connections, depending on configuration and FSM behavior.
Exam Tips: Memorize the meaning of Active vs Established: Active = trying (or failing) to bring up TCP/BGP; Established = exchanging keepalives/updates and potentially installing routes. If you see Active, think connectivity/TCP/ACL/TTL/update-source/VRF issues rather than policy issues like route-maps (those usually appear after Established).
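The checks above can be sketched as a baseline eBGP neighbor configuration plus the verification commands mentioned in the explanation. All AS numbers and addresses are hypothetical (documentation ranges); the loopback-peering lines are shown commented out because they apply only when peering between loopbacks rather than directly connected interfaces.

```
! Baseline eBGP peering (hypothetical ASNs and neighbor address)
router bgp 65001
 neighbor 203.0.113.2 remote-as 65002
 ! If peering between loopbacks, both of these are commonly required;
 ! forgetting either can leave the neighbor stuck in Active:
 ! neighbor 203.0.113.2 update-source Loopback0
 ! neighbor 203.0.113.2 ebgp-multihop 2
!
! Verification steps when the neighbor sits in Active:
! show ip bgp summary            - check the neighbor State/PfxRcd column
! show tcp brief | include 179   - is the TCP session to port 179 forming?
! ping 203.0.113.2               - basic reachability to the neighbor IP
```

If the ping succeeds but the TCP session never forms, suspect an ACL or firewall blocking port 179 rather than a routing problem.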