
Simulate the real exam experience with 100 questions and a 120-minute time limit. Practice with AI-verified answers and detailed explanations.
Powered by AI
Every answer is verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.
A network administrator is implementing a routing configuration change and enables routing debugs to track routing behavior during the change. The logging output on the terminal is interrupting the command typing process. Which two actions can the network administrator take to minimize the possibility of typing commands incorrectly? (Choose two.)
Incorrect. "logging synchronous" is not a global configuration command in Cisco IOS; it is entered under line configuration mode such as "line console 0" or "line vty 0 4". Because the option explicitly says global configuration command, it is technically invalid. Even though the feature itself is relevant, the command context given in the option makes this answer wrong. Cisco certification questions often test exact command mode syntax, so this distinction matters.
Correct. The command "logging synchronous" is configured under line configuration mode, including VTY lines used for SSH or Telnet access. It causes unsolicited log and debug messages to be displayed in a way that preserves usability by restoring the prompt and partially entered command after the message appears. This directly reduces the chance of mistyping commands during a routing change when debug output is active. Because administrators commonly work over remote sessions, applying it under the VTY is a standard and effective mitigation.
Incorrect. The command "terminal length" changes the number of lines displayed before paging occurs with a --More-- prompt. It affects the presentation of command output such as show commands, but it does not control asynchronous debug or syslog messages arriving while the user is typing. Therefore it does not reduce command-line interruption in the way the question asks. It is useful for output navigation, not for protecting interactive input.
Incorrect. The logging delimiter feature may improve readability by visually separating messages, but it does not stop those messages from interrupting the command line. The administrator can still have keystrokes mixed with unsolicited output, which is the real operational problem here. It is a formatting aid rather than a mechanism for preserving typed commands. As a result, it does not directly minimize the possibility of typing commands incorrectly.
Correct. Pressing the TAB key can help reprint or complete the partially typed command after the line has been visually disrupted by debug output. While it is primarily known for command completion, in practice it also helps the administrator recover the current input line and continue typing more accurately. This makes it a useful immediate action during an active troubleshooting session. On Cisco exams, TAB is often recognized as a practical CLI aid when terminal output interrupts command entry.
Core concept: This question is about minimizing CLI input disruption caused by asynchronous debug and syslog messages during an IOS session. When unsolicited log messages appear while an administrator is typing, they can break up the command line and increase the chance of entering an incorrect command.

Why correct: The best remedies are to enable synchronous logging on the active terminal line and to use the TAB key to help redraw or complete the interrupted command.

Key features: The command "logging synchronous" is configured under line configuration mode, such as console or VTY lines, and causes IOS to display log messages on a new line before restoring the prompt and partially typed input.

Common misconceptions: It is not a global configuration command, and terminal length does not affect asynchronous logging behavior.

Exam tips: If a Cisco exam question asks how to prevent debug output from interrupting typing, think of line-level "logging synchronous" first, and remember that TAB can help recover the command line during interactive entry.
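As a minimal sketch (the line ranges are illustrative), synchronous logging is applied per terminal line, not in global configuration:

```
! Enter line configuration mode for the console and the first five VTY lines,
! then enable synchronous logging so the prompt and partial input are redrawn
! after each unsolicited log or debug message.
line console 0
 logging synchronous
line vty 0 4
 logging synchronous
```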
Want to practice every question anywhere?
Download Cloud Pass for free; it includes practice exams, progress tracking, and more.


Download Cloud Pass and access all Cisco 350-401: Implementing and Operating Cisco Enterprise Network Core Technologies (ENCOR) practice questions for free.
Which function does a fabric edge node perform in an SD-Access deployment?
Correct. A fabric edge node is the node that directly connects endpoints to the SD-Access fabric and forwards their traffic. It acts as the access-layer entry point for wired and integrated wireless clients, allowing endpoint traffic to enter the fabric overlay. The edge node commonly provides first-hop services for attached hosts and applies segmentation and policy based on the assigned virtual network and security context. In short, its defining responsibility is endpoint attachment and forwarding traffic into the SD-Access fabric.
Incorrect. In SD-Access, LISP is used primarily for the control plane (endpoint registration and mapping of EIDs to RLOCs via the control-plane node). End-user data traffic is encapsulated using VXLAN in the data plane. Edge nodes may participate in LISP signaling, but they do not encapsulate user data “into LISP.”
Incorrect. This describes the fabric border node role. Border nodes provide connectivity between the SD-Access fabric and external Layer 3 networks (campus core, WAN, internet, data center) and often integrate with the fusion router concept for policy and route exchange between fabric VNs and outside networks.
Incorrect. Reachability between nodes in the fabric underlay is provided by the underlay routing design (typically IS-IS, OSPF, or BGP) across all fabric nodes. This is not a specific function of a fabric edge node, and “between border nodes” is not a defined SD-Access node role responsibility.
Core Concept: This question tests SD-Access fabric node roles. In Cisco SD-Access, the fabric is built on an IP underlay and an overlay that uses VXLAN for data-plane encapsulation and LISP for control-plane endpoint-to-location mapping. Different nodes have distinct responsibilities: fabric edge (access), fabric border (external connectivity), and fabric control-plane (LISP map-server/resolver).

Why the Answer is Correct: A fabric edge node is the access-layer device that directly connects endpoints (wired, wireless via fabric WLC, or IoT) to the SD-Access fabric. It performs endpoint onboarding functions (such as 802.1X/MAB with ISE integration), assigns endpoints to virtual networks (VNs) and scalable groups (SGTs), and forwards their traffic into the fabric. When an endpoint sends traffic, the edge node classifies it into the correct VN, applies policy (often via SGT), and encapsulates/forwards it across the fabric to the destination edge or to a border node for external destinations.

Key Features / Best Practices: Fabric edge nodes provide Anycast Gateway for host default gateway within a VN, enabling mobility and consistent gateway behavior across the fabric. They participate in the VXLAN data plane (VTEP function) to encapsulate traffic, but their defining role is endpoint attachment and first-hop services. They register endpoint information with the control-plane node using LISP (EID-to-RLOC mapping). Best practice is to remember: “Edge = endpoints,” “Border = outside,” “Control-plane = mapping.”

Common Misconceptions: Option B is tempting because edge nodes do interact with LISP, but LISP is the control-plane protocol; the data plane is VXLAN. Also, “encapsulates end-user data traffic into LISP” is incorrect because LISP is not the data-plane encapsulation in SD-Access. Option C describes a border node. Option D describes underlay routing responsibilities (e.g., IS-IS/OSPF/BGP) and is not a specific fabric node role.
Exam Tips: For ENCOR, memorize SD-Access roles and their one-line purpose:
- Fabric Edge: connects endpoints and forwards traffic into the fabric.
- Fabric Border: connects fabric to external networks (fusion, WAN, DC).
- Control-plane: LISP mapping services.
Also remember: SD-Access overlay uses VXLAN (data plane) + LISP (control plane).
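On an IOS XE fabric edge, the LISP registration behavior described above can be checked with a few show commands. This is a hypothetical verification sketch: the device name is invented, and exact command forms (for example, instance-id qualifiers in SD-Access) vary by platform and release:

```
! Hypothetical verification session on a fabric edge node
Edge# show lisp session         ! sessions to the control-plane (map-server) nodes
Edge# show ip lisp map-cache    ! learned EID-to-RLOC mappings for remote destinations
Edge# show ip lisp database     ! locally attached EIDs this edge has registered
```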
Which technology provides a secure communication channel for all traffic at Layer 2 of the OSI model?
SSL (more accurately TLS today) provides encryption for application sessions, typically operating above the transport layer (often described as Layer 4-7). It secures specific application flows such as HTTPS, not all Ethernet frames on a link. It does not provide native Layer 2 frame-by-frame protection across a switched Ethernet segment.
Cisco TrustSec is an identity-based security architecture that includes components like Security Group Tags (SGT), Security Group ACLs (SGACL), and policy enforcement. While TrustSec deployments may use MACsec for link encryption in certain scenarios, TrustSec itself is not the technology that provides a Layer 2 secure channel for all traffic.
MACsec (IEEE 802.1AE) encrypts and authenticates Ethernet frames at Layer 2, providing confidentiality, integrity, and replay protection on a point-to-point link. Because it operates at the data link layer, it protects all upper-layer protocols carried in Ethernet frames (IPv4/IPv6/ARP and more), matching the requirement for securing all traffic at Layer 2.
IPsec secures traffic at Layer 3 by encrypting and authenticating IP packets using protocols like ESP/AH, commonly in VPNs (site-to-site or remote access). It does not protect non-IP Layer 2 control frames and is not a Layer 2 secure channel. IPsec is best described as network-layer security, not data-link-layer security.
Core Concept: The question tests which technology creates a secure communication channel for all traffic at Layer 2 (Data Link) of the OSI model. Layer 2 security typically means protecting Ethernet frames on a hop-by-hop basis (between directly connected devices) rather than protecting IP packets end-to-end.

Why the Answer is Correct: MACsec (IEEE 802.1AE) is specifically designed to provide confidentiality, integrity, and replay protection for Ethernet frames at Layer 2. It encrypts and authenticates traffic on a per-link basis, meaning all Ethernet payloads traversing a MACsec-enabled link are protected regardless of the upper-layer protocol (IPv4, IPv6, ARP, LLDP, etc.). This matches the requirement: “secure communication channel for all traffic at Layer 2.”

Key Features / How It Works: MACsec uses the MACsec Security Association (SA) and Security Tag (SecTAG) inserted into frames, along with an Integrity Check Value (ICV). Keying is commonly handled by MKA (MACsec Key Agreement, IEEE 802.1X-2010) using EAP authentication (often integrated with RADIUS/ISE). On Cisco switches/routers, MACsec is typically deployed on uplinks, switch-to-switch links, or switch-to-host links where supported, providing line-rate encryption with minimal operational complexity once 802.1X/MKA is established.

Common Misconceptions: Many candidates confuse “secure channel” with IPsec or SSL because those are well-known encryption technologies. However, IPsec operates at Layer 3 (protecting IP packets) and SSL/TLS operates above Layer 4 (protecting application sessions). Cisco TrustSec is a broader architecture for identity-based access control and segmentation (SGT/SGACL), not a universal Layer 2 encryption mechanism.
Exam Tips: Map technologies to OSI layers:
- MACsec = Layer 2 frame encryption (hop-by-hop)
- IPsec = Layer 3 packet encryption (end-to-end or tunnel)
- TLS/SSL = session/application protection
Also remember: TrustSec can leverage MACsec for link encryption in some designs, but TrustSec itself is not the Layer 2 encryption protocol. When the question explicitly says “all traffic at Layer 2,” MACsec is the direct match.
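A minimal switch-to-switch sketch using MKA with a pre-shared key, following the IOS XE configuration pattern. The key-chain name, policy name, key-string, and interface are placeholders, and supported cipher suites and syntax vary by platform:

```
! Hypothetical MACsec link using a pre-shared CAK distributed via MKA
key chain MKA-KEYS macsec
 key 01
  cryptographic-algorithm aes-128-cmac
  key-string 12345678901234567890123456789012
!
mka policy MKA-POLICY
 macsec-cipher-suite gcm-aes-128
!
interface GigabitEthernet1/0/1
 macsec network-link              ! enable 802.1AE on this switch-to-switch link
 mka policy MKA-POLICY
 mka pre-shared-key key-chain MKA-KEYS
```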
Which statement about Cisco Express Forwarding is true?
Incorrect. Direct CPU involvement in packet-switching decisions is characteristic of process switching (slow path), where each packet is punted to the CPU and handled by IOS processes. CEF is designed specifically to avoid per-packet CPU involvement by using precomputed forwarding entries in the data plane (often hardware). The CPU may be involved only for exceptions (glean, ARP/ND resolution, or punted packets).
Incorrect. A “fast cache” used for forwarding is associated with fast switching (route cache), where the first packet is process-switched and subsequent packets use a cached entry. CEF does not rely on a per-flow/per-destination cache; instead it uses a prebuilt FIB and adjacency table derived from the control plane. This distinction is a frequent exam trap.
Correct. CEF maintains two primary data-plane tables: the FIB (Forwarding Information Base) and the adjacency table. The FIB provides the optimized prefix-to-next-hop mapping (longest-prefix match), and the adjacency table provides the Layer 2 rewrite/encapsulation information needed to forward frames. Together they enable deterministic, high-speed forwarding without cache churn.
Incorrect. Forwarding decisions “scheduled through the IOS scheduler” describes process switching, where packet handling is performed by IOS processes under CPU scheduling. CEF forwarding occurs in the data plane (often ASIC-based) and is not dependent on IOS process scheduling for each packet. Only exception traffic is punted to the CPU and handled by scheduled processes.
Core concept: Cisco Express Forwarding (CEF) is Cisco’s primary high-performance forwarding mechanism. It is a topology-based, precomputed forwarding model designed to forward packets in the data plane without per-flow caching and without involving the CPU for each packet.

Why the answer is correct: CEF maintains two key data-plane structures: the Forwarding Information Base (FIB) and the adjacency table. The FIB is derived from the routing table (RIB) and contains the best-match prefixes plus next-hop information optimized for fast lookups. The adjacency table is derived from Layer 2/ARP/ND information and contains the rewrite information (for example, destination MAC, encapsulation details) needed to build the outbound frame. When a packet arrives, CEF performs a longest-prefix match in the FIB, then uses the corresponding adjacency entry to rewrite the Layer 2 header and forward the packet—typically in hardware on modern platforms.

Key features / best practices:
- CEF is “prepopulated” and “topology-based”: it builds tables ahead of time from control-plane information.
- It scales better than fast switching because it does not rely on per-destination or per-flow caches that can churn.
- CEF supports features like load balancing (per-destination and per-packet, depending on platform), and is foundational for many services (QoS, NetFlow/Flexible NetFlow interactions, uRPF, etc.).
- Troubleshooting commonly uses: show ip cef, show ip cef exact-route, show adjacency, show ip arp, and platform-specific hardware forwarding verification.

Common misconceptions: Options describing CPU involvement or IOS-scheduler-driven forwarding describe process switching, not CEF. Another common trap is confusing CEF with fast switching: fast switching uses a cache, whereas CEF uses the FIB/adjacency tables.

Exam tips: Remember the switching path hierarchy: process switching (slow, CPU), fast switching (cache-based), and CEF (FIB + adjacency, precomputed).
If you see “FIB and adjacency,” that is the hallmark of CEF. If you see “cache maintained in the data plane,” that points to fast switching, not CEF.
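The troubleshooting commands listed above can be combined into a quick verification session. The prefix and addresses here are illustrative:

```
! Hypothetical CEF verification session
Router# show ip cef 10.10.10.0 255.255.255.0          ! FIB entry for the prefix
Router# show ip cef exact-route 10.1.1.1 10.10.10.1   ! path chosen for this src/dst pair
Router# show adjacency detail                         ! Layer 2 rewrite information
```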
Which TCP setting is tuned to minimize the risk of fragmentation on a GRE/IP tunnel?
MSS (Maximum Segment Size) is the TCP parameter that controls the largest TCP payload per segment. On Cisco devices, “ip tcp adjust-mss” is commonly applied on GRE/tunnel interfaces to clamp MSS in TCP SYN packets. This ensures TCP endpoints send smaller segments that still fit after tunnel encapsulation, reducing or eliminating fragmentation and avoiding PMTUD/ICMP-related black holes.
MTU is the maximum IP packet size an interface can transmit without fragmentation. Lowering tunnel MTU can prevent fragmentation for all IP traffic, but it is not a TCP setting and can have broader side effects (e.g., impacts UDP, routing protocols, and may require consistent end-to-end design). The question specifically asks for a TCP setting, making MTU the less correct choice.
MRU (Maximum Receive Unit) is typically associated with PPP and some access technologies, defining the largest frame a device is willing to receive. It is not the standard mechanism used to address GRE/IP tunnel fragmentation in enterprise routing scenarios, and it is not a TCP parameter. Therefore it does not match the question’s focus on a TCP setting.
TCP window size controls how much unacknowledged data can be in flight and primarily affects throughput and performance over high-latency links. It does not change the size of individual TCP segments, so it does not directly reduce the risk of fragmentation on a GRE/IP tunnel. Fragmentation is driven by packet size versus path MTU, not by windowing.
Core concept: This question tests how to prevent IP fragmentation when TCP traffic is carried through an encapsulating tunnel (GRE over IP). GRE adds extra headers, reducing the effective payload size that can traverse the path without fragmentation. Because most user applications ride over TCP, the most common best practice is to tune TCP’s Maximum Segment Size (MSS) so endpoints never send TCP segments that would exceed the tunnel’s effective path MTU.

Why the answer is correct: TCP MSS is the maximum TCP payload (not including IP/TCP headers) a host will place in a single TCP segment. On Cisco routers, using “ip tcp adjust-mss <value>” on the tunnel interface (or ingress interface) rewrites the MSS in TCP SYN packets so both endpoints agree to a smaller segment size. This proactively keeps TCP segments small enough that, after GRE + outer IP encapsulation, the resulting packet fits within the physical interface MTU (commonly 1500) and avoids fragmentation. This is especially important when DF (Don’t Fragment) is set or when PMTUD is blocked by filtering ICMP “fragmentation needed” messages.

Key features / best practices:
- GRE over IPv4 adds overhead (GRE header plus new outer IP header), reducing effective MTU.
- “ip tcp adjust-mss” is a targeted fix for TCP only, minimizing fragmentation without changing L2/L3 MTU everywhere.
- Typical approach: calculate tunnel overhead and set MSS accordingly (often 1360–1460 depending on encapsulation like GRE, IPsec, additional options).
- Works well in enterprise WAN/overlay designs where you can’t guarantee PMTUD success end-to-end.

Common misconceptions:
- MTU tuning (option B) can help, but it affects all traffic (including non-TCP) and may require consistent changes across the path; it’s not specifically a “TCP setting.”
- MRU is a PPP concept and not the typical control for GRE/IP fragmentation.
- TCP window size affects throughput/latency behavior, not packet size, so it doesn’t directly prevent fragmentation.
Exam tips: If the question says “TCP setting” and mentions tunnels/fragmentation, think MSS clamping/adjustment. MTU is a Layer 3 interface parameter; MSS is the TCP-friendly method used on tunnel edges to avoid fragmentation issues for TCP flows.
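A minimal tunnel-interface sketch. The addresses are placeholders, and 1400/1360 are common conservative values for plain GRE over a 1500-byte path (lower values are needed when IPsec or other encapsulation is added):

```
interface Tunnel0
 ip address 172.16.0.1 255.255.255.252
 ip mtu 1400                 ! leave room for GRE (4B) + outer IP (20B) overhead
 ip tcp adjust-mss 1360      ! 1400 minus 20 (inner IP) minus 20 (TCP) headers
 tunnel source GigabitEthernet0/0
 tunnel destination 203.0.113.2
```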
What is the correct EBGP path attribute list, ordered from most preferred to least preferred, that the BGP best-path algorithm uses?
Incorrect because it places Local Preference ahead of Weight. On Cisco IOS/IOS XE, Weight is evaluated before Local Preference and is the most preferred attribute among those listed. The rest of the sequence (AS_PATH before MED) is correct, but the first two are reversed, making the overall ordered list wrong for Cisco’s best-path algorithm.
Correct. On Cisco routers, the BGP best-path algorithm evaluates Weight before any of the other listed attributes, and the path with the highest Weight is preferred. If Weight is equal, Local Preference is compared next, with the highest Local Preference winning. After that, AS_PATH length is considered, where the shorter path is preferred, and only later is MED evaluated, with the lower MED preferred under normal comparison rules. This makes the correct relative order among the listed choices Weight, Local Preference, AS_PATH, then MED.
Incorrect because it places AS_PATH ahead of Local Preference. Local Preference is evaluated earlier than AS_PATH in the best-path algorithm and is a primary tool for outbound path selection within an AS. While Weight first is correct, swapping Local Preference and AS_PATH changes the decision outcome and does not match Cisco’s best-path order.
Incorrect because it places Local Preference ahead of Weight and also places MED ahead of AS_PATH. Both of these are out of order for Cisco’s best-path algorithm. AS_PATH length is evaluated before MED, and Weight is evaluated before Local Preference. This option reflects a misunderstanding of attribute precedence.
Core concept: This question tests knowledge of the Cisco BGP best-path selection order and which attributes are evaluated first when multiple BGP paths exist for the same prefix. For ENCOR, you must know the early, high-impact attributes (Weight and Local Preference) and how they compare to AS_PATH length and MED, especially in eBGP scenarios.

Why the answer is correct: On Cisco IOS/IOS XE, the BGP best-path algorithm evaluates attributes in a defined sequence. Among the attributes listed, the preference order from most preferred to least preferred is:
1) Weight (Cisco-specific, locally significant)
2) Local Preference (well-known discretionary, propagated within an AS)
3) AS_PATH length (shorter is preferred)
4) MED (lower is preferred; compared only under certain conditions)
Therefore, the correct ordered list is “weight, local preference, AS path, MED,” which matches option B.

Key features / details to know:
- Weight is not a standard BGP attribute; it is Cisco-specific and only influences the local router. Highest weight wins.
- Local Preference is used to choose the outbound path from an AS. Highest local-pref wins; it is typically set on eBGP-learned routes at the AS edge and propagated to iBGP peers.
- AS_PATH is a loop-prevention mechanism and a policy tool; shorter AS_PATH is preferred.
- MED is intended to influence how a neighboring AS enters your AS. Lower MED is preferred, but by default it is only compared among routes received from the same neighboring AS (unless “bgp always-compare-med” is configured). Also, MED is evaluated after AS_PATH in Cisco’s decision process.

Common misconceptions: Many learners incorrectly place Local Preference ahead of Weight because Local Preference is a standard attribute and widely discussed for policy. However, on Cisco devices, Weight is evaluated first. Another common mistake is thinking MED is a stronger preference than AS_PATH; in reality, AS_PATH is considered earlier.
Exam tips: Memorize the early part of the Cisco best-path order: Weight > Local Preference > locally originated > AS_PATH > Origin > MED. If a question only lists a subset, keep their relative order consistent with the full algorithm. Also remember MED comparison rules (same neighboring AS by default) because it often appears in follow-up questions.
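A minimal sketch of influencing the two earliest attributes with an inbound route-map. The AS numbers, neighbor address, route-map name, and values are invented for illustration:

```
! Prefer paths learned from this neighbor by raising Weight (local to this
! router) and Local Preference (shared with iBGP peers in the AS).
route-map PREFER-PRIMARY permit 10
 set weight 200
 set local-preference 150
!
router bgp 65001
 neighbor 203.0.113.1 remote-as 65002
 neighbor 203.0.113.1 route-map PREFER-PRIMARY in
```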
Which feature of EIGRP is not supported in OSPF?
Correct. EIGRP supports unequal-cost load balancing via the variance command, allowing multiple loop-free paths with different metrics to be installed and used. OSPF does not support unequal-cost multipath; it only installs equal-cost paths (ECMP). You can tune OSPF costs to make paths equal, but you cannot load-balance across different-cost routes at the same time.
Incorrect. OSPF supports equal-cost multipath (ECMP) and can load-balance across multiple equal-cost routes. The exact maximum number of paths depends on platform and configuration (often adjustable with maximum-paths). “Four equal-cost paths” is not unique to EIGRP and is generally supported by OSPF implementations as ECMP.
Incorrect. OSPF uses interface cost, which by default is derived from interface bandwidth (cost = reference-bandwidth / bandwidth). While the metric is not identical to EIGRP’s composite metric, bandwidth still influences OSPF’s best path selection unless costs are manually set. Therefore, this behavior is supported in OSPF.
Incorrect. Per-packet load balancing is primarily a data-plane/forwarding behavior (e.g., CEF load-sharing) and can be used with equal-cost routes regardless of whether the routes came from EIGRP or OSPF. OSPF can participate in per-packet load balancing when multiple equal-cost next hops exist and the forwarding configuration enables it.
Core Concept: This question tests knowledge of EIGRP vs OSPF path selection and load-balancing capabilities. Both protocols can install multiple routes to the same destination and load-balance traffic, but they differ in whether they can use unequal-cost paths.

Why the Answer is Correct: EIGRP supports unequal-cost load balancing using the variance feature. EIGRP’s metric is based on bandwidth and delay (by default), and with variance it can install feasible successor routes whose metrics are within a multiple of the best (successor) metric. This allows EIGRP to forward traffic across multiple paths even when their metrics are not equal. OSPF, by contrast, is a link-state protocol that uses SPF and installs only equal-cost paths for a given destination (ECMP). Standard OSPF does not support unequal-cost multipath (UCMP). Therefore, “load balancing of unequal-cost paths” is an EIGRP feature not supported in OSPF.

Key Features / Configurations / Best Practices:
- EIGRP: unequal-cost load balancing is enabled with “variance <n>” and controlled by “maximum-paths <n>”. Feasibility is enforced via the Feasible Distance (FD) and Reported Distance (RD) feasibility condition, which helps maintain loop-free alternate paths.
- OSPF: supports ECMP up to a platform-dependent maximum (commonly configured with “maximum-paths”), and can influence path choice via cost (often derived from reference bandwidth and interface bandwidth). OSPF can load-share across equal-cost next hops.

Common Misconceptions: Many assume OSPF can do unequal-cost balancing because you can manipulate interface costs. However, cost manipulation only changes which path becomes the single best path or which paths become equal; it does not allow installing multiple different-cost paths simultaneously. Another trap is per-packet load balancing: it’s a forwarding behavior (CEF) and not unique to EIGRP.

Exam Tips: Remember: EIGRP = ECMP + UCMP (variance). OSPF = ECMP only.
If you see “unequal-cost load balancing,” think EIGRP (and also EIGRP named mode/classic), not OSPF. Also separate routing-protocol capabilities (ECMP/UCMP) from forwarding-plane behaviors (per-packet vs per-destination).
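A minimal classic-mode sketch of the variance feature (the AS number and network statement are placeholders):

```
router eigrp 100
 network 10.0.0.0
 variance 2        ! also install loop-free paths with metric up to 2x the best
 maximum-paths 4   ! cap on how many parallel routes go into the routing table
```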
At which layer does Cisco DNA Center support REST controls?
Session layer is an OSI concept and is not the way Cisco DNA Center describes where REST is supported. REST uses HTTP/HTTPS, which is considered an application-layer protocol suite rather than a distinct “session layer” feature in this context. On ENCOR, expect architectural terms (northbound/southbound) rather than OSI layers for controller API questions.
Northbound APIs are the correct layer/interface for Cisco DNA Center REST controls. DNA Center exposes RESTful endpoints for intent-based networking, provisioning, assurance, topology, and inventory so that external applications and automation tools can programmatically interact with the controller. This is the standard controller architecture: northbound = consumed by apps/orchestrators via REST.
EEM (Embedded Event Manager) applets/scripts run on Cisco network devices (IOS/IOS XE) to automate local actions based on events. They are not the mechanism by which Cisco DNA Center provides REST control. While DNA Center can push configurations to devices, REST control is provided through DNA Center’s own API interfaces, not device-resident EEM.
YAML output is not an architectural layer and is not the defining characteristic of DNA Center REST controls. DNA Center REST APIs typically use JSON payloads for requests and responses. Even if a tool converts JSON to YAML for readability, that does not represent where REST is supported; the correct concept is the northbound API interface.
Core concept: Cisco DNA Center (now commonly referred to as Cisco Catalyst Center) exposes programmatic control and data access through RESTful APIs. In enterprise controller architectures, these APIs are typically categorized as “northbound” interfaces—used by external applications, ITSM tools, scripts, and orchestration platforms to interact with the controller.

Why the answer is correct: REST controls in Cisco DNA Center are supported via its northbound APIs. “Northbound” means the interface faces upward toward business/IT applications (ServiceNow, custom portals, Python automation, CI/CD pipelines) rather than downward toward network devices. DNA Center provides a REST API catalog (intent, assurance, topology, device inventory, configuration templates, SDA fabric operations, etc.) that clients consume over HTTPS using standard REST methods (GET/POST/PUT/DELETE) and typically JSON payloads. This is the primary and expected layer where REST is used in the DNA Center architecture.

Key features and best practices:
- API access is secured (HTTPS/TLS) and authenticated (token-based authentication). Use least privilege via role-based access control.
- APIs are documented in the built-in API documentation portal and are designed for automation workflows (inventory, provisioning, assurance queries, event subscriptions).
- Typical automation pattern: obtain auth token, call intent/assurance endpoints, parse JSON, and integrate with tools like Ansible, Terraform (via custom providers), or Python scripts.

Common misconceptions:
- OSI “session layer” is not how Cisco describes DNA Center REST support; REST is an application-layer paradigm over HTTP(S), and the exam expects the architectural term “northbound APIs.”
- EEM applets/scripts are device-local automation on IOS/IOS XE, not DNA Center controller APIs.
- YAML is a data serialization format; DNA Center REST responses are generally JSON, and the format does not define the architectural layer.
Exam tips: When you see “controller + REST,” think “northbound APIs” for external consumption. “Southbound” would be protocols/controllers use to talk to devices (NETCONF/RESTCONF/CLI/SNMP, etc.). For ENCOR, map terms to architecture: northbound = apps/automation; southbound = device control/telemetry.
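The "obtain auth token, then call intent endpoints" pattern above can be sketched without a live controller by building the request URL and headers. The host and credentials are placeholders, and the endpoint paths follow the published DNA Center northbound API convention (token at /dna/system/api/v1/auth/token, device inventory at /dna/intent/api/v1/network-device):

```python
# Hypothetical sketch of DNA Center (Catalyst Center) northbound REST calls.
# Only request construction is shown; sending requires a reachable controller.
import base64

DNAC_HOST = "dnac.example.com"  # placeholder controller address

def build_token_request(host, username, password):
    """Build the URL and headers for the token-based auth POST (HTTP Basic)."""
    url = f"https://{host}/dna/system/api/v1/auth/token"
    creds = base64.b64encode(f"{username}:{password}".encode()).decode()
    headers = {"Authorization": f"Basic {creds}",
               "Content-Type": "application/json"}
    return url, headers

def build_inventory_request(host, token):
    """Build the URL and headers for a device-inventory GET using the token."""
    url = f"https://{host}/dna/intent/api/v1/network-device"
    headers = {"X-Auth-Token": token,
               "Content-Type": "application/json"}
    return url, headers

url, headers = build_token_request(DNAC_HOST, "admin", "example-password")
print(url)  # the northbound auth endpoint consumed by external automation
```

In a real workflow, the first request's JSON response carries the token, which is then placed in the X-Auth-Token header of every subsequent intent or assurance call.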
Which two namespaces does the LISP network architecture and protocol use? (Choose two.)
TLOC (Transport Locator) is not a LISP namespace. It is primarily associated with Cisco SD-WAN, where a TLOC identifies a WAN edge’s transport address, color, and encapsulation. While conceptually similar to a locator, it is not part of LISP terminology or the LISP EID-to-RLOC mapping system tested in ENCOR.
RLOC (Routing Locator) is one of the two LISP namespaces. RLOCs are routable addresses in the underlay that identify where a LISP site (xTR) is located. After an EID-to-RLOC mapping lookup, traffic is encapsulated to the destination RLOC, allowing the core to route based on RLOCs rather than endpoint identifiers.
DNS is a distributed naming system that maps hostnames to IP addresses (and other records). It is not a LISP namespace. Although LISP uses a mapping system, it is specifically for EID-to-RLOC resolution, not name resolution. DNS may coexist with LISP in networks but does not define LISP’s architectural namespaces.
VTEP (VXLAN Tunnel Endpoint) is a VXLAN concept used to encapsulate/decapsulate VXLAN traffic in data center and campus overlays. LISP uses xTRs (ITR/ETR) and the EID/RLOC model, not VTEPs. The presence of “tunnel endpoint” can mislead, but VTEP is not a LISP namespace.
EID (Endpoint Identifier) is one of the two LISP namespaces. EIDs identify endpoints or EID-prefixes in the overlay and are intended to be stable identifiers, independent of the underlay topology. LISP maps destination EIDs to one or more RLOCs, enabling mobility, multihoming, and reduced routing state in the core.
Core concept: LISP (Locator/ID Separation Protocol) is built around separating “who” an endpoint is from “where” it is in the network. To do that, LISP defines two distinct namespaces and uses a mapping system to translate between them.

Why the answer is correct: The two namespaces are EID (Endpoint Identifier) and RLOC (Routing Locator). The EID namespace identifies endpoints (hosts, subnets, or virtual networks) and is typically stable, independent of the underlying topology. The RLOC namespace represents the topological location of the LISP site in the underlay network—these are routable addresses used to reach the LISP tunnel routers (xTRs). LISP encapsulates traffic from an EID-based “overlay” into an RLOC-based “underlay” header after performing an EID-to-RLOC mapping lookup.

Key features and best practices: LISP uses a mapping system (often a Map-Server/Map-Resolver) to store and retrieve EID-to-RLOC mappings. Ingress Tunnel Routers (ITRs) query mappings for destination EIDs; Egress Tunnel Routers (ETRs) decapsulate and deliver to the destination EID. This separation improves mobility (EIDs don’t change when a site moves), multihoming (multiple RLOCs per EID prefix), and scalable routing (the core routes RLOCs, not the potentially massive EID space). In enterprise designs, LISP is commonly seen in Cisco SD-Access as part of the control-plane/overlay mechanisms.

Common misconceptions: TLOC and VTEP are often confused with LISP terms because they also relate to overlays. TLOC is an SD-WAN concept (transport locator) and VTEP is a VXLAN concept (tunnel endpoint). DNS is a name-to-IP resolution system, not a LISP namespace. These may appear “mapping-like,” but LISP’s specific namespaces are EID and RLOC.

Exam tips: If a question says “LISP uses two namespaces,” immediately think “ID vs Locator” → EID and RLOC. Also remember the functional roles: EID is the overlay identity; RLOC is the underlay routable locator used for encapsulation and transport.
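To illustrate the EID/RLOC split described above, here is a toy Python sketch of an ITR-style map-cache lookup. The prefixes and locator addresses are made up, and this models only the mapping idea (overlay identity resolved to underlay locators), not the LISP protocol itself.

```python
# Toy EID-to-RLOC lookup: the overlay identity (EID) is resolved to one or
# more underlay locators (RLOCs) before encapsulation. Addresses are made up.
import ipaddress

# Map-cache: EID-prefix -> list of RLOCs (multihoming = multiple RLOCs)
MAP_CACHE = {
    ipaddress.ip_network("10.1.0.0/16"): ["192.0.2.1", "192.0.2.2"],
    ipaddress.ip_network("10.2.0.0/16"): ["198.51.100.7"],
}

def lookup_rlocs(dst_eid: str):
    """Return the RLOC set for a destination EID, or None on a cache miss
    (a real ITR would then send a Map-Request to the Map-Resolver)."""
    ip = ipaddress.ip_address(dst_eid)
    for prefix, rlocs in MAP_CACHE.items():
        if ip in prefix:
            return rlocs
    return None
```

Note how the core never needs to know `10.1.0.0/16`: traffic is encapsulated toward `192.0.2.1` or `192.0.2.2`, so only RLOCs appear in underlay routing tables.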
Which outbound access list, applied to the WAN interface of a router, permits all traffic except for HTTP traffic sourced from the workstation with IP address 10.10.10.1?
This option places `eq 80` immediately after the source host, so it matches the source TCP port rather than the destination port. Normal HTTP clients do not use source port 80; they use ephemeral high-numbered ports, so this would not block typical web browsing traffic from 10.10.10.1. Although it includes a final permit statement, the port matching logic is wrong for the requirement. Therefore it does not correctly deny HTTP traffic sourced from that workstation.
The traffic-matching logic is correct because it denies TCP from source 10.10.10.1 to any destination on destination port 80, then permits everything else. However, the ACL identifier `10` is a standard ACL number, and standard ACL numbers cannot be used with `ip access-list extended`. Extended ACLs must use a valid extended number such as 100-199 or 2000-2699, or a name. Because the syntax is invalid for an extended ACL, this is not the correct answer.
This option correctly identifies HTTP traffic by matching source host 10.10.10.1 and destination port 80. However, it contains only a deny statement and no explicit permit for other traffic. Since every ACL ends with an implicit `deny ip any any`, applying this ACL outbound would block all other traffic as well. That does not meet the requirement to permit all traffic except HTTP from the specified workstation.
This option correctly matches TCP traffic sourced from host 10.10.10.1 going to any destination where the destination port is 80, which is how normal HTTP client traffic is identified. The `permit ip any any` statement after the deny ensures that all other traffic is allowed, which is required by the question. ACLs are evaluated top-down, so HTTP from that host is denied first and everything else is permitted. The ACL number 100 is also in the valid extended ACL range, making the syntax and logic both correct.
Core concept: This question tests Cisco extended IPv4 ACL syntax, especially how to match HTTP traffic from a specific source host while allowing all other traffic. To block web traffic initiated by a client, the ACL must match the source IP address and the destination TCP port 80, because clients use ephemeral source ports and servers listen on destination port 80. Since the ACL is applied outbound on the WAN interface, it filters packets as they leave the router toward the WAN.

Why correct: The correct ACL must deny TCP traffic sourced from 10.10.10.1 to any destination with destination port 80, then explicitly permit all remaining IP traffic. Option D does exactly that with `deny tcp host 10.10.10.1 any eq 80` followed by `permit ip any any`. This satisfies the requirement to permit everything except HTTP from that workstation.

Key features: Extended ACLs can match protocol, source, destination, and port numbers. ACLs are processed top-down, first match wins, and every ACL has an implicit `deny ip any any` at the end. Therefore, when the requirement is to block only one traffic type and allow the rest, an explicit permit statement is required after the deny.

Common misconceptions: A frequent mistake is matching port 80 as the source port instead of the destination port, which would not block normal client HTTP sessions. Another mistake is forgetting the final permit statement, which causes all unmatched traffic to be dropped by the implicit deny. It is also important to recognize valid Cisco ACL numbering: standard ACLs use 1-99 and 1300-1999, while extended ACLs use 100-199 and 2000-2699.

Exam tips: For client web browsing, think destination port 80, not source port 80. Always verify whether the ACL syntax is valid for a standard or extended ACL. If the question says "permit all except," look for a deny statement followed by `permit ip any any`.
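The top-down, first-match-wins behavior (including the implicit deny) can be modeled with a short Python sketch. The rule tuples below are a made-up simplification of real ACL entries, kept just detailed enough to reproduce the logic of `deny tcp host 10.10.10.1 any eq 80` followed by `permit ip any any`.

```python
# Simplified top-down ACL evaluation: the first matching rule wins, and an
# unmatched packet hits the implicit deny. The rule format is a made-up
# simplification: (action, protocol, source, destination_port).
def acl_decide(packet: dict, rules: list) -> str:
    for action, proto, src, dst_port in rules:
        proto_ok = proto == "ip" or proto == packet["proto"]
        src_ok = src == "any" or src == packet["src"]
        port_ok = dst_port is None or dst_port == packet["dst_port"]
        if proto_ok and src_ok and port_ok:
            return action  # first match wins
    return "deny"  # implicit deny ip any any at the end of every ACL

# Models: deny tcp host 10.10.10.1 any eq 80 / permit ip any any
ACL_100 = [
    ("deny", "tcp", "10.10.10.1", 80),
    ("permit", "ip", "any", None),
]
```

HTTP from 10.10.10.1 (TCP, destination port 80) matches the first rule and is denied; any other packet falls through to the explicit permit. Deleting that permit reproduces the failure mode of the deny-only option: everything else would hit the implicit deny.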