
How do TCP and UDP differ in the way that they establish a connection between two endpoints?
Correct. TCP establishes a connection using the three-way handshake (SYN, SYN-ACK, ACK) before sending application data, enabling reliable, ordered delivery with retransmissions and acknowledgments. UDP is connectionless and does not perform a handshake; it sends datagrams without guaranteeing delivery, order, or duplicate protection. Any reliability with UDP must be implemented by the application layer, not by UDP itself.
Incorrect. TCP does use SYN as part of connection establishment, but UDP does not use acknowledgment packets at the transport layer. UDP has no built-in ACK mechanism and no connection setup. While some applications running over UDP may send their own acknowledgments, those are application-layer behaviors and not a feature of UDP as a transport protocol.
Incorrect. This reverses the protocols. TCP is the reliable, connection-oriented protocol that provides sequencing, acknowledgments, and retransmissions. UDP is connectionless and does not provide reliable delivery. On the exam, “reliable transfer” and “connection-oriented” are strong indicators of TCP, not UDP.
Incorrect. UDP does not have SYN, SYN-ACK, FIN, or any TCP-style flags because UDP’s header is very small and contains only source port, destination port, length, and checksum. TCP uses flags such as SYN, ACK, and FIN in its TCP header to establish and tear down connections. The option incorrectly assigns TCP flags to UDP.
Core Concept: This question tests the fundamental difference between TCP and UDP regarding connection establishment and reliability. TCP is a connection-oriented, reliable transport protocol that establishes a session before data transfer. UDP is connectionless and does not establish a session; it simply sends datagrams without built-in delivery guarantees.

Why the Answer is Correct: TCP establishes a connection using the three-way handshake: SYN (synchronize), SYN-ACK (synchronize acknowledgment), and ACK (acknowledgment). This process synchronizes sequence numbers and confirms both endpoints are ready to communicate, enabling reliable, ordered delivery with retransmissions and flow control. UDP does not perform any handshake; there is no session setup, no sequence-number synchronization, and no transport-layer acknowledgments. As a result, UDP does not guarantee delivery, ordering, or duplicate suppression; those functions must be handled by the application if needed.

Key Features: TCP features include connection establishment/teardown, sequence numbers, acknowledgments, retransmission on loss, ordered delivery, flow control (windowing), and congestion control. UDP features include minimal overhead (8-byte header), no handshake, and low latency; it is commonly used for DNS queries, VoIP, streaming, and some routing/management protocols where speed matters and occasional loss is acceptable or handled elsewhere.

Common Misconceptions: A frequent trap is thinking UDP uses acknowledgments or "reliability" because some applications built on UDP implement their own ACK/sequence logic (for example, certain streaming or tunneling solutions). However, that is not UDP itself. Another misconception is mixing up TCP flags (SYN/ACK/FIN) and assuming UDP has similar bits; UDP has no flags like SYN or FIN.

Exam Tips: For CCNA, remember: TCP = connection-oriented + three-way handshake + reliability; UDP = connectionless + no handshake + best-effort delivery. If a question mentions SYN/SYN-ACK/ACK, it is TCP. If it emphasizes low overhead/latency and no delivery guarantee, it is UDP.
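The handshake difference is easy to observe with ordinary Berkeley sockets. A minimal sketch in Python (the port number is arbitrary and assumed to have no listener on it):

```python
import socket

# UDP: connectionless. sendto() transmits a datagram immediately; no
# handshake occurs, and no error is raised even if nothing is listening,
# because delivery is best-effort.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"query", ("127.0.0.1", 50007))  # fire and forget
udp.close()

# TCP: connection-oriented. connect() performs the SYN / SYN-ACK / ACK
# handshake first, and fails if no peer completes it.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect(("127.0.0.1", 50007))  # assumed: nothing listening here
except ConnectionRefusedError:
    print("TCP handshake refused: no listener")
finally:
    tcp.close()
```

The UDP send "succeeds" locally even though no receiver exists, while the TCP connect fails up front; that is exactly the connection-oriented versus connectionless distinction the question tests.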
Which two values or settings must be entered when configuring a new WLAN in the Cisco Wireless LAN Controller GUI? (Choose two.)
QoS settings are commonly configured per WLAN (e.g., Platinum/Gold/Silver/Bronze on AireOS, or QoS profiles/policy on Catalyst 9800), but they are not mandatory fields to create a new WLAN object in the WLC GUI. QoS is typically adjusted after the WLAN is created, based on application requirements like voice, video, or best-effort data.
You do not enter the IP address of one or more access points when creating a WLAN. APs discover and join the controller using CAPWAP and are managed separately from WLAN definitions. Once APs are joined, WLANs are advertised based on controller configuration (and possibly AP groups/policy tags), not by manually listing AP IPs during WLAN creation.
The SSID is required because it is the client-facing wireless network name that users see and connect to. In the WLC GUI WLAN creation process, the SSID must be specified to define what beacon/probe response name will be broadcast (or optionally hidden). Without an SSID, the controller cannot present a usable WLAN to wireless clients.
The profile name is required because it is the controller’s internal identifier for the WLAN configuration object. It distinguishes WLANs in the configuration database and is used for management, troubleshooting, and referencing the WLAN in lists and logs. The profile name can differ from the SSID and is mandatory at creation time in the WLC GUI.
Management interface settings are part of controller system configuration (management IP, default gateway, DHCP server settings, etc.), not a required input when creating a new WLAN. A WLAN is typically mapped to a dynamic interface/VLAN (or policy profile) for client traffic, but the controller’s management interface settings are not entered as part of the WLAN creation step.
Core Concept: This question tests basic Cisco WLC WLAN creation requirements. On Cisco Wireless LAN Controllers (AireOS, and similarly in Catalyst 9800 concepts), a WLAN is a logical wireless network definition that maps an SSID to security, QoS, and a wired VLAN/interface. In the GUI "Create New WLAN" workflow, the controller requires a minimal set of identifiers before you can proceed to advanced settings.

Why the Answer is Correct: When creating a new WLAN in the Cisco WLC GUI, you must enter (1) a Profile Name and (2) an SSID. The Profile Name is the internal identifier used by the controller to reference the WLAN configuration object (often used in logs, configuration lists, and when applying policies). The SSID is the actual wireless network name broadcast (or not) to clients. Without these two values, the controller cannot create the WLAN object because it lacks both an internal handle (profile) and the client-facing identifier (SSID).

Key Features / Configuration Notes: After the WLAN object is created with a profile name and SSID, you typically configure:
- Interface/VLAN mapping (dynamic interface on AireOS; policy profile/VLAN on Catalyst 9800)
- Security (WPA2/WPA3, 802.1X, PSK)
- QoS (WMM, AVC markings, per-WLAN QoS profiles)
- Advanced settings (broadcast SSID, client exclusion, session timeouts)
These are important, but they are not mandatory inputs at the initial "new WLAN" creation step.

Common Misconceptions: QoS and management interface settings are commonly configured and may feel "required" in real deployments, but they are not mandatory fields to create the WLAN object in the GUI. Also, you never enter AP IP addresses to create a WLAN; APs join the controller separately, and WLANs are then enabled and advertised by APs based on AP group/site/policy assignments.

Exam Tips: For CCNA-level wireless questions, remember the separation of roles:
- AP join/controller management is separate from WLAN definition.
- A WLAN minimally needs an internal name (profile) and the SSID. Security/VLAN/QoS are configured afterward.
If asked "must be entered when configuring a new WLAN," think of the first required fields in the GUI wizard: Profile Name and SSID.
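For comparison, the Catalyst 9800 CLI makes the same two required values visible as positional arguments of the WLAN-creation command. A sketch (the profile name, WLAN ID, and SSID below are placeholder values, not from any real deployment):

```
9800(config)# wlan CORP-PROFILE 10 CorpWiFi
9800(config-wlan)# no shutdown
```

The profile name (CORP-PROFILE) and the SSID (CorpWiFi) must both be supplied at creation time; security, VLAN, and QoS settings are attached afterward, mirroring the GUI workflow described above.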
Two switches are connected and using Cisco Dynamic Trunking Protocol. SW1 is set to Dynamic Auto and SW2 is set to Dynamic Desirable. What is the result of this configuration?
Incorrect. An access port results when trunk negotiation does not succeed (for example, dynamic auto to dynamic auto, or when one side is forced to access). In this scenario, SW2 is dynamic desirable and actively requests trunking, and SW1 (dynamic auto) will accept that request, so the port does not remain access.
Incorrect. DTP mismatches do not normally place a port into err-disabled. Error-disable is triggered by specific protection mechanisms (for example BPDU Guard, Port Security violations, UDLD, or link-flap detection). With auto/desirable, DTP negotiation succeeds and the interface transitions normally, so err-disable is not expected.
Incorrect. The physical link state (up/down) depends on Layer 1/2 connectivity (cabling, speed/duplex negotiation, admin shutdown, etc.), not on whether DTP negotiates trunking. With dynamic auto and dynamic desirable, the link can come up and then negotiate trunking; it should not be down due to these DTP settings.
Correct. Dynamic desirable actively tries to form a trunk and sends DTP frames to negotiate trunking. Dynamic auto is passive but will become a trunk if the neighbor requests it. Therefore, SW2 (desirable) + SW1 (auto) results in a negotiated 802.1Q trunk port.
Core Concept: This question tests Cisco Dynamic Trunking Protocol (DTP) negotiation behavior. DTP is a Cisco-proprietary Layer 2 protocol used to automatically negotiate whether a switchport becomes an access port or an 802.1Q trunk.

Why the Answer is Correct: With SW1 set to dynamic auto and SW2 set to dynamic desirable, the link will negotiate to become a trunk. Dynamic desirable actively attempts to form a trunk by sending DTP messages indicating trunking intent. Dynamic auto is passive; it does not actively try to trunk, but it will agree to trunking if the neighbor requests it. Because SW2 (desirable) initiates trunking and SW1 (auto) is willing to accept, the result is an operational trunk port.

Key Features / Behaviors to Know:
- dynamic desirable: actively negotiates trunking; will form a trunk with neighbor modes trunk, desirable, or auto.
- dynamic auto: passive; will form a trunk only if the neighbor is trunk or desirable.
- access: forces non-trunking.
- trunk: forces trunking (typically still sends DTP unless disabled).
- nonegotiate: disables DTP negotiation (commonly used for security and interoperability).
Best practice in many environments is to statically configure trunking (switchport mode trunk) and disable DTP (switchport nonegotiate) on trunk links to prevent unintended trunk formation.

Common Misconceptions: A frequent mistake is thinking "auto" means "automatic trunk," when it actually means "wait and see." Another misconception is that mismatched dynamic modes cause link failure; they generally do not. DTP affects the trunk/access state, not the physical link state.

Exam Tips: Memorize the classic DTP outcomes:
- desirable + auto = trunk
- desirable + desirable = trunk
- auto + auto = access (no trunk forms)
Also remember that trunk negotiation is separate from the interface being administratively/physically up, and that error-disable is typically caused by features like port security, BPDU guard, or UDLD, not by DTP mode combinations.
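The outcome table above is small enough to encode as a lookup, which makes a handy self-test. A hedged sketch covering only the exam-level combinations (it deliberately ignores edge cases such as a trunk/access mismatch):

```python
# The classic DTP outcome table as a small function. Valid inputs here are
# "trunk", "access", "dynamic desirable", and "dynamic auto".
def dtp_result(mode_a: str, mode_b: str) -> str:
    modes = {mode_a, mode_b}
    if "access" in modes:
        return "access"   # a forced access side prevents trunk formation
    if modes == {"dynamic auto"}:
        return "access"   # both passive: neither side initiates DTP
    return "trunk"        # trunk or desirable present: negotiation succeeds

print(dtp_result("dynamic desirable", "dynamic auto"))  # trunk
print(dtp_result("dynamic auto", "dynamic auto"))       # access
```

The scenario in the question (desirable + auto) lands in the "trunk" branch because one active side is enough when the other side is willing.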
How do traditional campus device management and Cisco DNA Center device management differ with regard to deployment?
Incorrect. Traditional campus management is usually slower to scale because it often relies on per-device CLI configuration and manual change processes. Scaling requires repeating tasks across many devices, increasing time and risk of inconsistent configurations. DNAC is designed specifically to improve scalability through centralized automation, templates, and policy-driven provisioning.
Correct. Cisco DNA Center can deploy a network more quickly by automating onboarding and provisioning (PnP, LAN Automation), pushing standardized configurations via templates, and orchestrating changes centrally. This reduces manual steps, speeds multi-device rollouts, and improves consistency—key advantages over traditional device-by-device management approaches.
Incorrect. DNAC may reduce operational effort over time, but it is not generally positioned as a lower-cost implementation compared to “most” traditional options, especially considering licensing and appliance/virtual infrastructure. Exam questions typically treat DNAC’s primary benefits as automation, consistency, visibility, and faster deployment—not guaranteed lower cost.
Incorrect. Traditional methods can sometimes apply an urgent patch to a single device quickly, but at scale they are typically slower and more error-prone. DNAC includes centralized image and upgrade management (SWIM) that can streamline patching across many devices with controlled workflows. Therefore, traditional management is not typically faster for patches/updates overall.
Core Concept: This question tests the difference between traditional (device-by-device) campus management and Cisco DNA Center (DNAC) intent-based, controller-assisted management, specifically focusing on deployment speed. This aligns with the CCNA Automation and Programmability domain: centralized management, automation, and orchestration.

Why the Answer is Correct: Cisco DNA Center device management can deploy a network more quickly than traditional campus device management because DNAC centralizes configuration and uses automation workflows (templates, profiles, and policy-based provisioning) to push consistent configurations to many devices at once. Traditional approaches typically involve manual CLI configuration per device (or ad-hoc scripts), which increases time, variability, and the chance of errors. DNAC's LAN Automation and Plug and Play (PnP) capabilities further accelerate initial rollout by discovering devices, assigning images/configs, and onboarding them with minimal manual intervention.

Key Features / Best Practices: DNAC speeds deployment through:
1) Plug and Play (PnP): zero-touch/low-touch onboarding, device claiming, and automated provisioning.
2) Templates and configuration compliance: standardized configs applied at scale, with drift detection.
3) Software Image Management (SWIM): centralized image distribution and upgrade workflows (often with maintenance windows).
4) Policy-based segmentation (e.g., via SD-Access in larger designs): intent defined once and applied consistently.
Best practice is to use golden templates, staged rollouts, and compliance checks to reduce outages and configuration drift.

Common Misconceptions: Some assume "traditional" is faster because it avoids standing up DNAC infrastructure. While DNAC has an initial setup cost/time, the question is about deployment capability and speed at scale. Once DNAC is in place, provisioning many devices and sites is typically faster and more repeatable than manual methods.

Exam Tips: For CCNA, remember the general rule: controller-based/automation platforms (like DNAC) improve speed, consistency, and scalability of deployments compared to per-device CLI management. Also note that cost claims are rarely absolute on exams; focus on operational efficiency and automation benefits rather than assuming lower cost.
While examining excessive traffic on the network, it is noted that all incoming packets on an interface appear to be allowed even though an IPv4 ACL is applied to the interface. Which two misconfigurations cause this behavior? (Choose two.)
An empty ACL (no ACEs) is not a typical operational state on IOS; you generally cannot apply a truly empty ACL in a way that permits all traffic. If an ACL had no permit entries, the implicit “deny any” concept would result in blocking, not allowing. “All packets allowed” points away from an empty ACL and toward an overly permissive match.
If a permit statement is too broadly defined (for example, “permit ip any any”), it will match most or all inbound packets. Because ACLs use first-match logic, that broad permit will allow traffic immediately and prevent later deny statements from ever being evaluated. This is a classic misconfiguration that makes an ACL appear ineffective.
If packets fail to match any permit statement, they hit the implicit deny at the end of the ACL (deny ip any any). That would cause traffic to be dropped, not allowed. So this option describes the opposite symptom (unexpected blocking), not “all incoming packets appear to be allowed.”
A deny statement being too high would typically cause more traffic to be blocked than intended, because it would match earlier and stop processing before later permits. That leads to “too much denied,” not “everything allowed.” It’s a real ACL ordering problem, but it does not explain the observed behavior.
A permit statement that is too high in the ACL can allow traffic before intended denies or more specific filtering entries are reached. Even if the permit is not “any any,” placing it above the intended control lines can still match the observed traffic and permit it, making it appear that the ACL is not filtering inbound packets.
Core concept: This question tests IPv4 ACL processing on Cisco IOS interfaces. ACLs are evaluated top-down, first match wins, and if a packet matches a statement, processing stops. If an ACL is applied inbound on an interface, every incoming packet is compared against the ACL entries in order.

Why the answers are correct: (B) A matching permit statement that is too broadly defined (for example, "permit ip any any" or "permit ip 10.0.0.0 0.255.255.255 any") will cause nearly all traffic to match that permit line. Because ACLs stop at the first match, the broad permit effectively bypasses later, more specific deny lines, making it appear that "everything is allowed." (E) A matching permit statement that is too high in the access list causes the same symptom even if it is not "permit any any." If a permit that matches the observed traffic is placed above intended denies (or above a more restrictive permit/deny structure), the traffic is permitted before the router ever evaluates the intended filtering lines.

Key features / best practices:
- ACLs are sequential; order matters. Place specific denies (or specific permits in a whitelist design) before broader permits.
- Use the "show access-lists" command to check hit counts per ACE; a high hit count on an early permit often reveals the issue.
- Use "ip access-list extended NAME" and sequence numbers to insert entries in the correct order without rebuilding the ACL.

Common misconceptions: Many assume that ACLs "collectively" evaluate all lines and then decide. They do not; they stop at the first match. Another misconception is that an ACL always blocks something by default; in reality, a poorly ordered or overly broad permit can result in no effective filtering.

Exam tips: When you see "all traffic is allowed even though an ACL is applied," think: (1) an early broad permit, (2) wrong direction/interface, or (3) ACL not actually applied. For this question, the only choices that directly explain "everything allowed" are broad permit definitions and permit statements placed too early.
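First-match evaluation is straightforward to demonstrate. A minimal sketch of the top-down logic (the ACL entries and source address are invented for illustration, not from the question):

```python
import ipaddress

def acl_decision(acl, src_ip):
    """Top-down, first-match-wins, like an IOS ACL; implicit deny at the end."""
    addr = ipaddress.ip_address(src_ip)
    for action, network in acl:
        if addr in ipaddress.ip_network(network):
            return action      # processing stops at the first match
    return "deny"              # implicit deny ip any any

# Misordered ACL: the broad permit placed too high shadows the deny below it.
acl = [
    ("permit", "0.0.0.0/0"),   # too broad and too early
    ("deny",   "10.0.0.0/8"),  # never evaluated
]
print(acl_decision(acl, "10.1.1.1"))  # permit  (the deny line is shadowed)
```

Swapping the two entries restores the intended filtering, which is exactly the ordering lesson in the explanation above.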
How do TCP and UDP differ in the way they provide reliability for delivery of packets?
Incorrect. It reverses the roles: TCP does provide error detection (checksum), sequencing, acknowledgments, and retransmissions to improve reliability. UDP does not provide message acknowledgments or retransmit lost data at the transport layer. While UDP can detect corruption via checksum, it does not guarantee delivery or recovery—applications must handle that if required.
Correct. TCP provides reliability through mechanisms such as acknowledgments, sequence numbers, retransmissions, and flow control using windowing so a sender does not overwhelm a receiver. UDP does not provide transport-layer reliability features like acknowledgments, retransmissions, or flow control. Although the option's wording about a 'continuous stream' is imprecise because UDP is datagram-based rather than stream-based, it is still the best choice because it correctly identifies TCP flow control as a reliability-related difference and contrasts it with UDP's lack of such mechanisms.
Incorrect. It swaps the fundamental definitions. TCP is connection-oriented and provides reliable, ordered delivery using sequencing and acknowledgments. UDP is connectionless and does not provide reliable delivery or sequencing by default. If reliability is needed with UDP, it must be implemented by the application (or by another protocol layered above UDP).
Incorrect. TCP does use windowing (sliding window) as part of flow control and efficient reliable delivery, but UDP does not establish a three-way handshake and does not provide reliable message transfer. The three-way handshake (SYN, SYN-ACK, ACK) is a TCP-only mechanism used to establish a connection and synchronize sequence numbers.
Core Concept: This question tests transport-layer reliability differences between TCP and UDP. Reliability includes connection establishment, sequencing, acknowledgments (ACKs), retransmissions, and flow control.

Why the Answer is Correct: TCP is designed to provide reliable, ordered delivery of a byte stream. It uses sequence numbers and ACKs to confirm receipt, retransmits missing segments, and uses flow control (sliding window) so a fast sender does not overwhelm a slow receiver. UDP, by contrast, is connectionless and best-effort: it sends independent datagrams without establishing a session and without built-in ACKs, retransmissions, or flow control. Option B correctly highlights a key TCP reliability mechanism (flow control) and contrasts it with UDP's lack of such checking at the transport layer.

Key Features: TCP reliability mechanisms include:
- Connection-oriented setup/teardown (3-way handshake, FIN/RST)
- Sequencing and reassembly (sequence numbers)
- Positive acknowledgments and retransmission (ACKs, timers)
- Flow control (receiver window / windowing)
- Congestion control (e.g., slow start, congestion avoidance; important but distinct from flow control)
UDP characteristics include:
- Connectionless, message-oriented datagrams
- No built-in delivery guarantee, ordering, or duplicate suppression
- Low overhead and latency; commonly used for DNS, VoIP, streaming, and routing protocols
Note: UDP still has an error-detection checksum, but it does not recover from errors; recovery is left to the application if needed.

Common Misconceptions: Many confuse UDP's checksum with "reliability." The checksum can detect corruption, but UDP does not retransmit or acknowledge. Another common confusion is mixing up "windowing" and "handshake" as UDP features; those are TCP features.

Exam Tips: For CCNA, remember: TCP = connection-oriented + reliable (sequencing, ACKs, retransmissions, windowing/flow control). UDP = connectionless + best-effort (no ACK/retransmit/flow control). If an option claims UDP uses a handshake or acknowledgments, it's almost certainly wrong.
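The ACK-and-retransmit behavior that TCP provides, and that UDP leaves to the application, can be mimicked with a toy stop-and-wait loop. A sketch with simulated loss (the loss rate and seed are arbitrary values chosen for the demonstration):

```python
import random

def reliable_send(segments, loss_rate=0.3, seed=1):
    """Resend each segment until it is 'ACKed'; count the retransmissions."""
    random.seed(seed)
    delivered, retransmissions = [], 0
    for seq, data in enumerate(segments):
        while random.random() < loss_rate:  # segment (or its ACK) was lost
            retransmissions += 1            # timer expires: retransmit
        delivered.append((seq, data))       # ACK received for this sequence
    return delivered, retransmissions

data, retries = reliable_send(["seg0", "seg1", "seg2"])
print([d for _, d in data])  # ['seg0', 'seg1', 'seg2']  (complete, in order)
```

Despite the simulated loss, every segment eventually arrives, in order, at the cost of retransmissions; raw UDP would simply have dropped the lost datagrams.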
An engineer needs to configure LLDP to send the port description type length value (TLV). Which command sequence must be implemented?
Correct. Port Description TLV advertisement is configured per interface, so the command must be entered in interface configuration mode. Using switch(config-if)# lldp port-description enables the Port Description TLV to be included in LLDP advertisements sent out that specific port (assuming LLDP is globally enabled with lldp run). This matches how IOS provides granular LLDP control per port.
Incorrect. switch# is privileged EXEC mode, used for show/clear/debug and some exec-level commands, not for configuring LLDP TLV advertisement behavior. While you can view LLDP neighbors and status from EXEC mode, enabling a specific TLV to be advertised requires configuration mode (typically interface configuration mode).
Incorrect. switch(config-line)# is line configuration mode (console/VTY/AUX), used for settings like login, transport input, exec-timeout, and access-class. LLDP is a Layer 2 discovery protocol and its TLV advertisement settings are not configured under line settings, so this mode is not applicable.
Incorrect. Global configuration mode is used for enabling LLDP overall (lldp run) and setting some global LLDP parameters, but the port-description TLV advertisement is controlled per interface. The distractor is plausible because LLDP is enabled globally, but the specific TLV command in this question is an interface-level configuration.
Core Concept: This question tests link-layer discovery protocols, specifically LLDP (IEEE 802.1AB), and how LLDP advertises information using TLVs (Type-Length-Value fields). LLDP can advertise optional TLVs such as Port Description, System Name, System Description, and System Capabilities. On Cisco IOS, enabling or disabling specific TLVs is typically done at the interface level because the information is sent out per-port.

Why the Answer is Correct: To send the Port Description TLV, you must configure it under the interface where LLDP frames are transmitted. The correct command is entered in interface configuration mode: switch(config-if)# lldp port-description. This enables the Port Description TLV to be included in LLDP advertisements sent out that specific interface. LLDP itself must also be enabled globally (lldp run) for any LLDP advertisements to be sent, but the question focuses on the command sequence to send the port-description TLV.

Key Features / Best Practices:
- LLDP is standards-based and commonly used in multi-vendor environments.
- LLDP advertisements are sent periodically and contain TLVs.
- Many LLDP TLVs are controlled per-interface (granular control), which is useful when you want to limit information disclosure on untrusted ports.
- Typical verification commands include show lldp neighbors detail and show lldp interface.

Common Misconceptions:
- Confusing global LLDP enablement (lldp run) with per-interface TLV configuration. Global mode enables the protocol; interface mode controls what each port advertises.
- Assuming the command is available in EXEC mode (switch#) or line configuration mode (switch(config-line)#). LLDP TLV advertisement settings are not configured there.

Exam Tips: Remember the hierarchy: enable LLDP globally (lldp run), then tune advertisements per interface (config-if). If a question asks about sending/advertising a TLV "on a port," expect interface configuration mode. Also watch for distractors using the right command but wrong mode (global vs interface vs EXEC).
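Putting the sequence from the explanation above together in one place (the interface name is a placeholder):

```
switch(config)# lldp run
switch(config)# interface gigabitethernet0/1
switch(config-if)# lldp port-description
switch(config-if)# end
switch# show lldp neighbors detail
```

Note the mode changes: the protocol is enabled globally, the TLV is enabled per interface, and verification happens back in privileged EXEC mode.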
[Exhibit: Router1 connects to the Internet over 10.10.11.0/30 (Router1 .1, ISP side .2), to an MPLS cloud over 10.10.12.0/30 (Router1 .1, provider side .2), and to Switch1 over 10.10.10.0/28 (Router1 .2, Switch1 .1). Host A (10.10.13.214) is reached through Switch1. The output of show ip route on Router1 follows.]
Router1#show ip route
Gateway of last resort is 10.10.11.2 to network 0.0.0.0
209.165.200.0/27 is subnetted, 1 subnets
B 209.165.200.224 [20/0] via 10.10.12.2, 03:22:14
209.165.201.0/27 is subnetted, 1 subnets
B 209.165.201.0 [20/0] via 10.10.12.2, 02:26:33
209.165.202.0/27 is subnetted, 1 subnets
B 209.165.202.128 [20/0] via 10.10.12.2, 02:26:03
10.0.0.0/8 is variably subnetted, 8 subnets, 4 masks
C 10.10.10.0/28 is directly connected, GigabitEthernet0/0
C 10.10.11.0/30 is directly connected, FastEthernet2/0
C 10.10.12.0/30 is directly connected, GigabitEthernet0/1
O 10.10.13.0/25 [110/2] via 10.10.10.1, 00:00:04, GigabitEthernet0/0
O 10.10.13.128/28 [110/2] via 10.10.10.1, 00:00:04, GigabitEthernet0/0
O 10.10.13.144/28 [110/2] via 10.10.10.1, 00:00:04, GigabitEthernet0/0
O 10.10.13.160/29 [110/2] via 10.10.10.1, 00:00:04, GigabitEthernet0/0
O 10.10.13.208/29 [110/2] via 10.10.10.1, 00:00:04, GigabitEthernet0/0
S* 0.0.0.0/0 [1/0] via 10.10.11.2
Refer to the exhibit. Which prefix does Router1 use for traffic to Host A?
10.10.10.0/28 is Router1’s directly connected LAN on Gi0/0, but it is not a matching destination prefix for Host A (10.10.13.214). It would only be used for destinations in 10.10.10.0–10.10.10.15. This option can seem tempting because it is connected, but connected status does not override longest prefix match.
10.10.13.0/25 matches only addresses 10.10.13.0 through 10.10.13.127. Host A is 10.10.13.214, which is outside that range. Even though /25 is a valid route in the table, it cannot be used because it does not include the destination IP.
10.10.13.144/28 matches addresses 10.10.13.144 through 10.10.13.159 (a /28 has a block size of 16). Host A at 10.10.13.214 is not in that range. This option is a common trap because it is more specific than /25, but it still must contain the destination to be considered.
10.10.13.208/29 matches addresses 10.10.13.208 through 10.10.13.215 (a /29 has a block size of 8). Since 10.10.13.214 falls within this range, Router1 selects this route via 10.10.10.1. It is also the most specific matching prefix among the listed options, so it wins by longest prefix match.
Core concept: This question tests longest prefix match (LPM) route selection, a key IP routing behavior. When multiple routes could match a destination IP, the router chooses the route with the most specific (longest) subnet mask, regardless of routing protocol (OSPF, static, connected), as long as the route is in the routing table.

Why the answer is correct: Host A has IP 10.10.13.214. Router1 has several OSPF-learned routes for parts of 10.10.13.0/24, including 10.10.13.0/25, 10.10.13.144/28, and 10.10.13.208/29. We determine which prefix contains 10.10.13.214:
- 10.10.13.0/25 covers 10.10.13.0–10.10.13.127, so it does not include .214.
- 10.10.13.144/28 covers 10.10.13.144–10.10.13.159, so it does not include .214.
- 10.10.13.208/29 covers 10.10.13.208–10.10.13.215, which includes 10.10.13.214.
Therefore Router1 forwards traffic to Host A using the 10.10.13.208/29 route (via 10.10.10.1 out Gi0/0, per the routing table).

Key features/best practices: LPM is applied after the router identifies all matching routes. Administrative distance and metric matter only when comparing routes to the same destination prefix length (same subnet/mask). Also note that the default route (S* 0.0.0.0/0 via 10.10.11.2) is the least specific and is used only if no more specific match exists.

Common misconceptions: Many assume the router prefers connected routes or a default route first, or they focus on AD/metric across different prefix lengths. In reality, a more specific OSPF route will beat a less specific connected or static route if the connected/static route does not match the destination as specifically.

Exam tips: For LPM questions, always (1) write the destination IP, (2) compute each candidate subnet's address range, and (3) pick the most specific matching prefix. Remember common block sizes: /25=128, /28=16, /29=8. Here, /29 subnets increment by 8 in the last octet (…200, 208, 216…), making 208–215 the correct range.
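The three-step method above can be checked mechanically. A sketch using the candidate routes from the exhibit and Python's standard ipaddress module:

```python
import ipaddress

# Routes from Router1's table that could cover Host A (10.10.13.214).
routes = ["0.0.0.0/0", "10.10.13.0/25", "10.10.13.144/28", "10.10.13.208/29"]

def longest_prefix_match(dest, table):
    addr = ipaddress.ip_address(dest)
    matches = [n for n in map(ipaddress.ip_network, table) if addr in n]
    return max(matches, key=lambda n: n.prefixlen)  # most specific wins

print(longest_prefix_match("10.10.13.214", routes))  # 10.10.13.208/29
```

Only the /29 and the default route contain .214, and the /29 wins on prefix length, matching the answer above.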
What are two benefits that the UDP protocol provides for application traffic? (Choose two.)
Correct. UDP has lower overhead than TCP because it is connectionless and does not use mechanisms like three-way handshake, acknowledgments, sequencing, retransmissions, or flow/congestion control. UDP’s header is only 8 bytes, which reduces processing and bandwidth overhead. This makes UDP attractive for latency-sensitive applications and simple request/response traffic such as DNS.
Incorrect. UDP does not provide a built-in recovery mechanism. There are no acknowledgments, sequence numbers, or retransmissions in UDP. If an application needs reliability while still using UDP, it must implement its own recovery logic at the application layer (or use a different protocol such as TCP).
Incorrect. UDP has no CTL field and does not perform a three-way handshake. A three-way handshake (SYN, SYN-ACK, ACK) is a TCP function used to establish a connection and synchronize sequence numbers. UDP is connectionless and begins sending data immediately without session establishment.
Incorrect. UDP does not maintain connection state. It is stateless at the transport layer, meaning it does not track sessions, sequence numbers, or window sizes. “More stable connections” is not applicable to UDP because it does not create a connection in the TCP sense.
Correct. UDP includes a checksum field used for error detection of the UDP header and data (plus a pseudo-header). This helps detect corruption in transit so the receiver can discard invalid datagrams. Note that the checksum is optional in IPv4 but mandatory in IPv6, and it provides detection—not retransmission or recovery.
Core Concept: This question tests understanding of UDP (User Datagram Protocol) characteristics at the transport layer. UDP is a connectionless, best-effort transport protocol used when applications value low latency, low overhead, or can tolerate/handle loss and reordering themselves.

Why the Answer is Correct: A is correct because UDP has lower overhead than TCP. UDP does not establish a connection (no three-way handshake), does not maintain session state, and does not provide built-in reliability features such as sequencing, acknowledgments, retransmissions, or sliding windows. Its header is only 8 bytes (source port, destination port, length, checksum), compared to TCP's minimum 20-byte header plus additional control mechanisms. E is correct because UDP includes a checksum field that can be used to verify integrity of the UDP header and payload (and parts of the IP header via the pseudo-header). While the checksum is mandatory in IPv6 and optional in IPv4, it is still a key benefit: it allows detection of corruption so the receiving host can discard bad datagrams. The application can then decide what to do (ignore, request again via its own logic, etc.).

Key Features / Use Cases: UDP is commonly used for DNS, DHCP, VoIP, streaming media, online gaming, and routing protocols. These applications often prefer timely delivery over perfect delivery, and many implement their own reliability or loss concealment if needed.

Common Misconceptions: Many confuse "transport layer" with "reliable delivery." Reliability is a TCP feature, not a UDP feature. UDP does not retransmit lost packets and does not guarantee ordering. Also, UDP has no connection establishment; any mention of a handshake or connection state is inherently TCP-oriented.

Exam Tips: When you see UDP, think: connectionless, minimal header, no ACKs/retransmissions, no sequencing/windowing, checksum for error detection (not recovery).
If an option describes reliability, session establishment, or stateful behavior, it is almost certainly describing TCP, not UDP.
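The 8-byte header and pseudo-header checksum described above can be sketched in Python. This is an illustrative construction per RFC 768 field layout, not a library API; the helper names, ports, and addresses are my own:

```python
import struct

def ones_complement_sum16(data: bytes) -> int:
    """16-bit one's-complement sum used by the Internet checksum."""
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                       # fold end-around carries
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp_datagram(src_ip: str, dst_ip: str, sport: int, dport: int,
                 payload: bytes) -> bytes:
    """Build a UDP datagram with a checksum over the IPv4 pseudo-header."""
    length = 8 + len(payload)                # UDP header is always 8 bytes
    header = struct.pack("!HHHH", sport, dport, length, 0)  # checksum zeroed
    # Pseudo-header: source IP, destination IP, zero, protocol 17 (UDP), length
    pseudo = struct.pack("!4s4sBBH",
                         bytes(map(int, src_ip.split("."))),
                         bytes(map(int, dst_ip.split("."))),
                         0, 17, length)
    csum = ~ones_complement_sum16(pseudo + header + payload) & 0xFFFF
    if csum == 0:
        csum = 0xFFFF   # a transmitted 0 means "no checksum" in IPv4, so send all-ones
    return struct.pack("!HHHH", sport, dport, length, csum) + payload

dgram = udp_datagram("192.0.2.1", "192.0.2.2", 12345, 53, b"query")
print(len(dgram))        # 13: 8-byte header + 5-byte payload
```

Note how little is in the header: four 16-bit fields and nothing else, which is exactly why the "no flags, no SYN/FIN" distractors are wrong for UDP. A receiver verifies by summing the pseudo-header plus the whole datagram and checking for an all-ones result.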
Which two WAN architecture options help a business improve scalability and reliability for the network? (Choose two.)
Asynchronous routing is not a typical CCNA WAN architecture option used to describe scalable and reliable enterprise WAN design. While traffic can sometimes take different forward/return paths in real networks, that behavior is not a design goal or a named architecture choice in this context. CCNA questions about scalability/reliability usually point to redundancy (dual-homing) and dynamic routing for failover.
Single-homed branches connect to the WAN through one circuit/provider/edge path. This is simple and cost-effective, but it reduces reliability because the branch has a single point of failure (link failure, provider outage, CPE failure). It also limits scalability for high availability because adding redundancy later often requires redesigning addressing, routing, and edge hardware.
Dual-homed branches use two WAN connections (often diverse circuits or providers) to improve availability. If one link or provider fails, the branch can continue operating via the second path. Dual-homing also supports growth by allowing additional bandwidth and resilience as the business expands. It is a common enterprise/SD-WAN design pattern for high availability.
Static routing can work well for small, stable networks, but it does not scale because every new network or topology change requires manual updates. For reliability, static routes do not inherently adapt to failures; you must add mechanisms like floating static routes and IP SLA/object tracking, which increases complexity and still lacks the flexibility of dynamic routing in larger WANs.
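The floating static route with IP SLA tracking mentioned above can be sketched in IOS-style configuration. This is a minimal illustration, not a full design; the next-hop addresses, interface name, and AD value of 250 are example choices:

```
! Probe the primary next hop; withdraw the primary route if the probe fails
ip sla 1
 icmp-echo 10.10.11.2 source-interface GigabitEthernet0/0
ip sla schedule 1 life forever start-time now
track 1 ip sla 1 reachability
!
! Primary default route, installed only while track 1 is up
ip route 0.0.0.0 0.0.0.0 10.10.11.2 track 1
! Floating static backup with a higher administrative distance (250)
ip route 0.0.0.0 0.0.0.0 10.10.12.2 250
```

The backup route "floats" because its AD (250) is worse than the primary static route's default AD of 1, so it enters the routing table only when the tracked primary route is removed. This illustrates why static failover works but adds complexity that dynamic routing handles automatically.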
Dynamic routing (such as OSPF or BGP in WAN contexts) improves scalability by automatically learning and distributing routes as new sites and prefixes are added. It improves reliability by detecting failures and reconverging to alternate paths, which is especially important in dual-homed designs. Proper tuning and policy (metrics, filtering, summarization) help ensure stable, predictable failover.
Core Concept: This question tests WAN design choices that improve scalability (ability to grow sites, prefixes, and bandwidth without constant redesign) and reliability (continued connectivity during failures). In CCNA terms, WAN architecture includes branch connectivity models (single-homed vs dual-homed) and routing approach (static vs dynamic).

Why the Answer is Correct: Dual-homed branches (C) increase reliability by providing two WAN connections (often to different providers, different circuits, or different edge devices). If one link, ISP, or edge router fails, the branch can still reach the WAN/Internet over the alternate path. Dual-homing also supports scalability because additional bandwidth and redundancy can be added incrementally, and it enables more resilient topologies (e.g., dual MPLS/Internet, SD-WAN with multiple transports). Dynamic routing (E) improves both scalability and reliability. As the network grows, dynamic routing protocols (e.g., OSPF, EIGRP in legacy contexts, or BGP in many WAN/ISP edges) automatically learn and advertise routes, reducing manual configuration. For reliability, dynamic routing detects failures and reconverges to alternate paths, which is essential in dual-homed designs. This combination (redundant links + dynamic routing) is a common best practice for resilient WANs.

Key Features / Best Practices:
- Dual-homing: diverse physical paths, separate providers, redundant CPE/edge routers, and appropriate first-hop redundancy (HSRP/VRRP) when multiple routers exist.
- Dynamic routing: tuned timers, summarization where appropriate, route filtering, and default route strategies for branches. Use metrics and policy to prefer primary links and fail over cleanly.

Common Misconceptions: Static routing (D) can be reliable in very small, stable networks, but it does not scale well and requires manual intervention or complex tracking for failover. Single-homed branches (B) are simpler and cheaper but create a single point of failure.
“Asynchronous routing” (A) is not a standard WAN architecture choice for scalability/reliability in CCNA; the intended concept is usually “dynamic routing” or “redundant paths,” not asynchronous behavior.

Exam Tips: When you see “scalability and reliability,” think: redundancy in the physical/logical design (dual-homed) plus automated route learning and failover (dynamic routing). Static routing and single-homing generally indicate smaller, less resilient designs unless the question explicitly focuses on simplicity/cost.
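The "prefer primary links and fail over cleanly" guidance above can be sketched for a dual-homed branch running OSPF on both WAN links. This is an IOS-style illustration only; the process ID, network statement, interface names, and cost values are example choices:

```
! Run OSPF on both WAN links; lower cost wins, so the primary is preferred
router ospf 1
 network 10.10.0.0 0.0.255.255 area 0
!
interface GigabitEthernet0/0
 description Primary WAN link
 ip ospf cost 10
!
interface GigabitEthernet0/1
 description Backup WAN link
 ip ospf cost 100
```

If the primary link fails, OSPF detects the loss of adjacency and reconverges onto the backup path automatically, with no manual route changes; this is the scalability and reliability advantage the correct answers (dual-homed plus dynamic routing) are testing.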