
Simulate the real exam experience with 100 questions and a 120-minute time limit. Practice with AI-verified answers and detailed explanations.
AI-Powered
Every answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.
Which 802.11 frame type is Association Response?
Correct. Association Response is a management frame subtype used by the AP to accept or reject a client’s Association Request and to provide association parameters (e.g., AID, capabilities, supported rates). Management frames handle discovery and connection lifecycle tasks such as authentication, association, reassociation, and termination (deauth/disassoc).
Incorrect. “Protected frame” is not one of the three 802.11 frame types. Protection refers to security mechanisms (e.g., encryption for data frames with WPA2/WPA3, and 802.11w PMF for certain management frames). An Association Response may be protected when PMF is enabled, but its type remains a management frame.
Incorrect. Action frames are a category within management frames used to carry specific management actions (e.g., spectrum management, QoS, block ack negotiation, radio measurement). While Action frames are management-related, “Association Response” is its own management subtype and is not classified as an Action frame.
Incorrect. Control frames support reliable delivery and medium access coordination, such as ACK, RTS, CTS, PS-Poll, and Block Ack. They do not establish association. Since Association Response is part of the join process and BSS membership establishment, it is not a control frame.
Core Concept: IEEE 802.11 (Wi-Fi) defines three primary frame types: management, control, and data. Management frames establish and maintain the wireless connection (discovery, authentication, association, roaming support). Control frames assist in delivering data reliably (e.g., acknowledgments, RTS/CTS). Data frames carry upper-layer payloads.
Why the Answer is Correct: An Association Response is a management frame subtype. During the process of joining a WLAN, a client (STA) sends an Association Request to an access point (AP). The AP replies with an Association Response indicating whether the association is accepted or denied and providing key parameters (such as the assigned Association ID, supported rates, and capability information). Because this exchange is part of connection establishment and BSS membership, it is classified as a management frame.
Key Features / What to Know: Association is distinct from authentication. In classic 802.11, “authentication” (Open System or Shared Key) occurs before association; in modern enterprise networks, 802.1X/EAP authentication happens after association at Layer 2/2.5, but the 802.11 association step is still required to join the BSS. Management frames include Beacon, Probe Request/Response, Authentication, Association Request/Response, Reassociation Request/Response, Disassociation, and Deauthentication. Some management frames can be protected by 802.11w (PMF—Protected Management Frames), but “protected” is not a separate frame type; it’s a security feature applied to certain management frames.
Common Misconceptions: “Action” frames are also management subtypes, which can confuse test-takers. However, Association Response is not an Action frame; it is explicitly an Association Response management subtype. “Protected frame” sounds like a category, but it refers to whether a frame is cryptographically protected (e.g., PMF) rather than being a distinct 802.11 frame type.
Exam Tips: Memorize the big three: Management (join/leave/discover), Control (assist delivery), Data (payload). If the frame name relates to joining a WLAN (probe/auth/assoc/reassoc/deauth/disassoc), it’s management. If it’s ACK/RTS/CTS/Block Ack, it’s control.
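The "big three" mapping above can be sketched as a simple lookup. This is a study aid, not a packet parser: the subtype names come from 802.11, but the dictionary itself (names lowercased, the `classify` helper) is chosen here for illustration.

```python
# Illustrative mapping of well-known 802.11 frame subtypes to their frame type.
FRAME_TYPES = {
    "management": {"beacon", "probe request", "probe response", "authentication",
                   "association request", "association response",
                   "reassociation request", "reassociation response",
                   "disassociation", "deauthentication", "action"},
    "control": {"ack", "rts", "cts", "ps-poll", "block ack"},
    "data": {"data", "qos data", "null"},
}

def classify(frame_name: str) -> str:
    """Return the 802.11 frame type for a given subtype name."""
    name = frame_name.strip().lower()
    for frame_type, subtypes in FRAME_TYPES.items():
        if name in subtypes:
            return frame_type
    raise ValueError(f"unknown frame subtype: {frame_name}")
```

For example, `classify("Association Response")` returns `"management"`, while `classify("RTS")` returns `"control"` — the same mental check the exam tip describes.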
Want to practice all questions on the go?
Download Cloud Pass for free — includes practice tests, progress tracking & more.


Download Cloud Pass and access all Cisco 200-301: Cisco Certified Network Associate (CCNA) practice questions for free.
In which way does a spine-and-leaf architecture allow for scalability in a network when additional access ports are required?
Adding both a spine and a leaf can increase capacity, but it is not the fundamental scaling method when you specifically need more access ports. Spine-and-leaf scales access by adding leaf switches; spines are added when you need more fabric bandwidth/paths. Also, “redundant connections between them” is vague and doesn’t capture the required full-mesh leaf-to-all-spines connectivity rule.
Adding a spine switch can improve overall fabric capacity and increase the number of ECMP paths, but it does not directly add access ports. The “at least 40 GB uplinks” detail is not a defining requirement of spine-and-leaf scalability; link speed is an implementation choice. Scalability is about adding nodes (leaf/spine) with consistent interconnections, not a specific uplink rate.
Correct. To scale when more access ports are needed, you add a new leaf switch and connect it to every spine switch. This preserves the fabric’s predictable 2-hop leaf-to-leaf forwarding and enables ECMP across multiple spines. It’s the classic scale-out approach: more leafs = more edge/access ports while maintaining consistent performance and resiliency.
A single connection to a “core spine switch” contradicts the spine-and-leaf design principle. Spine-and-leaf avoids a single core dependency by having each leaf connect to all spines, providing multiple equal-cost paths and eliminating bottlenecks. Connecting a leaf to only one spine reduces redundancy, limits ECMP, and can create oversubscription and single points of failure.
Core concept: Spine-and-leaf is a data-center switching topology designed for predictable, scalable east-west traffic. It uses a two-tier fabric: leaf switches provide access ports (servers/endpoints), and spine switches provide the high-speed interconnect. The key scalability property is that every leaf connects to every spine, creating consistent, low-latency paths.
Why the answer is correct: When additional access ports are required, you scale out by adding another leaf switch. To preserve the fabric’s uniform performance and full bisection bandwidth characteristics, the new leaf must connect to every spine switch. This maintains the design goal that any endpoint on any leaf is at most one leaf-to-spine hop away from any other leaf (typically a 2-hop path leaf→spine→leaf). Adding a leaf in this way increases port capacity without redesigning the entire network.
Key features / best practices:
- “Scale-out” model: add leafs for more access ports; add spines for more fabric capacity.
- Equal-cost multipathing (ECMP) is commonly used so traffic can load-balance across multiple spine paths.
- Consistent cabling rule: each leaf has uplinks to all spines; each spine has downlinks to all leafs.
- Underlay is often routed (eBGP/OSPF/IS-IS) to support ECMP; overlays (e.g., VXLAN) may be used, but CCNA focuses on the topology concept.
Common misconceptions:
- Thinking you add both a spine and a leaf together (not required for access-port growth).
- Believing a single “core spine” is sufficient (that reintroduces hierarchy and bottlenecks).
- Assuming higher-speed uplinks alone provide scalability (speed helps capacity, not the scaling method).
Exam tips:
- If the question mentions “more access ports,” think “add a leaf.”
- If it mentions “more bandwidth between leafs” or “more fabric capacity,” think “add a spine.”
- Remember the defining rule: leafs connect to all spines (not just one), enabling ECMP and predictable latency.
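The scale-out arithmetic can be made concrete with a toy model. This is purely illustrative (the function name, 48-port default, and the idea of counting fabric links are assumptions for the sketch, not a design tool), but it shows why adding a leaf grows access ports while adding a spine grows ECMP paths:

```python
# Toy model of spine-and-leaf scale-out (illustrative numbers only).
def fabric_stats(leafs: int, spines: int, access_ports_per_leaf: int = 48) -> dict:
    return {
        "access_ports": leafs * access_ports_per_leaf,   # grows when you add a leaf
        "fabric_links": leafs * spines,                  # every leaf uplinks to every spine
        "ecmp_paths_leaf_to_leaf": spines,               # one equal-cost path per spine
    }
```

Adding a fifth leaf to a 4-leaf/2-spine fabric adds 48 access ports and 2 new uplinks, but leaf-to-leaf ECMP paths stay at 2 — adding a spine is what would increase that number.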
Which network allows devices to communicate without the need to access the Internet?
172.9.0.0/16 is NOT a private RFC 1918 network. The private 172 range is 172.16.0.0/12, which covers only 172.16.0.0 through 172.31.255.255. Since 172.9.x.x is outside that boundary, it is publicly routable in principle (assuming it is allocated/used publicly) and is not intended for internal-only addressing without Internet routing.
172.28.0.0/16 is a private network because it falls within the RFC 1918 private block 172.16.0.0/12 (172.16–172.31). Devices using addresses in this range can communicate internally without Internet access. If Internet connectivity is later required, NAT/PAT at the edge can translate these private addresses to a public address.
192.0.0.0/8 is not an RFC 1918 private range. The private range in the 192 space is specifically 192.168.0.0/16. The 192.0.0.0/8 block includes various special-use and publicly routable ranges (for example, 192.0.2.0/24 is TEST-NET-1 for documentation), but it is not the standard private addressing block used for internal networks.
209.165.201.0/24 is a public IPv4 network (not RFC 1918). In many Cisco CCNA labs and examples, 209.165.200.0/24 or similar ranges are used to represent an ISP/public Internet segment. Devices can certainly communicate within this subnet, but it is not an internal-only private range; it is intended to be globally routable rather than isolated from the Internet.
Core Concept: This question tests knowledge of private IPv4 addressing (RFC 1918). Private IP networks are not routable on the public Internet. Devices can communicate internally (within the private network and between private networks via routing/VPN), but to reach the Internet they typically require Network Address Translation (NAT).
Why the Answer is Correct: RFC 1918 defines three private IPv4 ranges:
1) 10.0.0.0/8
2) 172.16.0.0/12 (172.16.0.0 through 172.31.255.255)
3) 192.168.0.0/16
Option B is 172.28.0.0/16, which falls inside 172.16.0.0/12, so it is a private network. Using this range allows devices to communicate on a LAN/WAN without needing Internet connectivity, and it prevents accidental global routing on the public Internet.
Key Features / Best Practices:
- Private addressing is used for internal networks to conserve public IPv4 space.
- To access the Internet from private IPs, configure NAT/PAT on an edge router/firewall.
- Use proper subnetting and summarization; for example, 172.16.0.0/12 is a large block often subdivided into /16s or smaller.
- Ensure internal routing (static, OSPF, EIGRP, etc.) is in place so private subnets can reach each other.
Common Misconceptions:
- Many assume any 172.x.x.x network is private. That is incorrect: only 172.16.0.0–172.31.255.255 is private. Therefore 172.9.0.0/16 (Option A) is public.
- 192.0.0.0/8 (Option C) looks like it might be private because it starts with 192, but the private range is specifically 192.168.0.0/16.
- Some exam questions use 209.165.201.0/24 (Option D) as a “public/ISP” example network in Cisco materials; it is not RFC 1918 private.
Exam Tips: Memorize RFC 1918 exactly and be able to quickly check boundaries: 172 private starts at 172.16 and ends at 172.31. If a 172 address is outside that window (like 172.9.x.x), it is not private. Also remember: 192.168 is private, not all 192.x.
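The boundary check described above is easy to verify with Python's standard `ipaddress` module; a minimal sketch (the `is_rfc1918` helper is our own name, but the three RFC 1918 blocks are exactly those listed):

```python
import ipaddress

# The three RFC 1918 private IPv4 blocks.
RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(network: str) -> bool:
    """True if the given network lies entirely inside an RFC 1918 block."""
    net = ipaddress.ip_network(network, strict=False)
    return any(net.subnet_of(block) for block in RFC1918)
```

Checking the four options: `is_rfc1918("172.28.0.0/16")` is True, while `"172.9.0.0/16"`, `"192.0.0.0/8"`, and `"209.165.201.0/24"` are all False — note that 192.0.0.0/8 fails because a /8 cannot be a subnet of 192.168.0.0/16, matching the explanation for Option C.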
Which mode allows access points to be managed by Cisco Wireless LAN Controllers?
Bridge mode refers to an AP operating as a wireless bridge (for example, connecting two wired networks over a wireless link, or certain mesh/bridging scenarios). While some bridge/mesh deployments can be controller-based, “bridge” is not the defining mode for WLC management. The question is asking for the AP mode that is specifically intended to be managed by a WLC, which is lightweight.
Lightweight mode is the correct answer because lightweight APs are designed to be centrally managed by a Cisco Wireless LAN Controller. They use CAPWAP to join a WLC, receive configuration (SSIDs, security, RF parameters), and rely on the controller for centralized control and management features such as RRM, roaming coordination, and policy enforcement.
Mobility Express is a solution where one AP runs an embedded wireless controller function and manages other APs. It can look like “controller-managed,” but it is not the classic model of APs being managed by a dedicated Cisco WLC appliance/VM. On CCNA-style questions, “managed by Cisco WLC” typically maps to lightweight mode, not Mobility Express.
Autonomous mode means the AP operates independently without a WLC. Configuration and control functions reside on the AP itself (often configured via CLI/GUI per device). Autonomous APs can provide WLAN services, but they are not managed by a Cisco Wireless LAN Controller, which is exactly what the question is asking about.
Core Concept: This question tests Cisco AP operating modes and how they are managed. In Cisco wireless architectures, access points can run either as autonomous (controller-less) devices or as lightweight APs that are centrally managed by a Cisco Wireless LAN Controller (WLC) using CAPWAP.
Why the Answer is Correct: Lightweight mode is specifically designed for controller-based deployments. A lightweight AP does not hold the full WLAN configuration locally; instead, it discovers a WLC (via DHCP option 43, DNS entry like CISCO-CAPWAP-CONTROLLER, broadcast on the local subnet, or previously learned controller info), establishes a CAPWAP tunnel to the WLC, downloads its configuration, and is then managed centrally. The WLC handles key functions such as RF management (RRM), security policy enforcement, client authentication integration, and consistent SSID/WLAN provisioning.
Key Features / Best Practices: In lightweight deployments, the WLC provides centralized configuration, monitoring, and troubleshooting (e.g., client state, RF interference, rogue detection). CAPWAP (UDP 5246/5247) is the control/data protocol used between AP and controller. This architecture scales well and supports enterprise features like seamless roaming, consistent QoS, and coordinated channel/power planning. From an exam perspective, remember: “WLC-managed AP” almost always implies “lightweight AP.”
Common Misconceptions: Autonomous mode can sound like it “manages itself,” but it is not managed by a WLC; it is configured per-AP (or via management tools) and runs the full control plane locally. Mobility Express can be confused with lightweight because it uses a controller-like approach, but it is an embedded controller running on an AP, not a separate WLC managing APs in the traditional sense. Bridge mode is a role/function (often for point-to-point/mesh bridging) and does not inherently mean WLC management.
Exam Tips: If the question asks “managed by Cisco WLC,” choose lightweight.
If it says “no controller,” choose autonomous. If it mentions “one AP acts as controller for others,” that points to Mobility Express (or newer embedded controller concepts), but the classic WLC-managed mode remains lightweight.
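One of the WLC discovery methods mentioned above, DHCP option 43, carries controller addresses as a TLV: per Cisco's documentation for lightweight Aironet APs, suboption 241 (0xF1) holds a list of 4-byte WLC IPv4 addresses. The parser below is a sketch of that layout (the function name and error handling are our own, and real option 43 handling on an AP is more involved):

```python
def parse_option43_wlc_ips(data: bytes) -> list:
    """Walk DHCP option 43 TLVs and return WLC IPv4 addresses from suboption 241 (0xF1)."""
    ips, i = [], 0
    while i + 2 <= len(data):
        subopt, length = data[i], data[i + 1]
        value = data[i + 2 : i + 2 + length]
        if subopt == 0xF1:  # Cisco lightweight AP controller address list
            for j in range(0, len(value) - 3, 4):
                ips.append(".".join(str(b) for b in value[j : j + 4]))
        i += 2 + length  # skip to the next TLV
    return ips
```

For example, the bytes `f1 04 c0 a8 01 0a` decode to one controller at 192.168.1.10; an AP that learns this address sends CAPWAP discovery requests to it.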
Which two outcomes are predictable behaviors for HSRP? (Choose two.)
Correct. HSRP routers in the same group elect an Active router and a Standby router. The Active router forwards traffic sent to the HSRP virtual IP/MAC, while the Standby monitors hellos and takes over if the Active fails. Election uses priority (default 100) and then highest IP address as a tiebreaker; preemption can allow a higher-priority router to regain Active.
Incorrect. In HSRP, the routers do not “share the same interface IP address.” Each router keeps its own unique physical interface IP, and the group shares a separate virtual IP. Also, HSRP does not automatically load-balance default gateway traffic for a single VIP; it is active/standby per group. Load sharing requires multiple groups, not one shared interface IP.
Incorrect. HSRP does not synchronize router configurations. It only provides first-hop redundancy by coordinating which router owns the virtual IP/MAC and should forward packets. Configuration consistency (routing, ACLs, NAT, etc.) must be managed separately (templates, automation, change control). Assuming config sync is a common misunderstanding when learning FHRPs.
Incorrect. While each router does have a different physical IP address, HSRP does not have both routers simultaneously act as the default gateway for the same virtual IP, nor does it inherently load-balance traffic for that gateway. Only the Active router forwards for the VIP in a given HSRP group. Load balancing is more associated with GLBP or with multiple HSRP groups.
Correct. HSRP creates a virtual IP address (and virtual MAC) that hosts use as their default gateway. The Active router answers ARP for the VIP and forwards traffic. If the Active fails, the Standby assumes the VIP/MAC, allowing hosts to keep the same default gateway setting and continue communicating with minimal interruption.
Core concept: HSRP (Hot Standby Router Protocol) is a first-hop redundancy protocol (FHRP) used to provide a highly available default gateway for hosts on a LAN. Instead of hosts pointing to a physical router interface IP as their gateway, they use an HSRP virtual IP (VIP) and virtual MAC. If the active router fails, another router takes over the VIP/MAC so host traffic continues with minimal disruption.
Why the answers are correct: A is correct because HSRP uses an election process to determine roles: one router becomes the Active router (forwards traffic sent to the virtual gateway) and another becomes the Standby router (ready to take over). The election is based primarily on HSRP priority (default 100) and then highest interface IP address as a tiebreaker. Preemption can be enabled so a higher-priority router can reclaim Active after recovery.
E is correct because HSRP presents a shared virtual IP address to end hosts. Hosts configure their default gateway as this VIP, not the physical IP of either router. The Active router answers ARP for the VIP using the HSRP virtual MAC, ensuring hosts always send frames to the same L2 destination even during failover.
Key features / best practices: HSRP uses hello messages (UDP 1985) to maintain state and detect failure. Timers (hello/hold) control failover speed. Interface tracking can decrement priority if an uplink fails, triggering a role change even if the router itself is still up. Use authentication where appropriate and consider aligning HSRP Active with the STP root to avoid suboptimal traffic paths.
Common misconceptions: HSRP does not inherently load-balance default gateway traffic with a single group; it is active/standby per group. Load sharing is possible only by configuring multiple HSRP groups and splitting VLANs/hosts across different VIPs. Also, HSRP does not synchronize full router configurations; it only coordinates gateway redundancy state.
Exam tips: Remember: HSRP = one VIP + one virtual MAC per group, with Active forwarding and Standby waiting. If you see “shared virtual IP used as default gateway” and “active/standby election,” that’s HSRP. If you see “both forward simultaneously for the same gateway,” that points more toward GLBP (not HSRP).
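The election rule — highest priority wins, highest interface IP breaks ties — can be expressed in a few lines. This is a sketch of the comparison only (the tuple representation is our own; real HSRP state machines also handle preemption, timers, and tracking):

```python
import ipaddress

def hsrp_active(routers):
    """routers: list of (priority, interface_ip) tuples.
    Returns the router that wins the Active election: highest priority,
    then highest interface IP address as the tiebreaker."""
    return max(routers, key=lambda r: (r[0], ipaddress.ip_address(r[1])))
```

With priorities 100 and 110, the priority-110 router wins regardless of IP; with equal priorities, the router with the numerically higher interface IP becomes Active.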
Which command prevents passwords from being stored in the configuration as plain text on a router or switch?
"enable secret" sets the password required to enter privileged EXEC mode and stores it using a one-way hash (Type 5/8/9 depending on IOS/platform). It is a best practice over "enable password" for securing enable access. However, it does not globally encrypt other passwords (like console/VTY line passwords) in the configuration, so it does not meet the question’s requirement.
"enable password" sets the privileged EXEC password but stores it in cleartext by default in the configuration. If "service password-encryption" is enabled, it may be obfuscated (Type 7), but by itself it does not prevent plain-text storage. It is also not recommended because "enable secret" provides stronger protection for privileged access.
"service password-encryption" is a global configuration command that obfuscates plaintext passwords in the configuration so they are not displayed or saved as readable text. It converts many "password" entries (console, VTY, etc.) into Type 7 encrypted strings. While Type 7 is weak and reversible, it directly addresses the question: preventing passwords from being stored as plain text in the config.
"username cisco password encrypt" is not the standard IOS syntax for encrypting a local username password. Typically you use "username <name> secret <password>" (preferred, hashed) or "username <name> password <password>" (plaintext unless service password-encryption is enabled). Because the command shown is not correct/standard and does not provide the global behavior asked, it is not the right answer.
Core Concept: This question tests how Cisco IOS stores passwords in the running/startup configuration and which feature prevents them from appearing in cleartext. Cisco devices can store certain passwords (console, VTY, AUX, some local user passwords, etc.) as plain text unless a global obfuscation feature is enabled.
Why the Answer is Correct: The command "service password-encryption" enables Cisco IOS to encrypt (obfuscate) all plaintext passwords that are stored in the configuration using Cisco’s Type 7 reversible encryption. After enabling it, passwords configured with the "password" keyword (for example, under line console 0 or line vty 0 4) will no longer be displayed as readable text in "show running-config" or saved in startup-config as plain text.
Key Features / Best Practices:
- "service password-encryption" is a global configuration command.
- It primarily affects passwords entered with the "password" keyword (line passwords, some username passwords depending on syntax/IOS behavior).
- The resulting encryption is Type 7, which is weak and reversible; it is meant to prevent casual shoulder-surfing or accidental disclosure, not to provide strong security.
- Best practice is to use stronger mechanisms where available: "enable secret" (Type 5/8/9 depending on platform/IOS) for privileged EXEC access, and "username <name> secret" for local users, plus AAA and external authentication (RADIUS/TACACS+).
Common Misconceptions:
- Many assume "enable secret" encrypts all passwords in the config. It only protects the enable (privileged EXEC) password, not line passwords.
- "enable password" is often confused with encryption; it is stored in cleartext unless service password-encryption is enabled (and even then it becomes Type 7, still weak).
- Options that mention “encrypt” in the command text can be misleading; the actual IOS command to globally obfuscate stored passwords is "service password-encryption".
Exam Tips:
- If the question asks about preventing passwords from being stored as plain text in the configuration (broadly), think "service password-encryption".
- If it asks for the best way to secure privileged mode access, think "enable secret".
- Remember: Type 7 is reversible and not considered secure; CCNA often tests the distinction between obfuscation (Type 7) and hashing (enable secret / username secret).
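To see why Type 7 counts as obfuscation rather than security, note that it is just an XOR against a fixed, publicly known key string. The decoder below is a sketch of that widely documented scheme (the function name is our own): the first two digits of the stored string give a starting offset into the key, and each subsequent hex pair is one password byte XORed with a key byte.

```python
# Fixed key string used by Cisco's Type 7 scheme (publicly known).
XLAT = "dsfd;kfoA,.iyewrkldJKDHSUBsgvca69834ncxv9873254k;fg87"

def decrypt_type7(hash7: str) -> str:
    """Reverse a Cisco Type 7 string back to plaintext."""
    offset = int(hash7[:2])          # first two digits: starting index into the key
    pairs = hash7[2:]                # remainder: hex pairs, one per password char
    chars = []
    for i in range(0, len(pairs), 2):
        byte = int(pairs[i:i + 2], 16)
        chars.append(chr(byte ^ ord(XLAT[(offset + i // 2) % len(XLAT)])))
    return "".join(chars)
```

For example, the Type 7 string `0822455D0A16` decodes to `cisco` — which is exactly why the exam stresses that Type 7 deters shoulder-surfing but is not strong protection.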
ip arp inspection vlan 2
interface fastethernet 0/1
switchport mode access
switchport access vlan 2
Refer to the exhibit. What is the effect of this configuration?
Incorrect. DAI does not keep an access port administratively down until it connects to another switch. Administrative down is controlled by the "shutdown" command. DAI is a control-plane/data-plane inspection feature that can drop ARP packets, but it does not require the port to be an uplink nor does it change the admin state based on what is connected.
Incorrect. Dynamic ARP Inspection is not disabled just because an ARP ACL is missing. DAI can validate ARP packets using the DHCP snooping binding table (most common) and/or ARP ACLs. If neither a binding nor an ACL entry exists for a sender, ARP packets may be dropped, but the feature itself remains enabled for the VLAN.
Correct. With "ip arp inspection vlan 2" configured, DAI is enabled for VLAN 2. All ports in that VLAN are untrusted by default unless explicitly configured with "ip arp inspection trust" under the interface. Since Fa0/1 is an access port in VLAN 2 and no trust command is present, its trust state is untrusted.
Incorrect. The port does not remain down waiting for a trust/untrust configuration. Untrusted is the default state, so no additional configuration is required for the port to come up. DAI may drop ARP packets that fail validation, but it does not keep the interface link down or require explicit trust/untrust commands to enable the port.
Core concept: This question tests Dynamic ARP Inspection (DAI), a Layer 2 security feature that validates ARP packets on untrusted switch ports to prevent ARP spoofing/poisoning (man-in-the-middle). DAI is enabled per-VLAN and uses a trusted/untrusted port model.
Why the answer is correct: The command "ip arp inspection vlan 2" enables DAI for VLAN 2 globally on the switch. Once enabled for a VLAN, all switch ports in that VLAN are, by default, in the untrusted state unless explicitly configured with "ip arp inspection trust". The interface FastEthernet0/1 is configured as an access port in VLAN 2, and there is no trust command under the interface. Therefore, Fa0/1’s DAI trust state is untrusted.
Key features, behavior, and best practices:
- DAI inspects ARP requests and replies received on untrusted ports.
- Validation is typically performed against the DHCP snooping binding table (most common) or against ARP ACLs (static mappings) when DHCP snooping is not available.
- Trusted ports (usually uplinks toward other switches, DHCP servers, or routers) bypass DAI checks; untrusted ports (typically user access ports) are inspected.
- If DAI is enabled and there is no valid binding/ACL entry for a host, that host’s ARP packets may be dropped, but the physical/link state of the port does not go down.
Common misconceptions: A frequent trap is thinking DAI requires an ARP ACL to be “enabled.” In reality, DAI can be enabled without an ARP ACL; it will then rely on DHCP snooping bindings (if present). Another misconception is that ports go administratively down when DAI is enabled or when trust is not configured—DAI drops invalid ARP traffic; it does not shut the interface.
Exam tips: Remember these defaults: enabling DAI is per-VLAN, and ports are untrusted by default. Also distinguish between “traffic is filtered/dropped” (DAI) versus “interface is shut down” (administrative state). For CCNA, associate DAI closely with DHCP snooping and the trusted/untrusted port concept.
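The trusted/untrusted decision logic can be modeled in a few lines. This is a deliberately simplified sketch (a real DHCP snooping binding table also tracks VLAN, port, and lease time; the dict and function names here are illustrative):

```python
# Illustrative DAI check: validate a received ARP packet's sender MAC/IP pair
# against a DHCP snooping binding table, modeled here as {mac: ip}.
def dai_permit(bindings: dict, sender_mac: str, sender_ip: str,
               port_trusted: bool = False) -> bool:
    if port_trusted:
        return True  # trusted ports (uplinks) bypass inspection entirely
    # Untrusted ports: permit only if the MAC/IP pair matches a binding.
    return bindings.get(sender_mac) == sender_ip
```

A host whose ARP claims an IP different from its DHCP-learned binding is dropped on an untrusted port, while the same packet arriving on a trusted uplink passes — the interface itself never goes down in either case.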
Which two encoding methods are supported by REST APIs? (Choose two.)
SGML (Standard Generalized Markup Language) is an older markup standard and the conceptual ancestor of HTML and XML. While it influenced modern markup languages, it is not used as a typical REST API payload format. REST APIs overwhelmingly use JSON and/or XML with well-known HTTP media types, not SGML.
YAML is a human-friendly data serialization format commonly used for configuration and automation (for example, Ansible playbooks). Although some niche APIs may support YAML, it is not considered a standard or commonly supported REST API encoding method for CCNA exam purposes compared to JSON and XML.
XML is a widely supported structured data format for REST APIs. It is commonly carried over HTTP using Content-Type: application/xml (or text/xml). XML is prevalent in enterprise environments and legacy integrations, supports schemas and namespaces, and remains a common alternative to JSON in API content negotiation.
JSON is the most common REST API payload format today due to its simplicity and low overhead. It is typically indicated with Content-Type: application/json and requested with Accept: application/json. JSON maps naturally to objects and arrays, making it easy for clients and servers to parse and generate in network automation workflows.
EBCDIC is a character encoding used primarily on IBM mainframe systems. It describes how characters are represented as bytes, not how structured data is modeled like JSON or XML. REST APIs may use character encodings (most commonly UTF-8), but EBCDIC is not an API payload encoding method in the sense tested here.
Core Concept: This question tests REST API data representation (payload encoding/serialization) and content negotiation. REST APIs commonly exchange structured data in the HTTP message body, and the format is indicated using MIME types such as Content-Type (request body) and Accept (desired response format).
Why the Answer is Correct: JSON and XML are the two most widely supported and standardized encoding formats for REST APIs. JSON (application/json) is lightweight, easy to parse, and maps naturally to objects used in modern programming languages, making it the de facto standard for many RESTful services. XML (application/xml or text/xml) is also broadly supported, especially in enterprise and legacy integrations, and provides strong structure with schemas and namespaces.
Key Features / Best Practices:
- REST commonly uses HTTP with JSON/XML payloads and standard verbs (GET, POST, PUT, PATCH, DELETE).
- Content negotiation: clients send Accept: application/json (or application/xml) and servers respond accordingly when supported.
- Many network automation platforms (including Cisco controllers and management APIs) default to JSON but may offer XML for compatibility.
- For CCNA-level automation, focus on recognizing JSON/XML and the associated headers rather than deep schema design.
Common Misconceptions: YAML is popular in automation tools (e.g., Ansible playbooks) and configuration files, so it can seem like an API encoding. However, YAML is not as universally supported as JSON/XML in REST APIs and is less commonly offered as an HTTP API media type in mainstream products. SGML is a predecessor to XML and not used for modern REST payloads. EBCDIC is a character encoding (like ASCII/UTF-8), not a structured API data format.
Exam Tips: When CCNA questions ask about REST API “encoding methods” or “data formats,” the safest, most standard answers are JSON and XML.
Also remember the related HTTP headers: Content-Type describes what you are sending; Accept describes what you want back. If you see YAML in options, treat it as more automation-file-oriented unless the question explicitly states the API supports it.
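To make the two encodings and their headers concrete, here is the same small payload serialized both ways with the Python standard library (the `interface` data and XML element names are invented for illustration; the media types in the headers are the standard ones):

```python
import json
import xml.etree.ElementTree as ET

interface = {"name": "GigabitEthernet1", "enabled": True}

# JSON body plus the headers a client would send with it.
json_body = json.dumps(interface)
json_headers = {"Content-Type": "application/json", "Accept": "application/json"}

# The same data as XML (element names here are illustrative, not a real schema).
root = ET.Element("interface")
ET.SubElement(root, "name").text = interface["name"]
ET.SubElement(root, "enabled").text = str(interface["enabled"]).lower()
xml_body = ET.tostring(root, encoding="unicode")
xml_headers = {"Content-Type": "application/xml", "Accept": "application/xml"}
```

Either body could be POSTed to a REST endpoint; the server uses Content-Type to parse the request and Accept to choose the response format.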
What event has occurred if a router sends a notice level message to a syslog server?
A certificate expiring can generate syslog messages, but it is not the typical, canonical event associated with severity level 5 in Cisco IOS exam contexts. Certificate-related logs often appear as warnings (level 4) or errors (level 3) depending on the feature (PKI, HTTPS, VPN) and timing. CCNA questions usually map notice/notifications to interface or protocol state changes.
Interface/line protocol state changes are a classic example of syslog severity level 5 (notifications/notice) on Cisco devices. Messages like “%LINEPROTO-5-UPDOWN” or interface up/down transitions are considered normal but significant operational events. These are exactly the kinds of events commonly forwarded to a syslog server when logging trap is set to notifications (level 5).
TCP connection teardown messages can be logged, but they are not the most common CCNA association with a notice-level (5) syslog event. TCP session establishment/teardown logging is more typical in firewall contexts (e.g., ASA) or with specific debugging/feature logs, and the severity can vary. The exam’s best match for level 5 is interface status change.
ICMP does not build or tear down connections because it is a connectionless protocol used for control/diagnostic messages (e.g., echo request/reply). Therefore, “an ICMP connection has been built” is conceptually incorrect. Even if ICMP-related events are logged, they would not be described as a connection being built, making this option invalid.
Core Concept: This question tests Cisco IOS syslog severity levels and the types of events commonly associated with each level. Cisco syslog messages use severity levels from 0 (emergencies) to 7 (debugging). Level 5 is called notifications in Cisco IOS and represents normal but significant conditions. Why the Answer is Correct: A level 5 syslog message is commonly generated for important operational state changes that are not failures but still matter to administrators. A classic Cisco IOS example is an interface or line protocol transition, such as an interface changing state to up or down. Messages like "%LINEPROTO-5-UPDOWN" are standard examples of severity 5 notifications and are frequently sent to a syslog server. Key Features: Cisco syslog severities are 0 emergencies, 1 alerts, 2 critical, 3 errors, 4 warnings, 5 notifications, 6 informational, and 7 debugging. Administrators control what is sent to a syslog server with commands such as "logging host <server-ip>" and "logging trap notifications". Interface state-change messages are especially useful for troubleshooting link flaps and correlating outages with physical or protocol changes. Common Misconceptions: Some learners assume a certificate issue must be the best match because it sounds important, but certificate expiration is not the standard severity-5 example tested in CCNA. Others may choose TCP teardown because it is a logged event in some platforms, but it is not the canonical Cisco IOS notification-level event. ICMP is connectionless, so describing it as building a connection is technically incorrect. Exam Tips: Memorize the Cisco syslog levels and strongly associate level 5 notifications with normal but significant operational changes, especially interface up/down and line protocol transitions. When a CCNA question asks about a notification-level syslog event, think first of interface status changes rather than security, certificate, or session-level events.
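The severity digit is embedded right in the message mnemonic (FACILITY-SEVERITY-MNEMONIC), so extracting it is a one-regex exercise. A minimal sketch (the function name is our own; the level names match the Cisco IOS severity table above):

```python
import re

# Cisco syslog severity names, indexed by level 0-7.
SEVERITIES = ["emergencies", "alerts", "critical", "errors",
              "warnings", "notifications", "informational", "debugging"]

def severity_of(msg: str) -> str:
    """Extract the severity name from a Cisco-style mnemonic like %LINEPROTO-5-UPDOWN."""
    m = re.search(r"%\w+-(\d)-\w+", msg)
    if not m:
        raise ValueError("no Cisco syslog mnemonic found")
    return SEVERITIES[int(m.group(1))]
```

Feeding it a line-protocol transition message returns "notifications", the level 5 name the question is testing.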
What is the difference between RADIUS and TACACS+?
Incorrect. TACACS+ is the protocol commonly associated with command accounting (logging commands executed on a network device) and can provide very granular accounting records. RADIUS accounting typically tracks session start/stop and interim updates for network access sessions, not per-command CLI logging. The option reverses the typical strengths of the protocols.
Correct. TACACS+ supports a clear separation of Authentication and Authorization, enabling granular control such as per-command authorization on Cisco IOS devices. RADIUS generally merges authentication and authorization by returning authorization attributes as part of the authentication decision (Access-Accept), making it less suited for detailed device-administration authorization.
Incorrect. This is reversed. TACACS+ encrypts most of the payload (everything except the header), providing better confidentiality for AAA exchanges. RADIUS encrypts only the user password in the Access-Request; other attributes are not fully encrypted, though integrity protections exist via authenticators and shared secrets.
Incorrect/misleading. RADIUS is not limited to dial authentication; it is widely used for many access types (802.1X for Wi-Fi/wired, VPN remote access, NAC). TACACS+ can be used for authentication too, but it is most appropriate for network device administration. The statement implies RADIUS is narrow-use, which is not true.
Core Concept: This question tests AAA (Authentication, Authorization, Accounting) protocol differences between RADIUS and TACACS+. Both are used to centrally control who can access network devices (routers/switches) or network services (VPN, Wi-Fi/802.1X), but they differ in how they handle AAA functions and what they protect on the wire.

Why the Answer is Correct: B is correct because TACACS+ is designed to separate Authentication and Authorization (and Accounting) into distinct processes. This enables granular, per-command authorization on network devices (for example, allowing a user to run show commands but not configure terminal). RADIUS, in contrast, generally combines authentication and authorization in a single Access-Accept/Reject decision that includes authorization attributes, making it less granular for device administration.

Key Features / Best Practices:
- TACACS+ (TCP/49): Common for device administration (CLI access). Supports per-command authorization and detailed accounting of commands. Encrypts the entire TACACS+ payload (except the header), improving confidentiality.
- RADIUS (UDP/1812 auth, 1813 acct; legacy 1645/1646): Common for network access (802.1X, VPN, dial-in). Encrypts only the user password in the Access-Request; other attributes are not fully encrypted (integrity is typically provided via shared secret and authenticators). For stronger protection, use RADIUS over DTLS/TLS where supported.
- In Cisco environments, TACACS+ is often preferred for administrative access to IOS/IOS-XE devices; RADIUS is often preferred for user/network access control.

Common Misconceptions: Many learners confuse encryption behavior: TACACS+ encrypts more of the payload than RADIUS, not less. Another trap is assuming RADIUS is only for dial-up; while it originated there, it is widely used for modern access control (Wi-Fi, NAC, VPN).
Exam Tips: Memorize these anchors: “TACACS+ = device admin + per-command authorization + TCP + encrypts payload” and “RADIUS = network access (802.1X/VPN) + UDP + password-only encryption.” When you see “separates authn/authz” or “command authorization,” think TACACS+.

References (conceptual): Cisco AAA configuration guides and IETF RADIUS RFCs (e.g., RFC 2865/2866) describe RADIUS behavior; TACACS+ is documented in Cisco AAA/TACACS+ guidance.
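The per-command authorization and accounting behavior described above can be sketched as a minimal Cisco IOS AAA configuration using TACACS+. This is a hedged illustration, not a production template: the server name TAC1, the address 192.0.2.10 (documentation range), and the shared secret are placeholders.

```
! Enable the AAA subsystem
aaa new-model

! Define a TACACS+ server (example name, address, and key)
tacacs server TAC1
 address ipv4 192.0.2.10
 key MySharedSecret

! Authenticate logins against TACACS+, falling back to local accounts
aaa authentication login default group tacacs+ local

! Authorize each privilege-15 command individually via TACACS+
! (the granular, per-command control that distinguishes TACACS+)
aaa authorization commands 15 default group tacacs+ local

! Account for every privilege-15 command executed on the device
aaa accounting commands 15 default start-stop group tacacs+
```

The "aaa authorization commands" and "aaa accounting commands" lines are exactly the capability the correct answer highlights: the TACACS+ server can permit "show running-config" while denying "configure terminal" for the same user, and log each command, which RADIUS's combined Access-Accept model does not support with the same granularity.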