
What is the default behavior of a Layer 2 switch when a frame with an unknown destination MAC address is received?
Incorrect. A switch does not add the destination MAC address to the MAC address table just because it appears as a destination in a received frame. MAC learning is based on the source MAC address (and VLAN/ingress port). If the destination is unknown, the switch cannot forward it to a single port; it floods it within the VLAN instead.
Incorrect. In normal operation, unknown unicast frames are handled in hardware (ASIC) and flooded within the VLAN; they are not typically sent to the CPU for “destination MAC learning.” The switch learns MAC addresses from the source field of frames it receives, not by punting frames to the CPU to discover destinations.
Correct. If the destination MAC is not in the MAC address table, the switch treats it as an unknown unicast and floods the frame out all other ports in the same VLAN (excluding the ingress port). This ensures the frame can still reach the destination, and the return traffic allows the switch to learn the destination MAC-to-port mapping.
Incorrect. Dropping unknown unicast frames is not the default behavior of a Layer 2 switch. If switches dropped unknown destinations, initial communication would fail whenever MAC entries are missing (e.g., after aging or reboot). Flooding is the default mechanism to deliver the frame and enable subsequent MAC learning via the destination’s reply.
Core Concept: This question tests Layer 2 switching behavior when the destination MAC address is unknown (an "unknown unicast" frame). A switch makes forwarding decisions using its MAC address table (CAM table), which maps MAC addresses to switch ports per VLAN.

Why the Answer is Correct: When a Layer 2 switch receives a frame, it first learns the source MAC address by recording the source MAC and the ingress port in the MAC address table (within that VLAN). Then it looks up the destination MAC. If the destination MAC is not in the table (unknown), the switch cannot determine the correct egress port. The default behavior is to flood the frame out all ports in the same VLAN except the port it was received on. This flooding increases the chance the frame reaches the correct destination, and when the destination replies, the switch learns that MAC-to-port mapping, reducing future flooding.

Key Features / Best Practices: Flooding is constrained to the VLAN (broadcast domain) of the incoming frame; trunks carry it only for that VLAN. Flooding also applies to broadcasts (FF:FF:FF:FF:FF:FF) and unknown multicast (depending on features like IGMP snooping). To reduce unnecessary flooding, networks rely on stable MAC learning, appropriate VLAN design, and avoiding loops (STP). Security features like port security, DHCP snooping, and dynamic ARP inspection don't change the fundamental unknown-unicast flooding behavior, but can limit abuse.

Common Misconceptions: Many assume the switch "learns" the destination MAC from the unknown frame; switches learn from the source MAC, not the destination. Others think the switch sends unknown frames to the CPU; in normal hardware switching, flooding is done in ASICs, not punted to the CPU. Dropping unknown unicasts is not default behavior; it would break basic Ethernet communication when the table is empty or aged out.

Exam Tips: Remember the three classic Layer 2 forwarding cases: 1) Known unicast: forward to the single learned port. 2) Unknown unicast: flood within the VLAN (except ingress). 3) Broadcast/many multicasts: flood within the VLAN. Also remember MAC table entries age out (commonly ~300 seconds on Cisco), so unknown-unicast flooding can reappear after inactivity.
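The learn-then-flood logic described above can be sketched as a short Python model (a study aid only; real switches do this in ASIC hardware, and the function and variable names here are invented):

```python
# Toy model of Layer 2 forwarding: learn from the source MAC,
# then forward on the destination MAC (or flood if unknown).
mac_table = {}  # (vlan, mac) -> port

def handle_frame(ingress_port, vlan, src_mac, dst_mac, vlan_ports):
    # Learning always uses the SOURCE MAC plus the ingress port.
    mac_table[(vlan, src_mac)] = ingress_port
    # The forwarding decision uses the DESTINATION MAC.
    egress = mac_table.get((vlan, dst_mac))
    if egress is not None:
        return [egress]  # known unicast: single learned port
    # Unknown unicast: flood within the VLAN, excluding the ingress port.
    return [p for p in vlan_ports if p != ingress_port]
```

Running a first frame from host A toward an unknown host B floods it; B's reply is then forwarded out a single port, because the table was populated from B's source MAC.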
How does HSRP provide first hop redundancy?
Incorrect. Flooding traffic out all interfaces in the same VLAN describes Layer 2 switching behavior for unknown unicast/broadcast/multicast, not HSRP. HSRP operates at the default-gateway (first-hop) function using a virtual IP/MAC and router state machines. It does not provide redundancy by VLAN flooding and does not “load-balance Layer 2 traffic along the path.”
Correct. HSRP provides first-hop redundancy by having multiple routers share a virtual IP address and a virtual MAC address that hosts use as their default gateway. One router is Active and forwards traffic for the virtual gateway; another is Standby and takes over if the Active fails. This keeps the gateway identity stable for hosts and enables fast failover.
Incorrect. Forwarding multiple packets to the same destination over different routed links describes per-packet load balancing or multipath forwarding concepts, not first-hop redundancy. HSRP does not split traffic across multiple routed links for the same flow; it ensures a single default gateway remains available by failing over Active/Standby roles.
Incorrect. Assigning the same metric to multiple routes in the routing table describes equal-cost multipath (ECMP) routing. ECMP is a routing-plane feature used to load-share across multiple next hops to a destination. HSRP is not a routing protocol and does not manipulate route metrics; it provides a redundant default gateway on a LAN via a virtual IP/MAC.
Core Concept: HSRP (Hot Standby Router Protocol) is a Cisco First Hop Redundancy Protocol (FHRP). FHRPs solve the "default gateway single point of failure" problem on a LAN by making multiple routers present a single, resilient default gateway to hosts.

Why the Answer is Correct: HSRP forms a group of routers that share a virtual IP address (VIP) and a corresponding virtual MAC address. Hosts configure their default gateway as the VIP, not a physical router interface IP. Within the HSRP group, one router is elected Active and actually forwards traffic sent to the virtual MAC; another is Standby and is ready to take over. If the Active router fails (or its tracked uplink fails), the Standby transitions to Active and assumes forwarding using the same VIP/virtual MAC. Because the gateway IP and MAC remain consistent from the host perspective, hosts typically do not need to change configuration or wait for ARP to learn a new gateway MAC (HSRP can send gratuitous ARPs to speed convergence).

Key Features / How It Works:
- Roles and election: Active/Standby chosen by priority (default 100) and tie-breaker highest interface IP. Preemption can be enabled so a higher-priority router can reclaim Active.
- Virtual addressing: VIP is configured per group; virtual MAC is derived from the group number (v1) or other scheme (v2). Hosts ARP for the VIP and learn the virtual MAC.
- Timers and failover: Hello/hold timers detect failure; faster timers improve convergence.
- Tracking: Interface/object tracking can decrement priority when an uplink fails, triggering a controlled failover even if the LAN interface stays up.

Common Misconceptions: Many confuse HSRP with load balancing or routing path selection. HSRP's primary purpose is gateway redundancy, not ECMP or L2 flooding. While multiple HSRP groups can be used to share load across VLANs (or per-group), a single HSRP group still uses one Active forwarder at a time.

Exam Tips: If you see "virtual IP" + "virtual MAC" + "default gateway redundancy," think FHRP (HSRP/VRRP/GLBP). HSRP specifically uses Active/Standby with a shared VIP and virtual MAC to provide first-hop redundancy for hosts on a subnet.
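The election rule can be captured in a couple of lines of Python (illustrative only; the function name is invented): highest priority wins, and the highest interface IP breaks ties.

```python
import ipaddress

def elect_active(routers):
    """routers: iterable of (name, priority, interface_ip) tuples.
    Highest HSRP priority wins; highest interface IP breaks a tie."""
    return max(routers, key=lambda r: (r[1], ipaddress.IPv4Address(r[2])))[0]
```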
An engineer is asked to protect unused ports that are configured in the default VLAN on a switch. Which two steps will fulfill the request? (Choose two.)
Incorrect. Configuring unused ports as trunks is insecure because trunks can carry multiple VLANs and may allow unintended access if connected to another switch or a malicious device. Trunking also introduces risks around negotiation (DTP) and VLAN hopping scenarios if misconfigured. For unused ports, best practice is access mode + shutdown, not trunk mode.
Incorrect. CDP is a Layer 2 discovery protocol that advertises device information (platform, software version, management addresses). Enabling it on unused ports does not protect them; it can actually leak useful reconnaissance data to an attacker who plugs into the port. Security best practice is often to disable CDP on untrusted/user-facing ports.
Correct. Setting the port to access mode and assigning it to an unused/parking VLAN (for example VLAN 99) removes it from the default VLAN 1. If the port is accidentally enabled later, it will not provide access to the primary user VLANs. This is a common switch hardening baseline step for unused interfaces.
Correct. Administratively shutting down unused ports prevents any device from gaining connectivity through those interfaces. It is the most immediate and effective control to protect unused ports. In practice, shutdown is typically combined with moving the port to a parking VLAN to protect against accidental re-enablement without proper configuration.
Incorrect. EtherChannel is used to bundle multiple physical links into one logical link for redundancy and increased bandwidth, typically between switches or switch-to-server designs. It does not provide a security control for unused ports and would be irrelevant (and potentially disruptive) if applied to interfaces that should not be active.
Core Concept: This question tests basic Layer 2 switch hardening for unused access ports, especially those left in the default VLAN (VLAN 1). Best practice is to (1) disable unused ports and (2) move them out of VLAN 1 into an unused "parking" VLAN. This reduces the attack surface for unauthorized access, VLAN hopping-related misconfigurations, and accidental connectivity.

Why the Answer is Correct: Administratively shutting down unused ports (D) is the most direct protection: the port will not forward frames, will not participate in STP forwarding, and will not provide any connectivity if someone plugs in a device. However, relying only on shutdown can be risky operationally if someone later enables the port without applying the intended security baseline. Configuring the port as an access port and placing it in a non-default, unused VLAN such as VLAN 99 (C) provides an additional safeguard. If the port is mistakenly enabled, it will not land in VLAN 1 (where many legacy/control-plane and user devices might exist). A dedicated unused VLAN is typically not routed, has no DHCP scope, and is monitored/controlled, limiting what an unauthorized device can reach.

Key Features / Best Practices:
- Use a "parking VLAN" (often VLAN 999 or 99) for unused ports.
- Force access mode (switchport mode access) to prevent dynamic trunk negotiation.
- Shut down unused ports (shutdown).
- Often paired (though not asked here) with: disabling DTP (switchport nonegotiate), port security, BPDU Guard, and storm control.

Common Misconceptions:
- Making a port a trunk (A) is the opposite of hardening; it can expose multiple VLANs.
- Enabling CDP (B) increases information disclosure (device model, IOS, IPs) and is not a protection mechanism.
- EtherChannel (E) is for bundling links, not securing unused ports.

Exam Tips: For CCNA, the standard answer pattern for unused ports is: "put them in an unused VLAN and shut them down." Also remember: avoid using VLAN 1 for user/access ports and avoid leaving ports in default configurations.
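For study purposes, the two-step baseline can be expressed as a tiny Python helper that emits the relevant IOS commands for each unused port (a hypothetical automation sketch; the function name and parking VLAN number are illustrative):

```python
def harden_unused_ports(interfaces, parking_vlan=99):
    """Emit baseline hardening commands: access mode, parking VLAN, shutdown."""
    lines = []
    for intf in interfaces:
        lines += [
            f"interface {intf}",
            " switchport mode access",
            f" switchport access vlan {parking_vlan}",
            " shutdown",
        ]
    return lines
```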
SW2
vtp domain cisco
vtp mode transparent
vtp password ciscotest
interface fastethernet0/1
description connection to sw1
switchport trunk encapsulation dot1q
switchport mode trunk
Refer to the exhibit. How does SW2 interact with other switches in this VTP domain?
Incorrect. In VTP transparent mode, the switch does not process VTP updates to modify its VLAN database, and it does not act like a VTP participant that learns VLANs from clients/servers. While it can forward VTP advertisements over trunk links, “transmits and processes VTP updates” implies it is actively participating in VTP synchronization, which transparent mode does not do.
Incorrect. VTP advertisements are carried over trunk links, not access ports. Access ports belong to a single VLAN and do not transport VTP control traffic between switches. Even if a switch is in server/client mode, VTP update exchange requires trunking (e.g., 802.1Q). Therefore, processing VTP updates from clients on access ports is not how VTP operates.
Incorrect. A transparent switch does not receive VTP updates and apply them to its VLAN database, and it does not advertise its locally configured VLANs via VTP to other switches. Local VLAN creation on a transparent switch remains local only. Although it may forward VTP advertisements that it hears, it is not forwarding “locally configured VLANs” as VTP updates.
Correct. In VTP transparent mode, SW2 does not participate in VLAN database synchronization, but it will forward VTP advertisements it receives out its trunk ports (for VTPv1/v2 behavior commonly tested in CCNA). With Fa0/1 configured as an 802.1Q trunk, SW2 can pass VTP advertisements between other switches, acting as a transit device for VTP messages.
Core Concept: This question tests VLAN Trunking Protocol (VTP) behavior, specifically VTP transparent mode. VTP is a Layer 2 control-plane protocol that advertises VLAN database information over trunk links within a VTP domain (name + optional password). Switches can be VTP server, client, or transparent.

Why the Answer is Correct: SW2 is configured with "vtp mode transparent". In transparent mode, the switch does not participate in the VTP database synchronization process: it does not learn VLANs from VTP advertisements and does not advertise its locally created VLANs via VTP. However, it will forward VTP advertisements it receives out its trunk ports (VTP is carried over trunks, not access ports). Therefore, SW2 acts as a pass-through device for VTP messages, forwarding the advertisements it receives on trunks to other trunks. That matches option D.

Key Features / Behaviors to Know:
- Transparent mode: local VLAN changes stay local; they are not propagated via VTP.
- Transparent mode still forwards VTP advertisements (VTPv1/v2) out trunk ports, preserving the VTP domain's ability to span through the switch.
- VTP advertisements are sent over trunk links (e.g., 802.1Q). The trunk configuration on Fa0/1 enables VTP frames to traverse between SW2 and SW1.
- VTP domain name and password must match for a switch to process VTP updates; transparent mode largely makes this irrelevant for learning, but forwarding still occurs.

Common Misconceptions: A is tempting because it mentions trunks and "transmits," but transparent mode does not process updates into its VLAN database. C is a classic confusion: transparent mode does not forward "locally configured VLANs" via VTP. B is incorrect because VTP is not exchanged on access ports.

Exam Tips: When you see "vtp mode transparent," think: "does not learn/advertise the VLAN database, but forwards VTP advertisements on trunks." Also remember VTP uses trunks; if a link is not trunking, VTP won't traverse it. For CCNA, focus on the behavioral differences: server/client synchronize; transparent does not synchronize but can forward advertisements.
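The mode differences can be condensed into a toy Python lookup table (an illustrative model of the VTPv1/v2 behavior described above, not vendor code):

```python
# What each VTP mode does with a received advertisement: does it apply
# the update to its VLAN database, and does it forward the advertisement
# out its other trunk ports?
VTP_BEHAVIOR = {
    "server":      {"applies_updates": True,  "forwards": True},
    "client":      {"applies_updates": True,  "forwards": True},
    "transparent": {"applies_updates": False, "forwards": True},
}

def on_advertisement(mode):
    behavior = VTP_BEHAVIOR[mode]
    actions = []
    if behavior["applies_updates"]:
        actions.append("sync VLAN database")
    if behavior["forwards"]:
        actions.append("forward out trunk ports")
    return actions
```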
What are two descriptions of three-tier network topologies? (Choose two.)
Correct. The distribution layer commonly supports both Layer 2 and Layer 3. It aggregates access switches (often via Layer 2 trunks) and is a common place for Layer 3 functions like inter-VLAN routing using SVIs, dynamic routing adjacencies toward the core, route summarization, and first-hop redundancy. It also frequently defines the STP boundary and enforces policies (ACLs/QoS).
Correct. The core layer is built for high availability and resiliency, aiming to keep the campus operational during failures. It uses redundant core switches/routers, redundant links, and designs that enable fast convergence. The core should be highly reliable and high-speed, typically minimizing complex features/policies so it can forward traffic quickly and recover rapidly from device/link failures.
Incorrect. The access layer’s primary role is providing Layer 2 access for end devices (PCs, phones, APs) and applying edge features like port security and VLAN assignment. Routing between different VLANs/subnets (different broadcast domains) is typically performed at the distribution layer (or at a collapsed core/distribution in smaller designs), not at the access layer in a classic three-tier model.
Incorrect. Wired connections for each host are provided by the access layer, where end devices physically connect. The core layer is not designed for endpoint attachment; it is the high-speed backbone that interconnects distribution blocks. Connecting hosts directly to the core would reduce scalability and complicate operations, violating the hierarchical design principle of keeping the core simple and resilient.
Incorrect. The core and distribution layers have different purposes. Distribution is the aggregation and policy boundary layer (routing, ACLs, QoS, summarization, FHRP), while the core is the fast, resilient transport layer intended to move traffic quickly between distribution blocks. Although both may run Layer 3, they are not considered to perform the same functions in three-tier design.
Core Concept: This question tests the classic Cisco three-tier hierarchical campus design model: Access, Distribution, and Core. Each layer has a distinct role to improve scalability, resiliency, manageability, and performance.

Why the Answer is Correct: A is correct because the distribution layer is the policy and aggregation layer and commonly includes both Layer 2 and Layer 3 functions. It aggregates access switches, provides boundary control, and often performs inter-VLAN routing (Layer 3) while still potentially supporting Layer 2 features (such as VLAN trunking, STP boundary, and first-hop redundancy). B is correct because the core layer is designed for high availability and fast, reliable transport. The core's primary job is to keep the campus connected with minimal downtime, using redundant devices/links and fast convergence so that connectivity is maintained even when a device or link fails.

Key Features / Best Practices:
- Access layer: endpoint connectivity, VLAN assignment, PoE, port security, 802.1X, QoS marking.
- Distribution layer: aggregation, routing between VLANs, policy enforcement (ACLs, QoS), route summarization, FHRPs (HSRP/VRRP/GLBP), and acting as an STP boundary.
- Core layer: high-speed Layer 3 switching/routing, redundancy, minimal policy to reduce complexity, and rapid convergence (e.g., ECMP, dynamic routing).

Common Misconceptions: Many confuse "core" with "where hosts connect" (that is access). Others assume the access layer routes between domains; in hierarchical design, inter-VLAN and inter-subnet routing is typically centralized at distribution (or collapsed core/distribution in smaller networks). Also, core and distribution do not perform the same functions: distribution is policy/aggregation; core is fast transport.

Exam Tips: Remember the mnemonic: Access = connect devices, Distribution = aggregate + policy + routing boundary, Core = fast and resilient backbone. If an option mentions "continuous connectivity," "redundancy," and "fast transport," it points to the core. If it mentions "policy," "inter-VLAN routing," or "aggregation," it points to distribution.
Which action must be taken to assign a global unicast IPv6 address on an interface that is derived from the MAC address of that interface?
Incorrect. Explicitly assigning a link-local address does not cause a global unicast address to be generated. Link-local addresses (FE80::/10) are used only on the local segment for neighbor discovery and next-hop resolution. Even though link-local can be derived from the MAC via EUI-64, it is not a global unicast address and does not satisfy the question's requirement.
Incorrect. Disabling the EUI-64 process would prevent forming an interface identifier from the MAC address, which is the opposite of what the question asks. If EUI-64 is disabled (or if privacy extensions/random IIDs are used instead), the interface will not derive the IID from the MAC. Therefore this action cannot be required to create a MAC-derived global unicast address.
Correct. Enabling SLAAC allows the interface to learn the global routing prefix from ICMPv6 Router Advertisements and then auto-generate the interface identifier. In classic IPv6 behavior (and for CCNA exam purposes), that IID is commonly derived from the interface MAC using modified EUI-64. This results in a global unicast IPv6 address built from the RA prefix plus the MAC-derived IID.
Incorrect. Configuring a stateful DHCPv6 server provides IPv6 addresses by server assignment, not by the interface deriving the address from its own MAC. DHCPv6 can hand out any address from a pool and does not require EUI-64-based IIDs. While DHCPv6 is a valid addressing method, it is not the required action to get a MAC-derived global unicast address.
Core Concept: This question tests IPv6 address autoconfiguration and how an interface can form its IPv6 interface identifier (IID) from its MAC address. In IPv6, a global unicast address is typically composed of a 64-bit global routing prefix (learned from Router Advertisements) plus a 64-bit IID. One common IID method is modified EUI-64, which is derived from the interface MAC address.

Why the Answer is Correct: To obtain a global unicast IPv6 address that is derived from the interface MAC, the host/interface must learn the global prefix and then build the IID. That is exactly what SLAAC (Stateless Address Autoconfiguration) does: it uses ICMPv6 Router Advertisements (RAs) to learn the /64 prefix and then auto-generates the IID (often using EUI-64, depending on platform/settings). On Cisco devices, this is commonly seen with commands like "ipv6 address autoconfig" (or "ipv6 address <prefix> eui-64" when manually specifying the prefix). Among the given choices, enabling SLAAC is the action that results in a global unicast address formed from the MAC-derived IID.

Key Features / Best Practices: SLAAC requires an IPv6 router on the link sending RAs with the Autonomous (A) flag set. The interface then forms a global unicast address using the advertised prefix and an IID. Note that modern systems may use privacy extensions (randomized IIDs) by default, but the exam's classic expectation is that SLAAC can derive the IID from the MAC via EUI-64.

Common Misconceptions: Link-local addressing is separate: link-local addresses (FE80::/10) are automatically created and used for neighbor discovery and as next-hop addresses, but they do not create a global unicast address. DHCPv6 can provide addresses too, but stateful DHCPv6 does not inherently mean a "MAC-derived EUI-64 IID," and it is not required for MAC-derived global addresses.

Exam Tips: Remember: RA/SLAAC = prefix learned from router + IID auto-generated (often EUI-64). DHCPv6 stateful = server assigns the full address. Link-local is always present and not "global."
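The modified EUI-64 steps (split the MAC in half, insert FFFE, flip the U/L bit in the first octet) are easy to reproduce in Python as a study exercise; the function name is invented:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the modified EUI-64 interface ID from a 48-bit MAC address."""
    hexstr = mac.lower().replace(":", "").replace("-", "").replace(".", "")
    octets = [int(hexstr[i:i + 2], 16) for i in range(0, 12, 2)]
    octets[0] ^= 0x02  # flip the universal/local (U/L) bit
    iid = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert FFFE in the middle
    return ":".join("{:02x}{:02x}".format(iid[i], iid[i + 1])
                    for i in range(0, 8, 2))
```

For MAC 00:1A:2B:3C:4D:5E this yields 021a:2bff:fe3c:4d5e; SLAAC appends this 64-bit IID to the /64 prefix learned from the RA to form the global unicast address.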
What mechanism carries multicast traffic between remote sites and supports encryption?
Incorrect. ISATAP (Intra-Site Automatic Tunnel Addressing Protocol) is primarily an IPv6 transition mechanism that tunnels IPv6 over IPv4 within a site. It is not a general-purpose site-to-site VPN technology for carrying multicast between remote sites, and it does not provide encryption by itself. In CCNA context, ISATAP is about IPv6 connectivity, not multicast VPN design.
Incorrect. IPsec over ISATAP is not a standard or common CCNA-relevant design for "multicast between remote sites." ISATAP is used to provide IPv6-over-IPv4 connectivity (usually intra-site), and adding IPsec would address encryption but still doesn't make it the typical solution for multicast transport between sites. The exam expects GRE as the multicast-capable tunnel mechanism.
Incorrect. GRE can carry multicast traffic because it creates a tunnel interface and encapsulates packets (including multicast) inside a unicast GRE header. This makes it useful for running multicast and routing protocols across an IP network. However, GRE provides no encryption, authentication, or integrity, so it does not meet the requirement to "support encryption."
Correct. GRE over IPsec is the classic solution to carry multicast traffic between remote sites while also providing encryption. GRE supplies the tunneling/encapsulation needed for multicast and routing protocols; IPsec then encrypts and protects the GRE packets across the untrusted network. This combination satisfies both requirements: multicast support and encryption.
Core Concept: This question tests VPN tunneling and encapsulation behavior for multicast traffic. Many WAN transports (especially the public Internet) are unicast-only from the perspective of routing/forwarding, so multicast often requires a tunneling mechanism. Separately, encryption is typically provided by IPsec.

Why the Answer is Correct: GRE (Generic Routing Encapsulation) is a tunneling protocol that can encapsulate many Layer 3 protocols, including multicast and routing protocol traffic, inside a unicast GRE tunnel. However, GRE by itself provides no confidentiality, integrity, or authentication. IPsec provides encryption and integrity, but "plain IPsec" is optimized for unicast IP flows and does not inherently create a virtual point-to-point interface that can conveniently carry multicast and dynamic routing in the same way. Combining them as GRE over IPsec is the classic design: GRE provides the tunnel interface and multicast-capable encapsulation; IPsec protects the GRE packets with encryption.

Key Features / Best Practices:
- Use GRE tunnel interfaces between sites to carry multicast (and often dynamic routing like OSPF/EIGRP) across an IP network.
- Apply IPsec (commonly via crypto maps or, more modernly, via IPsec profiles on the tunnel interface) to encrypt the GRE-encapsulated traffic.
- This approach is common in site-to-site VPNs when you need multicast applications (streaming, discovery protocols) or routing protocol adjacency across the VPN.

Common Misconceptions:
- "GRE alone is enough" is wrong because GRE does not encrypt.
- "IPsec alone carries multicast" is often misleading in exam context; while some multicast scenarios can be engineered, the standard CCNA answer for multicast + encryption between sites is GRE over IPsec.
- ISATAP is an IPv6 transition mechanism, not a general multicast VPN solution.
Exam Tips: When you see requirements like “carry multicast between remote sites” and “supports encryption,” think: GRE for multicast + IPsec for encryption = GRE over IPsec. Remember: GRE adds the tunnel/encapsulation capability; IPsec adds security services.
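One practical consequence worth knowing: GRE encapsulation shrinks the usable payload. A quick back-of-the-envelope check in Python (assumptions: basic GRE with no optional fields adds 4 bytes plus a new 20-byte outer IPv4 header; ESP overhead from IPsec varies by cipher and is omitted here):

```python
GRE_HEADER = 4    # basic GRE header, no optional checksum/key/sequence fields
OUTER_IPV4 = 20   # new outer IPv4 header added by the tunnel

def gre_payload_mtu(link_mtu=1500):
    """Largest inner packet that fits in one outer packet over plain GRE."""
    return link_mtu - GRE_HEADER - OUTER_IPV4

print(gre_payload_mtu())  # 1476, the familiar default GRE tunnel IP MTU
```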
Router(config)#interface GigabitEthernet 1/0/1
Router(config-if)#ip address 192.168.16.143 255.255.255.240
Bad mask /28 for address 192.168.16.143
Refer to the exhibit. Which statement explains the configuration error message that is received?
Incorrect. 192.168.16.143 is within the private 192.168.0.0/16 range, but that is not an error condition on Cisco routers. Private addresses are commonly used on internal interfaces and are fully supported. The message is triggered by an invalid host assignment for the given subnet mask, not by the address being private.
Incorrect. Cisco routers support /28 masks (255.255.255.240) on IPv4 interfaces. The error text can mislead candidates into thinking the mask itself is unsupported, but IOS is actually rejecting the specific IP address because it falls on a reserved address (broadcast) for that /28 subnet.
Incorrect. A network address in a /28 would be the first address of the 16-address block (for example, 192.168.16.128, 192.168.16.144, etc.). 192.168.16.143 is not the network address for any /28; it is the last address of the 192.168.16.128/28 block, which makes it the broadcast address instead.
Correct. With a /28 mask, subnets increment by 16 in the last octet. The range 192.168.16.128–192.168.16.143 is one subnet, and the last address (.143) is the directed broadcast address for that subnet. Broadcast addresses cannot be assigned to router interfaces, so IOS rejects the configuration and displays the error.
Core Concept: This question tests IPv4 addressing rules on Cisco IOS interfaces, specifically how subnet masks define usable host ranges versus reserved addresses (network and broadcast). Cisco IOS validates that an interface IP is a valid host address for the given prefix.

Why the Answer is Correct: The address 192.168.16.143 with mask 255.255.255.240 is a /28. A /28 creates subnets in increments of 16 in the last octet (…0, 16, 32, 48, 64, 80, 96, 112, 128, 144, …). The subnet that contains 192.168.16.143 is 192.168.16.128/28 because 128–143 is one block of 16 addresses. In that subnet:
- Network address: 192.168.16.128
- Usable hosts: 192.168.16.129 through 192.168.16.142
- Broadcast address: 192.168.16.143
Because .143 is the broadcast address for 192.168.16.128/28, IOS rejects it as an interface address and reports "Bad mask /28 for address …" (the message is slightly generic; the real issue is that the address is not a valid host address under that mask).

Key Features / Best Practices:
- Always determine the subnet block size (256 minus the mask value in the interesting octet). For /28, the block size is 16.
- Avoid assigning network or broadcast addresses to interfaces; choose a host address within the usable range.
- IOS performs sanity checks to prevent invalid L3 interface configurations.

Common Misconceptions:
- "Bad mask" does not mean the router can't use /28; it means the IP/mask combination is invalid.
- Private addressing (192.168.0.0/16) is allowed on interfaces; it is unrelated to this error.

Exam Tips: Quickly compute /28 ranges: last-octet boundaries every 16. If the last octet equals a boundary, it's a network address; if it equals boundary+15, it's a broadcast address. Here, 143 = 128 + 15, so it's the broadcast address and cannot be assigned.
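The block arithmetic can be double-checked with Python's standard ipaddress module:

```python
import ipaddress

# 192.168.16.143/28: which /28 block does it fall in, and what role
# does that address play within the block?
iface = ipaddress.ip_interface("192.168.16.143/28")
net = iface.network

print(net)                                # 192.168.16.128/28
hosts = list(net.hosts())                 # usable range excludes net/broadcast
print(hosts[0], hosts[-1])                # 192.168.16.129 192.168.16.142
print(iface.ip == net.broadcast_address)  # True -> not a valid host address
```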
Using direct sequence spread spectrum, which three 2.4-GHz channels are used to limit collisions?
Incorrect. Channels 5, 6, and 7 are adjacent and heavily overlapping in 2.4 GHz. Although they are different channel numbers, their 20/22 MHz bandwidths overlap significantly, creating adjacent-channel interference. This increases contention and retransmissions, which can look like "collisions" and reduces overall WLAN performance. This is a common trap option because it seems like using multiple channels should reduce collisions.
Incorrect. Channels 1, 2, and 3 also overlap heavily. Because 2.4-GHz channels are only 5 MHz apart but occupy ~20–22 MHz, these channels share much of the same spectrum. Deploying APs on 1/2/3 in the same area typically worsens interference and leads to more retries and lower throughput. Different channel numbers do not guarantee separation in 2.4 GHz.
Correct. Channels 1, 6, and 11 are the classic non-overlapping channels in the 2.4-GHz band (where channels 1–11 are available). They are spaced far enough apart to avoid significant overlap for 20/22 MHz channels, minimizing adjacent-channel interference. This reduces retransmissions and contention, improving performance and effectively limiting collision-like behavior in WLANs.
Incorrect. Channels 1, 5, and 10 are not a non-overlapping set. Channel 5 overlaps with both 1 and 6, and channel 10 overlaps with 11 (and partially with 6/9 depending on bandwidth). While the numbers appear evenly spaced, the actual RF channel width makes them interfere with each other. This option is appealing if you assume channel numbers map directly to non-overlap.
Core Concept: This question tests 2.4-GHz Wi-Fi channel planning and how Direct Sequence Spread Spectrum (DSSS) / 802.11b (and the 20 MHz channelization used by 802.11g/n in 2.4 GHz) interacts with channel overlap. In 2.4 GHz, channels are spaced 5 MHz apart, but a typical Wi-Fi channel occupies about 22 MHz (802.11b DSSS) or 20 MHz (802.11g/n). Because the occupied bandwidth is much wider than the spacing, most adjacent channels overlap and cause adjacent-channel interference (ACI), which increases contention, retransmissions, and perceived “collisions.”

Why the Answer is Correct: Channels 1, 6, and 11 are the standard three non-overlapping 2.4-GHz channels in regulatory domains where channels 1–11 are available (common in North America). Their center frequencies are far enough apart (separated by 25 MHz) that their spectral footprints do not significantly overlap for 20/22 MHz channels. Using only these three channels in a multi-AP design minimizes ACI and therefore reduces contention and retransmissions, improving throughput and stability.

Key Features / Best Practices:
- Use a reuse pattern (e.g., 1/6/11) across neighboring APs to avoid overlap.
- Prefer 5 GHz/6 GHz where possible because they offer more non-overlapping channels.
- In 2.4 GHz, avoid 40 MHz channels (802.11n) because they consume too much spectrum and worsen interference.
- Remember that “collisions” in Wi-Fi are mostly avoided by CSMA/CA; the practical problem is interference and contention leading to retries.

Common Misconceptions:
- Thinking adjacent channels (like 5/6/7 or 1/2/3) are “different channels” and therefore safe. They are different numbers but still overlap heavily.
- Assuming 1/5/10 works because it “spreads out.” Channel 10 overlaps with 11 and is not sufficiently separated from 5 for non-overlap.
Exam Tips: For CCNA, memorize the 2.4-GHz non-overlapping set as 1, 6, 11 (and in some regions with channel 13 allowed, 1/5/9/13 can be used with careful planning, but the classic exam answer is 1/6/11). When asked about limiting interference/collisions in 2.4 GHz, choose the non-overlapping channels, not merely different channel numbers.
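The overlap reasoning above can be sketched numerically: a 2.4-GHz channel n is centered at 2407 + 5n MHz, and two channels of ~22 MHz DSSS width overlap when their centers are less than 22 MHz apart. A minimal check in plain arithmetic (illustrative, not an RF-planning tool):

```python
def center_mhz(ch):
    """Center frequency in MHz of 2.4-GHz Wi-Fi channel ch (channels 1-13)."""
    return 2407 + 5 * ch

def overlaps(a, b, width_mhz=22):
    """True if two channels' spectral footprints overlap, assuming ~22 MHz DSSS width."""
    return abs(center_mhz(a) - center_mhz(b)) < width_mhz

# Classic set 1/6/11: centers 2412, 2437, 2462 MHz are 25 MHz apart -> no overlap.
print(any(overlaps(a, b) for a, b in [(1, 6), (6, 11), (1, 11)]))  # False

# Distractor sets overlap:
print(overlaps(1, 5))    # True  (centers only 20 MHz apart, less than 22 MHz)
print(overlaps(5, 6))    # True  (centers 5 MHz apart)
print(overlaps(10, 11))  # True  (centers 5 MHz apart)
```

The 25 MHz center separation of 1/6/11 is the only way to fit three non-overlapping 22 MHz channels between channels 1 and 11, which is why the set is unique in that regulatory domain.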
Which two statements about the purpose of the OSI model are accurate? (Choose two.)
Correct. The OSI model breaks networking into seven layers and describes the general functions performed at each layer (physical signaling, framing/MAC, logical addressing/routing, transport, and application-related services). This provides a common language for vendors and engineers to classify protocols and devices and to discuss where a feature operates (for example, routing at Layer 3).
Correct. OSI helps visualize how data moves through a network via encapsulation and decapsulation across layers. This improves understanding of end-to-end communication and supports structured troubleshooting (isolating whether a problem is at Layer 1, 2, 3, etc.). CCNA questions often rely on this conceptual flow to identify where headers are added/removed and where failures occur.
Incorrect. While layering encourages separation of concerns, it is not true that changes in one layer do not impact other layers. Real networks have cross-layer dependencies (e.g., Layer 2 MTU affects Layer 3 fragmentation and Layer 4 performance; Layer 1 errors affect higher-layer throughput). The OSI model reduces complexity, but it does not guarantee complete independence between layers.
Incorrect. The OSI model does not ensure reliable delivery; it is a reference framework. Reliability depends on specific protocols and mechanisms, such as TCP at Layer 4 (sequencing, acknowledgments, retransmissions) or certain Layer 2 features. The OSI model helps describe where reliability mechanisms might exist, but it does not provide them by itself.
Core Concept: The OSI (Open Systems Interconnection) model is a conceptual, seven-layer reference model created by ISO to standardize how network communication functions are described. It is not a protocol suite itself; it is a framework for understanding, designing, and troubleshooting networks by separating responsibilities into layers.

Why the Answer is Correct: A is correct because the OSI model defines (categorizes) the major network functions that occur at each layer (e.g., addressing and routing concepts at Layer 3, framing and MAC addressing at Layer 2). This layered definition helps vendors and engineers discuss where a function belongs and how components interact. B is correct because the OSI model facilitates understanding of how information travels through a network: data is encapsulated as it moves down the sender’s stack (adding headers/trailers) and decapsulated as it moves up the receiver’s stack. This mental model is central to CCNA troubleshooting (identifying whether an issue is physical, data link, network, etc.).

Key Features / Best Practices: The OSI model promotes modularity and interoperability by encouraging clear boundaries between functions. In practice, engineers map real protocols to OSI layers (e.g., Ethernet at L1/L2, IP at L3, TCP/UDP at L4, HTTP/DNS at L7) to reason about behavior and isolate faults. Troubleshooting often follows a layered approach (bottom-up or top-down) using OSI as the organizing structure.

Common Misconceptions: Option C sounds like “layer independence,” but it is overstated. Changes in one layer can and do impact others (e.g., MTU at L2 affects fragmentation/PMTUD at L3/L4; wireless interference at L1 affects throughput at L4). The model aims to reduce coupling, not eliminate it. Option D incorrectly implies the OSI model ensures reliability. Reliability is provided by specific protocols and mechanisms (e.g., TCP acknowledgments/retransmissions at L4, or L2 error detection).
The OSI model itself is descriptive, not an enforcing mechanism. Exam Tips: Remember: OSI is a reference model used to describe functions and aid troubleshooting, not to guarantee performance or reliability. Watch for absolute wording like “ensures” or “do not impact”—these are often incorrect on CCNA. Focus on what the model is for: standard terminology, functional separation, and understanding encapsulation/decapsulation.
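The encapsulation/decapsulation flow described above can be illustrated with a toy sketch. The header strings here are hypothetical placeholders, not real protocol formats; the point is only that each layer wraps the data handed down from the layer above and strips its own header on the way back up:

```python
# Toy model of layered encapsulation: each layer adds its own header
# (and, for Layer 2, a trailer) around the data from the layer above.
def encapsulate(app_data):
    segment = "TCP|" + app_data        # L4: transport header added
    packet = "IP|" + segment           # L3: logical addressing added
    frame = "ETH|" + packet + "|FCS"   # L2: framing header plus trailer
    return frame

def decapsulate(frame):
    packet = frame.removeprefix("ETH|").removesuffix("|FCS")  # L2 strips framing
    segment = packet.removeprefix("IP|")                      # L3 strips IP header
    return segment.removeprefix("TCP|")                       # L4 strips TCP header

frame = encapsulate("GET /index.html")
print(frame)               # ETH|IP|TCP|GET /index.html|FCS
print(decapsulate(frame))  # GET /index.html
```

Reading the frame string from left to right mirrors the receiver's bottom-up processing: the L2 framing is examined first, then the L3 header, then L4, until the original application data emerges.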