
Cisco
1,128+ Free Practice Questions (with AI-Verified Answers)
AI-Powered
Every Cisco 200-301: Cisco Certified Network Associate (CCNA) answer is cross-verified by three top AI models to ensure the highest accuracy. Detailed per-option explanations and in-depth question analysis are provided.
How do TCP and UDP differ in the way that they establish a connection between two endpoints?
Correct. TCP establishes a connection using the three-way handshake (SYN, SYN-ACK, ACK) before sending application data, enabling reliable, ordered delivery with retransmissions and acknowledgments. UDP is connectionless and does not perform a handshake; it sends datagrams without guaranteeing delivery, order, or duplicate protection. Any reliability with UDP must be implemented by the application layer, not by UDP itself.
Incorrect. TCP does use SYN as part of connection establishment, but UDP does not use acknowledgment packets at the transport layer. UDP has no built-in ACK mechanism and no connection setup. While some applications running over UDP may send their own acknowledgments, those are application-layer behaviors and not a feature of UDP as a transport protocol.
Incorrect. This reverses the protocols. TCP is the reliable, connection-oriented protocol that provides sequencing, acknowledgments, and retransmissions. UDP is connectionless and does not provide reliable delivery. On the exam, “reliable transfer” and “connection-oriented” are strong indicators of TCP, not UDP.
Incorrect. UDP does not have SYN, SYN-ACK, FIN, or any TCP-style flags because UDP’s header is very small and contains only source port, destination port, length, and checksum. TCP uses flags such as SYN, ACK, and FIN in its TCP header to establish and tear down connections. The option incorrectly assigns TCP flags to UDP.
Core Concept: This question tests the fundamental difference between TCP and UDP regarding connection establishment and reliability. TCP is a connection-oriented, reliable transport protocol that establishes a session before data transfer. UDP is connectionless and does not establish a session; it simply sends datagrams without built-in delivery guarantees.

Why the Answer is Correct: TCP establishes a connection using the three-way handshake: SYN (synchronize), SYN-ACK (synchronize acknowledgment), and ACK (acknowledgment). This process synchronizes sequence numbers and confirms both endpoints are ready to communicate, enabling reliable, ordered delivery with retransmissions and flow control. UDP does not perform any handshake; there is no session setup, no sequence-number synchronization, and no transport-layer acknowledgments. As a result, UDP does not guarantee delivery, ordering, or duplicate suppression; those functions must be handled by the application if needed.

Key Features: TCP features include connection establishment/teardown, sequence numbers, acknowledgments, retransmission on loss, ordered delivery, flow control (windowing), and congestion control. UDP features include minimal overhead (8-byte header), no handshake, and low latency; it is commonly used for DNS queries, VoIP, streaming, and some routing/management protocols where speed matters and occasional loss is acceptable or handled elsewhere.

Common Misconceptions: A frequent trap is thinking UDP uses acknowledgments or “reliability” because some applications built on UDP implement their own ACK/sequence logic (for example, certain streaming or tunneling solutions). However, that is not UDP itself. Another misconception is mixing up TCP flags (SYN/ACK/FIN) and assuming UDP has similar bits; UDP has no flags like SYN or FIN.

Exam Tips: For CCNA, remember: TCP = connection-oriented + three-way handshake + reliability; UDP = connectionless + no handshake + best-effort delivery. If a question mentions SYN/SYN-ACK/ACK, it is TCP. If it emphasizes low overhead/latency and no delivery guarantee, it is UDP.
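The handshake difference is easy to observe with the operating system's socket API. The Python sketch below is illustrative (the loopback address, port number, and payload are assumptions): the UDP socket hands off a datagram with no peer and no handshake, while the TCP socket's connect() completes the SYN/SYN-ACK/ACK exchange against a local listener before any data flows.

```python
import socket

# UDP: connectionless. A datagram can be sent with no handshake,
# even if nothing is listening; delivery is not guaranteed.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = udp.sendto(b"hello", ("127.0.0.1", 50000))  # hypothetical idle port
print(sent)  # bytes handed to the stack, not proof of delivery
udp.close()

# TCP: connection-oriented. connect() triggers the three-way
# handshake, which would fail if no peer were listening.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("127.0.0.1", port))  # handshake completes here
conn, _ = listener.accept()
tcp.sendall(b"hello")
print(conn.recv(5))               # b'hello': reliable, ordered byte stream
tcp.close(); conn.close(); listener.close()
```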
Want to solve all the questions on the go?
Download Cloud Pass for free: practice exams, study progress tracking, and more.
Which 802.11 frame type is Association Response?
Correct. Association Response is a management frame subtype used by the AP to accept or reject a client’s Association Request and to provide association parameters (e.g., AID, capabilities, supported rates). Management frames handle discovery and connection lifecycle tasks such as authentication, association, reassociation, and termination (deauth/disassoc).
Incorrect. “Protected frame” is not one of the three 802.11 frame types. Protection refers to security mechanisms (e.g., encryption for data frames with WPA2/WPA3, and 802.11w PMF for certain management frames). An Association Response may be protected when PMF is enabled, but its type remains a management frame.
Incorrect. Action frames are a category within management frames used to carry specific management actions (e.g., spectrum management, QoS, block ack negotiation, radio measurement). While Action frames are management-related, “Association Response” is its own management subtype and is not classified as an Action frame.
Incorrect. Control frames support reliable delivery and medium access coordination, such as ACK, RTS, CTS, PS-Poll, and Block Ack. They do not establish association. Since Association Response is part of the join process and BSS membership establishment, it is not a control frame.
Core Concept: IEEE 802.11 (Wi-Fi) defines three primary frame types: management, control, and data. Management frames establish and maintain the wireless connection (discovery, authentication, association, roaming support). Control frames assist in delivering data reliably (e.g., acknowledgments, RTS/CTS). Data frames carry upper-layer payloads.

Why the Answer is Correct: An Association Response is a management frame subtype. During the process of joining a WLAN, a client (STA) sends an Association Request to an access point (AP). The AP replies with an Association Response indicating whether the association is accepted or denied and providing key parameters (such as the assigned Association ID, supported rates, and capability information). Because this exchange is part of connection establishment and BSS membership, it is classified as a management frame.

Key Features / What to Know: Association is distinct from authentication. In classic 802.11, “authentication” (Open System or Shared Key) occurs before association; in modern enterprise networks, 802.1X/EAP authentication happens after association at Layer 2/2.5, but the 802.11 association step is still required to join the BSS. Management frames include Beacon, Probe Request/Response, Authentication, Association Request/Response, Reassociation Request/Response, Disassociation, and Deauthentication. Some management frames can be protected by 802.11w (PMF, Protected Management Frames), but “protected” is not a separate frame type; it’s a security feature applied to certain management frames.

Common Misconceptions: “Action” frames are also management subtypes, which can confuse test-takers. However, Association Response is not an Action frame; it is explicitly an Association Response management subtype. “Protected frame” sounds like a category, but it refers to whether a frame is cryptographically protected (e.g., PMF) rather than being a distinct 802.11 frame type.

Exam Tips: Memorize the big three: Management (join/leave/discover), Control (assist delivery), Data (payload). If the frame name relates to joining a WLAN (probe/auth/assoc/reassoc/deauth/disassoc), it’s management. If it’s ACK/RTS/CTS/Block Ack, it’s control.
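As a study aid, the groupings above can be captured in a small lookup table. The Python helper below is hypothetical (it mirrors the frame names listed in the explanation, not the standard's numeric type/subtype codes):

```python
# Well-known 802.11 frame names grouped into the three frame types.
FRAME_TYPES = {
    "management": {"Beacon", "Probe Request", "Probe Response",
                   "Authentication", "Association Request",
                   "Association Response", "Reassociation Request",
                   "Reassociation Response", "Disassociation",
                   "Deauthentication", "Action"},
    "control": {"ACK", "RTS", "CTS", "PS-Poll", "Block Ack"},
    "data": {"Data", "QoS Data"},
}

def frame_type(name: str) -> str:
    """Return the 802.11 frame type for a well-known frame name."""
    for ftype, subtypes in FRAME_TYPES.items():
        if name in subtypes:
            return ftype
    raise ValueError(f"unknown frame: {name}")

print(frame_type("Association Response"))  # management
print(frame_type("RTS"))                   # control
```

Note that Action lands under management, matching the explanation of why it is a distractor rather than a separate frame type.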
In which way does a spine-and-leaf architecture allow for scalability in a network when additional access ports are required?
Adding both a spine and a leaf can increase capacity, but it is not the fundamental scaling method when you specifically need more access ports. Spine-and-leaf scales access by adding leaf switches; spines are added when you need more fabric bandwidth/paths. Also, “redundant connections between them” is vague and doesn’t capture the required full-mesh leaf-to-all-spines connectivity rule.
Adding a spine switch can improve overall fabric capacity and increase the number of ECMP paths, but it does not directly add access ports. The “at least 40 GB uplinks” detail is not a defining requirement of spine-and-leaf scalability; link speed is an implementation choice. Scalability is about adding nodes (leaf/spine) with consistent interconnections, not a specific uplink rate.
Correct. To scale when more access ports are needed, you add a new leaf switch and connect it to every spine switch. This preserves the fabric’s predictable 2-hop leaf-to-leaf forwarding and enables ECMP across multiple spines. It’s the classic scale-out approach: more leafs = more edge/access ports while maintaining consistent performance and resiliency.
A single connection to a “core spine switch” contradicts the spine-and-leaf design principle. Spine-and-leaf avoids a single core dependency by having each leaf connect to all spines, providing multiple equal-cost paths and eliminating bottlenecks. Connecting a leaf to only one spine reduces redundancy, limits ECMP, and can create oversubscription and single points of failure.
Core concept: Spine-and-leaf is a data-center switching topology designed for predictable, scalable east-west traffic. It uses a two-tier fabric: leaf switches provide access ports (servers/endpoints), and spine switches provide the high-speed interconnect. The key scalability property is that every leaf connects to every spine, creating consistent, low-latency paths.

Why the answer is correct: When additional access ports are required, you scale out by adding another leaf switch. To preserve the fabric’s uniform performance and full bisection bandwidth characteristics, the new leaf must connect to every spine switch. This maintains the design goal that any endpoint on any leaf is at most one leaf-to-spine hop away from any other leaf (typically a 2-hop path leaf→spine→leaf). Adding a leaf in this way increases port capacity without redesigning the entire network.

Key features / best practices:
- “Scale-out” model: add leafs for more access ports; add spines for more fabric capacity.
- Equal-cost multipathing (ECMP) is commonly used so traffic can load-balance across multiple spine paths.
- Consistent cabling rule: each leaf has uplinks to all spines; each spine has downlinks to all leafs.
- Underlay is often routed (eBGP/OSPF/IS-IS) to support ECMP; overlays (e.g., VXLAN) may be used, but CCNA focuses on the topology concept.

Common misconceptions:
- Thinking you add both a spine and a leaf together (not required for access-port growth).
- Believing a single “core spine” is sufficient (that reintroduces hierarchy and bottlenecks).
- Assuming higher-speed uplinks alone provide scalability (speed helps capacity, not the scaling method).

Exam tips:
- If the question mentions “more access ports,” think “add a leaf.”
- If it mentions “more bandwidth between leafs” or “more fabric capacity,” think “add a spine.”
- Remember the defining rule: leafs connect to all spines (not just one), enabling ECMP and predictable latency.
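The scale-out arithmetic is simple enough to sketch. This illustrative Python model (the 48-port leaf figure is an assumption) shows that adding one leaf grows access ports while keeping the leaf-to-all-spines cabling rule intact:

```python
def fabric_stats(spines: int, leaves: int, access_ports_per_leaf: int = 48):
    """Full-mesh rule: every leaf has one uplink to every spine."""
    fabric_links = spines * leaves          # total leaf-to-spine links
    access_ports = leaves * access_ports_per_leaf
    return fabric_links, access_ports

# Adding one leaf adds access ports (plus its spine uplinks); no redesign:
print(fabric_stats(4, 8))   # (32, 384)
print(fabric_stats(4, 9))   # (36, 432)
```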
Which command automatically generates an IPv6 address from a specified IPv6 prefix and MAC address of an interface?
"ipv6 address dhcp" tells the interface to obtain its IPv6 address via DHCPv6. The address is assigned by a DHCPv6 server, not generated locally from the interface MAC and a specified prefix. This is useful in managed environments requiring centralized address assignment, but it does not meet the requirement of deriving the address from a configured prefix plus MAC.
"ipv6 address 2001:DB8:5:112::/64 eui-64" is the correct command. It uses the explicitly configured /64 prefix and automatically generates the 64-bit interface ID using the EUI-64 method derived from the interface MAC address. This exactly matches “automatically generates an IPv6 address from a specified IPv6 prefix and MAC address.”
"ipv6 address autoconfig" enables SLAAC on the interface. The device learns the prefix from Router Advertisements (RA) and then forms an address (often using EUI-64 or privacy extensions). However, it does not use a specified prefix in the command itself; the prefix comes from the network’s RA, so it doesn’t match the question’s requirement.
"ipv6 address 2001:DB8:5:112::2/64 link-local" is incorrect because link-local addresses must use the FE80::/10 prefix, not a global unicast prefix like 2001:DB8:.... Also, this command is manually setting a specific address (::2) rather than generating an interface ID from the MAC. It neither uses EUI-64 nor correct link-local formatting.
Core Concept: This question tests IPv6 interface address configuration methods, specifically generating the interface ID portion of an IPv6 address automatically from the interface MAC address. In IPv6, a typical global unicast address is composed of a 64-bit prefix (network portion) and a 64-bit interface ID (host portion). One common way to derive the interface ID is EUI-64.

Why the Answer is Correct: The command "ipv6 address 2001:DB8:5:112::/64 eui-64" configures an IPv6 address using the given /64 prefix and instructs the router to automatically generate the remaining 64-bit interface ID using the EUI-64 algorithm based on the interface’s MAC address. This directly matches the requirement: generate an IPv6 address from a specified IPv6 prefix and the MAC address.

Key Features / How It Works: With EUI-64, the device takes the 48-bit MAC address, inserts FFFE in the middle to make it 64 bits, and flips the 7th bit (Universal/Local bit) of the first byte. The resulting 64-bit value becomes the interface ID. This produces a stable address per interface (unless the MAC changes) and avoids manual host addressing. It is commonly used on routers and infrastructure devices when you want deterministic addressing without DHCPv6.

Common Misconceptions: Many confuse SLAAC ("ipv6 address autoconfig") with EUI-64. SLAAC uses Router Advertisements to learn the prefix; it may use EUI-64 or privacy extensions for the interface ID, but it does not use a manually specified prefix in the command. Another confusion is with DHCPv6 ("ipv6 address dhcp"), which obtains an address from a DHCPv6 server rather than deriving it from the MAC.

Exam Tips: Remember the keyword: "eui-64" explicitly means “build the interface ID from the MAC.” If the question says “from a specified prefix,” look for a command that includes the prefix in the configuration line. If it says “learn prefix from RA,” that points to "autoconfig". If it says “get address from server,” that points to DHCPv6.
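The EUI-64 steps described above (insert FFFE in the middle, flip the Universal/Local bit) can be reproduced in a few lines of Python. The prefix matches the question's command and the MAC is an illustrative example:

```python
import ipaddress

def eui64_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Build an IPv6 address from a /64 prefix and a MAC via EUI-64."""
    octets = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    assert len(octets) == 6, "expected a 48-bit MAC"
    first = octets[0] ^ 0x02                  # flip the U/L (7th) bit
    # Insert FFFE between the OUI and the device-specific half:
    iid = bytes([first]) + octets[1:3] + b"\xff\xfe" + octets[3:]
    net = ipaddress.IPv6Network(prefix)
    return net[int.from_bytes(iid, "big")]    # prefix + 64-bit interface ID

addr = eui64_address("2001:db8:5:112::/64", "00:1A:2B:3C:4D:5E")
print(addr)  # 2001:db8:5:112:21a:2bff:fe3c:4d5e
```

Note how 00 becomes 02 in the interface ID: that is the flipped U/L bit, a detail exam questions sometimes probe.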
When configuring IPv6 on an interface, which two IPv6 multicast groups are joined? (Choose two.)
2000::/3 is the IPv6 global unicast address range (publicly routable IPv6 space). It is not a multicast group (multicast is FF00::/8), and interfaces do not “join” unicast ranges. This option is a common distractor because it is a well-known IPv6 prefix, but it describes address allocation, not multicast membership behavior.
2002::/16 is historically associated with 6to4 transition addressing (not a multicast group). The specific value “2002::5” is not a standard IPv6 multicast group that an interface joins. Since it does not start with FF, it is not multicast. This option tries to mislead by presenting something that looks like a special IPv6 address.
FC00::/7 is the Unique Local Address (ULA) range, similar in concept to RFC1918 private IPv4 addressing. It is unicast, not multicast, and therefore not something an interface “joins.” ULAs can be configured on interfaces, but they do not represent multicast group memberships used for discovery/control-plane functions.
FF02::1 is the all-nodes multicast group with link-local scope. Every IPv6-enabled node joins this group on each interface. It is used for communications intended for all IPv6 devices on the local segment, including reception of Router Advertisements and other link-local control messages. This is one of the two default multicast groups relevant to basic IPv6 operation.
FF02::2 is the all-routers multicast group with link-local scope. IPv6 routers join this group on each IPv6-enabled interface. Hosts use it to send Router Solicitations to discover routers, and various control-plane functions may target all routers on the link. It is a key IPv6 multicast group to remember for CCNA exam questions.
Core Concept: This question tests IPv6 multicast behavior on an interface. Unlike IPv4, IPv6 relies heavily on multicast (not broadcast) for essential control-plane functions such as Neighbor Discovery Protocol (NDP) and router discovery. When IPv6 is enabled/configured on an interface, the interface automatically joins certain well-known multicast groups.

Why the Answer is Correct: An IPv6-enabled interface joins two key link-local scope multicast groups by default:
1) FF02::1 (All-nodes multicast): every IPv6 node on the local link joins this group. It is used for reaching all IPv6 devices on the segment (similar in intent to IPv4 broadcast, but implemented as multicast).
2) FF02::2 (All-routers multicast): IPv6 routers join this group on each IPv6-enabled interface. It is used by hosts to discover routers and by routing/control mechanisms that need to address all routers on the local link.

Key Features / How It Works:
- FF02::/16 indicates link-local scope multicast (FF = multicast, 02 = link-local scope). These packets never get routed beyond the local segment.
- NDP uses ICMPv6 messages (Router Solicitation/Advertisement, Neighbor Solicitation/Advertisement). Router Solicitation is typically sent to FF02::2 to reach routers; Router Advertisements are often sent to FF02::1 to inform all nodes.
- In addition to these, interfaces also join solicited-node multicast groups (FF02::1:FFxx:xxxx) for each unicast/anycast address, but that is not one of the provided options.

Common Misconceptions:
- Some candidates confuse IPv6 multicast groups with IPv6 address ranges (global unicast, unique local, etc.). The question is specifically about multicast groups joined on an interface, which are FF00::/8 addresses.
- Another trap is thinking only routers join multicast groups. In reality, all IPv6 nodes join FF02::1, while only routers join FF02::2.

Exam Tips:
- Memorize: FF02::1 = all nodes, FF02::2 = all routers.
- Recognize scope: FF02 means link-local multicast (stays on the LAN).
- If you see non-FF addresses (like 2000::/3 or FC00::/7), they are not multicast groups and are not “joined” like multicast memberships.
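The solicited-node groups mentioned above are derived mechanically from a unicast address: the fixed prefix FF02::1:FF00:0/104 plus the low 24 bits of the address. A minimal Python sketch of that derivation (the sample address is illustrative):

```python
import ipaddress

def solicited_node(unicast: str) -> ipaddress.IPv6Address:
    """FF02::1:FF00:0/104 plus the low 24 bits of the unicast address."""
    low24 = int(ipaddress.IPv6Address(unicast)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return ipaddress.IPv6Address(base | low24)

print(solicited_node("2001:db8::a1b2:c3d4"))  # ff02::1:ffb2:c3d4
```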
What is the default behavior of a Layer 2 switch when a frame with an unknown destination MAC address is received?
Incorrect. A switch does not add the destination MAC address to the MAC address table just because it appears as a destination in a received frame. MAC learning is based on the source MAC address (and VLAN/ingress port). If the destination is unknown, the switch cannot forward it to a single port; it floods it within the VLAN instead.
Incorrect. In normal operation, unknown unicast frames are handled in hardware (ASIC) and flooded within the VLAN; they are not typically sent to the CPU for “destination MAC learning.” The switch learns MAC addresses from the source field of frames it receives, not by punting frames to the CPU to discover destinations.
Correct. If the destination MAC is not in the MAC address table, the switch treats it as an unknown unicast and floods the frame out all other ports in the same VLAN (excluding the ingress port). This ensures the frame can still reach the destination, and the return traffic allows the switch to learn the destination MAC-to-port mapping.
Incorrect. Dropping unknown unicast frames is not the default behavior of a Layer 2 switch. If switches dropped unknown destinations, initial communication would fail whenever MAC entries are missing (e.g., after aging or reboot). Flooding is the default mechanism to deliver the frame and enable subsequent MAC learning via the destination’s reply.
Core Concept: This question tests Layer 2 switching behavior when the destination MAC address is unknown (an “unknown unicast” frame). A switch makes forwarding decisions using its MAC address table (CAM table), which maps MAC addresses to switch ports per VLAN.

Why the Answer is Correct: When a Layer 2 switch receives a frame, it first learns the source MAC address by recording the source MAC and the ingress port in the MAC address table (within that VLAN). Then it looks up the destination MAC. If the destination MAC is not in the table (unknown), the switch cannot determine the correct egress port. The default behavior is to flood the frame out all ports in the same VLAN except the port it was received on. This flooding increases the chance the frame reaches the correct destination, and when the destination replies, the switch learns that MAC-to-port mapping, reducing future flooding.

Key Features / Best Practices: Flooding is constrained to the VLAN (broadcast domain) of the incoming frame; trunks carry it only for that VLAN. Flooding also applies to broadcasts (FF:FF:FF:FF:FF:FF) and unknown multicast (depending on features like IGMP snooping). To reduce unnecessary flooding, networks rely on stable MAC learning, appropriate VLAN design, and avoiding loops (STP). Security features like port security, DHCP snooping, and dynamic ARP inspection don’t change the fundamental unknown-unicast flooding behavior, but can limit abuse.

Common Misconceptions: Many assume the switch “learns” the destination MAC from the unknown frame; switches learn from the source MAC, not the destination. Others think the switch sends unknown frames to the CPU; in normal hardware switching, flooding is done in ASICs, not punted to the CPU. Dropping unknown unicasts is not default behavior; it would break basic Ethernet communication when the table is empty or aged out.

Exam Tips: Remember the three classic Layer 2 forwarding cases:
1) Known unicast: forward to the single learned port.
2) Unknown unicast: flood within the VLAN (except ingress).
3) Broadcast/many multicasts: flood within the VLAN.
Also remember MAC table entries age out (commonly ~300 seconds on Cisco), so unknown-unicast flooding can reappear after inactivity.
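The learn-from-source / flood-unknown logic can be modeled in a few lines. This toy Python switch (a single VLAN is assumed; port numbers and MACs are illustrative) mirrors the forwarding cases above:

```python
class Switch:
    """Toy Layer 2 switch: one VLAN, MAC table maps MAC -> port."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}

    def receive(self, ingress, src_mac, dst_mac):
        """Return the set of egress ports for an incoming frame."""
        self.mac_table[src_mac] = ingress      # learn from the SOURCE MAC
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}   # known unicast: one port
        return self.ports - {ingress}          # unknown unicast: flood

sw = Switch(ports=[1, 2, 3, 4])
print(sw.receive(1, "AA", "BB"))  # BB unknown: flood to ports 2, 3, 4
print(sw.receive(2, "BB", "AA"))  # reply: AA was learned on port 1
```

The second call shows why flooding is self-correcting: the reply teaches the switch where BB lives, so later frames to BB go out a single port.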
An engineer must configure a /30 subnet between two routers. Which usable IP address and subnet mask combination meets these criteria?
This is the best available answer because it is the only option that uses the correct /30 subnet mask, 255.255.255.252. A /30 is the standard mask expected for a two-device point-to-point link on CCNA-style questions. Strictly speaking, 10.2.1.3 in the 10.2.1.0/30 subnet is the broadcast address, so it is not actually usable as a host IP. However, since all other options have an incorrect or invalid mask, A is the intended exam answer.
This option uses 255.255.255.248, which is a /29 mask rather than a /30. A /29 creates blocks of 8 addresses and provides 6 usable host addresses, so it does not match the requirement for a /30 subnet between two routers. Even though 192.168.1.1 can be a valid host address in some /29 subnet, the subnet mask itself makes this option incorrect. The exam objective here is primarily to recognize the proper /30 mask.
This option also uses 255.255.255.248, which corresponds to /29, not /30. With that mask, 172.16.1.4 falls inside the 172.16.1.0/29 block (172.16.1.0–172.16.1.7), where it is actually a usable host address; the failure here is the mask itself, which creates 8-address blocks with 6 usable hosts rather than the required /30. Because the mask does not meet the /30 requirement, this option is incorrect regardless of the host address.
This option shows 225.255.255.252 as the subnet mask, which is not a valid IPv4 subnet mask. Valid subnet masks must have contiguous 1 bits from left to right, and valid octet values are limited to values such as 0, 128, 192, 224, 240, 248, 252, 254, and 255. Because 225 is not one of those valid contiguous-mask values, the mask is invalid regardless of the IP address. Therefore this option cannot represent a valid /30 subnet.
Core Concept: A /30 subnet is commonly used for point-to-point links (such as between two routers) because it provides exactly 4 IPv4 addresses: 1 network address, 2 usable host addresses, and 1 broadcast address. A /30 prefix corresponds to the subnet mask 255.255.255.252.

Why the Answer is Correct: Option A uses the mask 255.255.255.252, which is a /30. With a /30, subnets increment by 4 in the last octet (…0, …4, …8, etc.). The address 10.2.1.3 falls in the 10.2.1.0/30 block (10.2.1.0–10.2.1.3). In that block:
- Network: 10.2.1.0
- Usable hosts: 10.2.1.1 and 10.2.1.2
- Broadcast: 10.2.1.3
So 10.2.1.3 is not usable as a host address, but the question asks for a “usable IP address and subnet mask combination” that meets /30 criteria. On CCNA-style questions, the key criterion is the /30 mask; strictly speaking, the usable IPs would be .1 or .2. Among the options, only A provides the correct /30 mask, making it the best/expected answer.

Key Features / Best Practices:
- /30 is traditional for router-to-router links (2 usable IPs).
- Verify usability by identifying network and broadcast addresses.
- Modern designs may use /31 (RFC 3021) to conserve addresses, but /30 remains a common exam focus.

Common Misconceptions:
- Confusing /29 (255.255.255.248) with /30 (255.255.255.252). /29 provides 8 addresses (6 usable), not ideal when only two endpoints exist.
- Not checking whether the chosen IP is a network/broadcast address.
- Accepting an invalid subnet mask (typo) as if it were /30.

Exam Tips:
- Memorize: /30 = 255.255.255.252, block size 4.
- For any candidate IP, quickly compute the subnet range and identify network/broadcast.
- If only one option has the correct /30 mask, that is typically the intended correct choice even if the IP itself looks suspicious; in real configs, you would choose one of the two usable addresses.
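Python's standard ipaddress module can verify the /30 arithmetic directly:

```python
import ipaddress

net = ipaddress.IPv4Network("10.2.1.0/30")
print(net.netmask)                      # 255.255.255.252
print([str(h) for h in net.hosts()])   # ['10.2.1.1', '10.2.1.2']
print(net.broadcast_address)            # 10.2.1.3 - not usable as a host
```

The hosts() iterator returns exactly the two usable addresses, confirming that .0 (network) and .3 (broadcast) are excluded.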
Which network allows devices to communicate without the need to access the Internet?
172.9.0.0/16 is NOT a private RFC 1918 network. The private 172 range is 172.16.0.0/12, which covers only 172.16.0.0 through 172.31.255.255. Since 172.9.x.x is outside that boundary, it is publicly routable in principle (assuming it is allocated/used publicly) and is not intended for internal-only addressing without Internet routing.
172.28.0.0/16 is a private network because it falls within the RFC 1918 private block 172.16.0.0/12 (172.16–172.31). Devices using addresses in this range can communicate internally without Internet access. If Internet connectivity is later required, NAT/PAT at the edge can translate these private addresses to a public address.
192.0.0.0/8 is not an RFC 1918 private range. The private range in the 192 space is specifically 192.168.0.0/16. The 192.0.0.0/8 block includes various special-use and publicly routable ranges (for example, 192.0.2.0/24 is TEST-NET-1 for documentation), but it is not the standard private addressing block used for internal networks.
209.165.201.0/24 is a public IPv4 network (not RFC 1918). In many Cisco CCNA labs and examples, 209.165.200.0/24 or similar ranges are used to represent an ISP/public Internet segment. Devices can certainly communicate within this subnet, but it is not an internal-only private range; it is intended to be globally routable rather than isolated from the Internet.
Core Concept: This question tests knowledge of private IPv4 addressing (RFC 1918). Private IP networks are not routable on the public Internet. Devices can communicate internally (within the private network and between private networks via routing/VPN), but to reach the Internet they typically require Network Address Translation (NAT).

Why the Answer is Correct: RFC 1918 defines three private IPv4 ranges:
1) 10.0.0.0/8
2) 172.16.0.0/12 (172.16.0.0 through 172.31.255.255)
3) 192.168.0.0/16
Option B is 172.28.0.0/16, which falls inside 172.16.0.0/12, so it is a private network. Using this range allows devices to communicate on a LAN/WAN without needing Internet connectivity, and it prevents accidental global routing on the public Internet.

Key Features / Best Practices:
- Private addressing is used for internal networks to conserve public IPv4 space.
- To access the Internet from private IPs, configure NAT/PAT on an edge router/firewall.
- Use proper subnetting and summarization; for example, 172.16.0.0/12 is a large block often subdivided into /16s or smaller.
- Ensure internal routing (static, OSPF, EIGRP, etc.) is in place so private subnets can reach each other.

Common Misconceptions:
- Many assume any 172.x.x.x network is private. That is incorrect: only 172.16.0.0–172.31.255.255 is private. Therefore 172.9.0.0/16 (Option A) is public.
- 192.0.0.0/8 (Option C) looks like it might be private because it starts with 192, but the private range is specifically 192.168.0.0/16.
- Some exam questions use 209.165.201.0/24 (Option D) as a “public/ISP” example network in Cisco materials; it is not RFC 1918 private.

Exam Tips: Memorize RFC 1918 exactly and be able to quickly check boundaries:
- 172 private starts at 172.16 and ends at 172.31. If a 172 address is outside that window (like 172.9.x.x), it is not private.
- 192.168 is private, not all 192.x.
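A quick way to check the RFC 1918 boundaries is with Python's ipaddress module. The helper below hard-codes the three private blocks rather than relying on the built-in is_private property, which also matches other special-use ranges:

```python
import ipaddress

# The three RFC 1918 private blocks, hard-coded for clarity.
RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(net: str) -> bool:
    """True if the given network lies entirely inside an RFC 1918 block."""
    candidate = ipaddress.ip_network(net)
    return any(candidate.subnet_of(block) for block in RFC1918)

print(is_rfc1918("172.28.0.0/16"))  # True: inside 172.16.0.0/12
print(is_rfc1918("172.9.0.0/16"))   # False: outside 172.16-172.31
```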
Which IPv6 address block sends packets to a group address rather than a single address?
2000::/3 is the IPv6 Global Unicast range (publicly routable on the Internet, similar to public IPv4). It is used for one-to-one communication to a single interface address. While global unicast can be assigned to many hosts, each packet is still destined to one specific unicast address, not a multicast group.
FC00::/7 is Unique Local Addressing (ULA), often compared to RFC1918 private IPv4 space. It is still unicast (one-to-one) and intended for internal networks, VPNs, and environments that want stable internal addressing without Internet routability. It does not represent group addressing; multicast is a different prefix.
FE80::/10 is IPv6 link-local unicast. Every IPv6-enabled interface typically has a link-local address and uses it for neighbor discovery, router discovery, and some routing protocol adjacencies on the local segment. Despite being “local,” it is not a group address block; it is unicast and not routed beyond the local link.
FF00::/8 is the IPv6 multicast block. Any destination address beginning with FF indicates multicast, meaning the packet is intended for a group of receivers that have joined that multicast group. IPv6 uses multicast extensively (and does not use broadcast), making FF00::/8 the correct choice for group addressing.
Core Concept: This question tests IPv6 address types and how IPv6 delivers traffic to one device versus many. In IPv6, “sending packets to a group address rather than a single address” describes multicast. IPv6 relies heavily on multicast (and eliminates broadcast), using multicast groups for functions like Neighbor Discovery (ND), Router Advertisements, and DHCPv6-related messaging.

Why the Answer is Correct: IPv6 multicast addresses are identified by the prefix FF00::/8 (all addresses starting with FF). Packets sent to an FF00::/8 destination are delivered to all interfaces that have joined that multicast group, which is exactly “a group address rather than a single address.” The structure of multicast addresses includes flags and scope (for example, link-local scope vs site-local/global), but the key exam recognition point is the FF00::/8 prefix.

Key Features / Details:
- IPv6 has three main delivery types: unicast (one-to-one), multicast (one-to-many), and anycast (one-to-nearest of many).
- Multicast replaces IPv4 broadcast in IPv6. Common groups include FF02::1 (all nodes on the local link) and FF02::2 (all routers on the local link).
- Multicast scope is encoded in the address (e.g., FF02::/16 indicates link-local scope).

Common Misconceptions:
- FE80::/10 (link-local) is often confused with “local group” behavior because it’s used for ND and routing adjacencies, but it is still unicast addressing.
- FC00::/7 (Unique Local Address) is “private-like” unicast, not multicast.
- 2000::/3 is global unicast (publicly routable) and also not multicast.

Exam Tips: Memorize the high-level IPv6 prefixes:
- FF00::/8 = multicast
- FE80::/10 = link-local unicast
- FC00::/7 = ULA (private) unicast
- 2000::/3 = global unicast
If you see “group” delivery in IPv6, think multicast and immediately look for FF00::/8.
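The prefix table in the exam tip maps directly to a small classifier. A sketch using the standard ipaddress module (the sample addresses are illustrative):

```python
import ipaddress

# The high-level IPv6 blocks from the exam tip, checked in order.
BLOCKS = [
    ("multicast", ipaddress.IPv6Network("ff00::/8")),
    ("link-local unicast", ipaddress.IPv6Network("fe80::/10")),
    ("unique local (ULA)", ipaddress.IPv6Network("fc00::/7")),
    ("global unicast", ipaddress.IPv6Network("2000::/3")),
]

def classify(addr: str) -> str:
    """Name the high-level IPv6 block an address belongs to."""
    a = ipaddress.IPv6Address(addr)
    for name, block in BLOCKS:
        if a in block:
            return name
    return "other"

print(classify("ff02::1"))      # multicast: one-to-many group delivery
print(classify("2001:db8::1"))  # global unicast: one-to-one
```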
What are two reasons that cause late collisions to increment on an Ethernet interface? (Choose two.)
CSMA/CD is the mechanism used on half-duplex Ethernet to detect and handle collisions, but it does not by itself cause late collisions. Late collisions indicate that collisions are happening outside the normal collision window, which typically points to a problem (duplex mismatch or excessive propagation delay). So CSMA/CD is related, but it’s not the root cause being tested.
Late collisions are strongly associated with half-duplex operation, especially when there is a duplex mismatch (one side half-duplex, the other full-duplex). The half-duplex side uses CSMA/CD and can detect collisions, but with a mismatch the collision behavior becomes abnormal and can be detected late. This is one of the most common real-world causes of late collision counters incrementing.
Ethernet does not wait 15 seconds before retransmitting. After a collision, Ethernet uses a binary exponential backoff algorithm based on slot times (512 bit-times in classic Ethernet). The wait is a randomized short delay, increasing with repeated collisions, not a fixed multi-second timer. This option describes behavior that does not match Ethernet MAC operation.
A late collision is not defined as “after the 32nd byte.” Ethernet collision timing is based on slot time (512 bits = 64 bytes) and the collision window concept, not a 32-byte threshold. While late collisions do occur after the normal collision window, the specific byte count here is incorrect and is a common trap answer.
Exceeding cable length limits (or otherwise increasing propagation delay beyond Ethernet design assumptions) can cause collisions to be detected late. If the signal takes too long to travel end-to-end and back, a device may transmit beyond the collision window before learning about a collision. This violates Ethernet timing rules and is a classic cause of late collisions on half-duplex segments.
Core concept: Late collisions are an Ethernet error condition seen on interfaces operating in half-duplex (CSMA/CD). A “late” collision is one that occurs after the normal collision window—i.e., after the sender has already transmitted the first part of the frame and should have been “safe” from collisions if the network meets Ethernet timing rules.
Why the answers are correct:
B is correct because late collisions almost always indicate a duplex mismatch (one side half-duplex, the other full-duplex) or a half-duplex segment where collisions can occur. In a duplex mismatch, the full-duplex side never expects collisions and does not perform CSMA/CD, while the half-duplex side does. The half-duplex side may detect collisions late because the full-duplex side can transmit at the same time, effectively creating collisions that occur outside the expected timing window.
E is correct because exceeding cable length limits (or otherwise violating Ethernet physical/timing constraints, such as too many repeaters/hubs) increases propagation delay. Ethernet’s collision detection relies on the rule that a station must still be transmitting when a collision from the far end can propagate back. If the network is too long, collisions can be detected “late,” after more of the frame has already been sent.
Key features / best practices:
- Ensure both ends of a link use the same duplex and speed (preferably auto-negotiation on modern Ethernet).
- Avoid hubs/half-duplex designs; use switches and full-duplex.
- Respect cabling standards (e.g., 100m for copper twisted pair Ethernet) and proper physical layer installation.
- Troubleshoot by checking interface counters: late collisions, FCS/CRC errors, runts/giants, and duplex settings.
Common misconceptions: Option D sounds plausible because it references a byte count, but late collision is defined by the collision window (timing/slot time), not “after the 32nd byte.” Classic Ethernet uses a 512-bit (64-byte) slot time concept; the question’s 32-byte threshold is not the standard definition. Option A is too generic: CSMA/CD being used is a prerequisite for collisions, but it is not a “cause” of late collisions; late collisions point to abnormal conditions like duplex mismatch or excessive propagation delay. Option C is incorrect because Ethernet backoff is measured in slot times with a binary exponential backoff algorithm, not a fixed 15-second wait. Exam tips: If you see “late collisions,” think: (1) duplex mismatch (half vs full) and (2) physical layer timing issues like excessive cable length or too-large collision domain. On modern switched full-duplex links, collisions should be essentially nonexistent.
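The duplex-mismatch fix described above can be sketched in IOS. This is a minimal illustration only: the interface name (FastEthernet0/1) and the speed value are assumptions, and the same settings must be applied on the far-end switch as well.

```
! Hard-set matching speed/duplex on BOTH ends of the suspect link
! (or confirm both ends are set to auto-negotiate).
SW1(config)# interface FastEthernet0/1
SW1(config-if)# speed 100
SW1(config-if)# duplex full

! Verify: look for the "late collisions" counter and confirm the
! negotiated duplex matches the far end.
SW1# show interfaces FastEthernet0/1
```

If the late-collision counter stops incrementing after both ends agree on duplex, the mismatch was the cause; if it continues, investigate cabling length and the physical path.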
What is a benefit of using a Cisco Wireless LAN Controller?
Correct. A WLC centralizes configuration and management for lightweight APs. You define SSIDs (WLANs), security, VLANs, QoS, and RF policies on the controller and apply them across many APs. This reduces manual per-AP configuration, speeds rollouts, and ensures consistent settings enterprise-wide, which is a primary reason controller-based WLANs are used.
Incorrect. While a controller-based design introduces concepts like CAPWAP, AP join processes, and controller profiles, the goal is to simplify operations at scale. Centralized management typically reduces complexity overall because changes are made once on the WLC rather than repeatedly on each AP, and policies remain consistent.
Incorrect. Multiple SSIDs can use the same authentication method (for example, several SSIDs can all use WPA2/WPA3-Enterprise with 802.1X, or multiple SSIDs can use PSK). SSIDs are separate logical WLANs, but they are not restricted to unique authentication methods per SSID.
Incorrect. WLCs manage lightweight APs (controller-based). Autonomous APs are independently managed and do not require or typically integrate with a WLC for centralized control. Some AP models can be converted between modes, but the WLC benefit is specifically tied to lightweight AP operation.
Core Concept: This question tests the CCNA concept of centralized wireless LAN architecture using a Cisco Wireless LAN Controller (WLC) with lightweight access points (APs). In controller-based WLANs, APs primarily handle RF functions while the WLC provides centralized configuration, policy, and management.
Why the Answer is Correct: A key benefit of a WLC is centralized management: you configure WLANs (SSIDs), security settings, VLAN mappings, QoS, RF policies, and other parameters once on the controller, and those settings are pushed to all joined lightweight APs. This eliminates the need to log into and configure each AP individually, which reduces administrative effort, speeds deployments, and improves consistency across the wireless network.
Key Features / Best Practices: A WLC commonly provides:
- Centralized WLAN/SSID creation and consistent security (WPA2/WPA3, 802.1X, PSK)
- Centralized AP provisioning and templates (AP groups, RF profiles)
- RF management features (RRM) to optimize channels and power
- Centralized monitoring, troubleshooting, and client visibility
Operationally, this supports scalable enterprise WLANs where dozens/hundreds of APs must behave consistently. It also reduces configuration drift and misconfigurations that can occur when managing APs one-by-one.
Common Misconceptions: Some may think central management “requires more complex configurations” (B). While the controller introduces an architecture to learn (CAPWAP, controller/AP join process), the overall operational model is simpler at scale because changes are made once. Option D is tempting because Cisco has both autonomous and lightweight AP modes, but WLCs manage lightweight APs; autonomous APs are configured independently and do not rely on a WLC. Option C is incorrect because different SSIDs can absolutely use the same authentication method; for example, multiple SSIDs can all use WPA2-Enterprise with 802.1X.
Exam Tips: For CCNA, remember: WLC + lightweight APs = centralized configuration and policy, CAPWAP tunnels for control/management (and sometimes data depending on design), and easier scaling. If an option mentions “configure once, apply to many APs,” that is a classic WLC benefit and is usually the correct direction.
Which action is taken by a switch port enabled for PoE power classification override?
Correct. With PoE power classification override (administrative power limit), the switch enforces the configured maximum. If the powered device draws more than that value, the port experiences a power violation and PoE is shut down to protect the switch’s power budget. On many Cisco switches this results in the interface being placed into an err-disabled state until manually recovered or until errdisable recovery is configured.
Incorrect. While PoE events (device detection, power granted/denied, overload) can generate syslog messages, generating a syslog message is not the key behavior of “power classification override.” The override feature is about enforcing an administratively set power allocation/limit, not about logging when a device begins drawing power.
Incorrect. PoE power monitoring and policing do not require pausing Ethernet data forwarding. Power negotiation occurs at the physical layer and through discovery/classification mechanisms, but normal data traffic is not temporarily halted simply because the switch checks power usage. A port may be shut down only if a fault/violation occurs, not paused for measurement.
Incorrect. PoE does not assume a device has failed just because it draws less than a configured minimum; many devices have variable power draw depending on load. PoE protection focuses on overload, short-circuit, or exceeding the allowed/allocated power. Underutilization is normal and does not trigger a disconnect behavior in standard PoE override/policing logic.
Core Concept: This question tests Power over Ethernet (PoE) power classification and the specific behavior of a switch port when “power classification override” is enabled. In Cisco PoE, the switch (PSE) allocates power to the powered device (PD) based on either IEEE class (0–4 in 802.3af, higher in 802.3at/bt) or Cisco-specific methods such as CDP/LLDP power negotiation. Administrators can also manually set/override the power allocation on a port.
Why the Answer is Correct: With power classification override (commonly implemented via commands like “power inline consumption <mW>” or similar platform-specific controls), the switch enforces an administratively defined maximum power budget for that interface. If the PD draws more than the configured/allowed value, the switch treats it as a power violation and protects the PoE budget by shutting down PoE on that port, typically placing the interface into an error-disabled (err-disabled) state. This prevents one device from consuming excessive power and starving other PoE ports or exceeding the switch’s total PoE budget.
Key Features / How It Works:
- PoE policing: The switch monitors actual power draw versus the configured/negotiated allocation.
- Administrative override: You can force a specific allocation regardless of the PD’s advertised class.
- Protection mechanism: On violation, the port can be shut down/err-disabled; recovery may require manual intervention or errdisable recovery configuration.
- Best practice: Use override/policing when you must strictly control power budgets (dense AP/phone deployments) or when PD classification is inaccurate.
Common Misconceptions:
- Syslog messages (option B) can occur for PoE events, but that is not the defining action of “classification override.”
- PoE monitoring does not pause Ethernet data (option C); power negotiation/monitoring is separate from data forwarding.
- A device drawing less than a configured minimum is not treated as a failure condition (option D); PoE is concerned with overdraw and fault conditions (shorts, overload), not “underuse.”
Exam Tips: Associate “override/policing” with “enforcement.” If a PD exceeds the administratively set power limit, expect a protective action: power removed and often err-disabled. Remember that PoE class is about budgeting; override is about forcing and enforcing that budget on the port.
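The override-and-police behavior can be sketched with Catalyst-style syntax. This is illustrative only: command availability and exact syntax vary by platform and software release, and the interface name and 9000 mW value are assumptions.

```
! Cap the port's PoE allocation at 9000 mW and err-disable the port
! if the powered device draws more than that (power policing).
Switch(config)# interface GigabitEthernet1/0/5
Switch(config-if)# power inline consumption 9000
Switch(config-if)# power inline police action errdisable

! Optional: recover automatically from PoE violations after the
! errdisable recovery timer expires.
Switch(config)# errdisable recovery cause inline-power
```

Without errdisable recovery, a port shut down for a PoE violation stays down until an administrator bounces it with shutdown/no shutdown.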
What is the primary effect of the spanning-tree portfast command?
Incorrect. PortFast does not place the port into the listening state first; in fact, it is specifically designed to bypass the listening and learning states. The port transitions directly to forwarding when the interface comes up. Choosing listening would describe the normal STP progression rather than the PortFast behavior. This option contradicts the main purpose of the feature.
Incorrect. Although PortFast does cause a port to enter forwarding immediately when the port comes up, this option is misleading because it ties the behavior specifically to a switch reload. PortFast is not primarily about reload events; it applies whenever the interface transitions up, such as when a host is connected or a link is restored. The question asks for the primary effect, and the broader, more accurate effect is reduced STP delay on the port. Therefore this option is too narrowly worded to be the best answer.
Incorrect. PortFast does not enable BPDU messages; STP BPDUs are already part of normal switch operation. A PortFast-enabled port can still send and receive BPDUs, and receiving one may indicate that the port is no longer an edge port. For protection, PortFast is often combined with BPDU Guard so the port is shut down if unexpected BPDUs arrive. This option confuses PortFast with STP control-plane behavior.
Correct. PortFast minimizes the delay associated with STP state transitions on an edge/access port by allowing the port to move to forwarding immediately instead of waiting through listening and learning. This is why it is commonly used on ports connected to PCs, printers, phones, and other end devices that need immediate connectivity after link-up. While it does not improve STP reconvergence for the entire switched network, among the given choices this is the best description of its primary operational effect. Cisco exam questions often use this wording to test whether you understand that PortFast reduces startup delay for edge ports.
Core concept: The spanning-tree portfast command is used on edge/access ports so they do not wait through the normal STP listening and learning delays before passing traffic. Its practical purpose is to let end devices begin communicating immediately after link-up, which reduces the delay caused by STP on those ports. Key features include immediate transition of an edge port to forwarding, continued BPDU processing, and common pairing with BPDU Guard for safety. A common misconception is that PortFast changes STP calculations for the whole topology; it does not. It only affects the local port’s startup behavior. Exam tip: if the question asks for the primary benefit in broad terms, think reduced STP delay for host-facing ports rather than topology-wide STP optimization.
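A typical edge-port configuration pairing PortFast with BPDU Guard looks like the following sketch (the interface name is illustrative):

```
! Host-facing access port: PortFast skips listening/learning so the
! port forwards immediately at link-up; BPDU Guard err-disables the
! port if a switch (BPDU sender) is ever plugged in.
Switch(config)# interface FastEthernet0/10
Switch(config-if)# switchport mode access
Switch(config-if)# spanning-tree portfast
Switch(config-if)# spanning-tree bpduguard enable
```

Enabling PortFast only on access ports (never on switch-to-switch links) is the standard safeguard, and BPDU Guard enforces that assumption automatically.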
Which QoS Profile is selected in the GUI when configuring a voice over WLAN deployment?
Platinum is the correct QoS profile for voice over WLAN in Cisco wireless GUIs. It maps traffic to the highest priority treatment (typically WMM Voice / AC_VO) to minimize latency and jitter. Voice is the most delay-sensitive common enterprise application, so it must be placed above video and best-effort traffic for consistent call quality.
Bronze is typically used for background or low-priority traffic (for example, bulk transfers or non-interactive applications). Selecting Bronze for voice would increase delay and jitter because voice frames would contend with other traffic without priority. This would commonly result in poor call quality, choppy audio, and dropped calls in busy RF conditions.
Gold is commonly associated with high-priority traffic and is often used for video. While video is important, it is generally prioritized below voice because voice is more sensitive to delay and jitter. Using Gold for voice may work in lightly loaded environments, but it is not the recommended or typical GUI selection for a voice WLAN deployment.
Silver generally corresponds to best-effort traffic. Best-effort does not provide the preferential queuing and contention parameters needed for real-time voice on Wi-Fi. If voice is left at Silver, it competes like normal data traffic, increasing the likelihood of latency spikes and jitter, especially as client count and airtime utilization rise.
Core Concept: This question tests WLAN Quality of Service (QoS) profiles used in Cisco wireless GUIs (WLC/AireOS and similar policy abstractions) and how they map to traffic priority for different application types. Voice over WLAN is highly sensitive to delay, jitter, and loss, so it must be placed into the highest-priority QoS treatment available.
Why the Answer is Correct: In Cisco wireless QoS profiles, “Platinum” is the profile intended for voice traffic. When configuring a voice over WLAN deployment, the GUI selection for QoS is typically set to Platinum to ensure voice frames are placed into the highest-priority 802.11e/WMM access category (Voice/AC_VO) and receive preferential queuing and scheduling. This aligns with Cisco best practices for real-time traffic: prioritize voice above all other user traffic to minimize latency and jitter.
Key Features / What Platinum Implies: Platinum generally maps to the highest DSCP/CoS handling and WMM Voice (AC_VO). It is designed to protect voice by giving it expedited forwarding behavior across the WLAN and upstream wired network (assuming end-to-end QoS is configured). In practical deployments, you also pair this with proper SSID/WLAN design (often a separate voice SSID or policy), CAC/TSPEC where applicable, and ensuring the wired side trusts/marks DSCP appropriately.
Common Misconceptions: Gold is often associated with “high priority” and is commonly used for video, so it can look tempting. However, video typically tolerates slightly more delay than voice and is usually placed below voice in priority. Silver and Bronze are for best-effort and background traffic and are not appropriate for voice.
Exam Tips: Memorize the Cisco wireless QoS profile hierarchy: Platinum (voice) > Gold (video) > Silver (best effort) > Bronze (background). Also remember that WLAN QoS must be end-to-end: WMM on the air, correct DSCP marking/trust boundaries on switches, and appropriate queuing on WAN/routers for real voice quality.
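The same selection can be made from the AireOS CLI. This is a hedged sketch: the WLAN ID (1) is an assumption, and a WLAN must be disabled before its QoS profile can be changed.

```
(Cisco Controller) > config wlan disable 1
(Cisco Controller) > config wlan qos 1 platinum
(Cisco Controller) > config wlan enable 1
```

On Catalyst 9800 controllers the equivalent is done through QoS policies applied in the policy profile rather than the AireOS metal-named profiles.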
Which unified access point mode continues to serve wireless clients after losing connectivity to the Cisco Wireless LAN Controller?
Local mode is the standard unified AP mode where the AP depends on the WLC for control and typically tunnels client traffic to the controller using CAPWAP. If the AP loses connectivity to the WLC, it generally cannot continue providing normal WLAN service because it relies on the controller for configuration, security policies, and (often) data-plane tunneling.
Mesh mode enables APs to form a wireless backhaul (MAPs connecting through RAPs) when Ethernet cabling is unavailable. While mesh can help an AP reach the network, it does not specifically provide survivability for client service when the AP loses connectivity to the WLC. The controller is still required for management and operation.
FlexConnect mode is built for remote/branch AP deployments and supports local switching so client traffic can be bridged onto local VLANs instead of being tunneled to the WLC. When the AP loses connectivity to the WLC, FlexConnect can continue serving wireless clients (standalone behavior), maintaining service during WAN/controller outages when properly configured.
Sniffer mode turns the AP into a dedicated RF capture device, forwarding 802.11 frames to a remote analyzer (for example, Wireshark) for troubleshooting. In this mode the AP does not provide normal client access. It is used for monitoring and packet analysis, not for continuing client service during controller failures.
Core Concept: This question tests Cisco AP operational modes in a unified (controller-based) wireless architecture and, specifically, what happens when an AP loses connectivity to its Wireless LAN Controller (WLC). The key idea is whether the AP can continue to provide client access and how traffic is forwarded (central switching vs local switching).
Why the Answer is Correct: FlexConnect mode is designed for distributed/branch deployments where APs may be separated from the WLC across a WAN. In FlexConnect, the AP can be configured for local switching (also called “standalone forwarding”) so client data traffic is bridged onto the local VLAN at the branch rather than tunneled back to the WLC. Crucially, FlexConnect supports “standalone” behavior during a WLC outage: if the AP loses CAPWAP control connectivity to the controller, it can continue servicing already-joined clients (and, with appropriate configuration, allow new client associations) while locally switching traffic. This is the mode explicitly intended to keep wireless service running during controller/WAN failures.
Key Features / Configuration Notes:
- CAPWAP control is normally required for centralized management, but FlexConnect can keep SSIDs active during controller loss.
- Local switching and local authentication options (depending on platform/software) enable survivability.
- Common branch design: WLC in HQ, APs in branches; FlexConnect avoids backhauling all client traffic.
- Best practice: ensure VLANs, DHCP, and default gateway are available locally if you expect survivability.
Common Misconceptions:
- “Local mode” sounds like it should keep working locally, but in Cisco terminology Local mode is the default controller-dependent mode where traffic is typically tunneled to the WLC; if the WLC is unreachable, the AP generally cannot continue normal service.
- “Mesh” is about wireless backhaul between APs, not controller survivability.
- “Sniffer” is for packet capture/monitoring, not client service.
Exam Tips: For CCNA, remember: Local mode APs are controller-reliant; FlexConnect is the branch-survivability mode (local switching) that can keep clients working when the WLC/WAN is down. If you see “continues to serve clients after losing WLC connectivity,” think FlexConnect.
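On AireOS, switching an AP to FlexConnect and enabling local switching for a WLAN can be sketched as follows. The AP name (AP-BRANCH-01) and WLAN ID (1) are illustrative assumptions; changing an AP's mode causes it to reboot.

```
(Cisco Controller) > config ap mode flexconnect AP-BRANCH-01
(Cisco Controller) > config wlan flexconnect local-switching 1 enable
```

With local switching enabled, branch client traffic is bridged onto the local VLAN, so DHCP and the default gateway must be reachable at the branch for survivability to actually work.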
Which mode must be used to configure EtherChannel between two switches without using a negotiation protocol?
"active" is an LACP mode. It actively sends LACP packets to negotiate and form an EtherChannel with a neighbor. Because it relies on the LACP negotiation protocol, it does not meet the requirement of “without using a negotiation protocol.” Use active/passive when you want standards-based negotiation and at least one side initiating.
"on" forces EtherChannel formation with no negotiation protocol (no LACP, no PAgP). Both ends must be configured as "mode on" and must match key Layer 2 settings (trunk/access, VLANs, speed/duplex, etc.). This is the correct choice when the question explicitly says “without using a negotiation protocol.”
"auto" is a PAgP mode that passively waits for PAgP negotiation messages from the other side. It will not initiate the EtherChannel; it requires the neighbor to be set to "desirable" to form. Since it uses PAgP (a negotiation protocol), it is not correct for a “no negotiation” requirement.
"desirable" is a PAgP mode that actively negotiates EtherChannel by sending PAgP packets. It can form an EtherChannel with a neighbor in "auto" or "desirable". Because it depends on PAgP negotiation, it does not satisfy the condition of configuring EtherChannel without a negotiation protocol.
Core Concept: This question tests EtherChannel formation modes and the difference between negotiated EtherChannel (using a protocol) versus static EtherChannel (no negotiation). Cisco switches can form EtherChannels using either LACP (IEEE 802.3ad/802.1AX) or PAgP (Cisco proprietary), or they can be forced to bundle links with no negotiation.
Why the Answer is Correct: To configure EtherChannel between two switches without using a negotiation protocol, you must use the static mode: "on". In Cisco IOS, "channel-group <number> mode on" forces the interfaces to become an EtherChannel without exchanging LACP or PAgP control packets. Because there is no negotiation, both ends must be configured identically (same channel-group number locally, compatible switchport mode/trunking, allowed VLANs, native VLAN, speed/duplex, etc.) or the bundle may fail or cause issues.
Key Features / Best Practices:
- "on" = no protocol, no negotiation; it simply bundles.
- LACP uses "active" (initiates) and "passive" (responds). At least one side must be active.
- PAgP uses "desirable" (initiates) and "auto" (responds). At least one side must be desirable.
- Best practice: prefer LACP over "on" because negotiation helps prevent misconfiguration (e.g., accidentally bundling mismatched links) and provides better interoperability.
Common Misconceptions: Many learners confuse "active" with “force on.” However, "active" is still a negotiation mode—specifically LACP—and therefore does use a negotiation protocol. Similarly, "desirable" and "auto" are PAgP modes, also negotiated.
Exam Tips: Memorize the mapping: LACP = active/passive; PAgP = desirable/auto; Static/no protocol = on. If the question says “without negotiation protocol,” the answer is always "on". Also remember that static EtherChannel can be risky in production because it won’t protect you from configuration mismatches.
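A static ("on") EtherChannel can be sketched as follows; the interface range and channel-group number are illustrative, and the same configuration must be applied on both switches.

```
! SW1: bundle two ports with no LACP/PAgP negotiation.
SW1(config)# interface range GigabitEthernet0/1 - 2
SW1(config-if-range)# channel-group 1 mode on

! SW2: identical configuration on the matching ports.
SW2(config)# interface range GigabitEthernet0/1 - 2
SW2(config-if-range)# channel-group 1 mode on

! Verify the bundle state (flags should show the ports in use).
SW1# show etherchannel summary
```

Because "on" performs no compatibility checks, mismatched trunk/VLAN/speed settings between the two ends can cause forwarding problems that LACP would have prevented.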
Which mode allows access points to be managed by Cisco Wireless LAN Controllers?
Bridge mode refers to an AP operating as a wireless bridge (for example, connecting two wired networks over a wireless link, or certain mesh/bridging scenarios). While some bridge/mesh deployments can be controller-based, “bridge” is not the defining mode for WLC management. The question is asking for the AP mode that is specifically intended to be managed by a WLC, which is lightweight.
Lightweight mode is the correct answer because lightweight APs are designed to be centrally managed by a Cisco Wireless LAN Controller. They use CAPWAP to join a WLC, receive configuration (SSIDs, security, RF parameters), and rely on the controller for centralized control and management features such as RRM, roaming coordination, and policy enforcement.
Mobility Express is a solution where one AP runs an embedded wireless controller function and manages other APs. It can look like “controller-managed,” but it is not the classic model of APs being managed by a dedicated Cisco WLC appliance/VM. On CCNA-style questions, “managed by Cisco WLC” typically maps to lightweight mode, not Mobility Express.
Autonomous mode means the AP operates independently without a WLC. Configuration and control functions reside on the AP itself (often configured via CLI/GUI per device). Autonomous APs can provide WLAN services, but they are not managed by a Cisco Wireless LAN Controller, which is exactly what the question is asking about.
Core Concept: This question tests Cisco AP operating modes and how they are managed. In Cisco wireless architectures, access points can run either as autonomous (controller-less) devices or as lightweight APs that are centrally managed by a Cisco Wireless LAN Controller (WLC) using CAPWAP.
Why the Answer is Correct: Lightweight mode is specifically designed for controller-based deployments. A lightweight AP does not hold the full WLAN configuration locally; instead, it discovers a WLC (via DHCP option 43, a DNS entry like CISCO-CAPWAP-CONTROLLER, broadcast on the local subnet, or previously learned controller info), establishes a CAPWAP tunnel to the WLC, downloads its configuration, and is then managed centrally. The WLC handles key functions such as RF management (RRM), security policy enforcement, client authentication integration, and consistent SSID/WLAN provisioning.
Key Features / Best Practices: In lightweight deployments, the WLC provides centralized configuration, monitoring, and troubleshooting (e.g., client state, RF interference, rogue detection). CAPWAP (UDP 5246/5247) is the control/data protocol used between AP and controller. This architecture scales well and supports enterprise features like seamless roaming, consistent QoS, and coordinated channel/power planning. From an exam perspective, remember: “WLC-managed AP” almost always implies “lightweight AP.”
Common Misconceptions: Autonomous mode can sound like it “manages itself,” but it is not managed by a WLC; it is configured per-AP (or via management tools) and runs the full control plane locally. Mobility Express can be confused with lightweight because it uses a controller-like approach, but it is an embedded controller running on an AP, not a separate WLC managing APs in the traditional sense. Bridge mode is a role/function (often for point-to-point/mesh bridging) and does not inherently mean WLC management.
Exam Tips: If the question asks “managed by Cisco WLC,” choose lightweight.
If it says “no controller,” choose autonomous. If it mentions “one AP acts as controller for others,” that points to Mobility Express (or newer embedded controller concepts), but the classic WLC-managed mode remains lightweight.
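The DHCP option 43 discovery method mentioned above can be sketched with an IOS DHCP pool. This is a hedged example: the pool name, subnet, and WLC address (192.0.2.10) are assumptions. For Cisco lightweight APs, option 43 is TLV-encoded as type 0xF1, a length of 4 bytes per controller, then each WLC management IP in hex (192.0.2.10 = C0.00.02.0A).

```
! Hand out the WLC address to joining APs via DHCP option 43.
Router(config)# ip dhcp pool AP-POOL
Router(dhcp-config)# network 192.0.2.0 255.255.255.0
Router(dhcp-config)# default-router 192.0.2.1
Router(dhcp-config)# option 43 hex f104c000020a
```

With two controllers the length byte becomes 0x08 and both addresses are concatenated after it.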
Which two values or settings must be entered when configuring a new WLAN in the Cisco Wireless LAN Controller GUI? (Choose two.)
QoS settings are commonly configured per WLAN (e.g., Platinum/Gold/Silver/Bronze on AireOS, or QoS profiles/policy on Catalyst 9800), but they are not mandatory fields to create a new WLAN object in the WLC GUI. QoS is typically adjusted after the WLAN is created, based on application requirements like voice, video, or best-effort data.
You do not enter the IP address of one or more access points when creating a WLAN. APs discover and join the controller using CAPWAP and are managed separately from WLAN definitions. Once APs are joined, WLANs are advertised based on controller configuration (and possibly AP groups/policy tags), not by manually listing AP IPs during WLAN creation.
The SSID is required because it is the client-facing wireless network name that users see and connect to. In the WLC GUI WLAN creation process, the SSID must be specified to define what beacon/probe response name will be broadcast (or optionally hidden). Without an SSID, the controller cannot present a usable WLAN to wireless clients.
The profile name is required because it is the controller’s internal identifier for the WLAN configuration object. It distinguishes WLANs in the configuration database and is used for management, troubleshooting, and referencing the WLAN in lists and logs. The profile name can differ from the SSID and is mandatory at creation time in the WLC GUI.
Management interface settings are part of controller system configuration (management IP, default gateway, DHCP server settings, etc.), not a required input when creating a new WLAN. A WLAN is typically mapped to a dynamic interface/VLAN (or policy profile) for client traffic, but the controller’s management interface settings are not entered as part of the WLAN creation step.
Core Concept: This question tests basic Cisco WLC WLAN creation requirements. On Cisco Wireless LAN Controllers (AireOS, and similarly in Catalyst 9800 concepts), a WLAN is a logical wireless network definition that maps an SSID to security, QoS, and a wired VLAN/interface. In the GUI "Create New WLAN" workflow, the controller requires a minimal set of identifiers before you can proceed to advanced settings.
Why the Answer is Correct: When creating a new WLAN in the Cisco WLC GUI, you must enter (1) a Profile Name and (2) an SSID. The Profile Name is the internal identifier the controller uses to reference the WLAN configuration object (it appears in logs, configuration lists, and policy assignments). The SSID is the actual wireless network name broadcast (or not) to clients. Without these two values, the controller cannot create the WLAN object because it lacks both an internal handle (profile) and the client-facing identifier (SSID).
Key Features / Configuration Notes: After the WLAN object is created with a profile name and SSID, you typically configure:
- Interface/VLAN mapping (dynamic interface on AireOS; policy profile/VLAN on Catalyst 9800)
- Security (WPA2/WPA3, 802.1X, PSK)
- QoS (WMM, AVC markings, per-WLAN QoS profiles)
- Advanced settings (broadcast SSID, client exclusion, session timeouts)
These settings are important, but they are not mandatory inputs at the initial "new WLAN" creation step.
Common Misconceptions: QoS and management interface settings are commonly configured and may feel "required" in real deployments, but they are not mandatory fields for creating the WLAN object in the GUI. Also, you never enter AP IP addresses to create a WLAN; APs join the controller separately, and WLANs are then enabled and advertised by APs based on AP group/site/policy assignments.
Exam Tips: For CCNA-level wireless questions, remember the separation of roles:
- AP join/controller management is separate from the WLAN definition.
- A WLAN minimally needs an internal name (profile) and the SSID. Security/VLAN/QoS are configured afterward.
If asked what "must be entered when configuring a new WLAN," think of the first required fields in the GUI wizard: Profile Name and SSID.
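As a rough illustration of why only these two identifiers are required, a WLAN can also be created from the CLI. This is a sketch, not a full deployment: the profile name (CORP-PROFILE), SSID (CorpWiFi), and WLAN ID (1) are placeholder values.

```
! AireOS WLC CLI sketch: create and enable a WLAN with only ID, profile name, and SSID
(Cisco Controller) > config wlan create 1 CORP-PROFILE CorpWiFi
(Cisco Controller) > config wlan enable 1

! Catalyst 9800 (IOS-XE) equivalent: the same three identifiers are the required arguments
WLC(config)# wlan CORP-PROFILE 1 CorpWiFi
WLC(config-wlan)# no shutdown
```

Note that in both syntaxes the command succeeds without any VLAN, security, or QoS input, matching the GUI behavior described above; those settings are applied afterward.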
Two switches are connected and using Cisco Dynamic Trunking Protocol. SW1 is set to Dynamic Auto and SW2 is set to Dynamic Desirable. What is the result of this configuration?
Incorrect. An access port results when trunk negotiation does not succeed (for example, dynamic auto to dynamic auto, or when one side is forced to access). In this scenario, SW2 is dynamic desirable and actively requests trunking, and SW1 (dynamic auto) will accept that request, so the port does not remain access.
Incorrect. DTP mismatches do not normally place a port into err-disabled. Error-disable is triggered by specific protection mechanisms (for example BPDU Guard, Port Security violations, UDLD, or link-flap detection). With auto/desirable, DTP negotiation succeeds and the interface transitions normally, so err-disable is not expected.
Incorrect. The physical link state (up/down) depends on Layer 1/2 connectivity (cabling, speed/duplex negotiation, admin shutdown, etc.), not on whether DTP negotiates trunking. With dynamic auto and dynamic desirable, the link can come up and then negotiate trunking; it should not be down due to these DTP settings.
Correct. Dynamic desirable actively tries to form a trunk and sends DTP frames to negotiate trunking. Dynamic auto is passive but will become a trunk if the neighbor requests it. Therefore, SW2 (desirable) + SW1 (auto) results in a negotiated 802.1Q trunk port.
Core Concept: This question tests Cisco Dynamic Trunking Protocol (DTP) negotiation behavior. DTP is a Cisco-proprietary Layer 2 protocol used to automatically negotiate whether a switchport becomes an access port or an 802.1Q trunk.
Why the Answer is Correct: With SW1 set to dynamic auto and SW2 set to dynamic desirable, the link negotiates to become a trunk. Dynamic desirable actively attempts to form a trunk by sending DTP messages indicating trunking intent. Dynamic auto is passive; it does not initiate trunking, but it agrees to trunk if the neighbor requests it. Because SW2 (desirable) initiates trunking and SW1 (auto) is willing to accept, the result is an operational trunk port.
Key Features / Behaviors to Know:
- dynamic desirable: actively negotiates trunking; forms a trunk with neighbor modes trunk, desirable, or auto.
- dynamic auto: passive; forms a trunk only if the neighbor is trunk or desirable.
- access: forces non-trunking.
- trunk: forces trunking (typically still sends DTP unless disabled).
- nonegotiate: disables DTP negotiation (commonly used for security and interoperability).
Best practice in many environments is to statically configure trunking (switchport mode trunk) and disable DTP (switchport nonegotiate) on trunk links to prevent unintended trunk formation.
Common Misconceptions: A frequent mistake is thinking "auto" means "automatic trunk," when it actually means "wait and see." Another misconception is that mismatched dynamic modes cause link failure; they generally do not, because DTP affects the trunk/access state, not the physical link state.
Exam Tips: Memorize the classic DTP outcomes:
- desirable + auto = trunk
- desirable + desirable = trunk
- auto + auto = access (no trunk forms)
Also remember that trunk negotiation is separate from the interface being administratively/physically up, and that error-disable is typically caused by features such as Port Security, BPDU Guard, or UDLD, not by DTP mode combinations.
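The static-trunk best practice mentioned above can be sketched in IOS configuration; the hostname and interface name here are placeholders:

```
! Force the port to trunk and disable DTP so no negotiation frames are sent
SW1(config)# interface GigabitEthernet0/1
SW1(config-if)# switchport trunk encapsulation dot1q   ! needed only on platforms that also support ISL
SW1(config-if)# switchport mode trunk
SW1(config-if)# switchport nonegotiate

! Verify the administrative and operational modes after negotiation (or forcing)
SW1# show interfaces GigabitEthernet0/1 switchport
SW1# show interfaces trunk
```

In the show interfaces switchport output, the "Administrative Mode" line reflects the configured DTP mode (for example, dynamic auto) while "Operational Mode" shows the negotiated result (trunk or static access), which is a quick way to confirm the outcomes listed above in a lab.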
A Cisco IP phone receives untagged data traffic from an attached PC. Which action is taken by the phone?
This is incorrect because Cisco IP phones are specifically designed to let an attached PC share the same physical switch connection. In normal operation, the phone does not discard ordinary untagged PC data traffic. Dropping the traffic would prevent the integrated phone-plus-PC deployment model commonly used in enterprise networks. Only separate security or policy features could cause drops, not the phone's default forwarding behavior.
This is correct because the Cisco IP phone forwards the attached PC's data frames without adding an 802.1Q tag. The phone internally separates voice and data, tagging only its own voice traffic for the voice VLAN while leaving PC traffic untagged. On the switch side, those untagged frames are then associated with the access VLAN configured on the port. The key point is that the phone allows the traffic to pass through unchanged rather than tagging it itself.
This is incorrect because the phone does not tag attached PC traffic with the native VLAN. Native VLAN is an 802.1Q trunk concept describing which VLAN is carried untagged across a trunk link. In the phone deployment model, the PC sends untagged traffic and the phone forwards it untagged, while only voice traffic is tagged. Therefore, no native-VLAN tag is added by the phone.
This is incorrect because the phone does not tag PC traffic with a default VLAN either. Although the switch will place incoming untagged frames into its configured access VLAN, that classification happens on the switchport and is not a tagging action performed by the phone. The question asks what action the phone takes, and the phone simply forwards the traffic unchanged. Confusing switch VLAN assignment with phone tagging leads to the wrong answer.
Core Concept: A Cisco IP phone acts as a small Layer 2 switch with separate handling for voice and data traffic. The phone tags its own voice traffic with the configured voice VLAN, while traffic from an attached PC is sent as ordinary untagged access traffic.
Why the Answer is Correct: When the PC sends untagged frames to the phone, the phone forwards those frames toward the switch without adding an 802.1Q tag.
Key Features: This behavior works with the common switch configuration using an access VLAN for data and a voice VLAN for the phone, where the switch classifies untagged frames into the access VLAN and tagged voice frames into the voice VLAN.
Common Misconceptions: Many learners confuse the switch's treatment of untagged frames with the phone actively tagging them, or mix up access VLAN behavior with native VLAN trunk behavior.
Exam Tips: If the question asks what action the phone takes on untagged PC traffic, focus on the phone's action itself (forwarding the frames unchanged), not the switch's later VLAN classification.
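The typical access-plus-voice-VLAN switchport described above can be sketched as follows; the interface name and VLAN numbers are placeholder values:

```
! Access port for a PC daisy-chained through a Cisco IP phone
Switch(config)# interface GigabitEthernet1/0/5
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10    ! data VLAN: untagged PC frames are classified here
Switch(config-if)# switchport voice vlan 150    ! phone tags only its own voice frames with VLAN 150
```

With this configuration, the switch places the phone's untagged pass-through PC traffic in VLAN 10 and the phone's 802.1Q-tagged voice traffic in VLAN 150, which matches the forwarding behavior the correct answer describes.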




