
Simulates the real exam experience with 90 questions and a 90-minute time limit. Practice with AI-verified answers and detailed explanations.
Powered by AI
Each answer is cross-checked by three leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth analysis of every question.
Which of the following steps of the troubleshooting methodology would most likely include checking through each level of the OSI model after the problem has been identified?
Establishing a theory of probable cause is the troubleshooting step where a technician analyzes the issue and determines what might be wrong. Using the OSI model layer by layer is a classic diagnostic method for narrowing down whether the fault is physical, data link, network, transport, or application related. At this point, the problem has been recognized, but the root cause has not yet been confirmed, so a structured OSI walkthrough helps form a likely explanation. CompTIA commonly associates top-to-bottom and bottom-to-top troubleshooting with this theory-building stage.
Implement the solution happens only after a probable cause has been identified, tested if necessary, and a remediation path has been chosen. This step is about making the change, such as replacing a cable, reconfiguring a switch port, or updating a routing table. Checking each OSI layer is primarily a diagnostic technique used before changes are made, not the implementation itself. While implementation may involve touching a specific layer, it is not the stage where you systematically analyze all layers to find the cause.
Create a plan of action comes after you have a likely cause and are deciding how to resolve it. In this phase, you consider the impact of the fix, required resources, rollback options, and communication or escalation needs. The OSI model can inform the plan, but the actual layer-by-layer checking is done earlier when diagnosing the issue. Planning is about remediation strategy, not root-cause investigation.
Verify functionality is the post-fix step where you confirm the issue is resolved and that full system operation has been restored. This can include testing connectivity, application access, and user experience after the solution is applied. Although verification may touch multiple OSI layers, the question asks about checking through each level after the problem has been identified, which points to diagnosis rather than confirmation. On the exam, verification is associated with validating the fix, not determining the probable cause.
Core concept: This question targets the CompTIA Network+ troubleshooting methodology and how performance baselines are used. A baseline is a known-good set of metrics (throughput, latency, error rates, CPU/memory utilization, etc.) captured when the network is healthy. Comparing current test results to that baseline helps determine whether the issue is resolved and whether performance has returned to expected levels.

Why the answer is correct: Comparing current throughput tests to a baseline most directly aligns with the step “Verify full system functionality and, if applicable, implement preventive measures.” After you apply a fix, you must confirm the network is operating normally end-to-end. Throughput testing (e.g., iPerf, speed tests, WAN circuit tests, file transfer benchmarks) compared against historical baseline values is a classic verification activity. It validates not just that the symptom is gone, but that performance is back within acceptable thresholds and that no new bottlenecks were introduced.

Key features/best practices: Effective verification includes (1) re-running the same tests used to detect the problem, (2) comparing results to baselines and SLAs, (3) checking multiple perspectives (client-to-server, site-to-site, wired vs. wireless), and (4) monitoring for a period to ensure stability. Baselines should be periodically updated and stored in monitoring systems (SNMP/telemetry, NetFlow, NMS dashboards) so comparisons are meaningful.

Common misconceptions: “Test the theory” can involve running throughput tests, but that step is about confirming the suspected cause before making changes (e.g., proving congestion or a duplex mismatch). The question specifically emphasizes comparing to a baseline, which is more characteristic of post-fix validation. “Implement the solution” is the change itself, not the measurement. “Document findings” may include recording baseline comparisons, but documentation is the final administrative step, not where the comparison is most likely performed.

Exam tips: When you see wording like “compare to baseline,” “confirm normal operation,” “ensure performance is restored,” or “validate end-to-end,” think of the verification step. When you see “prove the cause,” “reproduce the issue,” or “validate hypothesis,” think of testing the theory.
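The baseline comparison described above can be sketched in a few lines. The paths, baseline values, and the 10% tolerance below are illustrative assumptions, not values from any real monitoring system.

```python
# Minimal sketch: compare post-fix throughput measurements against a
# stored baseline to decide whether performance is back to normal.
# Path names, baseline figures, and the tolerance are illustrative.

BASELINE_MBPS = {"site-a_to_dc": 940.0, "client_to_server": 480.0}
TOLERANCE = 0.10  # accept results within 10% of the baseline

def within_baseline(path: str, measured_mbps: float) -> bool:
    expected = BASELINE_MBPS[path]
    return measured_mbps >= expected * (1 - TOLERANCE)

# Hypothetical post-fix iPerf-style results
results = {"site-a_to_dc": 931.5, "client_to_server": 210.0}
for path, mbps in results.items():
    status = "OK" if within_baseline(path, mbps) else "DEGRADED"
    print(f"{path}: {mbps} Mbps -> {status}")
```

A degraded path after the fix would send you back to the earlier diagnostic steps rather than on to documentation.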
A network administrator needs to set up a file server to allow user access. The organization uses DHCP to assign IP addresses. Which of the following is the best solution for the administrator to set up?
A separate DHCP scope with a /32 subnet is not a typical or practical solution for assigning a stable address to a single server. DHCP scopes are normally aligned to real subnets/VLANs. A /32 represents a single host route and does not match how clients communicate on a LAN segment. Reservations within the existing scope are the standard method to give one device a consistent IP via DHCP.
A DHCP reservation maps the server’s MAC address to a specific IP so the server always receives the same address while still using DHCP. This prevents IP conflicts, keeps addressing centrally managed, and simplifies changes, auditing, and documentation. It’s a common best practice for servers in DHCP-managed networks, especially when consistent reachability and stable DNS records are required.
A static IP address within the DHCP IP range is a common mistake because the DHCP server may lease that same address to another client, causing an IP conflict and intermittent connectivity issues. If using static addressing, best practice is to place it outside the DHCP pool and document it carefully. Given the organization uses DHCP, a reservation is safer and more manageable.
SLAAC (Stateless Address Autoconfiguration) is an IPv6 mechanism where hosts self-generate addresses based on router advertisements. While useful for client networks, it is not ideal for a file server that needs a predictable, centrally controlled address. Servers typically use static IPv6 or DHCPv6 reservations/assignments (plus stable DNS), not SLAAC, for consistent access.
Core Concept: This question tests DHCP addressing best practices for servers that must be consistently reachable by users. File servers should have a stable IP address so DNS records, mapped drives, ACLs, monitoring, and documentation remain accurate.

Why the Answer is Correct: A DHCP reservation (also called a static lease) binds a specific IP address to the server’s MAC address on the DHCP server. The server still uses DHCP, but it always receives the same IP. This provides the operational benefits of centralized IP management (single source of truth, easy changes, auditing, and avoidance of conflicts) while ensuring the file server’s address does not change. In most enterprise environments, this is the preferred approach when the organization “uses DHCP” and wants consistent addressing without manually configuring the host.

Key Features / Best Practices:
- Create a reservation in the DHCP scope for the server’s NIC MAC address and desired IP.
- Ensure the reserved IP is outside the dynamic pool (or excluded from the pool) to prevent accidental assignment to other clients.
- Pair with DNS: create/verify A/AAAA and PTR records; many DHCP servers can update DNS dynamically.
- Consider redundancy: DHCP failover/split-scope so the reservation remains available during outages.

Common Misconceptions:
- “Just set a static IP” can work, but if it’s inside the DHCP range it risks IP conflicts. Even outside the range, it reduces centralized control and can be missed during renumbering.
- “/32 scope” is not how you provide a single fixed address via DHCP in normal designs.
- SLAAC is IPv6-focused and does not provide the same deterministic, centrally managed addressing typically expected for servers.

Exam Tips: For Network+ questions: servers and network devices generally need predictable IPs. If the environment uses DHCP, the best answer is usually a DHCP reservation (or static lease). Avoid choices that place static IPs inside the DHCP pool/range, and be cautious of IPv6 autoconfiguration options when the scenario implies managed, stable addressing for a server.
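As a rough illustration of how a reservation behaves, here is a toy model of a DHCP server that consults a MAC-to-IP reservation table before leasing from the dynamic range. The MAC addresses and pool boundaries are made up for the example.

```python
# Toy model of a DHCP reservation: the server checks a MAC-to-IP
# reservation table before handing out addresses from the dynamic pool.
# MAC addresses and the pool range are illustrative.

RESERVATIONS = {"0c:00:11:22:33:44": "192.168.1.20"}  # file server NIC
POOL = [f"192.168.1.{h}" for h in range(100, 200)]    # dynamic range
leased: set = set()

def offer_address(mac: str) -> str:
    if mac in RESERVATIONS:          # a reservation always wins
        return RESERVATIONS[mac]
    for ip in POOL:                  # otherwise, next free pool address
        if ip not in leased:
            leased.add(ip)
            return ip
    raise RuntimeError("DHCP pool exhausted")

print(offer_address("0c:00:11:22:33:44"))  # always 192.168.1.20
print(offer_address("aa:bb:cc:dd:ee:ff"))  # first free pool address
```

Note the reserved address sits outside the dynamic range, mirroring the best practice above: other clients can never be leased the server's IP.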
Which of the following technologies are X.509 certificates most commonly associated with?
PKI (Public Key Infrastructure) is the correct association because X.509 defines the standard certificate format used by PKI. PKI provides the trust model (CAs, certificate chains), lifecycle management (issuance, renewal), and validation/revocation methods (CRL/OCSP) that make X.509 certificates usable for TLS, VPNs, and 802.1X authentication.
VLAN tagging refers to IEEE 802.1Q, which inserts VLAN IDs into Ethernet frames to segment Layer 2 networks. It deals with switching, trunk ports, and broadcast domain separation—not identity, encryption, or certificate formats. X.509 certificates operate at higher layers for authentication and encryption (e.g., TLS), so VLAN tagging is unrelated.
LDAP is a directory protocol used to access and manage directory information, such as users, groups, and sometimes certificate-related objects. Although LDAP directories can store X.509 certificates or publish certificate revocation information, LDAP is not the trust model or certificate framework itself. X.509 is most directly tied to PKI, which handles certificate issuance, validation, and chain of trust. Therefore, LDAP may interact with certificates in some environments, but it is not the technology most commonly associated with X.509.
MFA (multi-factor authentication) is an authentication strategy requiring two or more factor types (something you know/have/are). Certificates can be used as “something you have” (smart card/cert-based auth), but MFA is not inherently tied to X.509. X.509 is a certificate standard; PKI is the infrastructure that uses it.
Core Concept: X.509 is the standard format for public key certificates used to bind an identity (a person, server, device, or service) to a public key. These certificates are a foundational component of Public Key Infrastructure (PKI), which provides the processes and trust model for issuing, validating, revoking, and managing certificates.

Why the Answer is Correct: X.509 certificates are most commonly associated with PKI because PKI is the overarching system that makes certificates useful and trustworthy. PKI includes Certificate Authorities (CAs) that issue X.509 certificates, Registration Authorities (RAs) that validate identity, certificate repositories, and revocation mechanisms (CRL and OCSP). Without PKI, a certificate is just data; PKI provides the chain of trust that allows clients to verify that a certificate is legitimate and unaltered.

Key Features / Best Practices: X.509 certificates contain fields such as Subject, Issuer, Validity period, Subject Public Key Info, Key Usage/Extended Key Usage, and Subject Alternative Name (SAN). In real networks, X.509 is used heavily for TLS/HTTPS, VPNs (IPsec/IKE, SSL VPN), 802.1X/EAP-TLS for network access control, S/MIME email encryption, and code signing. Best practices include using strong key sizes/algorithms, ensuring SAN matches hostnames, monitoring expiration, and implementing revocation checking (OCSP stapling where appropriate).

Common Misconceptions: LDAP is often mentioned alongside certificates because directories can store certificates or user attributes, but LDAP is not the primary technology X.509 is “associated with”; it’s a directory protocol. MFA can use certificates as one factor (smart cards), but MFA is an authentication approach, not the certificate trust framework. VLAN tagging (802.1Q) is a Layer 2 segmentation method and unrelated to certificate formats.
Exam Tips: For Network+ questions, when you see “X.509,” think “digital certificates,” “TLS,” “CA,” “chain of trust,” and “PKI.” If the option “PKI” is present, it is almost always the best match. Also remember CRL/OCSP as PKI validation components and SAN/CN as common troubleshooting points for certificate name mismatch errors.
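The chain-of-trust idea can be sketched as a walk up the issuer links until a trusted root is reached. This is a deliberately simplified model (no signature verification, validity dates, or revocation checks), and the certificate names are hypothetical.

```python
# Simplified sketch of PKI chain-of-trust validation: follow each
# certificate's issuer link until a trusted root CA is reached.
# Names and the chain itself are hypothetical.

ISSUED_BY = {                     # subject -> issuer
    "www.example.com": "Intermediate CA",
    "Intermediate CA": "Root CA",
    "Root CA": "Root CA",         # roots are self-signed (issuer == subject)
}
TRUSTED_ROOTS = {"Root CA"}

def chain_is_trusted(subject: str) -> bool:
    seen = set()
    while subject not in seen:    # stop if the chain loops or dead-ends
        if subject in TRUSTED_ROOTS:
            return True
        seen.add(subject)
        subject = ISSUED_BY.get(subject, subject)
    return False

print(chain_is_trusted("www.example.com"))  # True: chains up to Root CA
print(chain_is_trusted("rogue.example"))    # False: no path to a root
```

Real validation also checks signatures, validity periods, key usage, and revocation (CRL/OCSP); this sketch captures only the trust-anchor walk.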
Which of the following should be used to obtain remote access to a network appliance that has failed to start up properly?
A crash cart is a portable set of local-access tools (monitor, keyboard, mouse, cables/serial adapter) used to connect directly to a server or appliance in a data center. It’s useful when the device won’t boot or has no network access, but it requires someone physically on-site. Because the question asks for remote access, a crash cart is not the best answer.
A jump box (jump host/bastion host) is a hardened system used as an intermediate point to access internal resources securely. It improves security and auditing, but it still depends on the target appliance being reachable over the network and having working in-band management services. If the appliance failed to start properly, a jump box won’t help you access its console/boot process.
Secure Shell (SSH) is an in-band remote management protocol that requires the device OS/network stack to be running and the SSH service to be available. If the appliance fails during boot, interfaces may not come up and the SSH daemon may never start, making SSH unusable. SSH is excellent for normal remote administration, but not for boot-level recovery access.
Out-of-band management provides a separate management path independent of the device’s production interfaces and often independent of the main OS state. Examples include a dedicated management port, console server/serial-over-IP, or hardware management controllers. This enables remote viewing of boot output and recovery actions even when the appliance fails to start properly, making it the correct choice.
Core Concept: This question tests remote troubleshooting access methods when a network appliance fails to boot normally. The key idea is that in-band management (SSH, web GUI, etc.) depends on the device OS and network stack being up, while out-of-band (OOB) management provides an independent path to reach the device even during boot failures.

Why the Answer is Correct: Out-of-band management is specifically designed to provide remote access to a device regardless of the operational state of its primary network interfaces or even its main OS. If an appliance fails to start properly, its in-band services (like SSH) may not load, IP interfaces may not come up, and routing may be unavailable. OOB uses a dedicated management plane such as a console server, dedicated management port, or hardware management controller (e.g., iLO/DRAC/IPMI, serial-over-IP). This allows you to view boot messages, enter recovery modes, adjust BIOS/firmware settings (on applicable platforms), and perform low-level remediation remotely.

Key Features / Best Practices: OOB commonly includes a dedicated management interface on a separate management VLAN/VRF, strong authentication (AAA, MFA where possible), encryption (SSH to console server, HTTPS to management controller), and strict access controls (ACLs, jump host/VPN). Best practice is to keep OOB isolated from production traffic and monitor/log all administrative sessions.

Common Misconceptions: A crash cart can indeed access a broken device, but it is not remote; it requires physical presence. A jump box can facilitate secure admin access, but it still relies on the target device being reachable and responsive over the network. SSH is an in-band protocol and typically won’t work if the device didn’t boot far enough to start the SSH daemon or bring up interfaces.
Exam Tips: When you see “failed to start/boot,” “OS down,” “network unreachable,” or “need access during POST/boot,” think out-of-band management (console, IPMI/iLO/DRAC, console server). When the question emphasizes “remote” plus “device not booting properly,” OOB is the most reliable and exam-appropriate choice.
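A toy model may help show why OOB works when SSH does not: in-band services depend on the main OS booting, while the management controller runs on standby power independently of the OS. The class and field names below are illustrative.

```python
# Toy model of in-band vs. out-of-band reachability for a failed appliance.
# In-band SSH requires a booted OS; the OOB controller does not.

from dataclasses import dataclass

@dataclass
class Appliance:
    booted: bool             # main OS finished booting
    oob_controller_up: bool  # e.g., iLO/IPMI-style controller on standby power

    def in_band_ssh(self) -> bool:
        # The SSH daemon only starts after the OS boots
        return self.booted

    def oob_console(self) -> bool:
        # The management controller runs independently of the OS
        return self.oob_controller_up

def remote_access_options(dev: Appliance) -> list:
    opts = []
    if dev.in_band_ssh():
        opts.append("ssh")
    if dev.oob_console():
        opts.append("out-of-band console")
    return opts

failed_boot = Appliance(booted=False, oob_controller_up=True)
print(remote_access_options(failed_boot))  # only the OOB console is usable
```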
A company's office has publicly accessible meeting rooms equipped with network ports. A recent audit revealed that visitors were able to access the corporate network by plugging personal laptops into open network ports. Which of the following should the company implement to prevent this in the future?
URL filters restrict which web domains/categories users can access (often via proxy or next-gen firewall). They do not prevent a device from connecting to the LAN, getting an IP address, or reaching non-web services (SMB, RDP, DNS, etc.). In this scenario, visitors gained network access at Layer 2/3 by plugging into a port; URL filtering would only limit some HTTP/HTTPS traffic after the fact.
A VPN provides an encrypted tunnel for remote users over untrusted networks (e.g., the internet) into the corporate network. It does not stop someone who is physically connected to an internal switch port from accessing the LAN. While you could require VPN for certain resources, the visitor would still have local network connectivity unless admission control (like NAC/802.1X) is enforced.
ACLs (Access Control Lists) filter traffic based on IP, protocol, and ports at routers, firewalls, or Layer 3 switches. They are useful for segmentation and limiting what a connected host can reach, but they don’t authenticate the endpoint or prevent link-level access. A visitor could still connect, obtain DHCP, and potentially access allowed services; ACLs are not the best control for open wall jacks.
NAC (Network Access Control) enforces who/what can connect to the network, commonly using 802.1X with RADIUS. It can block unknown devices, require user/device authentication, and place unauthenticated systems into a guest or quarantine VLAN. This directly prevents visitors from gaining corporate network access simply by plugging into an open port, making it the best answer.
Core Concept: This question tests port-based access control and endpoint authorization at the network edge. The key issue is that physical access to an Ethernet jack is effectively network access unless the switch enforces authentication/authorization. Network Access Control (NAC) solutions (often using IEEE 802.1X) ensure only approved users/devices can gain network connectivity.

Why the Answer is Correct: NAC prevents visitors from accessing the corporate network by requiring authentication before a switch port becomes fully active on the production VLAN. With 802.1X, the switch acts as the authenticator, the user/device (supplicant) proves identity (credentials/cert), and a RADIUS server (authentication server) validates it. If authentication fails, NAC can place the device into a guest VLAN, quarantine/remediation network, or block it entirely. This directly addresses the scenario: open wall ports in public meeting rooms.

Key Features / Best Practices:
1) 802.1X on wired ports with RADIUS (e.g., NPS/ISE/ClearPass) for centralized policy.
2) Dynamic VLAN assignment: corporate VLAN for managed devices, guest VLAN for visitors.
3) Posture assessment (optional): verify AV, patch level, device compliance.
4) MAC Authentication Bypass (MAB) for non-802.1X devices (printers/VoIP), ideally with tight profiling and restricted VLANs.
5) Complementary controls: disable unused ports, port security (limit MACs), and physical security for jacks—though NAC is the primary control asked here.

Common Misconceptions: ACLs can restrict traffic but do not stop a visitor from obtaining link and often an IP address; they also don’t validate the endpoint identity. VPN provides secure remote access, not protection against someone already on-site plugging into a port. URL filtering controls web destinations, not network admission.

Exam Tips: When the problem is “unauthorized device plugged into an open switch port,” think NAC/802.1X first. If the question mentions “authenticate devices/users before granting network access,” “quarantine,” or “guest VLAN,” that’s NAC. ACLs are for controlling permitted flows after access is already granted, not for admission control.
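The NAC decision described above can be sketched as a port-authorization function: authenticated, known devices get the corporate VLAN, everything else lands in a guest/quarantine VLAN. The credential check stands in for a real RADIUS exchange, and the MACs and VLAN IDs are examples.

```python
# Sketch of NAC port authorization with 802.1X-style outcomes.
# The device list, credentials, and VLAN IDs are illustrative; in a
# real deployment a RADIUS server performs the validation.

from typing import Optional

CORPORATE_DEVICES = {"0c:00:11:22:33:44"}   # known/managed MACs
VALID_CREDENTIALS = {("alice", "s3cret")}   # stand-in for RADIUS lookup

def authorize_port(mac: str, user: Optional[str], password: Optional[str]) -> int:
    """Return the VLAN ID the switch should assign to the port."""
    if (user, password) in VALID_CREDENTIALS and mac in CORPORATE_DEVICES:
        return 10   # corporate VLAN
    return 99       # guest/quarantine VLAN for unknown devices

print(authorize_port("0c:00:11:22:33:44", "alice", "s3cret"))  # 10
print(authorize_port("de:ad:be:ef:00:01", None, None))         # 99
```

A visitor's laptop in the meeting room hits the second branch: it gets link, but only onto the guest VLAN, never the corporate network.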
Which of the following technologies is the best choice to listen for requests and distribute user traffic across web servers?
A router primarily connects different networks and forwards packets based on IP routing tables (Layer 3). While routers can use features like policy-based routing or ECMP to choose paths, they are not typically used to distribute client web requests across multiple backend servers with health checks and server pools. In exam context, “distribute user traffic across web servers” points beyond routing to application delivery/load balancing.
A switch primarily operates at Layer 2 (and sometimes Layer 3 for multilayer switches) to forward frames within a LAN using MAC address tables. Switches can provide redundancy (STP), segmentation (VLANs), and high throughput, but they do not natively provide server-pool load distribution, application-aware routing, or health checks for web servers. They move traffic, but they don’t balance web requests across servers.
A firewall enforces security policies by filtering and inspecting traffic (stateful inspection, application control, NAT, etc.). Although some next-generation firewalls can include features resembling load balancing, that is not their primary purpose. For Network+ exam questions, a firewall is chosen when the goal is to block/allow traffic, segment networks, or protect resources—not to distribute user requests across multiple web servers.
A load balancer is designed to listen for incoming client requests and distribute them across multiple web servers (a server pool) to improve performance, scalability, and availability. It commonly uses a virtual IP/hostname, health checks to remove failed servers, and algorithms like round robin or least connections. Many also support SSL/TLS termination and Layer 7 routing, making it the best match for the scenario.
Core Concept: This question tests your understanding of traffic distribution and high availability for web applications. The technology designed to listen for client requests (often HTTP/HTTPS) and distribute those requests across multiple backend web servers is a load balancer.

Why the Answer is Correct: A load balancer sits in front of a server farm (pool) and presents a single “virtual IP” (VIP) or hostname to users. It accepts incoming connections, evaluates which backend server should handle each request, and forwards traffic accordingly. This improves performance (spreads load), increases availability (removes failed servers from rotation), and enables scalability (add servers without changing the client-facing address).

Key Features / Configurations / Best Practices: Load balancers commonly support health checks (HTTP GET, TCP connect, or custom probes) to detect failed or degraded servers and automatically stop sending them traffic. They can use distribution algorithms such as round robin, least connections, weighted methods, or hash-based persistence. Many provide session persistence (“sticky sessions”) via cookies or source IP when applications require it. They may also perform SSL/TLS termination (offloading encryption from web servers), Layer 7 routing (path/host-based routing), and connection draining to gracefully remove servers during maintenance. In enterprise designs, load balancers are often deployed redundantly (active/active or active/passive) to avoid a single point of failure.

Common Misconceptions: Routers and switches do forward traffic, but they do not inherently provide application-aware distribution across multiple web servers with health checks and server pools. Firewalls can control and inspect traffic and may include limited load-balancing features in some next-gen platforms, but their primary role is security enforcement, not distributing user sessions across a web server farm.
Exam Tips: On Network+ questions, keywords like “distribute user traffic,” “server farm,” “VIP,” “health checks,” “reverse proxy,” and “high availability for web servers” strongly indicate a load balancer. If the question emphasizes security policy enforcement, think firewall; if it emphasizes inter-network routing, think router; if it emphasizes LAN switching and VLANs, think switch.
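The selection logic described above can be sketched as round-robin over a pool, skipping any server whose health check failed. The server names and health states are illustrative.

```python
# Minimal sketch of load-balancer server selection: round-robin over a
# pool, skipping backends that failed their health probe.
# Server names and health states are illustrative.

import itertools

SERVERS = ["web1", "web2", "web3"]
healthy = {"web1": True, "web2": False, "web3": True}  # web2 failed its probe

_rr = itertools.cycle(SERVERS)

def pick_server() -> str:
    for _ in range(len(SERVERS)):      # try each server at most once per request
        server = next(_rr)
        if healthy[server]:
            return server
    raise RuntimeError("no healthy backends")

print([pick_server() for _ in range(4)])  # ['web1', 'web3', 'web1', 'web3']
```

The failed backend (web2) is never selected, which is exactly the "remove failed servers from rotation" behavior health checks provide.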
A user is unable to navigate to a website because the provided URL is not resolving to the correct IP address. Other users are able to navigate to the intended website without issue. Which of the following is most likely causing this issue?
The hosts file is a local, static mapping of hostnames to IP addresses and is checked before DNS queries. If it contains an incorrect entry for the website (accidental, malicious, or leftover from testing), that single user’s machine will resolve the URL to the wrong IP even though DNS is correct for everyone else. This perfectly matches an isolated resolution issue.
A self-signed certificate affects TLS trust, not DNS resolution. The user would typically still reach the correct IP/website but receive a certificate warning or be blocked by policy (HSTS/enterprise inspection). It does not cause the URL to resolve to an incorrect IP address; it occurs after name resolution and TCP/TLS negotiation begin.
A nameserver (NS) record controls DNS delegation for a domain. If NS records were wrong or missing, many or all users would fail to resolve the domain (depending on caching and resolver paths), not just a single user. Since other users can reach the site without issue, domain-level NS delegation is unlikely to be the root cause.
IP helper (often referring to DHCP relay on routers/switches) forwards broadcast DHCP requests to a DHCP server across subnets. It is used for IP address assignment, not for resolving URLs to IP addresses. A misconfigured IP helper might prevent a host from getting an IP, but it would not specifically cause one URL to resolve incorrectly.
Core Concept: This question tests DNS name resolution and the local resolution order on an endpoint. When a user types a URL, the system resolves the hostname to an IP address using local sources first (e.g., hosts file and DNS cache) before querying configured DNS servers.

Why the Answer is Correct: Because other users can reach the intended website, the authoritative DNS records and general network path are likely fine. The issue is isolated to a single user/device, which strongly indicates a local override or cache problem. The most likely cause is an incorrect entry in the local hosts file mapping the website’s hostname to the wrong IP address. A hosts file entry takes precedence over DNS queries, so even if DNS is correct for everyone else, that one machine will consistently resolve to the wrong IP.

Key Features / Best Practices: The hosts file is a static, local name-to-IP mapping used for testing, internal overrides, or blocking. On Windows it’s typically at C:\Windows\System32\drivers\etc\hosts; on Linux/macOS at /etc/hosts. Troubleshooting steps include checking the hosts file for the domain, flushing the DNS cache (e.g., ipconfig /flushdns), and verifying resolution with nslookup/dig (which bypasses hosts in some contexts depending on tool usage) and ping/Resolve-DnsName (which follows OS resolution rules). Best practice is to avoid unnecessary hosts entries in managed environments and control name resolution centrally via DNS.

Common Misconceptions: A nameserver record problem (NS) can break resolution, but it would affect many users, not just one. A self-signed certificate causes browser trust/TLS warnings after connecting to the site’s IP, not incorrect DNS resolution. IP helper relates to DHCP relay and is unrelated to web URL resolution.

Exam Tips: If “only one user” has a name resolution problem while others are fine, think local causes: hosts file, DNS cache, incorrect DNS server settings, VPN split-DNS, or malware. If “everyone” is affected, think DNS infrastructure: A/AAAA/CNAME records, NS delegation, registrar issues, or upstream DNS outages.
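A few lines can illustrate why a hosts entry wins: the resolver consults the local hosts mapping before issuing any DNS query. The hostnames and addresses below are illustrative.

```python
# Sketch of endpoint name-resolution order: a local hosts-file entry is
# checked before DNS, so a stale override beats the correct DNS answer.
# The mappings are illustrative.

HOSTS_FILE = {"www.example.com": "10.0.0.99"}   # stale local override
DNS = {"www.example.com": "203.0.113.80"}       # what everyone else gets

def resolve(name: str) -> str:
    if name in HOSTS_FILE:      # consulted first, before any DNS query
        return HOSTS_FILE[name]
    return DNS[name]

print(resolve("www.example.com"))  # 10.0.0.99 -- the wrong, local answer
```

Removing the hosts entry (or commenting it out) restores normal DNS behavior for that one machine, matching the troubleshooting steps above.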
Which of the following is the next step to take after successfully testing a root cause theory?
Correct. Once the root cause theory is successfully tested/confirmed, the next step in the CompTIA troubleshooting model is to establish a plan of action—i.e., determine resolution steps. This includes choosing the fix, assessing impact, planning downtime/rollback, and identifying required resources. Planning comes before making changes to avoid unnecessary outages or introducing new issues.
Incorrect. Duplicating the problem in a lab is a technique used when you need a safe environment to reproduce symptoms or test theories without impacting production. Since the question states the root cause theory has already been successfully tested, you have already validated the cause. The next step is to plan the remediation, not re-create the issue elsewhere.
Incorrect. Presenting the theory for approval is not the standard next step in CompTIA’s troubleshooting sequence. In real environments, you may need change approval, but that approval is typically for the planned fix (change request), not for the theory itself. The exam expects you to determine the resolution steps/plan before any approvals or implementation.
Incorrect. Implementing the solution happens after you establish the plan of action and consider potential effects. Jumping straight to implementation can cause avoidable downtime, miss required prerequisites, or violate change control. CompTIA emphasizes planning and risk assessment between confirming the cause and applying the fix.
Core Concept: This question tests CompTIA Network+ troubleshooting methodology. CompTIA aligns closely with an industry-standard flow: identify the problem, establish a theory of probable cause, test the theory, establish a plan of action to resolve the problem and identify potential effects, implement the solution (or escalate), verify full system functionality and implement preventive measures, and document findings.

Why the Answer is Correct: After you have successfully tested a root cause theory, the next step is to translate that validated cause into an actionable remediation plan—i.e., determine resolution steps (and typically consider impact/risk). Testing confirms you’ve found the cause; it does not fix the issue. The correct next action is to decide what changes, configurations, replacements, or process steps will resolve the problem safely and effectively. This is where you consider dependencies, maintenance windows, rollback plans, and stakeholder communication.

Key Features / Best Practices: Determining resolution steps includes selecting the least disruptive fix first, identifying required resources (access, tools, parts), assessing risk and potential side effects (e.g., downtime, routing reconvergence, security exposure), planning validation steps, and preparing rollback. In many organizations, this also ties into change management (RFCs, approvals), but the troubleshooting “next step” is still to define the resolution plan before executing it.

Common Misconceptions: Option D (implement the solution) is tempting because it feels like the natural next move once you know the cause. However, CompTIA expects you to plan before you act. Option C (present the theory for approval) can be part of a formal change process, but it’s not the standard next step in the troubleshooting sequence; approval typically applies to the planned change, not the theory itself. Option B (duplicate the problem in a lab) is useful when you cannot safely test in production or need to reproduce an intermittent issue, but the question states the theory has already been successfully tested.

Exam Tips: Memorize the order: Theory -> Test -> Plan/Resolution steps -> Implement -> Verify/Prevent -> Document. When a question says the theory is confirmed, the next best answer is usually “establish plan/determine resolution steps,” not “implement.”
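The ordering argument can be captured as a simple lookup over the step sequence; the step names below paraphrase the CompTIA flow described above.

```python
# The CompTIA troubleshooting sequence as an ordered list: given the
# step just completed, return the next one. Step names paraphrase the
# methodology described in the explanation.

STEPS = [
    "identify the problem",
    "establish a theory of probable cause",
    "test the theory",
    "establish a plan of action",
    "implement the solution",
    "verify full system functionality",
    "document findings",
]

def next_step(completed: str) -> str:
    return STEPS[STEPS.index(completed) + 1]

print(next_step("test the theory"))  # establish a plan of action
```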
A user reports having intermittent connectivity issues to the company network. The network configuration for the user reveals the following:

IP address: 192.168.1.10
Subnet mask: 255.255.255.0
Default gateway: 192.168.1.254

The network switch shows the following ARP table:

MAC address      IP address      Interface  VLAN
0c00.1134.0001   192.168.1.10    eth4       10
0c00.1983.210a   192.168.2.13    eth5       11
0c00.1298.d239   192.168.1.10    eth6       10
0c00.a291.c113   192.168.2.12    eth7       11
0c00.923b.2391   192.168.1.11    eth8       10
feff.2391.1022   192.168.1.254   eth1       10

Which of the following is the most likely cause of the user's connection issues?
Incorrect. A wrong VLAN assignment typically results in consistent loss of connectivity to the expected subnet (wrong IP network, wrong gateway reachability) rather than intermittent behavior. In the ARP table, both entries for 192.168.1.10 are in VLAN 10, matching the user’s 192.168.1.0/24 configuration and the gateway’s presence in VLAN 10. That points away from a VLAN mismatch and toward an IP-level conflict.
Incorrect. Spanning tree conflicts/loops usually cause broad network symptoms: high CPU on switches, broadcast storms, widespread packet loss, and MAC address table instability (the same MAC moving between ports). Here, the evidence is specific: one IP address (192.168.1.10) resolves to two different MAC addresses on two access ports in the same VLAN. That pattern aligns with duplicate IP usage, not STP issues.
Correct. The ARP table shows 192.168.1.10 mapped to two different MAC addresses (0c00.1134.0001 on eth4 and 0c00.1298.d239 on eth6) within VLAN 10. This indicates another device is using the same IP address, often due to a manually configured static IP or a misconfigured DHCP reservation. ARP cache entries can alternate between the two MACs, producing intermittent connectivity for the affected user.
Incorrect. Overlapping route tables on a router can cause traffic to take the wrong path between networks, but it does not explain a single switch learning two different MAC addresses for the same IP within one VLAN. ARP is local to the broadcast domain; the duplicate ARP entries for 192.168.1.10 are a Layer 2/3 local symptom. Routing issues would more likely affect inter-subnet reachability, not create duplicate ARP mappings.
Core concept: This question tests IP addressing conflicts and how ARP (Address Resolution Protocol) reveals them at the Layer 2/Layer 3 boundary. ARP maps an IPv4 address to a MAC address within the same broadcast domain (VLAN). If two devices claim the same IP, ARP entries can "flip" between MAC addresses, causing intermittent connectivity.

Why the answer is correct: The user is configured as 192.168.1.10/24 with gateway 192.168.1.254. In the switch's ARP table, the IP 192.168.1.10 appears twice with two different MAC addresses on two different switch ports (eth4 and eth6), both in VLAN 10. That is the classic signature of an IP address conflict: two hosts are using 192.168.1.10. When other devices (or the gateway) ARP for 192.168.1.10, they may learn either MAC depending on which host replied most recently. Traffic destined for the user may intermittently go to the other device, and the user's return traffic may not match the expected ARP cache on peers, producing symptoms like sporadic drops, inability to reach some resources, or sessions that reset.

Key features / best practices: Use DHCP with reservations to prevent duplicates, enable DHCP snooping and IP source guard (where supported), and investigate by checking the MAC addresses on eth4/eth6 (switch CAM table, port descriptions, or LLDP/CDP) to locate both endpoints. On the endpoint, look for "duplicate IP address" OS warnings. Clearing ARP caches may temporarily "fix" it, but it will recur until the duplicate is removed.

Common misconceptions: VLAN issues can also cause connectivity problems, but here both conflicting entries are in the same VLAN (10), and the gateway (192.168.1.254) is also in VLAN 10, so basic VLAN membership looks consistent. Spanning tree problems usually manifest as widespread instability, MAC flapping across uplinks, or broadcast storms, not a single IP mapping to two MACs. Overlapping routes are a router-side issue and would not create duplicate ARP entries for the same IP on a single VLAN.

Exam tips: When you see "intermittent connectivity" plus ARP showing the same IP mapped to different MAC addresses, think "duplicate IP." If the same MAC appears on different ports, think "MAC flapping/loop." If the same IP appears in different VLANs, think "mis-VLAN or L3 design issue."
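The detection logic described above can be sketched in a few lines of Python. This is a minimal illustration (the tuple-based table format and `find_ip_conflicts` helper are assumptions for the sketch, not a real switch API): group ARP entries by (VLAN, IP) and flag any pair that resolves to more than one MAC.

```python
from collections import defaultdict

# The ARP entries from the question, as (mac, ip, interface, vlan) tuples.
arp_table = [
    ("0c00.1134.0001", "192.168.1.10",  "eth4", 10),
    ("0c00.1983.210a", "192.168.2.13",  "eth5", 11),
    ("0c00.1298.d239", "192.168.1.10",  "eth6", 10),
    ("0c00.a291.c113", "192.168.2.12",  "eth7", 11),
    ("0c00.923b.2391", "192.168.1.11",  "eth8", 10),
    ("feff.2391.1022", "192.168.1.254", "eth1", 10),
]

def find_ip_conflicts(entries):
    """Return {(vlan, ip): macs} for any IP seen with multiple MACs in one VLAN."""
    seen = defaultdict(set)
    for mac, ip, _iface, vlan in entries:
        seen[(vlan, ip)].add(mac)
    return {key: macs for key, macs in seen.items() if len(macs) > 1}

conflicts = find_ip_conflicts(arp_table)
print(conflicts)
# -> {(10, '192.168.1.10'): {'0c00.1134.0001', '0c00.1298.d239'}}
```

Against the table in the question, only 192.168.1.10 in VLAN 10 is flagged, with the two MACs from eth4 and eth6, which is exactly the duplicate-IP signature the explanation describes.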
A network administrator configured a router interface as 10.0.0.95 255.255.255.240. The administrator discovers that the router is not routing packets to a web server with IP 10.0.0.81/28. Which of the following is the best explanation?
Incorrect. With a /28 mask, 10.0.0.81 belongs to subnet 10.0.0.80/28 (range 10.0.0.80–10.0.0.95). The router IP 10.0.0.95 is also within that same subnet range. The issue is not that the server is in a different subnet; it’s that the router’s chosen address is not a valid host address.
Correct. In subnet 10.0.0.80/28, the broadcast address is 10.0.0.95. Broadcast addresses are reserved for sending to all hosts on the subnet and cannot be assigned to a router interface as a usable host IP. Because the router interface is configured with the broadcast address, it will not route traffic properly to hosts like 10.0.0.81/28.
Incorrect. 10.0.0.0 is historically a Class A network, but classful addressing is largely irrelevant in modern networks using CIDR (/28 here). Being “Class A” does not prevent routing. The routing problem is caused by assigning an invalid interface address (broadcast), not by the original class of the address block.
Incorrect. 10.0.0.0/8 is private RFC1918 space, which is commonly used internally and routes perfectly fine within private networks. Private addressing only becomes an issue when trying to route on the public Internet without NAT or proper translation. It does not explain why the router can’t route locally to 10.0.0.81/28.
Core concept: This question tests IPv4 subnetting (/28) and identifying valid host addresses versus network and broadcast addresses. Routers must have a valid host IP on an interface to route traffic for that subnet.

Why the answer is correct: A /28 mask (255.255.255.240) creates subnets in blocks of 16 addresses in the last octet: 0–15, 16–31, 32–47, 48–63, 64–79, 80–95, 96–111, etc. The router interface is configured as 10.0.0.95/28. The 80–95 block corresponds to subnet 10.0.0.80/28. In that subnet:

- Network address: 10.0.0.80
- Usable hosts: 10.0.0.81 through 10.0.0.94
- Broadcast address: 10.0.0.95

Therefore, 10.0.0.95 is the broadcast address, not a usable host address. An interface configured with a broadcast address will not function correctly for normal IP routing, so the router will not properly route packets to the web server at 10.0.0.81/28 (which is a valid host in that subnet).

Key features / best practices:

- Always verify subnet boundaries and calculate the network and broadcast addresses before assigning interface IPs.
- For /28, remember "increment by 16" in the last octet.
- Router interfaces (and default gateways) must be assigned a usable host address, commonly the first or last usable (e.g., .81 or .94 in this subnet).

Common misconceptions: Option A can seem plausible because routing failures often occur when devices are in different subnets, but here both addresses are in 10.0.0.80/28. Options C and D describe properties of the address range (classful and private), but those do not inherently prevent routing.

Exam tips: When given an IP and mask, quickly compute the block size: 256 − mask octet (256 − 240 = 16). Find the subnet range containing the IP, then identify the network (first) and broadcast (last) addresses. If the configured address equals the last address in the block, it is the broadcast and invalid for a host/interface.
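The same calculation can be verified with Python's standard-library `ipaddress` module, a quick sanity check worth knowing alongside the manual "block size" method:

```python
import ipaddress

# The router interface as configured in the question.
iface = ipaddress.ip_interface("10.0.0.95/28")
net = iface.network

print(net)                    # 10.0.0.80/28
print(net.network_address)    # 10.0.0.80
print(net.broadcast_address)  # 10.0.0.95

# hosts() yields only the usable addresses: 10.0.0.81 .. 10.0.0.94
hosts = list(net.hosts())
print(hosts[0], hosts[-1])    # 10.0.0.81 10.0.0.94

# The configured address is the broadcast, so it is not a usable host.
assert iface.ip == net.broadcast_address
assert iface.ip not in hosts
```

The assertions confirm the explanation: 10.0.0.95 falls outside the usable range of 10.0.0.80/28 because it is that subnet's broadcast address.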

