
Simulate the real exam experience with 90 questions and a 90-minute time limit. Practice with AI-verified answers and detailed explanations.
Powered by AI
Every answer is verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.
Which of the following ports is used for secure email?
Port 25 is standard SMTP used primarily for server-to-server email transport (MTA to MTA). While STARTTLS can be enabled on 25, it is commonly used for relaying and may allow opportunistic (not enforced) encryption. Many organizations block outbound 25 from clients to reduce spam. For “secure email submission,” 587 is the better, modern choice.
Port 110 is POP3, used by clients to retrieve email from a mailbox. POP3 on 110 is typically cleartext unless upgraded with STARTTLS (not always enabled). The secure, implicit TLS version is POP3S on port 995. Since 995 is not listed, 110 is not the correct answer for secure email.
Port 143 is IMAP, used for email retrieval and mailbox synchronization. Like POP3, IMAP on 143 is not inherently encrypted, though it can use STARTTLS if configured. The secure, implicit TLS version is IMAPS on port 993. Because 993 is not an option, 143 is not the secure email port here.
Port 587 is SMTP Message Submission, the standard port for clients submitting outgoing email to a mail server. It is commonly configured to require SMTP authentication and to use STARTTLS for encryption, making it the typical “secure email sending” port tested on Network+. When 465/993/995 are not present, 587 is the best secure email-related answer.
Core concept: This question tests knowledge of well-known email-related TCP ports and which ones are associated with secure or authenticated email submission. Email commonly involves three protocols: SMTP for sending, and POP3/IMAP for retrieving. “Secure email” in Network+ questions typically refers to using TLS (either implicit TLS or STARTTLS) and/or authenticated submission rather than legacy cleartext services.

Why the answer is correct: TCP 587 is the SMTP Message Submission port (per IETF standards such as RFC 6409). It is used by email clients (MUAs) to submit outgoing mail to a mail server (MSA). Port 587 is commonly configured to require authentication (SMTP AUTH) and to use encryption via STARTTLS, making it the modern, recommended “secure” way for clients to send email. In many environments, port 25 is reserved for server-to-server SMTP relay, while clients use 587.

Key features/best practices: On port 587, administrators typically enforce SMTP AUTH, require STARTTLS, and apply submission-specific policies (rate limits, anti-abuse controls). This reduces spam/abuse and prevents credential exposure. Note that “secure email” can also mean SMTPS on TCP 465 (implicit TLS), but 465 is not listed here; among the provided options, 587 is the best match for secure sending.

Common misconceptions: Port 25 is SMTP but is often unencrypted and intended for mail transfer between servers, not secure client submission. Ports 110 (POP3) and 143 (IMAP) are for retrieving mail and are not inherently secure; their secure counterparts are 995 (POP3S) and 993 (IMAPS), which are not options.

Exam tips: Memorize the secure/modern email ports: SMTP submission 587 (STARTTLS), SMTPS 465, IMAPS 993, POP3S 995. If the question says “secure email” and only one secure-ish option is present, 587 is usually the intended answer because it implies authenticated submission with TLS support.
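The port-to-protocol mapping discussed above can be condensed into a small reference table. A minimal Python sketch (the dict and function names are our own, invented for illustration, not from any library):

```python
# Quick reference for the email ports covered in this question:
# TCP port -> (protocol, security notes). Values follow the explanation above.
EMAIL_PORTS = {
    25:  ("SMTP", "server-to-server relay; opportunistic STARTTLS at best"),
    110: ("POP3", "cleartext retrieval unless STARTTLS is enabled"),
    143: ("IMAP", "cleartext retrieval unless STARTTLS is enabled"),
    465: ("SMTPS", "implicit TLS submission"),
    587: ("SMTP submission", "authenticated submission with STARTTLS (RFC 6409)"),
    993: ("IMAPS", "implicit TLS retrieval"),
    995: ("POP3S", "implicit TLS retrieval"),
}

def is_secure_submission(port: int) -> bool:
    """True for ports associated with secure client mail *submission*."""
    return port in (465, 587)
```

A quick self-quiz with this table (e.g., `is_secure_submission(587)` is true, `is_secure_submission(25)` is false) reinforces the exam tip above.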
A network technician is troubleshooting a web application's poor performance. The office has two internet links that share the traffic load. Which of the following tools should the technician use to determine which link is being used for the web application?
netstat displays local TCP/UDP sessions, listening ports, and connection states (and sometimes per-interface statistics depending on flags). It can confirm that a client has an established connection to the web server and which local port is used, but it does not reveal the upstream hop-by-hop path or which ISP/WAN link carried the traffic. It’s more useful for host-level socket troubleshooting than WAN path selection.
nslookup queries DNS to resolve a hostname to an IP address (and can show which DNS server answered). This helps verify name resolution issues or identify the destination IP to test, but it cannot determine which internet link is being used. DNS resolution is separate from routing decisions; the same resolved IP could be reached via either WAN link depending on load balancing/policy routing.
ping tests basic IP connectivity and round-trip time to a destination using ICMP Echo. It can indicate packet loss or latency that might correlate with one WAN link being degraded, but it does not show the route taken. In a dual-WAN environment, ping alone can’t reliably tell which link was used unless combined with other information (e.g., source interface selection or router logs).
tracert maps the route to a destination by listing intermediate hops, which is ideal for identifying which WAN/ISP path is being used. In a dual-internet-link setup, the early hops after the LAN edge typically differ between ISPs, allowing you to determine which link is carrying the web application traffic. It’s a standard first-line tool for path verification and isolating where latency begins.
Core Concept: This question tests path/route verification in a dual-WAN (two internet links) environment using load sharing. When performance is poor, you often need to confirm which upstream path (ISP link) a specific application flow is taking. Tools that reveal the Layer 3 path (hops) help you infer which WAN link is in use.

Why the Answer is Correct: tracert (Windows) / traceroute (Linux/macOS) shows the sequence of routers (hops) from the client to the destination. In an office with two internet links, each link typically has a different default gateway/edge router and different upstream ISP hop addresses. By running tracert to the web application’s hostname/IP, the first hop(s) beyond the LAN (often the firewall/router and then the ISP’s first router) will indicate which WAN circuit is being used. If the organization uses policy-based routing or per-flow load balancing, repeating tracert (or testing from multiple clients) can also reveal whether traffic is consistently pinned to one link or distributed.

Key Features / Best Practices:
- tracert uses increasing TTL values and ICMP Time Exceeded responses to map the route.
- Compare the first few hops against known ISP next-hop IP ranges to identify the active link.
- If ICMP is filtered, traceroute may show timeouts; you may need firewall rules or alternate traceroute modes (TCP/UDP-based tools), but for Network+ the conceptual tool remains traceroute/tracert.
- Run tests to the application’s actual endpoint (resolved IP) because CDNs/load balancers can change destinations.

Common Misconceptions:
- netstat shows local connections and ports, but not which WAN link carried the traffic beyond the local host.
- nslookup only resolves DNS names to IPs; it doesn’t show the network path.
- ping measures reachability/latency to a target but doesn’t identify the route or which ISP link was used.

Exam Tips: When asked “which path/link is being used,” think route/path discovery: tracert/traceroute. When asked “is it reachable/latency,” think ping. When asked “what IP does this name resolve to,” think nslookup. When asked “what connections/ports are open,” think netstat.
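The "compare the first few hops against known ISP next-hop ranges" step can be sketched in a few lines of Python. This is a toy illustration, not part of any tool: the ISP names and prefixes below are invented (using documentation address ranges), and in practice the prefixes come from your WAN documentation.

```python
import ipaddress

# Hypothetical ISP next-hop ranges; real values come from your WAN records.
ISP_LINKS = {
    "ISP-A": ipaddress.ip_network("198.51.100.0/24"),
    "ISP-B": ipaddress.ip_network("203.0.113.0/24"),
}

def identify_wan_link(hops):
    """Given traceroute hop IPs (strings, in order), return the first ISP
    link whose range contains a hop -- i.e., which circuit carried the flow."""
    for hop in hops:
        addr = ipaddress.ip_address(hop)
        for name, network in ISP_LINKS.items():
            if addr in network:
                return name
    return "unknown"
```

Feeding in a hop list like `["192.168.1.1", "203.0.113.5", "8.8.8.8"]` identifies ISP-B as the active link, because the first hop beyond the LAN edge falls inside that provider's range.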
A user notifies a network administrator about losing access to a remote file server. The network administrator is able to ping the server and verifies the current firewall rules do not block access to the network fileshare. Which of the following tools would help identify which ports are open on the remote file server?
dig is a DNS query tool used to retrieve DNS records (A/AAAA, CNAME, MX, TXT, SRV, etc.) from a DNS server. It helps troubleshoot name resolution issues, such as incorrect records or propagation problems. However, dig does not scan a host for open TCP/UDP ports and cannot determine whether a fileshare port like TCP 445 is open or filtered.
nmap (Network Mapper) is specifically designed to discover open ports and services on remote systems. It can perform multiple scan types (SYN, connect, UDP) and report whether ports are open, closed, or filtered. In a file server access issue, nmap can verify whether SMB-related ports (especially TCP 445) are reachable, helping isolate server-side service/firewall issues from network connectivity problems.
tracert (Windows traceroute) maps the network path to a destination by sending packets with increasing TTL values, revealing intermediate hops and round-trip times. It is useful for diagnosing routing problems, asymmetric paths, or where latency/packet loss begins. It does not test or enumerate which application ports are open on the destination host, so it won’t confirm SMB port availability.
nslookup is a DNS troubleshooting utility used to query DNS servers for name-to-IP resolution and record verification. It can confirm whether the server name resolves correctly and whether the client is using the right DNS server. But it does not probe TCP/UDP ports on the target host, so it cannot identify which ports are open on the remote file server.
Core Concept: This question tests port discovery and service reachability during troubleshooting. When a user loses access to a remote file server (commonly SMB/CIFS on TCP 445, sometimes legacy NetBIOS ports 137-139), and basic connectivity (ping/ICMP) works and perimeter firewall rules appear permissive, the next step is to verify what the server is actually listening on and what is reachable from the client’s network path.

Why the Answer is Correct: nmap is a network scanning tool designed to identify open ports, running services, and sometimes OS fingerprints on a remote host. It can perform TCP SYN scans, TCP connect scans, UDP scans, and service/version detection. In this scenario, nmap can quickly confirm whether TCP 445 is open on the file server, whether it’s filtered, or whether the service is down/misconfigured. This directly answers “which ports are open on the remote file server.”

Key Features / Best Practices: nmap can scan specific ports (e.g., 445) or common port sets, and can run from the affected client subnet to reflect real reachability. Typical commands include scanning a host for common ports or targeted SMB ports. Best practice is to scan only systems you are authorized to test, run the scan from the same network segment as the user when possible, and interpret results carefully (open vs closed vs filtered). If 445 is closed, you’d pivot to checking the server’s local firewall, SMB service status, or host-based security software.

Common Misconceptions: Ping success does not imply application access; ICMP can be allowed while TCP ports are blocked. Tools like tracert help with routing/path issues but do not enumerate open ports. dig and nslookup are DNS tools; they resolve names to IPs and query DNS records, but they do not test port openness.

Exam Tips: For “identify open ports/services on a remote host,” think nmap. For “resolve DNS,” think nslookup/dig. For “path/routing latency,” think tracert/traceroute. Also remember common file-sharing ports (SMB 445) to connect symptoms to likely service ports.
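The core mechanism of an nmap TCP connect scan (the per-port test behind `nmap -sT`) is simply attempting a full TCP handshake. Here is a minimal, self-contained Python sketch of one such probe; to keep the demo runnable without scanning anyone else's host, it checks a loopback listener we create ourselves:

```python
import socket

def check_port(host: str, port: int, timeout: float = 1.0) -> str:
    """Minimal TCP connect probe: 'open' if the handshake completes,
    'closed/filtered' otherwise (refused or timed out)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        result = s.connect_ex((host, port))  # 0 means connection succeeded
    return "open" if result == 0 else "closed/filtered"

# Demo against a listener we control, so the example is self-contained.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))            # OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]

status_while_listening = check_port("127.0.0.1", port)  # "open"
listener.close()
status_after_close = check_port("127.0.0.1", port)      # "closed/filtered"
```

Note that a real scanner like nmap distinguishes "closed" (RST received) from "filtered" (no response); this sketch lumps them together, which is enough to illustrate the concept. Only scan hosts you are authorized to test.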
Which of the following would allow a network administrator to analyze attacks coming from the internet without affecting latency?
IPS (Intrusion Prevention System) typically operates inline, inspecting packets and making allow/deny decisions before forwarding traffic. Because it sits in the traffic path, it can introduce latency and become a throughput bottleneck if undersized or heavily loaded. IPS is best when the goal is to actively block malicious traffic in real time, not when the requirement is analysis without impacting latency.
An IDS (Intrusion Detection System) is designed to inspect network traffic and alert on suspicious or malicious activity without directly blocking packets. It is commonly deployed out-of-band by receiving a copy of traffic from a SPAN port, mirror port, or network TAP rather than sitting inline with production traffic. Because live packets do not have to traverse the IDS, it does not introduce forwarding delay or become a bottleneck for normal communications. This makes it the best choice when the goal is to analyze attacks coming from the internet while avoiding any impact on network latency.
A load balancer distributes client requests across multiple servers to improve availability and performance. While it can provide some visibility (health checks, basic logs) and may integrate with WAF features in some products, its primary purpose is not attack analysis. It is also typically inline, so it can affect traffic flow characteristics and is not the best answer for passive attack analysis without latency impact.
A firewall enforces security policy by allowing or blocking traffic based on rules (IP/port/protocol, state, and possibly application inspection in NGFWs). Firewalls are inline devices, so traffic must traverse them, which can add latency and processing overhead. Although firewalls provide useful logs, they are primarily control/enforcement devices rather than passive analysis tools that avoid affecting latency.
Core concept: This question tests the difference between detection and prevention security controls and how they affect traffic flow and latency. Specifically, it contrasts an Intrusion Detection System (IDS) versus an Intrusion Prevention System (IPS), and how “out-of-band” monitoring can provide visibility without adding delay.

Why the answer is correct: An IDS is designed to analyze traffic and generate alerts/logs about suspicious or malicious activity without actively blocking traffic. In common deployments, an IDS is placed out-of-band using a SPAN/mirror port on a switch, a network TAP, or a packet broker. Because the IDS is not inline with the forwarding path, production packets do not have to traverse the IDS for delivery. That means the administrator can inspect and analyze internet-originated attacks (signatures, anomalies, flows, PCAPs) without introducing additional latency or becoming a bottleneck.

Key features and best practices: IDS capabilities typically include signature-based detection, anomaly/behavior detection, protocol analysis, and integration with SIEM/SOAR for correlation and incident response. Best practices include using TAPs for more reliable capture than SPAN (SPAN can drop packets under load), tuning signatures to reduce false positives, baselining normal traffic, and ensuring time synchronization (NTP) for accurate event timelines. IDS is ideal when the requirement is visibility and analysis rather than immediate blocking.

Common misconceptions: IPS sounds attractive because it “stops attacks,” but IPS is usually inline and must inspect traffic before forwarding, which can add latency and risk throughput constraints. Firewalls also inspect and enforce policy inline, potentially affecting latency, and they are not primarily for deep attack analysis. Load balancers distribute traffic and can improve performance/availability, but they are not security analytics tools.
Exam tips: Remember: IDS = detect/alert (often out-of-band, minimal impact). IPS = prevent/block (typically inline, can impact latency). If a question emphasizes “without affecting latency” or “passive monitoring,” think IDS with SPAN/TAP. If it emphasizes “blocking” or “actively stopping,” think IPS or firewall.
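The detect-versus-prevent distinction can be made concrete with a toy sketch of IDS-style passive inspection. Everything below is invented for illustration (the signatures are simple byte patterns, not real IDS rules): the monitor examines a copy of each packet and only raises alerts; it never drops anything, which is exactly why it adds no forwarding latency.

```python
# Toy signature set: byte pattern -> alert name (purely illustrative).
SIGNATURES = {
    b"' OR 1=1": "SQL injection attempt",
    b"../../": "path traversal attempt",
}

def ids_inspect(packets):
    """IDS-style passive inspection: given *copies* of packet payloads
    (e.g., from a SPAN port or TAP), return alerts. Nothing is blocked --
    the live packets were already forwarded on the production path."""
    alerts = []
    for index, payload in enumerate(packets):
        for pattern, alert_name in SIGNATURES.items():
            if pattern in payload:
                alerts.append((index, alert_name))
    return alerts
```

An inline IPS, by contrast, would have to run this inspection on every packet *before* forwarding it, and drop matches; that extra per-packet work in the forwarding path is where the latency risk comes from.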
Which of the following are environmental factors that should be considered when installing equipment in a building? (Choose two.)
Fire suppression systems are important building safety controls, but they are not usually the best answer when the exam asks for environmental factors affecting equipment installation conditions. The question is more focused on the operating environment and facility support requirements for the equipment itself. In this context, humidity and electrical load are more direct environmental considerations.
UPS location is primarily a facility layout and power-design consideration rather than an environmental factor. It affects cable routing, maintenance access, and battery placement, but it does not describe the ambient conditions or utility capacity of the room. The exam typically separates placement decisions from environmental operating requirements.
Humidity control is a classic environmental consideration for network and server equipment. Excess humidity can lead to condensation and corrosion, while very low humidity increases the risk of electrostatic discharge damaging components. Proper HVAC and humidity regulation help maintain manufacturer-recommended operating conditions and improve long-term reliability.
Power load is a critical site/environmental factor because the building must be able to supply enough electrical capacity for all installed equipment. If the room exceeds circuit or breaker limits, devices may fail, overheat, or create safety hazards. During installation planning, technicians must verify available power, circuit capacity, and load distribution to ensure stable operation.
Floor construction type is a structural consideration related to load-bearing capacity, raised flooring, and cable routing. While it can influence how equipment is installed, it is not generally categorized as an environmental factor in Network+ terminology. Environmental factors are more commonly things like humidity, temperature, and electrical capacity.
Proximity to the nearest MDF is a network design and cabling-distance issue. It affects backbone connectivity, pathway planning, and adherence to cable length limitations, but it does not describe the room’s environmental suitability for equipment. Therefore it is not one of the environmental factors being tested here.
Core concept: This question is testing environmental and site-preparation considerations for installing network equipment in a building. In Network+ objectives, environmental factors typically include conditions and utilities that affect safe operation of equipment, such as temperature, humidity, and available electrical capacity.

Why correct: Humidity control directly affects hardware reliability by preventing condensation, corrosion, and electrostatic discharge, while power load determines whether the room’s electrical infrastructure can safely support the installed equipment.

Key features: Environmental/site planning often includes HVAC, humidity, electrical capacity, grounding, and monitoring for safe continuous operation.

Common misconceptions: Fire suppression is an important building safety feature, but on exams it is often treated as a physical safety/control measure rather than one of the primary environmental operating factors; proximity to the MDF and UPS location are design/layout concerns, and floor construction is structural.

Exam tips: When you see environmental factors for equipment installation, think about what the room must provide for the equipment to run safely every day—especially climate control and sufficient power.
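The "verify available power, circuit capacity, and load distribution" step above can be sketched as a simple capacity check. This is an illustrative calculation only (the 80% continuous-load derating is a common US electrical-code rule of thumb; always confirm against local code and an electrician's assessment):

```python
def circuit_headroom(breaker_amps, volts, device_watts, derating=0.8):
    """Compare planned equipment load against a circuit's usable capacity.
    Continuous loads are commonly planned at 80% of breaker rating."""
    capacity_w = breaker_amps * volts * derating   # usable watts on circuit
    load_w = sum(device_watts)                     # total planned draw
    return {"capacity_w": capacity_w, "load_w": load_w,
            "ok": load_w <= capacity_w}

# Example: three devices (450W, 450W, 600W) on a 20A/120V circuit.
plan = circuit_headroom(20, 120, [450, 450, 600])
```

Here the circuit's usable capacity is 1920W against a 1500W planned load, so the installation fits; putting two 800W devices on a 15A/120V circuit (1440W usable) would not.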
A network administrator is troubleshooting an application issue after a firewall change. The administrator has confirmed that the port and protocol are accessible to the user, but the application is still having issues. Which of the following tools allows the administrator to look at traffic on the application layer of the OSI model?
ifconfig is used to view and configure network interface parameters (IP address, netmask, MTU, interface up/down status) on Unix-like systems. It helps verify Layer 1/2/3 configuration issues such as wrong IP, incorrect subnet mask, or disabled interface. It does not capture packets or inspect application-layer traffic, so it cannot show what the application is sending/receiving.
tcpdump captures packets on a network interface and can display packet details, including payload when requested. This enables analysis of higher-layer protocols (e.g., HTTP, DNS, SMTP) when traffic is not encrypted, making it suitable for investigating application-layer behavior after a firewall change. It can also save to pcap for deeper decoding in tools like Wireshark.
nslookup queries DNS servers to resolve hostnames to IP addresses (and vice versa) and can help diagnose name resolution problems (Layer 7 service, but limited to DNS specifically). While DNS issues can cause application failures, nslookup does not let you inspect general application traffic flows or payloads. It only tests DNS query/response behavior.
traceroute (or tracert) identifies the network path to a destination by probing TTL/hop limits and reporting intermediate routers. It is useful for diagnosing routing issues, asymmetric paths, or where packets are being dropped. However, it does not capture or decode application-layer traffic and cannot show whether the application protocol exchange is succeeding.
Core concept: This question tests packet capture/traffic analysis tools used for troubleshooting beyond basic connectivity. When a firewall change occurs, confirming that a port/protocol is reachable (Layer 3/4) does not guarantee the application is functioning (Layer 7). To validate what is actually being exchanged, you need a tool that can capture and inspect packets and, when possible, decode higher-layer protocols.

Why the answer is correct: tcpdump is a packet capture tool that can capture live traffic on an interface and display packet headers and payload (depending on options used). While tcpdump is not a full GUI protocol analyzer like Wireshark, it can still reveal application-layer details (for example, HTTP methods/URLs, DNS queries/responses, TLS handshakes, SMTP commands) when traffic is unencrypted or when you capture enough of the payload. This allows the administrator to confirm whether the client is sending the expected requests, whether the server is responding, whether resets/timeouts occur, and whether the firewall/NAT change is altering traffic.

Key features/best practices: tcpdump supports capture filters (BPF syntax) to narrow traffic (e.g., host, port, protocol), can write captures to a pcap file for deeper analysis in Wireshark, and can increase verbosity or print ASCII payloads (e.g., -A) to view application data. In real troubleshooting, you might capture on both sides of the firewall to compare flows, validate NAT translations, or confirm that return traffic is present.

Common misconceptions: Tools like traceroute and nslookup are often used in “application issues” scenarios, but they only test specific layers/functions (path discovery or DNS resolution). ifconfig is for interface configuration/status and does not inspect traffic. Another misconception is assuming “port open” equals “app works”; application failures can be due to DNS, TLS/cert issues, incorrect SNI/Host headers, blocked secondary ports, or application-layer filtering.
Exam tips: For Network+ questions asking to “look at traffic” or “capture packets,” choose tcpdump (CLI) or Wireshark (GUI, if offered). If the question emphasizes DNS name resolution, pick nslookup/dig. If it emphasizes route/path, pick traceroute. If it emphasizes interface IP settings, pick ifconfig/ip.
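In practice you might run something like `tcpdump -A 'tcp port 80'` and read the HTTP request line straight off the ASCII payload. The kind of application-layer decoding an analyzer performs can be sketched in a few lines of Python (a simplification that assumes plaintext HTTP; the function name is our own):

```python
def http_request_line(payload: bytes):
    """Extract method, path, and version from a captured HTTP payload --
    roughly what you'd eyeball in `tcpdump -A` output. Returns None if the
    payload doesn't look like a plaintext HTTP request (e.g., TLS bytes)."""
    try:
        first_line = payload.split(b"\r\n", 1)[0].decode("ascii")
    except UnicodeDecodeError:
        return None
    parts = first_line.split(" ")
    if len(parts) == 3 and parts[2].startswith("HTTP/"):
        return {"method": parts[0], "path": parts[1], "version": parts[2]}
    return None
```

For a payload beginning `GET /app/login HTTP/1.1`, this recovers the method and URL the client actually sent, which is the Layer 7 evidence the question is after; for encrypted (TLS) payloads it returns nothing, which is itself a useful clue.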
Which of the following best describes what an organization would use port address translation for?
VLANs on the perimeter are used for segmentation (e.g., separating DMZ, guest, and internal networks) and controlling broadcast domains and policy boundaries. VLANs can exist at the edge, but they do not provide address translation. PAT is about translating IP addresses and ports, not creating logical Layer 2 segments, so VLANs are not what an organization uses PAT for.
PAT lets many internal private hosts access external networks by sharing a single public IPv4 address on the perimeter router/firewall. The device translates internal source IP:port pairs to the router’s public IP with unique ports, tracking sessions in a NAT table. This is the classic “NAT overload” use case and directly relies on having a public address on the perimeter router’s outside interface.
A non-routable (private/RFC1918) address on the perimeter router’s Internet-facing interface would not work for direct Internet connectivity because upstream providers will not route private addresses. PAT is used to translate internal private addresses to a routable public address. While the router may have private IPs on its inside interface, PAT’s key requirement is a public IP on the outside.
Servers on the perimeter refers to a DMZ/perimeter network design where public-facing services (web, mail, DNS) are placed in a screened subnet. PAT can be used alongside a DMZ, but the presence of servers on the perimeter is not what PAT is for. Publishing servers typically uses static NAT or port forwarding rather than generic outbound PAT.
Core concept: Port Address Translation (PAT), often called NAT overload, is a form of Network Address Translation that allows many internal private (RFC1918) hosts to share a single public IPv4 address by differentiating sessions using Layer 4 port numbers (TCP/UDP) and a translation table.

Why the answer is correct: PAT is most commonly used on an organization’s perimeter router/firewall so internal devices with non-routable private IPs can access the Internet. To do this, the edge device translates each internal source IP:port to the same external public IP but with unique source ports. Therefore, the organization uses PAT specifically to enable outbound connectivity while conserving public IPv4 space, which maps directly to having a public address on the perimeter router.

Key features / how it works:
1) Many-to-one translation: multiple inside local addresses map to one inside global address.
2) Port multiplexing: the NAT device rewrites source ports and tracks flows in a state/translation table.
3) Typically applied for outbound connections; inbound access generally requires static NAT or port forwarding (DNAT) for specific services.
4) Implemented on perimeter devices (router/firewall) with an “inside” and “outside” interface and a public IP on the outside.

Common misconceptions:
- People sometimes associate PAT with “non-routable address on the perimeter router,” but the perimeter router’s Internet-facing interface must use a routable public IP. The non-routable addresses are usually on the internal LAN.
- PAT is not the same as a DMZ design (servers on the perimeter) or VLAN segmentation. Those are security/architecture choices; PAT is an address translation technique.

Exam tips:
- If you see “many internal hosts share one public IP” or “NAT overload,” think PAT.
- PAT requires at least one public IPv4 address on the outside/perimeter interface.
- For publishing internal servers to the Internet, look for “static NAT” or “port forwarding,” not generic PAT for outbound user traffic.
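The translation table at the heart of PAT can be modeled in a few lines. This is a toy sketch (class and addresses are invented; the public IP is a documentation address): many inside IP:port pairs map to the same public IP, each getting a unique translated source port.

```python
import itertools

class PatTable:
    """Toy PAT (NAT overload) table: many inside hosts share one public
    IPv4 address, with sessions distinguished by translated source port."""
    def __init__(self, public_ip, first_port=49152):
        self.public_ip = public_ip
        self._next_port = itertools.count(first_port)  # ephemeral port pool
        self.table = {}  # (inside_ip, inside_port) -> (public_ip, new_port)

    def translate(self, inside_ip, inside_port):
        key = (inside_ip, inside_port)
        if key not in self.table:                 # new flow: allocate a port
            self.table[key] = (self.public_ip, next(self._next_port))
        return self.table[key]                    # existing flow: reuse entry
```

Two different inside hosts using the same local source port still get distinct outside ports, which is exactly how the edge device keeps return traffic separable despite sharing one public address.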
Which of the following is used to stage copies of a website closer to geographically dispersed users?
VPN (Virtual Private Network) creates an encrypted tunnel over an untrusted network (like the internet) to provide secure remote access or site-to-site connectivity. It protects confidentiality and integrity of traffic and can extend a private network to remote users. However, it does not cache or replicate website content at edge locations, so it is not used to stage copies of a website closer to users.
CDN (Content Delivery Network) distributes and caches website content across geographically dispersed edge servers (PoPs). Users are directed to a nearby edge node, which serves cached copies of content to reduce latency, improve load times, and decrease bandwidth and processing demands on the origin server. This is exactly the service used to stage copies of a website closer to geographically dispersed users.
SAN (Storage Area Network) is a high-speed network that provides block-level storage to servers, commonly used in data centers for shared storage, virtualization clusters, and high availability. SAN technologies include Fibre Channel and iSCSI. While a SAN can store website files centrally for servers, it does not distribute copies globally to be closer to end users, so it doesn’t meet the question’s goal.
SDN (Software-Defined Networking) separates the control plane from the data plane and uses centralized controllers to program and automate network behavior. SDN improves agility, policy enforcement, and network management, especially in large or virtualized environments. However, SDN does not inherently provide edge caching or global replication of website content, so it is not the correct choice for staging website copies near users.
Core Concept: This question tests understanding of content distribution and performance optimization on the internet. The key idea is reducing latency and improving load times for users who are geographically far from an origin web server.

Why the Answer is Correct: A CDN (Content Delivery Network) is specifically designed to stage (cache/replicate) copies of website content closer to end users by using geographically distributed edge locations (Points of Presence/PoPs). When a user requests a webpage, the CDN serves cached static content (and sometimes accelerates dynamic content) from the nearest edge node, reducing round-trip time, congestion, and load on the origin server. This directly matches “stage copies of a website closer to geographically dispersed users.”

Key Features / How It Works:
- Edge caching: Stores static assets (images, CSS, JavaScript, downloads) at PoPs.
- Anycast DNS / intelligent routing: Directs users to the closest/healthiest edge.
- Origin pull vs. origin push: CDN can fetch content on demand (pull) or be pre-populated (push).
- TTLs and cache-control: Uses HTTP headers and policies to manage freshness.
- Performance and resilience: Offloads traffic from origin, mitigates spikes (“flash crowds”), and improves availability.
- Security add-ons: Many CDNs integrate DDoS protection, WAF, TLS offload, and bot mitigation (not the primary concept here, but commonly tested).

Common Misconceptions:
- VPN might seem like it “connects remote users,” but it provides secure tunneling, not content staging/caching.
- SAN relates to storage networking in data centers, not distributing web content globally.
- SDN is an architecture for centralized network control and automation; it doesn’t inherently replicate website content to edge locations.

Exam Tips: Look for keywords like “cache,” “edge,” “PoP,” “reduce latency,” “global users,” and “offload origin.” Those point strongly to CDN. If the question is about secure remote access, think VPN; if it’s about shared block storage, think SAN; if it’s about controller-based network programmability, think SDN.
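The origin-pull pattern described above can be sketched with a toy edge node (class and data are invented for illustration): the PoP serves from its local cache and only contacts the origin on a miss, which is what offloads the origin and cuts latency for nearby users.

```python
class EdgePop:
    """Toy CDN edge node using origin pull: serve from local cache,
    fetching from the origin only on a cache miss."""
    def __init__(self, origin):
        self.origin = origin          # stand-in for the origin server: path -> content
        self.cache = {}               # local edge cache
        self.origin_fetches = 0       # how often we had to go back to origin

    def get(self, path):
        if path not in self.cache:    # miss: pull from origin, then cache
            self.origin_fetches += 1
            self.cache[path] = self.origin[path]
        return self.cache[path]       # hit (or freshly cached): serve locally
```

After the first request for an asset, every subsequent request is served entirely from the edge; a real CDN adds TTL/cache-control expiry, which this sketch omits for brevity.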
A network administrator needs to connect two network closets that are 492ft (150m) away from each other. Which of the following cable types should the administrator install between the closets?
Single-mode fiber is designed for long-distance, high-bandwidth links and easily supports 150m (and typically kilometers). It is also immune to EMI and is commonly used for backbone connections between network closets. Even though multimode is often used inside buildings, single-mode still fully satisfies the distance requirement and is the only appropriate fiber option listed.
Coaxial cable is largely legacy for Ethernet (10BASE2/10BASE5) and is not the standard medium for modern inter-closet uplinks. While coax can carry signals over decent distances, it does not align with typical enterprise backbone design for Ethernet switching today, and it lacks the flexibility and performance/scalability expected compared to fiber.
DAC (Direct Attach Copper) is a short-range twinax cable used with SFP/SFP+ ports, commonly for switch-to-switch or server-to-switch connections within the same rack or nearby racks. Typical DAC lengths are around 1–7m (sometimes up to ~10–15m depending on type). It is not suitable for a 150m run between network closets.
STP (Shielded Twisted Pair) helps reduce electromagnetic interference and crosstalk, but it does not extend the Ethernet maximum channel distance beyond 100m for structured cabling. At 150m, the link would exceed standards and risk errors/instability. For closet-to-closet runs beyond 100m, fiber is the correct medium.
Core concept: This question tests media selection based on distance limits and use case (inter-closet/backbone cabling). In Network+ terms, connecting two network closets is typically a building backbone/distribution link, where fiber is commonly used for longer distances, higher bandwidth, and immunity to electromagnetic interference (EMI).

Why the answer is correct: The closets are 492ft (150m) apart. Standard twisted-pair Ethernet over copper (including STP) is specified for a maximum channel length of 100m (328ft) for 1000BASE-T/2.5G/5G/10GBASE-T structured cabling. At 150m, copper Ethernet exceeds the standard limit and becomes unreliable (attenuation, crosstalk, and reduced signal-to-noise ratio). Fiber is the correct choice for a 150m inter-closet run. Among the options, single-mode fiber is a valid fiber type that easily supports 150m and far beyond, making it the best answer.

Key features / best practices: Fiber is ideal for backbone links because it supports long distances, high throughput, and is immune to EMI and ground potential differences between closets. In real deployments, you would also consider using appropriate transceivers (e.g., SFP/SFP+), proper termination/patch panels, bend radius, and testing/certification. While multimode fiber is often used for short-to-medium building runs, it is not offered here; single-mode remains correct because it meets and exceeds the distance requirement.

Common misconceptions:
- STP may seem like it can "go farther" than UTP, but shielding primarily helps with EMI; it does not change the 100m Ethernet channel limit.
- Coaxial is associated with longer runs in legacy networks, but it is not used for modern Ethernet closet-to-closet backbone links.
- DAC (direct attach copper) is common in data centers for switch-to-switch within a rack or adjacent racks, but it is typically limited to a few meters, not 150m.
Exam tips: Memorize key distance rules: copper Ethernet over twisted pair is 100m max; fiber is used when you exceed that or need EMI immunity/backbone capacity. If the question says “between closets/building backbone” and the distance is beyond 100m, choose fiber. If multimode isn’t an option, single-mode is still a correct fiber choice for the distance.
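The distance reasoning above can be sketched as a small helper. The `MAX_CHANNEL_M` table below is a hypothetical simplification: 100m for twisted pair and a few meters for DAC reflect the limits cited in the explanation, while the fiber figures vary widely with the optics used and are shown only as representative orders of magnitude.

```python
# Hypothetical media-selection helper; limits are simplified for illustration.
MAX_CHANNEL_M = {
    "stp": 100,                 # twisted-pair Ethernet channel limit
    "dac": 7,                   # typical passive twinax DAC reach
    "multimode_fiber": 400,     # representative; depends on fiber grade/optics
    "single_mode_fiber": 5000,  # representative; typically kilometers
}

def feet_to_meters(ft):
    """Convert feet to meters (1 ft = 0.3048 m exactly)."""
    return ft * 0.3048

def suitable_media(run_feet):
    """Return the media whose maximum channel length covers the run."""
    run_m = feet_to_meters(run_feet)
    return [media for media, limit in MAX_CHANNEL_M.items() if limit >= run_m]

# The inter-closet run from the question: 492 ft is just under 150 m,
# which rules out every copper option and leaves only fiber.
print(round(feet_to_meters(492), 2))  # ~149.96 m
print(suitable_media(492))            # fiber types only
```

Running this shows 492ft converting to roughly 150m, which exceeds the copper limits and leaves only the fiber entries, mirroring why single-mode fiber is the answer.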
After running a Cat 8 cable using passthrough plugs, an electrician notices that connected cables are experiencing a lot of crosstalk. Which of the following troubleshooting steps should the electrician take first?
Correct. Crosstalk on newly terminated copper is most commonly caused by termination problems: too much untwist at the connector, exposed conductors, poor seating, split pairs, or stray wire strands touching. A visual inspection is the fastest, least invasive first step and often immediately reveals the root cause, especially with passthrough plugs where conductors can protrude or be cut improperly.
Incorrect. Restoring default settings on devices addresses configuration issues (VLANs, speed/duplex negotiation edge cases, port security, etc.), not physical-layer crosstalk. Crosstalk is induced coupling between wire pairs and is primarily determined by cable construction, termination quality, and installation practices, not by switch/router configuration.
Incorrect as a first step. Re-terminating may ultimately be required, but best troubleshooting practice is to inspect and verify what is wrong before redoing work. Without inspection, the electrician might repeat the same termination mistake (excess untwist, wrong pair order, poor crimp). Inspect first, then re-terminate if defects are found or if testing confirms failure.
Incorrect as a first step. Radio frequency interference/EMI can contribute to noise, but the scenario points to a new Cat 8 run with passthrough plugs, making termination-induced NEXT/alien crosstalk far more likely. RFI investigation is more appropriate after confirming terminations are correct and if issues correlate with environmental factors (motors, fluorescent ballasts, RF transmitters).
Core concept: This question tests physical-layer troubleshooting for copper Ethernet cabling, specifically crosstalk introduced during termination. With high-frequency cabling (Cat 8), termination quality and maintaining pair twist right up to the contact are critical. Passthrough RJ-45-style plugs can increase the risk of untwisting pairs too far, leaving excess conductor exposed, or creating poor pair geometry at the connector—common causes of near-end crosstalk (NEXT) and alien crosstalk.

Why the answer is correct: The first troubleshooting step should be the simplest, most direct physical inspection of the termination: inspect the connectors for exposed conductors, wires touching, improper seating, excessive untwist, or shielding/drain wire issues. Crosstalk is most often caused by termination mistakes (pair split, too much untwist, incorrect pinout, poor crimp, or exposed copper) rather than device configuration. A visual inspection can immediately reveal obvious faults and is a low-cost, low-impact first step before rework.

Key features / best practices: For Cat 8 (and other high categories), best practice is to preserve the twist to within a very small distance of the termination (commonly ~13 mm / 0.5 inch guidance for many categories; always follow the connector/cable manufacturer spec). Ensure correct T568A or T568B pinout consistently on both ends, avoid "split pairs" (correct colors but wrong pairings), and ensure no conductor whiskers or exposed copper extend beyond the plug. If shielded cable is used (common with Cat 8), verify proper shield termination and grounding practices to reduce noise coupling.

Common misconceptions: It can be tempting to jump straight to re-terminating (option C), but inspection should come first because it may identify a specific correctable issue (e.g., one conductor not fully seated) and prevents repeating the same mistake.
RFI checks (option D) are more relevant when symptoms persist across multiple known-good terminations or when the cable route is near strong EMI sources. Device defaults (option B) do not address crosstalk, which is a physical-layer phenomenon. Exam tips: When you see “crosstalk,” think: cable quality, pair twist, termination, split pairs, shielding/grounding, and proximity to other cables. In troubleshooting, start with the most likely and least invasive step: inspect and verify the physical termination before changing configurations or performing major rework.
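The "least invasive first" ordering described above can be expressed as a tiny sketch. The invasiveness scores below are made-up rankings chosen for illustration, not values from any standard; the point is simply that inspection sorts ahead of rework.

```python
# Illustrative only: rank troubleshooting steps so the least invasive runs first.
# Scores are hypothetical; they encode the methodology from the explanation.
steps = [
    {"action": "re-terminate both ends of the cable", "invasiveness": 4},
    {"action": "investigate nearby RFI/EMI sources", "invasiveness": 3},
    {"action": "test the link with a cable certifier", "invasiveness": 2},
    {"action": "inspect terminations for exposed or untwisted conductors", "invasiveness": 1},
]

# Sort so the cheapest, least disruptive step comes first.
ordered = sorted(steps, key=lambda step: step["invasiveness"])
first_step = ordered[0]["action"]
print(first_step)  # the visual inspection, matching the correct exam answer
```

Sorting by cost/invasiveness is the same habit the exam rewards: verify before you rework, and escalate to re-termination or RFI hunting only if inspection and testing confirm a fault.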