Measuring network performance enables proactive identification of issues before users are impacted. CompTIA Network+ N10-009 tests the key performance metrics that network administrators must track, how to interpret them, and which tools measure each metric. Performance metrics appear throughout the Operations and Troubleshooting domains.
Bandwidth utilization: the percentage of link capacity currently in use. High utilization (>70–80%) is a leading indicator of congestion and performance degradation. Measured via SNMP interface counters or NetFlow. Sustained high utilization indicates the need for a link upgrade or traffic optimization.
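SNMP exposes cumulative octet counters (e.g. ifHCInOctets), so utilization is computed from the delta between two polls. A minimal sketch, using hypothetical counter values on an assumed 1 Gbps link polled every 300 seconds:

```python
def utilization_pct(octets_t0, octets_t1, interval_s, link_bps):
    """Percent utilization from two cumulative SNMP octet-counter samples."""
    bits = (octets_t1 - octets_t0) * 8          # octets transferred -> bits
    return 100.0 * bits / (interval_s * link_bps)

# Hypothetical ifHCInOctets samples taken 300 s apart on a 1 Gbps link
pct = utilization_pct(1_000_000_000, 4_000_000_000, 300, 1_000_000_000)
print(f"{pct:.1f}% utilized")  # 8.0% utilized
```

In practice a monitoring system polls on a fixed interval and alerts when the computed percentage stays above the 70–80% threshold for several consecutive samples.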
Latency and RTT: round-trip time measured by ICMP ping. Baseline latency for a local LAN: <1ms. LAN to internet: typically 10–100ms. Satellite: 500–600ms. Increasing latency indicates congestion or routing changes. Jitter (latency variation) is measured separately and impacts VoIP/video.
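Jitter can be estimated as the mean absolute difference between consecutive RTT samples. A small sketch with made-up ping results in milliseconds:

```python
def jitter_ms(rtts):
    """Mean absolute difference between consecutive RTT samples (ms)."""
    deltas = [abs(b - a) for a, b in zip(rtts, rtts[1:])]
    return sum(deltas) / len(deltas)

# Hypothetical RTTs from five pings, in milliseconds
samples = [20.1, 22.4, 19.8, 25.0, 21.2]
avg = sum(samples) / len(samples)
print(f"avg RTT {avg:.1f} ms, jitter {jitter_ms(samples):.1f} ms")
```

A link can have an acceptable average RTT yet enough jitter to disrupt VoIP, which is why the two are tracked as separate metrics.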
Packet loss: the percentage of packets sent that do not reach the destination. Any packet loss on a LAN indicates a problem (physical fault, duplex mismatch, congestion). For internet connections, < 1% is acceptable. > 1% impacts TCP performance (triggers retransmissions); > 3% severely impacts VoIP. Measured with extended ping or specialized tools.
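The thresholds above can be turned into a simple check on extended-ping results. A sketch with hypothetical sent/received counts:

```python
def loss_pct(sent, received):
    """Packet loss as a percentage of packets sent."""
    return 100.0 * (sent - received) / sent

loss = loss_pct(1000, 985)  # hypothetical extended-ping result
if loss == 0:
    verdict = "clean"
elif loss < 1:
    verdict = "acceptable for internet paths"
elif loss < 3:
    verdict = "impacts TCP performance (retransmissions)"
else:
    verdict = "severely impacts VoIP"
print(f"{loss:.1f}% loss: {verdict}")
```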
Error counters: switch and router interfaces report error statistics — CRC errors (signal corruption / bad cable), runts (frames shorter than 64 bytes — collision artifact in half-duplex), giants (frames larger than 1518 bytes — misconfigured MTU), input/output errors. Increasing error counters indicate physical or configuration problems.
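The runt/giant boundaries map directly to the Ethernet frame-size limits named above. A minimal classifier, assuming a standard (non-jumbo) link:

```python
def classify_frame(length):
    """Classify an Ethernet frame by length on a standard (non-jumbo) link."""
    if length < 64:
        return "runt"    # below the 64-byte Ethernet minimum; collision artifact
    if length > 1518:
        return "giant"   # above the 1518-byte standard maximum; MTU misconfiguration
    return "normal"

print(classify_frame(60), classify_frame(1000), classify_frame(1600))
```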
CPU utilization: high CPU on a router or switch can indicate a routing protocol convergence event, a DDoS attack, excessive ACL processing, or failing hardware. Sustained CPU > 80% requires investigation. Memory utilization: insufficient memory causes device instability, route table truncation, or crashes. Monitor free memory trends.
Interface statistics: packets per second, bits per second, error rates, drops. Drops indicate congestion (output queue drops) or policy drops (ACL drops, QoS policing). Interface error counters that increment while the link is up indicate physical problems. Temperature sensors on managed devices alert to thermal issues that cause hardware degradation.
If ping works, the network is performing well
Ping only verifies basic Layer 3 connectivity. Performance issues — congestion, jitter, packet loss — may not be visible in a basic ping test but significantly impact application performance.
These questions are representative of what you will see on Network+ exams. The correct answer and explanation are shown immediately below each question.
A network administrator sees increasing CRC errors on a switch interface. What is the most likely cause?
Explanation: CRC (Cyclic Redundancy Check) errors indicate frames arriving with corrupted data. The most common cause is a physical layer problem: damaged cable, bent or crimped cable, bad SFP transceiver, or a faulty NIC. CRC errors are a Layer 1/2 symptom. High CPU does not cause CRC errors; VLAN and DHCP are unrelated.
MTU (Maximum Transmission Unit) is the largest packet size that can be transmitted on a link without fragmentation. Standard Ethernet MTU is 1500 bytes. Jumbo frames use a 9000-byte MTU for storage and high-performance networks. MTU mismatches cause fragmentation or black-hole routing — packets too large for a link are fragmented (IPv4, unless the DF bit is set) or dropped (IPv6, where routers never fragment). Path MTU Discovery (PMTUD) detects the maximum MTU across an end-to-end path.
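A common way to test an MTU by hand is pinging with the don't-fragment bit set and the largest payload that fits: the IPv4 and ICMP headers must be subtracted from the MTU. A small sketch of that arithmetic:

```python
# Largest ICMP echo payload that fits a given MTU without fragmentation:
# payload = MTU - 20 (IPv4 header) - 8 (ICMP header)
IPV4_HEADER = 20
ICMP_HEADER = 8

def max_ping_payload(mtu):
    return mtu - IPV4_HEADER - ICMP_HEADER

print(max_ping_payload(1500))  # 1472 -> e.g. `ping -M do -s 1472 <host>` on Linux
print(max_ping_payload(9000))  # 8972 on a jumbo-frame link
```

If a ping with that payload and DF set fails while a smaller one succeeds, a link along the path has a smaller MTU than expected.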