A network baseline documents normal performance levels across key metrics, providing a reference point for identifying deviations that indicate problems. CompTIA Network+ N10-009 tests baseline concepts as part of Network Operations. Without a baseline, it's impossible to determine whether current performance is normal or degraded — 'slow' is relative without knowing what fast looks like for that specific environment.
A network baseline captures metrics during normal, healthy operations: bandwidth utilization per link, CPU and memory usage per device, interface error rates, average latency to key destinations, typical packet loss, and common traffic patterns. Baseline collection should span multiple time periods — different times of day, days of week, and business seasons — to capture natural variation.
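Capturing per-period statistics is mostly bookkeeping: group samples by time bucket and compute summary values. A minimal sketch (the sample data, device, and metric are hypothetical) that builds an hour-of-day baseline from utilization readings:

```python
from statistics import mean, pstdev
from collections import defaultdict

# Hypothetical samples: (hour_of_day, utilization_percent) pairs
# collected over several weeks, e.g. from an SNMP poller.
samples = [
    (9, 32.0), (9, 28.5), (9, 30.1),
    (14, 45.2), (14, 48.0), (14, 44.7),
    (2, 8.1), (2, 7.5), (2, 9.0),
]

def build_baseline(samples):
    """Group samples by hour and compute mean/stddev per hour."""
    by_hour = defaultdict(list)
    for hour, value in samples:
        by_hour[hour].append(value)
    return {
        hour: {"mean": mean(vals), "stddev": pstdev(vals)}
        for hour, vals in by_hour.items()
    }

baseline = build_baseline(samples)
print(baseline[9]["mean"])  # average 9am utilization
```

The same grouping idea extends to day-of-week or seasonal buckets; the key is keeping enough history per bucket to capture natural variation.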
Capture baselines at three points: after initial deployment, to establish 'normal' before any changes; after significant changes such as upgrades, new applications, or topology modifications; and periodically (quarterly or annually) to account for organic growth. Baselines become stale over time as the network evolves, so regular updates are important.
Deviation from baseline triggers investigation. If interface utilization is normally 30% but spikes to 95% at 2am, that warrants investigation — it could be a backup job, malware, or a rogue device. Baselines provide context: a 10ms ping to the server is fine if baseline is 5ms (slight increase), but concerning if baseline is 1ms (10x increase).
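The latency comparison above is really about ratios, not absolute numbers. A tiny sketch (thresholds here are illustrative assumptions, not recommendations) that expresses a current reading as a multiple of its baseline:

```python
def deviation_ratio(current, baseline):
    """Express a current metric as a multiple of its baseline value."""
    if baseline <= 0:
        raise ValueError("baseline must be positive")
    return current / baseline

# 10 ms ping against a 5 ms baseline: 2x, a modest increase
print(deviation_ratio(10, 5))   # 2.0
# The same 10 ms against a 1 ms baseline: 10x, worth investigating
print(deviation_ratio(10, 1))   # 10.0
```

Framing deviations as multiples of baseline is what lets one alert rule cover links and metrics with very different absolute values.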
Trending: analyzing baseline data over time reveals capacity issues before they cause problems. If interface utilization grows 5% each month, capacity planning can predict when to upgrade. Anomaly detection: monitoring systems compare real-time metrics against the baseline and alert when significant deviations occur. Modern tools use machine learning to automatically identify anomalous patterns.
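The trending arithmetic is simple enough to sketch. Assuming linear growth (a simplification; real traffic growth is often bursty or compounding) and a hypothetical 80% upgrade threshold:

```python
import math

def months_until_capacity(current_pct, growth_pct_per_month, threshold_pct=80.0):
    """Linear growth model: months until utilization crosses the threshold."""
    if current_pct >= threshold_pct:
        return 0
    if growth_pct_per_month <= 0:
        return None  # flat or shrinking: threshold never reached
    return math.ceil((threshold_pct - current_pct) / growth_pct_per_month)

# A link at 45% growing 5 points per month hits 80% in 7 months
print(months_until_capacity(45.0, 5.0))  # 7
```

Even this crude model turns a trend line into an actionable date, which is the point of capacity planning: upgrade before the deviation alerts start firing.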
Common misconception: a baseline only needs to be captured once.
In reality, baselines must be updated after major changes (new applications, topology changes, growth) because 'normal' evolves. An outdated baseline may cause false alarms or miss real problems by comparing against obsolete normal values.
These questions are representative of what you will see on Network+ exams. The correct answer and explanation are shown immediately below each question.
After deploying a new ERP application, the network team notices CPU utilization on the core router is consistently 70% compared to the previous 25% baseline. What should the team do first?
Explanation: After a major change (ERP deployment), the team should first understand why CPU increased (the new application's traffic patterns). After confirming the increase is expected and acceptable, they should update the baseline to reflect the new normal. If the increase is unexpected, investigation is warranted. Immediately replacing hardware or disabling the application before investigation is premature.
SNMP-based monitoring (PRTG, Zabbix, Nagios) collects device metrics. NetFlow analyzers (ntopng, SolarWinds NTA) capture traffic baselines. Packet capture tools (Wireshark) provide protocol-level baselines. Synthetic monitoring (simulated transactions) establishes application performance baselines. The data should be stored in a time-series database (InfluxDB, Graphite) for trending and historical comparison.
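As a concrete example of feeding a time-series database, InfluxDB ingests points in its plain-text line protocol (`measurement,tags fields timestamp`). A minimal formatter (the device and interface names are hypothetical):

```python
import time

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Format one data point in InfluxDB line protocol:
    measurement,tag1=v1,... field1=v1,... timestamp_ns"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement},{tag_str} {field_str} {ts}"

line = to_line_protocol(
    "if_utilization",
    {"device": "core-rtr-1", "ifname": "Gi0/1"},
    {"percent": 31.4},
    ts_ns=1700000000000000000,
)
print(line)
```

In practice the official client libraries handle escaping and batching; the sketch just shows why line protocol makes tagged, timestamped metrics easy to query per device and per interface later.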