Load balancing distributes incoming network traffic across multiple servers to prevent any single server from becoming a bottleneck. CompTIA Network+ N10-009 tests load balancing concepts in implementation and high-availability contexts. You must understand load balancing algorithms, health checks, and session persistence, and how load balancers fit into network design for scalability and availability.
A load balancer sits in front of a server pool and distributes client requests across the pool members. From the client's perspective, it is communicating with a single virtual IP (VIP); the load balancer transparently forwards requests to back-end servers. When a server fails, the load balancer detects the failure (via health checks) and removes it from rotation, and the remaining servers absorb its traffic.
Layer 4 load balancing: distributes traffic based on TCP/UDP headers (IP address and port) without inspecting application content. Fast and efficient. Cannot make content-based decisions. Layer 7 load balancing: inspects HTTP/HTTPS content — can route requests based on URL path, cookies, headers, or host name. Enables content switching: /images/* to image servers, /api/* to API servers.
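The content-switching idea can be sketched in a few lines of Python. The path prefixes, pool names, and ports below are illustrative only, not tied to any particular load balancer product:

```python
# Sketch of Layer 7 content switching: pick a server pool by URL path prefix.
# Pools and paths are hypothetical examples.

ROUTES = [
    ("/images/", ["img1:8080", "img2:8080"]),   # static image servers
    ("/api/",    ["api1:9000", "api2:9000"]),   # API servers
]
DEFAULT_POOL = ["web1:80", "web2:80"]           # everything else

def choose_pool(path: str) -> list:
    """Return the server pool whose path prefix matches the request."""
    for prefix, pool in ROUTES:
        if path.startswith(prefix):
            return pool
    return DEFAULT_POOL
```

A Layer 4 balancer could not make this decision at all: it never sees the URL, only the TCP/UDP headers.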
Round-robin: requests distributed sequentially across servers. Simple, equal distribution assuming servers have equal capacity and requests take equal time. Weighted round-robin: same as round-robin but servers with higher weight receive proportionally more requests — accommodates servers with different capacities.
Least connections: new requests sent to the server with the fewest active connections. Better for sessions with variable duration. Weighted least connections accounts for server capacity differences. IP hash: the client's source IP determines which server receives the request — the same client always goes to the same server (provides simple persistence without tracking state). Random: requests assigned randomly — simple but potentially uneven.
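The four main algorithms above can be sketched compactly in Python. The server addresses and weights are made-up examples; a real balancer would also track connection counts as sessions open and close:

```python
import itertools
import hashlib

SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Round-robin: hand out servers sequentially, wrapping around.
_rr = itertools.cycle(SERVERS)
def round_robin() -> str:
    return next(_rr)

# Weighted round-robin: a server with weight 3 appears 3 times per cycle.
WEIGHTS = {"10.0.0.1": 3, "10.0.0.2": 1, "10.0.0.3": 1}
_wrr = itertools.cycle([s for s, w in WEIGHTS.items() for _ in range(w)])
def weighted_round_robin() -> str:
    return next(_wrr)

# Least connections: pick the server with the fewest active connections.
active = {s: 0 for s in SERVERS}
def least_connections() -> str:
    return min(active, key=active.get)

# IP hash: the client's source IP deterministically maps to one server,
# so the same client always lands on the same server.
def ip_hash(client_ip: str) -> str:
    digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return SERVERS[digest % len(SERVERS)]
```

Note how IP hash gives persistence for free: no state is stored, yet repeated calls with the same source IP always return the same server.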
Session persistence (sticky sessions): ensures a client's requests always reach the same back-end server during a session. Important for stateful applications that store session data locally on the server. Methods: source IP affinity, cookie-based persistence (load balancer inserts a cookie identifying the server). Without persistence, a user could be redirected to a different server mid-session and lose their state.
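Cookie-based persistence can be sketched as follows. The cookie name `SRV` and the server names are hypothetical; the key behavior is that a new client is assigned a server and pinned to it via a cookie the balancer inserts:

```python
# Sketch of cookie-based session persistence (sticky sessions).
# Cookie name and server names are illustrative assumptions.
COOKIE = "SRV"
SERVERS = ["app1", "app2"]

def pick_server(cookies: dict, rr_index: int) -> tuple:
    """Return (server, next_rr_index) for one request."""
    if cookies.get(COOKIE) in SERVERS:
        # Returning client: honor the cookie, keep the session on one server.
        return cookies[COOKIE], rr_index
    # New client: assign by round-robin, then pin with a cookie.
    server = SERVERS[rr_index % len(SERVERS)]
    cookies[COOKIE] = server   # balancer sets this cookie in the response
    return server, rr_index + 1
```

Every later request from that client carries the cookie, so it keeps reaching the same back-end server even though other traffic is still balanced.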
Health checks: the load balancer periodically tests each server's availability. Types: ICMP ping (basic — is the server alive?), TCP connection check (is the port open?), HTTP/HTTPS GET request (is the application responding correctly?). Servers failing health checks are removed from rotation. Servers recovering are added back. Active-passive load balancing: one server is primary, standby activates only on failure — provides failover, not load distribution.
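The TCP and HTTP health-check types can be sketched with the Python standard library. Hosts, ports, and timeouts below are illustrative:

```python
import socket
import urllib.request

def tcp_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """TCP health check: is the port accepting connections?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_check(url: str, timeout: float = 2.0) -> bool:
    """HTTP health check: does the application answer GET with a 2xx status?"""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:
        return False

# Example: keep only pool members whose port is open; failed or
# unreachable servers simply drop out of rotation.
pool = ["127.0.0.1"]
healthy = [s for s in pool if tcp_check(s, 80, timeout=0.5)]
```

Note the difference in depth: a TCP check only proves the port is open, while an HTTP check proves the application itself is responding, which is why application-level checks catch failures that a ping or TCP check misses.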
| Algorithm | Method | Best For |
|---|---|---|
| Round-robin | Sequential | Equal servers, equal request duration |
| Weighted round-robin | Sequential by weight | Servers with different capacities |
| Least connections | Fewest active sessions | Variable session duration |
| IP hash | Source IP determines server | Simple persistence without cookies |
| Random | Random assignment | Simple, roughly equal servers |
Misconception: load balancers eliminate the need for high-availability design.
Load balancers improve availability and performance, but the back-end servers still need to be designed for redundancy. The load balancer itself can also be a single point of failure, so deploy load balancers in HA pairs.
Misconception: round-robin is always the best algorithm.
Round-robin assumes equal server capacity and equal request processing time, so it performs poorly when requests vary significantly in resource consumption. Least-connections or weighted algorithms are better suited to real-world workloads.
These questions are representative of what you will see on Network+ exams. The correct answer and explanation are shown immediately below each question.
A web application stores user session data in server memory. Which load balancing feature must be configured to prevent users from losing their session when requests are distributed?
Answer: Session persistence (sticky sessions).
Explanation: Session persistence ensures all requests from the same client are directed to the same back-end server. This is essential for stateful applications that store session data locally: if a user is sent to a different server mid-session, the session data is missing and the user may be logged out or lose work.
A reverse proxy sits in front of servers and forwards client requests to back-end servers — it can provide caching, SSL termination, compression, and security filtering. A load balancer is a specific type of reverse proxy focused on distributing traffic across multiple servers. Modern load balancers combine both functions: load distribution, SSL termination, health checking, and Layer 7 content routing.