Load Balancing
The Traffic Manager offers a choice of load-balancing algorithms that distribute requests among the nodes in the pool. The algorithms are as follows:
Algorithm | Description
--------- | -----------
Round Robin | Connections are routed to each of the back-end servers in turn.
Weighted Round Robin | As for Round Robin, but with different proportions of traffic directed to each node. The weighting for each node must be specified using the entry boxes provided.
Perceptive | Monitors the load and response times of each node, and predicts the best distribution of traffic. This optimizes response times and ensures that no single server is overloaded.
Least Connections | Chooses the back-end server that currently has the smallest number of connections.
Weighted Least Connections | Chooses the back-end server that currently has the smallest number of connections, scaled by the weight of each server. Weights can be specified in the entry boxes provided.
Fastest Response Time | Sends traffic to the back-end server currently giving the fastest response time.
Random Node | Chooses a back-end server at random.
(Traffic Manager multi-site mode only) Unlike other areas of the Traffic Manager UI, node weightings cannot be switched between single and multiple location-based input. Instead, if you have chosen to configure your nodes by location, the associated weightings are automatically displayed by the locations used.
Selecting the Optimum Load Balancing Method
“Least Connections” is generally the best load-balancing algorithm for homogeneous traffic, where every request puts the same load on the back-end server and every back-end server has the same performance. The majority of HTTP services fall into this category. Even if some requests generate more load than others (for example, a database lookup compared to an image retrieval), the Least Connections method will distribute requests evenly across the machines, and if there are sufficient requests of each type, the load will be shared very effectively. Weighted Least Connections is a refinement that can be used when the servers have different capacities; servers with larger weights receive proportionally more connections.
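The Weighted Least Connections decision can be sketched as follows. This is a minimal illustration, not the Traffic Manager's internal implementation; the function name and data layout are assumptions.

```python
def weighted_least_connections(nodes):
    """Pick the node whose active-connection count, scaled down by its
    weight, is lowest. `nodes` maps node name -> (connections, weight)."""
    return min(nodes, key=lambda name: nodes[name][0] / nodes[name][1])

# Node "b" has twice the weight of "a", so it is chosen even though it
# currently holds more connections (8/2 = 4 beats 5/1 = 5).
print(weighted_least_connections({"a": (5, 1), "b": (8, 2)}))  # -> b
```

With equal weights this reduces to plain Least Connections.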
Least Connections is not appropriate when individual high-load requests cause significant slowdowns, and these requests are infrequent. Neither is it appropriate when the different servers have different capacities. The “Fastest Response Time” algorithm will send requests to the server that is performing best (responding most quickly), but it is a reactive algorithm (it only notices slowdowns after the event) so it can often overload a fast server and create a choppy performance profile.
“Perceptive” is designed to take the best features of both Least Connections and Fastest Response Time. It adapts according to the nature of the traffic and the performance of the servers; it will lean towards Least Connections when traffic is homogeneous, and Fastest Response Time when the loads are very variable. It uses a combination of the number of current connections and recent response times to trend and predict the performance of each server.
Under this algorithm, traffic is introduced to a new server (or a server that has returned from a failed state) gently, and is progressively ramped up to full operation. When a new server is added to a pool, the algorithm tries it with a single request, and if it receives a reply, gradually increases the number of requests it sends the new server until it is receiving the same proportion of the load as other equivalent nodes in the pool. This ramping is done in an adaptive way, dependent on the responsiveness of the server. So, for example, a new Web server serving a small quantity of static content will very quickly be ramped up to full speed, whereas a Java application server that compiles JSPs the first time they are used (and so is slow to respond to begin with) will be ramped up more slowly.
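The ramp-up idea can be illustrated with a deliberately simplified sketch: the node's effective weight grows with each successful reply until it reaches the configured weight. The real algorithm adapts the ramp to the server's responsiveness rather than using a fixed linear schedule, and the names here are assumptions.

```python
def ramped_weight(base_weight, successful_replies, ramp_steps=10):
    """Effective weight of a newly added (or recovered) node: starts at
    zero and grows linearly with each successful reply until the node
    carries its full configured weight."""
    ramp = min(successful_replies, ramp_steps) / ramp_steps
    return base_weight * ramp

# A node with configured weight 100 is at half strength after 5 replies.
print(ramped_weight(100, 5))   # -> 50.0
print(ramped_weight(100, 20))  # -> 100.0 (fully ramped)
```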
Least Connections is simpler and more deterministic than Perceptive, so use it in preference where it is appropriate.
Caveats with Load Balancing Algorithms
Least Connections, Fastest Response Time and Perceptive can all have unexpected behavior at very low traffic levels. Least Connections will not distribute requests if you only ever subject it to one connection at a time; Perceptive and Fastest Response Time will tend to favor nodes with known good response times and will ignore nodes that are untested.
Load balancing metrics are not shared between Traffic Managers in a cluster. For example, if two Traffic Managers use the Round Robin algorithm to distribute requests, they will each progress through the nodes in turn, but independently.
Most load balancing metrics are shared between processes (CPU cores) when the traffic management software runs on a multi-core server. The one exception is response time information; this is not shared across cores.
If you are testing your back-end servers and want to be sure that traffic is directed to all of them (that is, you want request distribution rather than load balancing), use “Round Robin” or “Random Node” for your test traffic.
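For test traffic, Round Robin behaves like a simple cycle over the node list, which guarantees every node is exercised. A minimal sketch (the node names are placeholders):

```python
import itertools

def round_robin(nodes):
    """Cycle through the nodes in order, forever."""
    return itertools.cycle(nodes)

rr = round_robin(["web1", "web2", "web3"])
first_six = [next(rr) for _ in range(6)]  # each node is hit exactly twice
print(first_six)
```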
Locality Aware Request Distribution (LARD)
Perceptive, Least Connections and Fastest Response Time all use a technique called LARD (Locality Aware Request Distribution) to try to send the same request to the same back-end server. This technique takes advantage of server caching: if a server has returned a particular item of content, it is likely to be able to serve the same content quickly again, because the content will be held in an internal cache (memory or disk).
When each algorithm makes its load balancing decision, it weights the decision with information as to which node processed the same request previously:
• Least Connections will choose the favored node if there are several candidates for the node with the fewest connections and the favored node is one of them.
• Perceptive and Fastest Response Time give a slight additional weight to the favored node in their internal selection.
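The Least Connections tie-break described above can be sketched as follows. This is an illustrative simplification, not the Traffic Manager's actual code; the favored node would in practice be looked up from a record of which node served this content before.

```python
def least_connections_lard(nodes, favored):
    """Least Connections with a LARD tie-break: among the nodes tied for
    the fewest connections, prefer the node that served this content
    previously. `nodes` maps node name -> active connection count."""
    fewest = min(nodes.values())
    candidates = [name for name, count in nodes.items() if count == fewest]
    if favored in candidates:
        return favored
    return candidates[0]

# "a" and "b" are tied on 3 connections; the favored node "b" wins.
print(least_connections_lard({"a": 3, "b": 3, "c": 5}, favored="b"))  # -> b
```

If the favored node is not among the least-loaded candidates, locality is ignored and the least-loaded node is chosen, which preserves the load-balancing guarantee.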
Locality Aware Request Distribution is a lightweight way to advise the Traffic Manager how to route requests to back-end nodes. If you wish to mandate that requests for the same URL are sent to the same back-end server, you should use Universal Session Persistence (keyed by the URL) (Session Persistence) to do this more forcefully. Alternatively, you can gain full control over request routing using TrafficScript and named node persistence, forward proxy or pool selection techniques.