Craig Risi

Making Sense Of Load Balancing




This article originally appeared on Snapt's blog.


We use a variety of tools and application delivery controllers (ADCs) to balance the load of our software applications around the world.


We have learned to rely on them immensely to ensure that our applications stay online and are able to scale and respond to changing needs. However, few people understand how load balancers work to ensure the availability of our critical applications and services.

As early as 1990 we saw the origins of the first hardware appliances for distributing traffic across a network. In the years since then, these appliances have evolved from hardware to software. They have also absorbed additional features designed to improve application performance and security, transforming them into the ADCs we use today.


Load Balancing Methods

Despite these architectural and functional changes, load balancers have consistently relied on a simple set of rules to determine how they distribute traffic across a network. To make these critical decisions, load balancers use the following methods.


Least Connection Method

This method directs traffic to the server with the fewest active connections. It is most useful when traffic includes a large number of persistent connections that are unevenly distributed across the servers.
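At its core, the decision is a simple minimum over a connection table. A minimal sketch in Python (the server names and counts are illustrative, not any particular product's API):

```python
# Active-connection counts, which a real balancer updates as requests
# open and close. The names and numbers here are illustrative.
active_connections = {"app-1": 12, "app-2": 4, "app-3": 9}

def least_connection(counts):
    """Return the server with the fewest active connections."""
    return min(counts, key=counts.get)

print(least_connection(active_connections))  # -> app-2
```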


Least Response Time Method

This algorithm directs traffic to the server with the fewest active connections and the lowest average response time.
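Implementations combine the two metrics in different ways; the sketch below (with made-up numbers) prefers fewer active connections and breaks ties on the lower average response time:

```python
# Per-server statistics a balancer might track; the values are made up.
stats = {
    "app-1": {"conns": 12, "avg_ms": 40.0},
    "app-2": {"conns": 4,  "avg_ms": 85.0},
    "app-3": {"conns": 4,  "avg_ms": 30.0},
}

def least_response_time(stats):
    """Fewest active connections first, lowest average response time as tiebreak."""
    return min(stats, key=lambda s: (stats[s]["conns"], stats[s]["avg_ms"]))

print(least_response_time(stats))  # -> app-3
```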


Least Bandwidth Method

This method selects the server that is currently serving the least amount of traffic, measured in megabits per second (Mbps).
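The selection is the same kind of minimum, applied to a per-server traffic measurement (the figures below are illustrative):

```python
# Outbound traffic per server in Mbps, measured over a recent window.
traffic_mbps = {"app-1": 310.5, "app-2": 120.0, "app-3": 245.8}

def least_bandwidth(traffic):
    """Return the server currently serving the least traffic."""
    return min(traffic, key=traffic.get)

print(least_bandwidth(traffic_mbps))  # -> app-2
```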


Round Robin Method

This method cycles through a list of servers and sends each new request to the next server. When it reaches the end of the list, it starts over at the beginning. It is most useful when servers are of equal specification and there are not many persistent connections.
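In code, round robin is simply an endlessly repeating rotation over the server list. A minimal Python sketch:

```python
import itertools

# Rotate through the servers; each new request gets the next one in line.
servers = ["app-1", "app-2", "app-3"]
rotation = itertools.cycle(servers)

for _ in range(5):
    print(next(rotation))  # app-1, app-2, app-3, app-1, app-2
```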


Weighted Round Robin Method

Weighted round-robin scheduling is designed to handle servers with different processing capacities. Each server is assigned a weight: an integer value indicating its relative processing capacity. Servers with higher weights receive proportionally more connections than servers with lower weights.
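A naive but faithful way to show the proportions is to repeat each server according to its weight and rotate over the expanded list. Production balancers interleave the picks more smoothly, but the ratio of connections per server is the same; the weights below are illustrative:

```python
import itertools

# Integer weights reflecting relative capacity: app-1 should receive
# three of every six requests, app-2 two, and app-3 one.
weights = {"app-1": 3, "app-2": 2, "app-3": 1}

# Naive expansion of the schedule by weight; real implementations
# spread the picks out more evenly across the cycle.
schedule = [server for server, w in weights.items() for _ in range(w)]
rotation = itertools.cycle(schedule)

for _ in range(6):
    print(next(rotation))  # app-1, app-1, app-1, app-2, app-2, app-3
```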


IP Hash

This method calculates a hash of the client's IP address to determine which server receives the request, so requests from the same client are consistently routed to the same server.
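A minimal sketch: hash the client address and take it modulo the number of servers. Note that if the server list changes, most clients get remapped; consistent hashing is the usual refinement:

```python
import hashlib

servers = ["app-1", "app-2", "app-3"]

def ip_hash(client_ip, servers):
    """Map a client IP to a server; the same IP always lands on the
    same server as long as the server list is unchanged."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(ip_hash("203.0.113.7", servers))  # deterministic for this client
print(ip_hash("203.0.113.7", servers))  # same server again
```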


Selecting Load Balancing Methods

A good load balancer relies on more than one simple routing method; it applies the appropriate method at the right time. In most cases, load balancers rely on the user to configure static rules specifying which method to use in which circumstances. More modern load balancers can leverage automation, machine learning (ML), and artificial intelligence (AI) to evaluate conditions in real time against large datasets and select the best load balancing method.
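What such static rules might look like, in a purely hypothetical Python sketch (the statistics and thresholds are invented for illustration, not recommendations):

```python
def choose_method(traffic):
    """Pick a balancing method from simple traffic statistics.
    The rule thresholds here are hypothetical examples."""
    if traffic["persistent_ratio"] > 0.5:
        return "least_connection"     # long-lived connections pile up unevenly
    if traffic["avg_response_ms"] > 200:
        return "least_response_time"  # latency matters more than raw counts
    return "round_robin"              # homogeneous servers, short requests

print(choose_method({"persistent_ratio": 0.7, "avg_response_ms": 50}))
```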


Beyond Load Balancing: Security And Performance

Load balancing is no longer just about distributing traffic. Security has become a major concern as the load balancer also acts as the point of encryption and decryption when traffic enters a particular node. From SSL offloading to threat detection and protection, load balancers combined with web application firewall (WAF) functionality have become critical components of application security.
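To make "point of encryption and decryption" concrete, here is a bare-bones sketch of TLS termination in Python: the balancer completes the handshake and forwards plaintext to a backend. The certificate paths, port, and backend address are placeholders, and a real ADC adds cipher policy, WAF inspection, connection pooling, and much more:

```python
import socket
import ssl

# Terminate TLS from the client, forward plain HTTP upstream (one-shot demo).
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("cert.pem", "key.pem")  # placeholder certificate files

listener = socket.create_server(("0.0.0.0", 8443))
with context.wrap_socket(listener, server_side=True) as tls:
    client, _ = tls.accept()                    # TLS handshake happens here
    request = client.recv(65536)                # already-decrypted bytes
    with socket.create_connection(("10.0.0.5", 80)) as backend:  # placeholder
        backend.sendall(request)
        client.sendall(backend.recv(65536))     # relay the first response
    client.close()
```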


Performance has become equally important, with site speed affecting everything from bandwidth costs to search rankings, conversions, and e-commerce revenue. Load balancers with web acceleration functionality add compression, caching, rewriting, and more to reduce the load on web servers and keep websites and applications running optimally.


Becoming Cloud-Native

In recent years, load balancers have made the next leap from being software appliances to cloud-native services.


The ability to run in multiple public clouds, with a cloud-neutral approach, has become important in helping users avoid cloud lock-in while still leveraging the benefits of the cloud for agility, reach, and time-to-market.


At the same time, load balancers must now be architected for cloud-native environments: resource-light services that are compatible with containers, service discovery, and east-west traffic between microservices, and that expose full open APIs.


This also means adopting modern pricing and scaling structures, including Load Balancing as a Service (LBaaS), where load balancing is fully abstracted from an appliance (whether hardware or software) and becomes an intent-based service that users can deploy dynamically and automatically into any environment, at any scale, paying by the hour or by the second.

Load balancers continue to be an indispensable part of IT, DevOps, and software engineering, ensuring that applications remain available and ready to meet the demands of a modern world. The evolution of load balancing will continue to produce products that are increasingly service-oriented, more abstracted, easier to use and deploy, and better adapted to modern business operations.
