Load Balancers rescue our systems from being overwhelmed by distributing the load across multiple servers. However, the scale of the internet can put even the most sophisticated system to the test.
The Load Balancer is a point through which all network traffic passes and is then routed.
It is also possible for the Load Balancer itself to be overwhelmed by the sheer volume of requests, making it a bottleneck.
A Load Balancer may also become a Single Point of Failure for the whole system it serves.
---
It is therefore important to build High Availability into the Load Balancing layer as well.
The idea of multiple Load Balancers routing to the same systems can be explored with DNS Round Robin.
In this case, the IP addresses of multiple Load Balancers are sent as a DNS Response.
The order of the IP Addresses in each response is rotated in Round Robin fashion, so successive clients connect to different Load Balancers first.
Let’s say we configured 3 LBs (A, B, C).
The first client that puts forward a connection request gets the addresses in the order A, B, C, so it connects to A first.
The second client would get the rotated sequence B, C, A, so it connects to B first.
This will ensure that all our Load Balancers are always active and that there is no Single Point of Failure.
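A minimal sketch of this rotation, assuming three hypothetical LB addresses, could look like this:

```python
from itertools import cycle

# Hypothetical IP addresses for the three Load Balancers A, B, C.
LB_IPS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def dns_round_robin(ips):
    """Yield the IP list, rotated by one position per DNS query."""
    n = len(ips)
    for start in cycle(range(n)):
        yield ips[start:] + ips[:start]

responses = dns_round_robin(LB_IPS)
print(next(responses))  # first client:  ['10.0.0.1', '10.0.0.2', '10.0.0.3']
print(next(responses))  # second client: ['10.0.0.2', '10.0.0.3', '10.0.0.1']
```

Because every client receives all three addresses, a client can also fall back to the next address in its list if the first one is unreachable.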
---
However, a huge part of Internet traffic in the Cloud age is due to instances/containers communicating with each other.
Containers constantly discover services and call one another, and this service-to-service traffic can place a heavy burden on the systems.
The issue with using Round Robin for such cases is that Load Balancers are often deployed as one LB per Microservice.
The failure of a single LB would then take an entire microservice offline.
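Client-side load balancing removes that central hop: each client keeps its own list of endpoints and picks one locally. A minimal sketch of the idea, with hypothetical service addresses:

```python
class ClientSideBalancer:
    """Each client holds its own endpoint list and chooses locally,
    so no central load balancer sits in the request path."""

    def __init__(self, endpoints):
        self.endpoints = list(endpoints)
        self._i = 0

    def pick(self):
        # Simple local round robin; real systems also weight by health.
        endpoint = self.endpoints[self._i % len(self.endpoints)]
        self._i += 1
        return endpoint

balancer = ClientSideBalancer(["svc-a:8080", "svc-b:8080"])
print(balancer.pick())  # svc-a:8080
print(balancer.pick())  # svc-b:8080
```

With this pattern, a single balancer failure only affects its own client, not the whole microservice.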
In such scenarios we can use a Client-Side Load Balancing system such as Baker Street.
Baker Street is a service discovery and routing system for Microservice ecosystems.
It solves the following three problems:
1. Service Discovery
2. Load Balancing
3. Health Checking
It consists of 3 Key components:
1. Sherlock: Like Sherlock Holmes, this component tracks down the right destination and routes our requests to it.
2. Watson: Named after the Doctor, this is the health check component. It reports the real-time health of its local container.
3. Datawire Directory: A Directory of all available containers, as reported by each Watson. Sherlock consults it to draw routes to the different containers.
Sherlock and Watson are installed locally on each instance or container. The Datawire Directory runs on a central server, which must therefore offer low latency and tolerate connection faults.
When a new container (3-B) is added to the pool, Baker Street is installed on it, bringing Sherlock and Watson along; the central Datawire Directory is already running.
The Watson from the 3-B container first informs the Datawire Directory that it is going live.
Datawire then registers 3-B in its database as Live and sends a signal to all the local Sherlocks that a new node, 3-B, has been added.
Each Sherlock then adds a route to 3-B and, whenever required, guides its own instance towards 3-B.
A similar process will be followed when a Node/Container goes offline.
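The flow above can be sketched as an in-process simulation. This is only an illustration of the registration pattern described here, not Baker Street's actual API:

```python
class DatawireDirectory:
    """Central registry of live containers (illustrative, not the real API)."""

    def __init__(self):
        self.live = set()
        self.sherlocks = []                # subscribed local routers

    def register(self, node):
        self.live.add(node)                # mark the node as Live
        for s in self.sherlocks:           # notify every local Sherlock
            s.on_node_added(node)

    def deregister(self, node):
        self.live.discard(node)
        for s in self.sherlocks:
            s.on_node_removed(node)


class Sherlock:
    """Local router: maintains routes to every live container."""

    def __init__(self, directory):
        self.routes = set(directory.live)
        directory.sherlocks.append(self)

    def on_node_added(self, node):
        self.routes.add(node)

    def on_node_removed(self, node):
        self.routes.discard(node)


class Watson:
    """Local health check: announces its own container's state."""

    def __init__(self, node, directory):
        self.node, self.directory = node, directory

    def go_live(self):
        self.directory.register(self.node)

    def go_offline(self):
        self.directory.deregister(self.node)


directory = DatawireDirectory()
sherlock_a = Sherlock(directory)           # Sherlock on an existing container

watson_3b = Watson("3-B", directory)       # new container 3-B comes up
watson_3b.go_live()
print(sherlock_a.routes)                   # {'3-B'}

watson_3b.go_offline()                     # node leaves the pool
print(sherlock_a.routes)                   # set()
```

The same broadcast mechanism handles both directions: Watson announces the change once, the Directory fans it out, and every Sherlock updates its route table.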