How to Configure Nginx as a Load Balancer
In this guide, we want to teach you how to configure Nginx as a load balancer.
A load balancer is a device that acts as a reverse proxy and distributes network or application traffic across a number of servers. Load balancers are used to increase the capacity (concurrent users) and reliability of applications. They improve the overall performance of applications by decreasing the burden on servers associated with managing and maintaining application and network sessions, as well as by performing application-specific tasks.
Nginx, a popular web server software, can be configured as a simple yet powerful load balancer to improve your server’s resource availability and efficiency.
Steps To Configure Nginx as a Load Balancer
To complete this guide, you must log in to your server as a non-root user with sudo privileges. Also, you need to have Nginx installed on your server.
Use Nginx Upstream Module as a Load Balancer
To set up Nginx as a load balancer, you need to use the Nginx upstream module.
For this Nginx load balancer example, we will edit the file named default which is in the /etc/nginx/sites-available folder.
sudo vi /etc/nginx/sites-available/default
All of the backend servers that work together in a cluster to support a single application or microservice should be listed together in an Nginx upstream block.
At this point, add the upstream module that looks like this:
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}
Then, you should reference the upstream block further on in the configuration by passing requests to it with proxy_pass:
server {
    location / {
        proxy_pass http://backend;
    }
}
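Putting the two pieces together, a minimal /etc/nginx/sites-available/default could look like the sketch below. The backend hostnames and server_name are placeholders; the proxy_set_header lines are an optional but common addition that forwards the original client address to the backends:

```nginx
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

server {
    listen 80;
    server_name example.com;

    location / {
        # Pass all requests to the upstream group defined above
        proxy_pass http://backend;

        # Forward the original host and client address to the backends
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Without the X-Forwarded-For header, the backend servers would only ever see the load balancer's IP address in their logs.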
At this point, test the configuration syntax and restart Nginx:
sudo nginx -t
sudo service nginx restart
Once Nginx restarts, it distributes incoming requests across the backend servers in the upstream group, using a round-robin method by default.
Additionally, there are several directives that you can use to direct site visitors more effectively.
Weight
A common configuration to add to the Nginx load balancer is a weighting on individual servers.
If one server is more powerful than another, you should make it handle a larger number of requests. To do this, add a higher weighting to that server.
A load-balanced setup that includes server weights could look like this:
upstream backend {
    server backend1.example.com weight=1;
    server backend2.example.com weight=2;
    server backend3.example.com weight=4;
}
The default weight is 1. With a weight of 2, backend2.example.com will be sent twice as much traffic as backend1, and backend3, with a weight of 4, will deal with twice as much traffic as backend2 and four times as much as backend1. In other words, out of every seven requests, backend1 handles one, backend2 handles two, and backend3 handles four.
Hash
IP hash load balancing uses the client's IP address as a hashing key, so that requests from the same client are always sent to the same backend server. This is useful when your application requires session persistence.
The configuration below provides an example. The down parameter marks backend3 as unavailable while keeping it in the configuration, which preserves the hash distribution of client IPs across the remaining servers:
upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com down;
}
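Besides ip_hash, the upstream module supports other balancing methods as well. For example, the least_conn directive sends each new request to the server with the fewest active connections, which can help when request durations vary widely. A sketch, again with placeholder hostnames:

```nginx
upstream backend {
    # Route each request to the backend with the fewest active connections
    least_conn;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}
```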
Max Fails
There are two parameters associated with health checking here: max_fails and fail_timeout. max_fails refers to the maximum number of failed attempts to connect to a server that should occur before it is considered unavailable. fail_timeout specifies the length of time for which the server is considered unavailable; once that time expires, new attempts to reach the server will start up again. The default timeout value is 10 seconds.
A sample configuration might look like this:
upstream backend {
    server backend1.example.com max_fails=3 fail_timeout=15s;
    server backend2.example.com weight=2;
    server backend3.example.com weight=4;
}
Conclusion
At this point, you have learned how to configure Nginx as a load balancer.
Hope you enjoy it.
Also, you may like to visit the OrcaCore website for more guides and articles.