How to Configure NGINX Load Balancing

Filed Under: NGINX

Load balancing is the process of distributing incoming traffic across multiple instances of an application. It is commonly used to improve resource utilization, maximize throughput and reduce latency, as well as to ensure fault tolerance of deployed applications.

Load balancing can be implemented at the transport layer (Layer 4) or the application layer (Layer 7) of the OSI reference model. Transport-layer load balancing is often handled by dedicated hardware appliances, while application-layer load balancing is typically implemented with specialized software such as NGINX or HAProxy.

NGINX Load Balancing


NGINX is a modern, open-source, high-performance web server. Apart from serving static and dynamic content very efficiently, NGINX can also be configured to act as a load balancer that handles a large number of incoming connections and distributes them to separate upstream servers for processing, thereby improving the fault tolerance and performance of deployed applications.

This tutorial describes how to configure NGINX as a load balancer on Ubuntu 18.04.

Prerequisite

  • You have already installed NGINX by following our NGINX installation tutorial.

Configure NGINX as a Load Balancer

The load balancing configuration instructs NGINX how to process incoming traffic by distributing it across a pool of servers, preferably located in a private network. This pool of servers is known as the backend or upstream servers.

To configure NGINX as a load balancer, the first step is to define the upstream (backend) servers in your configuration file. The upstream module of NGINX does exactly this, grouping the upstream servers like the following:


upstream backend {
   server 10.5.6.21; 
   server 10.5.6.22;
   server 10.5.6.23;
}

The above upstream directive defines a group named backend containing three servers, referenced by their private IP addresses. You can also define the upstream servers by their domain names; however, from a performance and security point of view, it is better to refer to them by their private addresses.

Once the upstream servers have been defined, you just need to reference them in the desired location block using the proxy_pass directive to load balance the traffic.


server {
         ...
         server_name SUBDOMAIN.DOMAIN.TLD;
         ...
         location / {
                      proxy_pass  http://backend;
         }
}

In the above server block, whenever traffic for the domain SUBDOMAIN.DOMAIN.TLD matches the root (/) location block, NGINX forwards the request to the upstream servers one by one. The default method of choosing an upstream server is round robin, although a few other load balancing methods are available, which we will discuss in the next section.

Let us now use the above upstream and server blocks to create a load balancer for a domain. To do that, navigate to the NGINX server block configuration directory and create a configuration file using a text editor.


# cd /etc/nginx/sites-available/
# vi load_balancer.conf

upstream backend {
   server 10.0.2.144;
   server 10.0.2.42;
   server 10.0.2.44;
   # add more servers here
}

server {
           listen 80; 
           server_name SUBDOMAIN.DOMAIN.TLD;
           location / {
                        proxy_set_header Host $host;                          
                        proxy_set_header X-Real-IP $remote_addr;                        
                        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;                          
                        proxy_pass http://backend;
           }
}

Whenever traffic arrives at port 80 for the domain SUBDOMAIN.DOMAIN.TLD, NGINX forwards the request to each of the upstream servers defined in the upstream block in a round robin fashion, thus load balancing the traffic. Make sure the hostname in the proxy_pass directive matches the name of the upstream block.

Now check the NGINX configuration for any syntax errors and enable the server block. Once done, reload NGINX to apply the new settings.


# nginx -t
# cd /etc/nginx/sites-enabled/
# ln -s ../sites-available/load_balancer.conf .
# systemctl reload nginx

To test the load balancer, make a curl request to it. You will find that the load balancer forwards requests to the upstream servers one by one in round robin fashion, thereby load balancing the traffic.

Curl query to the NGINX load balancer

NGINX Load Balancing Methods

1. Round robin

The round robin scheme distributes traffic to the upstream servers equally and is the default scheme used if you don't specify one. With this scheme, each upstream server is selected in turn, according to the order in which they appear in the configuration file, so client requests are served by each of the listed upstream servers in rotation. The load balancer we configured earlier uses this scheme to choose an upstream server.
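To make the rotation concrete, here is a minimal Python sketch of round robin selection (an illustration only, not NGINX's actual implementation), reusing the example backend addresses from the configuration above:

```python
from itertools import cycle

# Example backend addresses from the upstream block above
backends = ["10.0.2.144", "10.0.2.42", "10.0.2.44"]

# Round robin: hand each incoming request to the next server in turn
picker = cycle(backends)

def pick_round_robin():
    return next(picker)

# Six requests cycle through the three servers twice, in listed order
choices = [pick_round_robin() for _ in range(6)]
print(choices)
```

Each server receives every third request, which is the behavior you observe when repeatedly querying the load balancer with curl.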

2. Least connected

With this scheme, the load balancer assigns a request to the upstream server with the least number of active connections. This is useful when connections to the upstream servers take some time to complete, or when an upstream server is overloaded with active connections. To configure least-connected load balancing, add the least_conn directive as the first line within the upstream block, like below.


upstream wordpress_apps {
   least_conn;
   server 10.0.2.144;
   server 10.0.2.42;
   server 10.0.2.44;
   # add more servers here
}
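The selection rule itself is simple; as a rough sketch (not NGINX's implementation), with a hypothetical snapshot of active connection counts:

```python
# Hypothetical snapshot of active connections per backend
active = {"10.0.2.144": 12, "10.0.2.42": 3, "10.0.2.44": 7}

def pick_least_conn(conns):
    # Choose the server currently holding the fewest active connections
    return min(conns, key=conns.get)

print(pick_least_conn(active))  # the server with only 3 active connections
```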

3. IP Hash

This scheme selects an upstream server by generating a hash value using the client's IP address as the key. This causes requests from a given client to be forwarded to the same upstream server, provided it is available and the client's IP address has not changed. To configure the IP hash method of load balancing, add the ip_hash directive at the beginning of the upstream block, like below.


upstream wordpress_apps {
   ip_hash;
   server 10.0.2.144;
   server 10.0.2.42;
   server 10.0.2.44;
   # add more servers here
}
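The idea can be sketched in a few lines of Python. Note this is an illustration only: NGINX's real ip_hash keys on the first three octets of an IPv4 address, whereas this sketch hashes the whole address with a generic digest.

```python
import hashlib

backends = ["10.0.2.144", "10.0.2.42", "10.0.2.44"]

def pick_ip_hash(client_ip):
    # A deterministic hash of the client address maps it to a fixed
    # backend, so repeat requests from the same client land on the
    # same server (as long as the backend list does not change).
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

# The same client always gets the same upstream server
assert pick_ip_hash("203.0.113.7") == pick_ip_hash("203.0.113.7")
```

This stickiness is what makes ip_hash useful for applications that keep per-client session state on the upstream servers.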

4. Weighted

The weighted method allows further fine-tuning of load balancing in NGINX. With this scheme, the upstream server with the highest weight is selected most often. It is useful when the upstream servers' resources are unequal, since it favors the servers with more available resources. To configure weighted load balancing, add the weight parameter after the server address in the upstream block, like below.


upstream wordpress_apps {
   server 10.0.2.144 weight=2; 
   server 10.0.2.42 weight=3;
   server 10.0.2.44 weight=5;
   # add more servers here
}

With the above configuration, out of every 10 requests, 2 are forwarded to the first server, 3 to the second server and 5 to the third server.
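The proportions can be checked with a small Python sketch. The simplest weighted scheme just repeats each server according to its weight; NGINX actually uses a smoother interleaving, but the resulting proportions are the same:

```python
from collections import Counter

# weight=2, 3 and 5, as in the upstream block above
weights = {"10.0.2.144": 2, "10.0.2.42": 3, "10.0.2.44": 5}

# Naive weighted round robin: repeat each server weight-many times
schedule = [server for server, w in weights.items() for _ in range(w)]

# Out of every 10 slots: 2, 3 and 5 go to the respective servers
counts = Counter(schedule)
print(counts)
```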

Summary

Load balancing with NGINX is relatively simple to set up, yet it is a powerful way to increase throughput and improve resource utilization for any web application. Furthermore, it can improve the security of a web application, since the upstream servers can be placed in a private network. You can now proceed with implementing a load balancer in your environment by choosing a suitable method.

Comments

  1. Rajendra Singh says:

    Do we need to do any configurations of slave servers?
