Updated on 24.5.2023

How to configure load balancing using Nginx

Load balancing

Advantages of load balancing

Load balancing is an excellent way to scale out your application and increase its performance and redundancy. Nginx, a popular web server, can be configured as a simple yet powerful load balancer to improve your servers' resource availability and efficiency.

How does Nginx work? Nginx acts as a single entry point to a distributed web application running on multiple separate servers.

This guide describes the advantages of load balancing and shows how to set up load balancing with nginx for your cloud servers.

As a prerequisite, you’ll need to have at least two hosts with web server software installed and configured to see the benefit of the load balancer. If you already have one web host set up, duplicate it by creating a custom image and deploying it onto a new server at your UpCloud control panel.

Installing nginx

The first thing to do is to set up a new host that will serve as your load balancer. Deploy a new instance at your UpCloud Control Panel if you haven’t already. Currently, nginx packages are available on the latest versions of CentOS, Debian and Ubuntu. So pick whichever of these you prefer.

After you have set up the server the way you like, install the latest stable nginx. Use one of the following methods.

# Debian and Ubuntu
sudo apt-get update
# Then install the Nginx Open Source edition
sudo apt-get install nginx
# CentOS
# Install the extra packages repository
sudo yum install epel-release
# Update the repositories and install Nginx
sudo yum update
sudo yum install nginx

Once installed, change into the nginx main configuration directory.

cd /etc/nginx/

Now depending on your OS, the web server configuration files will be in one of two places.

Ubuntu and Debian follow a rule for storing virtual host files in /etc/nginx/sites-available/, which are enabled through symbolic links to /etc/nginx/sites-enabled/. You can use the command below to enable any new virtual host files.

sudo ln -s /etc/nginx/sites-available/vhost /etc/nginx/sites-enabled/vhost

CentOS users can find their host configuration files under /etc/nginx/conf.d/ in which any .conf type virtual host file gets loaded.

Check that you find at least the default configuration and then restart nginx.

sudo systemctl restart nginx

Test that the server replies to HTTP requests by opening the load balancer server’s public IP address in your web browser. When you see the default nginx welcome page, the installation was successful.

Nginx default welcome page.
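You can run the same check from the command line if you prefer, for example with curl. A quick sketch, assuming 203.0.113.10 is your load balancer’s public IP address (replace it with your own):

# Fetch just the response headers from the load balancer
curl -I http://203.0.113.10/

An HTTP 200 OK response with a Server: nginx header confirms the web server is answering.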

If you have trouble loading the page, check that a firewall is not blocking your connection. For example, on CentOS 7 the default firewall rules do not allow HTTP traffic, so enable it with the commands below.

sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --reload

Then try reloading your browser.

Configuring nginx as a load balancer

Once nginx is installed and tested, you can start configuring it for load balancing. In essence, all you need to do is set up nginx with instructions on which type of connections to listen to and where to redirect them. Create a new configuration file using whichever text editor you prefer, for example with nano:

sudo nano /etc/nginx/conf.d/load-balancer.conf

In load-balancer.conf, you’ll need to define the following two segments, upstream and server; see the examples below.

# Define which servers to include in the load balancing scheme. 
# It's best to use the servers' private IPs for better performance and security.
# You can find the private IPs at your UpCloud control panel Network section.
http {
   upstream backend {
      server 10.1.0.101; 
      server 10.1.0.102;
      server 10.1.0.103;
   }

   # This server accepts all traffic to port 80 and passes it to the upstream. 
   # Notice that the upstream name and the proxy_pass need to match.

   server {
      listen 80; 

      location / {
          proxy_pass http://backend;
      }
   }
}

Then save the file and exit the editor. Note that on many systems the main /etc/nginx/nginx.conf already wraps the conf.d includes inside an http block. If that is the case on your server, leave out the http { } wrapper above and keep only the upstream and server segments, as nesting http blocks will stop nginx from starting.

Next, disable the default server configuration that you tested after the installation. Again, depending on your OS, this part differs slightly.

On Debian and Ubuntu systems you’ll need to remove the default symbolic link from the sites-enabled folder.

sudo rm /etc/nginx/sites-enabled/default

CentOS hosts don’t use the same linking. Instead, simply rename the default.conf in the conf.d/ directory to something that doesn’t end with .conf, for example:

sudo mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.disabled

Then use the following to restart nginx.

sudo systemctl restart nginx

Check that nginx starts successfully. If the restart fails, take a look at the /etc/nginx/conf.d/load-balancer.conf you just created to make sure there are no typos or missing semicolons.
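You can also have nginx validate the configuration before restarting, which points out the file and line of the first syntax error it finds:

# Test the configuration files without affecting the running server
sudo nginx -t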

When you enter the load balancer’s public IP address in your web browser, you should be passed to one of your back-end servers.
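To observe the balancing in action, it helps to make each back-end serve a page that identifies the host, then fetch the page repeatedly. A quick sketch using curl, again assuming 203.0.113.10 is your load balancer’s public IP:

# Request the front page six times in a row
for i in $(seq 1 6); do curl -s http://203.0.113.10/; done

With the default configuration, the responses should cycle through your back-end servers in turns.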

Load balancing methods

Load balancing with nginx uses a round-robin algorithm by default if no other method is defined, like in the first example above. With a round-robin scheme, each server is selected in turns according to the order you set them in the load-balancer.conf file. This balances the number of requests equally for short operations.

Least connections-based load balancing is another straightforward method. As the name suggests, this method directs the requests to the server with the least active connections at that time. It works more fairly than round-robin would with applications where requests might sometimes take longer to complete.

To enable the least connections balancing method, add the parameter least_conn to your upstream section as shown in the example below.

upstream backend {
   least_conn;
   server 10.1.0.101; 
   server 10.1.0.102;
   server 10.1.0.103;
}

Round-robin and least connections balancing schemes are fair and have their uses. However, they cannot provide session persistence. If your web application requires that users are subsequently directed to the same back-end server as during their previous connection, use the IP hashing method instead. IP hashing uses the visitor’s IP address as a key to determine which host should service the request. This allows visitors to be directed to the same server each time, provided that the server is available and the visitor’s IP address hasn’t changed.

To use this method, add the ip_hash parameter to your upstream segment as in the example underneath.

upstream backend {
   ip_hash;
   server 10.1.0.101; 
   server 10.1.0.102;
   server 10.1.0.103;
}

In a server setup where the available resources between different hosts are not equal, it might be desirable to favour some servers over others. Defining server weights allows you to further fine-tune load balancing with nginx. The server with the highest weight in the load balancer is selected the most often.

upstream backend {
   server 10.1.0.101 weight=4; 
   server 10.1.0.102 weight=2;
   server 10.1.0.103;
}

For example, in the configuration shown above, the first server is selected twice as often as the second, which again gets twice as many requests as the third.

Load balancing with HTTPS enabled

Enabling HTTPS for your site is a great way to protect your visitors and their data. If you haven’t yet implemented encryption on your web hosts, we highly recommend taking a look at our guide on how to install Let’s Encrypt on nginx.

Using encryption with a load balancer is easier than you might think. All you need to do is add another server section to your load balancer configuration file that listens for HTTPS traffic on port 443 with SSL, and then set up a proxy_pass to your upstream segment just like for HTTP in the example above.

Open your configuration file again for editing.

sudo nano /etc/nginx/conf.d/load-balancer.conf

Then add the following server segment to the end of the file.

server {
   listen 443 ssl;
   server_name domain_name;
   ssl_certificate /etc/letsencrypt/live/domain_name/cert.pem;
   ssl_certificate_key /etc/letsencrypt/live/domain_name/privkey.pem;
   ssl_protocols TLSv1.2 TLSv1.3;

   location / {
      proxy_pass http://backend;
   }
}

Then save the file, exit the editor and restart nginx again.

sudo systemctl restart nginx

Setting up encryption at your load balancer when you are using private network connections to your back end has some great advantages.

  • As only your UpCloud servers have access to your private network, you can terminate SSL at the load balancer and pass only plain HTTP connections onward, as sketched below.
  • It also greatly simplifies your certificate management. You can obtain and renew the certificates from a single host.
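When SSL is terminated at the load balancer, the back-end servers only see plain HTTP arriving from the balancer’s address. If your application needs to know the original protocol or the client’s IP, you can forward them in request headers. A minimal sketch of the proxying location using standard nginx directives (the header names are common conventions, adjust them to whatever your application reads):

location / {
   proxy_pass http://backend;
   # Tell the back end which protocol the client originally used
   proxy_set_header X-Forwarded-Proto $scheme;
   # Preserve the original client address and requested host
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_set_header Host $host;
}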

With HTTPS enabled, you also have the option to enforce encryption on all connections to your load balancer. Simply update the server segment listening on port 80 with a server name and a redirect to your HTTPS port. Then remove or comment out the location portion as it’s no longer needed. See the example below.

server {
   listen 80;
   server_name domain_name;
   return 301 https://$server_name$request_uri;

   #location / {
   #   proxy_pass http://backend;
   #}
}

Save the file again after you have made the changes. Then restart nginx.

sudo systemctl restart nginx

Now all connections to your load balancer will be served over an encrypted HTTPS connection. Requests to the unencrypted HTTP will be redirected to use HTTPS as well. This provides a seamless transition into encryption. Nothing is required from your visitors.

Health checks

In order to know which servers are available, nginx’s implementation of reverse proxying includes passive server health checks. If a server fails to respond to a request or replies with an error, nginx will note that the server has failed and will try to avoid forwarding connections to it for a time.

The number of consecutive unsuccessful connection attempts within a certain time period is defined in the load balancer configuration file with the max_fails parameter on the server lines. By default, when no max_fails is specified, this value is set to 1. Optionally, setting max_fails to 0 disables the health checks for that server.

If max_fails is set to a value greater than 1, the subsequent failures must happen within a specific time frame to count. This time frame is specified by the fail_timeout parameter, which also defines how long the server should be considered failed. By default, fail_timeout is set to 10 seconds.

After a server is marked failed and the time set by fail_timeout has passed, nginx will begin to gracefully probe the server with client requests. If the probes succeed, the server is again marked live and included in the load balancing as normal.

upstream backend {
   server 10.1.0.101 weight=5;
   server 10.1.0.102 max_fails=3 fail_timeout=30s;
   server 10.1.0.103;
}

Make use of the health checks: they allow you to adapt your server back end to the current demand by powering hosts up or down as required. Starting additional servers during high traffic can easily increase your application’s performance, as the new resources become automatically available to the load balancer.
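The standard upstream module also supports a couple of per-server parameters that are useful when rotating hosts in and out of service. A short sketch, reusing the example addresses from above (10.1.0.104 is a hypothetical extra host):

upstream backend {
   server 10.1.0.101;
   server 10.1.0.102;
   # Receives traffic only when the servers above are unavailable
   server 10.1.0.103 backup;
   # Marked administratively offline, for example during maintenance
   # server 10.1.0.104 down;
}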

Conclusions on the advantages of load balancing

If you wish to improve your web application’s performance and availability, a load balancer is definitely worth considering. Nginx is powerful yet relatively simple to set up as a load balancer for your web servers. Together with an easy encryption solution, such as the Let’s Encrypt client, it makes for a great front end to your web farm. Check out the documentation for upstream over at nginx.org to learn more.

Using multiple hosts protects your web service with redundancy, but the load balancer itself can still leave a single point of failure. You can further improve availability by setting up a floating IP between multiple load balancers. Find out more in our article on floating IPs on UpCloud.

Janne Ruostemaa

Editor-in-Chief

  1. This article saved my butt, thank you. Very clear and helpful.

  2. Nice article!

    Thanks!


  4. good document

  5. Thanks for your awesome documentation! What is better: an nginx load balancer plus the integrated Swarm load balancer, or using round-robin DNS?

  6. Janne Ruostemaa

    Hi Felix, thanks for the question! Setting up round-robin on a DNS has some drawbacks that might limit the choices of the DNS software. If you are intending to load balance between containers, Docker Swarm is easy to get started with.

  7. How do we set custom conditions on the response before marking a server as down in NGINX (not NGINX Plus)?

  8. Janne Ruostemaa

    Hi Netu, thanks for the question. I believe the custom conditions in health checks are reliant on a feature that is only available in the NGINX Plus edition. You can find more about the health checks at NGINX documentation https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/

  9. Simon Litchfield

    Yes, Janne is correct, the article is not accurate. NGINX does not include health checks unless you use the commercial Plus version.

  10. Janne Ruostemaa

    Passive health checks are available on the standard NGINX. However, the custom conditions are a feature of active health monitoring and exclusive to NGINX PLUS.

  11. i don’t have load balance config in my nginx ?

  12. Janne Ruostemaa

    Hi Agung, NGINX doesn’t include load balancer configurations by default, hence the tutorial instructs to create one according to the example.

  13. You are great!

  14. Hi Janne,
    Great article thanks. I just have a question. I am converting my current single Nginx configuration to a frond end proxy with multiple back end servers like you stated above.
    What I am not sure about is what to put on the back end servers and how to configure Nginx on those servers.
    The front end server has the content and the SSL certs configured on it but to get the benefit of the load balancing, what data and Nginx configs needs to sit on the back end servers i.e. server 10.1.0.101, , 10.1.0.102, 101.1.0.103 in your cluster?
    Thanks.

  15. Janne Ruostemaa

    Hi Waleed, thanks for the question. The beauty of Nginx is that the backend servers do not require any special configuration, just the standard Nginx web server. You can terminate the SSL at the load balancer and run the backend connections over your private network.
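    For reference, a minimal sketch of what each backend could run: just a plain nginx server block serving the site content over HTTP (the paths are assumptions, adjust to your setup):

    server {
       listen 80;
       # Site content stored locally on each backend
       root /var/www/html;
       index index.html;
    }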

  16. Serkan Coskun

    Thanks!!

  17. Looking at above diagram image, what if the Load Balancer machine is down?

  18. Janne Ruostemaa

    Hi, thanks for the question. The picture describes only the load balancer setup. If you want full redundancy, you should consider deploying a second identical load balancer and configuring both with a floating IP and automated fail-over. You can read more about floating IPs at https://upcloud.com/community/tutorials/floating-ip-addresses/

  19. Great Article! Thank you so much! Keep your awesome work.

  20. JOHN Maina Kamau

    Great article. Very well done.

  21. Excellent article which covers everything developers wants to know to use nginx as load balancer.Thanks Much!

  22. Thanks for the article. I found Nginx very helpful, but I wasn’t always this sure, I had my doubts.

  23. Hi, we have set up a load balancer with 2 servers. We are facing issues while streaming and playing back the same video. If the publish request goes to one of the servers, say S1, and the playback request goes to server S2, then we are not able to play back the video. And since we are using the load balancer, the request can land on any of the servers, whichever the LB decides.
    So is there any way we can have the publish and play requests going to the same server through the LB?

  24. Janne Ruostemaa

    Hi there, thanks for the question. I’d recommend trying a load balancing method that provides session persistence, e.g.

    upstream backend {
       ip_hash;
       server 10.1.0.101; 
       server 10.1.0.102;
    }
  25. Ganesh Arelly

    Hello Janne,

    Clear explanation, nice write-up.

    We are a start-up from Finland. Horizontal scaling is not an option for us due to some legacy software licensing costs. However, we can deploy two application servers on different ports on the same node.

    So, is it possible to load balance between two ports on the same node? Something like:

    upstream backend {
       server 10.1.0.101:7777;
       server 10.1.0.101:7778;
    }


  26. Janne Ruostemaa

    Hi Ganesh, thanks for the question. Indeed it is possible to set the backend as you described, however, you won’t get the same benefits of redundancy and scalability by deploying multiple instances of your application on the same server.

  27. Nwanze Franklin

    Nice article. very helpful

  28. Great article to get started with Nginx.

  29. Hello Janne,

    Nice explanation.

    I have a question: Is it possible to do load balancing in such a way that all requests with the same cookie value go to the same instance?

  30. Janne Ruostemaa

    Hi Kashish, thanks for the question. Nginx does support session persistence using cookies but it’s only available in Nginx Plus. Alternatively, you could use the IP hash method to ensure that requests from the same address get to the same server.

  31. Hi Janne,
    I am seeing unexpected behaviour with nginx when used as a UDP load balancer.

    My client and server are supposed to exchange UDP packets back and forth for long periods. My server listens on a specific port, and the client initiates the communication from a random port and continues to use it for all communication with the server.

    When I try to introduce NGINX in this topology to proxy the packets, I can see that after a few seconds NGINX changes the port number it uses to communicate with the backend server for the same client. Hence the server loses the context of the session, resulting in connection loss.

    This behaviour is consistent across both nginx and nginx plus.
    I have tried the proxy_timeout option as well, which doesn’t solve the problem.

    Is this a known issue with nginx?

  32. Janne Ruostemaa

    Hi Harish, thanks for the question. It sounds like the load balancing method you are using is swapping the backend server mid-stream; NGINX defaults to round-robin if no method is specified. I’d suggest using hashing, for example on the remote address: hash $remote_addr; This way all consecutive packets from one client should always reach the same backend server, preserving your connection.
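    For UDP, the proxying is configured in a stream block rather than http. A minimal sketch, with hypothetical backend addresses and port 5000:

    stream {
       upstream udp_backend {
          # Pin each client address to one backend server
          hash $remote_addr;
          server 10.1.0.101:5000;
          server 10.1.0.102:5000;
       }

       server {
          listen 5000 udp;
          proxy_pass udp_backend;
       }
    }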

  33. This article is gonna save our butts as a startup

  34. I’m a bit confused on this. I’m running on CentOS and I never had a default.conf file in my conf.d directory.

    When I create my loadbalancer.conf and try and restart nginx I get some errors around the following –
    nginx: [emerg] “http” directive is not allowed here in /etc/nginx/conf.d/loadbalancer.conf:1

    There are some other guides telling me to put what you have in loadbalancer.conf into my actual nginx.conf, but that also is not working. I’ve started fresh dozens of times and I’m not sure what I’m doing wrong here.

    I get my initial nginx welcome page but as soon as I add the loadbalancer.conf and reload it fails to start.

  35. Janne Ruostemaa

    Hi Mike, thanks for the question. After a quick test, it seems Nginx on CentOS has changed the default configuration slightly. You should still set the load balancer configuration to /etc/nginx/conf.d/load-balancer.conf but without the http brackets, just upstream and server from within.

    upstream backend {
       server 10.1.0.101;
       server 10.1.0.102;
       server 10.1.0.103;
    }

    server {
       listen 80;
       location / {
          proxy_pass http://backend;
       }
    }

    You should also keep the /etc/nginx/nginx.conf but remove the server section from that file while leaving the include /etc/nginx/conf.d/*.conf; line and everything above it.

  36. Ah makes sense, I’ll try this later today.

    So if my web application has to be hit on a certain port, let’s say 9090, can nginx listen on 80 and then in the server section I put x.x.x.x:9090 and it forwards to that? Can you discuss what the “location” is for?

  37. Janne Ruostemaa

    That’s right, each server definition in the upstream can set the port as well. The location then again sets the URL prefix allowing you to proxy different parts of your site to different backends. For example:

    server {
        location / {
            proxy_pass http://backend;
        }
        location /images/ {
            root /data;
        }
    }
  38. With this method, the server logs always show the load balancer IP not the connecting client IP.
    I’ve tried adding “proxy_bind $remote_addr transparent;” and “user root; ” but I’m getting timeouts when the option is enabled.

    I’m wondering if this feature has been pulled and now only available in the Nginx plus version, or have I missed something that is required to make it work?

  39. Janne Ruostemaa

    Hi Nigel, thanks for the question. It should be possible to configure the open-source edition in transparent mode as well, and from version 1.13.8 it no longer requires Nginx to be run as root. Another option that might work is using the http_realip_module.
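    With the realip approach, the load balancer passes the client address in a header and each backend is told to trust it. A minimal sketch, assuming the load balancer’s private address is 10.1.0.1 (hypothetical):

    # On the load balancer, inside the proxying location block:
    proxy_set_header X-Real-IP $remote_addr;

    # On each backend (requires nginx built with the realip module):
    set_real_ip_from 10.1.0.1;   # trust only the load balancer
    real_ip_header X-Real-IP;    # read the client IP from this header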

  40. So I just started learning about load balancing.
    If I have PHP on the backend, should I install PHP-FPM on the load balancer server, or is all processing done on the backend?

  41. Janne Ruostemaa

    Hi Kate, thanks for the question. In general, there’s no need to install PHP packages on the load balancer; all of the server-side processing can be distributed across the backend cloud servers.

  42. Thanks for your article.
    I need some help with this challenge:
    Set up 2 separate PHP-FPM servers and configure Nginx to balance the load between them.
    + Requests to WordPress admin must be sent to just one of them
    + PHP-FPM workers should run under the user you have created

    * I have created a pool named user1, but I don’t know what I should do, because I have to implement it on one server

  43. Janne Ruostemaa

    Hi Mohamad, thanks for the question. Configuring one of the backend servers for WordPress admin access is simple enough using reverse proxy. Set the location as your admin panel extension and proxy_pass to the server IP you wish to use.

    location /wp-admin {
       proxy_pass http://server_ip/wp-admin;
    }

    As for the PHP-FPM workers, you’ll likely want to use ip_hash load balancing to ensure session persistency, and then include all PHP backend servers in the upstream pool.

  44. Joel Alvarado

    Awesome article Janne.

    Could you please help me with something? I did all the steps to load balance two PHP sites on two different servers, both sites using nginx as the web server too, and it works really nicely. The challenges began when I decided to migrate those two servers from HTTP to HTTPS. On HTTP it works very easily and very well, but when those servers start working over SSL, it all becomes a mess. If I set the DNS domain to point to one server only, it works fine: HTTP redirects to HTTPS and IP requests redirect to DNS requests. I tried this on both servers one by one separately, and it works. But in HTTPS load balance mode, the best I get is load balancing the default nginx site on those servers: it does the balancing but shows the default nginx site, not my app site, and if I try to unlink the default site, it only shows 404 not found, no redirect to the app. I used certbot to set up SSL on both servers, and I tried to set up the load balancer with and without SSL certificates, with certbot too, but I can never make it work well. Question: do my servers need to be SSL or not? Could you please help me complete this task?

  45. Janne Ruostemaa

    Hi Joel, thanks for the question. It’s not absolutely necessary to enable HTTPS on your website but it can increase your visitors’ trust in your site. To do this, you need to configure SSL on the load balancer itself. This way, you only need to point your domain name to your load balancer, which simplifies the SSL certificate process. The nginx config on your load balancer would need to look something like the example below:

    upstream backend1 {
            server server_1_ip_address;
    }
    upstream backend2 {
            server server_2_ip_address;
    }
    server {
        server_name example1.com; # managed by Certbot
        location / {
            proxy_pass http://backend1;
        }
        listen [::]:443 ssl ipv6only=on; # managed by Certbot
        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/example1.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/example1.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }
    server {
        if ($host = example1.com) {
            return 301 https://$host$request_uri;
        } # managed by Certbot
        listen 80 ;
        listen [::]:80 ;
        server_name example1.com;
        return 404; # managed by Certbot
    }
    server {
        server_name example2.com; # managed by Certbot
        location / {
            proxy_pass http://backend2;
        }
        listen [::]:443 ssl; # managed by Certbot
        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/example2.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/example2.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }
    server {
        if ($host = example2.com) {
            return 301 https://$host$request_uri;
        } # managed by Certbot
        listen 80 ;
        listen [::]:80 ;
        server_name example2.com;
        return 404; # managed by Certbot
    }
  46. One of the best article you could ever find on load balancer with NGINX. Great article!

  47. You saved my day! Maybe my whole year! :D

    I’ve got this headache for the load of my server. Thank you in millions.

  48. Hello, this is interesting.

    With this approach, the website content across the 3 backend servers will be different, right? How do we handle that?

    Does it automatically back up the contents from the master backend server to the other backend servers?

    Thanks.

  49. Janne Ruostemaa

    Hi Adi, thanks for the question. You could have different parts of your website served by different backend servers by configuring more than one backend in the reverse proxy settings. For example:

    location / {
       proxy_pass http://backend;
    }
    location /media {
       proxy_pass http://backend-media;
    }
    location /admin {
       proxy_pass http://backend-admin;
    }

    Alternatively, if you want to split the load more equally, having 3 identical web servers working together with a shared database server avoids most reasons to have to synchronise the backend servers.

  50. Thanks you for this great article Janne

    I need a note on the following question. We run nginx as a reverse proxy, 3 upstream servers, the ip_hash method, and proxy_cache_key is “$scheme$request_method$host$request_uri$cookie_NAME”.
    Caching works fine. But when a user visits a website and comes back later to the same site with an active auth cookie, nginx delivers the cached site. But it is necessary to generate a new one, because some content has changed. Can nginx do that?

  51. Janne Ruostemaa

    Hi Kai, thanks for the question. Depending on your use case, you might not want to cache certain parts of the user area. However, nginx does provide an option to bypass the cache conditionally by using proxy_cache_bypass, which can work together with proxy_no_cache.
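    A minimal sketch of how those directives could hook onto your auth cookie (using the cookie name NAME from your proxy_cache_key):

    location / {
       proxy_pass http://backend;
       # Skip the cache and don't store the response when the auth cookie is set
       proxy_cache_bypass $cookie_NAME;
       proxy_no_cache $cookie_NAME;
    }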

  52. Hello Janne,
    thanks for the help; proxy_cache_bypass and proxy_no_cache work fine for me.
    Kind regards, Kai

  53. Hello,

    I also have the same concern as *A* above. What if the load balancing server goes down?

    From your response, do we configure the backend servers with the same load balancing configurations?

    Such that, if one server goes down, the rest still remain in connection.

  54. Janne Ruostemaa

    Hi Daniells, you’ve got the right idea, redundancy is a great way to improve reliability. You can create redundancy for the load balancer by cloning the server and configuring a floating IP in front of 2 or more identical load balancers. This can be further improved by using software like Keepalived for automatic failover between the load balancers.

  55. Hey, Great article
    Would it be possible to add another backend server on the fly without having to restart the load balancer (in a scaling-out use case)? Any offline method/software/framework to auto scale and dynamically load balance on a private network?
    Thanks

  56. Janne Ruostemaa

    Hi there, thanks for the question. You could preconfigure the backend with predictable IP addresses, for example, by using an SDN Private Network and deploying new backend servers from a template. The best results would require using the API along with the load monitoring of your choice.

  57. I have a server configuration like http://localhost:8080/abc/xyz.com , so where do we need to give this context path?

  58. Janne Ruostemaa

    Hi Raju, thanks for the question. You can create specific backends for different parts of your website by adding the path in location:

    http {
       upstream abc {
          server 10.1.0.101;
       }
       upstream xyz {
          server 10.1.0.102;
          server 10.1.0.103;
       }
    
       server {
          listen 80; 
    
          location /abc {
              proxy_pass http://abc;
          }
          location /abc/xyz.com {
              proxy_pass http://xyz;
          }
       }
    }
  59. Very nice article!

    Thanks!

  60. Hi Jaane

    Great article. I have some doubts.

    How much database storage does nginx require to store data? And if nginx is installed on Linux, can we add Windows servers in the conf file, or do we need to make changes to the conf for Windows?

  61. Janne Ruostemaa

    Hi Sravya, thanks for the question. Nginx as just a load balancer doesn’t cache DB data or care which OS the backend runs on. It simply redirects the connection according to the load balancing rules.

  62. Hi Janne, great article.
    Which service combinations can nginx run at the same time? I mean, is it possible to use it as a load balancer, a web server and a reverse proxy at the same time? Thnx

  63. Janne Ruostemaa

    Hi Oscar, thanks for the question. Nginx can easily serve as both a reverse proxy and a load balancer at the same time. However, if you are looking to take advantage of the benefits of these features, you should run the web server on a different node.

  64. This is a very useful article. Thanks a lot

  65. hi, thanks for a great tutorial, can you please assist with my setup? I have the following nginx.conf:

    # declare flask app
    upstream pyapi_app {
       least_conn;
       server pyapi1:5000;
       server pyapi2:5000;
    }

    # declare shiny app
    upstream shiny_app {
       server shinyapp:3838;
    }

    map $http_upgrade $connection_upgrade {
       default upgrade;
       '' close;
    }

    # shinyapp server
    server {
       listen 80 default_server;
       server_name shiny_app;

       client_max_body_size 50M;

       # normal requests go to shiny app
       location / {
          proxy_pass http://shiny_app;
          proxy_redirect off;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Host $server_name;

          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection $connection_upgrade;

          proxy_read_timeout 5d;
          proxy_buffering off;
       }
    }

    # python api server
    server {
       listen 81;
       server_name pyapi_app;

       # pyapi requests go to the flask app
       location / {
          proxy_pass http://pyapi_app;
          proxy_redirect off;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Host $server_name;
       }
    }

    what do I put in my shinyapp frontend for the URL connection string? Prior to scaling up the API backend, I just put the name of the docker container or the IP address of the pyapi server, `pyapi:5000/post`, but now that I have 2x `pyapi`s, how should I write it?

  66. No word! it helps me a lot! Thank you soooooooooooooooooooooo muchhhhh !!

  67. Janne Ruostemaa

    Hi Spencer, thanks for the question. If you mean the proxy_pass parameters, your example conf seems correct. They link to their corresponding upstreams by the section name. As for the server listeners, your server_name should reflect the domain you use to reach the app server, just as you would in a regular nginx config.

  68. What about load balancing with Litespeed server?

  69. Janne Ruostemaa

    Hi Ashok, thanks for the question. While we do not have a tutorial on load balancing with LiteSpeed, they have their own documentation that covers the functionality.

  70. Good stuff, thank you. I was having problems with the load-balancer.conf file because both it and nginx.conf included http {}. I had to remove it from my load-balancer.conf file before I could get nginx to restart successfully.

  71. Excellent Article! Thanks for such a detailed article. It helped save a lot of time and effort!

  72. Janne Ruostemaa

    Hi Sean, thanks for the comment. You are right in that the http block can’t be defined twice. The default nginx.conf includes all custom configuration, which allows leaving out the http section from the load balancer config. Alternatively, to have all configuration in one place, we chose to remove the default config instead, hence the http block is set in the load balancer file.

  73. Hi, thanks for the info. Will this configuration work if I don’t have a dedicated load balancing server? I’m trying to configure nginx on one of the three syslog servers I want to load balance between.

  74. Janne Ruostemaa

    Hi there, thanks for the question. It’s entirely possible to run the load balancer on one of your syslog servers as long as you take into account any network ports already in use.

  75. Thanks, I’m really new to this so you’ve been a big help. One more question: when you say “When you enter the load balancer’s public IP address in your web browser, you should pass to one of your back-end servers,” how do I know if it passed to a back-end server?

  76. Janne Ruostemaa

    When testing your load balancer setup, it’s useful to create differing HTML pages on each backend so you can observe how the load balancer is behaving.

  77. Devesh Sharma

    Hi! That was very informative article.
    Just want to know how I should set up the upstream, location and proxy_pass such that if I type docker3_ip/apache-docker1, the Apache web server from docker 1 opens. Similarly, if I type docker3_ip/apache-docker2, the Apache web server from docker 2 opens, and if I type docker3_ip/apache, it should send an equal number of alternating requests to each.

  78. Hello Janne!

    Here is the contents of my mydomainxyz.com.conf file: https://products.groupdocs.app/viewer/view?file=dc46056e-78c4-4ea3-ad54-211e4801336a/file.txt

    Currently my origin server is in the US. I cloned it using the “Clone” feature, adding 2 more regions, Singapore and Germany, so I have 3 servers. I have some questions as follows:
    – Do I have to upload load-balancer.conf to the origin server, or to all 3 servers?
    – How can customers near Singapore automatically get the nearest server?
    – With my mydomainxyz.com.conf file, how do I set up the complete load-balancer.conf file? I’m bewildered because I don’t understand anything about nginx.

    Looking forward to help!

  79. Janne Ruostemaa

    Hi there, thanks for the questions. Generally, you would have a single load balancer that’ll then pass the traffic to the backend servers. Sounds like you are looking for geo load balancing which on Nginx is only available in Nginx Plus. Alternatively, you might find Cloudflare load balancer easier to configure.

  80. Janne Ruostemaa

    Hi Devesh, thanks for the question. You can define multiple backends to split the traffic according to the URL path.
    Note that if all of the web services you mentioned are on the same host as the load balancer, you’ll need to have them listen to different ports. For example:

    http {
       upstream apache-docker1 {
          server localhost:8008;
       }
    
       upstream apache-docker2 {
          server localhost:8118;
       }
    
       upstream apache {
          server localhost:8080;
       }
    
       server {
          listen 80; 
    
          location /apache-docker1 {
              proxy_pass http://apache-docker1;
          }
          location /apache-docker2 {
              proxy_pass http://apache-docker2;
          }
          location /apache {
              proxy_pass http://apache;
          }
       }
    }
  81. Hi Janne,
    Trust this meets you well.
    You are simply the best... please I need your advice and clarification on setting up four primary web servers (JBoss) in the primary DMZ and another four secondary web servers (JBoss) in the secondary DMZ. The question is how do I set up the NGINX load balancer to automatically fail over to the secondary DMZ if there is a complete outage of the primary DMZ.

  82. Janne Ruostemaa

    Hi Abdullahi, thanks for the question. Nginx does passive health checks of the backend servers by default. If it gets an error response or the backend server takes too long to respond, Nginx will mark the server as failed. You can use the weight, max_fails and fail_timeout parameters to adjust this behaviour to meet your needs.
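    One way to express the primary/secondary split with the standard upstream module is to mark the secondary DMZ servers as backup, so they only receive traffic once the primary servers have failed. A sketch with hypothetical addresses:

    upstream backend {
       # Primary DMZ servers
       server 10.1.0.11 max_fails=3 fail_timeout=30s;
       server 10.1.0.12 max_fails=3 fail_timeout=30s;
       # Secondary DMZ, used only while the primary servers are down
       server 10.2.0.11 backup;
       server 10.2.0.12 backup;
    }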

  83. Vasudevan Rao

    Hello UpCloud
    It was indeed an excellent article on how to proceed with nginx, right from the installation to configuring nginx as a load balancer. I am using Mac OS X, where I installed nginx through Homebrew. Very useful for anyone trying to understand load balancers. You have done an excellent, fantastic and awesome job guiding others in this aspect. I really appreciate it from the bottom of my heart. Thanks indeed

  84. Josh Weinthraub

    Is it possible to use Cloudflare with NGINX load balancer?

  85. Janne Ruostemaa

    Hi Josh, thanks for the question. Yes, it’s possible to use CloudFlare in addition to an Nginx load balancer without any modification to the load balancer setup.

  86. Hi, thanks for this article.

    I have a question: if there’s only one bare-metal node where the web server is hosting multiple static web pages, how can they be load balanced on a single server? Like different clients requesting different pages and the load balancer serving their respective requests?

  87. Janne Ruostemaa

    Hi Munir, thanks for the question. Load balancing generally refers to a method of distributing traffic between multiple hosts of the same website. As such, load balancing per se would not be possible on a single server. Instead, what you are describing is a multi-site web host which can be done easily by just creating an Nginx config file for each individual page.

  88. Arian Moafizad

    Hello,

    First of all thanks for the article. It is very assertive.

    I configured my NGINX server as a load balancer with health_check, but when I try to reload NGINX, it says:
    “nginx: [emerg] unknown directive “health_check” in /etc/nginx/conf.d/test.conf:15
    nginx: configuration file /etc/nginx/nginx.conf test failed”

    Here is my configuration from /etc/nginx/conf.d/test.conf:
    upstream mylab {
       server 192.168.56.11;
       server 192.168.56.12;
       zone mylab 64k;
    }

    server {
       server_name mylab.local.net;
       listen 80;

       location / {
          proxy_pass http://mylab;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header HOST $host;
          health_check;
       }
    }

    server {
       server_name mylab.local.com;
       listen 80;

       location / {
          proxy_pass http://mylab;
          proxy_set_header X-Real_IP $remote_addr;
          proxy_set_header HOST $host;
          health_check;
       }
    }

    I used CentOS 7.9.
    I’d appreciate it if you could help me solve the issue.

  89. Janne Ruostemaa

    Hi Arian, thanks for the question. The active health check using the health_check directive is only available in the NGINX Plus version, which is probably why you got the error. However, the open-source NGINX does support automatic passive health checks, marking an upstream server unavailable for the fail_timeout period (10 seconds by default) once it reaches max_fails failed proxy passes.

  90. this question is exactly what I came to this article for! thanks! great article!

  91. Just put the load balancer in HA with DNS and two load balancers, or just use a router

  92. very useful and simple… thanks!!

  93. Amanullah Aman

    Hi,
    First of all, thank you for this nice tutorial about load balancing.

    I have a little confusion; if anyone could explain, it would be very helpful to me.

    upstream backend {
       least_conn;
       server 10.1.0.101;
       server 10.1.0.102;
       server 10.1.0.103;
    }

    In this example, there are 3 servers, meaning the same application is running on 3 different servers. Now my question is: on which server do I install and configure Nginx for load balancing? Is it on a completely separate server, or on one of the upstream backend servers in the example above? If we need to install Nginx on a separate server, what should its configuration be? The same configuration as our upstream backend servers, less powerful, or more powerful?

    Thanks.

  94. Janne Ruostemaa

    Hi Amanullah, thanks for the question. Ideally, you would install the load balancer, like in this tutorial, on one machine and all applications on separate backend servers. Commonly, the application servers need to be more powerful to handle the actual client requests but that is entirely dependent on the service you are hosting.
