
Updated on 17.5.2022

How to configure load balancing using Nginx


Advantages of load balancing

Load balancing is an excellent way to scale out your application and increase its performance and redundancy. Nginx, a popular web server software, can be configured as a simple yet powerful load balancer to improve your servers' resource availability and efficiency.

How does Nginx work? Nginx acts as a single entry point to a distributed web application running on multiple separate servers.

This guide describes the advantages of load balancing and shows how to set up load balancing with nginx for your cloud servers.

As a prerequisite, you’ll need at least two hosts with web server software installed and configured to see the benefit of the load balancer. If you already have one web host set up, duplicate it by creating a custom image and deploying it onto a new server at your UpCloud control panel.


Installing nginx

The first thing to do is to set up a new host that will serve as your load balancer. Deploy a new instance at your UpCloud Control Panel if you haven’t already. Currently, nginx packages are available for the latest versions of CentOS, Debian and Ubuntu, so pick whichever of these you prefer.

After you have set up the server the way you like, install the latest stable nginx using one of the following methods.

# Debian and Ubuntu
sudo apt-get update
# Then install the Nginx Open Source edition
sudo apt-get install nginx
# CentOS
# Install the extra packages repository
sudo yum install epel-release
# Update the repositories and install Nginx
sudo yum update
sudo yum install nginx
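
If you want to confirm which version was installed, nginx can print it out:

# Print the installed nginx version
sudo nginx -v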

Once installed, change into the nginx main configuration directory.

cd /etc/nginx/

Now depending on your OS, the web server configuration files will be in one of two places.

Ubuntu and Debian follow the convention of storing virtual host files in /etc/nginx/sites-available/, which are enabled through symbolic links in /etc/nginx/sites-enabled/. You can use the command below to enable any new virtual host files.

sudo ln -s /etc/nginx/sites-available/vhost /etc/nginx/sites-enabled/vhost

CentOS users can find their host configuration files under /etc/nginx/conf.d/, where any virtual host file of the .conf type gets loaded.

Check that you can find at least the default configuration and then restart nginx.

sudo systemctl restart nginx

Test that the server replies to HTTP requests by opening the load balancer server’s public IP address in your web browser. If you see the default nginx welcome page, the installation was successful.

Nginx default welcome page.

If you have trouble loading the page, check that a firewall is not blocking your connection. For example, on CentOS 7 the default firewall rules do not allow HTTP traffic; enable it with the commands below.

sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --reload
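
Similarly, if your Ubuntu or Debian server has the ufw firewall enabled, you can allow HTTP traffic through with the command below.

# Allow HTTP traffic through ufw (Ubuntu and Debian)
sudo ufw allow http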

Then try reloading your browser.

Configuring nginx as a load balancer

With nginx installed and tested, you can start configuring it for load balancing. In essence, all you need to do is set up nginx with instructions for which type of connections to listen to and where to redirect them. Create a new configuration file using whichever text editor you prefer, for example with nano:

sudo nano /etc/nginx/conf.d/load-balancer.conf

In load-balancer.conf you’ll need to define the following two segments, upstream and server; see the example below. Note that files under /etc/nginx/conf.d/ are included within the http context of the main nginx.conf, so do not wrap these directives in another http block.

# Define which servers to include in the load balancing scheme.
# It's best to use the servers' private IPs for better performance and security.
# You can find the private IPs at your UpCloud control panel Network section.
upstream backend {
   server 10.1.0.101;
   server 10.1.0.102;
   server 10.1.0.103;
}

# This server accepts all traffic to port 80 and passes it to the upstream.
# Notice that the upstream name and the proxy_pass need to match.
server {
   listen 80;

   location / {
      proxy_pass http://backend;
   }
}

Then save the file and exit the editor.

Next, disable the default server configuration that you tested was working after the installation. Again, depending on your OS, this part differs slightly.

On Debian and Ubuntu systems you’ll need to remove the default symbolic link from the sites-enabled folder.

sudo rm /etc/nginx/sites-enabled/default

CentOS hosts don’t use the same linking. Instead, simply rename the default.conf in the conf.d/ directory to something that doesn’t end with .conf, for example:

sudo mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.disabled

Then use the following to restart nginx.

sudo systemctl restart nginx

Check that nginx starts successfully. If the restart fails, take a look at the /etc/nginx/conf.d/load-balancer.conf you just created to make sure there are no typos or missing semicolons.
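
You can also test the configuration for errors before restarting; nginx will point out the file and line number of any problem it finds.

# Test the nginx configuration for syntax errors
sudo nginx -t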

When you enter the load balancer’s public IP address in your web browser, the request should be passed to one of your back-end servers.
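
One simple way to verify the rotation is to make each back end serve something that identifies it, then request it repeatedly through the load balancer. As a sketch, assuming each back end has its web root at /var/www/html (adjust the path to match your web server setup):

# On each back-end server, publish a file identifying the host
echo "$(hostname)" | sudo tee /var/www/html/host.txt

# From your own machine, request it through the load balancer a few times
# (replace load_balancer_public_ip with your load balancer's address).
# With the default round-robin method, the replies should cycle through the back ends.
for i in 1 2 3 4 5 6; do curl -s http://load_balancer_public_ip/host.txt; done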

Load balancing methods

Load balancing with nginx uses the round-robin algorithm by default if no other method is defined, as in the first example above. With the round-robin scheme, each server is selected in turn according to the order you set them in the load-balancer.conf file. This balances the number of requests equally for short operations.

Least connections based load balancing is another straightforward method. As the name suggests, this method directs requests to the server with the fewest active connections at the time. It works more fairly than round-robin with applications where requests might sometimes take longer to complete.

To enable the least connections balancing method, add the least_conn parameter to your upstream section as shown in the example below.

upstream backend {
   least_conn;
   server 10.1.0.101; 
   server 10.1.0.102;
   server 10.1.0.103;
}

Round-robin and least connections balancing schemes are fair and have their uses. However, they cannot provide session persistence. If your web application requires that users are subsequently directed to the same back-end server as during their previous connection, use the IP hashing method instead. IP hashing uses the visitor’s IP address as a key to determine which host should service the request. This directs each visitor to the same server every time, provided that the server is available and the visitor’s IP address hasn’t changed.

To use this method, add the ip_hash parameter to your upstream segment as in the example underneath.

upstream backend {
   ip_hash;
   server 10.1.0.101; 
   server 10.1.0.102;
   server 10.1.0.103;
}
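
If the client IP alone is not a reliable key, for example when many users share an address behind NAT, newer nginx versions also support a generic hash method that can key on any variable. A sketch, assuming your application sets a session cookie named sessionid (a hypothetical name, match it to whatever your application actually sets):

upstream backend {
   # Distribute requests by the value of the sessionid cookie
   hash $cookie_sessionid consistent;
   server 10.1.0.101;
   server 10.1.0.102;
   server 10.1.0.103;
}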

In a server setup where the available resources between different hosts are not equal, it might be desirable to favour some servers over others. Defining server weights allows you to further fine-tune load balancing with nginx. The server with the highest weight in the load balancer is selected the most often.

upstream backend {
   server 10.1.0.101 weight=4; 
   server 10.1.0.102 weight=2;
   server 10.1.0.103;
}

For example, in the configuration shown above, the first server is selected twice as often as the second, which again gets twice the requests of the third: with weights of 4, 2 and 1, four out of every seven requests go to the first server, two to the second and one to the third.

Load balancing with HTTPS enabled

Enabling HTTPS for your site is a great way to protect your visitors and their data. If you haven’t yet implemented encryption on your web hosts, we highly recommend taking a look at our guide on how to install Let’s Encrypt on nginx.

Using encryption with a load balancer is easier than you might think. All you need to do is add another server section to your load balancer configuration file that listens for HTTPS traffic on port 443 with SSL, and then set up a proxy_pass to your upstream segment just like with HTTP in the previous example.

Open your configuration file again for editing.

sudo nano /etc/nginx/conf.d/load-balancer.conf

Then add the following server segment to the end of the file.

server {
   listen 443 ssl;
   server_name domain_name;
   ssl_certificate /etc/letsencrypt/live/domain_name/cert.pem;
   ssl_certificate_key /etc/letsencrypt/live/domain_name/privkey.pem;
   ssl_protocols TLSv1.2 TLSv1.3;

   location / {
      proxy_pass http://backend;
   }
}

Then save the file, exit the editor and restart nginx again.

sudo systemctl restart nginx

Setting up encryption at your load balancer while using private network connections to your back-end has some great advantages.

  • As only your UpCloud servers have access to your private network, you can terminate the SSL at the load balancer and only pass forward plain HTTP connections.
  • It also greatly simplifies your certificate management. You can obtain and renew the certificates from a single host.
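
Note that when SSL is terminated at the load balancer, the back-end servers see all connections as coming from the load balancer’s private IP. If your back ends need the original client address or protocol, for example for logging, you can forward them in request headers. A minimal sketch of the location block, assuming your back-end web servers are configured to read these standard headers:

location / {
   proxy_pass http://backend;
   # Forward the original host, client address and protocol to the back end
   proxy_set_header Host $host;
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_set_header X-Forwarded-Proto $scheme;
}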

With HTTPS enabled, you also have the option to enforce encryption on all connections to your load balancer. Simply update the server segment listening on port 80 with a server name and a redirect to your HTTPS port, and then remove or comment out the location portion as it’s no longer needed. See the example below.

server {
   listen 80;
   server_name domain_name;
   return 301 https://$server_name$request_uri;

   #location / {
   #   proxy_pass http://backend;
   #}
}

Save the file again after you have made the changes. Then restart nginx.

sudo systemctl restart nginx

Now all connections to your load balancer will be served over an encrypted HTTPS connection, and requests to the unencrypted HTTP port will be redirected to use HTTPS as well. This provides a seamless transition into encryption: nothing is required from your visitors.

Health checks

In order to know which servers are available, nginx’s reverse proxy implementation includes passive server health checks. If a server fails to respond to a request or replies with an error, nginx notes that the server has failed and tries to avoid forwarding connections to it for a time.

The number of consecutive unsuccessful connection attempts within a certain time period is defined in the load balancer configuration file by setting the max_fails parameter on the server lines. By default, when no max_fails is specified, this value is set to 1. Optionally, setting max_fails to 0 disables health checks for that server.

If max_fails is set to a value greater than 1, the subsequent failures must happen within a specific time frame for them to count. This time frame is specified by the fail_timeout parameter, which also defines how long the server should be considered failed. By default, fail_timeout is set to 10 seconds.

After a server is marked failed and the time set by fail_timeout has passed, nginx begins to gracefully probe the server with client requests. If the probes return successfully, the server is again marked live and included in the load balancing as normal.

upstream backend {
   server 10.1.0.101 weight=5;
   server 10.1.0.102 max_fails=3 fail_timeout=30s;
   server 10.1.0.103;
}
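
The server directive also accepts a backup parameter, which can be useful together with the health checks. A server marked as backup only receives traffic when the primary servers are unavailable; a sketch with a hypothetical fourth host:

upstream backend {
   server 10.1.0.101;
   server 10.1.0.102;
   server 10.1.0.103;
   # Only used when the servers above are unavailable
   server 10.1.0.104 backup;
}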

Health checks allow you to adapt your server back-end to current demand by powering hosts up or down as required. When you start up additional servers during high traffic, they automatically become available to your load balancer, which can easily increase your application’s performance.

Conclusions on the advantages of load balancing

If you wish to improve your web application’s performance and availability, a load balancer is definitely something to consider. Nginx is powerful yet relatively simple to set up as a load balancer for your web servers. Together with an easy encryption solution, such as the Let’s Encrypt client, it makes for a great front end to your web farm. Check out the documentation for upstream over at nginx.org to learn more.

While using multiple hosts protects your web service with redundancy, the load balancer itself can still be a single point of failure. You can further improve availability by setting up a floating IP between multiple load balancers. Find out more in our article on floating IPs on UpCloud.

Janne Ruostemaa

  1. This article saved my butt, thank you. Very clear and helpful.

  2. Nice article!

    Thanks!

  3. good document

  4. Thanks for your awesome documentation! What is better: having an nginx load balancer plus the integrated Swarm load balancer, or using round-robin DNS?

  5. I don’t have a load balance config in my nginx?

  6. You are great!

  7. Hi Janne,
    Great article, thanks. I just have a question. I am converting my current single Nginx configuration to a front-end proxy with multiple back-end servers like you stated above.
    What I am not sure about is what to put on the back-end servers and how to configure Nginx on those servers.
    The front-end server has the content and the SSL certs configured on it, but to get the benefit of the load balancing, what data and Nginx configs need to sit on the back-end servers, i.e. servers 10.1.0.101, 10.1.0.102 and 10.1.0.103 in your cluster?
    Thanks.

  8. Serkan Coskun

    Thanks!!

  9. Looking at the above diagram image, what if the Load Balancer machine is down?

    1. Just put the Load Balancer in HA with DNS, two load balancers, or just use a router.

  10. Great Article! Thank you so much! Keep up your awesome work.

  11. JOHN Maina Kamau

    Great article. Very well done.

  12. Excellent article which covers everything developers want to know to use nginx as a load balancer. Thanks much!

  13. Thanks for the article. I found Nginx very helpful, but I wasn’t always this sure, I had my doubts.

  14. Hi, we have set up a load balancer with 2 servers. We are facing issues while streaming and playing back the same video. If the streaming request goes to one of the servers, say S1, and the playback request goes to server S2, we are not able to play back the video. And since we are using the load balancer, the request can land on either server, whichever the LB decides.
    So is there a way we can have the publish and play requests going to the same server through the LB?

  15. Ganesh Arelly

    Hello Janne,

    Clear explanation, nice write-up.

    We are a start-up from Finland. Horizontal scaling is not an option for us due to some legacy software licensing costs. However, we can deploy two application servers on different ports on the same node.

    So, is it possible to load balance on the same node between two ports? Something like:
    upstream backend {
    server 10.1.0.101:7777;
    server 10.1.0.101:7778;
    }

  16. Nwanze Franklin

    Nice article. Very helpful.

  17. Great article to get started with Nginx.

  18. Hello Janne,

    Nice explanation.

    I have a question: is it possible to do load balancing in such a way that all requests with the same cookie value go to the same instance?

  19. Hi Janne,
    I am seeing unexpected behaviour with nginx when used as a UDP load balancer.

    My client and server are supposed to exchange UDP packets back and forth between them for a long period. My server listens on a specific port, and the client initiates the communication from a random port and continues to use it for all communication with the server.

    When I try to introduce NGINX in this topology to proxy the packets, I can see that after a few seconds NGINX changes the port number it uses to communicate with the backend server for the same client. Hence the server loses the context of the session, resulting in connection loss.

    This behaviour is consistent across both nginx and nginx plus.
    I have tried the proxy_timeout option as well, which doesn’t solve the problem.

    Is this a known issue with nginx?

  20. This article is gonna save our butts as a startup

  21. I’m a bit confused on this. I’m running on CentOS and I never had a default.conf file in my conf.d directory.

    When I create my loadbalancer.conf and try to restart nginx, I get some errors around the following:
    nginx: [emerg] “http” directive is not allowed here in /etc/nginx/conf.d/loadbalancer.conf:1

    There are some other guides telling me to put what you have in loadbalancer.conf into my actual nginx.conf, but that also is not working. I’ve started fresh dozens of times and I’m not sure what I’m doing wrong here.

    I get my initial nginx welcome page, but as soon as I add the loadbalancer.conf and reload, it fails to start.

  22. With this method, the server logs always show the load balancer IP, not the connecting client IP.
    I’ve tried adding “proxy_bind $remote_addr transparent;” and “user root;” but I’m getting timeouts when the option is enabled.

    I’m wondering if this feature has been pulled and is now only available in the Nginx Plus version, or have I missed something that is required to make it work?

  23. So I’ve just started learning about load balancing.
    If I have PHP on the backend, should I install PHP-FPM on the load balancer server, or is all processing done on the backend?

  24. Thanks for your article.
    I need some help with this challenge:
    Set up 2 separate PHP-FPM servers and configure Nginx to balance the load between them.
    + Requests to WordPress admin must be sent to just one of them
    + PHP-FPM workers should run under the user you have created

    * I have created a pool named user1, but I don’t know what I should do, because I have to implement it on one server.

  25. Joel Alvarado

    Awesome article Janne.

    Could you please help me with something? I did all the steps to load balance two PHP sites on two different servers; the two sites use nginx as the web service too, and it works really nicely. The challenges began when I decided to migrate those two servers from HTTP to HTTPS. On HTTP it works very easily and very well, but when those servers start working on SSL, it all becomes just a mess. If I set the DNS domain and point to one server only, it works fine: HTTP redirects to HTTPS and IP requests redirect to DNS requests. I tried this on both servers one by one, separately, and it works. But in HTTPS load balance mode, the best I get is load balancing the default nginx site on those servers; it does the balancing but shows the default nginx site, not my app site. If I try to unlink the default site, it only shows 404 not found, no redirect to the app. I used certbot to set up SSL on both servers, and I tried to set up the load balancer with and without SSL certificates, with certbot too, but I can never make it work well. Question: do my servers need to be SSL or not? Could you please help me to complete this task?

  26. One of the best articles you could ever find on load balancing with NGINX. Great article!

  27. You saved my day! Maybe my whole year! :D

    I’ve had this headache over the load on my server. Thank you in millions.

  28. Thank you for this great article Janne.

    I need a note on the following question. We run nginx as a reverse proxy with 3 upstream servers, the ip_hash method, and proxy_cache_key set to “$scheme$request_method$host$request_uri$cookie_NAME”.
    Caching works fine. But in the case that a user visits one website and comes back later to the same site with an active auth cookie, nginx delivers the cached site. It’s necessary to generate a new one, because some content has changed. Can nginx do that?

  29. Hello Janne,
    thanks for the help, proxy_cache_bypass and proxy_no_cache work fine for me.
    Kind regards, Kai

  30. Hey, great article.
    Would it be possible to add another backend server on the fly without having to restart the load balancer (in a scaling-out use case)? Any offline method/software/framework to auto scale and dynamically load balance on a private network?
    Thanks

  31. I have a server configuration like http://localhost:8080/abc/xyz.com; where do we need to give this context path?

  32. Very nice article!

    Thanks!

  33. Hi Janne

    Great article. I have some doubts.

    How much database capacity does nginx require to store data? And if we install nginx on Linux, can we add Windows servers in the conf file, or do we need to make changes in the conf for Windows?

  34. Hi Janne, great article.
    Which service combinations are allowed in nginx at the same time? I mean, is it possible to use it as a load balancer, a web server and a reverse proxy at the same time? Thanks

  35. This is a very useful article. Thanks a lot

  36. Hi, thanks for a great tutorial. Can you please assist with my setup? I have the following nginx.conf:

    `
    # declare flask app
    upstream pyapi_app {
        least_conn;
        server pyapi1:5000;
        server pyapi2:5000;
    }

    # declare shiny app
    upstream shiny_app {
        server shinyapp:3838;
    }

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    # shinyapp server
    server {
        listen 80 default_server;
        server_name shiny_app;

        client_max_body_size 50M;
        # normal requests go to shiny app
        location / {
            proxy_pass http://shiny_app;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;

            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;

            proxy_read_timeout 5d;
            proxy_buffering off;
        }
    }

    # python api server
    server {
        listen 81;
        server_name pyapi_app;

        # pyapi requests go to the flask app
        location / {
            proxy_pass http://pyapi_app;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
    `

    What do I put in my shinyapp frontend in regards to the URL connection string? Prior to scaling up the API backend, I just put the name of the docker container or the IP address of the pyapi server, `pyapi:5000/post`, but now that I have two `pyapi`s, how should I write it?

  37. No words! It helps me a lot! Thank you soooooooooooooooooooooo muchhhhh !!

  38. What about load balancing with a Litespeed server?

  39. Good stuff, thank you. I was having problems with the load-balancer.conf file because both it and nginx.conf included http {}. I had to remove that from my load-balancer.conf file before I could get nginx to restart successfully.

  40. Excellent article! Thanks for such a detailed article. It helped save a lot of time and effort!

  41. Hi, thanks for the info. Will this configuration work if I don’t have a dedicated load balancing server? I’m trying to configure nginx on one of the three syslog servers I want to load balance between.

  42. Thanks, I’m really new to this, so you’ve been a big help. One more question: when you say “When you enter the load balancer’s public IP address in your web browser, the request should be passed to one of your back-end servers,” how do I know if it passed to a back-end server?

  43. Devesh Sharma

    Hi! That was a very informative article.
    I just want to know how I should set up the upstream, location and proxy_pass such that if I type docker3_ip/apache-docker1, the Apache web server from docker 1 opens. Similarly, if I type docker3_ip/apache-docker2, the Apache web server from docker 2 opens, and if I type docker3_ip/apache it should send an equal number of alternating requests to each.

  44. Hello Janne!

    Here are the contents of my mydomainxyz.com.conf file: https://products.groupdocs.app/viewer/view?file=dc46056e-78c4-4ea3-ad54-211e4801336a/file.txt

    Currently my origin server is in the US. I cloned it using the “Clone” feature, adding 2 more regions, Singapore and Germany, so I have 3 servers. I have some questions as follows:
    – Do I have to upload load-balancer.conf to the origin server, or to all 3 servers?
    – How can customers near the Singapore area automatically get the nearest server?
    – With my mydomainxyz.com.conf file, how do I set up the complete load-balancer.conf file? I’m bewildered because I don’t understand anything about nginx.

    Looking forward to your help!

  45. Hi Janne,
    Trust this meets you well.
    You are simply the best. Please, I need your advice and clarification on setting up four primary web servers (JBoss) on a primary DMZ and another four secondary web servers (JBoss) on a secondary DMZ. The question is how do I set up the NGINX load balancer to automatically fail over to the secondary DMZ if there is a complete outage on the primary DMZ?

  46. Vasudevan Rao

    Hello UpCloud
    It was indeed an excellent article on how to proceed with nginx, right from the installation through to configuring nginx as a load balancer. I am using Mac OS X with nginx installed through Homebrew. Very useful for anyone learning about load balancers. You have done an excellent, fantastic and awesome job guiding others in this aspect. I really appreciate it from the bottom of my heart. Thanks indeed

  47. Josh Weinthraub

    Is it possible to use Cloudflare with an NGINX load balancer?

  48. Hi, thanks for this article.

    I have a question: if there’s only one bare-metal node where the web server is hosting multiple static web pages, how can they be load balanced on a single server? Like different clients requesting different pages and the load balancer serving their respective requests?

  49. Arian Moafizad

    Hello,

    First of all, thanks for the article. It is very assertive.

    I configured my NGINX server as a load balancer with health_check, but when I want to reload NGINX, it says:
    “nginx: [emerg] unknown directive “health_check” in /etc/nginx/conf.d/test.conf:15
    nginx: configuration file /etc/nginx/nginx.conf test failed”

    Here is my configuration from /etc/nginx/conf.d/test.conf:
    upstream mylab {
        server 192.168.56.11;
        server 192.168.56.12;
        zone mylab 64k;
    }

    server {
        server_name mylab.local.net;
        listen 80;

        location / {
            proxy_pass http://mylab;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header HOST $host;
            health_check;
        }
    }

    server {
        server_name mylab.local.com;
        listen 80;

        location / {
            proxy_pass http://mylab;
            proxy_set_header X-Real_IP $remote_addr;
            proxy_set_header HOST $host;
            health_check;
        }
    }

    I used CentOS 7.9.
    I’d appreciate it if you could help me solve the issue.

  50. This question is exactly what I came to this article for! Thanks! Great article!

  51. Very useful and simple… thanks!!
