Updated on 24.5.2023

How to install HAProxy load balancer on Ubuntu


Load balancing is a common solution for distributing web applications horizontally across multiple hosts while providing the users with a single point of access to the service. HAProxy is one of the most popular open-source load balancers, and it also offers high availability and proxy functionality.

HAProxy aims to optimise resource usage, maximise throughput, minimise response time, and avoid overloading any single resource. It is available for installation on many Linux distributions, including Ubuntu 16.04 used in this guide, as well as on Debian 8 and CentOS 7 systems.

HAproxy load balancing

HAProxy is particularly suited for very high-traffic websites and is therefore often used to improve web service reliability and performance for multi-server configurations. This guide lays out the steps for setting up HAProxy as a load balancer on its own Ubuntu 16.04 cloud host, which then directs the traffic to your web servers.

As a prerequisite for the best results, you should have a minimum of two web servers and a separate server for the load balancer. The web servers need to be running at least a basic web service such as Apache2 or Nginx so that you can test the load balancing between them.

Installing HAProxy 1.7

HAProxy is a fast-developing open-source application, so the version available in the default Ubuntu repositories might not be the latest release. To find out which version is offered through the official channels, enter the following command.

sudo apt show haproxy
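
The output lists the package details, and the Version field tells you which release the default repositories would install. The snippet below is only illustrative; expect a different version string on your own system.

Package: haproxy
Version: 1.6.3-1ubuntu0.3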

HAProxy always has three active stable releases: the two most recent versions in development plus a third, older version that still receives critical updates. You can always check the newest stable version listed on the HAProxy website and then decide which version you wish to go with.

While the latest stable version 1.7 of HAProxy is not yet available in the default package repositories, it can be found in a third-party repository. To install HAProxy from an outside repo, you will need to add the new repository with the following command.

sudo add-apt-repository ppa:vbernat/haproxy-1.7

Confirm adding the new PPA by pressing the Enter key.

Next, update your sources list.

sudo apt update

Then install HAProxy as you normally would.

sudo apt install -y haproxy

Afterwards, you can double-check the installed version number with the following command.

haproxy -v
HA-Proxy version 1.7.8-1ppa1~xenial 2017/07/09
Copyright 2000-2017 Willy Tarreau <[email protected]>

The installation is then complete. Continue below with the instructions on how to configure the load balancer to redirect requests to your web servers.

Configuring the load balancer

Setting up HAProxy for load balancing is quite a straightforward process. Essentially, all you need to do is tell HAProxy what kind of connections it should be listening for and where the connections should be relayed to.

This is done by editing the configuration file /etc/haproxy/haproxy.cfg with the required settings. You can read about the configuration options on the HAProxy documentation page if you wish to find out more.
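
Before making changes, it can be a good idea to take a backup copy of the default configuration so that you can easily revert later. The backup filename here is just a suggestion.

sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.original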

Load balancing on layer 4

Once installed, HAProxy already includes a template for configuring the load balancer. Open the configuration file, for example, using nano with the command underneath.

sudo nano /etc/haproxy/haproxy.cfg

Add the following sections to the end of the file. Replace each <server name> with whatever you want to call your servers on the statistics page and each <private IP> with the private IP of the server you wish to direct the web traffic to. You can check the private IPs in your UpCloud Control Panel on the Private network tab under the Network menu.

frontend http_front
   bind *:80
   stats uri /haproxy?stats
   default_backend http_back

backend http_back
   balance roundrobin
   server <server1 name> <private IP 1>:80 check
   server <server2 name> <private IP 2>:80 check

This defines a layer 4 load balancer with a frontend named http_front listening on port 80, which directs the traffic to the default backend named http_back. The additional stats uri /haproxy?stats enables the statistics page at that specified address.
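
For example, with two hypothetical web servers named web1 and web2 at private addresses 10.0.1.101 and 10.0.1.102 (placeholder values only), the filled-in backend section could look like this:

backend http_back
   balance roundrobin
   server web1 10.0.1.101:80 check
   server web2 10.0.1.102:80 check

The check option at the end of each server line tells HAProxy to health-check the server and only forward traffic to it while it responds.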

Different load balancing algorithms

Listing the servers in the backend section allows HAProxy to use them for load balancing according to the roundrobin algorithm whenever they are available.

The balancing algorithm decides which backend server each connection is transferred to. Some of the useful options include the following, with a configuration sketch after the list:

  • Roundrobin: Each server is used in turns according to its weights. This is the smoothest and fairest algorithm when the server’s processing time remains equally distributed. This algorithm is dynamic, which allows server weights to be adjusted on the fly.
  • Leastconn: The server with the lowest number of connections is chosen. Round-robin is performed between servers with the same load. Using this algorithm is recommended with long sessions, such as LDAP, SQL, TSE, etc, but it is not very well suited for short sessions such as HTTP.
  • First: The first server with available connection slots receives the connection. The servers are chosen from the lowest numeric identifier to the highest, which defaults to the server’s position on the farm. Once a server reaches its maxconn value, the next server is used.
  • Source: The source IP address is hashed and divided by the total weight of the running servers to designate which server will receive the request. This way the same client IP address will always reach the same server while the servers stay the same.
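
Switching between these algorithms only requires changing the balance directive in the backend section. For example, to pin clients to servers by hashing their IP with the source algorithm instead of roundrobin, the earlier backend example would become the following (same placeholders as before):

backend http_back
   balance source
   server <server1 name> <private IP 1>:80 check
   server <server2 name> <private IP 2>:80 check

The same applies to leastconn and first; only the balance line changes.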

Configuring load balancing for layer 7

Another possibility is to configure the load balancer to work on layer 7, which is useful when parts of your web application are located on different hosts. This can be accomplished by making the connection transfer conditional, for example, on the URL.

Open the HAProxy configuration file with a text editor.

sudo nano /etc/haproxy/haproxy.cfg

Then configure the frontend and backend segments according to the example below.

frontend http_front
   bind *:80
   stats uri /haproxy?stats
   acl url_blog path_beg /blog
   use_backend blog_back if url_blog
   default_backend http_back

backend http_back
   balance roundrobin
   server <server name> <private IP>:80 check
   server <server name> <private IP>:80 check

backend blog_back
   server <server name> <private IP>:80 check

The frontend declares an ACL rule named url_blog that applies to all connections with paths that begin with /blog. The use_backend directive defines that connections matching the url_blog condition should be served by the backend named blog_back, while all other requests are handled by the default backend.

On the backend side, the configuration sets up two server groups: http_back like before and a new one called blog_back that specifically serves connections to example.com/blog.
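
ACL rules are not limited to path matching. As a purely illustrative sketch, assuming a hypothetical hostname blog.example.com, the frontend could also route requests based on the Host header by placing these two lines alongside the existing rules:

   acl host_blog hdr(host) -i blog.example.com
   use_backend blog_back if host_blog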

After making the changes, save the file and restart HAProxy with the next command.

sudo systemctl restart haproxy

If you get any errors or warnings at startup, check the configuration for typos and then try restarting again.
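
You can also ask HAProxy to validate the configuration file without restarting the service, which helps catch typos before they cause downtime.

sudo haproxy -c -f /etc/haproxy/haproxy.cfg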

Testing the setup

With HAProxy configured and running, open your load balancer server’s public IP in a web browser and check that you get connected to your backend correctly. The stats uri parameter in the configuration enables the statistics page at the defined address.

http://<load balancer public IP>/haproxy?stats

When you load the statistics page and all of your servers are listed in green, your configuration was successful!

HAProxy Ubuntu 1.7.8 statistics page

The statistics page contains some helpful information to keep track of your web hosts including up and down times and session counts. If a server is listed in red, check that the server is powered on and that you can ping it from the load balancer machine.

In case your load balancer does not reply, check that HTTP connections are not getting blocked by a firewall. Also, confirm that HAProxy is running with the command below.

sudo systemctl status haproxy
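
If you are using UFW as the firewall on the load balancer (an assumption; adjust accordingly for any other firewall), make sure incoming HTTP traffic is allowed:

sudo ufw allow 80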

Password protecting the statistics page

Having the statistics page simply defined in the frontend, however, leaves it publicly open for anyone to view, which might not be such a good idea. Instead, you can set it up on its own port number by adding the example below to the end of your haproxy.cfg file. Replace the username and password with something secure.

listen stats
   bind *:8181
   stats enable
   stats uri /
   stats realm Haproxy\ Statistics
   stats auth username:password

After adding the new listen section, remove the old stats uri reference from the frontend section. When done, save the file and restart HAProxy again.

sudo systemctl restart haproxy
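
If a firewall is enabled on the load balancer, remember to also open the new statistics port. With UFW, for example:

sudo ufw allow 8181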

Then open the load balancer again with the new port number, and log in with the username and password you set in the configuration file.

http://<load balancer public IP>:8181
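
You can also verify the password protection from the command line, for example with curl, using the same placeholder address and the credentials you set in the configuration:

curl -u username:password http://<load balancer public IP>:8181/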

Check that your servers are still reporting all green and then open just the load balancer IP without any port numbers on your web browser.

http://<load balancer public IP>/

If your backend servers have at least slightly different landing pages, you will notice that each time you reload the page you get a reply from a different host. You can try out the different balancing algorithms described in the configuration section or take a look at the full documentation.
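
If you prefer testing from the command line, a short loop of requests makes the alternation easy to spot. This sketch assumes the landing pages of your backends differ enough to tell apart:

for i in 1 2 3 4; do
   curl -s http://<load balancer public IP>/ | head -n 3
done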

Conclusions

Congratulations on successfully configuring HAProxy! With a basic load balancer setup, you can considerably increase your web application’s performance and availability. This guide is, however, just an introduction to load balancing with HAProxy, which is capable of much more than could be covered in a first-time setup guide. We recommend experimenting with different configurations with the help of the extensive documentation available for HAProxy, and then starting to plan the load balancing for your production environment.

While using multiple hosts protects your web service with redundancy, the load balancer itself can still be a single point of failure. You can further improve the high availability by setting up a floating IP between multiple load balancers. You can find out more in our article on floating IPs on UpCloud.

Janne Ruostemaa

Editor-in-Chief

  1. Thank you for the tutorial – very, very useful and, as you say, pretty easy to set up.

  2. Thanks for sharing the docs, it helped me learn a lot. However, I can see username:password specified in the stats block but not defined later on. Which file does it use for authorising users and passwords?

  3. Janne Ruostemaa

    Hi there, thanks for the comment. The username:password in the stats block directly defines the username and password as such. You should replace each with something secure, separated by a colon : as in the example.

  4. If I have an eCommerce site and I want to use SSL, what’s the config for that?

  5. Janne Ruostemaa

    Hi Betro, thanks for the question. Inevitably, the actual load balancer configuration will depend on how your eCommerce site is set up but the layer 4 example config using balance source for IP hashing could be recommended for session persistency. The frontend naturally needs to be configured to listen to port 443 to enable HTTPS and your SSL certificate needs to be reconfigured for the load balancing server.

  6. Betro Hakala

    Basically, I would just like to see this haproxy-guide, but with SSL enabled.

    My setup is WordPress using nginx.

  7. Janne Ruostemaa

    Thanks for the suggestion, we could certainly include examples of SSL implementation when next updating the guide. Alternatively, if you are already familiar with Nginx, you might want to have a look at our guide to using Nginx for load balancing which includes examples of enabling HTTPS.

  8. Hi, thank you for sharing such useful information. I would very much appreciate it if you could share how to load balance with HA for an Exchange server with DAG.

    Thank you in advance
    Best Regards,

  9. Janne Ruostemaa

    Hi Vahideh, thanks for the comment. For HAProxy, load balancing Exchange Servers should work much the same as any other backend. It’s likely easiest to configure using transport layer 4, but it should also be possible to set it up at the application level. The Exchange Server documentation might be useful to help you get started.

  10. Nice post. I learn something totally new and challenging on blogs I stumbleupon everyday.
    It’s always exciting to read articles from other writers
    and use a little something from their websites.
