
How to deploy CoreOS high availability web server

Managing your CoreOS cluster with fleet simplifies node configuration to a few easy tasks. Services running on the cluster are described in systemd unit files combined with fleet-specific properties that tell fleet where and how to deploy each process. In this article, you will find the instructions and example files required to set up a load balancer and two backend hosts. Each part is deployed on its own node to create a CoreOS HA web server.

CoreOS HA Web Cluster diagram

If you have not yet deployed a cluster or would like some help booting up a new one, check out our earlier guide for getting started with CoreOS clusters. Before proceeding, check that you have a working cluster of at least three nodes that are all able to communicate through etcd correctly.

Web server template

The flexibility of fleet-compliant unit files enables you to configure a service once and launch it as many times as needed. This is done through templates: generalised instructions that employ systemd specifiers to create unique instances.

To allow dynamic instance deployment, create a unit file with a name that follows the <name>@.<suffix> format.

vi nginx@.service

Then copy in the configuration from the below example.

[Unit]
Description=Nginx web server %i
Requires=docker.service
After=docker.service

[Service]
# Load the node IP addresses into the environment
EnvironmentFile=/etc/environment
# Remove leftover containers from previous runs, ignore failures
ExecStartPre=-/usr/bin/docker kill nginx%i
ExecStartPre=-/usr/bin/docker rm nginx%i
# Start the web host container
ExecStart=/usr/bin/docker run --name nginx%i -h nginx%i \
-p ${COREOS_PRIVATE_IPV4}:80:80 nginx
# Update the default index.html to identify different nodes
ExecStartPost=/usr/bin/docker exec nginx%i sh -c 'sed -i "s|nginx!|$(hostname)|g" \
/usr/share/nginx/html/index.html'
# Stop the container
ExecStop=/usr/bin/docker stop nginx%i

[X-Fleet]
Conflicts=nginx@*.service

The comment lines in the example unit file explain some parts of the configuration. More generally, the file consists of three sections: [Unit], [Service], and [X-Fleet].

  • [Unit] sets the service description to allow easier identification of the process in the systemd log. It can also define dependencies; in this case, the container requires docker.service to be available before starting.
  • [Service] lists the execution instructions: what needs to be done before starting, while running, and when stopping the service. Each execution command begins with the application or process it runs, followed by any options or subcommands. Prefixing a command with the dash ‘-’ symbol marks it as allowed to fail without consequences. The example above also loads the /etc/environment file to allow the use of the $COREOS_PRIVATE_IPV4 variable.
  • [X-Fleet] defines the fleet-specific options that determine where on the cluster the service should be deployed. The Conflicts option above indicates that the service should not be started on a node that already runs an instance of the same service. Other options can deploy the service to a specific MachineID, to the same machine as another service with MachineOf, to certain hosts selected by MachineMetadata, or to all nodes by setting Global=true.

You can find additional information in the documentation for unit files and scheduling.
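As an aside, the %i specifier simply carries the instance identifier taken from the unit's file name. The short sketch below is an illustration only (systemd performs this mapping internally) showing how an instance name splits into its template name and identifier:

```shell
# Illustration only: mimic how an instance name maps back to its
# template and the value systemd substitutes for %i.
instance="nginx@1.service"
template="${instance%%@*}@.service"   # everything before '@', plus '@.service'
i="${instance#*@}"                    # strip up to and including '@'
i="${i%.service}"                     # strip the '.service' suffix
echo "template=$template instance_id=$i"
```

Running the sketch prints the template name and the identifier that %i would expand to.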

Once you are done creating the web host template, save the file and exit the editor. With the unit file ready you can then deploy two instances of the service using the command below.

fleetctl start nginx@{1,2}.service

The array {1,2} runs the command once for each value, creating an instance per value. You should see output along the lines of the example below.

Unit nginx@1.service inactive
Unit nginx@2.service inactive
Unit nginx@1.service launched on 2e20446e.../
Unit nginx@2.service launched on a6eb210a.../
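As a side note, the {1,2} expansion is performed by your shell before fleetctl even runs (brace expansion is a bash feature, not part of fleet); you can observe the expansion on its own:

```shell
# Brace expansion in bash turns one word into several arguments
# before the command is executed.
echo nginx@{1,2}.service
```

This is why a single command line is enough to launch both instances.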

You can check that the units started successfully with the following command.

fleetctl list-units
UNIT             MACHINE       ACTIVE  SUB
nginx@1.service  2e20446e.../  active  running
nginx@2.service  a6eb210a.../  active  running

When both units report active and running, the hosts are set up correctly and should reply to requests. Test the hosts with curl from any of the nodes, using the private IP addresses shown in the fleetctl output.

curl <private IP>

You should get the nginx default web page as a reply, with the minor change that the title reports which unit is replying. If both nodes are working, you can continue to set up the service discovery.
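The title change comes from the sed command in the unit's ExecStartPost line. The sketch below reproduces it locally on a stand-in file; the real file inside the official nginx image lives at /usr/share/nginx/html/index.html, and inside the container $(hostname) resolves to the name set with the -h flag:

```shell
# Stand-in for the container's default page title
printf '<title>Welcome to nginx!</title>\n' > /tmp/index.html
# The same substitution the ExecStartPost line runs inside the container:
# "nginx!" in the title is replaced with the (container) hostname.
sed -i "s|nginx!|$(hostname)|g" /tmp/index.html
cat /tmp/index.html
```

On a node, the replacement value would be nginx1 or nginx2, which is what you see in the curl reply.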

Backend discovery service

The containers on CoreOS are intentionally isolated from one another, but will at times require a way to communicate. For this purpose, CoreOS uses a distributed key-value store called etcd. You already have the web hosts up and running, but for a load balancer to be able to find them, they need to report required information to etcd. Set up a service for this by creating a new template using the same naming format as before.

vi nginx-discovery@.service

Copy the example template from below into the new file and save it.

[Unit]
Description=Track nginx%i availability

[Service]
# Load the node IP addresses into the environment
EnvironmentFile=/etc/environment
# Check the host availability every 10 seconds
# Update etcd if there are changes
ExecStart=/bin/sh -c 'while true;\
do if curl -sI ${COREOS_PRIVATE_IPV4}:80 | grep -q "200 OK";\
   then case "$(etcdctl get /services/website/nginx@%i)" in\
      "${COREOS_PRIVATE_IPV4}:80" ) ;;\
      *) etcdctl set /services/website/nginx@%i "${COREOS_PRIVATE_IPV4}:80";;\
   esac;\
   else etcdctl rm /services/website/nginx@%i;\
   fi;\
   sleep 10;\
done'

[X-Fleet]
MachineOf=nginx@%i.service

The [X-Fleet] section asks fleet to schedule the discovery service on the same node as the web host service with the matching instance number. The service then runs a simple loop that checks whether the web host is responding and sets the etcd key /services/website/nginx@%i to the private IP address and port number. If the web service stops replying for some reason, the key is removed to reflect the service status.
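If you want to see the loop's logic in isolation, the sketch below runs a single iteration with etcdctl and the health check stubbed out, so it runs on any machine. The IP address and the stub behaviour are assumptions for illustration, not values from a real cluster:

```shell
# Dry run of one iteration of the discovery loop (illustration only).
COREOS_PRIVATE_IPV4=10.0.0.11
STORE=""
etcdctl() {                        # stub: 'get' prints the store, 'set'/'rm' update it
  case "$1" in
    get) echo "$STORE" ;;
    set) STORE="$3" ;;
    rm)  STORE="" ;;
  esac
}
healthy() { return 0; }            # stub: pretend the web host replied 200 OK

if healthy; then
  case "$(etcdctl get /services/website/nginx@1)" in
    "${COREOS_PRIVATE_IPV4}:80") ;;                              # value up to date
    *) etcdctl set /services/website/nginx@1 "${COREOS_PRIVATE_IPV4}:80" ;;
  esac
else
  etcdctl rm /services/website/nginx@1
fi
echo "stored: $STORE"
```

Since the store starts empty, the first iteration takes the set branch and records the address; subsequent healthy iterations would match the first case and do nothing.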

Start two instances of the discovery service with the command below.

fleetctl start nginx-discovery@{1,2}.service

Then check the units in the cluster with the same fleet command as before.

fleetctl list-units
UNIT                       MACHINE       ACTIVE  SUB
nginx-discovery@1.service  2e20446e.../  active  running
nginx-discovery@2.service  a6eb210a.../  active  running
nginx@1.service            2e20446e.../  active  running
nginx@2.service            a6eb210a.../  active  running

You should see four services running on two nodes: one web host and one discovery unit on each node. If all services are running properly, the etcd key values should also have been added. Check the existing keys with the following command.

etcdctl ls --recursive /services
/services/website/nginx@1
/services/website/nginx@2

You can also test that the keys actually store the correct information. For example, the key nginx@1 should contain the private IP address of the node the unit is running on, followed by the port number :80.

etcdctl get /services/website/nginx@1

If the keys are getting stored correctly, you can continue with setting up a service to monitor the keys.
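The comparison you are making by hand here is the same one the discovery loop performs. It can be sketched with a stubbed etcdctl; the IP address below is an example value, not one from your cluster:

```shell
# Illustration with a stubbed etcdctl: compare the stored key value
# against the value the node itself would report.
COREOS_PRIVATE_IPV4=10.0.0.11
etcdctl() { echo "10.0.0.11:80"; }   # stub standing in for the real client

stored=$(etcdctl get /services/website/nginx@1)
if [ "$stored" = "${COREOS_PRIVATE_IPV4}:80" ]; then
  echo "key matches: $stored"
else
  echo "key mismatch: $stored"
fi
```

On a real node you would replace the stub with the actual etcdctl binary and source /etc/environment for the IP.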

Etcd key monitoring

On top of writing and reading key values, etcd can also be set to watch certain keys or directories and perform actions when changes are detected. The etcdctl exec-watch command allows the unit to monitor the keys stored by the discovery services and update the load balancer configuration file when needed. Start by creating a new unit file.

vi nginx-watch.service

And again copy the example file from below into your text editor.

[Unit]
Description=Exec-watch for /services/website
Requires=docker.service
After=docker.service

[Service]
# Remove any old config files from the node, ignore failures
ExecStartPre=-/bin/sh -c 'rm /home/core/nginx.conf'
ExecStartPre=-/bin/sh -c 'rm /home/core/nginx.conf.sh'
ExecStartPre=-/bin/sh -c 'rm /home/core/nginx.conf.tmpl'
# Create a config template
ExecStartPre=/bin/sh -c 'printf "\
server {\n\
   listen 80;\n\
   location / {\n\
      proxy_pass http://backend;\n\
   }\n\
}\n\
upstream backend {\n" > /home/core/nginx.conf.tmpl'
# Create a host update script and set it executable
ExecStartPre=/bin/sh -c 'printf "#!/bin/sh\n\
HOSTS=\$(etcdctl ls /services/website);\n\
for HOST in \$HOSTS;\n\
   do echo \' server\' \$(etcdctl get \$HOST)\';\';\n\
done;\n\
echo \' }\'\n" > /home/core/nginx.conf.sh'
ExecStartPre=/bin/sh -c 'chmod +x /home/core/nginx.conf.sh'
# Create the initial load balancer configuration
ExecStartPre=/bin/sh -c 'cat /home/core/nginx.conf.tmpl > /home/core/nginx.conf &&\
/home/core/nginx.conf.sh >> /home/core/nginx.conf'
# Start the etcd exec-watch to watch for changes in the key directory
ExecStart=/bin/sh -c 'etcdctl exec-watch --recursive /services/website/ -- sh -c "\
cat /home/core/nginx.conf.tmpl > /home/core/nginx.conf &&\
/home/core/nginx.conf.sh >> /home/core/nginx.conf && docker kill -s HUP nginx-lb"'
# Stop and clear the configuration files
ExecStop=-/bin/sh -c 'rm -f /home/core/nginx.conf'
ExecStopPost=-/bin/sh -c 'rm -f /home/core/nginx.conf.sh'
ExecStopPost=-/bin/sh -c 'rm -f /home/core/nginx.conf.tmpl'

[X-Fleet]
Conflicts=nginx@*.service

The file looks terribly long and complicated, but in essence, it does three things:

  • Creates a static load balancer configuration template
  • Writes a script that appends the key values read from etcd to the configuration
  • Sets up an exec-watch service that compiles the final load balancer configuration file, once before starting and again whenever changes to the keys are detected

The unit file also dictates that the service should not run on a node that is already hosting a web server unit.
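You can dry-run those three steps locally, writing to /tmp and stubbing etcdctl with fixed example values, to inspect the configuration the unit would generate. The addresses and /tmp paths below are assumptions for illustration:

```shell
# Step 1: write the static template, exactly as the unit's printf does
printf "\
server {\n\
   listen 80;\n\
   location / {\n\
      proxy_pass http://backend;\n\
   }\n\
}\n\
upstream backend {\n" > /tmp/nginx.conf.tmpl

# Step 2: write the host update script (readable multi-line form)
cat > /tmp/nginx.conf.sh <<'EOF'
#!/bin/sh
HOSTS=$(etcdctl ls /services/website)
for HOST in $HOSTS; do
  echo " server $(etcdctl get $HOST);"
done
echo "}"
EOF
chmod +x /tmp/nginx.conf.sh

# Stub etcdctl with fixed example values so no cluster is needed
etcdctl() {
  case "$1" in
    ls)  printf '/services/website/nginx@1\n/services/website/nginx@2\n' ;;
    get) case "$2" in
           */nginx@1) echo "10.0.0.11:80" ;;
           */nginx@2) echo "10.0.0.12:80" ;;
         esac ;;
  esac
}

# Step 3: compile the final configuration; source the script so it sees the stub
cat /tmp/nginx.conf.tmpl > /tmp/nginx.conf
. /tmp/nginx.conf.sh >> /tmp/nginx.conf
cat /tmp/nginx.conf
```

The result is a complete nginx config with one server line per etcd key, which is what the load balancer container will mount.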

Start the monitoring service with the next command.

fleetctl start nginx-watch.service

Check that the files were created successfully. If you are not currently connected to the node that nginx-watch.service was started on, you can SSH into a specific node with fleet based on the unit it is hosting.

fleetctl ssh nginx-watch.service

Then list the files on that node.

ls -l /home/core
-rw-r--r--. 1 root root 163 Aug 10 14:21 nginx.conf
-rwxr-xr-x. 1 root root 131 Aug 10 14:21 nginx.conf.sh
-rw-r--r--. 1 root root 109 Aug 10 14:21 nginx.conf.tmpl

If you see the three files and all of them are larger than zero bytes, the service was able to create the configuration. With that done, only the last piece of the puzzle remains: continue below with setting up the load balancer itself.

Load balancer

To finally tie all the previous services together, you will need to create a service for the load balancer. Open a new file in the editor with the command below.

vi nginx-loadbalancer.service

Then copy the text from underneath into the file.

[Unit]
Description=Nginx load balancer
Requires=docker.service nginx-watch.service
After=docker.service nginx-watch.service

[Service]
# Remove leftover containers from previous runs, ignore failures
ExecStartPre=-/usr/bin/docker kill nginx-lb
ExecStartPre=-/usr/bin/docker rm nginx-lb
# Start the load balancer
ExecStart=/usr/bin/docker run --name nginx-lb -p 80:80 \
-v /home/core/nginx.conf:/etc/nginx/conf.d/default.conf:ro nginx
# Stop the container
ExecStop=/usr/bin/docker stop nginx-lb

[X-Fleet]
MachineOf=nginx-watch.service
The unit depends on nginx-watch.service to create a usable configuration file. This means the load balancer must be deployed on the same node as the exec-watch service, and only after that service has been started. Unlike the web host containers, which were bound to the private IP address, the load balancer needs to be accessible from the public internet as well. It is therefore not bound to any specific IP, only to the usual HTTP port.

Go ahead and start the load balancer service with the following command.

fleetctl start nginx-loadbalancer.service

Now, if everything started correctly, the load balancer should read the configuration created by the monitoring service and direct requests to the web hosts reported to etcd. The easiest way to test the whole setup is to open your load balancer node’s public IP address in a web browser.

Nginx custom default page

The small update to the default nginx page shows which node is serving the request. If you reload the page repeatedly, you should see each request answered by a different node. When you see the change, you know that the load balancer and the backends are working.


Congratulations, you have successfully set up a high availability web service on a CoreOS cluster. The setup uses only the official nginx Docker image together with the etcd features already available on CoreOS. This is just one of many ways to achieve the same result; you might, for example, prefer to use an Apache image instead. The containers are largely interchangeable with only minor configuration differences, mostly around naming the services and containers consistently.

With the HA web service up and running, you might be wondering how to proceed. We recommend playing around with the configuration and learning how the containers and services work together. You could also be interested in deploying a website other than the default nginx landing page. To get started, connect to the node hosting the first web host unit with the following command.

fleetctl ssh nginx@1.service

Then open a terminal into one of the web host containers using the command below.

docker exec -it nginx1 bash

This lets you explore the contents of the container including the web host files.
