Server-oriented operating systems call for efficiency and reliability, and while minimalistic options exist, most of the server variants of the popular Linux distributions still carry around features not everyone is going to want.
Enter CoreOS — an open source specialised operating system that utilises Linux containers to provide benefits similar to virtual machines, but with a focus on applications. It does away with almost everything you would expect from a regular OS, leaving only the bare minimum, which can be built upon using container-wrapped software.
CoreOS is the first in its category of minimalistic, container-optimised operating systems. It is designed for security, consistency, and reliability to allow dynamic scaling and management of computing capacity. CoreOS is based on Gentoo Linux and shares some of its roots with Chrome OS and Chromium OS. The first alpha release of the OS was introduced in July 2013. A great deal of development has gone into the project since the earliest versions, focusing especially on networking, distributed storage, container runtime, authentication, and security. The last of these receives particular emphasis as part of CoreOS's mission to improve the security and reliability of the Internet.
The main feature and the central idea of CoreOS is containers, which run applications and services in their own isolated systems. Containers differ from traditional virtual machines in that they share the kernel of the host system, with no need for a hypervisor. Removing the hypervisor layer helps achieve near-zero performance overhead, which in turn allows greater operational density with fewer hosts and lower running costs.
Unlike most Linux distributions, CoreOS does not include a package manager but instead provides similar functionality and ease of use with docker. Other primary building blocks of CoreOS are the distributed configuration service etcd, cluster management system fleet, and the user-provided system setup file cloud-config.
Containers are a lightweight and portable operating-system-level virtualization method that enables software to be built to run in its own enclosed environment. Docker containers can encapsulate virtually any application, and run consistently on any type of platform from a laptop to a cloud server.
The code within a container runs in isolation from other containers. Docker uses cgroups for process isolation and network namespaces for connection separation. Isolating applications in their own runtime environments provides security and simplicity, as each container implements only what its services need. Keeping applications isolated also reduces the possibility of conflicts with other services running on the same host, and even allows containers based on different operating systems to share a kernel and hardware.
Booting up a new docker container can be extremely fast; some start within milliseconds. Such launch times allow for greater flexibility in load management across an entire cluster. Launching a container on a node over SSH is as simple as running docker run <image name>; for example, the line below runs the docker test application.
docker run hello-world
Docker checks whether the image can be found locally or needs to be downloaded from the public image registry. Once the download is complete, the container runs through its program and exits. Containers are generally designed to exit after completing their task, but continuous services, such as a web host, can also run in containers. Docker itself is available on other platforms as well; here is a simple getting started guide for running WordPress with docker if you wish to learn more.
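To give a sense of where images come from, a minimal Dockerfile is all it takes to define one. The example below is an illustrative sketch, not an image from this article: it builds on the tiny busybox base and prints a message on start, matching the run-to-completion pattern of the hello-world test above.

```dockerfile
# Build on the tiny busybox base image
FROM busybox

# Command the container runs on start: print a message, then exit
CMD ["echo", "Hello from a container"]
```

Building it with docker build and running the resulting image would start a container that prints the message and exits.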
While the container isolation is an important part of docker and CoreOS, some services require a way to communicate with one another. For this purpose, CoreOS uses etcd, a distributed key-value store that provides a reliable data exchange across a system or a cluster of hosts. Applications in containers are allowed to read, write and listen for data in etcd, creating a way to distribute configuration details or feature flags between services.
The data written to etcd is available to all containers within the host and is also automatically replicated to other nodes in the cluster. The etcd interface is a simple HTTP/JSON API. Etcd can be used manually with the command-line tool etcdctl, which comes preinstalled on CoreOS.
etcdctl set /message "Hello world"
etcdctl get /message
Another option is to utilise any HTTP-capable application, such as curl.
curl -L http://127.0.0.1:2379/v2/keys/message -XPUT -d value="Hello world"
curl -L http://127.0.0.1:2379/v2/keys/message
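The v2 keys API responds with JSON describing the stored node. As a short sketch of consuming such a response (the document shape follows the etcd v2 keys API; the index values here are illustrative), any language with a JSON parser will do:

```python
import json

# An etcd v2 response to GET /v2/keys/message, as a string
# (shape per the v2 keys API; index values are illustrative)
response = '''
{
  "action": "get",
  "node": {
    "key": "/message",
    "value": "Hello world",
    "modifiedIndex": 4,
    "createdIndex": 4
  }
}
'''

# Parse the JSON and pull out the stored value
data = json.loads(response)
print(data["node"]["value"])  # -> Hello world
```

An application container would make the same GET request to the etcd endpoint and read the value out of the "node" object.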
Etcd is also used for automatic service discovery. For example, an application container can announce itself to a proxy container, allowing the proxy to automatically know which machines should receive traffic. Built-in service discovery allows services to scale seamlessly as new machines are added to the cluster when needed.
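The discovery pattern boils down to services writing keys with a time-to-live and consumers listing keys under a common prefix. The toy in-process store below is a hypothetical sketch of that pattern, not etcd's actual API: entries expire unless re-announced, so a crashed service drops out of the lookup automatically.

```python
import time


class Registry:
    """Toy TTL key-value store mimicking the discovery pattern etcd provides."""

    def __init__(self):
        # key -> (value, expiry timestamp)
        self._data = {}

    def announce(self, key, value, ttl):
        """Register a service endpoint; it expires after ttl seconds."""
        self._data[key] = (value, time.time() + ttl)

    def lookup(self, prefix):
        """Return all live entries under a prefix, skipping expired ones."""
        now = time.time()
        return {k: v for k, (v, exp) in self._data.items()
                if k.startswith(prefix) and exp > now}


registry = Registry()
# A web container announces its address under a shared prefix
registry.announce("/services/web/host1", "10.0.0.2:80", ttl=60)
# A proxy queries the prefix to learn which machines receive traffic
print(registry.lookup("/services/web"))  # -> {'/services/web/host1': '10.0.0.2:80'}
```

With real etcd, the announce step is a PUT with a ttl parameter and the lookup is a GET on the directory key; the proxy can additionally watch the prefix to react to changes immediately.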
CoreOS uses systemd and fleet to manage the containers on all hosts in a cluster. Fleet is the CoreOS-recommended way of running distributed docker containers. It is a tool that presents your entire cluster as a single init system and works by receiving systemd unit files, which are then scheduled onto hosts in the cluster based on declared conflicts and other preferences listed in the unit file.
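As a sketch of how such preferences are declared, fleet reads an [X-Fleet] section in the unit file. The option names below follow the fleet documentation; the service names and metadata values are illustrative:

```ini
[Unit]
Description=Hello World

[Service]
ExecStart=/usr/bin/docker run --rm --name hello busybox /bin/echo "Hello"

[X-Fleet]
# Never schedule this unit on a machine already running another hello instance
Conflicts=hello@*.service
# Only consider machines whose metadata matches
MachineMetadata=region=europe
```

Fleet strips the [X-Fleet] section before handing the unit to systemd on the chosen host, so the rest of the file is an ordinary systemd service.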
Fleet can be controlled through a utility called fleetctl, which also comes preinstalled on CoreOS. It allows loading services, querying unit status, accessing log files remotely, and more. The services are defined with simple systemd unit files, which contain the relevant information for running a specific application. For example, the unit file shown below could be saved to the CoreOS user home directory under the name hello.service, creating a new service by that name.
[Unit]
Description=My Service
After=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill hello
ExecStartPre=-/usr/bin/docker rm hello
ExecStartPre=/usr/bin/docker pull busybox
ExecStart=/usr/bin/docker run --name hello busybox /bin/sh -c "trap 'exit 0' INT TERM; while true; do echo Hello World; sleep 1; done"
ExecStop=/usr/bin/docker stop hello
The service can be loaded and run with the commands below.
fleetctl load hello.service
fleetctl start hello.service
Fleet selects a suitable node somewhere in the cluster to run the service. The service can be removed with the following.
fleetctl destroy hello.service
The basic fleetctl commands are very similar to those of systemctl. This is because fleet, in essence, provides a control interface for the cluster, combining all of the individual systemd init nodes into a single, easy-to-manage system. Fleet can also be used from any member node in the cluster, allowing flexibility and redundancy.
Managing your cluster manually with fleet is all fun and games, but repeating the same steps every time you add nodes at larger scale would get tiresome. To solve this, CoreOS utilises an automated configuration system that follows a user-supplied setup file called cloud-config. A program called coreos-cloudinit reads the configuration file at each boot to perform initial configuration, and it can also be invoked during runtime. This allows a new node to be automatically discovered and connected to a cluster, run predefined services, and set important variables, all from a single file of instructions.
The cloud-config system was inspired by the cloud-init project and its cloud-config file. Cloud-init, however, includes tools that CoreOS does not use; only the subset of the original project deemed relevant is implemented in CoreOS. A cloud-config file can contain instructions for a number of components.
The CoreOS cloud-config file allows portable pre-configuration to easily add new nodes to a cluster. For a more detailed explanation and concrete examples of the cloud-config file, check out the CoreOS documentation for cloud-config.
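As a hedged sketch of what such a file can look like (field names follow the CoreOS cloud-config documentation; the hostname, discovery token, and SSH key are placeholders):

```yaml
#cloud-config

hostname: core-01

coreos:
  etcd2:
    # A discovery URL lets a new node find and join the cluster automatically
    discovery: https://discovery.etcd.io/<token>
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  units:
    # Start the cluster services on boot
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start

ssh_authorized_keys:
  - ssh-rsa AAAA... user@example.com
```

The $private_ipv4 variable is substituted by coreos-cloudinit on providers that support it, so the same file can be reused across nodes without editing addresses by hand.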
CoreOS provides stable and reliable updates to all systems connected to its update service. Automatic updates are important for keeping a system secure, and CoreOS applies them without compromising the performance of any running services.
The OS is divided into two root partitions, often referred to as root A and root B. Hosts initially boot into root A, while root B is used for downloading and installing new updates. With two separate partitions, system updates are performed atomically and can be rolled back. The currently running root is never modified, preventing the server from ever entering an unstable or partially updated state. When updates to the dormant root are finished, a simple reboot is enough to switch to the freshly updated system.
Even with the two-part root system, CoreOS is extremely lean, keeping memory usage minimal. Keeping the strain on the system nominal frees most of the resources for use by the containers. Of the few components that make up CoreOS, the majority are open source and can be tailored to specific applications as necessary.
As a containerised platform, CoreOS is well suited for high-availability services. Systemd at the service level and fleet at the node level provide redundancy capable of ensuring that service containers stay online regardless of the machine, availability zone, or region. Fleet also supports co-location with the same properties, allowing complex architectures.
The public docker registry offers a wide variety of applications and services already built into readily downloadable containers. Common software, from full operating systems like Ubuntu to popular server applications such as WordPress and MariaDB, can already be found, and if you wish to develop your own containers, you can store them privately until you are ready to release your creations.
Although CoreOS is a relatively new distribution, it is already supported by many cloud providers. Container-optimised clusters are not only popular among developers but can also support serious production environments at scale.
CoreOS has taken the industry by storm, and other OS vendors are following in its footsteps. Canonical has an Ubuntu Core variant called Snappy, Red Hat is working on its own Project Atomic, and more are coming. Regardless of the underlying operating system, container-optimised servers are definitely an option to consider for both development and production environments.
Now that you have got to know CoreOS and its components, you might want to try it out in practice. Check out our guide for Getting Started with CoreOS Cluster over at the UpCloud Support site.