
How to install a Kubernetes cluster on CentOS 8

There are many guides out there describing how to install Kubernetes on CentOS 8, but some include unnecessary steps and others miss required ones. This guide is based on our notes from real-world deployments and has worked well for us.

Prerequisites for both Master and Worker nodes

In this guide, we will be using minimal resources with just two cloud servers for simplicity. After the initial setup, you can add more workers when necessary.

Let’s get started!

1. Deploy two CentOS 8 cloud servers. One for the master and the other for the worker node. Check this tutorial to learn more about deploying cloud servers.

Kubernetes has minimum requirements for the servers: both master and worker nodes need at least 2 GB of RAM and 2 CPUs. The $20/mo plan covers these requirements with double the minimum memory. Note that the minimum requirements are not just guidelines: kubeadm will refuse to install Kubernetes on a server with less than the minimum resources.

2. Log into both Master and Worker nodes over SSH using the root account and password you received by email after deployment.

Make note of the public IP and private IP addresses of your servers at the UpCloud control panel. You can also use the ip addr command to find these out later.

3. Make sure the servers are up to date before installing anything new.

dnf -y upgrade

4. Disable SELinux enforcement.

setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
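You can verify the change with getenforce, which should now report Permissive (and Disabled after the next reboot):

getenforce
Permissive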

5. Enable transparent masquerading and facilitate Virtual Extensible LAN (VxLAN) traffic for communication between Kubernetes pods across the cluster.

modprobe br_netfilter
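Note that modprobe only loads the module until the next reboot. To have br_netfilter loaded automatically at boot, you can also add it to the systemd module load list:

cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF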

You will also need to enable IP masquerade at the firewall.

firewall-cmd --add-masquerade --permanent
firewall-cmd --reload

6. Set bridged packets to traverse iptables rules.

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Then load the new rules.

sysctl --system
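You can confirm that the new settings are active:

sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1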

7. Disable swap. The kubelet does not support swap by default and will refuse to start on a node with swap enabled.

swapoff -a
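Note that swapoff -a only disables swap until the next reboot. To keep it disabled permanently, also comment out any swap entries in /etc/fstab, for example with the one-liner below. On UpCloud servers this is mostly a safeguard, as Cloud Servers are deployed without a swap partition.

sed -i '/\sswap\s/ s/^/#/' /etc/fstab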

With these steps done on both Master and worker nodes, you can proceed to install Docker.

Installing Docker on Master and Worker nodes

Next, we’ll need to install Docker.

1. Add the repository for the docker installation package.

dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo

2. Install containerd.io, which is not yet provided by the CentOS 8 package manager, before installing Docker.

dnf install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm

3. Then install Docker from the repositories.

dnf install docker-ce --nobest -y

4. Start the docker service.

systemctl start docker

5. Make it also start automatically on server restart.

systemctl enable docker

Once installed, you should check that everything is working correctly.

6. See the docker version.

docker version

7. List the downloaded docker images. The list is likely still empty for now.

docker images
REPOSITORY   TAG   IMAGE ID   CREATED   SIZE
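You can also run the hello-world image as a quick test that Docker is able to pull and run containers:

docker run hello-world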

Now that Docker is ready to go, continue below to install Kubernetes itself.

Installing Kubernetes on Master and Worker nodes

With all the necessary parts installed, we can get Kubernetes installed as well.

1. Add the Kubernetes repository to your package manager by creating the following file.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

2. Then refresh the repository info and update installed packages.

dnf upgrade -y

3. Install all the necessary components for Kubernetes. The --disableexcludes flag temporarily overrides the exclude line in the repo file, which otherwise protects these packages from unintended upgrades.

dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
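If you want to check which versions are available before installing, you can list them using the same flag:

dnf list --showduplicates kubeadm --disableexcludes=kubernetes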

4. Start the Kubernetes services and enable them to run at startup. Note that kubelet will keep restarting every few seconds until kubeadm init or kubeadm join has created its configuration; this is expected at this stage.

systemctl enable kubelet
systemctl start kubelet

Once running on both nodes, begin configuring Kubernetes on the Master by following the instructions in the next section.

Configuring Kubernetes on the Master node only

Once Kubernetes has been installed, it needs to be configured to form a cluster.

1. Pull the container images used by the Kubernetes control plane.

kubeadm config images pull

2. Open the ports used by Kubernetes: 6443 (API server), 2379-2380 (etcd), 10250 (kubelet), 10251 (scheduler) and 10252 (controller manager).

firewall-cmd --zone=public --permanent --add-port={6443,2379,2380,10250,10251,10252}/tcp

3. Allow access from the Worker node, replacing worker-IP-address with the address of your own Worker.

firewall-cmd --zone=public --permanent --add-rich-rule 'rule family=ipv4 source address=worker-IP-address/32 accept'

4. Allow access to the host’s localhost from the docker container.

firewall-cmd --zone=public --permanent --add-rich-rule 'rule family=ipv4 source address=172.17.0.0/16 accept'

5. Reload the firewall to apply the changes.

firewall-cmd --reload
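You can verify that the ports and rich rules are in place:

firewall-cmd --list-all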

6. Initialise the cluster. For pod networking, we'll be using the Calico CNI (container network interface) plugin: https://docs.projectcalico.org/getting-started/kubernetes/quickstart#overview

Calico's default pod network is 192.168.0.0/16, so pass it to kubeadm as the pod network CIDR:

kubeadm init --pod-network-cidr 192.168.0.0/16

You should see something like the example below. Make note of the kubeadm join command and its tokens, as they are needed to join Worker nodes to the cluster.

Note that the join token below is just an example.

kubeadm join 94.237.41.193:6443 --token 4xrp9o.v345aic7zc1bj8ba \
--discovery-token-ca-cert-hash sha256:b2e459930f030787654489ba7ccbc701c29b3b60e0aa4998706fe0052de8794c
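If you misplace the join command, you can print a fresh one on the Master at any time:

kubeadm token create --print-join-command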

Next, set up the kubeconfig so kubectl can talk to the new cluster.

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Then install the Calico pod network.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
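You can follow the Calico and other system pods starting up with the command below; the node becomes Ready once they are all Running.

kubectl get pods -n kube-system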

7. Allow pods to be scheduled on the Master node. This is only for demonstration purposes and is not recommended for production use.

kubectl taint nodes --all node-role.kubernetes.io/master-

8. Check that the Master node is up and running.

kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   91s   v1.18.0

The node will likely report NotReady at first while the pod network starts up. Wait a moment and repeat the command until the status changes to Ready.

When the Master node is up and running, continue with the next section to join the Worker node to the cluster.

Configuring Kubernetes on the Worker node only

Each Kubernetes installation needs to have one or more worker nodes that run the containerized applications. We’ll only configure one worker in this example but repeat these steps to join more nodes to your cluster.

1. Open the ports used by Kubernetes on the Worker: 10250 (kubelet) and 30000-32767 (the NodePort service range).

firewall-cmd --zone=public --permanent --add-port={10250,30000-32767}/tcp

2. Reload the firewall to apply the changes.

firewall-cmd --reload

3. Join the cluster with the previously noted token.

Note that the join token below is just an example.

kubeadm join 94.237.41.193:6443 --token 4xrp9o.v345aic7zc1bj8ba \
--discovery-token-ca-cert-hash sha256:b2e459930f030787654489ba7ccbc701c29b3b60e0aa4998706fe0052de8794c

4. See if the Worker node successfully joined.

Go back to the Master node and issue the following command.

kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
master  Ready    master   10m   v1.18.0
worker  Ready    <none>   28s   v1.18.0

On success, you should see two nodes with ready status. If not, wait a moment and repeat the command.
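As an optional smoke test, you can deploy a small application and expose it on a NodePort; the nginx deployment below is just an example.

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx

The service is then reachable at the Worker's public IP on the assigned port from the 30000-32767 range opened earlier.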

Finished!

Congratulations, you should now have a working Kubernetes installation running on two nodes.

In case anything goes wrong, you can always reset and repeat the process. Run the following on both Master and Worker nodes:

kubeadm reset && rm -rf /etc/cni/net.d

Have fun clustering.

25 thoughts on "How to install a Kubernetes cluster on CentOS 8"

  1. It's true that you can install k8s this way, even on CentOS 8 and RedHat 8. Unfortunately, you won't be able to run any pods which depend on other pods, like a DB backend. The networking of k8s depends on iptables, which is not compatible with CentOS 8 / RedHat 8.
    I experienced this problem and found out that even the documentation says it's not supported.
    Otherwise, your article is pretty good. Just downgrade to CentOS 7 / RedHat 7.

    1. Hi there, thanks for the comment. You are right that CentOS 8 is not yet officially supported by Kubernetes. It does seem to suffer from difficulties with the move from iptables to nftables, but I would expect updates on that front to resolve the issues down the line. In the meantime, it still works well with single-pod web apps.

      1. It is possible to change the firewalld implementation back to iptables in /etc/firewalld/firewalld.conf (nftables is the default)

        1. Hi Clemens, thanks for the comment. Switching back to iptables could improve compatibility and is certainly worth testing.

  2. Hi, can you please provide steps for a production-like full-stack deployment cluster scenario on a single host (CoreOS cluster)? The apps would be: web tier (Nginx) => middle tier (Tomcat) => DB (any).

    1. Hi Thomas, thanks for the question. While we do not currently have a tutorial for the steps you requested, you should be able to follow the same installation logic as in this guide to install Kubernetes on just about any supported platform. You might want to have a look at Rancher which can be helpful especially in cluster management.

  3. Very useful article. While adding the repo, the $basearch is failing to be appended into the file

    Please replace $basearch with \$basearch

    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch

    1. Hi there, thanks for the comment. You are right that the $basearch needed to be escaped to include in the append. We’ve now fixed this in the tutorial.

    1. Hi Matt, thanks for the question. While SELinux would be good for the overall security of the server, according to Kubernetes, disabling it is required to allow containers to access the host filesystem, which is needed by pod networks for example. You have to do this until SELinux support is improved in the kubelet. You could leave SELinux enabled if you know how to configure it. However, it may require settings that are not supported by kubeadm.

  4. It doesn't work.

    I did everything just as you did.

    After that, when I started kubelet, I had a problem.

    My logs:
    Aug 21 12:55:55 k8s-master systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
    Aug 21 12:55:55 k8s-master systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 31.
    Aug 21 12:55:55 k8s-master systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
    Aug 21 12:55:55 k8s-master systemd[1]: Started kubelet: The Kubernetes Node Agent.
    Aug 21 12:55:55 k8s-master kubelet[20412]: F0821 12:55:55.834275 20412 server.go:199] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
    Aug 21 12:55:55 k8s-master systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
    Aug 21 12:55:55 k8s-master systemd[1]: kubelet.service: Failed with result 'exit-code'.

    1. Hi Arek, thanks for the question. It would seem that Kubelet failed to install properly and is missing the config.yaml file. I’d recommend checking that Docker is installed and running, then remove the failed installs and trying to install them again.

      sudo dnf remove kubelet kubeadm kubectl
      sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
      1. Hi Janne, thank you for your reply,
        but I did everything just as you did.
        Docker was installed first and is working.

        Docker status is active (running).
        Docker info is working and I have the docker interface docker0 (172.17.0.1/16).
        This is not a problem with Docker.
        I read that Kubernetes versions higher than 1.1 are not supported?
        I don't know where the problem is. I have installed Kubernetes on two Linux distributions, CentOS 7 and 8. The problem is the same.

  5. Hi, the pod-network-cidr 192.168.0.0/16 should not be contained in the host network CIDR, right? I am planning a home setup, but I already use 192.168.0.0/24 on my main network, which the Kubernetes hosts are also part of.

    1. Hi German, you are right that the pod network should be distinct from any existing network configuration. You can set the Kubernetes network to something else instead, e.g. --pod-network-cidr 10.0.0.0/16

  6. Hello, I have an issue with the Calico pods: they do not come up and keep crashing with the instructions provided.

  7. Disabling swap using swapoff -a is not persistent; after rebooting the system, swap is enabled again. This can cause problems. You might want to consider adding a step to permanently disable swap.

    1. Hi Philipp, thanks for the comment. You are right in that this method for turning off swap is not persistent and we’ll look into updating the steps on it. However, it will not be an issue for any UpCloud users as all of our Cloud Servers are already deployed without a swap partition.

  8. kubeadm join :6443 --v=5 --token j6hcyq.e1ei1jca4im15jdh --discovery-token-ca-cert-hash sha256:e11fac383b59444433052b7278fe17a09356b8b9186af423f2a1b977cf739502
    I0915 14:48:37.347342 277884 join.go:398] [preflight] found NodeName empty; using OS hostname as NodeName
    I0915 14:48:37.347543 277884 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock
    [preflight] Running pre-flight checks
    I0915 14:48:37.347633 277884 preflight.go:90] [preflight] Running general checks
    I0915 14:48:37.347772 277884 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests
    I0915 14:48:37.347801 277884 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf
    I0915 14:48:37.347809 277884 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
    I0915 14:48:37.347817 277884 checks.go:102] validating the container runtime
    I0915 14:48:37.446929 277884 checks.go:128] validating if the "docker" service is enabled and active
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    I0915 14:48:37.543656 277884 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
    I0915 14:48:37.543770 277884 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
    I0915 14:48:37.543791 277884 checks.go:649] validating whether swap is enabled or not
    I0915 14:48:37.543903 277884 checks.go:376] validating the presence of executable conntrack
    I0915 14:48:37.544008 277884 checks.go:376] validating the presence of executable ip
    I0915 14:48:37.544073 277884 checks.go:376] validating the presence of executable iptables
    I0915 14:48:37.544201 277884 checks.go:376] validating the presence of executable mount
    I0915 14:48:37.544256 277884 checks.go:376] validating the presence of executable nsenter
    I0915 14:48:37.544271 277884 checks.go:376] validating the presence of executable ebtables
    I0915 14:48:37.544282 277884 checks.go:376] validating the presence of executable ethtool
    I0915 14:48:37.544292 277884 checks.go:376] validating the presence of executable socat
    I0915 14:48:37.544304 277884 checks.go:376] validating the presence of executable tc
    I0915 14:48:37.544314 277884 checks.go:376] validating the presence of executable touch
    I0915 14:48:37.544345 277884 checks.go:520] running all checks
    I0915 14:48:37.627783 277884 checks.go:406] checking whether the given node name is reachable using net.LookupHost
    I0915 14:48:37.627972 277884 checks.go:618] validating kubelet version
    I0915 14:48:37.688253 277884 checks.go:128] validating if the "kubelet" service is enabled and active
    I0915 14:48:37.701077 277884 checks.go:201] validating availability of port 10250
    I0915 14:48:37.701764 277884 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
    I0915 14:48:37.701778 277884 checks.go:432] validating if the connectivity type is via proxy or direct
    [WARNING HTTPProxy]: Connection to "https://" uses proxy "http://:8000/". If that is not intended, adjust your proxy settings
    I0915 14:48:37.701866 277884 join.go:469] [preflight] Discovering cluster-info
    I0915 14:48:37.701888 277884 token.go:78] [discovery] Created cluster-info discovery client, requesting info from ":6443"
    I0915 14:48:47.702621 277884 token.go:215] [discovery] Failed to request cluster-info, will try again: Get "https://:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

    1. Hi Wilson, thanks for the question. Your join command seems to be missing the master node IP, which could just be a formatting issue in the comments but is important to check nonetheless. Also, make sure you've opened the required ports, including 80, 443 and 6443.

    1. Hi there, thanks for the question. There shouldn’t be a problem with the DNS service itself. Try running the following Busybox pod and testing the name resolution from within.

      kubectl run busybox1 --image busybox:1.28 --restart=Never --rm -it -- sh
      / # nslookup kubernetes
