Updated on 7.2.2025
There are many guides out there describing how to install Kubernetes on CentOS 8. Nevertheless, some steps might be unnecessary, and some might be missing. This guide is based on our notes from real-world deployments and has worked great.
In this guide, we will be using minimal resources with just two cloud servers for simplicity. After the initial setup, you can add more workers when necessary.
Let’s get started!
1. Deploy two CentOS 8 cloud servers. One for the master and the other for the worker node. Check this tutorial to learn more about deploying cloud servers.
Kubernetes has minimum requirements for the servers: both master and worker nodes need at least 2 GB of RAM and 2 CPUs. The $20/mo plan covers these requirements with double the memory. Note that the minimums are not just guidelines, as Kubernetes will refuse to install on a server with less than the minimum resources.
2. Log into both Master and Worker nodes over SSH using the root account and password you received by email after deployment.
Make note of the public IP and private IP addresses of your servers at the UpCloud control panel. You can also use the ip addr command to find these out later.
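For example, to list just the IPv4 addresses on either node:
ip -4 addr show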
3. Make sure the servers are up to date before installing anything new.
dnf -y upgrade
4. Disable SELinux enforcement.
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
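You can check that the change took effect; getenforce should now report Permissive (or Disabled after the next reboot):
getenforce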
5. Enable transparent masquerading and facilitate Virtual Extensible LAN (VxLAN) traffic for communication between Kubernetes pods across the cluster.
modprobe br_netfilter
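Note that modprobe does not persist across reboots. If you want br_netfilter loaded automatically at boot, one option is systemd's modules-load.d mechanism:
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf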
You will also need to enable IP masquerade at the firewall.
firewall-cmd --add-masquerade --permanent
firewall-cmd --reload
6. Set bridged packets to traverse iptables rules.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
Then load the new rules.
sysctl --system
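You can verify that the new settings are in effect by querying one of the keys:
sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1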
7. Disable all memory swaps to increase performance.
swapoff -a
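Note that swapoff -a only lasts until the next reboot. If your server does have a swap entry in /etc/fstab, one way to disable it permanently is to comment that line out, for example:
sed -i '/ swap / s/^/#/' /etc/fstab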
With these steps done on both Master and Worker nodes, you can proceed to install Docker.
Kubernetes needs a container runtime, so next we'll install Docker, again on both nodes.
1. Add the repository for the docker installation package.
dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
2. Install containerd.io, which is not yet provided by the package manager, before installing Docker.
dnf install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
3. Then install Docker from the repositories.
dnf install docker-ce --nobest -y
4. Start the docker service.
systemctl start docker
5. Make it also start automatically on server restart.
systemctl enable docker
6. Change Docker to use the systemd cgroup driver.
echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' > /etc/docker/daemon.json
And restart docker to apply the change.
systemctl restart docker
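To confirm the change, check the cgroup driver Docker reports; it should now be systemd:
docker info | grep -i 'cgroup driver'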
Once installed, you should check that everything is working correctly.
7. See the docker version.
docker version
8. List the downloaded Docker images. The list is likely still empty at this point.
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
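As an optional smoke test, you can run a throwaway container to confirm Docker works end to end:
docker run --rm hello-world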
Now that Docker is ready to go, continue below to install Kubernetes itself.
With all the prerequisites in place, we can install Kubernetes itself.
1. Add the Kubernetes repository to your package manager by creating the following file.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
2. Then update the repo info.
dnf upgrade -y
3. Install all the necessary components for Kubernetes.
dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
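You can confirm what was installed, for example:
kubeadm version -o short
kubectl version --client --short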
Start the Kubernetes services and enable them to run at startup.
systemctl enable kubelet
systemctl start kubelet
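Don't be alarmed if kubelet enters a restart loop at this point: it will keep restarting until kubeadm init or kubeadm join writes its configuration in a later step. You can check its state with:
systemctl status kubelet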
Once running on both nodes, begin configuring Kubernetes on the Master by following the instructions in the next section.
Once Kubernetes has been installed, it needs to be configured to form a cluster.
1. Prepare kubeadm by pre-pulling the required container images.
kubeadm config images pull
2. Open the necessary ports used by Kubernetes.
firewall-cmd --zone=public --permanent --add-port={6443,2379,2380,10250,10251,10252}/tcp
3. Allow Docker access from the other node; replace worker-IP-address with the IP of your Worker node.
firewall-cmd --zone=public --permanent --add-rich-rule 'rule family=ipv4 source address=worker-IP-address/32 accept'
4. Allow access to the host’s localhost from the docker container.
firewall-cmd --zone=public --permanent --add-rich-rule 'rule family=ipv4 source address=172.17.0.0/16 accept'
5. Make the changes permanent.
firewall-cmd --reload
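You can verify the rules were applied, for example:
firewall-cmd --zone=public --list-ports
firewall-cmd --zone=public --list-rich-rules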
6. Install CNI (container network interface) plugin for Kubernetes.
For this setup, we’ll be using Calico: https://docs.projectcalico.org/getting-started/kubernetes/quickstart#overview
Start by initialising the cluster with the pod network CIDR that Calico uses by default:
kubeadm init --pod-network-cidr 192.168.0.0/16
You should see something like the example below. Make note of the kubeadm join command with its discovery token; it's needed to join worker nodes to the cluster.
Note that the join token below is just an example.
kubeadm join 94.237.41.193:6443 --token 4xrp9o.v345aic7zc1bj8ba --discovery-token-ca-cert-hash sha256:b2e459930f030787654489ba7ccbc701c29b3b60e0aa4998706fe0052de8794c
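If you lose the join command, or the token expires (tokens are valid for 24 hours by default), you can print a fresh one on the Master:
kubeadm token create --print-join-command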
Make the kubectl configuration directory and copy in the admin credentials.
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Then apply the Calico manifest to install the pod network.
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
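It can take a few minutes for the Calico pods to start up. You can follow their progress with:
kubectl get pods -n kube-system --watch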
7. Enable pods to run on the Master node. This is only for demonstration purposes and is not recommended for production use.
kubectl taint nodes --all node-role.kubernetes.io/master-
8. Check that the Master node has been enabled and is running.
kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   91s   v1.18.0
On successful execution, you should see a node with ready status. If not, wait a moment and repeat the command.
When the Master node is up and running, continue with the next section to join the Worker node to the cluster.
Each Kubernetes installation needs to have one or more worker nodes that run the containerized applications. We’ll only configure one worker in this example but repeat these steps to join more nodes to your cluster.
1. Open ports used by Kubernetes.
firewall-cmd --zone=public --permanent --add-port={10250,30000-32767}/tcp
2. Make the changes permanent.
firewall-cmd --reload
3. Join the cluster with the kubeadm join command and token you noted down when initialising the Master, for example:
kubeadm join 94.237.41.193:6443 --token 4xrp9o.v345aic7zc1bj8ba --discovery-token-ca-cert-hash sha256:b2e459930f030787654489ba7ccbc701c29b3b60e0aa4998706fe0052de8794c
4. See if the Worker node successfully joined.
Go back to the Master node and issue the following command.
kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   10m   v1.18.0
worker   Ready    <none>   28s   v1.18.0
On success, you should see two nodes with ready status. If not, wait a moment and repeat the command.
Congratulations, you should now have a working Kubernetes installation running on two nodes.
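As a quick end-to-end test, you can deploy something small and expose it; the deployment name here is just an example:
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port 80 --type NodePort
kubectl get svc hello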
In case anything goes wrong, you can always repeat the process.
Run this on the Master and Workers to reset the installation:
kubeadm reset && rm -rf /etc/cni/net.d
Have fun clustering.
We’ve launched support for Managed Kubernetes®, a fully managed container orchestration service with all the benefits of a self-maintained system but without any of the headaches! See how quick and easy it is to get started by following our dedicated tutorial.
Join discussion
11.5.2020 at 14.05
It's true that you can install k8s this way, even on CentOS 8 and RedHat 8. Unfortunately, you won't be able to run any pods which depend on other pods, like a db-backend. The networking of k8s depends on iptables, which is not compatible with CentOS 8 / RedHat 8. I experienced this problem and found out that even the documentation says it's not supported. Otherwise your article is pretty good. Just downgrade to CentOS 7 / RedHat 7.
18.5.2020 at 23.21
Hi there, thanks for the comment. You are right that CentOS 8 is not yet officially supported by Kubernetes. It does seem to suffer from difficulties with the move from iptables to nftables but I would expect updates on that front to resolve the issues down the line. In the meanwhile, it still works well with single pod web apps.
29.6.2020 at 14.34
Hi, Can you please provide steps for a production like full stack deployment cluster scenario in a single host (coreOS cluster)? Apps will be (Webtier (Nginx) ==> Middle-Tier (Tomcat) ==> DB (any).
30.6.2020 at 23.28
Hi Thomas, thanks for the question. While we do not currently have a tutorial for the steps you requested, you should be able to follow the same installation logic as in this guide to install Kubernetes on just about any supported platform. You might want to have a look at Rancher which can be helpful especially in cluster management.
23.7.2020 at 09.42
Very useful article. However, while adding the repo, the $basearch variable fails to be appended into the file.
Please replace $basearch with /$basearch
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-/$basearch
23.7.2020 at 13.51
Hi there, thanks for the comment. You are right that the $basearch needed to be escaped to include in the append. We’ve now fixed this in the tutorial.
23.7.2020 at 21.02
Why no selinux? Disabling it removes a major block for potential exploits.
24.7.2020 at 09.51
Hi Matt, thanks for the question. While SELinux would be good for the overall security of the server, according to Kubernetes, disabling it is required to allow containers to access the host filesystem, which is needed by pod networks for example. You have to do this until SELinux support is improved in the kubelet. You could leave SELinux enabled if you know how to configure it. However, it may require settings that are not supported by kubeadm.
11.8.2020 at 10.39
Thanks, nice article & explanation.
21.8.2020 at 13.56
It doesn't work.
I did everything you did.
After that, when I started kubelet, I had this problem:
My logs. Aug 21 12:55:55 k8s-master systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart. Aug 21 12:55:55 k8s-master systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 31. Aug 21 12:55:55 k8s-master systemd[1]: Stopped kubelet: The Kubernetes Node Agent. Aug 21 12:55:55 k8s-master systemd[1]: Started kubelet: The Kubernetes Node Agent. Aug 21 12:55:55 k8s-master kubelet[20412]: F0821 12:55:55.834275 20412 server.go:199] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file “/var/lib/kubelet/config.yaml”, error: open /var/lib/kubelet/config.yaml: no such file or directory Aug 21 12:55:55 k8s-master systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a Aug 21 12:55:55 k8s-master systemd[1]: kubelet.service: Failed with result ‘exit-code’.
24.8.2020 at 18.43
Hi Arek, thanks for the question. It would seem that Kubelet failed to install properly and is missing the config.yaml file. I’d recommend checking that Docker is installed and running, then remove the failed installs and trying to install them again.
sudo dnf remove kubelet kubeadm kubectl
sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
26.8.2020 at 09.30
Hi, the pod-network-cidr 192.168.0.0/16 should not be contained in the host network CIDR, right? I am planning a home setup, but I already use 192.168.0.0/24 on my main network, which the Kubernetes hosts are also part of.
26.8.2020 at 10.43
Hi German, you are right that the pod network should be distinct from any existing network configurations. You can set the Kubernetes network to something else instead, e.g. --pod-network-cidr 10.0.0.0/16
3.9.2020 at 14.09
Hello, I have an issue with the Calico pods: they do not come up and keep crashing with the instructions provided.
3.9.2020 at 18.08
Hi Janne. Thank you for your reply, but I did everything you did. Docker was installed first and is working.
Docker's status is active (running), docker info works, and I have the docker interface docker0 (172.17.0.1/16). This is not a problem with Docker. I read that Kubernetes versions higher than 1.1 are not supported? I don't know where the problem is. I have installed Kubernetes on two Linux distributions, CentOS 7 and 8. The problem is the same.
7.9.2020 at 12.56
Hi Shakthi, thanks for the comment. While these steps for installing Kubernetes should generally work, your server might have something system-specific that is preventing Calico pods from starting up properly. I’d suggest trying to investigate what is causing the pods to fail.
11.9.2020 at 14.13
Disabling swap using swapoff -a is not persisted; after rebooting the system, swap is enabled again. This can cause problems. You might want to consider adding a step to permanently disable swap.
14.9.2020 at 11.42
Hi Philipp, thanks for the comment. You are right in that this method for turning off swap is not persistent and we’ll look into updating the steps on it. However, it will not be an issue for any UpCloud users as all of our Cloud Servers are already deployed without a swap partition.
16.9.2020 at 05.20
kubeadm join :6443 –v=5 –token j6hcyq.e1ei1jca4im15jdh –discovery-token-ca-cert-hash sha256:e11fac383b59444433052b7278fe17a09356b8b9186af423f2a1b977cf739502 I0915 14:48:37.347342 277884 join.go:398] [preflight] found NodeName empty; using OS hostname as NodeName I0915 14:48:37.347543 277884 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock [preflight] Running pre-flight checks I0915 14:48:37.347633 277884 preflight.go:90] [preflight] Running general checks I0915 14:48:37.347772 277884 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests I0915 14:48:37.347801 277884 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf I0915 14:48:37.347809 277884 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf I0915 14:48:37.347817 277884 checks.go:102] validating the container runtime I0915 14:48:37.446929 277884 checks.go:128] validating if the “docker” service is enabled and active [WARNING IsDockerSystemdCheck]: detected “cgroupfs” as the Docker cgroup driver. The recommended driver is “systemd”. Please follow the guide at https://kubernetes.io/docs/setup/cri/ I0915 14:48:37.543656 277884 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables I0915 14:48:37.543770 277884 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward I0915 14:48:37.543791 277884 checks.go:649] validating whether swap is enabled or not I0915 14:48:37.543903 277884 checks.go:376] validating the presence of executable conntrack I0915 14:48:37.544008 277884 checks.go:376] validating the presence of executable ip I0915 14:48:37.544073 277884 checks.go:376] validating the presence of executable iptables I0915 14:48:37.544201 277884 checks.go:376] validating the presence of executable mount I0915 14:48:37.544256 277884 checks.go:376] validating the presence of executable nsenter I0915 14:48:37.544271 277884 checks.go:376] validating the presence of executable ebtables I0915 14:48:37.544282 277884 checks.go:376] validating the presence of executable ethtool I0915 14:48:37.544292 277884 checks.go:376] validating the presence of executable socat I0915 14:48:37.544304 277884 checks.go:376] validating the presence of executable tc I0915 14:48:37.544314 277884 checks.go:376] validating the presence of executable touch I0915 14:48:37.544345 277884 checks.go:520] running all checks I0915 14:48:37.627783 277884 checks.go:406] checking whether the given node name is reachable using net.LookupHost I0915 14:48:37.627972 277884 checks.go:618] validating kubelet version I0915 14:48:37.688253 277884 checks.go:128] validating if the “kubelet” service is enabled and active I0915 14:48:37.701077 277884 checks.go:201] validating availability of port 10250 I0915 14:48:37.701764 277884 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt I0915 14:48:37.701778 277884 checks.go:432] validating if the connectivity type is via proxy or direct [WARNING HTTPProxy]: Connection to “https://” uses proxy “http://:8000/”. 
If that is not intended, adjust your proxy settings I0915 14:48:37.701866 277884 join.go:469] [preflight] Discovering cluster-info I0915 14:48:37.701888 277884 token.go:78] [discovery] Created cluster-info discovery client, requesting info from “:6443” I0915 14:48:47.702621 277884 token.go:215] [discovery] Failed to request cluster-info, will try again: Get “https://:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s”: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) I0915 14:48:47.702621 277884 token.go:215] [discovery] Failed to request cluster-info, will try again: Get “ht://:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s”: net/http: request caned while waiting for connection (Client.Timeout exceeded while awaiting headers)
18.9.2020 at 22.12
Hi Wilson, thanks for the question. Your join command seems to be missing the master node IP, which could just be a formatting issue in the comments but is important to check nonetheless. Also, make sure you've opened the required port numbers, including 80, 443 and 6443.
13.10.2020 at 17.24
Useful article. Thanks for documenting this.
17.10.2020 at 20.34
Have you encountered issues with DNS? I failed to nslookup kubernetes.default
19.10.2020 at 10.34
It is possible to change the firewalld backend back to iptables in /etc/firewalld/firewalld.conf (nftables is the default).
19.10.2020 at 15.59
Hi Clemens, thanks for the comment. Switching back to iptables could improve compatibility and is certainly worth testing.
19.10.2020 at 16.09
Hi there, thanks for the question. There shouldn’t be a problem with the DNS service itself. Try running the following Busybox pod and testing the name resolution from within.
kubectl run busybox1 --image busybox:1.28 --restart=Never --rm -it --
Then run the nslookup inside the pod.
/ # nslookup kubernetes
3.12.2020 at 20.47
[root@master ~]# kubeadm init –pod-network-cidr=10.55.0.0/16 W1203 13:12:05.912619 31533 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s. io] [init] Using Kubernetes version: v1.19.4 [preflight] Running pre-flight checks [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly [WARNING IsDockerSystemdCheck]: detected “cgroupfs” as the Docker cgroup driver. The recommended driver is “systemd”. Please follow the guide at https ://kubernetes.io/docs/setup/cri/ [WARNING FileExisting-tc]: tc not found in system path [WARNING Hostname]: hostname “master” could not be reached [WARNING Hostname]: hostname “master”: lookup master on 192.168.1.254:53: no such host [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’ [certs] Using certificateDir folder “/etc/kubernetes/pki” [certs] Generating “ca” certificate and key [certs] Generating “apiserver” certificate and key [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 10.0.2.15] [certs] Generating “apiserver-kubelet-client” certificate and key [certs] Generating “front-proxy-ca” certificate and key [certs] Generating “front-proxy-client” certificate and key [certs] Generating “etcd/ca” certificate and key [certs] Generating “etcd/server” certificate and key [certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [10.0.2.15 127.0.0.1 ::1] [certs] Generating “etcd/peer” certificate and key [certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [10.0.2.15 127.0.0.1 ::1] [certs] Generating “etcd/healthcheck-client” certificate and key [certs] Generating “apiserver-etcd-client” certificate and key [certs] Generating “sa” key and public key [kubeconfig] Using kubeconfig folder “/etc/kubernetes” [kubeconfig] Writing “admin.conf” kubeconfig file [kubeconfig] Writing “kubelet.conf” kubeconfig file [kubeconfig] Writing “controller-manager.conf” kubeconfig file [kubeconfig] Writing “scheduler.conf” kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env” [kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml” [kubelet-start] Starting the kubelet [control-plane] Using manifest folder “/etc/kubernetes/manifests” [control-plane] Creating static Pod manifest for “kube-apiserver” [control-plane] Creating static Pod manifest for “kube-controller-manager” [control-plane] Creating static Pod manifest for “kube-scheduler” [etcd] Creating static Pod manifest for local etcd in “/etc/kubernetes/manifests” [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred: timed out waiting for the condition
This error is likely caused by: – The kubelet is not running – The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: – ‘systemctl status kubelet’ – ‘journalctl -xeu kubelet’
Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker: – ‘docker ps -a | grep kube | grep -v pause’ Once you have found the failing container, you can inspect its logs with: – ‘docker logs CONTAINERID’
error execution phase wait-control-plane: couldn’t initialize a Kubernetes cluster To see the stack trace of this error execute with –v=5 or higher [root@master ~]# docker ps -a | grep kube | grep -v pause
952f3c9c8246 0369cf4303ff “etcd –advertise-cl…” 9 minutes ago Up 8 minutes k8s_etcd_etcd-master_kube-syst em_13511ea52b5654c37f24c8124c551b52_0 ba10354fe516 b15c6247777d “kube-apiserver –ad…” 9 minutes ago Up 8 minutes k8s_kube-apiserver_kube-apiser ver-master_kube-system_1eb8c2b5a38e60b2073267a3a562fcf8_0 4478232ae9ff 4830ab618586 “kube-controller-man…” 9 minutes ago Up 8 minutes k8s_kube-controller-manager_ku be-controller-manager-master_kube-system_d8febe18f6be228a2440536b082d3f38_0 c502c1ceeb24 14cd22f7abe7 “kube-scheduler –au…” 9 minutes ago Up 8 minutes k8s_kube-scheduler_kube-schedu ler-master_kube-system_02dab51e35a1d6fc74e5283db75230f5_0
[root@master ~]# systemctl status kubelet ● kubelet.service – kubelet: The Kubernetes Node Agent Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled) Drop-In: /usr/lib/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: active (running) since Thu 2020-12-03 13:12:10 EST; 17min ago Docs: https://kubernetes.io/docs/ Main PID: 31749 (kubelet) Tasks: 14 (limit: 5012) Memory: 57.1M CGroup: /system.slice/kubelet.service └─31749 /usr/bin/kubelet –bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf –kubeconfig=/etc/kubernetes/kubelet.conf –config=/var/lib/kubelet/config.yaml –netw>
Dec 03 13:28:45 master kubelet[31749]: W1203 13:28:31.079301 31749 watcher.go:87] Error while processing event (“/sys/fs/cgroup/blkio/system.slice/NetworkManager-dispatcher.service”:> Dec 03 13:28:46 master kubelet[31749]: E1203 13:28:42.862061 31749 kubelet.go:1765] skipping pod synchronization – [container runtime is down, PLEG is not healthy: pleg was last seen> Dec 03 13:29:02 master kubelet[31749]: E1203 13:28:57.267792 31749 kubelet.go:2183] node “master” not found Dec 03 13:29:03 master kubelet[31749]: W1203 13:28:49.947073 31749 watcher.go:87] Error while processing event (“/sys/fs/cgroup/memory/system.slice/NetworkManager-dispatcher.service”> Dec 03 13:29:05 master kubelet[31749]: W1203 13:29:05.350328 31749 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d Dec 03 13:29:05 master kubelet[31749]: W1203 13:29:03.029313 31749 watcher.go:87] Error while processing event (“/sys/fs/cgroup/devices/system.slice/NetworkManager-dispatcher.service> Dec 03 13:29:07 master kubelet[31749]: E1203 13:29:05.319785 31749 kubelet.go:1765] skipping pod synchronization – [container runtime is down, PLEG is not healthy: pleg was last seen> Dec 03 13:29:12 master kubelet[31749]: E1203 13:29:06.717532 31749 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node “master” not > Dec 03 13:29:14 master kubelet[31749]: W1203 13:29:05.571704 31749 watcher.go:87] Error while processing event (“/sys/fs/cgroup/pids/system.slice/NetworkManager-dispatcher.service”: > Dec 03 13:29:16 master kubelet[31749]: E1203 13:29:14.959399 31749 kubelet.go:2183] node “master” not found
Does anyone know where the problem may be?
7.12.2020 at 12.29
Hi Cosmin, thanks for the question. If kubelet is running and reachable, there’s likely a misconfiguration somewhere. Check that the firewall rules were applied successfully and that the kubeadm was able to pull the necessary images.
4.2.2021 at 16.28
How can I change my firewall rules from nftables to iptables? On CentOS 8 …
5.2.2021 at 08.08
Hey Janne, really appreciated this article. After reading the comments I really appreciate how responsive you are to everyone! Good article, very easily readable and very responsive to anyone who needs help. Great contribution
5.2.2021 at 14.29
Hi Harsha, thanks for the comment, glad to hear you found the tutorial useful; it was submitted by one of our valued community members. But we, of course, strive to help every visitor make the most of it.
5.2.2021 at 15.26
Hi Daniel, thanks for the question. As mentioned above, you'll need to edit the firewalld config /etc/firewalld/firewalld.conf and set FirewallBackend=iptables. Note that the config change is best done before installing Kubernetes, or you'll need to migrate the rules afterwards.
25.5.2021 at 06.23
sudo kubeadm join 192.xxxx --token o027xu.5ucqgclh8v8zej4t --discovery-token-ca-cert-hash sha256:948b8ad5a4997039275a11928a32d234cd1b4eca2f353d2bc5f2cab48b1d8320
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: configmaps "cluster-info" is forbidden: User "system:anonymous" cannot get resource "configmaps" in API group "" in the namespace "kube-public"
To see the stack trace of this error execute with --v=5 or higher
Does anyone know where the problem may be?
7.6.2021 at 13.15
Hi Trung, thanks for the comment. The error would indicate a lack of permissions, possibly related to this GitHub issue.
25.6.2021 at 19.41
I did not find any RHEL8 Kubernetes repo. Is it safe and recommended to use the K8s packages from RHEL7 for production? Any issues encountered?
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
26.6.2021 at 11.52
Hi Amit, thanks for the question. It’s not commonly recommended to use packages built for another base version, but as you mentioned, Kubernetes is not currently natively available on CentOS 8. Additionally, since Red Hat has decided to discontinue CentOS, it’s uncertain if Kubernetes is coming to CentOS 8.
28.6.2021 at 16.55
Thanks Janne. Appreciated. Any idea by when Kubernetes will have RHEL8 packages available? We already have RHEL8.3 VMs which needs to have Kubernetes installed for production.
12.9.2021 at 12.24
Hi Janne, I always get stuck at this step on every try, initialising kubeadm. Please help. Below is the output for reference.
[vagrant@node1 yum.repos.d]$ sudo kubeadm init –pod-network-cidr 192.168.0.0/16 [init] Using Kubernetes version: v1.22.1 [preflight] Running pre-flight checks [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly [WARNING FileExisting-tc]: tc not found in system path [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’ [certs] Using certificateDir folder “/etc/kubernetes/pki” [certs] Generating “ca” certificate and key [certs] Generating “apiserver” certificate and key [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node1] and IPs [10.96.0.1 10.0.2.15] [certs] Generating “apiserver-kubelet-client” certificate and key [certs] Generating “front-proxy-ca” certificate and key [certs] Generating “front-proxy-client” certificate and key [certs] Generating “etcd/ca” certificate and key [certs] Generating “etcd/server” certificate and key [certs] etcd/server serving cert is signed for DNS names [localhost node1] and IPs [10.0.2.15 127.0.0.1 ::1] [certs] Generating “etcd/peer” certificate and key [certs] etcd/peer serving cert is signed for DNS names [localhost node1] and IPs [10.0.2.15 127.0.0.1 ::1] [certs] Generating “etcd/healthcheck-client” certificate and key [certs] Generating “apiserver-etcd-client” certificate and key [certs] Generating “sa” key and public key [kubeconfig] Using kubeconfig folder “/etc/kubernetes” [kubeconfig] Writing “admin.conf” kubeconfig file [kubeconfig] Writing “kubelet.conf” kubeconfig file [kubeconfig] Writing “controller-manager.conf” kubeconfig file [kubeconfig] Writing “scheduler.conf” kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env” [kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml” [kubelet-start] Starting the kubelet [control-plane] Using manifest folder “/etc/kubernetes/manifests” [control-plane] Creating static Pod manifest for “kube-apiserver” [control-plane] Creating static Pod manifest for “kube-controller-manager” [control-plane] Creating static Pod manifest for “kube-scheduler” [etcd] Creating static Pod manifest for local etcd in “/etc/kubernetes/manifests” [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn’t running or healthy. [kubelet-check] The HTTP call equal to ‘curl -sSL http://localhost:10248/healthz‘ failed with error: Get “http://localhost:10248/healthz”: dial tcp [::1]:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn’t running or healthy. [kubelet-check] The HTTP call equal to ‘curl -sSL http://localhost:10248/healthz‘ failed with error: Get “http://localhost:10248/healthz”: dial tcp [::1]:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn’t running or healthy. [kubelet-check] The HTTP call equal to ‘curl -sSL http://localhost:10248/healthz‘ failed with error: Get “http://localhost:10248/healthz”: dial tcp [::1]:10248: connect: connection refused. 
[kubelet-check] It seems like the kubelet isn’t running or healthy. [kubelet-check] The HTTP call equal to ‘curl -sSL http://localhost:10248/healthz‘ failed with error: Get “http://localhost:10248/healthz”: dial tcp [::1]:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn’t running or healthy. [kubelet-check] The HTTP call equal to ‘curl -sSL http://localhost:10248/healthz‘ failed with error: Get “http://localhost:10248/healthz”: dial tcp [::1]:10248: connect: connection refused.
error execution phase wait-control-plane: couldn’t initialize a Kubernetes cluster To see the stack trace of this error execute with –v=5 or higher
16.9.2021 at 01.41
Hi Rakesh, thanks for the comment. It seems docker is now using cgroupfs driver by default while Kubernetes is expecting systemd. You can change docker to use systemd as well by adding the following to a file /etc/docker/daemon.json
{ "exec-opts": ["native.cgroupdriver=systemd"] }
Next, restart docker
sudo systemctl restart docker
Then reset Kubernetes to clear the failed install
kubeadm reset
You should then be able to initialise the master as normal
5.11.2021 at 00.46
I'm wondering about a procedure for installing a particular version of Kubernetes. When run in an actual environment, it installs the latest one, which is not compatible with everything.
5.11.2021 at 11.28
Hi Skaochen, thanks for the question. You should be able to install any specific version of Kubernetes by just selecting the version you want to install. For example:
sudo dnf install kubelet-1.22.3 kubectl-1.22.3 kubeadm-1.22.3 --disableexcludes=kubernetes
4.12.2021 at 15.25
vim your /etc/fstab and comment out the swap partition;
Follow with a: #sysctl -p
It restarts the services without needing to reboot.
3.6.2022 at 19.55
That's true, iptables is deprecated in RHEL 8. For this reason you need to use IPVS as the backend for kube-proxy in the installation. This change can be made in the "kubeadm init" phase.