Edited on 17.5.2022

How to install Kubernetes cluster on CentOS 8


There are many guides out there describing how to install Kubernetes on CentOS 8. Nevertheless, some steps might be unnecessary and some might be missing. This guide is based on our notes from real-world deployments and has worked great.


Prerequisites for both Master and Worker nodes

In this guide, we will be using minimal resources with just two cloud servers for simplicity. After the initial setup, you can add more workers when necessary.

Let’s get started!

1. Deploy two CentOS 8 cloud servers. One for the master and the other for the worker node. Check this tutorial to learn more about deploying cloud servers.

Kubernetes has minimum requirements for its servers: both master and worker nodes need at least 2 GB of RAM and 2 CPUs. The $20/mo plan covers these requirements with double the memory. Note that the minimums are not just guidelines; kubeadm's preflight checks will refuse to set up a node on a server with fewer than the minimum resources.

2. Log into both Master and Worker nodes over SSH using the root account and password you received by email after deployment.

Make note of the public IP and private IP addresses of your servers at the UpCloud control panel. You can also use the ip addr command to find these out later.
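The ip addr output can also be filtered down to just interface names and IPv4 addresses. A small sketch (the field layout of ip -o single-line output is assumed; the pipeline exits quietly on systems without iproute2):

```shell
# Print "interface address" for every globally scoped IPv4 address
ip -4 -o addr show scope global 2>/dev/null | awk '{ sub(/\/.*/, "", $4); print $2, $4 }'
```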

3. Make sure the servers are up to date before installing anything new.

dnf -y upgrade

4. Disable SELinux enforcement. The --follow-symlinks flag is used because /etc/sysconfig/selinux is a symlink to /etc/selinux/config.

setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

5. Enable transparent masquerading and facilitate Virtual Extensible LAN (VxLAN) traffic for communication between Kubernetes pods across the cluster.

modprobe br_netfilter
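Note that modprobe only loads the module until the next reboot. To make it persistent, the module name can be dropped into /etc/modules-load.d, the standard systemd-modules-load location (a sketch; adjust to your own conventions):

```shell
# Ensure br_netfilter is loaded automatically on every boot
mkdir -p /etc/modules-load.d
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
```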

You will also need to enable IP masquerade at the firewall.

firewall-cmd --add-masquerade --permanent
firewall-cmd --reload

6. Set bridged packets to traverse iptables rules.

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Then load the new rules.

sysctl --system

7. Disable swap. The kubelet does not support running with swap enabled, and kubeadm's preflight checks will complain about it.

swapoff -a
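Keep in mind that swapoff -a only lasts until the next reboot. To keep swap disabled permanently, comment out the swap entries in /etc/fstab. A sketch, demonstrated on a sample file so the sed pattern can be checked before pointing it at the real /etc/fstab (the device names are illustrative only):

```shell
# Demonstrate the edit on a sample fstab first
cat > /tmp/fstab.sample <<'EOF'
/dev/vda1 /    xfs  defaults 0 0
/dev/vda2 none swap sw       0 0
EOF

# Comment out every swap entry so swap stays off after a reboot
sed -i '/\sswap\s/s/^/#/' /tmp/fstab.sample
grep swap /tmp/fstab.sample

# Once satisfied, apply the same edit for real:
# sed -i '/\sswap\s/s/^/#/' /etc/fstab
```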

With these steps done on both Master and worker nodes, you can proceed to install Docker.

Installing Docker on Master and Worker nodes

Next, we’ll need to install Docker.

1. Add the repository for the docker installation package.

dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo

2. Install containerd.io, which is not yet provided by the CentOS 8 package manager, before installing Docker.

dnf install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm

3. Then install Docker from the repositories.

dnf install docker-ce --nobest -y

4. Start the docker service.

systemctl start docker

5. Make it also start automatically on server restart.

systemctl enable docker

6. Configure Docker to use the systemd cgroup driver, which is the driver kubeadm recommends.

echo '{
  "exec-opts": ["native.cgroupdriver=systemd"]
}' > /etc/docker/daemon.json
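A typo in daemon.json will prevent Docker from starting, so it can be worth validating the JSON before the restart. A sketch that builds the file in a temporary location first (python3 is assumed to be available for the check):

```shell
# Build the config in /tmp and validate it before installing it
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool /tmp/daemon.json   # exits non-zero on malformed JSON

# Once it validates, move it into place:
# mv /tmp/daemon.json /etc/docker/daemon.json
```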

And restart docker to apply the change.

systemctl restart docker

Once installed, you should check that everything is working correctly.

7. See the docker version.

docker version

8. List the Docker images on the system. The list is most likely still empty at this point.

docker images
REPOSITORY   TAG   IMAGE ID   CREATED   SIZE

Now that Docker is ready to go, continue below to install Kubernetes itself.

Installing Kubernetes on Master and Worker nodes

With all the necessary parts installed, we can get Kubernetes installed as well.

1. Add the Kubernetes repository to your package manager by creating the following file.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
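One pitfall with the repo file: in an unquoted heredoc the shell expands $variables, so $basearch must be written as \$basearch or it ends up as an empty string in the file. A quick check on a throwaway file that the literal survives:

```shell
# Verify that the escaped \$basearch is written literally by the heredoc
cat > /tmp/repo.test <<EOF
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
EOF
grep -F 'el7-$basearch' /tmp/repo.test
```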

2. Then refresh the package metadata and update the system.

dnf upgrade -y

3. Install all the necessary components for Kubernetes.

dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Start the Kubernetes services and enable them to run at startup.

systemctl enable kubelet
systemctl start kubelet

Once running on both nodes, begin configuring Kubernetes on the Master by following the instructions in the next section.

Configuring Kubernetes on the Master node only

Once Kubernetes has been installed, it needs to be configured to form a cluster.

1. Pre-pull the container images used by kubeadm.

kubeadm config images pull

2. Open the necessary ports used by Kubernetes.

firewall-cmd --zone=public --permanent --add-port={6443,2379,2380,10250,10251,10252}/tcp

3. Allow access from the Worker node; replace worker-IP-address with your Worker's IP address.

firewall-cmd --zone=public --permanent --add-rich-rule 'rule family=ipv4 source address=worker-IP-address/32 accept'

4. Allow containers on the default Docker bridge network (172.17.0.0/16) to access the host.

firewall-cmd --zone=public --permanent --add-rich-rule 'rule family=ipv4 source address=172.17.0.0/16 accept'

5. Reload the firewall to apply the changes.

firewall-cmd --reload

6. Initialize the cluster and install a CNI (container network interface) plugin for Kubernetes.

For this setup, we'll be using Calico: https://docs.projectcalico.org/getting-started/kubernetes/quickstart#overview

Initialize the control plane with Calico's default pod network CIDR:

kubeadm init --pod-network-cidr 192.168.0.0/16

You should see something like the example below. Make note of the full kubeadm join command, including the discovery token; it's needed to join worker nodes to the cluster. If you lose it, you can print a fresh join command on the Master with kubeadm token create --print-join-command.

Note that the join token below is just an example.

kubeadm join 94.237.41.193:6443 --token 4xrp9o.v345aic7zc1bj8ba \
--discovery-token-ca-cert-hash sha256:b2e459930f030787654489ba7ccbc701c29b3b60e0aa4998706fe0052de8794c

Create the kubeconfig directory and file so kubectl works for your user, then apply the Calico manifest.

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

7. Allow pods to be scheduled on the Master node by removing its taint. This is only for demonstration purposes and is not recommended for production use.

kubectl taint nodes --all node-role.kubernetes.io/master-

8. Check that the Master node has registered and is running.

kubectl get nodes
NAME    STATUS     ROLES    AGE   VERSION
master  NotReady   master   91s   v1.18.0

The node will report NotReady until the Calico pods have started. Wait a moment and repeat the command until the status changes to Ready.

When the Master node is up and running, continue with the next section to join the Worker node to the cluster.

Configuring Kubernetes on the Worker node only

Each Kubernetes installation needs to have one or more worker nodes that run the containerized applications. We’ll only configure one worker in this example but repeat these steps to join more nodes to your cluster.

1. Open ports used by Kubernetes.

firewall-cmd --zone=public --permanent --add-port={10250,30000-32767}/tcp

2. Reload the firewall to apply the changes.

firewall-cmd --reload

3. Join the cluster with the previously noted token.

Note that the join token below is just an example.

kubeadm join 94.237.41.193:6443 --token 4xrp9o.v345aic7zc1bj8ba \
--discovery-token-ca-cert-hash sha256:b2e459930f030787654489ba7ccbc701c29b3b60e0aa4998706fe0052de8794c

4. See if the Worker node successfully joined.

Go back to the Master node and issue the following command.

kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
master  Ready    master   10m   v1.18.0
worker  Ready    <none>   28s   v1.18.0

On success, you should see two nodes with ready status. If not, wait a moment and repeat the command.

Finished!

Congratulations, you should now have a working Kubernetes installation running on two nodes.

In case anything goes wrong, you can always reset and repeat the process. Run the following on both the Master and the Workers:

kubeadm reset && rm -rf /etc/cni/net.d

Have fun clustering.

Yuwono Mujahidin

  1. It’s true that you can install k8s this way, even on CentOS 8 and RHEL 8. Unfortunately you won’t be able to run any pods that depend on other pods, like a db-backend. The networking of k8s depends on iptables, which is not compatible with CentOS 8 / RHEL 8.
    I experienced this problem and found out that even the documentation says it’s not supported.
    Otherwise your article is pretty good. Just downgrade to CentOS 7 / RHEL 7.

    1. Alberto Vidal

      That’s true, iptables is deprecated in RHEL 8. For this reason you need to use IPVS as the backend for kube-proxy in the installation. This change can be made in the “kubeadm init” phase.

  2. Hi, Can you please provide steps for a production like full stack deployment cluster scenario in a single host (coreOS cluster)? Apps will be (Webtier (Nginx) ==> Middle-Tier (Tomcat) ==> DB (any).

  3. Tamilarasan J

    Very useful article. While adding the repo, the $basearch is failing to be appended into the file

    Please replace $basearch with /$basearch

    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-/$basearch

  4. Why no selinux? Disabling it removes a major block for potential exploits.

  5. Binh Thanh Nguyen

    Thanks, nice article & explanation.

  6. it doesn’t work

    I did everything what you did.

    After that when I started kubelet I have problem :

    My logs.
    Aug 21 12:55:55 k8s-master systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
    Aug 21 12:55:55 k8s-master systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 31.
    Aug 21 12:55:55 k8s-master systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
    Aug 21 12:55:55 k8s-master systemd[1]: Started kubelet: The Kubernetes Node Agent.
    Aug 21 12:55:55 k8s-master kubelet[20412]: F0821 12:55:55.834275 20412 server.go:199] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file “/var/lib/kubelet/config.yaml”, error: open /var/lib/kubelet/config.yaml: no such file or directory
    Aug 21 12:55:55 k8s-master systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
    Aug 21 12:55:55 k8s-master systemd[1]: kubelet.service: Failed with result ‘exit-code’.

  7. Hi, pod-network-cidr 192.168.0.0/16 should not be contained in the host network CIDR, right? I am planning a home setup but I already use 192.168.0.0/24 on my main network, where the Kubernetes hosts also live.

  8. Shakthi Manai

    Hello, I have an issue with the calico pods; they do not come up and are crashing with the instructions provided

  9. Disabling swap using swapoff -a is not persisted, after rebooting the system, swap is enabled again. This can cause problems. You might want to consider adding a step to permanently disable swap.

  10. kubeadm join :6443 --v=5 --token j6hcyq.e1ei1jca4im15jdh --discovery-token-ca-cert-hash sha256:e11fac383b59444433052b7278fe17a09356b8b9186af423f2a1b977cf739502
    I0915 14:48:37.347342 277884 join.go:398] [preflight] found NodeName empty; using OS hostname as NodeName
    I0915 14:48:37.347543 277884 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock
    [preflight] Running pre-flight checks
    I0915 14:48:37.347633 277884 preflight.go:90] [preflight] Running general checks
    I0915 14:48:37.347772 277884 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests
    I0915 14:48:37.347801 277884 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf
    I0915 14:48:37.347809 277884 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
    I0915 14:48:37.347817 277884 checks.go:102] validating the container runtime
    I0915 14:48:37.446929 277884 checks.go:128] validating if the “docker” service is enabled and active
    [WARNING IsDockerSystemdCheck]: detected “cgroupfs” as the Docker cgroup driver. The recommended driver is “systemd”. Please follow the guide at https://kubernetes.io/docs/setup/cri/
    I0915 14:48:37.543656 277884 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
    I0915 14:48:37.543770 277884 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
    I0915 14:48:37.543791 277884 checks.go:649] validating whether swap is enabled or not
    I0915 14:48:37.543903 277884 checks.go:376] validating the presence of executable conntrack
    I0915 14:48:37.544008 277884 checks.go:376] validating the presence of executable ip
    I0915 14:48:37.544073 277884 checks.go:376] validating the presence of executable iptables
    I0915 14:48:37.544201 277884 checks.go:376] validating the presence of executable mount
    I0915 14:48:37.544256 277884 checks.go:376] validating the presence of executable nsenter
    I0915 14:48:37.544271 277884 checks.go:376] validating the presence of executable ebtables
    I0915 14:48:37.544282 277884 checks.go:376] validating the presence of executable ethtool
    I0915 14:48:37.544292 277884 checks.go:376] validating the presence of executable socat
    I0915 14:48:37.544304 277884 checks.go:376] validating the presence of executable tc
    I0915 14:48:37.544314 277884 checks.go:376] validating the presence of executable touch
    I0915 14:48:37.544345 277884 checks.go:520] running all checks
    I0915 14:48:37.627783 277884 checks.go:406] checking whether the given node name is reachable using net.LookupHost
    I0915 14:48:37.627972 277884 checks.go:618] validating kubelet version
    I0915 14:48:37.688253 277884 checks.go:128] validating if the “kubelet” service is enabled and active
    I0915 14:48:37.701077 277884 checks.go:201] validating availability of port 10250
    I0915 14:48:37.701764 277884 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
    I0915 14:48:37.701778 277884 checks.go:432] validating if the connectivity type is via proxy or direct
    [WARNING HTTPProxy]: Connection to “https://” uses proxy “http://:8000/”. If that is not intended, adjust your proxy settings
    I0915 14:48:37.701866 277884 join.go:469] [preflight] Discovering cluster-info
    I0915 14:48:37.701888 277884 token.go:78] [discovery] Created cluster-info discovery client, requesting info from “:6443”
    I0915 14:48:47.702621 277884 token.go:215] [discovery] Failed to request cluster-info, will try again: Get “https://:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s”: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    I0915 14:48:47.702621 277884 token.go:215] [discovery] Failed to request cluster-info, will try again: Get “ht://:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s”: net/http: request caned while waiting for connection (Client.Timeout exceeded while awaiting headers)

  11. Useful article. Thanks for documenting this.

  12. Have you encountered issue with DNS ?
    I failed to nslookup kubernetes.default

  13. [[email protected] ~]# kubeadm init --pod-network-cidr=10.55.0.0/16
    W1203 13:12:05.912619 31533 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s. io]
    [init] Using Kubernetes version: v1.19.4
    [preflight] Running pre-flight checks
    [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
    [WARNING IsDockerSystemdCheck]: detected “cgroupfs” as the Docker cgroup driver. The recommended driver is “systemd”. Please follow the guide at https ://kubernetes.io/docs/setup/cri/
    [WARNING FileExisting-tc]: tc not found in system path
    [WARNING Hostname]: hostname “master” could not be reached
    [WARNING Hostname]: hostname “master”: lookup master on 192.168.1.254:53: no such host
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
    [certs] Using certificateDir folder “/etc/kubernetes/pki”
    [certs] Generating “ca” certificate and key
    [certs] Generating “apiserver” certificate and key
    [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 10.0.2.15]
    [certs] Generating “apiserver-kubelet-client” certificate and key
    [certs] Generating “front-proxy-ca” certificate and key
    [certs] Generating “front-proxy-client” certificate and key
    [certs] Generating “etcd/ca” certificate and key
    [certs] Generating “etcd/server” certificate and key
    [certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [10.0.2.15 127.0.0.1 ::1]
    [certs] Generating “etcd/peer” certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [10.0.2.15 127.0.0.1 ::1]
    [certs] Generating “etcd/healthcheck-client” certificate and key
    [certs] Generating “apiserver-etcd-client” certificate and key
    [certs] Generating “sa” key and public key
    [kubeconfig] Using kubeconfig folder “/etc/kubernetes”
    [kubeconfig] Writing “admin.conf” kubeconfig file
    [kubeconfig] Writing “kubelet.conf” kubeconfig file
    [kubeconfig] Writing “controller-manager.conf” kubeconfig file
    [kubeconfig] Writing “scheduler.conf” kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
    [kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder “/etc/kubernetes/manifests”
    [control-plane] Creating static Pod manifest for “kube-apiserver”
    [control-plane] Creating static Pod manifest for “kube-controller-manager”
    [control-plane] Creating static Pod manifest for “kube-scheduler”
    [etcd] Creating static Pod manifest for local etcd in “/etc/kubernetes/manifests”
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. This can take up to 4m0s
    [kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:
    timed out waiting for the condition

    This error is likely caused by:
    – The kubelet is not running
    – The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    – ‘systemctl status kubelet’
    – ‘journalctl -xeu kubelet’

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in docker:
    – ‘docker ps -a | grep kube | grep -v pause’
    Once you have found the failing container, you can inspect its logs with:
    – ‘docker logs CONTAINERID’

    error execution phase wait-control-plane: couldn’t initialize a Kubernetes cluster
    To see the stack trace of this error execute with –v=5 or higher
    [[email protected] ~]# docker ps -a | grep kube | grep -v pause

    952f3c9c8246 0369cf4303ff “etcd –advertise-cl…” 9 minutes ago Up 8 minutes k8s_etcd_etcd-master_kube-syst em_13511ea52b5654c37f24c8124c551b52_0
    ba10354fe516 b15c6247777d “kube-apiserver –ad…” 9 minutes ago Up 8 minutes k8s_kube-apiserver_kube-apiser ver-master_kube-system_1eb8c2b5a38e60b2073267a3a562fcf8_0
    4478232ae9ff 4830ab618586 “kube-controller-man…” 9 minutes ago Up 8 minutes k8s_kube-controller-manager_ku be-controller-manager-master_kube-system_d8febe18f6be228a2440536b082d3f38_0
    c502c1ceeb24 14cd22f7abe7 “kube-scheduler –au…” 9 minutes ago Up 8 minutes k8s_kube-scheduler_kube-schedu ler-master_kube-system_02dab51e35a1d6fc74e5283db75230f5_0

    [[email protected] ~]# systemctl status kubelet
    ● kubelet.service – kubelet: The Kubernetes Node Agent
    Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
    └─10-kubeadm.conf
    Active: active (running) since Thu 2020-12-03 13:12:10 EST; 17min ago
    Docs: https://kubernetes.io/docs/
    Main PID: 31749 (kubelet)
    Tasks: 14 (limit: 5012)
    Memory: 57.1M
    CGroup: /system.slice/kubelet.service
    └─31749 /usr/bin/kubelet –bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf –kubeconfig=/etc/kubernetes/kubelet.conf –config=/var/lib/kubelet/config.yaml –netw>

    Dec 03 13:28:45 master kubelet[31749]: W1203 13:28:31.079301 31749 watcher.go:87] Error while processing event (“/sys/fs/cgroup/blkio/system.slice/NetworkManager-dispatcher.service”:>
    Dec 03 13:28:46 master kubelet[31749]: E1203 13:28:42.862061 31749 kubelet.go:1765] skipping pod synchronization – [container runtime is down, PLEG is not healthy: pleg was last seen>
    Dec 03 13:29:02 master kubelet[31749]: E1203 13:28:57.267792 31749 kubelet.go:2183] node “master” not found
    Dec 03 13:29:03 master kubelet[31749]: W1203 13:28:49.947073 31749 watcher.go:87] Error while processing event (“/sys/fs/cgroup/memory/system.slice/NetworkManager-dispatcher.service”>
    Dec 03 13:29:05 master kubelet[31749]: W1203 13:29:05.350328 31749 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
    Dec 03 13:29:05 master kubelet[31749]: W1203 13:29:03.029313 31749 watcher.go:87] Error while processing event (“/sys/fs/cgroup/devices/system.slice/NetworkManager-dispatcher.service>
    Dec 03 13:29:07 master kubelet[31749]: E1203 13:29:05.319785 31749 kubelet.go:1765] skipping pod synchronization – [container runtime is down, PLEG is not healthy: pleg was last seen>
    Dec 03 13:29:12 master kubelet[31749]: E1203 13:29:06.717532 31749 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node “master” not >
    Dec 03 13:29:14 master kubelet[31749]: W1203 13:29:05.571704 31749 watcher.go:87] Error while processing event (“/sys/fs/cgroup/pids/system.slice/NetworkManager-dispatcher.service”: >
    Dec 03 13:29:16 master kubelet[31749]: E1203 13:29:14.959399 31749 kubelet.go:2183] node “master” not found

    Does anyone know where the problem may be?

  14. Hey Janne, really appreciated this article. After reading the comments I really appreciate how responsive you are to everyone! Good article, very easily readable and very responsive to anyone who needs help. Great contribution

  15. sudo kubeadm join 192.xxxx --token o027xu.5ucqgclh8v8zej4t --discovery-token-ca-cert-hash sha256:948b8ad5a4997039275a11928a32d234cd1b4eca2f353d2bc5f2cab48b1d8320
    [preflight] Running pre-flight checks
    error execution phase preflight: couldn’t validate the identity of the API Server: configmaps “cluster-info” is forbidden: User “system:anonymous” cannot get resource “configmaps” in API group “” in the namespace “kube-public”
    To see the stack trace of this error execute with –v=5 or higher
    Does anyone know where the problem may be?

  16. I did not find any RHEL8 kubernetes repo. Is it safe and recommended to use K8S package from RHEL7 for production? Any issues encountered?

    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch

  17. Thanks Janne. Appreciated. Any idea by when Kubernetes will have RHEL8 packages available? We already have RHEL8.3 VMs which needs to have Kubernetes installed for production.

  18. Hi Janne, I always get stuck with in this step with each try, initiating kubeadm, please help. Below is the output for reference

    [[email protected] yum.repos.d]$ sudo kubeadm init --pod-network-cidr 192.168.0.0/16
    [init] Using Kubernetes version: v1.22.1
    [preflight] Running pre-flight checks
    [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
    [WARNING FileExisting-tc]: tc not found in system path
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
    [certs] Using certificateDir folder “/etc/kubernetes/pki”
    [certs] Generating “ca” certificate and key
    [certs] Generating “apiserver” certificate and key
    [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node1] and IPs [10.96.0.1 10.0.2.15]
    [certs] Generating “apiserver-kubelet-client” certificate and key
    [certs] Generating “front-proxy-ca” certificate and key
    [certs] Generating “front-proxy-client” certificate and key
    [certs] Generating “etcd/ca” certificate and key
    [certs] Generating “etcd/server” certificate and key
    [certs] etcd/server serving cert is signed for DNS names [localhost node1] and IPs [10.0.2.15 127.0.0.1 ::1]
    [certs] Generating “etcd/peer” certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [localhost node1] and IPs [10.0.2.15 127.0.0.1 ::1]
    [certs] Generating “etcd/healthcheck-client” certificate and key
    [certs] Generating “apiserver-etcd-client” certificate and key
    [certs] Generating “sa” key and public key
    [kubeconfig] Using kubeconfig folder “/etc/kubernetes”
    [kubeconfig] Writing “admin.conf” kubeconfig file
    [kubeconfig] Writing “kubelet.conf” kubeconfig file
    [kubeconfig] Writing “controller-manager.conf” kubeconfig file
    [kubeconfig] Writing “scheduler.conf” kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
    [kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder “/etc/kubernetes/manifests”
    [control-plane] Creating static Pod manifest for “kube-apiserver”
    [control-plane] Creating static Pod manifest for “kube-controller-manager”
    [control-plane] Creating static Pod manifest for “kube-scheduler”
    [etcd] Creating static Pod manifest for local etcd in “/etc/kubernetes/manifests”
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. This can take up to 4m0s
    [kubelet-check] Initial timeout of 40s passed.
    [kubelet-check] It seems like the kubelet isn’t running or healthy.
    [kubelet-check] The HTTP call equal to ‘curl -sSL http://localhost:10248/healthz‘ failed with error: Get “http://localhost:10248/healthz”: dial tcp [::1]:10248: connect: connection refused.
    [kubelet-check] It seems like the kubelet isn’t running or healthy.
    [kubelet-check] The HTTP call equal to ‘curl -sSL http://localhost:10248/healthz‘ failed with error: Get “http://localhost:10248/healthz”: dial tcp [::1]:10248: connect: connection refused.
    [kubelet-check] It seems like the kubelet isn’t running or healthy.
    [kubelet-check] The HTTP call equal to ‘curl -sSL http://localhost:10248/healthz‘ failed with error: Get “http://localhost:10248/healthz”: dial tcp [::1]:10248: connect: connection refused.
    [kubelet-check] It seems like the kubelet isn’t running or healthy.
    [kubelet-check] The HTTP call equal to ‘curl -sSL http://localhost:10248/healthz‘ failed with error: Get “http://localhost:10248/healthz”: dial tcp [::1]:10248: connect: connection refused.
    [kubelet-check] It seems like the kubelet isn’t running or healthy.
    [kubelet-check] The HTTP call equal to ‘curl -sSL http://localhost:10248/healthz‘ failed with error: Get “http://localhost:10248/healthz”: dial tcp [::1]:10248: connect: connection refused.

    Unfortunately, an error has occurred:
    timed out waiting for the condition

    This error is likely caused by:
    – The kubelet is not running
    – The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    – ‘systemctl status kubelet’
    – ‘journalctl -xeu kubelet’

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in docker:
    – ‘docker ps -a | grep kube | grep -v pause’
    Once you have found the failing container, you can inspect its logs with:
    – ‘docker logs CONTAINERID’

    error execution phase wait-control-plane: couldn’t initialize a Kubernetes cluster
    To see the stack trace of this error execute with –v=5 or higher

  19. I wonder if there is a procedure for installing a particular version of Kubernetes. When run in an actual environment, it installs the latest one, which is not compatible with everything.

