This post follows the same approach as “Create a Debian 11 Kubernetes cluster with kubeadm”, this time for CentOS 8 or Rocky Linux 8.
The cluster will be composed of three machines, one control plane and two workers.
I used KVM (Kernel-based Virtual Machine) running CentOS 8 (also tested on Rocky Linux 8) and installed a minimal system with SSH.
Note: this tutorial is made for learning and doesn’t cover security or best practices; to keep things simple we will disable firewalld and SELinux.
Test environment:
CentOS-8.4
Rocky-8.4
References
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
Hostnames
c2-control-plane
c2-worker-1
c2-worker-2
The following setup has to be done on all three machines. To be more efficient, you can use a terminal multiplexer like tmux.
Edit the hosts file with your favorite editor and add the following lines, adjusted to your own IP addresses.
sudo vim /etc/hosts
192.168.254.20 c2-control-plane
192.168.254.21 c2-worker-1
192.168.254.22 c2-worker-2
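If you prefer a non-interactive approach, the same entries can be appended with a heredoc (adjust the addresses to your environment):
cat << EOF | sudo tee -a /etc/hosts
192.168.254.20 c2-control-plane
192.168.254.21 c2-worker-1
192.168.254.22 c2-worker-2
EOF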
Disable firewalld, SELinux and swap
sudo systemctl disable firewalld
sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
sudo sed -i '/swap/d' /etc/fstab
Load required modules and set kernel settings.
overlay is needed for overlayfs, see https://www.kernel.org/doc/html/latest/filesystems/overlayfs.html
br_netfilter is needed for iptables to correctly see bridged traffic, see http://ebtables.netfilter.org/documentation/bridge-nf.html
cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
Apply the settings now (you can skip these commands by rebooting the machines instead).
sudo systemctl stop firewalld
sudo setenforce Permissive
sudo modprobe overlay
sudo modprobe br_netfilter
sudo sysctl --system
sudo swapoff -a
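You can verify that everything took effect: the modules should be listed, the sysctls should report 1, swapon should print nothing and getenforce should report Permissive.
lsmod | grep -e overlay -e br_netfilter
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
swapon --show
getenforce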
Install requirements
sudo yum install iproute-tc chrony -y
Install containerd
sudo yum install yum-utils -y
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum update -y
sudo yum install containerd.io -y
Configure containerd
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
Edit containerd configuration
sudo vim /etc/containerd/config.toml
Below [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options], add SystemdCgroup = true.
result:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
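If you want to script this change instead of editing by hand, a sed append along these lines should also work, assuming the runc options table appears exactly once in the generated config (TOML ignores indentation, so the flush-left line is harmless):
sudo sed -i '/\[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options\]/a SystemdCgroup = true' /etc/containerd/config.toml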
Enable and start containerd and check the status
sudo systemctl enable containerd
sudo systemctl start containerd
sudo systemctl status containerd
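You can also confirm that the daemon answers requests with the bundled ctr client:
sudo ctr version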
Install kubelet, kubeadm and kubectl
Add repository
cat << EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Update repository
sudo yum update -y
Install the packages; in my case I installed version 1.21.0-0.
sudo yum install kubelet-1.21.0-0 kubeadm-1.21.0-0 kubectl-1.21.0-0 -y
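Confirm the installed versions before going further:
kubeadm version
kubectl version --client
kubelet --version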
Enable and start kubelet
sudo systemctl enable kubelet.service
sudo systemctl start kubelet.service
Lock the versions in order to avoid unwanted updates via yum update.
sudo yum install yum-plugin-versionlock -y
sudo yum versionlock kubelet kubeadm kubectl
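Check that the locks are in place:
sudo yum versionlock list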
The following setup has to be done on the Control Plane node only
Create cluster configuration
sudo kubeadm config print init-defaults | tee ClusterConfiguration.yaml
Modify ClusterConfiguration.yaml: remove the default node name (so kubeadm falls back to the machine's hostname), replace the advertise address with your Control Plane's IP (192.168.254.20 here) and point the CRI socket at containerd.
sudo sed -i '/name/d' ClusterConfiguration.yaml
sudo sed -i 's/ advertiseAddress: 1.2.3.4/ advertiseAddress: 192.168.254.20/' ClusterConfiguration.yaml
sudo sed -i 's/ criSocket: \/var\/run\/dockershim\.sock/ criSocket: \/run\/containerd\/containerd\.sock/' ClusterConfiguration.yaml
cat << EOF >> ClusterConfiguration.yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
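Before initializing the cluster, it is worth sanity-checking the edits. With the values used in this post, grep should show the following:
grep -E 'advertiseAddress|criSocket|cgroupDriver' ClusterConfiguration.yaml
  advertiseAddress: 192.168.254.20
  criSocket: /run/containerd/containerd.sock
cgroupDriver: systemd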
Create the Kubernetes cluster
sudo kubeadm init --config=ClusterConfiguration.yaml --cri-socket /run/containerd/containerd.sock
Copy the kube configuration for your user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
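At this point kubectl should be able to reach the API server:
kubectl cluster-info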
From the control plane node you can now check your Kubernetes cluster; c2-control-plane is NotReady because we didn't set up the networking yet.
kubectl get nodes
NAME               STATUS     ROLES                  AGE     VERSION
c2-control-plane   NotReady   control-plane,master   3m49s   v1.21.0
Set up networking with Calico
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
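The Calico and CoreDNS pods take a minute or two to start; you can watch their progress with:
kubectl get pods -n kube-system -w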
The following setup has to be done on the Worker nodes only
Join the other nodes to the cluster; the command must be run on the worker nodes only.
At the end of the "kubeadm init ..." output you were given a join command, it should look like this:
sudo kubeadm join 192.168.254.20:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:e60463ed4aa5d49f0f41460c6904f992f0e53f1921f81dc88a80131a9be273c0
If you missed it, you can still generate a new token and print the join command with:
kubeadm token create --print-join-command
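Note that this command has to be run on the control plane node; existing tokens can be listed there as well:
kubeadm token list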
We are ready; the setup can be validated with kubectl: all nodes are in Ready state and the kube-system pods are running.
kubectl get nodes
NAME               STATUS   ROLES                  AGE   VERSION
c2-control-plane   Ready    control-plane,master   54m   v1.21.0
c2-worker-1        Ready    <none>                 90s   v1.21.0
c2-worker-2        Ready    <none>                 86s   v1.21.0
kubectl get pods -A
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-74b8fbdb46-fwx7g    1/1     Running   0          2m42s
kube-system   calico-node-5p22k                           1/1     Running   0          2m42s
kube-system   calico-node-h47c9                           1/1     Running   0          2m
kube-system   calico-node-nl4gd                           1/1     Running   0          2m4s
kube-system   coredns-558bd4d5db-f2j45                    1/1     Running   0          55m
kube-system   coredns-558bd4d5db-qcvhg                    1/1     Running   0          55m
kube-system   etcd-c2-control-plane                       1/1     Running   0          55m
kube-system   kube-apiserver-c2-control-plane             1/1     Running   0          55m
kube-system   kube-controller-manager-c2-control-plane    1/1     Running   0          55m
kube-system   kube-proxy-5988n                            1/1     Running   0          55m
kube-system   kube-proxy-6q2br                            1/1     Running   0          2m
kube-system   kube-proxy-jqdbh                            1/1     Running   0          2m4s
kube-system   kube-scheduler-c2-control-plane             1/1     Running   0          55m
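As an optional smoke test, you can deploy something small and check that it gets scheduled on a worker; this sketch uses the public nginx image:
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide
kubectl delete deployment nginx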