In this blog post, I'll keep things as simple as possible to get a Kubernetes cluster up and running.
The cluster will be composed of three machines: one control plane and two workers.
I used KVM (Kernel-based Virtual Machine) guests running Debian 11, installed as a minimal system with SSH.
Note: this tutorial is meant for learning; it doesn't cover security hardening or best practices.
References
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
Hostnames
c1-control-plane
c1-worker-1
c1-worker-2
The following setup has to be done on all three machines; to be more efficient, you can use a terminal multiplexer like tmux.
Edit the hosts file with your favorite editor and add the following lines, adjusted to your own IP addresses.
sudo vim /etc/hosts
192.168.254.10 c1-control-plane
192.168.254.11 c1-worker-1
192.168.254.12 c1-worker-2
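To quickly check that the entries resolve, you can ping each node by name (a minimal sanity check, assuming ICMP is allowed between the VMs):
for host in c1-control-plane c1-worker-1 c1-worker-2; do ping -c 1 "$host"; done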
Load required modules and set kernel settings.
overlay is needed for overlayfs, see https://www.kernel.org/doc/html/latest/filesystems/overlayfs.html
br_netfilter is needed for iptables to correctly see bridged traffic, see http://ebtables.netfilter.org/documentation/bridge-nf.html
cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
Turn off swap, as the kubelet refuses to run with swap enabled.
sudo sed -i '/swap/d' /etc/fstab
Apply the settings now; alternatively, you can skip these commands and simply reboot the machines.
sudo modprobe overlay
sudo modprobe br_netfilter
sudo sysctl --system
sudo swapoff -a
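You can verify that the modules are loaded and the kernel settings took effect; the three sysctl values should all print 1:
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables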
Install chrony for time synchronization; etcd is sensitive to clock drift between nodes and will complain otherwise.
sudo apt install -y chrony
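You can confirm that time synchronization is working with chronyc; "Leap status" should read "Normal" once the clock is in sync:
chronyc tracking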
Install containerd
# requirements
sudo apt install -y curl gpg lsb-release apparmor apparmor-utils
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# install
sudo apt update
sudo apt-get install -y containerd.io
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
Edit containerd configuration and restart service
sudo vim /etc/containerd/config.toml
Below the section [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options], add SystemdCgroup = true.
result:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
sudo systemctl restart containerd
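At this point containerd should be up again; a quick check:
sudo systemctl status containerd --no-pager
# optional: confirm the client can talk to the daemon
sudo ctr version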
Install the Kubernetes tools, in my case version 1.21.0.
# requirements
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
# install
sudo apt-get update
sudo apt install iptables libiptc0/stable libxtables12/stable
sudo apt-get install -y kubelet=1.21.0-00 kubeadm=1.21.0-00 kubectl=1.21.0-00
sudo apt-mark hold kubelet kubeadm kubectl
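A quick check that the tools are installed and pinned at the expected version:
kubeadm version
kubectl version --client
apt-mark showhold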
Now it's time to create our Kubernetes cluster; the following commands need to be run on the control plane only.
sudo kubeadm init --pod-network-cidr 172.20.0.0/16 --kubernetes-version 1.21.0
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
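With the kubeconfig in place, kubectl can now reach the API server:
kubectl cluster-info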
From the control plane node you can now check your Kubernetes cluster; c1-control-plane is in NotReady state because we haven't set up the networking yet.
kubectl get nodes
NAME               STATUS     ROLES                  AGE     VERSION
c1-control-plane   NotReady   control-plane,master   3m49s   v1.21.0
Set up networking with Calico
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
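The Calico pods take a minute or two to start; you can watch them until the control plane node flips to Ready:
kubectl get pods -n kube-system -w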
Join the other nodes to the cluster; the following command must be run on the worker nodes only.
At the end of the "kubeadm init ..." output you were given a join command; it should look like this:
sudo kubeadm join 192.168.254.10:6443 --token oxaul0.24g50wlwsp4ktiqs --discovery-token-ca-cert-hash sha256:74746c748be5fef131d9c91a591c053591b6ce1e274396bcb7c48b6e6664bded
If you missed it, you can still create a new token and print the join command with:
kubeadm token create --print-join-command
We are done; the setup can be validated with kubectl: all nodes are in Ready state and the kube-system pods are running.
kubectl get nodes
NAME               STATUS   ROLES                  AGE     VERSION
c1-control-plane   Ready    control-plane,master   5m52s   v1.21.0
c1-worker-1        Ready    <none>                 2m10s   v1.21.0
c1-worker-2        Ready    <none>                 66s     v1.21.0
kubectl get pods -A
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-78d6f96c7b-9q4lq    1/1     Running   0          9h
kube-system   calico-node-4mq7p                           1/1     Running   0          9h
kube-system   calico-node-8km7w                           1/1     Running   0          9h
kube-system   calico-node-sjzs4                           1/1     Running   0          9h
kube-system   coredns-558bd4d5db-7pbjx                    1/1     Running   0          9h
kube-system   coredns-558bd4d5db-ptn59                    1/1     Running   0          9h
kube-system   etcd-c1-control-plane                       1/1     Running   1          9h
kube-system   kube-apiserver-c1-control-plane             1/1     Running   0          9h
kube-system   kube-controller-manager-c1-control-plane    1/1     Running   0          9h
kube-system   kube-proxy-ls768                            1/1     Running   0          9h
kube-system   kube-proxy-mk98k                            1/1     Running   0          9h
kube-system   kube-proxy-qbxwb                            1/1     Running   0          9h
kube-system   kube-scheduler-c1-control-plane             1/1     Running   0          9h
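As a final smoke test, you can schedule a trivial workload and check that it lands on one of the workers (nginx is just an arbitrary test image here):
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide
# clean up afterwards
kubectl delete deployment nginx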
This is the best walkthrough I’ve found so far. The only thing I had to change was around swap. On my Debian 11, there must be some systemd process turning on swap after a reboot, even with swap disabled in /etc/fstab.
I had to delete the swap partition and add a new non-swap partition in its place (fdisk /dev/sda). Then, to be sure, I ran mkfs on the /dev/sda3 partition.
I did some digging through systemd for swap references, but no luck yet.
Great job!
Thank you for the feedback; indeed, I recently had the same issue and so far have only managed to work around it by removing the swap partition, which can be done with fdisk or parted.
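One thing worth checking (I haven’t verified this myself) is whether systemd generated a .swap unit for the partition; on GPT disks systemd-gpt-auto-generator can activate swap even without an fstab entry. Masking the unit should prevent that:
swapon --show
systemctl --type swap
# mask the generated unit so it can no longer be started (adjust the device name to yours)
sudo systemctl mask dev-sda3.swap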
Hello, with Debian 10 I had no issues deploying Kubernetes; however, with 11 the connection to the API keeps being lost. Have you experienced something like that?
Thanks
I have spent all day troubleshooting this exact same problem!! Driving me mad as I’m a k8s newbie and couldn’t find any reference on the web to the API connection being lost every so often. I cracked it though:
On a rare occasion where I could use kubectl, I checked the various control plane pod logs and found they kept restarting because they couldn’t connect to the API using the hostname. I didn’t put two and two together at the time, but it’s because in /etc/hosts the hostname is set to the loopback IP (127.0.1.1). I assume this doesn’t work because the API only listens on the main network interface. I never bothered changing the hosts file since I figured I’d do it all in one go once I’d created the worker nodes, but following this tutorial and pointing the hostname at the node’s private address fixed all my problems.
If anyone else runs into this, check /etc/hosts first.
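For illustration, the change looks roughly like this in /etc/hosts (192.168.254.11 being this tutorial’s example worker address):
# before: the hostname resolves to the loopback address
# 127.0.1.1 c1-worker-1
# after: the hostname resolves to the node's private address
192.168.254.11 c1-worker-1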
Hi Max,
Happy to hear that, thanks for your feedback.
Hello Alxndr3,
Not really; on my side the Debian 11 deployment is fine. You should check the kube-apiserver pod logs.
Thanks for your feedback
I agree, this is the best guide to set up a k8s cluster on Debian!!
Thank you!!
Hello Godie,
Thank you for your feedback! Much appreciated.
Great article. I would like to see a small section at the end about adding another control plane node.
Hi jonny,
Thank you for your feedback! For control planes in HA mode, the --control-plane-endpoint flag needs to be used together with a load balancer; see
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/. That would probably deserve a separate article.
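A minimal sketch of what the init would look like, assuming a hypothetical load balancer k8s-lb.example.com sitting in front of the control plane nodes:
sudo kubeadm init --control-plane-endpoint "k8s-lb.example.com:6443" --upload-certs --pod-network-cidr 172.20.0.0/16
# additional control plane nodes then join with the --control-plane flag printed by the command above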
Hi,
there is no need to install apt-transport-https; it has been a dummy package since Buster.
Bye…
Dirk