K8s
Alpine Linux K8s in 10 Minutes
Summary
This guide will allow you to deploy a fresh Alpine Linux install into a Kubernetes (K8s) cluster in less than 10 minutes.
Why
I set out to learn Kubernetes recently and built a k3s cluster using Alpine in an hour or so; it was a great experience. I figured the next step would be K8s, but I found no material on K8s for Alpine. This guide is the result of my first pass and the incorporation of high-quality notes from the contributors. Kubernetes is awesome.
Contributors
- Matthew Rogers Github LinkedIn
- Mike Zolla Github LinkedIn
- Matthew Emmett Github LinkedIn
- Richard Aik Github LinkedIn
Build K8s on Alpine Linux
Prerequisites
You need an Alpine Linux install (this guide is written against the version 3.17 standard image) with internet access. I recommend at least 2 CPUs, 4 GB of RAM, and 10 GB of disk for each node.
For HA control planes you'll need a minimum of three nodes.
1. Setup the Repositories
Update your repositories under /etc/apk/repositories to include community, edge community, and edge testing.
Contents of /etc/apk/repositories
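A minimal example, assuming the dl-cdn mirror and the 3.17 release named in the prerequisites (adjust the mirror and version to match your install):
http://dl-cdn.alpinelinux.org/alpine/v3.17/main
http://dl-cdn.alpinelinux.org/alpine/v3.17/community
http://dl-cdn.alpinelinux.org/alpine/edge/community
http://dl-cdn.alpinelinux.org/alpine/edge/testing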
2. Node Setup
This series of commands solves a set of incremental problems and sets up the system (if it is the first control node) for kubectl/kubeadm to run properly on next login by linking the config.
The result is a functional node that can be joined to an existing cluster or can become the first control plane of a new cluster.
Add the br_netfilter kernel module and enable IP forwarding
# echo "br_netfilter" > /etc/modules-load.d/k8s.conf
# modprobe br_netfilter
# sysctl net.ipv4.ip_forward=1
# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
Make bridged traffic visible to iptables
# echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.conf
# sysctl net.bridge.bridge-nf-call-iptables=1
Install the Kubernetes packages
# apk add cni-plugin-flannel
# apk add cni-plugins
# apk add flannel
# apk add flannel-contrib-cni
# apk add kubelet
# apk add kubeadm
# apk add kubectl
# apk add containerd
# apk add uuidgen
# apk add nfs-utils
Get rid of swap
# cp -av /etc/fstab /etc/fstab.bak
# sed -i '/swap/s/^/#/' /etc/fstab
# swapoff -a
Fix Prometheus errors by making the root mount shared, and persist it across reboots
# mount --make-rshared /
# echo "#!/bin/sh" > /etc/local.d/sharemetrics.start
# echo "mount --make-rshared /" >> /etc/local.d/sharemetrics.start
# chmod +x /etc/local.d/sharemetrics.start
# rc-update add local
Fix machine-id error messages
# uuidgen > /etc/machine-id
Update the containerd sandbox_image to the latest version (it's pause:3.9 in K8s v1.30)
# sed -i 's/pause:3.8/pause:3.9/' /etc/containerd/config.toml
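If /etc/containerd/config.toml does not exist on your install (whether the package ships one can vary by version), you can generate the default config first and then apply the same edit; a quick sketch:
# containerd config default > /etc/containerd/config.toml
# grep sandbox_image /etc/containerd/config.toml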
Add the Kubernetes services on all control plane and worker nodes
# rc-update add containerd
# rc-update add kubelet
# rc-service containerd start
Enable time sync (not required in 3.20 if using the default chrony)
# rc-update add ntpd
# rc-service ntpd start
Option 1 - Using flannel as your CNI
NOTE: This may no longer be necessary on newer versions of the flannel package
# ln -s /usr/libexec/cni/flannel-amd64 /usr/libexec/cni/flannel
Option 2 - Using calico as your CNI
NOTE: This is required in 3.20 if you use calico
# ln -s /opt/cni/bin/calico /usr/libexec/cni/calico
# ln -s /opt/cni/bin/calico-ipam /usr/libexec/cni/calico-ipam
Pin your versions! If you update and the nodes get out of sync, it implodes.
# apk add 'kubelet=~1.30'
# apk add 'kubeadm=~1.30'
# apk add 'kubectl=~1.30'
Your blank node is now ready! If it's the first node of a new cluster, you'll want to make it a control plane node.
3. Setup the Control Plane (New Cluster!)
Run these commands to initialize the cluster and then apply a pod network.
# do not change the subnet
kubeadm init --pod-network-cidr=10.244.0.0/16 --node-name=$(hostname)
mkdir ~/.kube
ln -s /etc/kubernetes/admin.conf /root/.kube/config
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
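Once the node reports Ready (this can take a minute or two), a quick sanity check; the kube-flannel namespace below comes from the upstream flannel manifest:
kubectl get nodes
kubectl get pods -n kube-system
kubectl get pods -n kube-flannel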
You now have a control plane. The kubeadm init output also gives you the command to run on your blank nodes to add them to this cluster as workers.
4. Join the Cluster
Run this on the control plane to get the join command, which you then run on your new worker.
kubeadm token create --print-join-command
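The printed command has roughly this shape; the address, token, and hash below are placeholders, not real values:
kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>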
Bonus
Setup NFS Mounts on K8s
This sets up shared NFS storage to allow persistent volume claims to be fulfilled automatically. Update the server IP and export path below to match your environment.
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.1.31 \
  --set nfs.path=/exports/cluster00
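The chart assumes the NFS server already exports that path to the cluster nodes. A minimal /etc/exports entry on the NFS host might look like the following (the subnet and export options are assumptions, adjust for your network), followed by re-exporting:
/exports/cluster00 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
# exportfs -ra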
Now set the default storage class for the cluster.
kubectl get storageclass
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
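To confirm dynamic provisioning works end to end, you can create and delete a small test claim; this is a sketch, and the claim name and size are arbitrary:
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc test-claim
kubectl delete pvc test-claim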
Check on System
Check on your system.
kubectl get nodes
kubectl get all
kubectl events -A
Cloud Bonus
A description of the cloud-init version is available at K8s_with_cloud-init