Latest revision as of 13:38, 24 August 2024
Alpine Linux K8s in 10 Minutes
Summary
This guide will allow you to deploy a fresh Alpine Linux install into a Kubernetes (K8s) cluster in less than 10 minutes.
Why
I went to learn Kubernetes recently and built a k3s cluster using Alpine in an hour or so; it was a great experience. I figured the next step would be K8s, but I found no material on K8s for Alpine. This guide is the result of my first pass and the incorporation of high-quality notes from the contributors. Kubernetes is awesome.
Contributors
- Matthew Rogers Github LinkedIn
- Mike Zolla Github LinkedIn
- Matthew Emmett Github LinkedIn
- Richard Aik Github LinkedIn
Build K8s on Alpine Linux
Prerequisites
You need an Alpine Linux install (this guide is written against the version 3.17 standard image) with internet access. I recommend at least 2 CPUs, 4 GB of RAM, and 10 GB of disk for each node.
For HA control planes you'll need a minimum of three nodes.
1. Set up the Repositories
Update your repositories under /etc/apk/repositories to include community, edge community, and edge testing.
Contents of /etc/apk/repositories
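The file contents are not reproduced above. As a sketch, assuming the default dl-cdn mirror and the 3.17 release this guide targets (adjust the mirror and version for your setup), it could look like:

```
http://dl-cdn.alpinelinux.org/alpine/v3.17/main
http://dl-cdn.alpinelinux.org/alpine/v3.17/community
http://dl-cdn.alpinelinux.org/alpine/edge/community
http://dl-cdn.alpinelinux.org/alpine/edge/testing
```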
2. Node Setup
This series of commands solves a series of incremental problems and sets up the system (if it is the first control node) for kubectl/kubeadm to run properly on next login by linking the config.
The result here gives you a functional node that can be joined to an existing cluster or can become the first control plane of a new cluster.
Add kernel modules for networking
# echo "br_netfilter" > /etc/modules-load.d/k8s.conf
# modprobe br_netfilter
# sysctl net.ipv4.ip_forward=1
# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
Enable bridge netfilter
# echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.conf
# sysctl net.bridge.bridge-nf-call-iptables=1
Installing Kubernetes packages
Note: this build assumes flannel as the CNI. Skip the flannel packages if you want to use calico.
# apk add cni-plugin-flannel
# apk add cni-plugins
# apk add flannel
# apk add flannel-contrib-cni
# apk add kubelet
# apk add kubeadm
# apk add kubectl
# apk add containerd
# apk add uuidgen
# apk add nfs-utils
Get rid of swap
# cp -av /etc/fstab /etc/fstab.bak
# sed -i '/swap/s/^/#/' /etc/fstab
# swapoff -a
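The sed above comments out every fstab line that mentions swap. If you want to sanity-check the pattern before touching the real file, you can try it on a throwaway copy (the fstab entries below are made up for illustration):

```shell
# Write a fake fstab with one swap entry
cat > /tmp/fstab.demo <<'EOF'
UUID=1234-abcd / ext4 rw,relatime 0 1
/dev/sda2 swap swap defaults 0 0
EOF

# Same substitution as above: prefix '#' on any line containing "swap"
sed -i '/swap/s/^/#/' /tmp/fstab.demo

cat /tmp/fstab.demo
# The root filesystem line is untouched; the swap line is now commented out.
```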
Fix prometheus errors
# mount --make-rshared /
# echo "#!/bin/sh" > /etc/local.d/sharemetrics.start
# echo "mount --make-rshared /" >> /etc/local.d/sharemetrics.start
# chmod +x /etc/local.d/sharemetrics.start
# rc-update add local
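The two echo lines above just build a tiny two-line OpenRC local.d script that re-applies the mount on every boot. Sketched against a temporary directory (the temp path stands in for /etc/local.d):

```shell
# Build the same two-line script in a scratch directory
d=$(mktemp -d)
echo '#!/bin/sh' > "$d/sharemetrics.start"
echo 'mount --make-rshared /' >> "$d/sharemetrics.start"
chmod +x "$d/sharemetrics.start"

# local.d scripts must be executable and end in .start to run at boot
test -x "$d/sharemetrics.start" && echo "sharemetrics.start is executable"
```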
Fix id error messages
# uuidgen > /etc/machine-id
Update the containerd sandbox_image to the latest version (It's pause:3.9 in K8S v1.30)
# sed -i 's/pause:3.8/pause:3.9/' /etc/containerd/config.toml
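To check the substitution itself, you can run it against a one-line sample first (this is a stand-in for the real /etc/containerd/config.toml, not its full contents):

```shell
# Fake config line using the old pause image tag
printf 'sandbox_image = "registry.k8s.io/pause:3.8"\n' > /tmp/containerd.demo

# Same substitution as above
sed -i 's/pause:3.8/pause:3.9/' /tmp/containerd.demo

cat /tmp/containerd.demo
# The sandbox_image line now references pause:3.9
```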
Add Kubernetes services on all control-plane and worker nodes
# rc-update add containerd
# rc-update add kubelet
# rc-service containerd start
Enable time sync (not required on 3.20 if using the default chrony)
# rc-update add ntpd
# rc-service ntpd start
Option 1 - Using flannel as your CNI
NOTE: This may no longer be necessary on newer versions of the flannel package
# ln -s /usr/libexec/cni/flannel-amd64 /usr/libexec/cni/flannel
Option 2 - Using calico as your CNI
NOTE: This is required in 3.20 if you use calico
# ln -s /opt/cni/bin/calico /usr/libexec/cni/calico
# ln -s /opt/cni/bin/calico-ipam /usr/libexec/cni/calico-ipam
Pin your versions! If you update and the nodes get out of sync, it implodes.
# apk add 'kubelet=~1.30'
# apk add 'kubeadm=~1.30'
# apk add 'kubectl=~1.30'
Note: in the future you will have to manually add a newer version the same way to upgrade.
Your blank node is now ready! If it's the first, you'll want to make a control node.
3. Set up the Control Plane (New Cluster!)
Run this command to start the cluster and then apply a network.
# do not change the subnet; the flannel manifest expects 10.244.0.0/16
kubeadm init --pod-network-cidr=10.244.0.0/16 --node-name=$(hostname)
mkdir ~/.kube
ln -s /etc/kubernetes/admin.conf /root/.kube/config
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
You now have a control plane. This also gives you the command to run on your blank nodes to add them to this cluster as workers.
4. Join the Cluster
Run this to get the join command from the control plane which you would then run on your new worker.
kubeadm token create --print-join-command
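The printed command will look roughly like the following; the address, token, and hash here are placeholders for illustration, not real values:

```
kubeadm join 192.0.2.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```

Run the command it prints, as root, on the blank node you prepared in section 2.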
Bonus
Setup NFS Mounts on K8s
This sets up shared NFS storage to allow for automatic persistent volume claim fulfilment. You'll need to update the IP and export information for your environment.
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.1.31 \
  --set nfs.path=/exports/cluster00
Now set the default storage class for the cluster.
kubectl get storageclass
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Check on the System
Check on your system.
kubectl get nodes
kubectl get all
kubectl events -A
Cloud Bonus
A description of the cloud-init version is available at K8s_with_cloud-init