
Alpine Linux 🌲 K8s in 10 Minutes

Summary

This guide will allow you to deploy a fresh Alpine Linux install into a Kubernetes (K8s) cluster in less than 10 minutes.

Why

I set out to learn Kubernetes recently and built a k3s cluster using Alpine in an hour or so; it was a great experience. I figured the next step would be K8s, but I found no material on K8s for Alpine. This guide is the result of my first pass, plus the incorporation of high-quality notes from the contributors. Kubernetes 🦄 is awesome.

Contributors

* Mike Zolla (Github: https://github.com/Zolla-Zolla, LinkedIn: https://www.linkedin.com/in/mike-zolla-5903b8/)
* Matthew Emmett (Github: https://github.com/mattemmett, LinkedIn: https://www.linkedin.com/in/mattemmett/)
* Richard Aik (Github: https://github.com/richardaik, LinkedIn: https://www.linkedin.com/in/zushyongaik/)

Build K8s on Alpine Linux 🌲

Prerequisites 🔍

You need an Alpine Linux install (this guide is written against the version 3.17 standard image) with internet access. I recommend at least 2 CPUs, 4 GB of RAM, and 10 GB of disk for each node.

For HA control planes you'll need a minimum of three nodes

1. Setup the Repositories 📗

Update your repositories under /etc/apk/repositories to include community, edge community and testing.

Contents of /etc/apk/repositories

#/media/cdrom/apks
http://dl-cdn.alpinelinux.org/alpine/v3.20/main
http://dl-cdn.alpinelinux.org/alpine/v3.20/community
#http://dl-cdn.alpinelinux.org/alpine/edge/main
http://dl-cdn.alpinelinux.org/alpine/edge/community
http://dl-cdn.alpinelinux.org/alpine/edge/testing
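
After editing the file, refresh the package indexes so the new repositories take effect:

# apk update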

2. Node Setup 🖥️

This series of commands solves a series of incremental problems and sets up the system (if it is the first control node) for kubectl/kubeadm to run properly on next login by linking the config.

The result here gives you a functional node that can be joined to an existing cluster or can become the first control plane of a new cluster. 🎶

Note: 🔔 This build assumes flannel as the CNI for networking. Skip the flannel packages if you want to use calico. 🔔

Add kernel module for networking stuff

# echo "br_netfilter" > /etc/modules-load.d/k8s.conf # modprobe br_netfilter # sysctl net.ipv4.ip_forward=1 # echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf

Kernel stuff

# echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.conf # sysctl net.bridge.bridge-nf-call-iptables=1


Installing kubernetes packages

# apk add cni-plugin-flannel
# apk add cni-plugins
# apk add flannel
# apk add flannel-contrib-cni
# apk add kubelet
# apk add kubeadm
# apk add kubectl
# apk add containerd
# apk add uuidgen
# apk add nfs-utils

Get rid of swap

# cp -av /etc/fstab /etc/fstab.bak
# sed -i '/swap/s/^/#/' /etc/fstab
# swapoff -a
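
To confirm swap is fully off, free should now report 0 for swap:

# free -m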

Fix prometheus errors

# mount --make-rshared /
# echo "#!/bin/sh" > /etc/local.d/sharemetrics.start
# echo "mount --make-rshared /" >> /etc/local.d/sharemetrics.start
# chmod +x /etc/local.d/sharemetrics.start
# rc-update add local

Fix id error messages

# uuidgen > /etc/machine-id

Update the containerd sandbox_image to the latest version (it's pause:3.9 in K8s v1.30)

# sed -i 's/pause:3.8/pause:3.9/' /etc/containerd/config.toml
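
If /etc/containerd/config.toml does not exist on your install, generate the stock configuration first and then make the same edit:

# mkdir -p /etc/containerd
# containerd config default > /etc/containerd/config.toml
# sed -i 's/pause:3.8/pause:3.9/' /etc/containerd/config.toml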

Add kubernetes services on all controlplane and worker nodes

# rc-update add containerd
# rc-update add kubelet
# rc-service containerd start

Enable time sync (Not required in 3.20 if using default chrony)

# rc-update add ntpd
# rc-service ntpd start
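
To confirm the daemon came up (on 3.20 with the default chrony, check chronyd instead):

# rc-service ntpd status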

Option 1 - Using flannel as your CNI

NOTE: This may no longer be necessary on newer versions of the flannel package

# ln -s /usr/libexec/cni/flannel-amd64 /usr/libexec/cni/flannel

Option 2 - Using calico as your CNI

NOTE: This is required in 3.20 if you use calico

# ln -s /opt/cni/bin/calico /usr/libexec/cni/calico
# ln -s /opt/cni/bin/calico-ipam /usr/libexec/cni/calico-ipam

Pin your versions! If you update and the nodes get out of sync, it implodes.

# apk add 'kubelet=~1.30'
# apk add 'kubeadm=~1.30'
# apk add 'kubectl=~1.30'
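
To see which version is installed and what the repositories offer:

# apk policy kubelet
# kubeadm version -o short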

Note: In the future you will have to manually add a newer version the same way to upgrade.

Your blank node is now ready! If it's the first, you'll want to make a control node.

3. Setup the Control Plane (New Cluster!) 🦾

Run this command to start the cluster and then apply a network.

#do not change subnet
kubeadm init --pod-network-cidr=10.244.0.0/16 --node-name=$(hostname)
mkdir ~/.kube
ln -s /etc/kubernetes/admin.conf /root/.kube/config
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
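
The node will report NotReady until the flannel pods come up; you can watch progress across all namespaces with:

kubectl get pods -A -w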

You now have a control plane. This also gives you the command to run on your blank nodes to add them to this cluster as workers.

4. Join the Cluster 🐜

Run this on the control plane to get the join command, which you then run on your new worker.

kubeadm token create --print-join-command 
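
The printed command will look roughly like this; the address, token, and hash below are placeholders and must come from your own control plane:

kubeadm join 192.168.1.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash-from-your-control-plane>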

Bonus 💰

Setup NFS Mounts on K8s

This can be shared NFS storage to allow for automatic persistent volume claim fulfilment. You'll need to update the server IP and export path for your environment.

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.1.31 \
    --set nfs.path=/exports/cluster00
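
To check that dynamic provisioning works end to end, you can create a throwaway claim against the storage class the chart creates (nfs-client by default; the claim name and size here are arbitrary):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc test-claim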

Now set the default storage class for the cluster.

kubectl get storageclass
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Check on System 👀

Check on your system.

kubectl get nodes
kubectl get all
kubectl events -A

Cloud Bonus 🌦️

A description of the cloud-init version is available at K8s_with_cloud-init