Alpine Linux 🌲 K8s in 10 Minutes

Summary

This guide will allow you to deploy a fresh Alpine Linux install into a Kubernetes (K8s) cluster in less than 10 minutes.

Why

I set out to learn Kubernetes recently and built a k3s cluster using Alpine in an hour or so; it was a great experience. I figured the next step would be K8s, but I found no material on K8s for Alpine. This guide is the result of my first pass, incorporating high-quality notes from the contributors. Kubernetes 🦄 is awesome.

Contributors

Mike Zolla (https://github.com/Zolla-Zolla, https://www.linkedin.com/in/mike-zolla-5903b8/)
Matthew Emmett (https://github.com/mattemmett, https://www.linkedin.com/in/mattemmett/)
Richard Aik (https://github.com/richardaik, https://www.linkedin.com/in/zushyongaik/)


Build K8s on Alpine Linux 🌲

Prerequisites 🔍

You need an Alpine Linux install (this guide is written against the version 3.20 standard image) with internet access. I recommend at least 2 CPUs, 4 GB of RAM, and 10 GB of disk for each node.

For HA control planes you'll need a minimum of three nodes

1. Set up the Repositories 📗

Update your repositories file under /etc/apk/repositories to include community, edge community, and edge testing.

Contents of /etc/apk/repositories

#/media/cdrom/apks
http://dl-cdn.alpinelinux.org/alpine/v3.20/main
http://dl-cdn.alpinelinux.org/alpine/v3.20/community
#http://dl-cdn.alpinelinux.org/alpine/edge/main
http://dl-cdn.alpinelinux.org/alpine/edge/community
http://dl-cdn.alpinelinux.org/alpine/edge/testing
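
After saving the file, refresh the package index so apk picks up the new repositories:

# apk update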

2. Node Setup 🖥️

This series of commands solves a set of incremental problems and sets up the system (if it is the first control node) so that kubectl/kubeadm run properly on next login by linking the config.

The result here gives you a functional node that can be joined to an existing cluster or can become the first control plane of a new cluster. 🎶

Note: 🔔 This build assumes flannel as the CNI for networking. Skip the flannel packages if you want to use Calico. 🔔

Add kernel module for networking stuff

# echo "br_netfilter" > /etc/modules-load.d/k8s.conf # modprobe br_netfilter # sysctl net.ipv4.ip_forward=1 # echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf

Kernel stuff

# echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.conf # sysctl net.bridge.bridge-nf-call-iptables=1


Install the Kubernetes packages

# apk add cni-plugin-flannel
# apk add cni-plugins
# apk add flannel
# apk add flannel-contrib-cni
# apk add kubelet
# apk add kubeadm
# apk add kubectl
# apk add containerd
# apk add uuidgen
# apk add nfs-utils

Get rid of swap

# cp -av /etc/fstab /etc/fstab.bak
# sed -i '/swap/s/^/#/' /etc/fstab
# swapoff -a
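
To confirm swap is really off, /proc/swaps should list no devices:

# cat /proc/swaps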

Fix prometheus errors

# mount --make-rshared /
# echo "#!/bin/sh" > /etc/local.d/sharemetrics.start
# echo "mount --make-rshared /" >> /etc/local.d/sharemetrics.start
# chmod +x /etc/local.d/sharemetrics.start
# rc-update add local

Fix id error messages

# uuidgen > /etc/machine-id

Update the containerd sandbox_image to the latest version (it's pause:3.9 in K8s v1.30)

# sed -i 's/pause:3.8/pause:3.9/' /etc/containerd/config.toml
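
You can confirm the edit took effect; the sandbox_image line should now read pause:3.9:

# grep sandbox_image /etc/containerd/config.toml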

Add Kubernetes services on all control plane and worker nodes

# rc-update add containerd
# rc-update add kubelet
# rc-service containerd start
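
containerd should now report as started (kubelet only starts cleanly once the node is initialized or joined, so don't worry about it yet):

# rc-service containerd status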

Enable time sync (Not required in 3.20 if using default chrony)

# rc-update add ntpd
# rc-service ntpd start

Option 1 - Using flannel as your CNI

NOTE: This may no longer be necessary on newer versions of the flannel package

# ln -s /usr/libexec/cni/flannel-amd64 /usr/libexec/cni/flannel

Option 2 - Using calico as your CNI

NOTE: This is required in 3.20 if you use calico

# ln -s /opt/cni/bin/calico /usr/libexec/cni/calico
# ln -s /opt/cni/bin/calico-ipam /usr/libexec/cni/calico-ipam
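
If you go with Calico, you will also apply its manifest on the control plane in step 3 instead of the flannel one. A rough example; the version in the URL is illustrative only, so check the Calico docs for a current release:

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml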

Pin your versions! If you update and the nodes get out of sync, the cluster implodes.

# apk add 'kubelet=~1.30'
# apk add 'kubeadm=~1.30'
# apk add 'kubectl=~1.30'
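
To see which versions actually got installed and pinned:

# apk list --installed | grep kube
# kubeadm version -o short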

Note: In the future, you will have to manually install a newer version the same way to upgrade.

Your blank node is now ready! If it's the first, you'll want to make a control node.

3. Set up the Control Plane (New Cluster!) 🦾

Run the following commands to initialize the cluster and then apply the flannel network.

#do not change subnet
kubeadm init --pod-network-cidr=10.244.0.0/16 --node-name=$(hostname)
#link the admin config so kubectl works on next login
mkdir ~/.kube
ln -s /etc/kubernetes/admin.conf /root/.kube/config
#apply the flannel network
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
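
After a minute or two the node should report Ready, and the flannel pods should be running (depending on the flannel version they land in the kube-flannel or kube-system namespace):

kubectl get nodes
kubectl get pods -A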

You now have a control plane. This also gives you the command to run on your blank nodes to add them to this cluster as workers.

4. Join the Cluster 🐜

Run this on the control plane to get the join command, which you then run on your new worker.

kubeadm token create --print-join-command 
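
The output is a single command; it looks roughly like the line below, with your own control plane address, token, and CA hash filled in (the values here are placeholders):

kubeadm join 192.168.1.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:<hash>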

Bonus 💰

Setup NFS Mounts on K8s

This sets up shared NFS storage to allow persistent volume claims to be fulfilled automatically. You'll need to update the server IP and export path for your environment.

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.1.31 \
    --set nfs.path=/exports/cluster00

Now set the default storage class for the cluster.

kubectl get storageclass
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
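
To confirm dynamic provisioning works end to end, you can create a throwaway claim and watch it bind (test-claim is an arbitrary name; delete it afterwards):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc test-claim
kubectl delete pvc test-claim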

Check on System 👀

Check on your system.

kubectl get nodes
kubectl get all
kubectl events -A

Cloud Bonus 🌦️

A description of the cloud-init version is available at K8s_with_cloud-init