Alpine Linux 🌲 K8s in 10 Minutes
Summary
This guide will allow you to deploy a fresh Alpine Linux install into a Kubernetes (K8s) cluster in less than 10 minutes.
Why ✨
I set out to learn Kubernetes recently and built a k3s cluster using Alpine in an hour or so; it was a great experience. I figured the next step would be K8s, but I found no material on K8s for Alpine. This guide is the result of my first pass plus the incorporation of high-quality notes from the contributors. Kubernetes 🦄 is awesome.
Contributors
- Matthew Rogers Github LinkedIn
- Mike Zolla Github LinkedIn
- Matthew Emmett Github LinkedIn
- Richard Aik Github LinkedIn
Build K8s on Alpine Linux 🌲
Prerequisites 🔍
You need an Alpine Linux install (this guide was originally written against the version 3.17 standard image; the examples below reference 3.20) with internet access. I recommend at least 2 CPUs with 4 GB of RAM and 10 GB of disk per node.
For HA control planes you'll need a minimum of three nodes
1. Setup the Repositories 📗
Update your repositories under /etc/apk/repositories to include community, edge community, and edge testing.
Contents of /etc/apk/repositories:

#/media/cdrom/apks
http://dl-cdn.alpinelinux.org/alpine/v3.20/main
http://dl-cdn.alpinelinux.org/alpine/v3.20/community
#http://dl-cdn.alpinelinux.org/alpine/edge/main
http://dl-cdn.alpinelinux.org/alpine/edge/community
http://dl-cdn.alpinelinux.org/alpine/edge/testing
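After saving the file, refresh the package index so apk can see the new repositories:

# apk update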
2. Node Setup 🖥️
This series of commands solves a series of incremental problems and sets up the system (if it is the first control node) for kubectl/kubeadm to run properly on next login by linking the config.

The result here gives you a functional node that can be joined to an existing cluster or can become the first control plane of a new cluster. 🎶

Note: 🔔 This build assumes flannel as the CNI for networking. Skip the flannel packages if you want to use calico 🔔
Add kernel module for networking stuff
# echo "br_netfilter" > /etc/modules-load.d/k8s.conf # modprobe br_netfilter # sysctl net.ipv4.ip_forward=1 # echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
Kernel settings for bridged traffic
# echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.conf # sysctl net.bridge.bridge-nf-call-iptables=1
Installing kubernetes packages
# apk add cni-plugin-flannel
# apk add cni-plugins
# apk add flannel
# apk add flannel-contrib-cni
# apk add kubelet
# apk add kubeadm
# apk add kubectl
# apk add containerd
# apk add uuidgen
# apk add nfs-utils
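If you prefer, the same packages can be installed in a single apk invocation:

# apk add cni-plugin-flannel cni-plugins flannel flannel-contrib-cni \
    kubelet kubeadm kubectl containerd uuidgen nfs-utils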
Get rid of swap
# cp -av /etc/fstab /etc/fstab.bak
# sed -i '/swap/s/^/#/' /etc/fstab
# swapoff -a
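kubelet refuses to run with swap enabled by default, so verify nothing is still swapped on (the file should list no devices):

# cat /proc/swaps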
Fix prometheus errors
# mount --make-rshared /
# echo "#!/bin/sh" > /etc/local.d/sharemetrics.start
# echo "mount --make-rshared /" >> /etc/local.d/sharemetrics.start
# chmod +x /etc/local.d/sharemetrics.start
# rc-update add local
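You can confirm the root mount is now shared (findmnt comes from util-linux, which you may need to install separately on Alpine; the PROPAGATION column should read "shared"):

# findmnt -o TARGET,PROPAGATION /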
Fix id error messages
# uuidgen > /etc/machine-id
Update the containerd sandbox_image to the latest version (it's pause:3.9 in K8s v1.30)
# sed -i 's/pause:3.8/pause:3.9/' /etc/containerd/config.toml
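If your install has no /etc/containerd/config.toml yet, you can generate the default config first and then run the sed above; this uses containerd's standard config subcommand (whether the Alpine package ships a config file is an assumption about your setup):

# containerd config default > /etc/containerd/config.toml
# grep sandbox_image /etc/containerd/config.toml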
Add kubernetes services on all controlplane and worker nodes
# rc-update add containerd
# rc-update add kubelet
# rc-service containerd start
Enable time sync (Not required in 3.20 if using default chrony)
# rc-update add ntpd
# rc-service ntpd start
Option 1 - Using flannel as your CNI
NOTE: This may no longer be necessary on newer versions of the flannel package
# ln -s /usr/libexec/cni/flannel-amd64 /usr/libexec/cni/flannel
Option 2 - Using calico as your CNI

NOTE: This is required in 3.20 if you use calico
# ln -s /opt/cni/bin/calico /usr/libexec/cni/calico
# ln -s /opt/cni/bin/calico-ipam /usr/libexec/cni/calico-ipam
Pin your versions! If an update leaves the nodes out of sync, the cluster implodes.
# apk add 'kubelet=~1.30'
# apk add 'kubeadm=~1.30'
# apk add 'kubectl=~1.30'

Note: In the future you will have to manually add a newer version the same way to upgrade.
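You can confirm what actually got installed:

# kubeadm version
# kubelet --version
# kubectl version --client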
Your blank node is now ready! If it's the first, you'll want to make a control node.
3. Setup the Control Plane (New Cluster!) 🦾
Run these commands to initialize the cluster and then apply a pod network.
#do not change subnet
kubeadm init --pod-network-cidr=10.244.0.0/16 --node-name=$(hostname)
mkdir ~/.kube
ln -s /etc/kubernetes/admin.conf /root/.kube/config
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
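After a minute or two the flannel pods should be Running (the manifest above creates them in a kube-flannel namespace) and the node should report Ready:

kubectl get pods -n kube-flannel
kubectl get nodes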
You now have a control plane. The kubeadm init output also gives you the command to run on your blank nodes to add them to this cluster as workers.
4. Join the cluster. 🐜
Run this to get the join command from the control plane which you would then run on your new worker.
kubeadm token create --print-join-command
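The printed command will look roughly like the following (the address, token, and hash here are placeholders; use the values from your own control plane), and is run as root on each worker:

kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>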
Bonus 💰
Setup NFS Mounts on K8s
Shared NFS storage allows persistent volume claims to be fulfilled automatically. Update the server IP and export path below with your own NFS server's information.
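The provisioner is installed with helm, which the steps above do not install; on Alpine it should be available as a package from the community repository configured earlier:

# apk add helm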
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.1.31 \
  --set nfs.path=/exports/cluster00
Now set the default storage class for the cluster.
kubectl get storageclass
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
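To verify dynamic provisioning end to end, you can create a throwaway claim (test-claim is just an example name), check that it becomes Bound, and delete it:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc test-claim
kubectl delete pvc test-claim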
Check on System 👀
Check on your system.
kubectl get nodes
kubectl get all
kubectl events -A
Cloud Bonus 🌦️
A description of the cloud-init version is available at K8s_with_cloud-init