Latest revision as of 01:00, 4 May 2024
Alpine Linux 🌲 K8s in 10 Minutes
Summary
This guide will allow you to deploy a fresh Alpine Linux install into a Kubernetes (K8s) cluster in less than 10 minutes.
Why ✨
I went to learn Kubernetes recently and built a k3s cluster using Alpine in an hour or so; it was a great experience. I figured the next step would be K8s, but I found no material on K8s for Alpine. This guide is the result of my first pass and the incorporation of high-quality notes from the contributors. Kubernetes 🦄 is awesome.
Contributors
Build K8s on Alpine Linux 🌲
Prerequisites 🔍
You need an Alpine Linux install (this guide is written against the version 3.17 standard image) with internet access. I recommend at least 2 CPUs, 4 GB of RAM, and 10 GB of disk for each node.
For HA control planes you'll need a minimum of three nodes.
1. Setup the Repositories 📗
Update your repositories under /etc/apk/repositories to include community, edge community, and edge testing.
Contents of /etc/apk/repositories:

#/media/cdrom/apks
http://dl-cdn.alpinelinux.org/alpine/v3.17/main
http://dl-cdn.alpinelinux.org/alpine/v3.17/community
#http://dl-cdn.alpinelinux.org/alpine/edge/main
http://dl-cdn.alpinelinux.org/alpine/edge/community
http://dl-cdn.alpinelinux.org/alpine/edge/testing
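After saving the repositories file, a quick sanity check is worthwhile (a sketch; assumes your node can reach the Alpine CDN):

```shell
# Refresh the package index from the newly added repositories
apk update

# Confirm the Kubernetes packages are now visible to apk
apk search kubeadm
```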
2. Node Setup 🖥️
This series of commands solves a series of incremental problems and, if this is the first control node, sets up the system so that kubectl/kubeadm run properly on next login by linking the config.
The result here gives you a functional node that can be joined to an existing cluster or can become the first control plane of a new cluster. 🎶
🔔 NOTE: This build assumes flannel as the CNI for networking. 🔔

Add kernel module for networking stuff
# echo "br_netfilter" > /etc/modules-load.d/k8s.conf
# modprobe br_netfilter
# sysctl net.ipv4.ip_forward=1
# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
# apk add cni-plugin-flannel
# apk add cni-plugins
# apk add flannel
# apk add flannel-contrib-cni
# apk add kubelet
# apk add kubeadm
# apk add kubectl
# apk add containerd
# apk add uuidgen
# apk add nfs-utils
Get rid of swap
# cat /etc/fstab | grep -v swap > temp.fstab
# cat temp.fstab > /etc/fstab
# rm temp.fstab
# swapoff -a
Fix Prometheus errors

# mount --make-rshared /
# echo "#!/bin/sh" > /etc/local.d/sharemetrics.start
# echo "mount --make-rshared /" >> /etc/local.d/sharemetrics.start
# chmod +x /etc/local.d/sharemetrics.start
# rc-update add local
Fix id error messages
# uuidgen > /etc/machine-id
Add services
# rc-update add containerd
# rc-update add kubelet
Sync time
# rc-update add ntpd
# rc-service ntpd start
# rc-service containerd start
Fix flannel
NOTE: This may no longer be necessary on newer versions of the flannel package
# ln -s /usr/libexec/cni/flannel-amd64 /usr/libexec/cni/flannel
Kernel stuff
# echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.conf
# sysctl net.bridge.bridge-nf-call-iptables=1
Pin your versions! If you update and the nodes get out of sync, it implodes.
# apk add 'kubelet=~1.27'
# apk add 'kubeadm=~1.27'
# apk add 'kubectl=~1.27'

NOTE: In the future you will have to manually add a newer version the same way to upgrade.
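To confirm the pins actually took effect, you can list the installed Kubernetes packages (a quick sanity check, assuming the packages installed from the repositories above):

```shell
# Show the installed versions of the three pinned packages
apk list --installed | grep -E 'kube(let|adm|ctl)'
```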
Your blank node is now ready! If it's the first, you'll want to make a control node.
3. Setup the Control Plane (New Cluster!) 🦾
Run this command to start the cluster and then apply a network.
#do not change subnet
kubeadm init --pod-network-cidr=10.244.0.0/16 --node-name=$(hostname)
mkdir ~/.kube
ln -s /etc/kubernetes/admin.conf /root/.kube/config
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
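If the init succeeded, the control plane components and flannel should come up within a minute or two. A hedged way to watch this happen (pod names vary by version; this only assumes the flannel manifest applied above):

```shell
# Watch the system pods; coredns and the flannel pods should reach Running
kubectl get pods -n kube-system

# The node should transition from NotReady to Ready once the CNI is up
kubectl get nodes
```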
You now have a control plane. This also gives you the command to run on your blank nodes to add them to this cluster as workers.
4. Join the Cluster 🐜
Run this to get the join command from the control plane, which you then run on your new worker.
kubeadm token create --print-join-command
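The printed command is then run on the worker node. The shape is roughly as below; the address, token, and hash are placeholders (hypothetical values printed by your own control plane, not reusable ones):

```shell
# Run ON THE WORKER NODE - every value here is an illustrative placeholder
kubeadm join 192.168.1.30:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```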
Bonus 💰
Setup NFS Mounts on K8s
This can be shared NFS storage to allow for automatic persistent volume claim fulfillment. You'll need to update the IP and export information for your NFS server.
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.1.31 \
  --set nfs.path=/exports/cluster00
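To verify the provisioner works end to end, you can create a throwaway claim against the nfs-client storage class (a sketch; the claim name and size are arbitrary):

```shell
# Create a test PVC; the provisioner should bind it automatically
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF

# The claim should reach STATUS Bound within a few seconds
kubectl get pvc test-claim
```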
Now set the default storage class for the cluster.
kubectl get storageclass
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Check on System 👀
Check on your system.
kubectl get nodes
kubectl get all
kubectl events -A
Cloud Bonus 🌦️
A description of the cloud-init version is available at K8s_with_cloud-init.