Revision as of 04:27, 20 December 2022

Alpine Linux 🌲 K8s in 10 Minutes

Summary

This guide walks you through deploying a fresh Alpine Linux install into a Kubernetes (K8s) cluster in less than 10 minutes.

Why

I set out to learn Kubernetes recently and built a k3s cluster using Alpine in an hour or so; it was a great experience. I figured the next step would be K8s, but I found no material on K8s for Alpine. This guide is the result of my first pass and the incorporation of high-quality notes from the contributors. Kubernetes 🦄 is awesome.

Contributors



Build K8s on Alpine Linux 🌲

Prerequisites 🔍

You need an Alpine Linux install (this guide is written against the version 3.15 standard image) with internet access. I recommend at least 2 CPUs, 4 GB of RAM, and 10 GB of disk for each node.

For HA control planes you'll need a minimum of three nodes

1. Setup the Repositories 📗

Update your repositories under /etc/apk/repositories to include community, edge community, and edge testing.

#/media/cdrom/apks
http://dl-cdn.alpinelinux.org/alpine/v3.15/main
http://dl-cdn.alpinelinux.org/alpine/v3.15/community
#http://dl-cdn.alpinelinux.org/alpine/edge/main
http://dl-cdn.alpinelinux.org/alpine/edge/community
http://dl-cdn.alpinelinux.org/alpine/edge/testing
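After saving the file, a quick sanity check (not part of the original guide) confirms the new repositories are reachable and that the Kubernetes packages are now visible:

```shell
# refresh the package index from all configured repositories
apk update
# kubeadm should now appear in the search results
apk search kubeadm
```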

2. Node Setup 🖥️

This series of commands solves a set of incremental problems and sets up the system (if it is the first control node) so that kubectl/kubeadm run properly on next login by linking the config.

The result here gives you a functional node that can be joined to an existing cluster or can become the first control plane of a new cluster. 🎶

*** 🔔 This build assumes flannel as the CNI plugin for networking 🔔 ***

#add kernel module for networking stuff
echo "br_netfilter" > /etc/modules-load.d/k8s.conf
modprobe br_netfilter
apk add cni-plugin-flannel
apk add cni-plugins
apk add flannel
apk add flannel-contrib-cni
apk add kubelet
apk add kubeadm
apk add kubectl
apk add docker
apk add uuidgen
apk add nfs-utils
#get rid of swap
grep -v swap /etc/fstab > temp.fstab
cat temp.fstab > /etc/fstab
rm temp.fstab
swapoff -a
#Fix prometheus errors
mount --make-rshared /
echo "#!/bin/sh" > /etc/local.d/sharemetrics.start
echo "mount --make-rshared /" >> /etc/local.d/sharemetrics.start
chmod +x /etc/local.d/sharemetrics.start
rc-update add local
#Fix id error messages
uuidgen > /etc/machine-id
#Add services
rc-update add docker
rc-update add kubelet
#Sync time
rc-update add ntpd
/etc/init.d/ntpd start
/etc/init.d/docker start
#fix flannel
ln -s /usr/libexec/cni/flannel-amd64 /usr/libexec/cni/flannel
#kernel stuff
echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.conf
sysctl net.bridge.bridge-nf-call-iptables=1
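Before moving on, a few spot checks (my own additions, not part of the original script) can confirm the setup took effect:

```shell
# the bridge netfilter module should be loaded
lsmod | grep br_netfilter
# /proc/swaps should list no active swap devices
cat /proc/swaps
# the sysctl flag should report 1
sysctl net.bridge.bridge-nf-call-iptables
# docker, kubelet, ntpd, and local should be scheduled for boot
rc-update show | grep -E 'docker|kubelet|ntpd|local'
```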

Your blank node is now ready! If it's the first, you'll want to make a control node.

3. Setup the Control Plane (New Cluster!) 🦾

Run this command to start the cluster and then apply a network.

#do not change subnet
kubeadm init --pod-network-cidr=10.244.0.0/16 --node-name=master
mkdir ~/.kube
ln -s /etc/kubernetes/admin.conf /root/.kube/config
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
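Once kubeadm init finishes and flannel is applied, the control plane node should report Ready within a minute or two. A hedged way to wait for that state (assuming the node name master from the init command above):

```shell
# block until the master node reports the Ready condition, or time out
kubectl wait --for=condition=Ready node/master --timeout=120s
# the flannel and kube-system pods should all reach Running
kubectl get pods -n kube-system
```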

You now have a control plane. This also gives you the command to run on your blank nodes to add them to this cluster as workers.

4. Join the Cluster 🐜

Run this on the control plane to get the join command, which you then run on your new worker.

kubeadm token create --print-join-command 
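The printed command is then run on each blank worker node. The shape is sketched below; the address, token, and hash are placeholders, not real values:

```shell
# run on the worker node; substitute the exact command printed above
kubeadm join 192.168.1.30:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```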

Bonus 💰

Setup NFS Mounts on K8s

This uses shared NFS storage to allow automatic persistent volume claim fulfilment. You'll need to update the NFS server IP and export path below to match your environment.

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.1.31 \
    --set nfs.path=/exports/cluster00
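With the provisioner installed, a PersistentVolumeClaim against its default nfs-client storage class should be provisioned and bound automatically. A minimal sketch to verify (the claim name and size are made up for illustration):

```shell
# create a small test claim against the nfs-client storage class
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
# the claim should move to Bound once the provisioner creates the volume
kubectl get pvc test-claim
# clean up afterwards
kubectl delete pvc test-claim
```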

Now set the default storage class for the cluster.

kubectl get storageclass
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Check on System 👀

Check on your system.

kubectl get nodes
kubectl get all