<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.alpinelinux.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Dvh312</id>
	<title>Alpine Linux - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.alpinelinux.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Dvh312"/>
	<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/wiki/Special:Contributions/Dvh312"/>
	<updated>2026-05-11T00:37:01Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.40.0</generator>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Postgresql_16&amp;diff=29052</id>
		<title>Postgresql 16</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Postgresql_16&amp;diff=29052"/>
		<updated>2025-02-18T02:01:01Z</updated>

		<summary type="html">&lt;p&gt;Dvh312: Fix restart command&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PostgreSQL is a well-known open-source database that scales well and is easy to use. In Alpine v3.20 we can install the latest version using the package postgresql16&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
{{Cmd|apk add {{pkg|postgresql16|arch=}} {{pkg|postgresql16-contrib|arch=}} {{pkg|postgresql16-openrc|arch=}}}}&lt;br /&gt;
{{Cmd|rc-update add postgresql}}&lt;br /&gt;
{{Cmd|rc-service postgresql start}}&lt;br /&gt;
&lt;br /&gt;
This will start the PostgreSQL 16 server and perform the initial database setup.&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Log in as the postgres user and start psql to create a new user and database. The names {{ic|exampleuser}} and {{ic|exampledb}} and the password below are placeholders; substitute your own. Note that {{ic|user}} itself is a reserved SQL keyword and cannot be used unquoted as a role name:&lt;br /&gt;
{{Cmd|su postgres}}&lt;br /&gt;
{{Cmd|psql}}&lt;br /&gt;
{{Cmd|create user exampleuser with encrypted password &#039;password&#039;;}}&lt;br /&gt;
{{Cmd|create database exampledb;}}&lt;br /&gt;
{{Cmd|grant all privileges on database exampledb to exampleuser;}}&lt;br /&gt;
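&lt;br /&gt;
Since PostgreSQL 15, ordinary users no longer have create rights on the {{ic|public}} schema by default, so the new user may still be unable to create tables. If so, connect to the new database as the postgres superuser and grant them (the role and database names here are placeholders for whichever ones you created above):&lt;br /&gt;
{{Cmd|\c exampledb}}&lt;br /&gt;
{{Cmd|grant all on schema public to exampleuser;}}&lt;br /&gt;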
&lt;br /&gt;
=== Network access ===&lt;br /&gt;
&lt;br /&gt;
By default only local access is allowed to PostgreSQL. To allow other networked services to access the database we need to configure PostgreSQL to allow external connections.&lt;br /&gt;
&lt;br /&gt;
Edit the {{Path|/etc/postgresql16/postgresql.conf}} file with &amp;lt;code&amp;gt;nano&amp;lt;/code&amp;gt; or any other editor: {{ic|&amp;lt;editor&amp;gt; /etc/postgresql16/postgresql.conf}}&lt;br /&gt;
Find the line that starts with &amp;lt;pre&amp;gt;#listen_addresses = &#039;localhost&#039;&amp;lt;/pre&amp;gt; &lt;br /&gt;
Uncomment it and change it to the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;listen_addresses = &#039;*&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you want it to listen on a specific interface only, replace {{ic|*}} with that interface&#039;s IP address, e.g. {{ic|192.168.1.2}} ({{ic|listen_addresses}} takes host addresses, not CIDR ranges such as 192.168.1.2/24).&lt;br /&gt;
Save the file, then edit the next config file.&lt;br /&gt;
&lt;br /&gt;
Modify the {{Path|/etc/postgresql16/pg_hba.conf}} file with &amp;lt;code&amp;gt;nano&amp;lt;/code&amp;gt; or any other editor: {{ic|&amp;lt;editor&amp;gt; /etc/postgresql16/pg_hba.conf}}&lt;br /&gt;
Look for the line: &amp;lt;pre&amp;gt;host    all             all             127.0.0.1/32            md5&amp;lt;/pre&amp;gt;&lt;br /&gt;
And change it to: &amp;lt;pre&amp;gt;host all all 0.0.0.0/0 md5&amp;lt;/pre&amp;gt;&lt;br /&gt;
This line allows connections from any IP address and requires password authentication ({{ic|md5}}). On PostgreSQL 14 and later, {{ic|scram-sha-256}} is the recommended authentication method.&lt;br /&gt;
Restart the server to allow incoming connections from other hosts. {{Cmd|rc-service postgresql restart}}&lt;br /&gt;
&lt;br /&gt;
Allow the port through the firewall. For [[UFW]] firewall type: {{Cmd|ufw allow 5432}}&lt;br /&gt;
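&lt;br /&gt;
To verify that remote access works, try connecting from another host that has the postgresql client installed (substitute the server&#039;s IP address and the role and database you created earlier):&lt;br /&gt;
{{Cmd|psql -h 192.168.1.2 -U exampleuser -d exampledb}}&lt;br /&gt;
If psql prompts for the password and opens a session, the network configuration is working.&lt;br /&gt;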
&lt;br /&gt;
This is a basic configuration. You can configure the PostgreSQL server to allow only certain networks or IPs to connect, but that&#039;s beyond the scope of this documentation.&lt;br /&gt;
&lt;br /&gt;
== Upgrading PostgreSQL ==&lt;br /&gt;
&lt;br /&gt;
{{Todo| Need to add Notes on upgrading PostgreSQL }}&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
{{Todo|Need to add troubleshooting examples}}&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* https://blog.devart.com/configure-postgresql-to-allow-remote-connection.html&lt;br /&gt;
&lt;br /&gt;
[[Category:Database]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Server]]&lt;/div&gt;</summary>
		<author><name>Dvh312</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=K8s&amp;diff=23097</id>
		<title>K8s</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=K8s&amp;diff=23097"/>
		<updated>2023-04-05T05:45:37Z</updated>

		<summary type="html">&lt;p&gt;Dvh312: Enable ip_forward&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Alpine Linux &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;evergreen_tree&amp;quot;&amp;gt;🌲&amp;lt;/span&amp;gt; K8s in 10 Minutes =&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
This guide will allow you to deploy a fresh Alpine Linux install into a Kubernetes (K8s) cluster in less than 10 minutes.&lt;br /&gt;
&lt;br /&gt;
== Why &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;sparkles&amp;quot;&amp;gt;✨&amp;lt;/span&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
I set out to learn Kubernetes recently and built a k3s cluster using Alpine in an hour or so; it was a great experience. I figured the next step would be K8s, but I found no material on K8s for Alpine. This guide is the result of my first pass, incorporating high-quality notes from the contributors. Kubernetes &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;unicorn&amp;quot;&amp;gt;🦄&amp;lt;/span&amp;gt; is awesome.&lt;br /&gt;
&lt;br /&gt;
== Contributors ==&lt;br /&gt;
&lt;br /&gt;
* Matthew Rogers [https://github.com/RamboRogers Github] [https://www.linkedin.com/in/matthewrogerscissp/ LinkedIn]&lt;br /&gt;
* Mike Zolla [https://github.com/Zolla-Zolla Github] [https://www.linkedin.com/in/mike-zolla-5903b8/ LinkedIn]&lt;br /&gt;
* Matthew Emmett [https://github.com/mattemmett Github] [https://www.linkedin.com/in/mattemmett/ LinkedIn]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
= Build K8s on Alpine Linux &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;evergreen_tree&amp;quot;&amp;gt;🌲&amp;lt;/span&amp;gt; =&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;mag&amp;quot;&amp;gt;🔍&amp;lt;/span&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
You need an [https://alpinelinux.org/ Alpine Linux] install (this guide is written against the version 3.17 standard image) with internet access. I recommend at least 2 CPUs, 4 GB of RAM and 10 GB of disk for each node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;For HA control planes you&#039;ll need a minimum of three nodes&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
=== 1. Setup the Repositories &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;green_book&amp;quot;&amp;gt;📗&amp;lt;/span&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
Update your repositories in /etc/apk/repositories to include community, edge community and edge testing.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;#/media/cdrom/apks&lt;br /&gt;
http://dl-cdn.alpinelinux.org/alpine/v3.17/main&lt;br /&gt;
http://dl-cdn.alpinelinux.org/alpine/v3.17/community&lt;br /&gt;
#http://dl-cdn.alpinelinux.org/alpine/edge/main&lt;br /&gt;
http://dl-cdn.alpinelinux.org/alpine/edge/community&lt;br /&gt;
http://dl-cdn.alpinelinux.org/alpine/edge/testing&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== 2. Node Setup &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;desktop_computer&amp;quot;&amp;gt;🖥️&amp;lt;/span&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
This series of commands solves a set of incremental problems and sets up the system (if it is the first control node) so that kubectl/kubeadm run properly on next login by linking the config.&lt;br /&gt;
&lt;br /&gt;
The result here gives you a functional node that can be joined to an existing cluster or can become the first control plane of a new cluster. &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;notes&amp;quot;&amp;gt;🎶&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;*** &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;bell&amp;quot;&amp;gt;🔔&amp;lt;/span&amp;gt; This build assumes CNI usage of flannel for networking &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;bell&amp;quot;&amp;gt;🔔&amp;lt;/span&amp;gt; ***&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;#add kernel module for networking stuff&lt;br /&gt;
echo &amp;quot;br_netfilter&amp;quot; &amp;gt; /etc/modules-load.d/k8s.conf&lt;br /&gt;
modprobe br_netfilter&lt;br /&gt;
echo 1 &amp;gt; /proc/sys/net/ipv4/ip_forward&lt;br /&gt;
apk add cni-plugin-flannel&lt;br /&gt;
apk add cni-plugins&lt;br /&gt;
apk add flannel&lt;br /&gt;
apk add flannel-contrib-cni&lt;br /&gt;
apk add kubelet&lt;br /&gt;
apk add kubeadm&lt;br /&gt;
apk add kubectl&lt;br /&gt;
apk add containerd&lt;br /&gt;
apk add uuidgen&lt;br /&gt;
apk add nfs-utils&lt;br /&gt;
#get rid of swap&lt;br /&gt;
cat /etc/fstab | grep -v swap &amp;gt; temp.fstab&lt;br /&gt;
cat temp.fstab &amp;gt; /etc/fstab&lt;br /&gt;
rm temp.fstab&lt;br /&gt;
swapoff -a&lt;br /&gt;
#Fix prometheus errors&lt;br /&gt;
mount --make-rshared /&lt;br /&gt;
echo &amp;quot;#!/bin/sh&amp;quot; &amp;gt; /etc/local.d/sharemetrics.start&lt;br /&gt;
echo &amp;quot;mount --make-rshared /&amp;quot; &amp;gt;&amp;gt; /etc/local.d/sharemetrics.start&lt;br /&gt;
chmod +x /etc/local.d/sharemetrics.start&lt;br /&gt;
rc-update add local&lt;br /&gt;
#Fix id error messages&lt;br /&gt;
uuidgen &amp;gt; /etc/machine-id&lt;br /&gt;
#Add services&lt;br /&gt;
rc-update add containerd&lt;br /&gt;
rc-update add kubelet&lt;br /&gt;
#Sync time&lt;br /&gt;
rc-update add ntpd&lt;br /&gt;
/etc/init.d/ntpd start&lt;br /&gt;
/etc/init.d/containerd start&lt;br /&gt;
#fix flannel&lt;br /&gt;
ln -s /usr/libexec/cni/flannel-amd64 /usr/libexec/cni/flannel&lt;br /&gt;
#kernel stuff&lt;br /&gt;
echo &amp;quot;net.bridge.bridge-nf-call-iptables=1&amp;quot; &amp;gt;&amp;gt; /etc/sysctl.conf&lt;br /&gt;
sysctl net.bridge.bridge-nf-call-iptables=1&lt;br /&gt;
#Pin your versions!  If you update and the nodes get out of sync, it implodes.&lt;br /&gt;
apk add &#039;kubelet=~1.26&#039;&lt;br /&gt;
apk add &#039;kubeadm=~1.26&#039;&lt;br /&gt;
apk add &#039;kubectl=~1.26&#039;&lt;br /&gt;
#Note that in the future you will manually have to add a newer version the same way to upgrade.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Your blank node is now ready! If it&#039;s the first, you&#039;ll want to make a control node.&lt;br /&gt;
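&lt;br /&gt;
A quick sanity check before continuing; these commands just confirm that the pieces installed above are present and running:&lt;br /&gt;
&amp;lt;pre&amp;gt;rc-service containerd status&lt;br /&gt;
kubeadm version&lt;br /&gt;
kubelet --version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;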
&lt;br /&gt;
=== 3. Setup the Control Plane (New Cluster!) &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;mechanical_arm&amp;quot;&amp;gt;🦾&amp;lt;/span&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
Run this command to start the cluster and then apply a network.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#do not change subnet&lt;br /&gt;
kubeadm init --pod-network-cidr=10.244.0.0/16 --node-name=$(hostname)&lt;br /&gt;
mkdir ~/.kube&lt;br /&gt;
ln -s /etc/kubernetes/admin.conf /root/.kube/config&lt;br /&gt;
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You now have a control plane. The kubeadm init output also includes the command to run on your blank nodes to add them to this cluster as workers.&lt;br /&gt;
&lt;br /&gt;
=== 4. Join the cluster. &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;ant&amp;quot;&amp;gt;🐜&amp;lt;/span&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
Run this on the control plane to get the join command, which you then run on your new worker.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kubeadm token create --print-join-command &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
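The printed command will look roughly like this (the IP, token and hash below are made-up placeholders; use the values your own control plane prints):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kubeadm join 192.168.1.10:6443 --token abcdef.0123456789abcdef \&lt;br /&gt;
    --discovery-token-ca-cert-hash sha256:&amp;lt;hash&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;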
&lt;br /&gt;
= Bonus &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;moneybag&amp;quot;&amp;gt;💰&amp;lt;/span&amp;gt; =&lt;br /&gt;
&lt;br /&gt;
== Setup NFS Mounts on K8s ==&lt;br /&gt;
&lt;br /&gt;
This sets up shared NFS storage so that persistent volume claims can be fulfilled automatically. Update the server IP and export path below with your own NFS server details.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/&lt;br /&gt;
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \&lt;br /&gt;
    --set nfs.server=192.168.1.31 \&lt;br /&gt;
    --set nfs.path=/exports/cluster00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now set the default storage class for the cluster.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;kubectl get storageclass&lt;br /&gt;
kubectl patch storageclass nfs-client -p &#039;{&amp;quot;metadata&amp;quot;: {&amp;quot;annotations&amp;quot;:{&amp;quot;storageclass.kubernetes.io/is-default-class&amp;quot;:&amp;quot;true&amp;quot;}}}&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
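To confirm the default storage class works, you can create a small test claim (the claim name here is arbitrary) and check that it becomes Bound:&lt;br /&gt;
&amp;lt;pre&amp;gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: PersistentVolumeClaim&lt;br /&gt;
metadata:&lt;br /&gt;
  name: test-claim&lt;br /&gt;
spec:&lt;br /&gt;
  accessModes: [&amp;quot;ReadWriteMany&amp;quot;]&lt;br /&gt;
  resources:&lt;br /&gt;
    requests:&lt;br /&gt;
      storage: 1Mi&lt;br /&gt;
EOF&lt;br /&gt;
kubectl get pvc test-claim&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;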
== Check on System &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;eyes&amp;quot;&amp;gt;👀&amp;lt;/span&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Check on your system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;kubectl get nodes&lt;br /&gt;
kubectl get all&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dvh312</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=K8s&amp;diff=23096</id>
		<title>K8s</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=K8s&amp;diff=23096"/>
		<updated>2023-04-05T05:18:38Z</updated>

		<summary type="html">&lt;p&gt;Dvh312: Update to Alpine 3.17&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Alpine Linux &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;evergreen_tree&amp;quot;&amp;gt;🌲&amp;lt;/span&amp;gt; K8s in 10 Minutes =&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
This guide will allow you to deploy a fresh Alpine Linux install into a Kubernetes (K8s) cluster in less than 10 minutes.&lt;br /&gt;
&lt;br /&gt;
== Why &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;sparkles&amp;quot;&amp;gt;✨&amp;lt;/span&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
I set out to learn Kubernetes recently and built a k3s cluster using Alpine in an hour or so; it was a great experience. I figured the next step would be K8s, but I found no material on K8s for Alpine. This guide is the result of my first pass, incorporating high-quality notes from the contributors. Kubernetes &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;unicorn&amp;quot;&amp;gt;🦄&amp;lt;/span&amp;gt; is awesome.&lt;br /&gt;
&lt;br /&gt;
== Contributors ==&lt;br /&gt;
&lt;br /&gt;
* Matthew Rogers [https://github.com/RamboRogers Github] [https://www.linkedin.com/in/matthewrogerscissp/ LinkedIn]&lt;br /&gt;
* Mike Zolla [https://github.com/Zolla-Zolla Github] [https://www.linkedin.com/in/mike-zolla-5903b8/ LinkedIn]&lt;br /&gt;
* Matthew Emmett [https://github.com/mattemmett Github] [https://www.linkedin.com/in/mattemmett/ LinkedIn]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
= Build K8s on Alpine Linux &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;evergreen_tree&amp;quot;&amp;gt;🌲&amp;lt;/span&amp;gt; =&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;mag&amp;quot;&amp;gt;🔍&amp;lt;/span&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
You need an [https://alpinelinux.org/ Alpine Linux] install (this guide is written against the version 3.17 standard image) with internet access. I recommend at least 2 CPUs, 4 GB of RAM and 10 GB of disk for each node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;For HA control planes you&#039;ll need a minimum of three nodes&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
=== 1. Setup the Repositories &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;green_book&amp;quot;&amp;gt;📗&amp;lt;/span&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
Update your repositories in /etc/apk/repositories to include community, edge community and edge testing.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;#/media/cdrom/apks&lt;br /&gt;
http://dl-cdn.alpinelinux.org/alpine/v3.17/main&lt;br /&gt;
http://dl-cdn.alpinelinux.org/alpine/v3.17/community&lt;br /&gt;
#http://dl-cdn.alpinelinux.org/alpine/edge/main&lt;br /&gt;
http://dl-cdn.alpinelinux.org/alpine/edge/community&lt;br /&gt;
http://dl-cdn.alpinelinux.org/alpine/edge/testing&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== 2. Node Setup &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;desktop_computer&amp;quot;&amp;gt;🖥️&amp;lt;/span&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
This series of commands solves a set of incremental problems and sets up the system (if it is the first control node) so that kubectl/kubeadm run properly on next login by linking the config.&lt;br /&gt;
&lt;br /&gt;
The result here gives you a functional node that can be joined to an existing cluster or can become the first control plane of a new cluster. &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;notes&amp;quot;&amp;gt;🎶&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;*** &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;bell&amp;quot;&amp;gt;🔔&amp;lt;/span&amp;gt; This build assumes CNI usage of flannel for networking &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;bell&amp;quot;&amp;gt;🔔&amp;lt;/span&amp;gt; ***&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;#add kernel module for networking stuff&lt;br /&gt;
echo &amp;quot;br_netfilter&amp;quot; &amp;gt; /etc/modules-load.d/k8s.conf&lt;br /&gt;
modprobe br_netfilter&lt;br /&gt;
apk add cni-plugin-flannel&lt;br /&gt;
apk add cni-plugins&lt;br /&gt;
apk add flannel&lt;br /&gt;
apk add flannel-contrib-cni&lt;br /&gt;
apk add kubelet&lt;br /&gt;
apk add kubeadm&lt;br /&gt;
apk add kubectl&lt;br /&gt;
apk add containerd&lt;br /&gt;
apk add uuidgen&lt;br /&gt;
apk add nfs-utils&lt;br /&gt;
#get rid of swap&lt;br /&gt;
cat /etc/fstab | grep -v swap &amp;gt; temp.fstab&lt;br /&gt;
cat temp.fstab &amp;gt; /etc/fstab&lt;br /&gt;
rm temp.fstab&lt;br /&gt;
swapoff -a&lt;br /&gt;
#Fix prometheus errors&lt;br /&gt;
mount --make-rshared /&lt;br /&gt;
echo &amp;quot;#!/bin/sh&amp;quot; &amp;gt; /etc/local.d/sharemetrics.start&lt;br /&gt;
echo &amp;quot;mount --make-rshared /&amp;quot; &amp;gt;&amp;gt; /etc/local.d/sharemetrics.start&lt;br /&gt;
chmod +x /etc/local.d/sharemetrics.start&lt;br /&gt;
rc-update add local&lt;br /&gt;
#Fix id error messages&lt;br /&gt;
uuidgen &amp;gt; /etc/machine-id&lt;br /&gt;
#Add services&lt;br /&gt;
rc-update add containerd&lt;br /&gt;
rc-update add kubelet&lt;br /&gt;
#Sync time&lt;br /&gt;
rc-update add ntpd&lt;br /&gt;
/etc/init.d/ntpd start&lt;br /&gt;
/etc/init.d/containerd start&lt;br /&gt;
#fix flannel&lt;br /&gt;
ln -s /usr/libexec/cni/flannel-amd64 /usr/libexec/cni/flannel&lt;br /&gt;
#kernel stuff&lt;br /&gt;
echo &amp;quot;net.bridge.bridge-nf-call-iptables=1&amp;quot; &amp;gt;&amp;gt; /etc/sysctl.conf&lt;br /&gt;
sysctl net.bridge.bridge-nf-call-iptables=1&lt;br /&gt;
#Pin your versions!  If you update and the nodes get out of sync, it implodes.&lt;br /&gt;
apk add &#039;kubelet=~1.26&#039;&lt;br /&gt;
apk add &#039;kubeadm=~1.26&#039;&lt;br /&gt;
apk add &#039;kubectl=~1.26&#039;&lt;br /&gt;
#Note that in the future you will manually have to add a newer version the same way to upgrade.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Your blank node is now ready! If it&#039;s the first, you&#039;ll want to make a control node.&lt;br /&gt;
&lt;br /&gt;
=== 3. Setup the Control Plane (New Cluster!) &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;mechanical_arm&amp;quot;&amp;gt;🦾&amp;lt;/span&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
Run this command to start the cluster and then apply a network.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#do not change subnet&lt;br /&gt;
kubeadm init --pod-network-cidr=10.244.0.0/16 --node-name=$(hostname)&lt;br /&gt;
mkdir ~/.kube&lt;br /&gt;
ln -s /etc/kubernetes/admin.conf /root/.kube/config&lt;br /&gt;
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You now have a control plane. The kubeadm init output also includes the command to run on your blank nodes to add them to this cluster as workers.&lt;br /&gt;
&lt;br /&gt;
=== 4. Join the cluster. &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;ant&amp;quot;&amp;gt;🐜&amp;lt;/span&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
Run this on the control plane to get the join command, which you then run on your new worker.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kubeadm token create --print-join-command &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Bonus &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;moneybag&amp;quot;&amp;gt;💰&amp;lt;/span&amp;gt; =&lt;br /&gt;
&lt;br /&gt;
== Setup NFS Mounts on K8s ==&lt;br /&gt;
&lt;br /&gt;
This sets up shared NFS storage so that persistent volume claims can be fulfilled automatically. Update the server IP and export path below with your own NFS server details.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/&lt;br /&gt;
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \&lt;br /&gt;
    --set nfs.server=192.168.1.31 \&lt;br /&gt;
    --set nfs.path=/exports/cluster00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now set the default storage class for the cluster.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;kubectl get storageclass&lt;br /&gt;
kubectl patch storageclass nfs-client -p &#039;{&amp;quot;metadata&amp;quot;: {&amp;quot;annotations&amp;quot;:{&amp;quot;storageclass.kubernetes.io/is-default-class&amp;quot;:&amp;quot;true&amp;quot;}}}&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
== Check on System &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;eyes&amp;quot;&amp;gt;👀&amp;lt;/span&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Check on your system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;kubectl get nodes&lt;br /&gt;
kubectl get all&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dvh312</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=K8s&amp;diff=23095</id>
		<title>K8s</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=K8s&amp;diff=23095"/>
		<updated>2023-04-05T05:09:24Z</updated>

		<summary type="html">&lt;p&gt;Dvh312: Upgrade to kube 1.26, use containerd instead, set node-name to hostname to avoid not found on network&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Alpine Linux &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;evergreen_tree&amp;quot;&amp;gt;🌲&amp;lt;/span&amp;gt; K8s in 10 Minutes =&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
This guide will allow you to deploy a fresh Alpine Linux install into a Kubernetes (K8s) cluster in less than 10 minutes.&lt;br /&gt;
&lt;br /&gt;
== Why &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;sparkles&amp;quot;&amp;gt;✨&amp;lt;/span&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
I set out to learn Kubernetes recently and built a k3s cluster using Alpine in an hour or so; it was a great experience. I figured the next step would be K8s, but I found no material on K8s for Alpine. This guide is the result of my first pass, incorporating high-quality notes from the contributors. Kubernetes &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;unicorn&amp;quot;&amp;gt;🦄&amp;lt;/span&amp;gt; is awesome.&lt;br /&gt;
&lt;br /&gt;
== Contributors ==&lt;br /&gt;
&lt;br /&gt;
* Matthew Rogers [https://github.com/RamboRogers Github] [https://www.linkedin.com/in/matthewrogerscissp/ LinkedIn]&lt;br /&gt;
* Mike Zolla [https://github.com/Zolla-Zolla Github] [https://www.linkedin.com/in/mike-zolla-5903b8/ LinkedIn]&lt;br /&gt;
* Matthew Emmett [https://github.com/mattemmett Github] [https://www.linkedin.com/in/mattemmett/ LinkedIn]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
= Build K8s on Alpine Linux &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;evergreen_tree&amp;quot;&amp;gt;🌲&amp;lt;/span&amp;gt; =&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;mag&amp;quot;&amp;gt;🔍&amp;lt;/span&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
You need an [https://alpinelinux.org/ Alpine Linux] install (this guide is written against the version 3.15 standard image) with internet access. I recommend at least 2 CPUs, 4 GB of RAM and 10 GB of disk for each node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;For HA control planes you&#039;ll need a minimum of three nodes&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
=== 1. Setup the Repositories &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;green_book&amp;quot;&amp;gt;📗&amp;lt;/span&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
Update your repositories in /etc/apk/repositories to include community, edge community and edge testing.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;#/media/cdrom/apks&lt;br /&gt;
http://dl-cdn.alpinelinux.org/alpine/v3.15/main&lt;br /&gt;
http://dl-cdn.alpinelinux.org/alpine/v3.15/community&lt;br /&gt;
#http://dl-cdn.alpinelinux.org/alpine/edge/main&lt;br /&gt;
http://dl-cdn.alpinelinux.org/alpine/edge/community&lt;br /&gt;
http://dl-cdn.alpinelinux.org/alpine/edge/testing&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== 2. Node Setup &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;desktop_computer&amp;quot;&amp;gt;🖥️&amp;lt;/span&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
This series of commands solves a set of incremental problems and sets up the system (if it is the first control node) so that kubectl/kubeadm run properly on next login by linking the config.&lt;br /&gt;
&lt;br /&gt;
The result here gives you a functional node that can be joined to an existing cluster or can become the first control plane of a new cluster. &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;notes&amp;quot;&amp;gt;🎶&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;*** &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;bell&amp;quot;&amp;gt;🔔&amp;lt;/span&amp;gt; This build assumes CNI usage of flannel for networking &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;bell&amp;quot;&amp;gt;🔔&amp;lt;/span&amp;gt; ***&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;#add kernel module for networking stuff&lt;br /&gt;
echo &amp;quot;br_netfilter&amp;quot; &amp;gt; /etc/modules-load.d/k8s.conf&lt;br /&gt;
modprobe br_netfilter&lt;br /&gt;
apk add cni-plugin-flannel&lt;br /&gt;
apk add cni-plugins&lt;br /&gt;
apk add flannel&lt;br /&gt;
apk add flannel-contrib-cni&lt;br /&gt;
apk add kubelet&lt;br /&gt;
apk add kubeadm&lt;br /&gt;
apk add kubectl&lt;br /&gt;
apk add containerd&lt;br /&gt;
apk add uuidgen&lt;br /&gt;
apk add nfs-utils&lt;br /&gt;
#get rid of swap&lt;br /&gt;
cat /etc/fstab | grep -v swap &amp;gt; temp.fstab&lt;br /&gt;
cat temp.fstab &amp;gt; /etc/fstab&lt;br /&gt;
rm temp.fstab&lt;br /&gt;
swapoff -a&lt;br /&gt;
#Fix prometheus errors&lt;br /&gt;
mount --make-rshared /&lt;br /&gt;
echo &amp;quot;#!/bin/sh&amp;quot; &amp;gt; /etc/local.d/sharemetrics.start&lt;br /&gt;
echo &amp;quot;mount --make-rshared /&amp;quot; &amp;gt;&amp;gt; /etc/local.d/sharemetrics.start&lt;br /&gt;
chmod +x /etc/local.d/sharemetrics.start&lt;br /&gt;
rc-update add local&lt;br /&gt;
#Fix id error messages&lt;br /&gt;
uuidgen &amp;gt; /etc/machine-id&lt;br /&gt;
#Add services&lt;br /&gt;
rc-update add containerd&lt;br /&gt;
rc-update add kubelet&lt;br /&gt;
#Sync time&lt;br /&gt;
rc-update add ntpd&lt;br /&gt;
/etc/init.d/ntpd start&lt;br /&gt;
/etc/init.d/containerd start&lt;br /&gt;
#fix flannel&lt;br /&gt;
ln -s /usr/libexec/cni/flannel-amd64 /usr/libexec/cni/flannel&lt;br /&gt;
#kernel stuff&lt;br /&gt;
echo &amp;quot;net.bridge.bridge-nf-call-iptables=1&amp;quot; &amp;gt;&amp;gt; /etc/sysctl.conf&lt;br /&gt;
sysctl net.bridge.bridge-nf-call-iptables=1&lt;br /&gt;
#Pin your versions!  If you update and the nodes get out of sync, it implodes.&lt;br /&gt;
apk add &#039;kubelet=~1.26&#039;&lt;br /&gt;
apk add &#039;kubeadm=~1.26&#039;&lt;br /&gt;
apk add &#039;kubectl=~1.26&#039;&lt;br /&gt;
#Note that in the future you will manually have to add a newer version the same way to upgrade.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Your blank node is now ready! If it&#039;s the first, you&#039;ll want to make a control node.&lt;br /&gt;
&lt;br /&gt;
=== 3. Setup the Control Plane (New Cluster!) &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;mechanical_arm&amp;quot;&amp;gt;🦾&amp;lt;/span&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
Run this command to start the cluster and then apply a network.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#do not change subnet&lt;br /&gt;
kubeadm init --pod-network-cidr=10.244.0.0/16 --node-name=$(hostname)&lt;br /&gt;
mkdir ~/.kube&lt;br /&gt;
ln -s /etc/kubernetes/admin.conf /root/.kube/config&lt;br /&gt;
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You now have a control plane. The kubeadm init output also includes the command to run on your blank nodes to add them to this cluster as workers.&lt;br /&gt;
&lt;br /&gt;
=== 4. Join the cluster. &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;ant&amp;quot;&amp;gt;🐜&amp;lt;/span&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
Run this on the control plane to get the join command, which you then run on your new worker.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kubeadm token create --print-join-command &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Bonus &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;moneybag&amp;quot;&amp;gt;💰&amp;lt;/span&amp;gt; =&lt;br /&gt;
&lt;br /&gt;
== Setup NFS Mounts on K8s ==&lt;br /&gt;
&lt;br /&gt;
This sets up shared NFS storage so that persistent volume claims can be fulfilled automatically. Update the server IP and export path below with your own NFS server details.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/&lt;br /&gt;
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \&lt;br /&gt;
    --set nfs.server=192.168.1.31 \&lt;br /&gt;
    --set nfs.path=/exports/cluster00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now set the default storage class for the cluster.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;kubectl get storageclass&lt;br /&gt;
kubectl patch storageclass nfs-client -p &#039;{&amp;quot;metadata&amp;quot;: {&amp;quot;annotations&amp;quot;:{&amp;quot;storageclass.kubernetes.io/is-default-class&amp;quot;:&amp;quot;true&amp;quot;}}}&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
== Check on System &amp;lt;span class=&amp;quot;emoji&amp;quot; data-emoji=&amp;quot;eyes&amp;quot;&amp;gt;👀&amp;lt;/span&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Check on your system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;kubectl get nodes&lt;br /&gt;
kubectl get all&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dvh312</name></author>
	</entry>
</feed>