K8s with cloud-init

Alpine Linux 🌦️ K8s in 1 Minute with the nocloud cloud-init image

This is a very early version and still contains some hacks.

Summary

This page describes an approach to setting up Alpine with cloud-init. The current version has been tested with the nocloud Alpine 3.19 cloud-init base image. It was inspired by k8s and is intended for the admin-less setup of virtual and bare-metal machines. At the moment it is only a proof of concept and needs to be extended to a real multi-server setup.

Necessary fixes to k8s

The runcmd script fixes several issues with the image and with the way services are started.

  1. adds another user, k8s
  2. adds the hostname and IP address to /etc/hosts
  3. aligns the kernel and module versions with a quick-hack symlink in /lib/modules (see the sketch after this list)
  4. applies some config patches using sed
  5. for some reason the cgroups service had to be started explicitly, otherwise containerd did not come up
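
The kernel/module mismatch from point 3 can be checked like this; the version numbers are simply the ones hard-coded in the runcmd script below and will differ on other images:

uname -r          # running kernel, e.g. 6.6.14-0-virt
ls /lib/modules   # module tree installed from the edge packages, e.g. 6.6.28-0-virt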

The rest of the installation worked out of the box. The full code is available on GitHub.

You need two files: meta-data, which identifies the system:

instance-id: alpine
local-hostname: alpine

and user-data, which contains the actual configuration:

package_update: true
package_upgrade: true
package_reboot_if_required: true
#package_reboot: true

write_files:
  - path: /etc/apk/repositories
    content: |
      #https://dl-cdn.alpinelinux.org/alpine/v3.19/testing
      http://dl-cdn.alpinelinux.org/alpine/edge/community
      http://dl-cdn.alpinelinux.org/alpine/edge/testing
    append: true
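  # load br_netfilter at boot so the bridge sysctls below can take effect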
  - path: /etc/modules-load.d/k8s.conf
    content: |
      br_netfilter
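  # standard Kubernetes networking sysctls: let iptables see bridged traffic and allow forwarding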
  - path: /etc/sysctl.conf
    content: |
      net.bridge.bridge-nf-call-iptables=1
      net.ipv4.ip_forward=1
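  # point crictl at the containerd socket so it can be used for debugging the node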
  - path: /etc/crictl.yaml
    content: |
      runtime-endpoint: unix:///run/containerd/containerd.sock
      image-endpoint: unix:///run/containerd/containerd.sock
      timeout: 2
      pull-image-on-create: false

    
packages:
 - cni-plugin-flannel
 - cni-plugins
 - flannel
 - flannel-contrib-cni
 - kubelet
 - kubeadm
 - kubectl
 - containerd
 - containerd-ctr
 - k9s
 - uuidgen
 - openssh-server-pam

ssh_pwauth: true # sshd service will be configured to accept password authentication method
password: changeme # Set a password for alpine
chpasswd:
    expire: false # Don't ask for password reset after the first log-in
ssh_authorized_keys:
- ssh-ed25519 your key you@example.com

users:
- default
- name: k8s
  doas:
  - permit nopass k8s as root
  ssh_authorized_keys:
  - ssh-ed25519 your key you@example.com

runcmd:
  - |
    # mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
    uuidgen > /etc/machine-id
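    # enable PAM in sshd (openssh-server-pam is installed above) so password logins work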
    sed -i 's/^#UsePAM no/UsePAM yes/' /etc/ssh/sshd_config && /etc/init.d/sshd restart
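    # fix from point 2 above: add the node's IP address and hostname to /etc/hosts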
    echo $(ifconfig eth0 | grep 'inet addr:' | cut -d: -f2| cut -d' ' -f1) $(hostname) >> /etc/hosts
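    # quick hack from point 3 above: link the running kernel version to the module tree installed from edge, then rebuild the module index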
    (cd /lib/modules && ln -s 6.6.28-0-virt 6.6.14-0-virt && depmod)
    modprobe br_netfilter
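    # kubelet expects swap to be off; disable it now and comment it out in fstab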
    sed -ie '/swap/ s/^/#/' /etc/fstab && swapoff -a
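    # kubelet and container mounts need shared mount propagation on /; apply it now and re-apply it on every boot via a local.d script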
    mount --make-rshared /
    echo "#!/bin/sh" > /etc/local.d/sharedmetrics.start
    echo "mount --make-rshared /" >> /etc/local.d/sharedmetrics.start
    chmod +x /etc/local.d/sharedmetrics.start
    rc-update add local default
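    # enable the needed services; as noted in point 5 above, cgroups must be started explicitly or containerd does not come up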
    rc-update add cgroups
    rc-update add containerd
    rc-update add kubelet
    rc-update add ntpd
    rc-service ntpd start
    rc-service cgroups start
    rc-service containerd start
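    # enable IP forwarding for the current boot as well; the /etc/sysctl.conf written above only takes effect on the next boot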
    echo 1 > /proc/sys/net/ipv4/ip_forward
    kubeadm config images pull
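    # initialise a single control-plane node; the pod CIDR matches flannel's default network (10.244.0.0/16)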
    kubeadm init --pod-network-cidr=10.244.0.0/16 --node-name=$(hostname) --apiserver-advertise-address $(hostname -i) --control-plane-endpoint=$(hostname -i) --upload-certs
    mkdir ~/.kube
    mkdir -p /root/.kube/ && ln -s /etc/kubernetes/admin.conf /root/.kube/config
    export KUBECONFIG=/etc/kubernetes/admin.conf
    while ! kubectl cluster-info ; do echo waiting for api server ; sleep 1; done
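    # install the flannel CNI as soon as the API server answers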
    kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
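    # remove the control-plane taint so workloads can run on this single node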
    kubectl taint nodes --all node-role.kubernetes.io/control-plane-
    echo "cloud-init schema"
    cloud-init schema --system
    echo "cloud-init status"
    cloud-init status --long
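
How the two files reach the VM is not covered by the setup above. As a minimal sketch, assuming a local QEMU/KVM host and the downloaded Alpine nocloud qcow2 image (alpine-nocloud.qcow2 is a placeholder name): the NoCloud datasource reads user-data and meta-data from a volume labelled cidata, so the two files can be packed into a small seed ISO and attached as a second drive.

# build a seed ISO labelled "cidata" containing the two files
genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data

# boot the image with the seed attached and forward SSH to port 2222
qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 \
  -drive file=alpine-nocloud.qcow2,if=virtio \
  -drive file=seed.iso,media=cdrom \
  -nic user,hostfwd=tcp::2222-:22 \
  -nographic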