User talk:Jch
==How to automate KVM creation==

How to emulate a USB stick with KVM.

The goal is not only to have a working install but to have it at the after-setup-alpine stage without human intervention...

This is the first stage of a work in progress...

I want to pass a block device and a name as parameters. The block device could be an image file, an LV, an NBD, a HDD, a RAID array, whatever.<br/>
Everything else should be fully automatic according to some config file (stating the http-proxy, the time server, the log server, ...).

Then I will just run the script, watch my DHCP logs to discover the newly assigned IP (that's why the name is a parameter), then log in with ssh without a password to customize it further, but at a high level only (it will be a robot and not me, in fact).
I guess it would be something like emulating boot from a USB key with a specific overlay already on the key... <br/>
then running setup-disk with proper parameters on the command line to avoid the interactive process (like setup-alpine does)... <br/>
Methinks this could be done from a couple of scripts put in /etc/local.d/, the last .stop one deleting all of them to be clean at the next reboot; see the sketch below.<br/>
Let's start easy ;)
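
A minimal sketch of that idea (untested; the file name, answer file and target disk are assumptions, and the ''local'' service must be enabled with "rc-update add local default"):
<pre>
#!/bin/sh
# /etc/local.d/provision.start
setup-alpine -f /media/usb/answers.txt   # non-interactive setup driven by an answer file
setup-disk -m sys /dev/vda               # non-interactive install to the target disk
rm -f /etc/local.d/provision.start       # remove ourselves so the next boot is clean
</pre>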


<u>From the first repo</u> (boot media), the following packages, and all their dependencies, will be needed; the plan is to [[How_to_make_a_custom_ISO_image|build a custom ISO]] with that list:

AlpineLinux dhcpd tftp-hpa syslinux mkinitfs nfs-utils darkhttpd rsync openssh openvswitch screen qemu-system-x86_64 qemu-img gptfdisk parted mdadm lvm2 nbd xfsprogs e2fsprogs multipath '''consul''' dnsmasq vim collectd collectd-network git syslog-ng <s>envconsul</s> <s>consul-template</s> <s>xnbd</s> <s>ceph</s> lxc lxc-templates wipe tcpdump curl openvpn <s>fsconsul</s>

=== How to prepare an img file to emulate a USB key ===

First, a working example done in the console (accessed through ssh).<br/>
Will build a script from it...
 
First, let's prepare some block device (here an image file, but it could be something else) <pre>
apk add qemu-img
qemu-img create -f raw usbkey.img 512M
apk del qemu-img
T="usbkey.img"
</pre>
 
Next, let's install AL onto this $T <pre>
apk add multipath-tools syslinux dosfstools
fdisk $T                              # interactive: create one bootable FAT32 partition
kpartx -av $T                         # map the image's partition(s); the loop device number may differ
mkdosfs -F32 /dev/mapper/loop1p1
dd if=/usr/share/syslinux/mbr.bin of=/dev/mapper/loop1
syslinux /dev/mapper/loop1p1
mkdir key
mount -t vfat /dev/mapper/loop1p1 key
wget http://wiki.alpinelinux.org/cgi-bin/dl.cgi/v3.1/releases/x86_64/alpine-mini-3.1.1-x86_64.iso
mkdir cdrom
mount alpine-mini-3.1.1-x86_64.iso cdrom
cd cdrom
cp -a .alpine-release * ../key/       # copy the ISO content, including the hidden .alpine-release file
cd ..
umount key
umount cdrom
kpartx -d $T                          # remove the mappings again
apk del multipath-tools syslinux dosfstools
rm alpine-mini-3.1.1-x86_64.iso
</pre>
 
This block device may now be used to boot some KVM, for instance: <pre>
screen -d -m -S KVM-builder \
qemu-system-x86_64 -name KVM-usb -enable-kvm -cpu qemu64 -curses \
-device nec-usb-xhci -drive if=none,id=usbstick,file=$T -device usb-storage,drive=usbstick
</pre> This is working fine.
 
The problem comes when adding a HDD to the lot: qemu tries to boot from the HDD and does not even try to boot from the USB key. Enabling the boot menu lets one access the emulated BIOS, which allows selecting the USB device to boot from interactively, but this breaks the goal of a fully automated boot :( The stanza is, for instance, <pre>
screen -d -m -S KVM-builder \
qemu-system-x86_64 -name KVM-usb -enable-kvm -cpu qemu64 -curses \
-device nec-usb-xhci -drive if=none,id=usbstick,file=$T -device usb-storage,drive=usbstick \
-drive file=$T2 -boot menu=on
</pre>
 
qemu-doc states it very clearly:<br/>
> -boot [order=drives][,once=drives][,menu=on|off][,splash=sp_name][,splash-time=sp_time][,reboot-timeout=rb_timeout][,strict=on|off]<br/>
>  Specify boot order drives as a string of drive letters. Valid drive letters depend on the target architecture. The x86 PC uses: a, b (floppy 1 and 2), c (first hard disk), d (first CD-ROM), n-p (Etherboot from network adapter 1-4), hard disk boot is the default
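
A possible way to keep the boot fully automated (untested here, and assuming a qemu version that honours per-device boot ordering) would be to give the emulated USB stick an explicit ''bootindex'' instead of relying on -boot order: <pre>
screen -d -m -S KVM-builder \
qemu-system-x86_64 -name KVM-usb -enable-kvm -cpu qemu64 -curses \
-device nec-usb-xhci -drive if=none,id=usbstick,file=$T \
-device usb-storage,drive=usbstick,bootindex=0 \
-drive file=$T2,if=virtio
</pre>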
 
==Starting AL from network==
 
As it does not seem possible to start qemu with a virtual USB key *and* a virtual HDD attached to the VM and still boot from the key, let's try something different: starting AL from the network (i.e. setting up a PXE environment) and mounting the HDD later on...
 
Usually this kind of setup needs
* a DHCP server to get an IP address and the location of the TFTP server
* a TFTP server to download the kernel and the root file system to boot from
* a NFS server or a HTTP one to get the overlay used to configure the machine
* a NFS server to share files with others
* a NBD server to get its own block devices as storage
* a machine on which to prepare the initramfs
 
First, let's check what is available in AL and what is not...
* dhcpcd-6.6.7-r0
* tftp-hpa-5.2-r1
* nfs-utils-1.3.1-r2
* darkhttpd-1.10-r1
* nbd-3-10-r0
 
=== PXE_boot ===
 
We are trying to do something as in [[PXE_boot]].
 
We did it with a separate machine for each service. It forced us to deeply understand all the interactions between the processes.
 
In the current state we <pre>umount /media/alpine</pre> as the last step of the boot process and we are running with no ties.
 
==== dhcpd ====
 
192.168.1.1
 
with package dhcp from repo. <s>Nothing special.</s>
 
<pre>
  filename "pxelinux.0";
  next-server 192.168.1.2;
</pre>
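
For context, a minimal subnet declaration around those two lines might look like this (all addresses other than the next-server are assumptions):
<pre>
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  option domain-name-servers 192.168.1.6;
  filename "pxelinux.0";
  next-server 192.168.1.2;
}
</pre>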
 
and
<pre>
# Disable RFC 2136 dynamic DNS updates.
ddns-update-style none;
 
# Define actions to take when leases are committed, released, or expired to
# accomplish dynamic DNS updates to djbdns. This does not use the RFC 2136
# update mechanism, because djbdns does not support it. However, it
# accomplishes the same thing.
# syntax "execute(cmd, arg, ...)"
### need to check if the two "on EVENT" must be nested or in sequence...
on commit {
  execute ("/usr/local/bin/dns-update-djb",
          "commit",
          lcase (option host-name),
          config-option domain-name,
          binary-to-ascii (10, 8, ".", leased-address));
  on release or expiry {
    execute ("/usr/local/bin/dns-update-djb",
            "release",
            binary-to-ascii (10, 8, ".", leased-address));
  }
}
</pre>
 
with a custom /usr/local/bin/dns-update-djb script, largely inspired by https://sites.google.com/site/dmoulding/dns-update-djb but adapted for a distant tinydns server and to the AL way.
 
==== tftp ====
 
192.168.1.2
 
tftp-hpa configured to serve some SYSLINUX files.
 
The config is in /etc/conf.d/in.tftpd<br/>
Then issue:
<pre>
rc-update add in.tftpd
rc-service in.tftpd start
</pre>
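
A minimal /etc/conf.d/in.tftpd could look roughly like this (the variable names are an assumption and may differ between tftp-hpa packages, so check the file installed by the package):
<pre>
INTFTPD_PATH="/var/tftpboot/"
INTFTPD_OPTS="-s ${INTFTPD_PATH}"
</pre>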
 
We serve from /var/tftpboot.
 
We had to temporarily install the syslinux apk to get pxelinux.0 and the other libs needed. <br/>
We did prepare a "pxerd" initramfs file with virtio_net.ko, dhcp and nfs included, and made sure loop and squashfs are included. <br/>
pxelinux.cfg/default looks like <pre>
PROMPT 0
TIMEOUT 3
default alpine
LABEL alpine
LINUX alpine/vmlinuz-grsec
INITRD alpine/pxerd
APPEND ip=dhcp alpine_dev=nfs:192.168.1.3:/srv/boot/alpine modloop=/boot/grsec.modloop.squashfs nomodeset quiet apkovl=http://192.168.1.4/localhost.apkovl.tar.gz
#APPEND modloop=http:/192.168.1.4/grsec.modloop.squashfs
#APPEND apkovl=http://192.168.1.4/localhost.apkovl.tar.gz # including the modloop hack
#APPEND alpine_repo=http://repo-url
</pre>
 
Modules are loaded <pre>
/ # lsmod
Module                  Size  Used by    Not tainted
nfsv3                  22784  1
nfs                  144376  2 nfsv3
lockd                  71917  2 nfsv3,nfs
sunrpc                225574  6 nfsv3,nfs,lockd
af_packet              28735  0
sr_mod                13487  0
cdrom                  40424  1 sr_mod
pata_acpi              3326  0
ata_piix              25601  0
ata_generic            3554  0
libata                181955  3 pata_acpi,ata_piix,ata_generic
virtio_net            19684  0
scsi_mod              113710  2 sr_mod,libata
virtio_pci              6485  0
virtio                  4933  2 virtio_net,virtio_pci
virtio_ring            9161  2 virtio_net,virtio_pci
squashfs              25893  1
loop                  18243  2
</pre> Network is up <pre>
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 52:54:33:B0:C2:D2
inet addr:192.168.1.108  Bcast:0.0.0.0  Mask:255.255.255.0 
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1 
RX packets:322 errors:0 dropped:0 overruns:0 frame:0 
TX packets:2 errors:0 dropped:0 overruns:0 carrier:0 
collisions:0 txqueuelen:1000
RX bytes:20514 (20.0 KiB)  TX bytes:684 (684.0 B)
</pre> but modloop does not load
 
This patch fixes the issue (hope to see it mainstream soon) <pre>
localhost:~# diff /etc/init.d/modloop modloop.new
--- /etc/init.d/modloop
+++ modloop.new
@@ -32,7 +32,7 @@
        local search_dev="$1" fstab="$2"
        local dev mnt fs mntopts chk
        case "$search_dev" in
-              UUID=*|LABEL=*|/dev/*);;
+              UUID=*|LABEL=*|/dev/*|nfs);;
                *) search_dev=/dev/$search_dev;;
        esac
        local search_real_dev=$(resolve_dev $search_dev)
@@ -49,6 +49,10 @@
                                fi
                        done
                done
+              if [ "$fs" = "$search_dev" ]; then
+                      echo "$mnt"
+                      return
+              fi
        done < $fstab 2>/dev/null
}
 
</pre>
 
==== References ====
 
http://www.syslinux.org/wiki/index.php/PXELINUX
 
==== nfs ====
 
192.168.1.3
 
see http://wiki.alpinelinux.org/wiki/User_talk:Jch#NFS_bug_study <br/>
'''It is now working with''' http://dev.alpinelinux.org/~clandmeter/rpcbind-0.2.3_rc2-r0.apk
 
We serve the content of a USB key (ISO) read-only as <pre>
/srv/boot/alpine *(ro,no_root_squash,no_subtree_check)
</pre>
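
With that line in /etc/exports (and the rpcbind mentioned above), enabling the server is just a couple of commands; a sketch:
<pre>
rc-update add nfs
rc-service nfs start
exportfs -ra      # re-export after editing /etc/exports
</pre>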
 
==== http ====
 
192.168.1.4
 
With the package [[Darkhttpd]] from repo, serving /var/tftpboot/ for the files needed to boot (kernel, rootfs, apkovl.tar.gz).
 
==== nbd ====
 
192.168.1.5
 
I really would like to have xnbd-server in AL. nbd-3.1.0 was just added to the edge/testing repo; I need to try it in a real situation...<br/>
For now, we have a qcow2 Debian image added to the apkovl with lbu add; lbu ci.<br/>
This image is used to launch a first KVM with /dev/mdX as second drive.<br/>
In turn, inside the KVM, vdb is used to define an lvm2 volume.<br/>
The LVs are published with xnbd-server.
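
For the plain nbd-server from edge/testing (not xnbd), exporting such an LV is a small config file; a sketch, with the LV name borrowed from the bootstrap section below:
<pre>
# /etc/nbd-server/config (new-style exports)
[generic]
    user = root
    group = root
[pxeserver]
    exportname = /dev/storage/pxeserver
    readonly = true
</pre>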
 
Later on, the same KVM will be able to connect to RBD device and re-publish it as NBD.
 
'''xnbd-server''' allows ''live migration'' of block devices while they are in use, and it has a powerful ''proxy'' mode.
 
All other KVMs are running from a FS accessed through NBD from such a SAN. Even other SANs.<br/>
As soon as those '''KVM-NBD''' are up, they may be used to <u>launch others</u> or to provide ''datastores''.
 
We put that image on every USB key we use along with mdadm and OpenVSwitch (and collectd).
 
==== dns ====
 
192.168.1.6
 
tinydns from repo with split-dns config.
 
== Building a complete infrastructure with AL ==
 
I'm doing it. It's for real! That's my daily job at present ^^
 
I'm building a full private cloud bootstrapped with only an AlpineLinux USB key for each physical machine. But the next ones will be able to boot from the network; not even USB keys will be needed. As a matter of fact, we used more than only one physical USB key because we didn't start from scratch but did a live migration from Debian to Alpine for most of the services and machines...
 
If there is some feedback, I may detail config files and so on ;)
 
As I started from scratch and OpenVSwitch was not yet available in Alpine at that time, it took me a while to build everything. But to reproduce it would be a ''piece of cake''!
 
We use qemu-kvm for KVM. But I guess one may use whatever Virtual Machine technology one likes.
 
'''This is the presentation of a use case, not a HOW TO. And it's still a work in progress...'''
 
=== Network ===
 
==== Firewall ====
 
We put a dedicated physical machine on each link between our LAN and other networks.
It just runs iptables and some packet-accounting metrology.
 
==== Router ====
 
A physical machine connected to our LAN and other networks (through a firewall). A static routing table does the trick.
 
==== Switches ====
 
All physical machines run OpenVSwitch, reproducing virtually all the physical switches we have plus some virtual-only ones.
 
==== VPN ====
 
All physical machines run OpenVPN clients, one for each defined switch that has no physical interface on that machine. There is an OpenVPN server somewhere, running in a KVM connected to the needed switches.
 
=== Storage ===
 
==== SAN ====
 
On each physical machine, a couple of HDDs are assembled in RAID1 with mdadm. This raid array is passed as a parameter to a KVM which in turn mounts it as a physical volume for LVM. The created LVs are published as NBD with '''xnbd-server'''. For the time being, this KVM is running Debian 7.8 as xnbd is not in Alpine (yet?).
 
The SAN also connects to the CEPH cluster as a client and publishes the reachable RBDs as NBD with xnbd-server. For the time being, this KVM is running Debian 7.8 as neither xnbd nor RBD are in Alpine (yet?).
 
==== NAS ====
 
Some KVMs mount NBDs as local drives and publish some directories as NFS shares.<br/>
We now have nfs and nbd in AL.
 
==== CEPH ====
 
KVMs with physical HDDs passed as parameters are used for building the OSDs and MONs needed to operate a CEPH cluster.
One KVM is the "console" used to drive it from a single point of presence (useful but not "needed"). For the time being, those KVMs are running Debian 7.8 as CEPH and RBD are not in Alpine (yet?).
 
=== Low-level services ===
 
No service at all runs in the AL on bare metal. All of them run in some KVM connected to the needed switches by means of the OpenVSwitches.
The apkovl on the USB keys contains only the scripts to launch KVMs and one image file to launch the first SAN. The other KVMs are launched from LVs in the SAN.
 
==== dhcp ====
 
Exactly two KVMs, stored in different SANs, ''primary'' and ''secondary'' in <u>failover mode</u>, are running '''dhcp'''d from repo. <br/>
We just have to configure it properly.

We still have to test whether '''dhcp'''d may run in an LXC instead of a KVM.
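
A sketch of what the failover declaration could look like on the primary (addresses and timers are assumptions; the secondary declares "secondary" instead of "primary" and omits "split"):
<pre>
failover peer "dhcp-failover" {
  primary;
  address 192.168.1.10;
  port 647;
  peer address 192.168.1.11;
  peer port 647;
  max-response-delay 60;
  max-unacked-updates 10;
  mclt 3600;
  split 128;
  load balance max seconds 3;
}

subnet 192.168.1.0 netmask 255.255.255.0 {
  pool {
    failover peer "dhcp-failover";
    range 192.168.1.100 192.168.1.200;
  }
}
</pre>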
 
==== DNS ====
 
tinydns from repo with split-dns config.
 
==== Resolver ====
 
With '''dnscache''' from repo.
 
Those KVMs have <u>manually assigned IP addresses in the LAN</u> and do know a gateway to the Internet.<br/>
They use themselves as resolver... <br/>
They know the manually assigned LAN IP address of the main DNS server of the selected domains (for the split-dns configuration).
 
==== PXEboot ====
 
kernel and initrd files in '''tftp''' server.<br/>
copy of usb content in '''nfs''' server.<br/>
apkovl files in '''darkhttpd''' server.
 
==== Time server ====
 
The router (which has access to the internet) uses '''ntpd''' (or similar) from repo, to act as a <u>client to the WAN</u> and a <u>server to the LAN</u>.
 
==== syslog ====
 
With '''syslog-ng''' from repo, we receive the logs from all machines, be they physical or virtual.<br/>
It's the only place that needs '''logrotate''' from repo.
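
A sketch of the receiving side of such a setup (paths and ports are assumptions; clients simply point their syslog at this host):
<pre>
# /etc/syslog-ng/syslog-ng.conf fragment on the log server
source s_net {
    udp(ip(0.0.0.0) port(514));
    tcp(ip(0.0.0.0) port(514));
};
destination d_remote {
    file("/var/log/remote/${HOST}/messages" create_dirs(yes));
};
log { source(s_net); destination(d_remote); };
</pre>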
 
==== HTTP proxy/cache ====
 
The web proxy/cache '''squid''', from repo, uses a NBD as cache.
It has a link to the internet to forward requests and one to the LAN.
 
Thanks to it, no machine, as they are all connected to the LAN, be it physical or virtual, needs a published default gateway.
And all machines are still able to install/upgrade packages or to reach the WWW as clients.
 
We point all AL boxes to this KVM with '''setup-proxy'''.
 
==== Monitoring ====
 
'''shinken''' from sources, in some LXC with barely more than the python package installed.
 
==== Metrology ====
 
'''Collectd''' (one LXC as server; all other machines, be they physical or virtual, as clients) with collectd-network from repo.<br/>
A couple of lines in the CGP config file are enough for now.
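
The network plugin configuration is tiny; a sketch (the config path and server address are assumptions):
<pre>
# /etc/collectd/collectd.conf on every client
LoadPlugin network
<Plugin network>
    Server "192.168.1.20" "25826"
</Plugin>

# on the LXC acting as collectd server
LoadPlugin network
<Plugin network>
    Listen "0.0.0.0" "25826"
</Plugin>
</pre>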
 
==== Backups ====
 
with common tools: '''rsync''', '''tar''', '''nc''', '''bzip2''', '''openssh''', '''cron'''
 
==== LDAP ====
 
'''openldap''' with openldap-back-hdb, both from repo.
 
http://www.openldap.org/doc/admin24/backends.html states<br/>
> The hdb backend to slapd(8) is the recommended primary backend for a normal slapd database.<br/>
And<br/>
> Note: The hdb backend has superseded the bdb backend, and both will soon be deprecated in favor of the new mdb backend.
 
> The mdb backend to slapd(8) is the upcoming primary backend for a normal slapd database. It uses OpenLDAP's own Lightning Memory-Mapped Database (LMDB) library to store data and is intended to replace the Berkeley DB backends.
 
Unfortunately there is no ''openldap-back-mdb'' package in AL yet.
 
=== High-level services ===
 
In an AL LXC whenever possible.<br/>
In a Debian LXC as second choice.<br/>
In a KVM otherwise.
 
==== x2goserver ====
 
I did package nx-libs and x2goserver. I'm waiting for the packages to be included in edge/testing. They are already being used for single app access. Next step is full desktop but we are not sure if AL is the right choice for full desktop usage for our customers...
 
Unfortunately, '''x2goclient''' pops up "kex error : did not find one of algos diffie-hellman-group1-sha1 in list curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1 for kex algos";
one needs to specify diffie-hellman-group1-sha1 in sshd_config. Luckily a fix exists and my business partner is looking for a way to enhance its security upstream.
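
A sketch of that sshd_config workaround, built from the list in the error message (the exact syntax depends on the OpenSSH version, and re-adding a weak kex algorithm is a security trade-off):
<pre>
# /etc/ssh/sshd_config - re-add the legacy kex algorithm expected by x2goclient
KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
</pre>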
 
==== ejabberd ====
 
with ejabberd from edge/testing repo.
I migrated the mnesia DB from an old Debian squeeze by just copying the files and changing ownership in an LXC-AL. I just had to disable mod_pubsub to have it run properly. Authentication is done with openLDAP.
I now plan to migrate a very, very old jabberd (11 years, I guess) running on a Debian etch to it, if I find a way to keep the users' passwords and rosters...
I also would like to use it as a gateway to IRC to follow #alpine, #alpine-devel and #x2go channels ;)
Some other ejabberd features are interesting to my organisation and we will experiment more in depth, namely mod_sip, mod_stun, mod_proxy65...
 
==== redmine ====
 
in a brand new LXC with the edge/main and edge/testing repos<br/>
mostly following the [[Redmine]] page<br/>
I use a mariaDB server on another host where I created the user and pushed the sqldump from a running redmine 3.0.0 instance
 
<pre>
apk update
apk upgrade
reboot
setup-timezone
apk add redmine
apk add ruby-unicorn
cp /etc/unicorn/redmine.conf.rb.sample /etc/unicorn/redmine.conf.rb
vi /etc/conf.d/unicorn
vi /etc/redmine/database.yml
apk add sudo
apk add ruby-mysql2
apk add ruby-yard
apk add tzdata
cd /usr/share/webapps/redmine
sudo -u redmine rake generate_secret_token
sudo -u redmine RAILS_ENV=production rake db:migrate
</pre>
 
==== email ====
 
LDAP
 
SMTP in
 
Antispam
 
Antivirus
 
SMTP store
 
IMAP
 
SMTP relay
 
SMTP out
 
Webmail
 
==== webhosting ====
 
Front-end
 
Back-end static
 
Back-end dynamic
 
== Master key ==
 
We want to be able to bootstrap the full infrastructure from only one usb key and one machine with physical access (to insert the usb key obviously).
 
This key will run AL stable, with only very few packages installed, but with some images on its storage.
 
<pre>
almirror:~# du -shc /var/www/localhost/htdocs/alpine/????/*/x86_64
3.4G    /var/www/localhost/htdocs/alpine/edge/main/x86_64
1.3G    /var/www/localhost/htdocs/alpine/edge/releases/x86_64
2.3G    /var/www/localhost/htdocs/alpine/edge/testing/x86_64
3.2G    /var/www/localhost/htdocs/alpine/v3.1/main/x86_64
6.5G    /var/www/localhost/htdocs/alpine/v3.1/releases/x86_64
16.6G  total
</pre> A repo with stable and edge will be present on the 32GB USB stick.
 
 
=== Initial packages ===
 
dhcp tftp-hpa syslinux nfs-utils darkhttpd openssh vim
openvswitch mdadm qemu screen collectd collectd-network gptfdisk irqbalance ssmtp mailx
 
=== Bootstrap PXEboot capacity ===
 
First, we set up the network. Remember, this is a bootstrap; we assume nothing.<br/>
It means we may take any decision we see fit.

Our primary machine is the only fixed point for now. Let's give it the number 1.<br/>
All machines will be connected to the LAN. We know nothing yet about the other NICs.<br/>
First we must decide on the LAN_IP_RANGE; for instance, let it be 192.168.1.0/24.
 
We will use a complicated network setup, so let's start by installing openvswitch:
<pre>
rc-service networking start
apk add openvswitch consul
rc-update add ovs-modules
rc-update add ovsdb-server
rc-update add ovs-vswitch
rc-service ovs-modules start
rc-service ovsdb-server start
rc-service ovs-vswitch start
ovs-vsctl add-br lan
ovs-vsctl add-br wan
ovs-vsctl add-br storage
ovs-vsctl add-br ipmi
ovs-vsctl add-br vpn
ovs-vsctl add-port lan eth0
vi /etc/network/interfaces  # iface eth0 inet manual
                            # iface lan inet dhcp
rc-service networking restart
rc-service sshd restart
</pre>
 
No machine will offer services from bare metal. LXC will be preferred, KVM otherwise.
<pre>
if [ -b /dev/sda ]; then    # bare metal or first-level KVM
  apk add qemu-system-x86_64 screen libusb
  modprobe kvm
  modprobe kvm-intel
  modprobe kvm-amd
  modprobe tun
elif [ -b /dev/vda ]; then  # second-level KVM
  ##
fi
</pre>
 
We instantiate a PXE boot server:
<pre>
screen -m -d -S KVM-PXE-01 qemu-system-x86_64 -enable-kvm -kernel /kernel -initrd /initrd -append alpine_dev=...,apkovl=... -net -net -drive /media/usb/custom/pxeboot.img,ro
</pre>
The immediate next steps are done inside this VM:
<pre>
screen -r KVM-PXE-01
</pre>
 
We need the storage space from the USB key to handle the boot images and apkovl files (passed to the VM as vda1).<br/>
In KVM-PXE-01:
<pre>
setup-alpine --mode data
vi /etc/network/interfaces  # iface eth0 inet static LAN_IP=1
reboot KVM-PXE-01
apk add dhcp
rc-update add dhcpd
vi /etc/dhcp/dhcpd.conf     # filename "pxelinux.0";
                            # next-server ${LAN_IP};
apk add darkhttpd
rc-update add darkhttpd
vi /etc/conf.d/darkhttpd    # ${LAN_IP}
rc-service darkhttpd start
apk add tftp-hpa
rc-update add in.tftpd
vi /etc/conf.d/in.tftpd     # ${LAN_IP}
rc-service in.tftpd start
apk add nfs-utils
rc-update add nfs
vi /etc/exports             # /var/tftpboot/media ${LAN_IP_RANGE}
cp -pr /media/usb /var/tftpboot/media
rc-service nfs start
mkdir -p /var/tftpboot/alpine
cp /media/usb/boot/vmlinuz* /var/tftpboot/alpine/
cp /media/usb/boot/modloop* /var/tftpboot/alpine/
apk add mkinitfs
cd /etc/mkinitfs
vi features.d/network.modules
vi features.d/dhcp.files
vi features.d/dhcp.modules
vi features.d/nfs.modules
vi mkinitfs.conf            # add network, dhcp, nfs and squashfs
mkinitfs -o /var/tftpboot/alpine/pxerd
apk del mkinitfs
apk add syslinux
cp /usr/share/syslinux/pxelinux.0 /var/tftpboot/
cp /usr/share/syslinux/ldlinux.c32 /var/tftpboot/
apk del syslinux
mkdir -p /var/tftpboot/pxelinux.cfg
vi /var/tftpboot/pxelinux.cfg/default
# mirror the AL repositories locally
src=rsync://rsync.alpinelinux.org/alpine/
dest=/var/www/localhost/htdocs/alpine/
exclude="--exclude v2.[0-9] --exclude v3.0 --exclude edge-uclibc --exclude armhf --exclude x86/"
mkdir -p "$dest"
/usr/bin/rsync -prua \
        $exclude \
        --delete \
        --timeout=600 \
        --delay-updates \
        --delete-after \
        "$src" "$dest"
dest=/var/www/localhost/htdocs/apkovl/
mkdir -p "$dest"
apk add consul
rc-update add consul default
# configure consul as server
rc-service consul start
apk add timeserver          # placeholder: pick an NTP package
rc-update add timeserver
rc-service timeserver start
apk add dnscache
vi /etc/dnscache.conf
rc-update add dnscache
rc-service dnscache start
apk add git
# pseudo: register our services in consul
consul service add pxe
consul service add repo
consul service add dnscache
consul service add timeserver
# detect if running from key or from pxe
if run_from_usb; then       # /media/usb
  mkdir -p /var/www/localhost/htdocs/apkovl
  cd /var/www/localhost/htdocs/apkovl
  git init
  # populate with default config for PXEboot client
  git add .
  git commit -m "apkovl: initial commit"
else                        # /media/alpine
  ##
  consul join pxeserver
  rm -fr /var/www/localhost/htdocs/apkovl
  cd /var/www/localhost/htdocs/
  git clone apkovl
  cd /var/tftpboot/
  git clone pxelinux.cfg
fi
if consul_leader_is_self; then   # pseudo: are we the consul leader?
  rc-service dhcpd restart
else
  rc-service dhcpd stop
  dhclient lan
fi
check_consul_leader_is_dhcpd_server   # pseudo
</pre>
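
For illustration only, the mkinitfs feature files edited above could look roughly like this; the globs and the features line are assumptions, so check the files shipped with the mkinitfs package:
<pre>
# /etc/mkinitfs/features.d/network.modules
kernel/drivers/net/virtio_net.ko

# /etc/mkinitfs/features.d/nfs.modules
kernel/fs/nfs

# /etc/mkinitfs/mkinitfs.conf
features="ata base ide scsi usb virtio ext4 network dhcp nfs squashfs"
</pre>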
 
=== Bootstrap regular machines ===
 
A <u>manual</u> phase one, valid for every KVM => default.apkovl.tar.gz:
<pre>
setup-alpine --mode none    # (eth0 dhcp)
umount /media/alpine
rc-service networking start # (if not already done)
setup-repository pxeserver
apk update
apk add openvswitch consul openssh lxc rsync screen git curl collectd collectd-network
rc-update add consul
rc-update add sshd
rc-update add collectd
rc-update add ovs-modules
rc-update add ovsdb-server
rc-update add ovs-vswitch
rc-service ovs-modules start
rc-service ovsdb-server start
rc-service ovs-vswitch start
ovs-vsctl add-br lan
ovs-vsctl add-br wan
ovs-vsctl add-br storage
ovs-vsctl add-br ipmi
ovs-vsctl add-br vpn
ovs-vsctl add-port lan eth0
vi /etc/network/interfaces  # iface eth0 inet manual
                            # iface lan inet dhcp
rc-service networking restart
cp -r /media/alpine/bootstrap/ssh ~/.ssh
chmod -R go-rwx ~/.ssh
lbu add ~/.ssh/authorized_keys
consul service add ssh check    # pseudo: register the ssh service in consul
lbu package
scp ~/${hostname}.apkovl.tar.gz pxeserver:/var/www/localhost/htdocs/apkovl/default.apkovl.tar.gz
</pre>
 
As phase 2, set up /etc/local.d/bootstrap.start (and chmod +x /etc/local.d/bootstrap.start):
<pre>
#!/bin/sh
rc-service consul start
consul join ${pxeserver}
pxeserver=$(consul leader)
wget ${pxeserver}:fichier_de_conf?name=${MAC}
echo "#!/bin/sh" > /etc/local.d/untie.start
echo "umount /media/alpine" >> /etc/local.d/untie.start
chmod +x /etc/local.d/untie.start
apk add nfs-utils
rc-update add nfs default
rc-service nfs start
echo "${pxeserver}:/var/nfs/local/ /usr/local nfs ro,nosuid,intr 0 0" >> /etc/fstab
mount /usr/local
if dmesg | grep -q "/dev/sda"; then
  # prepare SAN
  apk add qemu-system-x86_64 screen libusb ssmtp mailx mdadm
  modprobe kvm
  modprobe kvm-intel
  modprobe kvm-amd
  modprobe tun
  echo "tun" >> /etc/modules
  echo "kvm-intel" >> /etc/modules
  echo "kvm-amd" >> /etc/modules
  rc-update add mdadm
  rc-update add mdadm-raid
  cat <<EOF > /etc/ssmtp/ssmtp.conf
EOF
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --detail --scan > /etc/mdadm.conf
  echo "MAILADDR my@address" >> /etc/mdadm.conf
  echo "screen -m -d -S storage qemu -net -net -boot n -drive /dev/md0" >> /etc/local.d/untie.start
  echo $(date) > /etc/issan
  setup-hostname ${hostname_by_config}
  consul service add raid
  consul service add kvm
  rc-status --all | mailx -s "Install report ${MAC}" cloud@hellea.net
elif dmesg | grep -q "/dev/vda" && [ -f /etc/issan ]; then
  apk add lvm2 nbd gptfdisk netcat    # later we hope for rbd (ceph) also
  rc-update add lvm2
  echo "nbd" >> /etc/modules
  pvcreate /dev/vda
  vgcreate storage /dev/vda
  # every SAN will have a copy of the needed files to start a new PXE server
  # and as it is always mounted ro everywhere but on the master, it may be mounted several times
  lvcreate -L 32g -n pxeserver storage
  # we copy it from the running consul leader (the active PXE server)
  screen -m -d -S REC-pxeserver sh -c "nc -l -p 12345 | dd bs=16M of=/dev/storage/pxeserver"
  ### ssh ${pxeserver} screen -m -d -S SND-pxeserver sh -c "dd bs=16M if=/dev/vda | nc ${self} 12345"
  consul event -name "pushsan" ${MAC}
  nbd-server publish storage/pxeserver ro   # to add to the nbd-server config
  setup-hostname ${hostname_by_config}
  consul service add storage
elif dmesg | grep -q "/dev/vda"; then
  # not diskless, /dev/vda present
  apk add xfsprogs btrfs-progs gptfdisk lxc
  modprobe xfs
  modprobe nbd
  echo "nbd" >> /etc/modules
  echo "xfs" >> /etc/modules
  echo "tun" >> /etc/modules
  rc-update add nfs
  setup-alpine --mode data -f fichier_de_conf
  consul service add lxc
else
  # diskless
  apk add screen vim qemu-system-x86_64 libusb
  echo "tun" >> /etc/modules
  echo "kvm-intel" >> /etc/modules
  echo "kvm-amd" >> /etc/modules
  setup-hostname ${hostname_from_conf}
  consul service add kvm
fi
rm /etc/local.d/bootstrap.start
echo "    hostname $(hostname)" >> /etc/network/interfaces
lbu package
/etc/local.d/untie.start
consul event -name "bootstrapped" ${MAC}
</pre>
 
The process is as follows: we launch a new KVM (stage 1) with only its hwaddr on the LAN as identification and a list of (N)BDs as virtual drives. <br/>
The first launch will use the default.apkovl.tar.gz, which in turn will generate the ${MAC}.apkovl.tar.gz and reboot (stage 2).  <br/>
At this stage we have an empty available server known by its hwaddr on the LAN.  <br/>
Stage 3 is to push some useful service configuration and activate it (ansible? chef? puppet? home made? other?). <br/>
 
=== Discussion ===
 
All we need now to boot other AL machines (be they physical or virtual) are some {MAC}.apkovl.tar.gz files served by darkhttpd.  <br/>
We badly need name resolution at this stage.  <br/>
DNS and resolver are needed.  <br/>
DNS (LAN view) will be updated dynamically by consul.  <br/>
The resolver knows localhost.consul.agent, a.ns.hellea.net and the default route (if known at this stage).  <br/>
DNS will be consul (LAN view) and djbdns (WAN view), and the resolver will be dnscache (both from repo).
 
It is to be noted that after bootstrap, KVMs may move to other physical machines.  <br/>
As long as some KVM-PXEboot is somehow connected to the LAN, everything stays alive!  <br/>
This precise image will be reproduced in every SAN built.
 
==== Remark ====
 
It is unfortunate to have to push the config from the newly bootstrapped KVM, as we therefore need to know some key pair to connect to the pxeserver. <br/>
It would be better to fire an event on the consul leader when a new hwaddr appears on OVS "lan" (or in dhcp, or pxe...). <br/>
That way only the consul leader has to know about the KVMs, and it already knows a lot about those.
 
We have two planned needs to exchange data:
# push of the PXEboot content (including the AL mirror)
#* by the magic of a consul event
#* consul event watch "pushsan" ${MAC}
#** dd bs=16M if=/dev/vda | nc ${MAC}.fqdn 12345
# pull of the ${MAC}.apkovl.tar.gz
#* As the KVM invocation is done from a special server, "la_console", la_console can pull the apkovl from the new KVM and push it to the consul leader. This could be done by watching consul events. The reboot of the KVM for stage 3 can be initiated from la_console through ssh.
#* consul event watch "bootstrapped" ${MAC}
#** wait for the end of any lock on /dev/vda
#** scp ${MAC}.fqdn:${MAC}.apkovl.tar.gz $(pxeserver):/var/www/localhost/htdocs/apkovl/${MAC}.apkovl.tar.gz
#** ssh ${MAC}.fqdn reboot
 
=== Deploy ===
 
After bootstrapping, we have a way to boot any AL KVM or bare-metal machine in about 10 seconds.
 
<u>First</u> we deploy KVM-SAN on bare-metal.
 
Next we deploy KVM-AL grouping (or not) some LXC (AL or debian).
 
<u>Second</u> we deploy low-level services: syslog-ng, fail2ban, openVPN, '''la_console''', http-reverse-proxy (primary and secondary), http-proxy, smtp relay, secondary resolver, secondary dns, ldap (primary and secondary), NAS, mariaDB, backups, collectd, shinken, local AL repo, git
 
<u>Third</u> intermediary services: smtp in, smtp out, antivirus, antispam, smtp store, imap, pop3, http, php, sip, jabber
 
<u>High level</u> services: x2goserver, lamp, mail toaster, webdav, redmine, etc
 
For each of those services, we provide a template in the form of a {kvm-template}.apkovl.tar.gz.<br/>
After customisation, "lbu package" followed by sending the resulting tarball to the central repository is all that is needed.<br/>
We follow a naming convention for MACs:
 
For bare metal, the first 3 bytes of the MAC are the manufacturer ID.<br/>We symlink those to the baremetal.apkovl.tar.gz.
 
For KVM, we fix the MAC ourselves. <br/>The first 2 bytes (AA:BB) are fixed. <br/>The third one (CC) is the level type of the KVM. <br/>The fourth one (DD) is the specific type of the template. <br/>The last 2 are an incremental unique ID. <br/>So we are able to define pxelinux.cfg/AA:BB:CC:DD symlinks to config files defining the use of {kvm-template}.apkovl.tar.gz.
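
As a note on the PXELINUX side (to the best of my knowledge): stock PXELINUX looks up config files named after the full MAC, prefixed with the ARP type and using dashes (01-aa-bb-cc-dd-ee-ff), not after a MAC prefix, so one symlink per KVM may be needed; for instance (hypothetical names):
<pre>
cd /var/tftpboot/pxelinux.cfg
ln -s kvm-san.cfg 01-aa-bb-03-01-00-07   # this KVM's MAC is AA:BB:03:01:00:07
</pre>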
 
As the {kvm-template}.apkovl.tar.gz files tend to be small, we can store a lot of them on the initial USB stick.<br/>
Depending on the available space on the USB stick, we could offer {lxc-template}s the same way from the USB stick, to be downloaded from darkhttpd with wget to the right KVM. Or, later on, from any wanted NAS.
 
We add a couple of other OVS (WAN, STORAGE) on every machine. Some are connected to NICs. Some are connected to VPNs. Netflow will be used in the future to manage the network (NaaS: network as a service). One of those OVS (WAN) allows connected machines to access the internet through a default route passing through a physical firewall. STORAGE is used for data replication between SANs and NAS.
 
We have the list of bare-metal machines.<br/>
Those may launch a KVM in one command.

We have the list of SAN KVMs.<br/>
Those may create and publish an NBD in two commands.<br/>
Even on ''diskless'' machines those are present to offer nbd-proxy in one command.
 
All those commands are grouped as ''one-liner'' scripts in some redundant NAS available from '''la_console'''.
 
Waiting for CEPH, we need a strategy for duplicating NBDs across SANs.
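
For now, the dd-over-nc pattern already used above for the pxeserver LV is the working sketch (LV name, host name and port are placeholders):
<pre>
# on the receiving SAN
nc -l -p 12345 | dd bs=16M of=/dev/storage/somelv
# on the sending SAN
dd bs=16M if=/dev/storage/somelv | nc receiving-san 12345
</pre>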


== About NFS ==

NFS is now working with AL, both as server and client, with the nfs-utils package.
However, using NFS as a client from inside some LXC does not seem to work yet, as shown below.

<pre>
nfstest:~# mount -t nfs -o ro 192.168.1.149:/srv/boot/alpine /mnt
mount.nfs: Operation not permitted
mount: permission denied (are you root?)
nfstest:~# tail /var/log/messages 
Apr  4 10:05:59 nfstest daemon.notice rpc.statd[431]: Version 1.3.1 starting
Apr  4 10:05:59 nfstest daemon.warn rpc.statd[431]: Flags: TI-RPC 
Apr  4 10:05:59 nfstest daemon.warn rpc.statd[431]: Failed to read /var/lib/nfs/state: Address in use
Apr  4 10:05:59 nfstest daemon.notice rpc.statd[431]: Initializing NSM state
Apr  4 10:05:59 nfstest daemon.warn rpc.statd[431]: Failed to write NSM state number: Operation not permitted
Apr  4 10:05:59 nfstest daemon.warn rpc.statd[431]: Running as root.  chown /var/lib/nfs to choose different user
nfstest:~# ls -l /var/lib/nfs
total 12
-rw-r--r--    1 root     root             0 Nov 10 15:43 etab
-rw-r--r--    1 root     root             0 Nov 10 15:43 rmtab
drwx------    2 nobody   root          4096 Apr  4 10:05 sm
drwx------    2 nobody   root          4096 Apr  4 10:05 sm.bak
-rw-r--r--    1 root     root             4 Apr  4 10:05 state
-rw-r--r--    1 root     root             0 Nov 10 15:43 xtab
</pre>

Message from ncopa: """ dmesg should tell you that grsecurity tries to prevent you to do this.

grsecurity does not permit the syscall mount from within a chroot since that is a way to break out of a chroot. This affects lxc containers too.

I would recommend that you do the mounting from the lxc host in the container config with lxc.mount.entry or similar.

https://linuxcontainers.org/lxc/manpages/man5/lxc.container.conf.5.html#lbAR

If you still want to disable mount protection in grsecurity then you can do that with: echo 0 > /proc/sys/kernel/grsecurity/chroot_deny_mount """

This is not working with
<pre>
lxc.mount.entry=nfsserver:/srv/boot/alpine mnt nfs nosuid,intr 0 0
</pre>
on the host machine with all NFS modules and helper software installed and loaded:
<pre>
backend:~# lxc-start -n nfstest
lxc-start: conf.c: mount_entry: 2049 Invalid argument - failed to mount
'nfsserver:/srv/boot/alpine' on '/usr/lib/lxc/rootfs/mnt'
lxc-start: conf.c: lxc_setup: 4163 failed to setup the mount entries for
'nfstest'
lxc-start: start.c: do_start: 688 failed to setup the container
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 2
lxc-start: start.c: __lxc_start: 1080 failed to spawn 'nfstest'
</pre>

Nor with
<pre>
echo 0 > /proc/sys/kernel/grsecurity/chroot_deny_mount
</pre>
on the host machine either.

Finding a proper way to use NFS shares from AL LXC is an important topic, in order to be able, for instance, to load balance web servers sharing content uploaded by users.

Next step will be to have HA for the NFS server itself (with only AL machines).

== About NBD ==

NBD is now in edge/testing thanks to clandmeter.

We now use xnbd ^^

Also we are still looking for the right solution to back up an NBD as a whole (versus by its content) while in use. dd|nc is the way used nowadays.
== About consul ==
 
nothing yet but big hopes ^^<br/>
I'm lurking on IRC about it ;)

We plan to use its dynamic DNS feature, its hosts listing, services inventory, events, k/v store... <br/>
and even semi high-availability for our PXE infrastructure, the consul leader being the active PXE server and the other consul servers being dormant PXE servers.<br/>
All config scripts will be adapted to pull values out of the consul k/v datastore, based on profiles found in consul's various lists.<br/>
As the key for dhcpd and PXEboot is the hwaddr, it will become our uuid for the LAN and for consul too.<br/>
'''We are very excited by consul's capacities!'''<br/>
We will be avid testers!
 
'''Open questions''':
 
# What memory footprint is needed?
# What about dynamically adapting the quorum size?
# Are checks possible triggers?
#* <pre>consul watch -prefix type -name name /path/to/executable</pre>
#* <pre>consul event [options] -name name [payload]</pre>
# What is the best practice to store /etc configurations?
#* http://code.hootsuite.com/distributed-configuration-management-and-dark-launching-using-consul/
#* http://agiletesting.blogspot.fr/2014/11/service-discovery-with-consul-and.html
#* envconsul
#* consul-template
 
log of experimentation at [[User_talk:Jch/consul]]
 
== About CEPH ==
 
CEPH is supposed to solve the problem of high availability for the data stores, be they block devices (disks) or character devices (files).
 
The current situation is not satisfactory.
 
'''We are very excited by CEPH's capacities!'''<br/>
We will be avid testers!
 
The Alpine kernel has now RBD modules compiled.
 
We will build a CEPH cluster out of 3 Ubuntu LTS machines and use AL boxes as clients if possible (to launch qemu instances directly from RBD). If not, we will then attach the RBDs and re-export them with xNBD inside a Debian KVM.
 
== About Docker ==
 
not a lot of information on the [[Docker]] page yet ...
 
== About E-MailRelay ==
 
E-MailRelay is a simple SMTP proxy and store-and-forward message transfer agent (MTA). <br/>
See http://emailrelay.sourceforge.net/
 
It compiles fine on AL.
<pre>
apk update
apk add subversion alpine-sdk
svn checkout svn://svn.code.sf.net/p/emailrelay/code/trunk emailrelay-code
cd emailrelay-code
./configure --prefix=/usr
make
make install
apk del subversion alpine-sdk
apk add libgcc libstdc++
emailrelay --help
</pre>
 
But I still have issues properly building a package because it wants to install some stuff in <PREFIX>/libexec...<br/>
(And I also need to separate -doc, -test, -extra and optionally -gui into subpackages, I guess.)
 
== About X2Go ==

=== x2goserver ===

I did prepare x2goserver and nx-libs packages.

=== x2goclient ===

<pre>
lrelease-qt4 x2goclient.pro
/bin/bash: lrelease-qt4: command not found
Makefile:39: recipe for target 'build_client' failed
</pre> Dunno where to find that...

== New lab machine ==

Very soon, I will receive a brand new lab machine.

I plan to use lxc in '''qemu''' ('''KVM''') in qemu (yes, '''''twice'''''!) to '''''simulate a rack of servers running AL'''''.

There will be 8 first-level KVMs: a '''firewall''', a '''router''', '''storage''' nodes and '''compute''' nodes.

'''OpenVSwitch''' (OVS) will be used to simulate the networks ('''isp''', '''internet''', '''lan''', '''storage''', '''wan''', '''ipmi''').

The first-level KVMs will receive block devices (BD) as logical volumes ('''LV''') in '''LVM2''' on top of a '''mdadm''' raid array composed of the physical hard disk drives. <br/>
They will assemble the received BDs with mdadm and pass the raw raid as a single BD to the second-level SAN KVMs. Those SANs will use LVM2 to publish LVs as '''NBD''' on OVS "lan".<br/>
Some second-level KVMs will mount NBDs to expose '''NFS''' shares.<br/>
Others will mount NBDs and NFS shares for real data access with containers ('''LXC''') and expose services on OVS "wan" or "lan".<br/>

The first second-level KVM to be launched will be a virtual laptop booted from a virtual USB stick. This particular machine will offer a PXEboot environment to the OVS "lan".<br/>
The storage and compute nodes will be launched with PXE on the OVS "lan" but will be able to run totally from RAM with no strings attached to the boot devices (for instance the initial NFS share).

As soon as 1 SAN and 1 compute node are available, the PXEboot server will reproduce itself from the virtual laptop USB stick to the compute node, using the storage node to store the information about the setup, then live-migrate (keeping the status of running machines).

'''eth0''' is ''always'' connected to OVS "lan", except on the firewall (connected to OVS "internet" and "isp"). <br/>
The router is connected to all OVS but "isp" and "storage".<br/>
The storage nodes are connected to OVS "storage".<br/>
The compute nodes are connected to OVS "wan".

The '''DHCP''' lease is offered with no time limit after an absence check on OVS "lan".

As a matter of fact, the only difference between a first- and a second-level KVM is '''sda''' for the first and '''vda''' for the second.

All machines run a '''consul''' instance.<br/> The PXEboot server is a fixed, known consul server guaranteed to be present (otherwise boot does not even exist!).<br/>
On the N first compute nodes launched, a consul server KVM will be started (configured to reach a quorum of N) to replace the standard consul client. <br/>
As the state of the running cluster is always kept in the PXEboot server, this capacity is present in all consul servers but active only on the actual consul leader.<br/>
We need to link or maintain the PXE configuration and bootstrap files (including the relevant apkovl) in the consul key/value datastore to benefit from its resilience.<br/>
We need to hack lbu commit to push the resulting apkovl to all consul servers (as they are also stand-by copies of the consul leader).<br/>
Each consul election needs to enforce the consul leader as the active PXE server.

In the real rack, at this stage, we just switch machines on, connected to the right switches, after checking that they will boot through PXE on the first NIC (eth0).<br/>
In our simulator, we can manually start a KVM as a fake physical machine (sda) or have a script on the real physical lab machine driving the life cycle of those KVMs.

== My laptop setup ==

AL 3.3 with +/etc/inittab+ <pre>
tty5::respawn:/usr/bin/su - jch mcabber
tty6::respawn:/usr/bin/su - jch tmux
tty7::respawn:/usr/bin/su - jch startx
</pre> and +~/.xinitrc+ <pre>
#!/bin/sh
exec chromium-browser --no-sandbox
</pre>

== About gvpe ==

{{pkg|gvpe}}<br>
http://software.schmorp.de/pkg/gvpe.html

Plan to use it to interconnect about 5 sites.

== About freeswitch ==

I have a request to run a SIP server for a couple of users.<br/>
I'm doing it in some LXC accessed through an openVPN from Jolla phones.

== New rollout of our infra ==

This week, we will upgrade some hardware and also redo all the infrastructure based on the fresh 3.3 series.

The compute nodes will run (on bare metal) with mdadm, openvswitch, qemu, consul, collectd, screen (maybe tmux) and openssh.

The storage nodes will run a CEPH cluster (unfortunately not based on AL).

Everything else will run in various KVMs on the compute nodes.

First, let's check if the needed packages are available in the basic ISOs. If yes, we will be able to run from USB keys. If not, we will need to have a sys install on the HDD...
