= NFS bug study =


All Debian machines used are fresh installs of Wheezy 7.8.<br/>
All Alpine machines used are fresh installs of edge. (Will also try the vanilla kernel in KVM.)<br/>
All boxes are Supermicro servers with dual Xeons, running AL from a USB key.<br/>
I do not have physical access to the boxes!


The NFS servers are configured to export
/srv/home      192.168.1.0/24(rw,sync,no_subtree_check)


The NFS clients are configured to mount from fstab
storage:/srv/home /home nfs noauto,defaults,noexec 0 0


"storage" is defined in /etc/hosts to point to the right server.


The test is done with
mount /home


We will compare the '''dmesg''' outputs, the '''ls -ld /home''' outputs, and the '''cat /home/test''' and '''touch /home/toto''' ones. /home/test is prepared on the server (just a text file containing "do you see me?"). These tests are run as the root user.


'''Will redo the usage tests with a non-root user because of the default root squash of NFS...'''
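A minimal sketch of one test run, using only the commands listed above (the log file path is just an example):
<pre>
#!/bin/sh
# Run the comparison described above on one client and keep everything in a log.
{
  mount /home
  dmesg | tail -n 20
  ls -ld /home
  cat /home/test
  touch /home/toto
} > /root/nfs-test.log 2>&1
</pre>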


== NFS-server in KVM-Debian ==
Fresh install with tasksel "file server".<br/>
This KVM is running on bare-metal Alpine.


=== nfs-client in KVM AL ===


mount /home gives<br/>
in '''dmesg'''
<pre>
[73460.112383] RPC: Registered named UNIX socket transport module.
[73460.112386] RPC: Registered udp transport module.
[73460.112388] RPC: Registered tcp transport module.
[73460.112389] RPC: Registered tcp NFSv4.1 backchannel transport module.
[73460.165060] svc: failed to register lockdv1 RPC service (errno 111).
[73460.165069] lockd_up: makesock failed, error=-111
[73460.217513] NFS: Registering the id_resolver key type
[73460.217524] Key type id_resolver registered
[73460.217525] Key type id_legacy registered
</pre>
in '''ls -ld /home'''
drwxr-xr-x    2 42949672 42949672        6 Jan 23 12:27 /home
in '''cat /home/test'''
  Do you see me?
in '''touch /home/toto'''
touch: /home/toto: Permission denied


=== nfs-client in KVM debian ===


'''dmesg''' is empty<br/>
'''ls -ld /home'''
drwxr-xr-x 2 root root 17 Jan 23 08:39 /home
'''cat /home/test'''
Do you see me?
'''touch /home/toto''' (even after adding rw to the mount options in fstab)
touch: cannot touch `/home/toto': Permission denied


<u>Some pointers to investigate this permission problem</u>:
* http://unix.stackexchange.com/questions/79172/nfs-permission-denied


''To begin using machine as an NFS client, you will need the portmapper running on that machine, and to use NFS file locking, you will also need rpc.statd and rpc.lockd running on both the client and the server.''




=== nfs-client in LXC AL (on bare metal AL) ===


apk add nfs-utils
dmesg is empty so far
mount /home
'''dmesg'''
<pre>
[4153944.457610] RPC: Registered named UNIX socket transport module.
[4153944.457615] RPC: Registered udp transport module.
[4153944.457618] RPC: Registered tcp transport module.
[4153944.457620] RPC: Registered tcp NFSv4.1 backchannel transport module.
[4153944.504475] svc: failed to register lockdv1 RPC service (errno 111).
[4153944.504484] lockd_up: makesock failed, error=-111
[4153944.681725] NFS: Registering the id_resolver key type
[4153944.681744] Key type id_resolver registered
[4153944.681748] Key type id_legacy registered
</pre>
'''ls -ld /home'''
drwxr-xr-x    2 42949672 42949672        17 Jan 23 14:39 /home
'''cat /home/test'''
Do you see me?
'''touch /home/toto'''
touch: /home/toto: Permission denied


=== nfs-client in LXC AL (in KVM AL) ===


apk add nfs-utils
but
<pre>
# mount /home
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
mount: permission denied (are you root?)
</pre>
and
<pre>
# /etc/init.d/rpc.statd start
* Caching service dependencies ... [ ok ]
* Starting rpcbind ... [ ok ]
* Starting NFS statd ... * start-stop-daemon: failed to start `/usr/sbin/rpc.statd'
[ !! ]
* ERROR: rpc.statd failed to start
</pre>
'''dmesg'''
<pre>
[74747.135827] rpcbind[6718]: segfault at 7ccfe7b0 ip 000072977ccef5cd sp 00007c6b3e329a68 error 4 in ld-musl-x86_64.so.1[72977cca0000+85000]
[74747.135841] grsec: Segmentation fault occurred at 000000007ccfe7b0 in /sbin/rpcbind[rpcbind:6718] uid/euid:100/100 gid/egid:101/101, parent /bin/busybox[init:1831] uid/euid:0/0 gid/egid:0/0
[74747.135887] grsec: bruteforce prevention initiated due to crash of /sbin/rpcbind against uid 100, banning suid/sgid execs for 15 minutes.  Please investigate the crash report for /sbin/rpcbind[rpcbind:6718] uid/euid:100/100 gid/egid:101/101, parent /bin/busybox[init:1831] uid/euid:0/0 gid/egid:0/0
</pre>


=== nfs-client in LXC debian (in KVM AL) ===


apt-get install nfs-common
gives
[FAIL] Starting NFS common utilities: statd idmapd failed!
then mount /home gives the same results in the guest as in the host.


== NFS-server in KVM-Alpine ==


Done from a KVM running in memory straight from the ISO.
<pre>
CDROM="/my/path/alpine-mini-3.1.1-x86_64.iso"
qemu-system-x86_64 -name test -enable-kvm -cpu qemu64 -m 256 -smp 1 -curses \
  -net nic,vlan=0,model=virtio,macaddr=52:54:32:a0:a0:a0 \
  -net tap,vlan=0,script=/etc/openvswitch/ovs-ifup-lan,downscript=/etc/openvswitch/ovs-ifdown-lan,ifname=test0 \
  -cdrom ${CDROM}
</pre>
Do not forget to issue "grsec nomodeset" at the SYSLINUX prompt or you lose the output (I'm doing it through an ssh terminal).
<pre>
# setup-alpine # no disk install at all, no apk cache but proxy
# . /etc/profile.d/proxy.sh
# apk add nfs-utils
# echo "/home  192.168.1.0/24(rw,no_root_squash)" >> /etc/exports
# echo "Do you see me?" > /home/test
# /etc/init.d/nfs start
* Caching service dependencies ...                                      [ ok ]
* Starting rpcbind ...                                                  [ ok ]
* Starting NFS statd ...
* start-stop-daemon: failed to start `/usr/sbin/rpc.statd'              [ !! ]
* ERROR: rpc.statd failed to start
* ERROR: cannot start nfs as rpc.statd would not start
# dmesg # only relevant lines displayed
[  462.262020] rpcbind[1890]: segfault at 1e783940 ip 000070591e773f1d sp 00007dc1da01a4d8 error 4 in ld-musl-x86_64.so.1[70591e724000+86000]
[  462.262032] grsec: Segmentation fault occurred at 000000001e783940 in /sbin/rpcbind[rpcbind:1890] uid/euid:100/100 gid/egid:101/101, parent /bin/busybox[init:1] uid/euid:0/0 gid/egid:0/0             
[  462.262043] grsec: bruteforce prevention initiated due to crash of /sbin/rpcbind against uid 100, banning suid/sgid execs for 15 minutes.  Please investigate the crash report for /sbin/rpcbind[rpcbind:1890] uid/euid:100/100 gid/egid:101/101, parent /bin/busybox[init:1] uid/euid:0/0 gid/egid:0/0
# poweroff
</pre>
Let's try with the vanilla kernel
CDROM="/my/path/alpine-vanilla-3.1.1-x86_64.iso"
with the same command line and the same sequence of instructions
<pre>
test:~# /etc/init.d/nfs start
* Caching service dependencies ...                                      [ ok ]
* Starting rpcbind ...                                                  [ ok ]
* Starting NFS statd ...
* start-stop-daemon: failed to start `/usr/sbin/rpc.statd'              [ !! ]
* ERROR: rpc.statd failed to start
* ERROR: cannot start nfs as rpc.statd would not start
test:~# dmesg
[  243.445710] rpcbind[1930]: segfault at 33f30940 ip 00007f5a33f20f1d sp 00007fffa4290e48 error 4 in ld-musl-x86_64.so.1[7f5a33ed1000+86000]
test:~# poweroff
</pre>


Obviously I will not be able to test clients now...


=== nfs-client on bare metal AL ===


=== nfs-client in KVM AL ===


=== nfs-client in KVM debian ===


=== nfs-client in LXC AL (on bare metal AL) ===


=== nfs-client in LXC AL (in KVM AL) ===


=== nfs-client in LXC debian (in KVM AL) ===


= How to automate KVM creation =


The goal is not only to have a working install but to have it at the post-setup-alpine stage without human intervention...
This is the first stage of a work in progress...


I want to pass a block device and a name as parameters. The block device could be an image file, an LV, an NBD, a HDD, a RAID array, whatever.<br/>
Everything else should be fully automatic, according to some config file (stating the http-proxy, the time server, the log server, ...).


Then I will just run the script, watch my DHCP logs to discover the newly assigned IP (that's why the name is a parameter), then log in with ssh without a password to customize it further, but at a high level only (it will in fact be a robot, not me).


I guess it would be something like emulating boot from a USB key with a specific overlay already on the key... <br/>
then running setup-disk with proper parameters on the command line to avoid the interactive process (like setup-alpine does)... <br/>
Methinks this could be done with a couple of scripts put in /etc/local.d/, the last .stop one deleting all of them to be clean at the next reboot; see the sketch below.<br/>
Let's start easy ;)
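A rough sketch of that idea, assuming /dev/sda is the target disk and that setup-disk's confirmation prompt can simply be answered with "y" (file names are illustrative, this is untested):
<pre>
#!/bin/sh
# /etc/local.d/10-install.start -- non-interactive disk install on first boot
yes | setup-disk -m sys /dev/sda
</pre>
<pre>
#!/bin/sh
# /etc/local.d/99-cleanup.stop -- delete the one-shot scripts so the next boot is clean
rm -f /etc/local.d/10-install.start /etc/local.d/99-cleanup.stop
</pre>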


== How to prepare an img file to emulate a USB key ==


First, a working example done in the console (accessed through ssh).<br/>
Will build a script from it...


First, let's prepare some block device (here an image file, but it could be something else):
<pre>
apk add qemu-img
qemu-img create -f raw usbkey.img 512M
apk del qemu-img
T="usbkey.img"
</pre>


Next, let's install AL on this $T:
<pre>
apk add multipath-tools syslinux dosfstools
fdisk $T
kpartx -av $T
mkdosfs -F32 /dev/mapper/loop1p1
dd if=/usr/share/syslinux/mbr.bin of=/dev/mapper/loop1
syslinux /dev/mapper/loop1p1
mkdir key
mount -t vfat /dev/mapper/loop1p1 key
wget http://wiki.alpinelinux.org/cgi-bin/dl.cgi/v3.1/releases/x86_64/alpine-mini-3.1.1-x86_64.iso
mkdir cdrom
mount alpine-mini-3.1.1-x86_64.iso cdrom
cd cdrom
cp -a .alpine-release * ../key/
cd ..
umount key
umount cdrom
kpartx -d $T
apk del multipath-tools syslinux dosfstools
rm alpine-mini-3.1.1-x86_64.iso
</pre>


This block device may now be used to boot some KVM, for instance like:
<pre>
screen -d -m -S KVM-builder \
qemu-system-x86_64 -name KVM-usb -enable-kvm -cpu qemu64 -curses \
-device nec-usb-xhci -drive if=none,id=usbstick,file=$T -device usb-storage,drive=usbstick
</pre>
This is working fine.


The problem is that when adding a HDD to the lot, qemu tries to boot from the HDD and does not even try to boot from the USB key. Enabling the boot menu lets one access the emulated BIOS, which allows selecting the USB device to boot interactively, but this breaks the goal of a fully automated boot :( The stanza is for instance:
<pre>
screen -d -m -S KVM-builder \
qemu-system-x86_64 -name KVM-usb -enable-kvm -cpu qemu64 -curses \
-device nec-usb-xhci -drive if=none,id=usbstick,file=$T -device usb-storage,drive=usbstick \
-drive file=$T2 -boot menu=on
</pre>


qemu-doc states that very clearly:<br/>
> -boot [order=drives][,once=drives][,menu=on|off][,splash=sp_name][,splash-time=sp_time][,reboot-timeout=rb_timeout][,strict=on|off]<br/>
>  Specify boot order drives as a string of drive letters. Valid drive letters depend on the target architecture. The x86 PC uses: a, b (floppy 1 and 2), c (first hard disk), d (first CD-ROM), n-p (Etherboot from network adapter 1-4), hard disk boot is the default.
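One possible workaround, assuming a QEMU recent enough to support the bootindex device property (which takes precedence over the legacy -boot order for the devices that set it):
<pre>
screen -d -m -S KVM-builder \
qemu-system-x86_64 -name KVM-usb -enable-kvm -cpu qemu64 -curses \
-device nec-usb-xhci -drive if=none,id=usbstick,file=$T \
-device usb-storage,drive=usbstick,bootindex=0 \
-drive file=$T2
</pre>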
 
= Starting AL from network =


As it does not seem possible to start qemu with a virtual USB key *and* a virtual HDD attached to the VM, let's try something different: start AL from the network and mount the HDD later on...


Usually this kind of setup needs
* a DHCP server to get an IP address and the location of the TFTP server (a minimal dhcpd example is sketched after this list)
* a TFTP server to download the kernel and the root file system to boot from
* a NFS server or a HTTP one to get the overlay used to configure the machine
* a NFS server to share files with others
* a NBD server to get its own block devices as storage
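For the DHCP part, a minimal ISC dhcpd fragment could look like this (all addresses and file names are examples):
<pre>
# /etc/dhcp/dhcpd.conf -- minimal PXE-boot fragment
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.199;
  option routers 192.168.1.1;
  next-server 192.168.1.2;      # the TFTP server
  filename "pxelinux.0";        # PXE loader shipped with syslinux
}
</pre>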


First, let's check what is available in AL and what is not... ''to be continued''
* dhcpcd-6.6.7-r0
* tftp-hpa-5.2-r1
* nfs-utils-1.3.1-r2 darkhttpd-1.10-r1 (nfs seems broken, see supra)
* qemu-nbd (not really good but exists)


== dhcpd ==


== tftp ==


== nfs ==


== http ==


[[Darkhttpd]]


== nbd ==


I really would like to have xnbd-server from

Latest revision as of 01:56, 28 August 2023

How to automate KVM creation

How to emulate USB stick with KVM.

Starting_AL_from_network

How to set up a PXE environment.

Building_a_complete_infrastucture_with_AL

From first repo (boot media):

AlpineLinux dhcpd tftp-hpa syslinux mkinitfs nfs-utils darkhttpd rsync openssh openvswitch screen qemu-system-X86_64 qemu-img gptfdisk parted mdadm lvm2 nbd xfsprogs e2fsprogs multipath '''consul''' dnsmasq vim collectd collectd-network git syslog-ng <s>envconsul</s> <s>consul-template</s> <s>xnbd</s> <s>ceph</s> lxc lxc-templates xfsprogs gptfdisk e2fsprogs multipath wipe tcpdump curl openvpn <s>fsconsul</s>

and all dependencies...

will [[How_to_make_a_custom_ISO_image|build a custom ISO]] with that list...

About NFS

NFS is now working with AL, both as server and client, with the nfs-utils package.
However, using NFS as a client in some LXC does not seem to work yet, as shown below:

nfstest:~# mount -t nfs -o ro 192.168.1.149:/srv/boot/alpine /mnt
mount.nfs: Operation not permitted
mount: permission denied (are you root?)
nfstest:~# tail /var/log/messages 
Apr  4 10:05:59 nfstest daemon.notice rpc.statd[431]: Version 1.3.1 starting
Apr  4 10:05:59 nfstest daemon.warn rpc.statd[431]: Flags: TI-RPC 
Apr  4 10:05:59 nfstest daemon.warn rpc.statd[431]: Failed to read /var/lib/nfs/state: Address in use
Apr  4 10:05:59 nfstest daemon.notice rpc.statd[431]: Initializing NSM state
Apr  4 10:05:59 nfstest daemon.warn rpc.statd[431]: Failed to write NSM state number: Operation not permitted
Apr  4 10:05:59 nfstest daemon.warn rpc.statd[431]: Running as root.  chown /var/lib/nfs to choose different user
nfstest:~# ls -l /var/lib/nfs
total 12
-rw-r--r--    1 root     root             0 Nov 10 15:43 etab
-rw-r--r--    1 root     root             0 Nov 10 15:43 rmtab
drwx------    2 nobody   root          4096 Apr  4 10:05 sm
drwx------    2 nobody   root          4096 Apr  4 10:05 sm.bak
-rw-r--r--    1 root     root             4 Apr  4 10:05 state
-rw-r--r--    1 root     root             0 Nov 10 15:43 xtab

msg from ncopa """ dmesg should tell you that grsecurity tries to prevent you to do this.

grsecurity does not permit the syscall mount from within a chroot since that is a way to break out of a chroot. This affects lxc containers too.

I would recommend that you do the mounting from the lxc host in the container config with lxc.mount.entry or similar.

https://linuxcontainers.org/lxc/manpages/man5/lxc.container.conf.5.html#lbAR

If you still want to disable mount protection in grsecurity then you can do that with: echo 0 > /proc/sys/kernel/grsecurity/chroot_deny_mount """
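If one does go that route, the setting can be made persistent like any other sysctl (a sketch; it assumes the grsecurity sysctl interface is enabled and kernel.grsecurity.grsec_lock has not been set yet):
<pre>
# /etc/sysctl.d/00-lxc-nfs.conf -- allow mount() inside chroots/containers
kernel.grsecurity.chroot_deny_mount = 0
</pre>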

this is not working with

lxc.mount.entry=nfsserver:/srv/boot/alpine mnt nfs nosuid,intr 0 0

on the host machine with all nfs modules and helper software installed and loaded.

backend:~# lxc-start -n nfstest
lxc-start: conf.c: mount_entry: 2049 Invalid argument - failed to mount
'nfsserver:/srv/boot/alpine' on '/usr/lib/lxc/rootfs/mnt'
lxc-start: conf.c: lxc_setup: 4163 failed to setup the mount entries for
'nfstest'
lxc-start: start.c: do_start: 688 failed to setup the container
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 2
lxc-start: start.c: __lxc_start: 1080 failed to spawn 'nfstest'

Nor with

echo 0 > /proc/sys/kernel/grsecurity/chroot_deny_mount

on the host machine, with all NFS modules and helper software installed and loaded, which doesn't work either.

Finding a proper way to use NFS shares from AL LXC is an important topic, in order to be able to, for instance, load-balance web servers sharing content uploaded by users.

The next step will be to have HA for the NFS server itself (with only AL machines).

About NBD

NBD is now in edge/testing thanks to clandmeter.

we now use xnbd ^^

Also, we are still looking for the right solution to back up an NBD as a whole (versus by its content) while in use. dd|nc is the way used nowadays.
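A sketch of that dd|nc approach (host name, port and device are illustrative):
<pre>
# on the backup host: listen and write the stream to an image file
nc -l -p 9000 > nbd0-backup.img

# on the NBD server: stream the whole device over the network
dd if=/dev/nbd0 bs=1M | nc backuphost 9000
</pre>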

About consul

nothing yet but big hopes ^^
I'm lurking IRC about it ;)

We plan to use its dynamic DNS feature, its hosts listing, services inventory, events, k/v store...
and even semi-high availability for our PXE infrastructure, the consul leader being the active PXE server and the other consul servers being dormant PXE servers.
All config scripts are adapted to pull values out of the consul k/v datastore, based on profiles found in the various consul lists.
As the key for dhcpd and PXE boot is the hwaddr, it will become our UUID for the LAN and for consul too.
We are very excited by consul's capabilities!
Will be an avid tester!
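For the k/v part, plain curl against the HTTP API is enough for shell scripts; a hedged sketch (the key layout and values are made up for illustration):
<pre>
# store a value for a given profile
curl -X PUT -d '192.168.1.2' http://127.0.0.1:8500/v1/kv/profiles/pxe/tftp_server

# pull it back out of the k/v store (?raw returns just the value)
curl -s http://127.0.0.1:8500/v1/kv/profiles/pxe/tftp_server?raw
</pre>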

Open questions:

  1. What memory footprint is needed?
  2. What about dynamically adapting the quorum size?
  3. Are checks possible triggers? (see the sketch after this list)
    • consul watch -prefix type -name name /path/to/executable
    • consul event [options] -name name [payload]
  4. What best practice to store etc configurations?
    • http://code.hootsuite.com/distributed-configuration-management-and-dark-launching-using-consul/
    • http://agiletesting.blogspot.fr/2014/11/service-discovery-with-consul-and.html
    • envconsul
    • consul-template
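On question 3, a hedged example of wiring an event to a handler (the event name and handler path are illustrative):
<pre>
# run a handler every time a "deploy" event is fired
consul watch -type event -name deploy /usr/local/bin/on-deploy.sh

# fire the event from anywhere in the cluster
consul event -name deploy
</pre>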

log of experimentation at User_talk:Jch/consul

About CEPH

CEPH is supposed to solve the problem of high availability for the data stores, be they block devices (disks) or character devices (files).

The actual situation is not satisfactory.

We are very excited by CEPH's capabilities!
Will be an avid tester!

The Alpine kernel now has the RBD modules compiled.

We will build a CEPH cluster out of 3 Ubuntu LTS machines and use AL boxes as clients if possible (to launch qemu instances directly from RBD). If not, we will then attach RBDs and re-export them with xNBD inside a Debian KVM.

About Docker

Not a lot of information on the [[Docker]] page yet...

About E-MailRelay

E-MailRelay is a simple SMTP proxy and store-and-forward message transfer agent (MTA).
See http://emailrelay.sourceforge.net/

It compiles fine on AL.

apk update
apk add subversion alpine-sdk
svn checkout svn://svn.code.sf.net/p/emailrelay/code/trunk emailrelay-code
cd emailrelay-code
./configure --prefix=/usr
make
make install
apk del subversion alpine-sdk
apk add libgcc libstdc++
emailrelay --help

But I still have issues to properly build a package because it wants to install some stuff in <PREFIX>/libexec...
(And I also need to separate -doc, -test, -extra and optionally -gui into subpackages, I guess.)
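One thing worth trying for the libexec issue (a guess, not verified against the emailrelay build system): point libexecdir at a packageable location at configure time.
<pre>
./configure --prefix=/usr --libexecdir=/usr/lib/emailrelay
</pre>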

About X2Go

x2goserver

I did prepare x2goserver and nx-libs packages.

x2goclient

lrelease-qt4 x2goclient.pro
/bin/bash: lrelease-qt4: command not found
Makefile:39: recipe for target 'build_client' failed

Dunno where to find that...
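lrelease comes from Qt's Linguist tools; a hedged way to track down which AL package ships it (the cmd: search syntax only works where the repository index provides command metadata):
<pre>
apk search -v lrelease        # search package names and descriptions
apk search cmd:lrelease       # search by provided command, where supported
</pre>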

My laptop setup

AL 3.3 with +/etc/inittab+

tty5::respawn:/usr/bin/su - jch mcabber
tty6::respawn:/usr/bin/su - jch tmux
tty7::respawn:/usr/bin/su - jch startx

and +~/.xinitrc+

#!/bin/sh

exec chromium-browser --no-sandbox

About gvpe

gvpe
http://software.schmorp.de/pkg/gvpe.html

Plan to use it to interconnect about 5 sites.

About freeswitch

I have a request to run a SIP server for a couple of users.
I'm doing it in some LXC accessed through an OpenVPN from Jolla phones.

New rollout of our infra

This week we will upgrade some hardware and also redo all the infrastructure based on the fresh 3.3 series.

The compute nodes will run (on baremetal) with mdadm, openvswitch, qemu, consul, collectd, screen (maybe tmux) and openssh.

The storage nodes will run a CEPH cluster (unfortunately not based on AL).

Everything else will run in various KVM on the compute nodes.

First, let's check whether the needed packages are available in the basic ISOs. If yes, we will be able to run from USB keys. If not, we will need a sys install on the HDD...
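A quick way to check, sketched with apk on a box pointed at the release repositories (the package list below is abbreviated from the list at the top of this page):
<pre>
#!/bin/sh
# report which of the wanted packages are missing from the configured repositories
for p in tftp-hpa syslinux mkinitfs nfs-utils darkhttpd rsync openssh \
         openvswitch screen mdadm lvm2 collectd dnsmasq; do
  apk search -e "$p" | grep -q . || echo "missing: $p"
done
</pre>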