User talk:Jch
How to automate KVM creation
The goal is not only to have a working install, but to have it at the post-setup-alpine stage without human intervention... This is the first stage of a work in progress...
I want to pass a block device and a name as parameters. The block device could be an image file, an LV, an NBD, an HDD, a RAID array, whatever.
Everything else should be fully automatic according to some config file (stating the http-proxy, the time server, the log server, ...).
Then I will just run the script, watch my dhcp logs to discover the new IP assigned (that's why the name is a parameter), then log in with ssh without password to customize it further, but at a high level only (it will be a robot and not me in fact).
I guess it would be something like emulating a boot from a USB key with a specific overlay already on the key...
then run setup-disk with proper parameters on the command line to avoid the interactive process (like setup-alpine does)...
I think this could be done with a couple of scripts put in /etc/local.d/, the last .stop one deleting all of them so the next reboot starts clean.
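As a minimal sketch of the clean-up part (the file name is hypothetical), the last .stop script could simply remove the provisioning scripts:

#!/bin/sh
# /etc/local.d/zz-cleanup.stop (hypothetical name)
# remove the one-shot provisioning scripts so the next boot starts clean
rm -f /etc/local.d/*.start /etc/local.d/zz-cleanup.stop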
Let's start easy ;)
How to prepare an img file to emulate a USB key
First, a working example done in the console (accessed through ssh).
Will build a script from it...
First, let's prepare some block device (here an image file, but it could be something else)
apk add qemu-img
qemu-img create -f raw usbkey.img 512M
apk del qemu-img
T="usbkey.img"
Next, let's install AL on this $T
apk add multipath-tools syslinux dosfstools
fdisk $T
kpartx -av $T
mkdosfs -F32 /dev/mapper/loop1p1
dd if=/usr/share/syslinux/mbr.bin of=/dev/mapper/loop1
syslinux /dev/mapper/loop1p1
mkdir key
mount -t vfat /dev/mapper/loop1p1 key
wget http://wiki.alpinelinux.org/cgi-bin/dl.cgi/v3.1/releases/x86_64/alpine-mini-3.1.1-x86_64.iso
mkdir cdrom
mount alpine-mini-3.1.1-x86_64.iso cdrom
cd cdrom
cp -a .alpine-release * ../key/
cd ..
umount key
umount cdrom
kpartx -d $T
apk del multipath-tools syslinux dosfstools
rm alpine-mini-3.1.1-x86_64.iso
This block device may now be used to boot some KVM, for instance:
screen -d -m -S KVM-builder \
qemu-system-x86_64 -name KVM-usb -enable-kvm -cpu qemu64 -curses \
-device nec-usb-xhci -drive if=none,id=usbstick,file=$T -device usb-storage,drive=usbstick
This is working fine. The problem is that when adding an HDD to the lot, qemu tries to boot from the HDD and does not even try to boot from the USB key. Enabling the boot menu lets one access the emulated BIOS, which allows selecting the USB device to boot interactively, but this breaks the goal of a fully automated boot :( The stanza is, for instance:
screen -d -m -S KVM-builder \
qemu-system-x86_64 -name KVM-usb -enable-kvm -cpu qemu64 -curses \
-device nec-usb-xhci -drive if=none,id=usbstick,file=$T -device usb-storage,drive=usbstick \
-drive file=$T2 -boot menu=on
qemu-doc states that very clearly:
> -boot [order=drives][,once=drives][,menu=on|off][,splash=sp_name][,splash-time=sp_time][,reboot-timeout=rb_timeout][,strict=on|off]
> Specify boot order drives as a string of drive letters. Valid drive letters depend on the target architecture. The x86 PC uses: a, b (floppy 1 and 2), c (first hard disk), d (first CD-ROM), n-p (Etherboot from network adapter 1-4), hard disk boot is the default
Starting AL from network
As it does not seem possible to start qemu with a virtual USB key *and* a virtual HDD attached to the VM, let's try something different: start AL from the network and mount the HDD later on...
Usually this kind of setup needs
- a DHCP server to get an IP address and the location of the TFTP server
- a TFTP server to download the kernel and the root file system to boot from
- a NFS server or a HTTP one to get the overlay used to configure the machine
- a NFS server to share files with others
- a NBD server to get its own block devices as storage
- a machine where to prepare initramfs
First, let's check what is available in AL and what is not...
- dhcpcd-6.6.7-r0
- tftp-hpa-5.2-r1
- nfs-utils-1.3.1-r2
- darkhttpd-1.10-r1
- nbd-3-10-r0
PXE_boot
We are trying to do something as in PXE_boot.
We did it on a separate machine for each service. It forced us to deeply understand all interactions between processes.
In the current state, we
umount /media/alpine
as the last step of the boot process, and we are running with no ties.
dhcpd
192.168.1.1
With the dhcp package from the repo. Nothing special.
filename "pxelinux.0"; next-server 192.168.1.2;
and
# Disable RFC 2136 dynamic DNS updates.
ddns-update-style none;

# Define actions to take when leases are committed, released, or expired to
# accomplish dynamic DNS updates to djbdns. This does not use the RFC 2136
# update mechanism, because djbdns does not support it. However, it
# accomplishes the same thing.
# syntax "execute(cmd, arg, ...)"
### need to check if the two "on EVENT" must be nested or in sequence...
on commit {
    execute ("/usr/local/bin/dns-update-djb", "commit",
             lcase (option host-name),
             config-option domain-name,
             binary-to-ascii (10, 8, ".", leased-address));
    on release or expiry {
        execute ("/usr/local/bin/dns-update-djb", "release",
                 binary-to-ascii (10, 8, ".", leased-address));
    }
}
with a custom /usr/local/bin/dns-update-djb script largely inspired by https://sites.google.com/site/dmoulding/dns-update-djb but adapted for a distant tinydns server and to the AL way.
tftp
192.168.1.2
tftp-hpa configured to serve some SYSLINUX files.
The config is in /etc/conf.d/in.tftpd
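For reference, a sketch of what /etc/conf.d/in.tftpd may contain (the variable names are an assumption based on the tftp-hpa init script; check the file actually installed):

INTFTPD_PATH="/var/tftpboot/"
INTFTPD_OPTS="-R 4096:32767 -s ${INTFTPD_PATH}"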
Then issue:
rc-update add in.tftpd
rc-service in.tftpd start
We serve from /var/tftpboot.
We had to temporarily install the syslinux apk to get pxelinux.0 and the other libs needed.
We prepared a "pxerd" initramfs file with virtio_net.ko, dhcp and nfs included, and made sure loop and squashfs are included as well.
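A sketch of how such an initramfs can be produced with mkinitfs (the feature names and output path are assumptions; check the features shipped with your mkinitfs version for the exact names):

apk add mkinitfs
# build an initramfs with virtio networking, DHCP/NFS root and squashfs/loop support
mkinitfs -F "base network virtio nfs squashfs" -o /var/tftpboot/alpine/pxerd $(ls /lib/modules)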
pxelinux.cfg/default looks like
PROMPT 0
TIMEOUT 3
default alpine

LABEL alpine
LINUX alpine/vmlinuz-grsec
INITRD alpine/pxerd
APPEND ip=dhcp alpine_dev=nfs:192.168.1.3:/srv/boot/alpine modloop=/boot/grsec.modloop.squashfs nomodeset quiet apkovl=http://192.168.1.4/localhost.apkovl.tar.gz
#APPEND modloop=http:/192.168.1.4/grsec.modloop.squashfs
#APPEND apkovl=http://192.168.1.4/localhost.apkovl.tar.gz # including the modloop hack
#APPEND alpine_repo=http://repo-url
Modules are loaded
/ # lsmod
Module                  Size  Used by    Not tainted
nfsv3                  22784  1
nfs                   144376  2 nfsv3
lockd                  71917  2 nfsv3,nfs
sunrpc                225574  6 nfsv3,nfs,lockd
af_packet              28735  0
sr_mod                 13487  0
cdrom                  40424  1 sr_mod
pata_acpi               3326  0
ata_piix               25601  0
ata_generic             3554  0
libata                181955  3 pata_acpi,ata_piix,ata_generic
virtio_net             19684  0
scsi_mod              113710  2 sr_mod,libata
virtio_pci              6485  0
virtio                  4933  2 virtio_net,virtio_pci
virtio_ring             9161  2 virtio_net,virtio_pci
squashfs               25893  1
loop                   18243  2
Network is up
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 52:54:33:B0:C2:D2
          inet addr:192.168.1.108  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:322 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:20514 (20.0 KiB)  TX bytes:684 (684.0 B)
but modloop does not load. This patch fixes the issue (hoping to see it mainstream soon):
localhost:~# diff /etc/init.d/modloop modloop.new
--- /etc/init.d/modloop
+++ modloop.new
@@ -32,7 +32,7 @@
     local search_dev="$1" fstab="$2"
     local dev mnt fs mntopts chk
     case "$search_dev" in
-        UUID=*|LABEL=*|/dev/*);;
+        UUID=*|LABEL=*|/dev/*|nfs);;
         *) search_dev=/dev/$search_dev;;
     esac
     local search_real_dev=$(resolve_dev $search_dev)
@@ -49,6 +49,10 @@
             fi
         done
     done
+    if [ "$fs" = "$search_dev" ]; then
+        echo "$mnt"
+        return
+    fi
     done < $fstab 2>/dev/null
 }
References
http://www.syslinux.org/wiki/index.php/PXELINUX
nfs
192.168.1.3
see http://wiki.alpinelinux.org/wiki/User_talk:Jch#NFS_bug_study
It is now working with http://dev.alpinelinux.org/~clandmeter/rpcbind-0.2.3_rc2-r0.apk
We serve the content of a USB key (ISO) read-only as
/srv/boot/alpine *(ro,no_root_squash,no_subtree_check)
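For completeness, a sketch of enabling the NFS server with that export (assuming the nfs-utils package and its default OpenRC service name):

apk add nfs-utils
echo "/srv/boot/alpine *(ro,no_root_squash,no_subtree_check)" >> /etc/exports
rc-update add nfs
rc-service nfs start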
http
192.168.1.4
With package Darkhttpd from repo serving from /var/tftpboot/ to serve files needed to boot (kernel, rootfs, apkovl.tar.gz)
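A sketch of running darkhttpd by hand against that directory (the port and flags are just an example; the packaged init script can be used instead):

apk add darkhttpd
# serve the boot files (kernel, rootfs, apkovl.tar.gz) over HTTP on port 80
darkhttpd /var/tftpboot/ --port 80 --daemon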
nbd
192.168.1.5
I really would like to have xnbd-server in AL. nbd-3.1.0 was just added to the edge/testing repo; I need to try it in a real situation...
For now, we have a qcow2 debian image added to the apkovl with lbu add; lbu ci.
This image is used to launch a first KVM with /dev/mdX as second drive.
In turn, inside the KVM, vdb is used to define a lvm2 volume.
The LV are published with xnbd-server.
Later on, the same KVM will be able to connect to RBD device and re-publish it as NBD.
xnbd-server allows live migration of block devices while they are in use, and has a powerful proxy mode.
All other KVMs run from filesystems accessed through NBD from such a SAN. Even other SANs.
As soon as those KVM-NBD are up, they may be used to launch others or to provide datastores.
We put that image on every USB key we use along with mdadm and OpenVSwitch (and collectd).
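As an illustration, a KVM host can attach one of those published LVs and boot a guest from it roughly like this (server address, port and device names are examples):

modprobe nbd
# attach the block device exported by the SAN (example address and port)
nbd-client 192.168.1.5 8992 /dev/nbd0
# boot a guest straight from the attached block device
qemu-system-x86_64 -name KVM-guest -enable-kvm -cpu qemu64 -curses -drive file=/dev/nbd0,if=virtio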
dns
192.168.1.6
tinydns from repo with split-dns config.
Building a complete infrastructure with AL
I'm doing it. It's for real! That's my daily job at present ^^
I'm building a full private cloud bootstrapped with only an AlpineLinux USB key for each physical machine. But the next ones will be able to boot from the network; not even USB keys will be needed. As a matter of fact, we used more than one physical USB key because we didn't start from scratch but did a live migration from Debian to Alpine for most of the services and machines...
If there is some feedback, I may develop config files and so on ;)
As I started from scratch and OpenVSwitch was not yet available in Alpine at that time, it took me a while to build everything. But to reproduce it would be a piece of cake!
We use qemu-kvm for KVM. But I guess one may use whatever Virtual Machine technology one likes.
This is the presentation of a use case, not a HOWTO. And it's still a work in progress...
Network
Firewall
We put a dedicated physical machine on each link between our LAN and other networks. It just runs iptables and some packet-accounting metrology.
Router
A physical machine connected to our LAN and other networks (through a firewall). A static routing table does the trick.
Switches
All physical machines run OpenVSwitch, reproducing virtually all the physical switches we have, plus some virtual-only ones.
VPN
All physical machines run OpenVPN as a client to as many switches as are defined, minus the physical interfaces of the machine. There is an OpenVPN server somewhere, running in a KVM connected to the needed switches.
Storage
SAN
On each physical machine, a couple of HDDs are assembled in RAID1 with mdadm. This RAID array is passed as a parameter to a KVM, which in turn mounts it as a physical volume for LVM. The created LVs are published as NBDs with xnbd-server. For the time being, this KVM runs Debian 7.8 as xnbd is not in Alpine (yet?).
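A rough sketch of that chain (device names, volume group and port are examples only):

# on the physical machine: assemble the RAID1 array that is passed to the SAN KVM
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
# inside the SAN KVM, the array shows up as vdb and becomes an LVM physical volume
pvcreate /dev/vdb
vgcreate vg0 /dev/vdb
lvcreate -L 20G -n guest1 vg0
# publish the LV as a network block device
xnbd-server --target --lport 8992 /dev/vg0/guest1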
The SAN also connects to the CEPH cluster as a client and publishes the reached RBDs as NBDs with xnbd-server. For the time being, this KVM runs Debian 7.8 as neither xnbd nor RBD are in Alpine (yet?).
NAS
Some KVMs mount NBDs as local drives and publish some directories as NFS shares.
We now have nfs and nbd in AL.
CEPH
KVMs with physical HDDs as parameters are used to build the OSDs and MONs needed to operate a CEPH cluster. One KVM is the "console" to drive it from a single point of presence (useful but not "needed"). For the time being, those KVMs run Debian 7.8 as CEPH and RBD are not in Alpine (yet?).
Low-level services
No service at all runs in the AL on bare metal. All run in some KVM connected to the needed switches by means of the OpenVSwitches. The apkovl on the USB keys contains only the scripts to launch KVMs and one image file to launch the first SAN. Other KVMs are launched from LVs in the SAN.
dhcp
Exactly two KVMs, stored in different SANs, primary and secondary in failover mode, run dhcpd from the repo.
We just have to configure it properly.
We still have to test whether dhcpd may run in an LXC instead of a KVM.
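A sketch of the failover stanza on the primary (addresses, timers and ranges are placeholders; the secondary gets a mirrored declaration without the split statement):

failover peer "dhcp-failover" {
    primary;
    address 192.168.1.10;        # this server (placeholder)
    port 647;
    peer address 192.168.1.11;   # the other server (placeholder)
    peer port 647;
    max-response-delay 60;
    max-unacked-updates 10;
    mclt 3600;
    split 128;
    load balance max seconds 3;
}

subnet 192.168.1.0 netmask 255.255.255.0 {
    pool {
        failover peer "dhcp-failover";
        range 192.168.1.100 192.168.1.200;
    }
}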
DNS
tinydns from repo with split-dns config.
Resolver
With dnscache from repo.
Those KVMs have manually assigned IP addresses in the LAN and do know a gateway to the Internet.
They use themselves as resolver...
They know the manually assigned LAN IP address of the main DNS server of selected domains (for the split-DNS configuration).
PXEboot
kernel and initrd files in tftp server.
copy of usb content in nfs server.
apkovl files in darkhttpd server.
Time server
The router (which has access to the Internet) uses ntpd (or similar) from the repo, acting as a client to the WAN and a server to the LAN.
syslog
With syslog-ng from the repo, we receive the logs from all machines, be they physical or virtual.
It's the only place that needs logrotate from the repo.
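A minimal sketch of the receiving side (the port and file layout are arbitrary choices):

# /etc/syslog-ng/syslog-ng.conf (excerpt) -- collect remote logs per host
source s_net { udp(ip(0.0.0.0) port(514)); };
destination d_remote { file("/var/log/remote/${HOST}/messages"); };
log { source(s_net); destination(d_remote); };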
HTTP proxy/cache
The web proxy/cache squid, from repo, uses a NBD as cache. It has a link to the internet to forward requests and one to the LAN.
Thanks to it, no machine, be it physical or virtual (they are all connected to the LAN), needs a published default gateway. And all machines are able to install/upgrade packages or to see the WWW as clients.
We point all AL boxes to this KVM with setup-proxy.
Monitoring
Shinken from source in some LXC with barely more than the python package installed.
Metrology
Collectd (one LXC as server, all other machines, be they physical or virtual, as clients) with collectd-network from the repo.
A couple of lines in the CGP config file are enough for now.
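The relevant collectd bits look roughly like this (the server address is a placeholder):

# on every client, /etc/collectd/collectd.conf (excerpt)
LoadPlugin network
<Plugin network>
    Server "192.168.1.20" "25826"
</Plugin>

# on the server LXC
LoadPlugin network
<Plugin network>
    Listen "0.0.0.0" "25826"
</Plugin>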
Backups
with common tools: rsync, tar, netcat, bzip2, openssh, cron, dd
LDAP
openldap with openldap-back-hdb, both from repo.
Database
with mariaDB from repo
High-level services
in LXC AL whenever possible.
in LXC Debian as second choice
in KVM otherwise.
x2goserver
I did package nx-libs and x2goserver. I'm waiting for the packages to be included in edge/testing. They are already being used for single app access. Next step is full desktop but we are not sure if AL is the right choice for full desktop usage for our customers...
Unfortunately, x2goclient pops up "kex error : did not find one of algos diffie-hellman-group1-sha1 in list curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1 for kex algos"; one needs to specify diffie-hellman-group1-sha1 in sshd_config. Luckily a fix exists and my business partner is looking for a way to enhance its security upstream.
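Concretely, the workaround is to extend the kex list in sshd_config on the x2go server with the legacy algorithm (the list below is just the defaults reported in the error message plus diffie-hellman-group1-sha1):

# /etc/ssh/sshd_config (excerpt)
KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1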
ejabberd
With ejabberd from the edge/testing repo. I migrated the mnesia DB from an old Debian squeeze by just copying the files and changing ownership in an AL LXC. I just had to disable mod_pubsub to have it run properly. Authentication is done with openLDAP. I now plan to migrate a very very old jabberd (11 years I guess) running on a Debian etch to it, if I find a way to keep users' passwords and rosters... I also would like to use it as a gateway to IRC to follow the #alpine, #alpine-devel and #x2go channels ;) Some other ejabberd features are interesting to my organisation and we will experiment more in depth, namely mod_sip, mod_stun, mod_proxy65...
redmine
in a brand new LXC with edge/main and edge/testing repos
mostly following the Redmine page
I use a mariaDB server on another host, where I created the user and pushed the SQL dump from a running redmine 3.0.0 instance
apk update
apk upgrade
reboot
setup-timezone
apk add redmine
apk add ruby-unicorn
cp /etc/unicorn/redmine.conf.rb.sample /etc/unicorn/redmine.conf.rb
vi /etc/conf.d/unicorn
vi /etc/redmine/database.yml
apk add sudo
apk add ruby-mysql2
apk add ruby-yard
apk add tzdata
cd /usr/share/webapps/redmine
sudo -u redmine rake generate_secret_token
sudo -u redmine RAILS_ENV=production rake db:migrate
LDAP - openldap from repo
SMTP in - postfix-ldap from repo
Antispam - spamassassin from repo
Antivirus - clamav from repo
SMTP store - postfix-ldap from repo
mail NAS - nfs-utils from repo
IMAP - dovecot-ldap from repo
SMTP relay - emailrelay from source
SMTP out - postfix from repo
Webmail - squirrelmail from source
webhosting
Front-end - nginx
Back-end static - darkhttpd
Back-end dynamic - php-fpm
File server - nfs, sftp (based on ssh-ldap)
About NFS
NFS is now working with AL. Both as server and client with the nfs-utils package.
However, using NFS as a client in some LXC does not seem to work yet, as shown below
nfstest:~# mount -t nfs -o ro 192.168.1.149:/srv/boot/alpine /mnt
mount.nfs: Operation not permitted
mount: permission denied (are you root?)
nfstest:~# tail /var/log/messages
Apr  4 10:05:59 nfstest daemon.notice rpc.statd[431]: Version 1.3.1 starting
Apr  4 10:05:59 nfstest daemon.warn rpc.statd[431]: Flags: TI-RPC
Apr  4 10:05:59 nfstest daemon.warn rpc.statd[431]: Failed to read /var/lib/nfs/state: Address in use
Apr  4 10:05:59 nfstest daemon.notice rpc.statd[431]: Initializing NSM state
Apr  4 10:05:59 nfstest daemon.warn rpc.statd[431]: Failed to write NSM state number: Operation not permitted
Apr  4 10:05:59 nfstest daemon.warn rpc.statd[431]: Running as root. chown /var/lib/nfs to choose different user
nfstest:~# ls -l /var/lib/nfs
total 12
-rw-r--r--    1 root     root             0 Nov 10 15:43 etab
-rw-r--r--    1 root     root             0 Nov 10 15:43 rmtab
drwx------    2 nobody   root          4096 Apr  4 10:05 sm
drwx------    2 nobody   root          4096 Apr  4 10:05 sm.bak
-rw-r--r--    1 root     root             4 Apr  4 10:05 state
-rw-r--r--    1 root     root             0 Nov 10 15:43 xtab
msg from ncopa: """dmesg should tell you that grsecurity tries to prevent you from doing this.
grsecurity does not permit the syscall mount from within a chroot since that is a way to break out of a chroot. This affects lxc containers too.
I would recommend that you do the mounting from the lxc host in the container config with lxc.mount.entry or similar.
https://linuxcontainers.org/lxc/manpages/man5/lxc.container.conf.5.html#lbAR
If you still want to disable mount protection in grsecurity then you can do that with: echo 0 > /proc/sys/kernel/grsecurity/chroot_deny_mount"""
this is not working with
lxc.mount.entry=nfsserver:/srv/boot/alpine mnt nfs nosuid,intr 0 0
on the host machine with all nfs modules and helper software installed and loaded.
backend:~# lxc-start -n nfstest
lxc-start: conf.c: mount_entry: 2049 Invalid argument - failed to mount 'nfsserver:/srv/boot/alpine' on '/usr/lib/lxc/rootfs/mnt'
lxc-start: conf.c: lxc_setup: 4163 failed to setup the mount entries for 'nfstest'
lxc-start: start.c: do_start: 688 failed to setup the container
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 2
lxc-start: start.c: __lxc_start: 1080 failed to spawn 'nfstest'
Nor with
echo 0 > /proc/sys/kernel/grsecurity/chroot_deny_mount
on the host machine with all nfs modules and helper software installed and loaded.
Finding a proper way to use NFS shares from an AL LXC is an important topic, in order to be able to, for instance, load-balance web servers sharing content uploaded by users.
Next step will be to have HA for the NFS server itself (with only AL machines).
About NBD
NBD is now in edge/testing thanks to clandmeter.
I cannot test it properly at the moment because all the machines are busy in prod, and this package allows newstyle only. I'm waiting for my new lab machine...
We still miss xnbd for its proxy feature allowing live migration.
We are very excited by xnbd's capacities!
Will be an avid tester!
Also, we are still looking for the right solution to back up an NBD as a whole (versus by its content) while in use. dd | nc is the way used nowadays.
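A rough sketch of that dd | nc approach (host name, port and LV path are examples):

# on the backup host: listen and compress the incoming image
nc -l -p 9000 | bzip2 > guest1.img.bz2
# on the SAN: stream the block device over the network
dd if=/dev/vg0/guest1 bs=1M | nc backuphost 9000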
New lab machine
Very soon, I will receive a brand new lab machine.
I plan to use lxc in qemu (KVM) in qemu (yes, twice!) to simulate a rack of servers running AL.
There will be 8 first level KVMs. A firewall, a router, storage nodes and compute nodes.
OpenVSwitch (OVS) will be used to simulate the networks (isp, internet, lan, storage, wan, ipmi).
The first-level KVMs will receive block devices (BD) as logical volumes (LV) in LVM2 on top of an mdadm raid array composed of the physical hard disk drives.
They will assemble the received BDs with mdadm and pass the raw raid as a single BD to the second-level SAN KVMs. Those SANs will use LVM2 to publish LVs as NBDs on the OVS "lan".
Some second-level KVMs will mount NBDs to expose NFS shares.
Others will mount NBDs and NFS for real data access with containers (LXC) and expose services on the OVS "wan" or "lan".
The first second-level KVM to be launched will be a virtual laptop booted from a virtual USB stick. This particular machine will offer a PXEboot environment to the OVS "lan".
The storage and compute nodes will be launched with PXE on the OVS "lan" but will be able to run totally from RAM with no strings attached to the boot devices (for instance the initial NFS share).
As soon as 1 SAN and 1 compute node are available, the PXEboot server will reproduce itself from the virtual laptop USB stick to the compute node, using the storage node to store the information about the setup; then live-migrate (keeping the status of running machines).
eth0 is always connected to OVS "lan", except on the firewall (connected to OVS "internet" and "isp").
The router is connected to all OVS but "isp" and "storage".
The storage nodes are connected to OVS "storage".
The compute nodes are connected to OVS "wan".
The DHCP lease is offered with no time limit, after an absence check, on OVS "lan".
As a matter of fact, the only difference between a first-level and a second-level KVM is sda for the first and vda for the second.
All machines run a consul instance.
The PXEboot server is a fixed, known consul server guaranteed to be present (otherwise boot doesn't even exist!).
On the first N compute nodes launched, a consul server KVM will be started (configured to reach a quorum of N) to replace the standard consul client.
As the state of a running cluster is always kept in the PXEboot server, this capacity is present in all consul servers but active only on the actual consul leader.
We need to link or maintain the PXE configuration and bootstrap (including relevant apkovl) files in the consul key/value datastore to benefit from its resilience.
We need to hack lbu commit to push the resulting apkovl to all consul servers (as they are also standby copies of the consul leader).
Each consul election needs to enforce the consul leader as the active PXE server.
In the real rack, at this stage, we just switch machines on, connected to the right switches, after checking that they will boot through PXE on the first NIC (eth0).
In our simulator, we can manually start a KVM as a fake physical machine (sda) or have a script on the real physical lab machine driving the life cycle of those KVMs.
About consul
nothing yet but big hopes ^^
I'm lurking IRC about it ;)
We plan to use its dynamic DNS feature, its hosts listing, services inventory, events, k/v store...
and even semi-high-availability for our PXE infrastructure, the consul leader being the active PXE server and the other consul servers being dormant PXE servers.
All config scripts will be adapted to pull values out of the consul k/v datastore based on profiles found in consul's various lists.
As the key for dhcpd and PXEboot is the hwaddr, it will become our uuid for the LAN and consul too.
We are very excited by consul's capacities!
Will be an avid tester!
Open questions
- What memory footprint is needed?
- What about dynamically adapting quorum size?
- Are checks possible triggers?
consul watch -prefix type -name name /path/to/executable
consul event [options] -name name [payload]
- What is the best practice to store etc configurations?
consul_template
seems a very interesting feature!
Hope to see it packaged as soon as consul is ;)
About CEPH
CEPH is supposed to solve the problem of high availability for the data stores, be they block devices (disks) or character devices (files).
The actual situation is not satisfactory.
We are very excited by CEPH's capacities!
Will be an avid tester!
About Docker
not a lot of information on the Docker page yet ...