User talk:Jch
== [[User_talk:Jch/How to automate KVM creation|How to automate KVM creation]] ==
How to emulate a USB stick with KVM.
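A minimal sketch of the idea (image name, sizes and the ISO are placeholders; the details live on the linked subpage):
<pre>
# create a raw image that will act as the emulated USB stick
qemu-img create -f raw usbstick.img 1G

# boot a KVM guest with that image attached as a USB mass-storage device
qemu-system-x86_64 -enable-kvm -m 512 \
  -drive if=none,id=stick,format=raw,file=usbstick.img \
  -usb -device usb-storage,drive=stick \
  -cdrom alpine.iso -boot d
</pre>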
== [[User_talk:Jch/Starting_AL_from_network|Starting_AL_from_network]] ==
How to set up a PXE environment.
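The details are on the linked subpage; as a rough sketch, a PXE environment can be served from a single dnsmasq instance (interface, addresses and paths below are assumptions):
<pre>
# /etc/dnsmasq.conf -- DHCP + TFTP for PXE clients
interface=eth0
dhcp-range=192.168.1.100,192.168.1.200,12h
# hand out the PXELINUX bootloader
dhcp-boot=pxelinux.0
# serve it over the built-in TFTP server
enable-tftp
tftp-root=/srv/tftp
</pre>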
== [[User_talk:Jch/Building_a_complete_infrastucture_with_AL|Building_a_complete_infrastucture_with_AL]] ==
<u>From first repo</u> (boot media):
AlpineLinux dhcpd tftp-hpa syslinux mkinitfs nfs-utils darkhttpd rsync openssh openvswitch screen qemu-system-x86_64 qemu-img gptfdisk parted mdadm lvm2 nbd xfsprogs e2fsprogs multipath '''consul''' dnsmasq vim collectd collectd-network git syslog-ng <s>envconsul</s> <s>consul-template</s> <s>xnbd</s> <s>ceph</s> lxc lxc-templates xfsprogs gptfdisk e2fsprogs multipath wipe tcpdump curl openvpn <s>fsconsul</s>
and all dependencies...
We will [[How_to_make_a_custom_ISO_image|build a custom ISO]] with that list...
== About NFS ==
NFS is now working with AL, both as server and client, with the nfs-utils package.<br/>
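A minimal sketch of that basic setup (the exported path, network and addresses are assumptions):
<pre>
# on the server
apk add nfs-utils
echo "/srv/boot/alpine 192.168.1.0/24(ro,no_subtree_check)" >> /etc/exports
rc-update add nfs
rc-service nfs start

# on the client
apk add nfs-utils
mount -t nfs -o ro 192.168.1.149:/srv/boot/alpine /mnt
</pre>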
However, using NFS as a client inside some LXC containers does not seem to work yet, as shown below:
<pre>
nfstest:~# mount -t nfs -o ro 192.168.1.149:/srv/boot/alpine /mnt
mount.nfs: Operation not permitted
mount: permission denied (are you root?)
nfstest:~# tail /var/log/messages
Apr 4 10:05:59 nfstest daemon.notice rpc.statd[431]: Version 1.3.1 starting
Apr 4 10:05:59 nfstest daemon.warn rpc.statd[431]: Flags: TI-RPC
Apr 4 10:05:59 nfstest daemon.warn rpc.statd[431]: Failed to read /var/lib/nfs/state: Address in use
Apr 4 10:05:59 nfstest daemon.notice rpc.statd[431]: Initializing NSM state
Apr 4 10:05:59 nfstest daemon.warn rpc.statd[431]: Failed to write NSM state number: Operation not permitted
Apr 4 10:05:59 nfstest daemon.warn rpc.statd[431]: Running as root. chown /var/lib/nfs to choose different user
nfstest:~# ls -l /var/lib/nfs
total 12
-rw-r--r-- 1 root root 0 Nov 10 15:43 etab
-rw-r--r-- 1 root root 0 Nov 10 15:43 rmtab
drwx------ 2 nobody root 4096 Apr 4 10:05 sm
drwx------ 2 nobody root 4096 Apr 4 10:05 sm.bak
-rw-r--r-- 1 root root 4 Apr 4 10:05 state
-rw-r--r-- 1 root root 0 Nov 10 15:43 xtab
</pre>
msg from ncopa """
dmesg should tell you that grsecurity tries to prevent you to do this.
grsecurity does not permit the syscall mount from within a chroot since
that is a way to break out of a chroot. This affects lxc containers too.
I would recommend that you do the mouting from the lxc host in the
container config with lxc.mount.entry or similar.
https://linuxcontainers.org/lxc/manpages/man5/lxc.container.conf.5.html#lbAR
If you still want disable mount protection in grsecurity then you
can do that with:
echo 0 > /proc/sys/kernel/grsecurity/chroot_deny_mount
"""
This does not work with
<pre>lxc.mount.entry=nfsserver:/srv/boot/alpine mnt nfs nosuid,intr 0 0</pre>
on the host machine, with all NFS modules and helper software installed and loaded:
<pre>
backend:~# lxc-start -n nfstest
lxc-start: conf.c: mount_entry: 2049 Invalid argument - failed to mount
'nfsserver:/srv/boot/alpine' on '/usr/lib/lxc/rootfs/mnt'
lxc-start: conf.c: lxc_setup: 4163 failed to setup the mount entries for
'nfstest'
lxc-start: start.c: do_start: 688 failed to setup the container
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 2
lxc-start: start.c: __lxc_start: 1080 failed to spawn 'nfstest'
</pre>
Nor with
<pre>
echo 0 > /proc/sys/kernel/grsecurity/chroot_deny_mount
</pre>
on the host machine.
Finding a proper way to use NFS shares from AL LXC containers is an important topic, in order to be able to, for instance, load-balance web servers sharing content uploaded by users.
The next step will be to have HA for the NFS server itself (with only AL machines).
== About NBD ==
NBD is now in edge/testing thanks to clandmeter.
We now use xnbd ^^
We are also still looking for the right solution to back up an NBD device as a whole (versus by its content) while in use. dd|nc is the way used nowadays, as sketched below.
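That dd|nc approach looks roughly like this (device, hostname and port are assumptions):
<pre>
# on the backup host: listen and write the incoming stream to a file
nc -l -p 9000 > nbd0-backup.img

# on the NBD server: stream the whole device over the network
dd if=/dev/nbd0 bs=1M | nc backup-host 9000
</pre>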
== About consul ==
Nothing yet but big hopes ^^<br/>
I'm lurking on IRC about it ;)
We plan to use its dynamic DNS feature, its host listing, service inventory, events, k/v store... <br/>
and even semi high-availability for our PXE infrastructure, the consul leader being the active PXE server and the other consul servers being dormant PXE servers.<br/>
All config scripts will be adapted to pull values out of the consul k/v datastore, based on profiles found in consul's various lists.<br/>
As the key for dhcpd and PXE boot is the hwaddr, it will become our uuid for the LAN and for consul too.<br/>
'''We are very excited by consul's capabilities!'''<br/>
We will be avid testers!
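As a first taste of the features we plan to rely on, a sketch of the DNS and k/v interfaces (the service name, key and value are assumptions):
<pre>
# resolve a registered service through consul's DNS interface (port 8600)
dig @127.0.0.1 -p 8600 pxe.service.consul

# store and retrieve a configuration value in the k/v store over the HTTP API
curl -X PUT -d '192.168.1.149' http://127.0.0.1:8500/v1/kv/profiles/pxe/nfsserver
curl http://127.0.0.1:8500/v1/kv/profiles/pxe/nfsserver?raw
</pre>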
'''Open questions''':
# What memory footprint is needed?
# What about dynamically adapting the quorum size?
# Are checks possible triggers?
#* <pre>consul watch -prefix type -name name /path/to/executable</pre>
#* <pre>consul event [options] -name name [payload]</pre>
# What is the best practice for storing /etc configurations? (see the sketch after this list)
#* http://code.hootsuite.com/distributed-configuration-management-and-dark-launching-using-consul/
#* http://agiletesting.blogspot.fr/2014/11/service-discovery-with-consul-and.html
#* envconsul
#* consul-template
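For that configuration question, a consul-template sketch of what we have in mind (the template path, key and reload command are assumptions, and flag names may differ between consul-template versions):
<pre>
# /etc/dnsmasq.ctmpl -- values pulled from the consul k/v store
tftp-root={{ key "profiles/pxe/tftp-root" }}

# render the file and reload the service whenever the key changes
consul-template -consul 127.0.0.1:8500 \
  -template "/etc/dnsmasq.ctmpl:/etc/dnsmasq.conf:rc-service dnsmasq restart"
</pre>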
Log of experimentation at [[User_talk:Jch/consul]].
== About CEPH ==
CEPH is supposed to solve the problem of high availability for the data stores, be it for block devices (disks) or files.
The actual situation is not satisfactory.
'''We are very excited by CEPH's capabilities!'''<br/>
We will be avid testers!
The Alpine kernel now has the RBD modules compiled.
We will build a CEPH cluster out of 3 Ubuntu LTS machines and use AL boxes as clients if possible (to launch qemu instances directly from RBD, as sketched below). If not, we will attach RBD devices and re-export them with xNBD inside a Debian KVM.
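A sketch of what launching a guest straight from RBD could look like (pool, image name and guest parameters are assumptions; Ceph authentication is left out):
<pre>
# boot a KVM guest directly from a Ceph RBD image via qemu's rbd: driver
qemu-system-x86_64 -enable-kvm -m 1024 \
  -drive file=rbd:rbd/vm-disk,format=raw,if=virtio
</pre>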
== About Docker ==
Not a lot of information on the [[Docker]] page yet...
== About E-MailRelay ==
E-MailRelay is a simple SMTP proxy and store-and-forward message transfer agent (MTA).<br/>
See http://emailrelay.sourceforge.net/
It compiles fine on AL.
<pre>
apk update
apk add subversion alpine-sdk
svn checkout svn://svn.code.sf.net/p/emailrelay/code/trunk emailrelay-code
cd emailrelay-code
./configure --prefix=/usr
make
make install
apk del subversion alpine-sdk
apk add libgcc libstdc++
emailrelay --help
</pre>
But I still have issues properly building a package because it wants to install some files in <PREFIX>/libexec...<br/>
(And I also need to separate -doc, -test, -extra and optionally -gui into subpackages, I guess.)
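A possible way around the libexec issue might be autoconf's standard --libexecdir switch, redirecting those files somewhere more package-friendly. A sketch under that assumption (not a tested abuild recipe; the target directory is a guess):
<pre>
./configure --prefix=/usr --libexecdir=/usr/lib/emailrelay
make
make DESTDIR="$pkgdir" install
</pre>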
== About X2Go ==
=== x2goserver ===
I did prepare x2goserver and nx-libs packages.
=== x2goclient ===
<pre>
lrelease-qt4 x2goclient.pro
/bin/bash: lrelease-qt4: command not found
Makefile:39: recipe for target 'build_client' failed
</pre>
Dunno where to find that...
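lrelease is part of Qt's linguist tools, so it presumably comes with the Qt development package; a guess at a workaround (the package name, binary path and symlink are assumptions):
<pre>
apk add qt-dev
# the x2goclient Makefile expects the binary under its Qt4 name
ln -s /usr/bin/lrelease /usr/bin/lrelease-qt4
lrelease-qt4 x2goclient.pro
</pre>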
== My laptop setup ==
AL 3.3 with +/etc/inittab+ <pre>
tty5::respawn:/usr/bin/su - jch mcabber
tty6::respawn:/usr/bin/su - jch tmux
tty7::respawn:/usr/bin/su - jch startx
</pre> and +~/.xinitrc+ <pre>
#!/bin/sh
exec chromium-browser --no-sandbox
</pre>
== About gvpe ==
{{pkg|gvpe}}<br>
http://software.schmorp.de/pkg/gvpe.html
We plan to use it to interconnect about 5 sites.
== About freeswitch ==
I have a request to run a SIP server for a couple of users.<br/>
I'm doing it in an LXC container accessed through an OpenVPN tunnel from Jolla phones.
== New rollout of our infra ==
This week, we will upgrade some hardware and also redo the whole infrastructure based on the fresh 3.3 series.
The compute nodes will run (on bare metal) mdadm, openvswitch, qemu, consul, collectd, screen (maybe tmux) and openssh.
The storage nodes will run a CEPH cluster (unfortunately not based on AL).
Everything else will run in various KVMs on the compute nodes.
First, let's check whether the needed packages are available in the basic ISOs, as sketched below. If so, we will be able to run from USB keys. If not, we will need a sys install on the HDD...
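A quick way to check could be to query the configured repositories for each package on a freshly booted box (the list below is abbreviated and the exact package names are assumptions):
<pre>
for p in mdadm openvswitch qemu-system-x86_64 consul collectd screen openssh; do
    apk search -x "$p" | grep -q . || echo "missing: $p"
done
</pre>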