Linux Containers (LXC) provides container functionality similar to BSD jails, Linux-VServer and Solaris Zones. It gives the impression of virtualization, but containers share the kernel and resources with the host.
- 1 Installation
- 2 Upgrading from 2.x
- 3 Prepare network on host
- 4 Grsecurity restrictions
- 5 Create a guest
- 6 Starting/Stopping the guest
- 7 Connecting to the guest
- 8 Deleting a guest
- 9 Advanced
- 10 LXC 1.0 Additional information
- 11 See also
Install the required packages:
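The install step amounts to a single apk command (package name as shipped in Alpine's repositories):

```shell
# install the core LXC userspace tools
apk add lxc
```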
If you want to create containers other than alpine you will need lxc-templates:
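The templates live in a separate package:

```shell
# templates for distributions other than Alpine
apk add lxc-templates
```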
Upgrading from 2.x
Since Alpine 3.9 we ship LXC version 3.1. LXC 3.x includes major changes that may break your current setup, and it does NOT ship with the legacy container templates. Check your current container configs for includes pointing to files that no longer exist (previously shipped by the legacy templates). For example, if you use Alpine containers created with the alpine template, you will need to install:
apk add lxc-templates-legacy-alpine
Also make sure you convert your LXC config files to the new configuration key format introduced in LXC 2.1 (required since LXC 3.0):
lxc-update-config -c /var/lib/lxc/container-name/config
Make sure you have removed cgroup_enable from your kernel command line, as it will prevent cgroups from being mounted and cause the LXC service to fail.
Prepare network on host
Set up a bridge on the host. Example /etc/network/interfaces:
auto br0
iface br0 inet dhcp
	bridge-ports eth0
Create a network configuration template for the guests, /etc/lxc/default.conf:
lxc.net.0.type = veth
lxc.net.0.link = br0
lxc.net.0.flags = up
lxc.net.0.hwaddr = fe:xx:xx:xx:xx:xx
NOTE: since Alpine 3.8 we no longer ship grsecurity, and it should not be used in an LXC setup.
Some restrictions will be applied when using a grsecurity kernel (Alpine Linux default kernel). The most notable is the use of lxc-attach which will not be allowed because of GRKERNSEC_CHROOT_CAPS. To solve this we will have to disable this grsec restriction by creating a sysctl profile for lxc. Create the following file /etc/sysctl.d/10-lxc.conf and add:
kernel.grsecurity.chroot_caps = 0
There are a few other restrictions that can prevent proper container functionality. When things do not work as expected always check the kernel log with dmesg to see if grsec prevented things from happening.
Other possible restrictions are:
kernel.grsecurity.chroot_deny_chroot = 0
kernel.grsecurity.chroot_deny_mount = 0
kernel.grsecurity.chroot_deny_mknod = 0
kernel.grsecurity.chroot_deny_chmod = 0
When you have finished creating your new sysctl profile, apply it by restarting the sysctl service:
rc-service sysctl restart
NOTE: Always consult the Grsecurity documentation before applying these settings.
Create a guest
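Creating a guest with the alpine template looks like this (the container name guest1 is an example):

```shell
# create an Alpine container named guest1
lxc-create -n guest1 -t alpine
```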
This will create a /var/lib/lxc/guest1 directory with a config file and a rootfs directory.
Note that by default the alpine template does not enable the networking service; you will need to add it from inside the guest using lxc-console.
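Enabling networking in the guest can be done like this (service and runlevel names assume a standard Alpine/OpenRC guest):

```shell
# open a console into the guest
lxc-console -n guest1
# then, inside the guest:
rc-update add networking boot
rc-service networking start
```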
If running on x86_64 architecture, it is possible to create a 32bit guest:
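A sketch of the 32-bit create step; the --arch option passed through to the alpine template is an assumption, check your template's options:

```shell
# create a 32-bit (x86) Alpine guest on an x86_64 host
lxc-create -n guest1 -t alpine -- --arch x86
```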
In order to create a debian template container you will need to install some packages:
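The debian template bootstraps the container with debootstrap, so that needs to be present on the host:

```shell
# debootstrap is required by the debian template
apk add debootstrap
```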
Also you will need to turn off some grsecurity chroot options otherwise the debootstrap will fail:
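Disabling the restrictions at runtime looks like this; the exact set of options debootstrap trips over may vary, so check dmesg if it still fails:

```shell
# temporarily relax grsec chroot restrictions for debootstrap
sysctl -w kernel.grsecurity.chroot_deny_chmod=0
sysctl -w kernel.grsecurity.chroot_deny_mknod=0
```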
Please remember to turn them back on afterwards, or simply reboot the system.
Now you can run:
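The create step for a Debian guest (container name guest1 assumed):

```shell
# create a Debian container named guest1
lxc-create -n guest1 -t debian
```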
In order to create an ubuntu template container you will need to turn off some grsecurity chroot options:
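The same runtime sysctls as for the debian template apply here:

```shell
# temporarily relax grsec chroot restrictions
sysctl -w kernel.grsecurity.chroot_deny_chmod=0
sysctl -w kernel.grsecurity.chroot_deny_mknod=0
```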
Please remember to turn them back on afterwards, or simply reboot the system.
Now you can run the following (replace %MIRROR% with an actual mirror URL, for example http://us.archive.ubuntu.com/ubuntu/):
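A sketch of the create step; passing the mirror via the MIRROR environment variable is an assumption based on how the ubuntu template is commonly invoked:

```shell
# create an Ubuntu container, fetching packages from the given mirror
MIRROR=%MIRROR% lxc-create -n guest1 -t ubuntu
```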
Unprivileged LXC images (Debian / Ubuntu / CentOS etc.)
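These images are fetched with the download template (container name guest1 assumed):

```shell
# interactively pick an image from the public image server
lxc-create -n guest1 -t download
```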
and choose the Distribution, Release and Architecture.
To be able to login to a Debian container you currently need to:
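One way to get a working login, assuming the container has no root password set yet (the exact steps needed may differ per release):

```shell
# set a root password from the host so console login works
lxc-attach -n guest1 -- passwd
```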
You can also remove Systemd from the container.
Starting/Stopping the guest
Create a symlink to the /etc/init.d/lxc script for your guest.
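The symlink follows the lxc.<guestname> naming convention:

```shell
# per-guest OpenRC service for guest1
ln -s lxc /etc/init.d/lxc.guest1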
You can start your guest with:
Stop it with:
Make it autostart on boot up with:
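With the symlink in place, the start, stop and autostart steps above look like this:

```shell
rc-service lxc.guest1 start     # start the guest
rc-service lxc.guest1 stop      # stop the guest
rc-update add lxc.guest1        # autostart on boot
```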
You can also add to the container config:
lxc.start.auto = 1
to autostart containers by the lxc service only.
Connecting to the guest
By default sshd is not installed, so you will have to attach to the container or connect to the virtual console. This is done with:
Attach to container
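Attaching gives you a root shell inside the running guest:

```shell
# run a shell inside the guest1 container
lxc-attach -n guest1
```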
Just type exit to detach from the container again (do check the grsec notes above).
Connect to virtual console
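The virtual console is reached with:

```shell
# attach to the guest's console (getty must be running in the guest)
lxc-console -n guest1
```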
To disconnect from it, press Ctrl+a followed by q.
Deleting a guest
Make sure the guest is stopped and run:
This will erase everything, without asking any questions. It is equivalent to:
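The destroy command and its rough manual equivalent (guest name guest1 assumed):

```shell
# remove the container and everything under it
lxc-destroy -n guest1
# roughly equivalent to deleting the container directory by hand:
rm -rf /var/lib/lxc/guest1
```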
Creating a LXC container without modifying your network interfaces
The problem with bridging is that the interface you bridge gets replaced by the new bridge interface: if you bridge eth0, it is replaced by the br0 interface you create. It also means that the bridged interface must be placed into promiscuous mode to catch all the traffic that could be destined for the other side of the bridge, which again may not be what you want.
The solution is to create a dummy network interface, bridge that, and set up NAT so that traffic out of your bridge interface gets pushed through the interface of your choice.
So, first, let's create that dummy interface (thanks to ncopa for talking me out of macvlan and pointing out the dummy interface kernel module):
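Loading the module creates the interface:

```shell
# load the dummy network interface kernel module
modprobe dummy
```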
This will create a dummy interface called dummy0 on your host. To create this interface on every boot, add the module to /etc/modules (on Alpine, modules listed there are loaded at boot):
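```shell
# load the dummy module automatically on every boot
echo dummy >> /etc/modules
```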
Now we will create a bridge called br0
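Using iproute2 (apk add iproute2); brctl from bridge-utils works just as well:

```shell
# create the bridge and bring it up
ip link add name br0 type bridge
ip link set br0 up
```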
and then make that dummy interface one end of the bridge
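```shell
# enslave dummy0 to the bridge
ip link set dummy0 up
ip link set dummy0 master br0
```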
Next, let's give that bridge interface a reason to exist:
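Assigning the host side of the private subnet to the bridge; 192.168.1.1/24 matches the gateway used in the container config below:

```shell
# the host's address on the private subnet, used as the containers' gateway
ip addr add 192.168.1.1/24 dev br0
```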
Create a file for your container, let's say /etc/lxc/bridgenat.conf, with the following settings.
lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.link = br0
lxc.net.0.name = eth1
lxc.net.0.ipv4.address = 192.168.1.2/24 192.168.1.255
lxc.net.0.ipv4.gateway = 192.168.1.1
lxc.net.0.veth.pair = veth-if-0
and build your container with that file
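The build step, assuming the alpine template and the container name bridgenat:

```shell
# create the container using the network config file from above
lxc-create -n bridgenat -f /etc/lxc/bridgenat.conf -t alpine
```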
You should now be able to ping your container from your host, and your host from your container.
Your container needs to know where to push traffic that isn't within its subnet. To do so, we tell the container to route through the bridge interface br0. From inside the container run:
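Setting the default route inside the container (if lxc.net.0.ipv4.gateway already installed it, this step is a no-op):

```shell
# route everything outside 192.168.1.0/24 via the host's bridge address
ip route add default via 192.168.1.1
```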
The next step is to push the traffic coming from your private subnet over br0 out through your internet-facing interface, or any interface you choose.
We are messing with your IP tables here, so make sure these settings don't conflict with anything you may have already set up, obviously.
Assuming eth0 is your internet-facing network interface and br0 is the bridge you made earlier, we'd do this:
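A minimal NAT setup for that scenario; adapt the subnet and interface names to your own:

```shell
# allow the kernel to forward packets between interfaces
echo 1 > /proc/sys/net/ipv4/ip_forward
# masquerade traffic from the private subnet out through eth0
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE
```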
Now you should be able to route through your bridge interface to the internet facing interface of your host from your container, just like at home!
You could also have a dhcp server running on your host, and set it up to give IP addresses from your private subnet to any container that requests it, and then have one template for multiple alpine LXC containers, perfect for alpine development :)
Using static IP
If you're using a static IP, you need to configure it properly in the guest's /etc/network/interfaces. Continuing the above example, modify /var/lib/lxc/guest1/rootfs/etc/network/interfaces and change this:
#auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

to:

#auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
	address <lxc-container-ip>   # IP which the LXC container should use
	gateway <gateway-ip>         # IP of the gateway, usually the same as on the LXC host
	netmask <netmask>
mem and swap
In order for network to work on containers you need to set "Promiscuous Mode" to "Allow All" in VirtualBox settings for the network adapter.
Inside the container run:
LXC 1.0 Additional information
Some info regarding new features in LXC 1.0