From Alpine Linux
Revision as of 18:37, 8 March 2021 by FanFlan44 (talk | contribs) (Since 3.13.0, special features of qemu-system-x86-64 are split into seperate packages, covered by the metapackage qemu-modules. https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.13.0#QEMU_packages_split.)

KVM is a free and open source virtualization solution implemented as a kernel module. Although the whole stack is often referred to simply as KVM, the userspace hypervisor is QEMU. QEMU runs in user space but can integrate with KVM, gaining better performance by leveraging the hardware from kernel space. QEMU can virtualize x86, PowerPC, and S390 guests, amongst others. Libvirt is a management framework that integrates with QEMU/KVM, LXC, Xen and others.


The following commands install libvirt as well as QEMU with emulation for x86_64, and qemu-img, a necessary component for using disk formats such as qcow2; without qemu-img, only raw disks are available. qemu-img can also convert images between several formats, such as vhdx and vmdk. The commands also install the metapackage qemu-modules, which pulls in the subpackages needed for special features. In versions of Alpine before 3.13.0 these features were included in the x86_64 QEMU package itself.

# apk add libvirt-daemon qemu-img qemu-system-x86_64 qemu-modules
# rc-update add libvirtd


By default, libvirt uses NAT for VM connectivity. If you want to use the default configuration, you need to load the tun module.

# modprobe tun
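To have tun loaded again on every boot, list it in /etc/modules, Alpine's boot-time module list; the file simply contains one module name per line:

```
tun
```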

If you prefer bridging a guest over your Ethernet interface, you need to make a bridge.

It's quite common to use bridges in KVM environments, but when IPv6 is in use, Alpine will assign the bridge a link-local address, plus an SLAAC address if a router is sending Router Advertisements. This is undesirable: you don't want the KVM host to have an IP address in every network it serves to guests. Unfortunately you cannot simply disable IPv6 for the bridge via sysctl, because the bridge might not be up yet when sysctl runs at boot time. The workaround is to put a post-up hook into the /etc/network/interfaces file like this:

auto brlan
iface brlan inet manual
       bridge-ports eth1.5
       bridge-stp 0
       post-up ip -6 a flush dev brlan; sysctl -w net.ipv6.conf.brlan.disable_ipv6=1


For (non-root) management, you will need to add your user to the libvirt group.

# addgroup user libvirt

You can use libvirt's virsh on the CLI. It can execute commands as well as run as an interactive shell. Read its manual page and/or use the "help" command for more info. Some basic commands are:

virsh help
virsh list --all
virsh start $domain
virsh shutdown $domain

The libvirt project provides a GUI for managing hosts, called virt-manager. It handles local systems as well as remote ones via SSH.

# apk add dbus polkit virt-manager
# rc-update add dbus

In order to use libvirtd to remotely control KVM over SSH, PolicyKit needs a .pkla file informing it that this is allowed. Write the following to /etc/polkit-1/localauthority/50-local.d/50-libvirt-ssh-remote-access-policy.pkla

[Remote libvirt SSH access]
Identity=unix-group:libvirt
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
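Once the policy is in place, a remote session can be opened with virsh over SSH using a libvirt connection URI; "user" and "kvmhost" below are placeholders for your own values:

```
virsh -c qemu+ssh://user@kvmhost/system list --all
```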


VFIO is a more flexible way to do PCI passthrough. Suppose you want to hand the following Ethernet card to a VM as a PCI device.

# lspci | grep 02:00.0
02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
# lspci -n -s 02:00.0
02:00.0 0200: 8086:10c9 (rev 01)

First, create /etc/mkinitfs/features.d/vfio.modules with the following content, so mkinitfs includes the VFIO modules in the initramfs.
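A plausible body for vfio.modules, assuming mkinitfs' usual module-path globs relative to /lib/modules/<kernel-version> (verify the paths against your kernel tree):

```
kernel/drivers/vfio/*
kernel/drivers/vfio/pci/*
```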


Add vfio to the list of features in /etc/mkinitfs/mkinitfs.conf.
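With vfio added, the features line in /etc/mkinitfs/mkinitfs.conf might look like this; the rest of the feature list is only an example and varies per installation:

```
features="ata base ide scsi usb virtio ext4 vfio"
```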

Create /etc/modprobe.d/vfio.conf with the following options, so that vfio-pci claims the device (by vendor:device ID) ahead of its regular driver (igb here), then rebuild the initramfs.

# cat > /etc/modprobe.d/vfio.conf <<EOF
options vfio-pci ids=8086:10c9
options vfio_iommu_type1 allow_unsafe_interrupts=1
softdep igb pre: vfio-pci
EOF
# mkinitfs

Now modify GRUB: include intel_iommu=on iommu=pt on Intel platforms (AMD uses amd_iommu=on) and add the VFIO modules.

# grep ^GRUB_CMDLINE_LINUX_DEFAULT /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="modules=sd-mod,usb-storage,ext4,raid1,vfio,vfio-pci,vfio_iommu_type1,vfio_virqfd nomodeset rootfstype=ext4 intel_iommu=on iommu=pt console=ttyS0,115200"
# grub-mkconfig -o /boot/grub/grub.cfg

Reboot and check dmesg.

# grep -i -e DMAR -e IOMMU /var/log/dmesg
[    0.343795] DMAR: Host address width 36
[    0.343797] DMAR: DRHD base: 0x000000fed90000 flags: 0x1
[    0.343804] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap c90780106f0462 ecap f020e3
[    0.343806] DMAR: RMRR base: 0x000000000ed000 end: 0x000000000effff
[    0.343807] DMAR: RMRR base: 0x000000bf7ed000 end: 0x000000bf7fffff
[    0.553830] iommu: Default domain type: Passthrough (set via kernel command line)
[    0.902477] DMAR: No ATSR found
[    0.902563] DMAR: dmar0: Using Queued invalidation
[    0.903256] pci 0000:02:00.0: Adding to iommu group 12
[    0.903768] DMAR: Intel(R) Virtualization Technology for Directed I/O

If you do not run libvirt VMs as root (check with egrep '^#*user' /etc/libvirt/qemu.conf), you must have the correct permissions on /dev/vfio/<iommu_group>, e.g. /dev/vfio/12. Tune /etc/mdev.conf or your udev rules accordingly.
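As a sketch, assuming qemu.conf runs QEMU as user qemu with group qemu (adjust both to your setup), an /etc/mdev.conf entry granting that group access to the VFIO group nodes might look like this:

```
vfio/[0-9]+ root:qemu 0660
```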

# virsh dumpxml vm01 | xmllint --xpath '//*/hostdev' -
<hostdev mode="subsystem" type="pci" managed="yes">
      <driver name="vfio"/>
      <source>
        <address domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
      </source>
      <alias name="hostdev0"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x06" function="0x0"/>
</hostdev>
<hostdev mode="subsystem" type="pci" managed="yes">
      <driver name="vfio"/>
      <source>
        <address domain="0x0000" bus="0x02" slot="0x00" function="0x1"/>
      </source>
      <alias name="hostdev1"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x08" function="0x0"/>
</hostdev>

If you use QEMU directly without libvirt and are trying to pass a GPU to your VM, you may get a "VFIO_MAP_DMA failed: Out of memory" error when starting the VM as a non-root user. One way to fix it is to install the shadow package and increase the amount of memory the user may lock via the /etc/security/limits.conf file:

# apk add shadow
# cat >> /etc/security/limits.conf <<EOF
youruser soft memlock RAMamount
youruser hard memlock RAMamount
EOF
# reboot

Replace "youruser" with the user you wish to run the VM as, and "RAMamount" with how much RAM your VM will need, in KB. The exact amount may still produce the same error, so you probably want to increase the value by a few dozen MB (typically about 40).
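As a sketch, the memlock value can be computed from the VM's RAM size; vm_ram_mib is a placeholder for your VM's memory, and the 40 MiB headroom is the rule of thumb mentioned above:

```shell
# VM RAM in MiB (placeholder value) plus ~40 MiB of VFIO overhead,
# converted to KiB for /etc/security/limits.conf
vm_ram_mib=4096
memlock_kb=$(( (vm_ram_mib + 40) * 1024 ))
echo "$memlock_kb"
```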

A lot of info at [1].