<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.alpinelinux.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Opendna</id>
	<title>Alpine Linux - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.alpinelinux.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Opendna"/>
	<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/wiki/Special:Contributions/Opendna"/>
	<updated>2026-04-25T18:09:25Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.40.0</generator>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=KVM&amp;diff=20407</id>
		<title>KVM</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=KVM&amp;diff=20407"/>
		<updated>2021-12-04T20:20:54Z</updated>

		<summary type="html">&lt;p&gt;Opendna: /* Installation */ The rc-update command is part of the openrc package. The following command fails without it.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://www.linux-kvm.org/page/Main_Page KVM] is a free and open source virtualization solution in a kernel module. Although it is often simply referred to as KVM, the actual hypervisor is [https://www.qemu.org QEMU]. QEMU runs from user-space, but can integrate with KVM, providing better performance by leveraging the hardware from kernel-space. QEMU can virtualize x86, PowerPC, and S390 guests, amongst others. [https://libvirt.org Libvirt] is a management framework that integrates with QEMU/KVM, [https://wiki.alpinelinux.org/wiki/LXC LXC], [https://wiki.alpinelinux.org/wiki/Xen_Dom0 Xen] and others.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
The following commands install &#039;&#039;&#039;libvirt&#039;&#039;&#039; as well as &#039;&#039;&#039;QEMU with emulation for x86_64&#039;&#039;&#039; and &#039;&#039;&#039;qemu-img&#039;&#039;&#039;, which is required for disk formats other than raw, such as qcow2; it can also convert images between formats like vhdx and vmdk. The commands also install the metapackage &#039;&#039;&#039;qemu-modules&#039;&#039;&#039;, which pulls in the subpackages needed for special features. In versions of Alpine before 3.13.0 these features were covered by &#039;&#039;&#039;QEMU with emulation for x86_64&#039;&#039;&#039;.&lt;br /&gt;
{{Cmd|&amp;lt;nowiki&amp;gt;# apk add libvirt-daemon qemu-img qemu-system-x86_64 qemu-modules openrc&lt;br /&gt;
# rc-update add libvirtd&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
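&lt;br /&gt;
&#039;&#039;rc-update&#039;&#039; only enables the service for subsequent boots. To start the daemon immediately as well:&lt;br /&gt;
{{Cmd|# rc-service libvirtd start}}&lt;br /&gt;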
&lt;br /&gt;
== Networking ==&lt;br /&gt;
By default, libvirt uses NAT for VM connectivity. If you want to use the default configuration, you need to load the tun module.&lt;br /&gt;
{{Cmd|# modprobe tun}}&lt;br /&gt;
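&lt;br /&gt;
To have the module loaded automatically at every boot, you can also add it to /etc/modules:&lt;br /&gt;
{{Cmd|&amp;lt;nowiki&amp;gt;# echo tun &amp;gt;&amp;gt; /etc/modules&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;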
&lt;br /&gt;
If you prefer bridging a guest over your Ethernet interface, you need to make a [https://wiki.alpinelinux.org/wiki/Bridge#Configuration_file bridge].&lt;br /&gt;
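&lt;br /&gt;
As a minimal sketch (interface names here are examples; see the linked bridge page for details), such a bridge stanza in /etc/network/interfaces might look like this:&lt;br /&gt;
 auto br0&lt;br /&gt;
 iface br0 inet dhcp&lt;br /&gt;
        bridge-ports eth0&lt;br /&gt;
        bridge-stp 0&lt;br /&gt;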
&lt;br /&gt;
It&#039;s quite common to use bridges with KVM environments. But when IPv6 is in use, Alpine will assign the bridge a link-local address, as well as a SLAAC address if a router is sending Router Advertisements. This is undesirable: the KVM host should not have an IP address in every network it serves to guests. Unfortunately, IPv6 cannot simply be disabled for the bridge via a sysctl configuration file, because the bridge might not be up yet when the sysctl config is applied during boot. What works is to put a post-up hook into /etc/network/interfaces like this:&lt;br /&gt;
 auto brlan&lt;br /&gt;
 iface brlan inet manual&lt;br /&gt;
        bridge-ports eth1.5&lt;br /&gt;
        bridge-stp 0&lt;br /&gt;
        post-up ip -6 a flush dev brlan; sysctl -w net.ipv6.conf.brlan.disable_ipv6=1&lt;br /&gt;
&lt;br /&gt;
== Management ==&lt;br /&gt;
For non-root management, you will need to add your user to the libvirt group.&lt;br /&gt;
{{Cmd|# addgroup user libvirt}}&lt;br /&gt;
&lt;br /&gt;
You can use libvirt&#039;s virsh on the command line. It can execute single commands as well as run as an interactive shell. Read its manual page and/or use the &amp;quot;help&amp;quot; command for more info. Some basic commands are:&lt;br /&gt;
&lt;br /&gt;
{{Cmd|&amp;lt;nowiki&amp;gt;virsh help&lt;br /&gt;
virsh list --all&lt;br /&gt;
virsh start $domain&lt;br /&gt;
virsh shutdown $domain&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
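&lt;br /&gt;
virsh can also manage a remote host over SSH by passing a connection URI (&amp;quot;user&amp;quot; and &amp;quot;host&amp;quot; below are placeholders):&lt;br /&gt;
{{Cmd|&amp;lt;nowiki&amp;gt;virsh -c qemu+ssh://user@host/system list --all&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;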
&lt;br /&gt;
The libvirt project provides a GUI for managing hosts, called virt-manager. It handles local systems as well as remote ones via SSH.&lt;br /&gt;
{{Cmd|&amp;lt;nowiki&amp;gt;# apk add dbus polkit virt-manager terminus-font&lt;br /&gt;
# rc-update add dbus&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
To control KVM remotely over SSH via libvirtd, PolicyKit needs a .pkla file informing it that this is allowed.&lt;br /&gt;
Write the following file to /etc/polkit-1/localauthority/50-local.d/50-libvirt-ssh-remote-access-policy.pkla&lt;br /&gt;
{{Cmd|&amp;lt;nowiki&amp;gt;[Remote libvirt SSH access]&lt;br /&gt;
 Identity=unix-group:libvirt&lt;br /&gt;
 Action=org.libvirt.unix.manage&lt;br /&gt;
 ResultAny=yes&lt;br /&gt;
 ResultInactive=yes&lt;br /&gt;
 ResultActive=yes&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Guest lifecycle management ==&lt;br /&gt;
The libvirt-guests service (available from Alpine 3.13.5) allows running guests to be automatically suspended or shut down when the host is shut down or rebooted.&lt;br /&gt;
&lt;br /&gt;
The service is configured in /etc/conf.d/libvirt-guests. Enable the service with {{Cmd|# rc-update add libvirt-guests}}&lt;br /&gt;
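&lt;br /&gt;
For example, whether guests are suspended or shut down when the host goes down is controlled by variables in that file (the names below follow upstream libvirt&#039;s libvirt-guests configuration and may differ on Alpine; check the comments in /etc/conf.d/libvirt-guests itself):&lt;br /&gt;
 ON_SHUTDOWN=shutdown&lt;br /&gt;
 SHUTDOWN_TIMEOUT=120&lt;br /&gt;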
&lt;br /&gt;
== vfio ==&lt;br /&gt;
&lt;br /&gt;
VFIO is a more flexible way to do PCI passthrough. Suppose you want to pass the following Ethernet card through to a VM as a PCI device.&lt;br /&gt;
&lt;br /&gt;
 # lspci | grep 02:00.0&lt;br /&gt;
 02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)&lt;br /&gt;
 # lspci -n -s 02:00.0&lt;br /&gt;
 02:00.0 0200: 8086:10c9 (rev 01)&lt;br /&gt;
&lt;br /&gt;
First, create &#039;&#039;/etc/mkinitfs/features.d/vfio.modules&#039;&#039; with the following content, so mkinitfs includes the VFIO modules in the initramfs.&lt;br /&gt;
&lt;br /&gt;
 kernel/drivers/vfio/vfio.ko&lt;br /&gt;
 kernel/drivers/vfio/vfio_virqfd.ko&lt;br /&gt;
 kernel/drivers/vfio/vfio_iommu_type1.ko&lt;br /&gt;
 kernel/drivers/vfio/pci/vfio-pci.ko&lt;br /&gt;
&lt;br /&gt;
Add &#039;&#039;vfio&#039;&#039; to the list of features in &#039;&#039;/etc/mkinitfs/mkinitfs.conf&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Create the following modprobe configuration so that &#039;&#039;vfio-pci&#039;&#039; claims the device with the given options, then rebuild the initramfs.&lt;br /&gt;
&lt;br /&gt;
 # cat &amp;gt; /etc/modprobe.d/vfio.conf &amp;lt;&amp;lt;EOF&lt;br /&gt;
 options vfio-pci ids=8086:10c9&lt;br /&gt;
 options vfio_iommu_type1 allow_unsafe_interrupts=1&lt;br /&gt;
 softdep igb pre: vfio-pci&lt;br /&gt;
 EOF&lt;br /&gt;
 # mkinitfs&lt;br /&gt;
&lt;br /&gt;
Now modify GRUB: include &#039;&#039;intel_iommu=on iommu=pt&#039;&#039; on Intel platforms (AMD uses &#039;&#039;amd_iommu=on&#039;&#039;) and add the VFIO modules.&lt;br /&gt;
&lt;br /&gt;
 # grep ^GRUB_CMDLINE_LINUX_DEFAULT /etc/default/grub&lt;br /&gt;
 GRUB_CMDLINE_LINUX_DEFAULT=&amp;quot;modules=sd-mod,usb-storage,ext4,raid1,vfio,vfio-pci,vfio_iommu_type1,vfio_virqfd nomodeset rootfstype=ext4 intel_iommu=on iommu=pt console=ttyS0,115200&amp;quot;&lt;br /&gt;
 # grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
Reboot and check dmesg.&lt;br /&gt;
&lt;br /&gt;
 # grep -i -e DMAR -e IOMMU /var/log/dmesg&lt;br /&gt;
 [    0.343795] DMAR: Host address width 36&lt;br /&gt;
 [    0.343797] DMAR: DRHD base: 0x000000fed90000 flags: 0x1&lt;br /&gt;
 [    0.343804] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap c90780106f0462 ecap f020e3&lt;br /&gt;
 [    0.343806] DMAR: RMRR base: 0x000000000ed000 end: 0x000000000effff&lt;br /&gt;
 [    0.343807] DMAR: RMRR base: 0x000000bf7ed000 end: 0x000000bf7fffff&lt;br /&gt;
 [    0.553830] iommu: Default domain type: Passthrough (set via kernel command line)&lt;br /&gt;
 [    0.902477] DMAR: No ATSR found&lt;br /&gt;
 [    0.902563] DMAR: dmar0: Using Queued invalidation&lt;br /&gt;
 ...&lt;br /&gt;
 [    0.903256] pci 0000:02:00.0: Adding to iommu group 12&lt;br /&gt;
 ...&lt;br /&gt;
 [    0.903768] DMAR: Intel(R) Virtualization Technology for Directed I/O&lt;br /&gt;
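&lt;br /&gt;
You can also confirm that the device is now bound to the vfio-pci driver:&lt;br /&gt;
 # lspci -k -s 02:00.0&lt;br /&gt;
 02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)&lt;br /&gt;
        Kernel driver in use: vfio-pci&lt;br /&gt;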
&lt;br /&gt;
If you do not run libvirt VMs as &#039;&#039;root&#039;&#039; (check with &#039;&#039;egrep &#039;^#*user&#039; /etc/libvirt/qemu.conf&#039;&#039;), then you must have the correct permissions on &#039;&#039;/dev/vfio/&amp;lt;iommu_group&amp;gt;&#039;&#039;, e.g. &#039;&#039;/dev/vfio/12&#039;&#039;, which you can arrange via &#039;&#039;/etc/mdev.conf&#039;&#039; or udev rules. Also note that if there are multiple PCI devices in the same iommu group, you always have to add all of them to the VM; otherwise you&#039;ll get an error message like &amp;quot;Please ensure all devices within the iommu_group are bound to their vfio bus driver&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
 # virsh dumpxml vm01 | xmllint --xpath &#039;//*/hostdev&#039; -&lt;br /&gt;
 &amp;lt;hostdev mode=&amp;quot;subsystem&amp;quot; type=&amp;quot;pci&amp;quot; managed=&amp;quot;yes&amp;quot;&amp;gt;&lt;br /&gt;
       &amp;lt;driver name=&amp;quot;vfio&amp;quot;/&amp;gt;&lt;br /&gt;
       &amp;lt;source&amp;gt;&lt;br /&gt;
         &amp;lt;address domain=&amp;quot;0x0000&amp;quot; bus=&amp;quot;0x02&amp;quot; slot=&amp;quot;0x00&amp;quot; function=&amp;quot;0x0&amp;quot;/&amp;gt;&lt;br /&gt;
       &amp;lt;/source&amp;gt;&lt;br /&gt;
       &amp;lt;alias name=&amp;quot;hostdev0&amp;quot;/&amp;gt;&lt;br /&gt;
       &amp;lt;address type=&amp;quot;pci&amp;quot; domain=&amp;quot;0x0000&amp;quot; bus=&amp;quot;0x00&amp;quot; slot=&amp;quot;0x06&amp;quot; function=&amp;quot;0x0&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;/hostdev&amp;gt;&lt;br /&gt;
 &amp;lt;hostdev mode=&amp;quot;subsystem&amp;quot; type=&amp;quot;pci&amp;quot; managed=&amp;quot;yes&amp;quot;&amp;gt;&lt;br /&gt;
       &amp;lt;driver name=&amp;quot;vfio&amp;quot;/&amp;gt;&lt;br /&gt;
       &amp;lt;source&amp;gt;&lt;br /&gt;
         &amp;lt;address domain=&amp;quot;0x0000&amp;quot; bus=&amp;quot;0x02&amp;quot; slot=&amp;quot;0x00&amp;quot; function=&amp;quot;0x1&amp;quot;/&amp;gt;&lt;br /&gt;
       &amp;lt;/source&amp;gt;&lt;br /&gt;
       &amp;lt;alias name=&amp;quot;hostdev1&amp;quot;/&amp;gt;&lt;br /&gt;
       &amp;lt;address type=&amp;quot;pci&amp;quot; domain=&amp;quot;0x0000&amp;quot; bus=&amp;quot;0x00&amp;quot; slot=&amp;quot;0x08&amp;quot; function=&amp;quot;0x0&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;/hostdev&amp;gt;&lt;br /&gt;
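&lt;br /&gt;
As for the device-node permissions mentioned above, when using udev a rule such as the following would grant the &#039;&#039;libvirt&#039;&#039; group access to all VFIO group devices (group and mode here are assumptions; adapt them to your setup):&lt;br /&gt;
 # cat /etc/udev/rules.d/10-vfio.rules&lt;br /&gt;
 SUBSYSTEM==&amp;quot;vfio&amp;quot;, OWNER=&amp;quot;root&amp;quot;, GROUP=&amp;quot;libvirt&amp;quot;, MODE=&amp;quot;0660&amp;quot;&lt;br /&gt;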
&lt;br /&gt;
If you use QEMU directly, without libvirt, and are trying to pass a GPU to your VM, you may get a &amp;quot;VFIO_MAP_DMA failed: Out of memory&amp;quot; error when starting the VM as a non-root user. One way to fix this is to install the &#039;&#039;shadow&#039;&#039; package and increase the amount of memory the user may lock via the &#039;&#039;/etc/security/limits.conf&#039;&#039; file:&lt;br /&gt;
{{Cmd|&amp;lt;nowiki&amp;gt;# apk add shadow&lt;br /&gt;
# echo &amp;quot;youruser soft memlock RAMamount \&lt;br /&gt;
youruser hard memlock RAMamount&amp;quot; &amp;gt;&amp;gt; /etc/security/limits.conf&lt;br /&gt;
# reboot&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
Replace &amp;quot;youruser&amp;quot; with the user you wish to run the VM as, and &amp;quot;RAMamount&amp;quot; with how much RAM your VM will need (in KB). Setting the limit to exactly the VM&#039;s RAM size may still trigger the same error, so you probably want to increase the value by a few dozen MB (typically about 40 MB extra).&lt;br /&gt;
&lt;br /&gt;
Much more detail is available on the [https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF Arch Wiki&#039;s PCI passthrough via OVMF page].&lt;br /&gt;
&lt;br /&gt;
[[Category:Virtualization]]&lt;/div&gt;</summary>
		<author><name>Opendna</name></author>
	</entry>
</feed>