QEMU

QEMU is a flexible open source machine emulator and virtualizer. It can virtualize or emulate x86, x86_64, PowerPC, ARM, and S390 guests.


Install Alpine Linux in QEMU

Before You Start

  • Download the latest Alpine image.
  • Install QEMU on your system (e.g. sudo apt install qemu on Debian/Ubuntu, or sudo dnf install qemu on Fedora)

If you are using Alpine Linux, you can install:

# apk add qemu qemu-img qemu-system-x86_64 qemu-ui-gtk

Create the Virtual Machine

Create a disk image if you want to install Alpine Linux.

qemu-img create -f qcow2 alpine.qcow2 8G
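
You can check the result with qemu-img info (part of the same qemu-img package installed above):

qemu-img info alpine.qcow2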

The following command starts QEMU with the Alpine ISO image as CDROM, the default network configuration, 512MB RAM, the disk image that was created in the previous step, and CDROM as the boot device.

qemu-system-x86_64 -m 512 -nic user -boot d -cdrom alpine-standard-3.20.3-x86_64.iso -hda alpine.qcow2 -display gtk -enable-kvm

Tip: Remove option -enable-kvm if your hardware does not support this.
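
To check whether KVM acceleration is available on a Linux host, look for the kvm device node or for the CPU virtualization flags:

ls /dev/kvm
grep -E -c '(vmx|svm)' /proc/cpuinfo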

Log in as root (no password) and run:

setup-alpine

Follow the setup-alpine installation steps.

Run poweroff to shut down the machine.

Booting the Virtual Machine

After the installation, QEMU can be started from the disk image (-boot c, which is the default) without the CDROM.

qemu-system-x86_64 -m 512 -nic user -hda alpine.qcow2
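
To reach the guest over SSH, a minimal sketch using user-mode networking with a host port forward (this assumes an SSH server is running in the guest; port 2222 is an arbitrary choice):

qemu-system-x86_64 -m 512 -nic user,hostfwd=tcp::2222-:22 -hda alpine.qcow2 -enable-kvm
ssh -p 2222 root@127.0.0.1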

Live mode

To just give Alpine Linux a try in diskless mode, qemu can be used to boot the .iso file without any need for a virtual HDD image or further configuration.

qemu-system-x86_64 -m 512 -nic user -boot d -cdrom alpine-3.20.3-x86_64.iso --accel kvm

Letting the .iso image load an apkovl

This works by mounting a persistent filesystem under /media and selecting it to store the apkovl and the apkcache.

Preparing a KVM guest with a small virtual config drive (run on the host):

mkdir -p /media/usb/images
qemu-img create -f raw /media/usb/images/mykvm.config 32M
qemu-system-x86_64 -enable-kvm -m 384 \
    -name mykvm \
    -cdrom /media/usb/images/alpine-3.20.3-x86_64.iso \
    -drive file=/media/usb/images/mykvm.config,if=virtio \
    -nic user \
    -boot d &

And inside the KVM (running Alpine Linux):

fdisk /dev/vda  #creating a partition
mkdosfs /dev/vda1
mkdir -p /media/vda1
echo "/dev/vda1 /media/vda1 vfat rw 0 0" >> /etc/fstab
mount -a
setup-alpine  # (select vda1 for saving configs)
lbu commit

The next reboot then loads the generated apkovl and apk cache from /dev/vda1; the system runs completely from RAM, based on the latest official ISO.
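
For subsequent boots, the same invocation from above can be reused; the guest boots the ISO again and restores the saved apkovl from the attached config drive:

qemu-system-x86_64 -enable-kvm -m 384 \
    -name mykvm \
    -cdrom /media/usb/images/alpine-3.20.3-x86_64.iso \
    -drive file=/media/usb/images/mykvm.config,if=virtio \
    -nic user \
    -boot d &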

Advanced network configuration

To get networking running correctly, you can use the tun/tap interface, which appears as a real interface on the host. The key is to put each virtual network interface on the correct virtual vlan and to point it at the correct ifup script. (Note that QEMU's vlan= option refers to its internal virtual hubs, not to 802.1Q VLANs, and that this legacy -net syntax has been superseded by -netdev/-device in recent QEMU releases.)

You need two -net options on the command line, one for the host side:

-net tap,vlan=[somenumber],ifname=[host if],script=[some script]

and one for the guest side:

-net nic,vlan=[samenumber]

So to have a single NIC on the qemu virtual system that is connected to tap0 on the physical host:

qemu -net tap,vlan=0,ifname=tap0,script=./qemu-ifup -net nic,vlan=0 \
    -boot d -cdrom alpine*.iso


To create a qemu guest with more than one NIC, repeat the -net option pairs, using a different vlan number for each tap/nic pair:

qemu -net tap,vlan=0,ifname=tap0,script=./qemu-ifup -net nic,vlan=0 \
      -net tap,vlan=1,ifname=tap1,script=./qemu-ifup -net nic,vlan=1 \
      -net tap,vlan=2,ifname=tap2,script=./qemu-ifup -net nic,vlan=2 \
      -boot d -cdrom alpine*.iso

Now your alpine guest will have 3 NICs, mapped to tap0, tap1, and tap2 respectively.

What is actually happening is that you are creating a point-to-point tunnel, with the host's tap0 device as one endpoint and the guest's eth0 as the other.

So you need to assign IP addresses to BOTH sides of the tunnel. The qemu-ifup script does that for the host side. Here's an example:

#!/bin/sh
case $1 in
      tun0 | tap0 )
              sudo /sbin/ip addr add 192.168.1.100/24 dev $1
              sudo /sbin/ip link set $1 up
              ;;
      tap1 | tun1 )
              sudo /sbin/ip addr add 192.168.2.100/24 dev $1
              sudo /sbin/ip link set $1 up
              ;;
      tap2 | tun2 )
              sudo /sbin/ip addr add 192.168.3.100/24 dev $1
              sudo /sbin/ip link set $1 up
              ;;
      esac
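
Make sure the script is executable before starting QEMU:

chmod +x ./qemu-ifup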

In your Alpine guest, create an /etc/network/interfaces file like this:

iface eth0 inet static
      address 192.168.1.1
      netmask 255.255.255.0
      gateway 192.168.1.100

iface eth1 inet static
      address 192.168.2.1
      netmask 255.255.255.0

iface eth2 inet static
      address 192.168.3.1
      netmask 255.255.255.0
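
Then bring the interfaces up inside the guest:

# ifup eth0
# ifup eth1
# ifup eth2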

If you now add a MASQUERADE rule on the host for traffic leaving its default NIC, and enable ip_forward on the host, the QEMU guest can do svn updates, browse the web, run transmission, and so on.
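
A minimal sketch of that host-side setup, assuming eth0 is the host's default NIC (adjust interface names to your system):

# enable IPv4 forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
# masquerade guest traffic leaving through the default NIC
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i tap0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o tap0 -m state --state RELATED,ESTABLISHED -j ACCEPT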

Using Xorg inside QEMU

The video driver needed for Xorg inside QEMU on Alpine 3.17 and older is xf86-video-modesetting. In newer Alpine releases, the driver is pulled in automatically as part of setup-xorg-base.
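
On Alpine 3.17 or older, the driver named above can be installed manually in the guest:

# apk add xf86-video-modesetting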

Tip: For KVM/QEMU guests you probably want QXL video with a SPICE display. For that, install xf86-video-qxl in the guest and run a SPICE client on the host.

If you decide to use QXL video in a KVM/QEMU guest, add this configuration to /etc/X11/xorg.conf:

Contents of /etc/X11/xorg.conf

Section "Device" Identifier "qxl" Driver "qxl" Option "ENABLE_SURFACES" "False" EndSection

Run a guest OS on Alpine Linux using KVM/QEMU

Install:

# apk add qemu-system-x86_64 qemu-modules libvirt libvirt-qemu

Note: also install virt-manager if you want a graphical front-end for KVM/QEMU


Add tun to /etc/modules:

# echo tun >> /etc/modules


Starting tun now:

# modprobe tun


Add your user to the kvm and qemu groups

# addgroup <username> kvm

# adduser <username> qemu

Log out and back in for the group changes to take effect


Adding services:

# rc-update add libvirtd

# rc-update add libvirt-guests

Starting the services now:

# rc-service libvirtd start

# rc-service libvirt-guests start
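
To verify that libvirt is up, list the defined guests with virsh (provided by the libvirt-client package, if it is not already installed):

# virsh -c qemu:///system list --all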


If you are interested in using a bridged network (so that the guest machine can be reached easily from the outside), see Bridge.