Install
This assumes that the bridge br0 has already been set up, and that the necessary packages have been installed.
For permissions reasons (probably not the only or best way to deal with this), /usr/share/qemu/ovmf-x86_64-code.bin has been copied into /home/roger/kvm-machines/efi-bios/.
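That copy amounts to something like the following (the directory name is simply the one used throughout these notes):
mkdir -p /home/roger/kvm-machines/efi-bios
cp /usr/share/qemu/ovmf-x86_64-code.bin /home/roger/kvm-machines/efi-bios/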
As user roger (not root):
qemu-img create -f qcow2 /home/roger/kvm-machines/name.qcow2 16G
qemu-system-x86_64 -machine accel=kvm \
-hda /home/roger/kvm-machines/name.qcow2 -m 2048 \
-bios /usr/share/qemu/ovmf-x86_64-code.bin \
-net nic,model=virtio \
-net bridge,br=br0,helper=/usr/lib/qemu-bridge-helper \
-monitor stdio
It should now be able to PXE boot and get installed.
If the DHCP and TFTP server are set up with the right logic (as described elsewhere), you can also install non-EFI systems by omitting the -bios option above.
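Note that the bridge helper used above normally also requires the bridge to be whitelisted in its ACL file, typically /etc/qemu/bridge.conf, with a line such as:
allow br0
If that was already done as part of the bridge setup, nothing more is needed.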
Import
Then import it:
virt-install --osinfo detect=on,require=off \
--name name --memory 2048 \
--disk /home/roger/kvm-machines/name.qcow2 \
--import
After it fails (which it will, if it’s an EFI machine), edit
/home/roger/.config/libvirt/qemu/name.xml
Or, using virsh:
virsh edit name
In <os>...</os> add:
<loader type='pflash'>/home/roger/kvm-machines/efi-bios/ovmf-x86_64.bin</loader>
And also copy /usr/share/qemu/ovmf-x86_64-code.bin to /home/roger/kvm-machines/efi-bios/ovmf-x86_64.bin, otherwise we see permission errors (I'm sure there are better ways of solving this, but it works). For some systems it also seems necessary to provide a separate per-machine copy of ovmf-x86_64.bin (see the section "Booting problems" below).
In the <interface> element we need to change to:
<interface type='bridge'>
...
<source bridge='br0'/>
...
Getting a console
Getting virsh console to work:
Inside the virtual machine, edit /etc/default/grub and add console=ttyS0 to GRUB_CMDLINE_LINUX_DEFAULT. Then run:
grub2-mkconfig -o /boot/grub/grub.cfg
On older systems make the equivalent bootloader changes.
Also make sure that a getty is available on ttyS0 (on systemd setups this will happen automatically). On systems with /etc/inittab it needs to be enabled there:
S0:12345:respawn:/sbin/agetty -L 9600 ttyS0 vt102
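Once that is in place, attach to the serial console with:
virsh console name
(If a getty does not show up automatically on a systemd guest, systemctl enable --now serial-getty@ttyS0.service should provide one.)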
Mounting the disk image
Or, you could temporarily edit the default entry in /boot/grub/grub.cfg externally by mounting the disk image…
But… kpartx won't work with qcow2 images, so install guestfs-tools and then use:
guestmount -a tumbleweed.img -m /dev/sda3 /mnt
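When finished, unmount it again with:
guestunmount /mnt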
Or, you can use:
modprobe nbd
qemu-nbd --connect=/dev/nbd0 /home/roger/kvm-machines/name.qcow2
and then proceed using kpartx -a /dev/nbd0 and mount the relevant partitions that appear under /dev/mapper/.
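For example (the partition number will of course vary per image):
mount /dev/mapper/nbd0p3 /mnt
When finished, undo it all with:
umount /mnt
kpartx -d /dev/nbd0
qemu-nbd --disconnect /dev/nbd0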
(Reference for mounting partitions in qcow2 images: http://ask.xmodulo.com/mount-qcow2-disk-image-linux.html)
Remote viewing
For remote graphical viewing using remote-viewer:
<graphics type='spice' autoport='yes' listen='0.0.0.0'>
<listen type='address' address='0.0.0.0'/>
</graphics>
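With autoport the SPICE port is assigned dynamically; virsh domdisplay name prints the resulting URI, which can be handed straight to remote-viewer, along the lines of (host and port here are placeholders):
remote-viewer spice://host:5900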
Monitoring
To enable monitoring you can install the qemu-guest-agent package and add the following to the XML (you also need to enable the service that qemu-guest-agent provides, or start qemu-ga from /etc/init.d/boot.local, depending on the operating system version):
<channel type='unix'>
<target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>
This lets you externally discover the running virtual machines' IP addresses, for example using:
virsh domifaddr --source agent <name>
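A quick way to check that the agent channel is actually working is:
virsh qemu-agent-command <name> '{"execute":"guest-ping"}'
which returns {"return":{}} when the agent is reachable.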
For SLES 15 / Leap 15, getting the service working is trickier.
The service we need is:
qemu-ga@virtio\x2dports-org.qemu.guest_agent.0.service
leap152:~ # ls -l /dev/virtio-ports
total 0
lrwxrwxrwx 1 root root 11 Jul 3 13:57 com.redhat.spice.0 -> ../vport0p1
lrwxrwxrwx 1 root root 11 Jul 3 13:57 org.qemu.guest_agent.0 -> ../vport0p2
# cat /etc/qemu-ga.conf
# qemu-ga configuration sample
[general]
daemonize = 0
pidfile = /var/run/qemu-ga.pid
verbose = 0
method = virtio-serial
path = /dev/virtio-ports/org.qemu.guest_agent.0
statedir = /var/run
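Depending on the setup, udev may start that unit automatically once the virtio port appears; if not, it can be started by hand (note the quoting needed for the escaped unit name), something like:
systemctl start 'qemu-ga@virtio\x2dports-org.qemu.guest_agent.0.service'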
Script to get all addresses
#! /usr/bin/env python3
from pexpect import run
###############################################################################
# Collect the names of all running VMs from 'virsh list' (skip the header).
machines = []
for i in run('virsh list', encoding='utf-8').split('\n')[2:]:
    if len(i) > 1:
        machines.append(i.split()[1])
machines.sort()
###############################################################################
# For each VM, ask the guest agent for its addresses and print any line
# containing '192' (the local IPv4 address).
for m in machines:
    print(m.strip().ljust(14), end=' ')
    o = run('virsh domifaddr --source agent ' + m, encoding='utf-8').split('\n')[2:]
    if len(o) >= 6:
        for line in o:
            if '192' in line:
                print(line.strip())
    else:
        print(' ')
###############################################################################
Output:
$ getvmadds.py
sles11sp3 eth0 52:54:00:93:c0:ee ipv4 192.168.1.153/24
tumbleweed ens3 52:54:00:a9:29:3d ipv4 192.168.1.147/24
Booting problems
At least for older (pre-grub2 / elilo) systems like SLES 11 using EFI, it seems that one needs to store a separate copy of the EFI BIOS per machine and reference that in the XML, e.g.:
<os>
[...]
<loader type='pflash'>/home/roger/kvm-machines/sles11sp3-ovmf-x86_64.bin</loader>
<boot dev='hd'/>
</os>
Otherwise the BIOS may "forget" how it’s booting and we drop to the EFI shell at startup after booting a different machine.
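The per-machine copy itself is made the same way as before, e.g.:
cp /usr/share/qemu/ovmf-x86_64-code.bin /home/roger/kvm-machines/sles11sp3-ovmf-x86_64.bin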