Domain XML format

This section describes the XML format used to represent domains. There are variations on the format based on the kind of domains run and the options used to launch them. For hypervisor-specific details, consult the driver docs.

Element and attribute overview

The root element required for all virtual machines is named domain. It has two attributes. The type attribute specifies the hypervisor used for running the domain; the allowed values are driver specific, but include "xen", "kvm", "qemu", "lxc" and "kqemu". The second attribute, id, is a unique integer identifier for the running guest machine. Inactive machines have no id value.

General metadata

      <domain type='xen' id='3'>
        <name>fv0</name>
        <uuid>4dea22b31d52d8f32516782e98ab3fa0</uuid>
        ...
name
The content of the name element provides a short name for the virtual machine. This name should consist only of alphanumeric characters and is required to be unique within the scope of a single host. It is often used to form the filename for storing the persistent configuration file. Since 0.0.1
uuid
The content of the uuid element provides a globally unique identifier for the virtual machine. The format must be RFC 4122 compliant, eg 3e3fce45-4f53-4fa7-bb32-11f34168b82b. If omitted when defining/creating a new machine, a random UUID is generated. Since 0.0.1

Operating system booting

There are a number of different ways to boot virtual machines each with their own pros and cons.

BIOS bootloader

Booting via the BIOS is available for hypervisors supporting full virtualization. In this case the BIOS has a boot order priority (floppy, hard disk, CD-ROM, network) determining where to obtain the boot image.

        ...
        <os>
          <type>hvm</type>
          <loader>/usr/lib/xen/boot/hvmloader</loader>
          <boot dev='hd'/>
        </os>
        ...
type
The content of the type element specifies the type of operating system to be booted in the virtual machine. hvm indicates that the OS is one designed to run on bare metal, so requires full virtualization. linux (badly named!) refers to an OS that supports the Xen 3 hypervisor guest ABI. There are also two optional attributes: arch specifies the CPU architecture to virtualize, and machine refers to the machine type. The capabilities XML provides details on allowed values for these. Since 0.0.1
loader
The optional loader tag refers to a firmware blob used to assist the domain creation process. At this time, it is only needed by Xen fully virtualized domains. Since 0.1.0
boot
The dev attribute takes one of the values "fd", "hd", "cdrom" or "network" and is used to specify the next boot device to consider. The boot element can be repeated multiple times to set up a priority list of boot devices to try in turn. Since 0.1.3
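
For example, a fully virtualized guest that tries the CD-ROM before falling back to the hard disk, with explicit arch and machine values (the values here are purely illustrative; consult the capabilities XML for those supported on a given host), might look like:

        ...
        <os>
          <type arch='i686' machine='pc'>hvm</type>
          <boot dev='cdrom'/>
          <boot dev='hd'/>
        </os>
        ...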

Host bootloader

Hypervisors employing paravirtualization do not usually emulate a BIOS; instead the host is responsible for kicking off the operating system boot. This may use a pseudo-bootloader in the host to provide an interface to choose a kernel for the guest. An example is pygrub with Xen.

        ...
        <bootloader>/usr/bin/pygrub</bootloader>
        <bootloader_args>--append single</bootloader_args>
        ...
bootloader
The content of the bootloader element provides a fully-qualified path to the bootloader executable in the host OS. This bootloader will be run to choose which kernel to boot. The required output of the bootloader is dependent on the hypervisor in use. Since 0.1.0
bootloader_args
The optional bootloader_args element allows command line arguments to be passed to the bootloader. Since 0.2.3

Direct kernel boot

When installing a new guest OS it is often useful to boot directly from a kernel and initrd stored in the host OS, allowing command line arguments to be passed directly to the installer. This capability is usually available for both paravirtualized and fully virtualized guests.

        ...
        <os>
          <type>hvm</type>
          <loader>/usr/lib/xen/boot/hvmloader</loader>
          <kernel>/root/f8-i386-vmlinuz</kernel>
          <initrd>/root/f8-i386-initrd</initrd>
          <cmdline>console=ttyS0 ks=http://example.com/f8-i386/os/</cmdline>
        </os>
        ...
type
This element has the same semantics as described earlier in the BIOS boot section.
loader
This element has the same semantics as described earlier in the BIOS boot section.
kernel
The contents of this element specify the fully-qualified path to the kernel image in the host OS.
initrd
The contents of this element specify the fully-qualified path to the (optional) ramdisk image in the host OS.
cmdline
The contents of this element specify arguments to be passed to the kernel (or installer) at boot time. This is often used to specify an alternate primary console (eg serial port), or the installation media source / kickstart file.

Basic resources

        ...
        <memory>524288</memory>
        <currentMemory>524288</currentMemory>
        <vcpu>1</vcpu>
        ...
memory
The maximum allocation of memory for the guest at boot time. The units for this value are kilobytes (i.e. blocks of 1024 bytes).
currentMemory
The actual allocation of memory for the guest. This value may be less than the maximum allocation, to allow for ballooning up the guest's memory on the fly. If this is omitted, it defaults to the same value as the memory element. A ballooning example appears after this list.
vcpu
The content of this element defines the number of virtual CPUs allocated for the guest OS.
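
As an illustrative sketch (the values are hypothetical), a guest capped at 512 MB but currently allocated only 256 MB, leaving headroom to balloon up later, would look like:

        ...
        <memory>524288</memory>
        <currentMemory>262144</currentMemory>
        <vcpu>2</vcpu>
        ...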

Lifecycle control

It is sometimes necessary to override the default actions taken when a guest OS triggers a lifecycle operation. The following collection of elements allows the actions to be specified. A common use case is to force a reboot to be treated as a poweroff when doing the initial OS installation, allowing the VM to be re-configured for the first post-install boot; an example appears after the list of actions below.

        ...
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        ...
on_poweroff
The content of this element specifies the action to take when the guest requests a poweroff.
on_reboot
The content of this element specifies the action to take when the guest requests a reboot.
on_crash
The content of this element specifies the action to take when the guest crashes.

Each of these lifecycle events allows the same four possible actions.

destroy
The domain will be terminated completely and all resources released
restart
The domain will be terminated, and then restarted with the same configuration
preserve
The domain will be terminated, and its resources preserved to allow analysis.
rename-restart
The domain will be terminated, and then restarted with a new name
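
For the installation use case mentioned earlier, a sketch along these lines (the policy choice is an assumption, not a requirement) makes any reboot or crash during install simply terminate the domain so it can be re-configured before the next boot:

        ...
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>destroy</on_reboot>
        <on_crash>destroy</on_crash>
        ...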

Hypervisor features

Hypervisors may allow certain CPU / machine features to be toggled on/off.

        ...
        <features>
          <pae/>
          <acpi/>
          <apic/>
        </features>
        ...

All features are listed within the features element; omitting a togglable feature tag turns it off. The available features can be found by asking for the capabilities XML, but a common set for fully virtualized domains are:

pae
Physical address extension mode allows 32-bit guests to address more than 4 GB of memory.
acpi
ACPI is useful for power management; for example, with KVM guests it is required for graceful shutdown to work.
apic
APIC allows the use of programmable IRQ management.

Time keeping

The guest clock is typically initialized from the host clock. Most operating systems expect the hardware clock to be kept in UTC, and this is the default. Windows, however, expects it to be in so-called 'localtime'.

        ...
        <clock sync="localtime"/>
        ...
clock
The sync attribute takes either "utc" or "localtime" to specify how the guest clock is initialized in relation to the host OS.

Devices

The final set of XML elements are all used to describe devices provided to the guest domain. All devices occur as children of the main devices element. Since 0.1.3

        ...
        <devices>
          <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
          ...
emulator
The contents of the emulator element specify the fully qualified path to the device model emulator binary. The capabilities XML specifies the recommended default emulator to use for each particular domain type / architecture combination.

Hard drives, floppy disks, CD-ROMs

Any device that looks like a disk, be it a floppy, hard disk, CD-ROM, or paravirtualized driver, is specified via the disk element.

          ...
          <disk type='file'>
            <driver name='tap' type='aio'/>
            <source file='/var/lib/xen/images/fv0'/>
            <target dev='hda' bus='ide'/>
          </disk>
          ...
disk
The disk element is the main container for describing disks. The type attribute is either "file" or "block" and refers to the underlying source for the disk. The optional device attribute indicates how the disk is to be exposed to the guest OS. Possible values for this attribute are "floppy", "disk" and "cdrom", defaulting to "disk". A CD-ROM example is sketched after this list. Since 0.0.3; "device" attribute since 0.1.4
source
If the disk type is "file", then the file attribute specifies the fully-qualified path to the file holding the disk. If the disk type is "block", then the dev attribute specifies the path to the host device to serve as the disk. Since 0.0.3
target
The target element controls the bus / device under which the disk is exposed to the guest OS. The dev attribute indicates the "logical" device name. The actual device name specified is not guaranteed to map to the device name in the guest OS; treat it as a device ordering hint. The optional bus attribute specifies the type of disk device to emulate; possible values are driver specific, with typical values being "ide", "scsi", "virtio", "xen". If omitted, the bus type is inferred from the style of the device name, eg a device named 'sda' will typically be exported using a SCSI bus. Since 0.0.3; bus attribute since 0.4.3
driver
If the hypervisor supports multiple backend drivers, then the optional driver element allows them to be selected. The name attribute is the primary backend driver name, while the optional type attribute provides the sub-type. Since 0.1.8
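
For instance, a host CD-ROM device exposed to the guest as an IDE CD-ROM might be written as follows (the paths and device names are illustrative):

          ...
          <disk type='block' device='cdrom'>
            <source dev='/dev/cdrom'/>
            <target dev='hdc' bus='ide'/>
          </disk>
          ...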

Network interfaces

          ...
          <interface type='bridge'>
            <source bridge='xenbr0'/>
            <mac address='00:16:3e:5d:c7:9e'/>
            <script path='vif-bridge'/>
          </interface>
          ...

Virtual network

This is the recommended config for general guest connectivity on hosts with dynamic / wireless networking configs

Provides a virtual network using a bridge device in the host. Depending on the virtual network configuration, the network may be totally isolated, NAT'ing to an explicit network device, or NAT'ing to the default route. DHCP and DNS are provided on the virtual network in all cases, and the IP range can be determined by examining the virtual network config with 'virsh net-dumpxml [networkname]'. There is one virtual network called 'default' set up out of the box which does NAT'ing to the default route and has an IP range of 192.168.122.0/255.255.255.0. Each guest will have an associated tun device created with a name of vnetN, which can also be overridden with the <target> element.
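
For example, to inspect the out-of-the-box network from the host:

      virsh net-dumpxml default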

      ...
      <interface type='network'>
        <source network='default'/>
      </interface>
      ...
      <interface type='network'>
        <source network='default'/>
        <target dev='vnet7'/>
        <mac address="11:22:33:44:55:66"/>
      </interface>
      ...

Bridge to LAN

This is the recommended config for general guest connectivity on hosts with static wired networking configs

Provides a bridge from the VM directly onto the LAN. This assumes there is a bridge device on the host which has one or more of the host's physical NICs enslaved. The guest VM will have an associated tun device created with a name of vnetN, which can also be overridden with the <target> element. The tun device will be enslaved to the bridge. The IP range / network configuration is whatever is used on the LAN. This provides the guest VM full incoming & outgoing net access, just like a physical machine.

      ...
      <interface type='bridge'>
        <source bridge='br0'/>
      </interface>

      <interface type='bridge'>
        <source bridge='br0'/>
        <target dev='vnet7'/>
        <mac address="11:22:33:44:55:66"/>
      </interface>
      ...

Userspace SLIRP stack

Provides a virtual LAN with NAT to the outside world. The virtual network has DHCP & DNS services and will give the guest VM addresses starting from 10.0.2.15. The default router will be 10.0.2.2 and the DNS server will be 10.0.2.3. This networking is the only option for unprivileged users who need their VMs to have outgoing access.

      ...
      <interface type='user'/>
      ...
      <interface type='user'>
        <mac address="11:22:33:44:55:66"/>
      </interface>
      ...

Generic ethernet connection

Provides a means for the administrator to execute an arbitrary script to connect the guest's network to the LAN. The guest will have a tun device created with a name of vnetN, which can also be overridden with the <target> element. After creating the tun device a shell script will be run which is expected to do whatever host network integration is required. By default this script is called /etc/qemu-ifup but can be overridden.

      ...
      <interface type='ethernet'/>
      ...
      <interface type='ethernet'>
        <target dev='vnet7'/>
        <script path='/etc/qemu-ifup-mynet'/>
      </interface>
      ...

Multicast tunnel

A multicast group is set up to represent a virtual network. Any VMs whose network devices are in the same multicast group can talk to each other, even across hosts. This mode is also available to unprivileged users. There is no default DNS or DHCP support and no outgoing network access. To provide outgoing network access, one of the VMs should have a second NIC which is connected to one of the first four network types, and do the appropriate routing. The multicast protocol is compatible with that used by User Mode Linux guests too. The source address used must be from the multicast address block.

      ...
      <interface type='mcast'>
        <source address='230.0.0.1' port='5558'/>
      </interface>
      ...

TCP tunnel

A TCP client/server architecture provides a virtual network. One VM provides the server end of the network; all other VMs are configured as clients. All network traffic is routed between the VMs via the server. This mode is also available to unprivileged users. There is no default DNS or DHCP support and no outgoing network access. To provide outgoing network access, one of the VMs should have a second NIC which is connected to one of the first four network types, and do the appropriate routing.

      ...
      <interface type='server'>
        <source address='192.168.0.1' port='5558'/>
      </interface>
      ...
      <interface type='client'>
        <source address='192.168.0.1' port='5558'/>
      </interface>
      ...

Input devices

Input devices allow interaction with the graphical framebuffer in the guest virtual machine. When enabling the framebuffer, an input device is automatically provided. It may be possible to add additional devices explicitly, for example, to provide a graphics tablet for absolute cursor movement.

          ...
          <input type='mouse' bus='usb'/>
          ...
input
The input element has one mandatory attribute, type, whose value can be either 'mouse' or 'tablet'. The latter provides absolute cursor movement, while the former uses relative movement. The optional bus attribute can be used to refine the exact device type. It takes values "xen" (paravirtualized), "ps2" and "usb".
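
For example, a USB tablet providing absolute cursor movement might be configured as follows (a minimal sketch):

          ...
          <input type='tablet' bus='usb'/>
          ...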

Graphical framebuffers

A graphics device allows for graphical interaction with the guest OS. A guest will typically have either a framebuffer or a text console configured to allow interaction with the admin.

          ...
          <graphics type='vnc' port='5904'/>
          ...
graphics
The graphics element has a mandatory type attribute which takes the value "sdl" or "vnc". The former displays a window on the host desktop, while the latter activates a VNC server. If VNC is used, the port attribute specifies the TCP port number (with -1 as legacy syntax indicating that it should be auto-allocated). The autoport attribute is the new preferred syntax for indicating auto-allocation of the TCP port. The listen attribute is an IP address for the server to listen on. The passwd attribute provides a VNC password in clear text.
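
As an illustration (the attribute values are hypothetical), a VNC server with an auto-allocated port, listening on all host interfaces and protected by a clear text password, might look like:

          ...
          <graphics type='vnc' autoport='yes' listen='0.0.0.0' passwd='example'/>
          ...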

Consoles, serial & parallel devices

A character device provides a way to interact with the virtual machine. Paravirtualized consoles, serial ports and parallel ports are all classed as character devices, and so are represented using the same syntax.

        ...
        <parallel type='pty'>
          <source path='/dev/pts/2'/>
          <target port='0'/>
        </parallel>
        <serial type='pty'>
          <source path='/dev/pts/3'/>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <source path='/dev/pts/4'/>
          <target port='0'/>
        </console>
        </devices>
      </domain>
parallel
Represents a parallel port
serial
Represents a serial port
console
Represents the primary console. This can be the paravirtualized console with Xen guests, or duplicates the primary serial port for fully virtualized guests without a paravirtualized console.
source
The attributes available for the source element vary according to the type attribute on the parent tag. Allowed variations are described below.
target
The port number of the character device is specified via the port attribute, numbered starting from 1. There is usually only one console device, and 0, 1 or 2 serial devices or parallel devices.

Domain logfile

This disables all input on the character device, and sends output into the virtual machine's logfile.

      ...
      <console type='stdio'>
        <target port='1'/>
      </console>
      ...

Device logfile

A file is opened and all data sent to the character device is written to the file.

      ...
      <serial type="file">
        <source path="/var/log/vm/vm-serial.log"/>
        <target port="1"/>
      </serial>
      ...

Virtual console

Connects the character device to the graphical framebuffer in a virtual console. This is typically accessed via a special hotkey sequence such as "ctrl+alt+3".

      ...
      <serial type='vc'>
        <target port="1"/>
      </serial>
      ...

Null device

Connects the character device to the void. No data is ever provided to the input. All data written is discarded.

      ...
      <serial type='null'>
        <target port="1"/>
      </serial>
      ...

Pseudo TTY

A Pseudo TTY is allocated using /dev/ptmx. A suitable client such as 'virsh console' can connect to interact with the serial port locally; a usage example follows the XML below.

      ...
      <serial type="pty">
        <source path="/dev/pts/3"/>
        <target port="1"/>
      </serial>
      ...
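
For example, one might attach to this port from the host with (using the domain name defined earlier):

      virsh console fv0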

NB: as a special case, if <console type='pty'> is used, then the TTY path is also duplicated as an attribute tty='/dev/pts/3' on the top level <console> tag. This provides compatibility with the existing syntax for <console> tags.

Host device proxy

The character device is passed through to the underlying physical character device. The device types must match, eg the emulated serial port should only be connected to a host serial port - don't connect a serial port to a parallel port.

      ...
      <serial type="dev">
        <source path="/dev/ttyS0"/>
        <target port="1"/>
      </serial>
      ...

TCP client/server

The character device acts as a TCP client connecting to a remote server, or as a server waiting for a client connection; a server-mode sketch follows the example below.

      ...
      <serial type="tcp">
        <source mode="connect" host="0.0.0.0" service="2445"/>
        <wiremode type="telnet"/>
        <target port="1"/>
      </serial>
      ...
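
The example above connects as a client; a server end waiting for an incoming connection would presumably use mode="bind" instead (an illustrative sketch):

      ...
      <serial type="tcp">
        <source mode="bind" host="0.0.0.0" service="2445"/>
        <target port="1"/>
      </serial>
      ...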

UDP network console

The character device acts as a UDP netconsole service, sending and receiving packets. This is a lossy service.

      ...
      <serial type="udp">
        <source mode="bind" host="0.0.0.0" service="2445"/>
        <source mode="connect" host="0.0.0.0" service="2445"/>
        <target port="1"/>
      </serial>
      ...

UNIX domain socket client/server

The character device acts as a UNIX domain socket server, accepting connections from local clients; a client-mode sketch follows the example below.

      ...
      <serial type="unix">
        <source mode="bind" path="/tmp/foo"/>
        <target port="1"/>
      </serial>
      ...
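
As shown, mode="bind" makes the device act as the server; a client connecting to an existing socket would presumably use mode="connect" instead (an illustrative sketch):

      ...
      <serial type="unix">
        <source mode="connect" path="/tmp/foo"/>
        <target port="1"/>
      </serial>
      ...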

Example configs

Example configurations for each driver are provided on the driver-specific pages listed below.