Domain XML format

This section describes the XML format used to represent domains. There are variations on the format based on the kind of domain run and the options used to launch it:

Normal paravirtualized Xen guests:

The root element must be called domain with no namespace; the type attribute indicates the kind of hypervisor used, 'xen' being the default value. The id attribute gives the domain id at runtime (note however that this may change, for example if the domain is saved to disk and restored). The domain has a few children whose order is not significant:
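As an illustration, here is a minimal sketch of a paravirtualized Xen domain description. The domain name, memory size, kernel, initrd and disk image paths, MAC address and bridge name are placeholder values chosen for the example, not defaults imposed by the format:

<domain type='xen' id='18'>
  <!-- names, paths and addresses below are example placeholders -->
  <name>pv_guest</name>
  <memory>131072</memory>
  <vcpu>1</vcpu>
  <os>
    <type>linux</type>
    <kernel>/boot/vmlinuz-guest</kernel>
    <initrd>/boot/initrd-guest.img</initrd>
    <root>/dev/sda1</root>
    <cmdline>ro 3</cmdline>
  </os>
  <devices>
    <disk type='file'>
      <source file='/var/lib/xen/images/pv_guest.img'/>
      <target dev='sda1'/>
    </disk>
    <interface type='bridge'>
      <source bridge='xenbr0'/>
      <mac address='aa:00:00:00:00:11'/>
      <script path='/etc/xen/scripts/vif-bridge'/>
    </interface>
    <console tty='/dev/pts/4'/>
  </devices>
</domain>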

The format of the devices and their type may grow over time, but the following should be sufficient for basic use:

A disk device indicates a block device. It can have two values for the type attribute, either 'file' or 'block', corresponding to the two options available at the Xen layer. It has two mandatory children and one optional one, in no specific order:
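As a sketch, a file-backed disk might look like the fragment below; the image path and target device name are placeholders, and the self-closing readonly element is shown only as an example of the kind of optional child mentioned above:

<disk type='file'>
  <!-- source path and target device are example placeholders -->
  <source file='/var/lib/xen/images/pv_guest.img'/>
  <target dev='sda1'/>
  <readonly/>
</disk>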

An interface element describes a network device mapped on the guest. It also has a type, whose value is currently 'bridge', and a number of children in no specific order:
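A sketch of such a bridged interface follows; the bridge name, MAC address and script path are placeholders reflecting a typical Xen setup:

<interface type='bridge'>
  <!-- bridge, address and script path are example placeholders -->
  <source bridge='xenbr0'/>
  <mac address='aa:00:00:00:00:11'/>
  <script path='/etc/xen/scripts/vif-bridge'/>
</interface>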

A console element describes a serial console connection to the guest. It has no children, and a single attribute tty which provides the path to the pseudo TTY on which the guest console can be accessed.
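For example, with a placeholder pseudo TTY path:

<console tty='/dev/pts/4'/>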

Life cycle actions for the domain can also be expressed in the XML format. They drive what should happen if the domain crashes, is rebooted or is powered off. There are various possible actions when this happens:

The following could be used for a Xen production system:

<domain>
  ...
  <on_reboot>restart</on_reboot>
  <on_poweroff>destroy</on_poweroff>
  <on_crash>rename-restart</on_crash>
  ...
</domain>

While the format may be extended in various ways as support for more hypervisor types and features is added, it is expected that this core subset will remain functional in spite of the evolution of the library.

Fully virtualized guests

There are a few things to notice specifically for HVM domains; an example sketch is shown below.

It is likely that the HVM description will get additional optional elements and attributes as the support for fully virtualized domains expands, especially for the variety of devices emulated and the graphics support options offered.
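The following rough sketch shows how such a description might look; the loader and emulator paths are the conventional Xen HVM ones but may vary by distribution, and the domain name, disk image path, MAC address and VNC port are placeholders:

<domain type='xen' id='3'>
  <!-- paths, addresses and port below are example placeholders -->
  <name>fv_guest</name>
  <memory>524288</memory>
  <vcpu>1</vcpu>
  <os>
    <type>hvm</type>
    <loader>/usr/lib/xen/boot/hvmloader</loader>
    <boot dev='hd'/>
  </os>
  <devices>
    <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
    <disk type='file'>
      <source file='/var/lib/xen/images/fv_guest.img'/>
      <target dev='hda'/>
    </disk>
    <interface type='bridge'>
      <source bridge='xenbr0'/>
      <mac address='00:16:3e:5d:c7:9e'/>
      <script path='/etc/xen/scripts/vif-bridge'/>
    </interface>
    <graphics type='vnc' port='5904'/>
  </devices>
</domain>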

Networking interface options

The networking support in the QEMU and KVM case is more flexible, and supports a variety of options:

  1. Userspace SLIRP stack

    Provides a virtual LAN with NAT to the outside world. The virtual network has DHCP & DNS services and will give the guest VM addresses starting from 10.0.2.15. The default router will be 10.0.2.2 and the DNS server will be 10.0.2.3. This networking is the only option for unprivileged users who need their VMs to have outgoing access. Example configs are:

    <interface type='user'/>
    <interface type='user'>
      <mac address="11:22:33:44:55:66"/>
    </interface>
        
  2. Virtual network

    Provides a virtual network using a bridge device in the host. Depending on the virtual network configuration, the network may be totally isolated, NAT'ing to an explicit network device, or NAT'ing to the default route. DHCP and DNS are provided on the virtual network in all cases and the IP range can be determined by examining the virtual network config with 'virsh net-dumpxml <network name>'. There is one virtual network called 'default' set up out of the box which does NAT'ing to the default route and has an IP range of 192.168.22.0/255.255.255.0. Each guest will have an associated tun device created with a name of vnetN, which can also be overridden with the <target> element. Example configs are:

    <interface type='network'>
      <source network='default'/>
    </interface>
    
    <interface type='network'>
      <source network='default'/>
      <target dev='vnet7'/>
      <mac address="11:22:33:44:55:66"/>
    </interface>
        
  3. Bridge to LAN

    Provides a bridge from the VM directly onto the LAN. This assumes there is a bridge device on the host which has one or more of the host's physical NICs enslaved. The guest VM will have an associated tun device created with a name of vnetN, which can also be overridden with the <target> element. The tun device will be enslaved to the bridge. The IP range / network configuration is whatever is used on the LAN. This provides the guest VM full incoming & outgoing net access just like a physical machine. Examples include:

    <interface type='bridge'>
     <source bridge='br0'/>
    </interface>
    
    <interface type='bridge'>
      <source bridge='br0'/>
      <target dev='vnet7'/>
      <mac address="11:22:33:44:55:66"/>
    </interface>
  4. Generic connection to LAN

    Provides a means for the administrator to execute an arbitrary script to connect the guest's network to the LAN. The guest will have a tun device created with a name of vnetN, which can also be overridden with the <target> element. After creating the tun device a shell script will be run which is expected to do whatever host network integration is required. By default this script is called /etc/qemu-ifup but can be overridden.

    <interface type='ethernet'/>
    
    <interface type='ethernet'>
      <target dev='vnet7'/>
      <script path='/etc/qemu-ifup-mynet'/>
    </interface>
  5. Multicast tunnel

    A multicast group is set up to represent a virtual network. Any VMs whose network devices are in the same multicast group can talk to each other even across hosts. This mode is also available to unprivileged users. There is no default DNS or DHCP support and no outgoing network access. To provide outgoing network access, one of the VMs should have a 2nd NIC which is connected to one of the first 4 network types and do the appropriate routing. The multicast protocol is also compatible with that used by User Mode Linux guests. The source address used must be from the multicast address block.

    <interface type='mcast'>
      <source address='230.0.0.1' port='5558'/>
    </interface>
  6. TCP tunnel

    A TCP client/server architecture provides a virtual network. One VM provides the server end of the network, all other VMs are configured as clients. All network traffic is routed between the VMs via the server. This mode is also available to unprivileged users. There is no default DNS or DHCP support and no outgoing network access. To provide outgoing network access, one of the VMs should have a 2nd NIC which is connected to one of the first 4 network types and do the appropriate routing.

    Example server config:

    <interface type='server'>
      <source address='192.168.0.1' port='5558'/>
    </interface>

    Example client config:

    <interface type='client'>
      <source address='192.168.0.1' port='5558'/>
    </interface>

Note that options 2, 3 and 4 are also supported by Xen VMs, so it is possible to use these configurations to have networking with both Xen and QEMU/KVM guests connected to each other.

Example configs

Example configurations for each driver are provided on the driver-specific pages listed below.