<h1 class="style1">XML Format</h1><p>This section describes the XML format used to represent domains; there are
variations on the format based on the kind of domains run and the options
used to launch them:</p><ul><li><a href="#Normal1">Normal paravirtualized Xen domains</a></li>
<li><a href="#KVM1">KVM domains</a></li>
<li><a href="#Net1">Networking options for QEmu and KVM</a></li>
<li><a href="#QEmu1">QEmu domains</a></li>
<li><a href="#Capa1">Discovering virtualization capabilities</a></li>
</ul><h3><a name="Normal1" id="Normal1">Normal paravirtualized Xen guests</a>:</h3><p>The library uses an XML format to describe domains, as input to <a href="html/libvirt-libvirt.html#virDomainCreateLinux">virDomainCreateLinux()</a>
and as the output of <a href="html/libvirt-libvirt.html#virDomainGetXMLDesc">virDomainGetXMLDesc()</a>.</p>
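<p>As a quick illustration, here is a minimal sketch in C of fetching that
XML description through the API; it assumes a running domain named fc4 and
keeps error handling to a minimum:</p>
<pre>
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void) {
    /* connect read-only to the default hypervisor */
    virConnectPtr conn = virConnectOpenReadOnly(NULL);
    if (conn == NULL)
        return 1;

    /* look up the running domain by name */
    virDomainPtr dom = virDomainLookupByName(conn, "fc4");
    if (dom != NULL) {
        /* same XML as printed by 'virsh dumpxml fc4' */
        char *xml = virDomainGetXMLDesc(dom, 0);
        if (xml != NULL) {
            printf("%s\n", xml);
            free(xml);   /* the caller owns the returned string */
        }
        virDomainFree(dom);
    }
    virConnectClose(conn);
    return 0;
}
</pre>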
<p>The following is an example of the format as returned by the shell command
<code>virsh dumpxml fc4</code>, where fc4 was one of the running domains:</p><pre><domain type='xen' <span style="color: #0071FF; background-color: #FFFFFF">id='18'</span>>
</domain></pre><p>There are a few things to notice specifically for HVM domains:</p><ul><li>the optional <code><features></code> block is used to enable
certain guest CPU / system features; for HVM guests several options are
offered, as sketched below</li></ul>
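<p>A sketch of such a features block, assuming the commonly used pae, acpi
and apic flags (the exact set of recognized features depends on the libvirt
and hypervisor versions):</p>
<pre>
<features>
  <pae/>
  <acpi/>
  <apic/>
</features>
</pre>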
<h3><a name="KVM1" id="KVM1">KVM domain (added in 0.2.0)</a></h3><p>Support for <a href="http://kvm.qumranet.com/">KVM virtualization</a>
is provided in recent Linux kernels (2.6.20 and onward). This requires
specific hardware with acceleration support and the availability of the
special version of the <a href="http://fabrice.bellard.free.fr/qemu/">QEmu</a> binary. Since this
relies on QEmu for machine emulation, as with fully virtualized guests, the
XML description is quite similar; here is a simple example:</p><pre><domain <span style="color: #FF0000; background-color: #FFFFFF">type='kvm'</span>>
</domain></pre><p>The specific points to note if using KVM are:</p><ul><li>the top-level domain element carries a type of 'kvm'</li>
<li>the <devices> emulator points to the special qemu binary required
for KVM (see the sketch after this list)</li>
<li>networking interface definitions are somewhat different, due
to a different model from Xen; see below</li>
</ul>
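<p>For instance, the devices block of a KVM guest might point at the
KVM-enabled binary along these lines (a sketch only; the path
/usr/bin/qemu-kvm is an assumption that varies across distributions and
builds, and the other device elements are omitted):</p>
<pre>
<devices>
  <emulator>/usr/bin/qemu-kvm</emulator>
</devices>
</pre>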
<p>Apart from those points, the options should be quite similar to Xen HVM
ones.</p><h3><a name="Net1" id="Net1">Networking options for QEmu and KVM (added in 0.2.0)</a></h3><p>The networking support in the QEmu and KVM case is more flexible, and
supports a variety of options:</p><ol><li>Userspace SLIRP stack
<p>Provides a virtual LAN with NAT to the outside world. The virtual
network has DHCP & DNS services and will give the guest VM addresses
starting from <code>10.0.2.15</code>. The default router will be
<code>10.0.2.2</code> and the DNS server will be <code>10.0.2.3</code>.
This networking is the only option for unprivileged users who need their
VMs to have outgoing access. Example configs are:</p>
<pre><interface type='user'/></pre>
<pre>
<interface type='user'>
<mac address="11:22:33:44:55:66"/>
</interface>
</pre>
</li>
<li>Virtual network
<p>Provides a virtual network using a bridge device in the host.
Depending on the virtual network configuration, the network may be
totally isolated, NAT'ing to an explicit network device, or NAT'ing to
the default route. DHCP and DNS are provided on the virtual network in
all cases and the IP range can be determined by examining the virtual
network config with '<code>virsh net-dumpxml <network
name></code>'. There is one virtual network called 'default' set up out
of the box, which does NAT'ing to the default route and has an IP range of
<code>192.168.122.0/255.255.255.0</code>. Each guest will have an
associated tun device created with a name of vnetN, which can also be
overridden with the <target> element. Example configs are:</p>
<pre><interface type='network'>
<source network='default'/>
</interface>
<interface type='network'>
<source network='default'/>
<target dev='vnet7'/>
<mac address="11:22:33:44:55:66"/>
</interface>
</pre>
</li>
<li>Bridge to LAN
<p>Provides a bridge from the VM directly onto the LAN. This assumes
there is a bridge device on the host which has one or more of the host's
physical NICs enslaved. The guest VM will have an associated tun device
created with a name of vnetN, which can also be overridden with the
<target> element. The tun device will be enslaved to the bridge.
The IP range / network configuration is whatever is used on the LAN. This
provides the guest VM full incoming & outgoing net access just like a
physical machine. Examples include:</p>
<pre><interface type='bridge'>
<source dev='br0'/>
</interface>
<interface type='bridge'>
<source dev='br0'/>
<target dev='vnet7'/>
<mac address="11:22:33:44:55:66"/>
</interface></pre>
</li>
<li>Generic connection to LAN
<p>Provides a means for the administrator to execute an arbitrary script
to connect the guest's network to the LAN. The guest will have a tun
device created with a name of vnetN, which can also be overridden with the
<target> element. After creating the tun device a shell script will
be run which is expected to do whatever host network integration is
required. By default this script is called /etc/qemu-ifup but can be
overridden.</p>
<pre><interface type='ethernet'/>
<interface type='ethernet'>
<target dev='vnet7'/>
<script path='/etc/qemu-ifup-mynet'/>
</interface></pre>
</li>
<li>Multicast tunnel
<p>A multicast group is set up to represent a virtual network. Any VMs
whose network devices are in the same multicast group can talk to each
other even across hosts. This mode is also available to unprivileged
users. There is no default DNS or DHCP support and no outgoing network
access. To provide outgoing network access, one of the VMs should have a
2nd NIC which is connected to one of the first 4 network types and does the
appropriate routing. The multicast protocol is compatible with that used
by User Mode Linux guests too. The source address used must be from the
multicast address block.</p>
<pre><interface type='mcast'>
<source address='230.0.0.1' port='5558'/>
</interface></pre>
</li>
<li>TCP tunnel
<p>A TCP client/server architecture provides a virtual network. One VM
provides the server end of the network; all other VMs are configured as
clients. All network traffic is routed between the VMs via the server.
This mode is also available to unprivileged users. There is no default
DNS or DHCP support and no outgoing network access. To provide outgoing
network access, one of the VMs should have a 2nd NIC which is connected
to one of the first 4 network types and does the appropriate routing.</p>
<p>Example server config:</p>
<pre><interface type='server'>
<source address='192.168.0.1' port='5558'/>
</interface></pre>
<p>Example client config:</p>
<pre><interface type='client'>
<source address='192.168.0.1' port='5558'/>
</interface></pre>
</li>
</ol><p>Note that options 2, 3 and 4 are also supported by Xen VMs, so it is
possible to use these configs to have networking with both Xen &
QEMU/KVM guests connected to each other.</p><h3><a name="QEmu1" id="QEmu1">QEmu domain (added in 0.2.0)</a></h3><p>Libvirt support for KVM and QEmu shares the same code base with only minor
changes. The configuration is as a result nearly identical; the only changes
are related to QEmu's ability to emulate <a href="http://www.qemu.org/status.html">various CPU types and hardware
platforms</a>, and to kqemu support (QEmu's own kernel accelerator, used when the
emulated CPU is i686 and so is the target machine):</p><pre><domain <span style="color: #FF0000; background-color: #FFFFFF">type='qemu'</span>>
</domain></pre><p>The differences here are:</p><ul><li>the value of type on the top-level domain element: 'qemu', or 'kqemu' if asking
for <a href="http://www.qemu.org/kqemu-tech.html">kernel assisted
acceleration</a></li>
<li>the os type block defines the architecture to be emulated, and
optionally the machine type; see the discovery API below and the sketch
after this list</li>
<li>the emulator string must point to the right emulator for that
architecture</li>
</ul>
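<p>As a sketch, a QEmu guest emulating a 64 bit PC could combine these two
elements as follows (the path /usr/bin/qemu-system-x86_64 is an assumption
that depends on the installation; other os and devices children are
omitted):</p>
<pre>
<os>
  <type arch='x86_64' machine='pc'>hvm</type>
</os>
<devices>
  <emulator>/usr/bin/qemu-system-x86_64</emulator>
</devices>
</pre>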
<h3><a name="Capa1" id="Capa1">Discovering virtualization capabilities (added in 0.2.1)</a></h3><p>As new virtualization engine support gets added to libvirt, and to handle
cases like QEmu supporting a variety of emulations, a query interface has
been added in 0.2.1 that allows listing the set of supported virtualization
capabilities on the host:</p><pre> char * virConnectGetCapabilities (virConnectPtr conn);</pre><p>The value returned is an XML document listing the virtualization
capabilities of the host and virtualization engine to which
<code>@conn</code> is connected. One can test it with the <code>virsh</code>
command line tool's '<code>capabilities</code>' command, which dumps the XML
associated with the current connection. For example, in the case of a 64 bit
machine with hardware virtualization capabilities enabled in the chip and
BIOS, you will see:</p>
</capabilities></pre><p>The first block (in red) indicates the host hardware capabilities; currently
it is limited to the CPU properties, but other information may be available.
It shows the CPU architecture and the features of the chip (the feature
block is similar to what you will find in a Xen fully virtualized domain
description).</p><p>The second block (in blue) indicates the paravirtualization support of the
Xen engine; you will see an os_type of xen, indicating a paravirtual
kernel, then architecture information and potential features.</p><p>The third block (in green) gives similar information, but for running a
32 bit OS fully virtualized with Xen using the hvm support.</p><p>This section is likely to be updated and augmented in the future; see <a href="https://www.redhat.com/archives/libvir-list/2007-March/msg00215.html">the
discussion</a> on the mailing-list which led to the capabilities format.</p>
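<p>As an illustration of this API, here is a minimal sketch in C of dumping
the capabilities of the hypervisor behind a connection; error handling is
kept to a minimum:</p>
<pre>
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void) {
    /* connect read-only to the default hypervisor */
    virConnectPtr conn = virConnectOpenReadOnly(NULL);
    if (conn == NULL)
        return 1;

    /* same XML as printed by 'virsh capabilities' */
    char *caps = virConnectGetCapabilities(conn);
    if (caps != NULL) {
        printf("%s\n", caps);
        free(caps);   /* the caller owns the returned string */
    }
    virConnectClose(conn);
    return 0;
}
</pre>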