<html>
<body>
<h1>Domain XML format</h1>
<ul id="toc"></ul>
<p>
This section describes the XML format used to represent domains. There are
variations on the format based on the kind of domains run and the options
used to launch them. For hypervisor specific details consult the
<a href="drivers.html">driver docs</a>.
</p>
<h2><a name="elements">Element and attribute overview</a></h2>
<p>
The root element required for all virtual machines is
named <code>domain</code>. It has two attributes: the
<code>type</code> specifies the hypervisor used for running
the domain. The allowed values are driver specific, but
include "xen", "kvm", "qemu", "lxc" and "kqemu". The
second attribute is <code>id</code>, which is a unique
integer identifier for the running guest machine. Inactive
machines have no id value.
</p>
<h3><a name="elementsMetadata">General metadata</a></h3>
<pre>
&lt;domain type='xen' id='3'&gt;
&lt;name&gt;fv0&lt;/name&gt;
&lt;uuid&gt;4dea22b31d52d8f32516782e98ab3fa0&lt;/uuid&gt;
...</pre>
<dl>
<dt><code>name</code></dt>
<dd>The content of the <code>name</code> element provides
a short name for the virtual machine. This name should
consist only of alpha-numeric characters and is required
to be unique within the scope of a single host. It is
often used to form the filename for storing the persistent
configuration file. <span class="since">Since 0.0.1</span></dd>
<dt><code>uuid</code></dt>
<dd>The content of the <code>uuid</code> element provides
a globally unique identifier for the virtual machine.
The format must be RFC 4122 compliant, e.g. <code>3e3fce45-4f53-4fa7-bb32-11f34168b82b</code>.
If omitted when defining/creating a new machine, a random
UUID is generated. <span class="since">Since 0.0.1</span></dd>
</dl>
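<p>
For example, a suitable random UUID can be generated on most hosts
with the <code>uuidgen</code> command:
</p>
<pre>
uuidgen
3e3fce45-4f53-4fa7-bb32-11f34168b82b
</pre>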
<h3><a name="elementsOS">Operating system booting</a></h3>
<p>
There are a number of different ways to boot virtual machines
each with their own pros and cons.
</p>
<h4><a name="elementsOSBIOS">BIOS bootloader</a></h4>
<p>
Booting via the BIOS is available for hypervisors supporting
full virtualization. In this case the BIOS has a boot order
priority (floppy, harddisk, cdrom, network) determining where
to obtain/find the boot image.
</p>
<pre>
...
&lt;os&gt;
&lt;type&gt;hvm&lt;/type&gt;
&lt;loader&gt;/usr/lib/xen/boot/hvmloader&lt;/loader&gt;
&lt;boot dev='hd'/&gt;
&lt;/os&gt;
...</pre>
<dl>
<dt><code>type</code></dt>
<dd>The content of the <code>type</code> element specifies the
type of operating system to be booted in the virtual machine.
<code>hvm</code> indicates that the OS is one designed to run
on bare metal, so requires full virtualization. <code>linux</code>
(badly named!) refers to an OS that supports the Xen 3 hypervisor
guest ABI. There are also two optional attributes, <code>arch</code>
specifying the CPU architecture to virtualize, and <code>machine</code>
referring to the machine type. The <a href="formatcaps.html">Capabilities XML</a>
provides details on allowed values for these. <span class="since">Since 0.0.1</span></dd>
<dt><code>loader</code></dt>
<dd>The optional <code>loader</code> tag refers to a firmware blob
used to assist the domain creation process. At this time, it is
only needed by Xen fully virtualized domains. <span class="since">Since 0.1.0</span></dd>
<dt><code>boot</code></dt>
<dd>The <code>dev</code> attribute takes one of the values "fd", "hd",
"cdrom" or "network" and is used to specify the next boot device
to consider. The <code>boot</code> element can be repeated multiple
times to set up a priority list of boot devices to try in turn
(see the example below).
<span class="since">Since 0.1.3</span>
</dd>
</dl>
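<p>
For example, a minimal sketch of a priority list that tries the
CD-ROM first and falls back to the first hard disk:
</p>
<pre>
...
&lt;os&gt;
&lt;type&gt;hvm&lt;/type&gt;
&lt;boot dev='cdrom'/&gt;
&lt;boot dev='hd'/&gt;
&lt;/os&gt;
...</pre>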
<h4><a name="elementsOSBootloader">Host bootloader</a></h4>
<p>
Hypervisors employing paravirtualization do not usually emulate
a BIOS; instead the host is responsible for kicking off the
operating system boot. This may use a pseudo-bootloader in the
host to provide an interface to choose a kernel for the guest.
An example is <code>pygrub</code> with Xen.
</p>
<pre>
...
&lt;bootloader&gt;/usr/bin/pygrub&lt;/bootloader&gt;
&lt;bootloader_args&gt;--append single&lt;/bootloader_args&gt;
...</pre>
<dl>
<dt><code>bootloader</code></dt>
<dd>The content of the <code>bootloader</code> element provides
a fully qualified path to the bootloader executable in the
host OS. This bootloader will be run to choose which kernel
to boot. The required output of the bootloader is dependent
on the hypervisor in use. <span class="since">Since 0.1.0</span></dd>
<dt><code>bootloader_args</code></dt>
<dd>The optional <code>bootloader_args</code> element allows
command line arguments to be passed to the bootloader.
<span class="since">Since 0.2.3</span>
</dd>
</dl>
<h4><a name="elementsOSKernel">Direct kernel boot</a></h4>
<p>
When installing a new guest OS it is often useful to boot directly
from a kernel and initrd stored in the host OS, allowing command
line arguments to be passed directly to the installer. This capability
is usually available for both paravirtualized and fully virtualized guests.
</p>
<pre>
...
&lt;os&gt;
&lt;type&gt;hvm&lt;/type&gt;
&lt;loader&gt;/usr/lib/xen/boot/hvmloader&lt;/loader&gt;
&lt;kernel&gt;/root/f8-i386-vmlinuz&lt;/kernel&gt;
&lt;initrd&gt;/root/f8-i386-initrd&lt;/initrd&gt;
&lt;cmdline&gt;console=ttyS0 ks=http://example.com/f8-i386/os/&lt;/cmdline&gt;
&lt;/os&gt;
...</pre>
<dl>
<dt><code>type</code></dt>
<dd>This element has the same semantics as described earlier in the
<a href="#elementsOSBIOS">BIOS boot section</a>.</dd>
<dt><code>loader</code></dt>
<dd>This element has the same semantics as described earlier in the
<a href="#elementsOSBIOS">BIOS boot section</a>.</dd>
<dt><code>kernel</code></dt>
<dd>The contents of this element specify the fully-qualified path
to the kernel image in the host OS.</dd>
<dt><code>initrd</code></dt>
<dd>The contents of this element specify the fully-qualified path
to the (optional) ramdisk image in the host OS.</dd>
<dt><code>cmdline</code></dt>
<dd>The contents of this element specify arguments to be passed to
the kernel (or installer) at boot time. This is often used to
specify an alternate primary console (e.g. a serial port), or the
installation media source / kickstart file.</dd>
</dl>
<h3><a name="elementsResources">Basic resources</a></h3>
<pre>
...
&lt;memory&gt;524288&lt;/memory&gt;
&lt;currentMemory&gt;524288&lt;/currentMemory&gt;
&lt;memoryBacking&gt;
&lt;hugepages/&gt;
&lt;/memoryBacking&gt;
&lt;vcpu&gt;1&lt;/vcpu&gt;
...</pre>
<dl>
<dt><code>memory</code></dt>
<dd>The maximum allocation of memory for the guest at boot time.
The units for this value are kilobytes (i.e. blocks of 1024 bytes).</dd>
<dt><code>currentMemory</code></dt>
<dd>The actual allocation of memory for the guest. This value may
be less than the maximum allocation, to allow for ballooning
up the guest's memory on the fly. If this is omitted, it defaults
to the same value as the <code>memory</code> element.</dd>
<dt><code>memoryBacking</code></dt>
<dd>The optional <code>memoryBacking</code> element may have a
<code>hugepages</code> element set within it. This tells the
hypervisor that the guest should have its memory allocated using
hugepages instead of the normal native page size.</dd>
<dt><code>vcpu</code></dt>
<dd>The content of this element defines the number of virtual
CPUs allocated for the guest OS.</dd>
</dl>
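<p>
For instance, the value 524288 used in the example above corresponds
to 524288 / 1024 = 512 MiB of guest memory.
</p>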
<h3><a name="elementsLifecycle">Lifecycle control</a></h3>
<p>
It is sometimes necessary to override the default actions taken
when a guest OS triggers a lifecycle operation. The following
collections of elements allow the actions to be specified. A
common use case is to force a reboot to be treated as a poweroff
when doing the initial OS installation. This allows the VM to be
re-configured for the first post-install bootup.
</p>
<pre>
...
&lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
&lt;on_reboot&gt;restart&lt;/on_reboot&gt;
&lt;on_crash&gt;restart&lt;/on_crash&gt;
...</pre>
<dl>
<dt><code>on_poweroff</code></dt>
<dd>The content of this element specifies the action to take when
the guest requests a poweroff.</dd>
<dt><code>on_reboot</code></dt>
<dd>The content of this element specifies the action to take when
the guest requests a reboot.</dd>
<dt><code>on_crash</code></dt>
<dd>The content of this element specifies the action to take when
the guest crashes.</dd>
</dl>
<p>
Each of these states allows for the same four possible actions.
</p>
<dl>
<dt><code>destroy</code></dt>
<dd>The domain will be terminated completely and all resources
released</dd>
<dt><code>restart</code></dt>
<dd>The domain will be terminated, and then restarted with
the same configuration</dd>
<dt><code>preserve</code></dt>
<dd>The domain will be terminated, and its resources preserved
to allow analysis.</dd>
<dt><code>rename-restart</code></dt>
<dd>The domain will be terminated, and then restarted with
a new name</dd>
</dl>
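<p>
For the install-time use case described above, a reboot requested
at the end of the installer can be turned into a full stop by
mapping every event to <code>destroy</code>:
</p>
<pre>
...
&lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
&lt;on_reboot&gt;destroy&lt;/on_reboot&gt;
&lt;on_crash&gt;destroy&lt;/on_crash&gt;
...</pre>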
<h3><a name="elementsFeatures">Hypervisor features</a></h3>
<p>
Hypervisors may allow certain CPU / machine features to be
toggled on/off.
</p>
<pre>
...
&lt;features&gt;
&lt;pae/&gt;
&lt;acpi/&gt;
&lt;apic/&gt;
&lt;/features&gt;
...</pre>
<p>
All features are listed within the <code>features</code>
element; omitting a togglable feature tag turns it off.
The available features can be found by asking
for the <a href="formatcaps.html">capabilities XML</a>,
but a common set for fully virtualized domains are:
</p>
<dl>
<dt><code>pae</code></dt>
<dd>Physical address extension mode allows 32-bit guests
to address more than 4 GB of memory.</dd>
<dt><code>acpi</code></dt>
<dd>ACPI is useful for power management, for example, with
KVM guests it is required for graceful shutdown to work.
</dd>
</dl>
<h3><a name="elementsTime">Time keeping</a></h3>
<p>
The guest clock is typically initialized from the host clock.
Most operating systems expect the hardware clock to be kept
in UTC, and this is the default. Windows, however, expects
it to be in so-called 'localtime'.
</p>
<pre>
...
&lt;clock offset="localtime"/&gt;
...</pre>
<dl>
<dt><code>clock</code></dt>
<dd>The <code>offset</code> attribute takes either "utc" or
"localtime" to specify how the guest clock is initialized
in relation to the host OS.
</dd>
</dl>
<h3><a name="elementsDevices">Devices</a></h3>
<p>
The final set of XML elements are all used to describe devices
provided to the guest domain. All devices occur as children
of the main <code>devices</code> element.
<span class="since">Since 0.1.3</span>
</p>
<pre>
...
&lt;devices&gt;
&lt;emulator&gt;/usr/lib/xen/bin/qemu-dm&lt;/emulator&gt;
...</pre>
<dl>
<dt><code>emulator</code></dt>
<dd>
The contents of the <code>emulator</code> element specify
the fully qualified path to the device model emulator binary.
The <a href="formatcaps.html">capabilities XML</a> specifies
the recommended default emulator to use for each particular
domain type / architecture combination.
</dd>
</dl>
<h4><a name="elementsDisks">Hard drives, floppy disks, CDROMs</a></h4>
<p>
Any device that looks like a disk, be it a floppy, harddisk,
cdrom, or paravirtualized driver is specified via the <code>disk</code>
element.
</p>
<pre>
...
&lt;disk type='file'&gt;
&lt;driver name="tap" type="aio"/&gt;
&lt;source file='/var/lib/xen/images/fv0'/&gt;
&lt;target dev='hda' bus='ide'/&gt;
&lt;encryption type='...'&gt;
...
&lt;/encryption&gt;
&lt;/disk&gt;
...</pre>
<dl>
<dt><code>disk</code></dt>
<dd>The <code>disk</code> element is the main container for describing
disks. The <code>type</code> attribute is either "file" or "block"
and refers to the underlying source for the disk. The optional
<code>device</code> attribute indicates how the disk is to be exposed
to the guest OS. Possible values for this attribute are "floppy", "disk"
and "cdrom", defaulting to "disk".
<span class="since">Since 0.0.3; "device" attribute since 0.1.4</span></dd>
<dt><code>source</code></dt>
<dd>If the disk <code>type</code> is "file", then the <code>file</code> attribute
specifies the fully-qualified path to the file holding the disk. If the disk
<code>type</code> is "block", then the <code>dev</code> attribute specifies
the path to the host device to serve as the disk. <span class="since">Since 0.0.3</span></dd>
<dt><code>target</code></dt>
<dd>The <code>target</code> element controls the bus / device under which the
disk is exposed to the guest OS. The <code>dev</code> attribute indicates
the "logical" device name. The actual device name specified is not guaranteed to map to
the device name in the guest OS. Treat it as a device ordering hint.
The optional <code>bus</code> attribute specifies the type of disk device
to emulate; possible values are driver specific, with typical values being
"ide", "scsi", "virtio", "xen" or "usb". If omitted, the bus type is
inferred from the style of the device name. E.g. a device named 'sda'
will typically be exported using a SCSI bus.
<span class="since">Since 0.0.3; <code>bus</code> attribute since 0.4.3;
"usb" attribute value since after 0.4.4</span></dd>
<dt><code>driver</code></dt>
<dd>If the hypervisor supports multiple backend drivers, then the optional
<code>driver</code> element allows them to be selected. The <code>name</code>
attribute is the primary backend driver name, while the optional <code>type</code>
attribute provides the sub-type. <span class="since">Since 0.1.8</span>
</dd>
<dt><code>encryption</code></dt>
<dd>If present, specifies how the volume is encrypted. See
the <a href="formatstorageencryption.html">Storage Encryption</a> page
for more information.
</dd>
</dl>
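<p>
As an example of the <code>device</code> attribute, a CDROM backed
by an ISO image on the host might look like this (the ISO path is
purely illustrative):
</p>
<pre>
...
&lt;disk type='file' device='cdrom'&gt;
&lt;source file='/var/lib/xen/images/boot.iso'/&gt;
&lt;target dev='hdc' bus='ide'/&gt;
&lt;/disk&gt;
...</pre>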
<h4><a name="elementsUSB">USB and PCI devices</a></h4>
<p>
USB and PCI devices attached to the host can be passed through to the guest using
the <code>hostdev</code> element. <span class="since">since after
0.4.4 for USB and 0.6.0 for PCI (KVM only)</span>:
</p>
<pre>
...
&lt;hostdev mode='subsystem' type='usb'&gt;
&lt;source&gt;
&lt;vendor id='0x1234'/&gt;
&lt;product id='0xbeef'/&gt;
&lt;/source&gt;
&lt;/hostdev&gt;
...</pre>
<p>or:</p>
<pre>
...
&lt;hostdev mode='subsystem' type='pci'&gt;
&lt;source&gt;
&lt;address bus='0x06' slot='0x02' function='0x0'/&gt;
&lt;/source&gt;
&lt;/hostdev&gt;
...</pre>
<dl>
<dt><code>hostdev</code></dt>
<dd>The <code>hostdev</code> element is the main container for describing
host devices. For device passthrough, <code>mode</code> is always
"subsystem" and <code>type</code> is "usb" for a USB device and "pci"
for a PCI device.</dd>
<dt><code>source</code></dt>
<dd>The source element describes the device as seen from the host.
The USB device can either be addressed by vendor / product ID using the
<code>vendor</code> and <code>product</code> elements, or by the device's
address on the host using the <code>address</code> element.
PCI devices, on the other hand, can only be described by their
<code>address</code>.</dd>
<dt><code>vendor</code>, <code>product</code></dt>
<dd>The <code>vendor</code> and <code>product</code> elements each have an
<code>id</code> attribute that specifies the USB vendor and product ID.
The IDs can be given in decimal, hexadecimal (starting with 0x) or
octal (starting with 0) form.</dd>
<dt><code>address</code></dt>
<dd>The <code>address</code> element for USB devices has a
<code>bus</code> and <code>device</code> attribute to specify the
USB bus and device number the device appears at on the host.
The values of these attributes can be given in decimal, hexadecimal
(starting with 0x) or octal (starting with 0) form.
For PCI devices the element carries 3 attributes that designate
the device address, as reported by <code>lspci</code> or by
<code>virsh nodedev-list</code> (see the example below). The
<code>bus</code> attribute allows the hexadecimal values 0 to ff, the
<code>slot</code> attribute allows the hexadecimal values 0 to 1f, and
the <code>function</code> attribute allows the hexadecimal values 0 to
7. There is also an optional <code>domain</code> attribute for the
PCI domain, with hexadecimal values 0 to ffff, but it is currently
not used by qemu.</dd>
</dl>
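<p>
For example, a PCI device that <code>lspci</code> lists on a line
beginning with <code>06:02.0</code> has bus 0x06, slot 0x02 and
function 0x0, matching the <code>hostdev</code> example above
(output abbreviated; the device description will differ):
</p>
<pre>
lspci
...
06:02.0 Ethernet controller: ...
...
</pre>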
<h4><a name="elementsNICS">Network interfaces</a></h4>
<pre>
...
&lt;interface type='bridge'&gt;
&lt;source bridge='xenbr0'/&gt;
&lt;mac address='00:16:3e:5d:c7:9e'/&gt;
&lt;script path='vif-bridge'/&gt;
&lt;/interface&gt;
...</pre>
<h5><a name="elementsNICSVirtual">Virtual network</a></h5>
<p>
<strong><em>
This is the recommended config for general guest connectivity on
hosts with dynamic / wireless networking configs
</em></strong>
</p>
<p>
Provides a virtual network using a bridge device in the host.
Depending on the virtual network configuration, the network may be
totally isolated, NAT'ing to an explicit network device, or NAT'ing to
the default route. DHCP and DNS are provided on the virtual network in
all cases and the IP range can be determined by examining the virtual
network config with '<code>virsh net-dumpxml [networkname]</code>'.
There is one virtual network called 'default' set up out
of the box which does NAT'ing to the default route and has an IP range of
<code>192.168.122.0/255.255.255.0</code>. Each guest will have an
associated tun device created with a name of vnetN, which can also be
overridden with the &lt;target&gt; element (see
<a href="#elementsNICSTargetOverride">overriding the target element</a>).
</p>
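<p>
For instance, inspecting the 'default' network will produce output
along these lines (abbreviated; the exact content may differ):
</p>
<pre>
virsh net-dumpxml default
&lt;network&gt;
&lt;name&gt;default&lt;/name&gt;
...
&lt;ip address='192.168.122.1' netmask='255.255.255.0'&gt;
...
&lt;/ip&gt;
&lt;/network&gt;
</pre>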
<pre>
...
&lt;interface type='network'&gt;
&lt;source network='default'/&gt;
&lt;/interface&gt;
...
&lt;interface type='network'&gt;
&lt;source network='default'/&gt;
&lt;target dev='vnet7'/&gt;
&lt;mac address="11:22:33:44:55:66"/&gt;
&lt;/interface&gt;
...</pre>
<h5><a name="elementsNICSBridge">Bridge to LAN</a></h5>
<p>
<strong><em>
This is the recommended config for general guest connectivity on
hosts with static wired networking configs
</em></strong>
</p>
<p>
Provides a bridge from the VM directly onto the LAN. This assumes
there is a bridge device on the host which has one or more of the host's
physical NICs enslaved. The guest VM will have an associated tun device
created with a name of vnetN, which can also be overridden with the
&lt;target&gt; element (see
<a href="#elementsNICSTargetOverride">overriding the target element</a>).
The tun device will be enslaved to the bridge. The IP range / network
configuration is whatever is used on the LAN. This provides the guest VM
full incoming &amp; outgoing net access just like a physical machine.
</p>
<pre>
...
&lt;interface type='bridge'&gt;
&lt;source bridge='br0'/&gt;
&lt;/interface&gt;
&lt;interface type='bridge'&gt;
&lt;source bridge='br0'/&gt;
&lt;target dev='vnet7'/&gt;
&lt;mac address="11:22:33:44:55:66"/&gt;
&lt;/interface&gt;
...</pre>
<h5><a name="elementsNICSSlirp">Userspace SLIRP stack</a></h5>
<p>
Provides a virtual LAN with NAT to the outside world. The virtual
network has DHCP &amp; DNS services and will give the guest VM addresses
starting from <code>10.0.2.15</code>. The default router will be
<code>10.0.2.2</code> and the DNS server will be <code>10.0.2.3</code>.
This networking is the only option for unprivileged users who need their
VMs to have outgoing access.
</p>
<pre>
...
&lt;interface type='user'/&gt;
...
&lt;interface type='user'&gt;
&lt;mac address="11:22:33:44:55:66"/&gt;
&lt;/interface&gt;
...</pre>
<h5><a name="elementsNICSEthernet">Generic ethernet connection</a></h5>
<p>
Provides a means for the administrator to execute an arbitrary script
to connect the guest's network to the LAN. The guest will have a tun
device created with a name of vnetN, which can also be overridden with the
&lt;target&gt; element. After creating the tun device a shell script will
be run which is expected to do whatever host network integration is
required. By default this script is called /etc/qemu-ifup but can be
overridden.
</p>
<pre>
...
&lt;interface type='ethernet'/&gt;
...
&lt;interface type='ethernet'&gt;
&lt;target dev='vnet7'/&gt;
&lt;script path='/etc/qemu-ifup-mynet'/&gt;
&lt;/interface&gt;
...</pre>
<h5><a name="elementsNICSMulticast">Multicast tunnel</a></h5>
<p>
A multicast group is set up to represent a virtual network. Any VMs
whose network devices are in the same multicast group can talk to each
other, even across hosts. This mode is also available to unprivileged
users. There is no default DNS or DHCP support and no outgoing network
access. To provide outgoing network access, one of the VMs should have a
2nd NIC which is connected to one of the first 4 network types and do the
appropriate routing. The multicast protocol is compatible with that used
by user mode linux guests too. The source address used must be from the
multicast address block.
</p>
<pre>
...
&lt;interface type='mcast'&gt;
&lt;source address='230.0.0.1' port='5558'/&gt;
&lt;/interface&gt;
...</pre>
<h5><a name="elementsNICSTCP">TCP tunnel</a></h5>
<p>
A TCP client/server architecture provides a virtual network. One VM
provides the server end of the network, all other VMs are configured as
clients. All network traffic is routed between the VMs via the server.
This mode is also available to unprivileged users. There is no default
DNS or DHCP support and no outgoing network access. To provide outgoing
network access, one of the VMs should have a 2nd NIC which is connected
to one of the first 4 network types and do the appropriate routing.</p>
<pre>
...
&lt;interface type='server'&gt;
&lt;source address='192.168.0.1' port='5558'/&gt;
&lt;/interface&gt;
...
&lt;interface type='client'&gt;
&lt;source address='192.168.0.1' port='5558'/&gt;
&lt;/interface&gt;
...</pre>
<h5><a name="elementsNICSModel">Setting the NIC model</a></h5>
<pre>
...
&lt;interface type='network'&gt;
&lt;source network='default'/&gt;
&lt;target dev='vnet1'/&gt;
<b>&lt;model type='ne2k_pci'/&gt;</b>
&lt;/interface&gt;
...</pre>
<p>
For hypervisors which support this, you can set the model of the
emulated network interface card.
</p>
<p>
The values for <code>type</code> aren't defined specifically by
libvirt, but by what the underlying hypervisor supports (if
any). For QEMU and KVM you can get a list of supported models
with these commands:
</p>
<pre>
qemu -net nic,model=? /dev/null
qemu-kvm -net nic,model=? /dev/null
</pre>
<p>
Typical values for QEMU and KVM include:
ne2k_isa i82551 i82557b i82559er ne2k_pci pcnet rtl8139 e1000 virtio
</p>
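<p>
For example, a KVM guest attached to the default virtual network
using the paravirtualized virtio model:
</p>
<pre>
...
&lt;interface type='network'&gt;
&lt;source network='default'/&gt;
&lt;model type='virtio'/&gt;
&lt;/interface&gt;
...</pre>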
<h5><a name="elementsNICSTargetOverride">Overriding the target element</a></h5>
<pre>
...
&lt;interface type='network'&gt;
&lt;source network='default'/&gt;
<b>&lt;target dev='vnet1'/&gt;</b>
&lt;/interface&gt;
...</pre>
<p>
If no target is specified, certain hypervisors will automatically
generate a name for the created tun device. This name can be manually
specified, however the name <i>must not start with either 'vnet' or
'vif'</i>, which are prefixes reserved by libvirt and certain
hypervisors. Manually specified targets using these prefixes will be
ignored.
</p>
<h4><a name="elementsInput">Input devices</a></h4>
<p>
Input devices allow interaction with the graphical framebuffer in the guest
virtual machine. When enabling the framebuffer, an input device is automatically
provided. It may be possible to add additional devices explicitly, for example,
to provide a graphics tablet for absolute cursor movement.
</p>
<pre>
...
&lt;input type='mouse' bus='usb'/&gt;
...</pre>
<dl>
<dt><code>input</code></dt>
<dd>The <code>input</code> element has one mandatory attribute, the <code>type</code>
whose value can be either 'mouse' or 'tablet'. The latter provides absolute
cursor movement, while the former uses relative movement. The optional
<code>bus</code> attribute can be used to refine the exact device type.
It takes values "xen" (paravirtualized), "ps2" and "usb".</dd>
</dl>
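<p>
For example, to add a USB tablet for absolute cursor movement
alongside the default mouse:
</p>
<pre>
...
&lt;input type='tablet' bus='usb'/&gt;
...</pre>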
<h4><a name="elementsGraphics">Graphical framebuffers</a></h4>
<p>
A graphics device allows for graphical interaction with the
guest OS. A guest will typically have either a framebuffer
or a text console configured to allow interaction with the
admin.
</p>
<pre>
...
&lt;graphics type='sdl' display=':0.0'/&gt;
&lt;graphics type='vnc' port='5904'/&gt;
&lt;graphics type='rdp' autoport='yes' multiUser='yes' /&gt;
&lt;graphics type='desktop' fullscreen='yes'/&gt;
...</pre>
<dl>
<dt><code>graphics</code></dt>
<dd>The <code>graphics</code> element has a mandatory <code>type</code>
attribute which takes the value "sdl", "vnc", "rdp" or "desktop":
<dl>
<dt><code>"sdl"</code></dt>
<dd>
This displays a window on the host desktop. It can take 3 optional attributes:
a <code>display</code> attribute for the display to use, an <code>xauth</code>
attribute for the authentication identifier, and an optional <code>fullscreen</code>
attribute accepting values 'yes' or 'no'.
</dd>
<dt><code>"vnc"</code></dt>
<dd>
Starts a VNC server. The <code>port</code> attribute specifies the TCP
port number (with -1 as legacy syntax indicating that it should be
auto-allocated). The <code>autoport</code> attribute is the new
preferred syntax for indicating autoallocation of the TCP port to use.
The <code>listen</code> attribute is an IP address for the server to
listen on. The <code>passwd</code> attribute provides a VNC password
in clear text. The <code>keymap</code> attribute specifies the keymap
to use.
</dd>
<dt><code>"rdp"</code></dt>
<dd>
Starts an RDP server. The <code>port</code> attribute
specifies the TCP port number (with -1 as legacy syntax indicating
that it should be auto-allocated). The <code>autoport</code> attribute
is the new preferred syntax for indicating autoallocation of the TCP
port to use. The <code>multiUser</code> attribute is a boolean deciding
whether multiple simultaneous connections to the VM are permitted.
The <code>replaceUser</code> attribute decides whether the existing
connection must be dropped and a new connection must be established
by the VRDP server, when a new client connects in single connection mode.
</dd>
<dt><code>"desktop"</code></dt>
<dd>
This value is reserved for VirtualBox domains for the moment. It displays
a window on the host desktop, similarly to "sdl", but using the VirtualBox
viewer. Just like "sdl", it accepts the optional attributes <code>display</code>
and <code>fullscreen</code>.
</dd>
</dl>
</dd>
</dl>
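<p>
Bringing the VNC attributes together, a minimal sketch of a VNC
server with an auto-allocated port, bound to the loopback address
and protected by a clear text password (the password value here is
purely illustrative):
</p>
<pre>
...
&lt;graphics type='vnc' autoport='yes' listen='127.0.0.1' passwd='XYZ12345'/&gt;
...</pre>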
<h4><a name="elementsVideo">Video devices</a></h4>
<p>
A video device describes the virtual graphics card exposed to the guest.
</p>
<pre>
...
&lt;video type='vga' vram='8192' heads='1'&gt;
&lt;acceleration accel3d='yes' accel2d='yes' /&gt;
&lt;/video&gt;
...</pre>
<dl>
<dt><code>video</code></dt>
<dd>The <code>video</code> element has a mandatory <code>type</code>
attribute which takes the value "vga", "cirrus", "vmvga", "xen" or "vbox".
You can also provide the amount of video memory using <code>vram</code>,
the number of screens with <code>heads</code>, and whether acceleration
should be enabled (if supported) using the <code>accel3d</code> and
<code>accel2d</code> attributes in the <code>acceleration</code> element.</dd>
</dl>
<h4><a name="elementsConsole">Consoles, serial, parallel &amp; channel devices</a></h4>
<p>
A character device provides a way to interact with the virtual machine.
Paravirtualized consoles, serial ports, parallel ports and channels are
all classed as character devices and so represented using the same syntax.
</p>
<pre>
...
&lt;parallel type='pty'&gt;
&lt;source path='/dev/pts/2'/&gt;
&lt;target port='0'/&gt;
&lt;/parallel&gt;
&lt;serial type='pty'&gt;
&lt;source path='/dev/pts/3'/&gt;
&lt;target port='0'/&gt;
&lt;/serial&gt;
&lt;console type='pty'&gt;
&lt;source path='/dev/pts/4'/&gt;
&lt;target port='0'/&gt;
&lt;/console&gt;
&lt;channel type='unix'&gt;
&lt;source mode='bind' path='/tmp/guestfwd'/&gt;
&lt;target type='guestfwd' address='10.0.2.1' port='4600'/&gt;
&lt;/channel&gt;
&lt;/devices&gt;
&lt;/domain&gt;</pre>
<p>
In each of these directives, the top-level element name (parallel, serial,
console, channel) describes how the device is presented to the guest. The
guest interface is configured by the <code>target</code> element.
</p>
<p>
The interface presented to the host is given in the <code>type</code>
attribute of the top-level element. The host interface is
configured by the <code>source</code> element.
</p>
<h5><a name="elementsCharGuestInterface">Guest interface</a></h5>
<p>
A character device presents itself to the guest as one of the following
types.
</p>
<h6><a name="elementCharParallel">Parallel port</a></h6>
<pre>
...
&lt;parallel type='pty'&gt;
&lt;source path='/dev/pts/2'/&gt;
&lt;target port='0'/&gt;
&lt;/parallel&gt;
...</pre>
<p>
<code>target</code> can have a <code>port</code> attribute, which
specifies the port number. Ports are numbered starting from 0. There are
usually 0, 1 or 2 parallel ports.
</p>
<h6><a name="elementCharSerial">Serial port</a></h6>
<pre>
...
&lt;serial type='pty'&gt;
&lt;source path='/dev/pts/3'/&gt;
&lt;target port='0'/&gt;
&lt;/serial&gt;
...</pre>
<p>
<code>target</code> can have a <code>port</code> attribute, which
specifies the port number. Ports are numbered starting from 0. There are
usually 0, 1 or 2 serial ports.
</p>
<h6><a name="elementCharConsole">Console</a></h6>
<p>
This represents the primary console. This can be the paravirtualized
console with Xen guests, or a duplicate of the primary serial port for fully
virtualized guests without a paravirtualized console.
</p>
<pre>
...
&lt;console type='pty'&gt;
&lt;source path='/dev/pts/4'/&gt;
&lt;target port='0'/&gt;
&lt;/console&gt;
...</pre>
<p>
If the console is presented as a serial port, the <code>target</code>
element has the same attributes as for a serial port. There is usually
only 1 console.
</p>
<h6><a name="elementCharChannel">Channel</a></h6>
<p>
This represents a private communication channel between the host and the
guest.
</p>
<pre>
...
&lt;channel type='unix'&gt;
&lt;source mode='bind' path='/tmp/guestfwd'/&gt;
&lt;target type='guestfwd' address='10.0.2.1' port='4600'/&gt;
&lt;/channel&gt;
...</pre>
<p>
This can be implemented in a variety of ways. The specific type of
channel is given in the <code>type</code> attribute of the
<code>target</code> element. Different channel types have different
<code>target</code> attributes.
</p>
<dl>
<dt><code>guestfwd</code></dt>
<dd>TCP traffic sent by the guest to a given IP address and port is
forwarded to the channel device on the host. The <code>target</code>
element must have <code>address</code> and <code>port</code> attributes.
<span class="since">Since 0.7.3</span></dd>
</dl>
<h5><a name="elementsCharHostInterface">Host interface</a></h5>
<p>
A character device presents itself to the host as one of the following
types.
</p>
<h6><a name="elementsCharSTDIO">Domain logfile</a></h6>
<p>
This disables all input on the character device, and sends output
into the virtual machine's logfile.
</p>
<pre>
...
&lt;console type='stdio'&gt;
&lt;target port='1'/&gt;
&lt;/console&gt;
...</pre>
<h6><a name="elementsCharFle">Device logfile</a></h6>
<p>
A file is opened and all data sent to the character
device is written to the file.
</p>
<pre>
...
&lt;serial type="file"&gt;
&lt;source path="/var/log/vm/vm-serial.log"/&gt;
&lt;target port="1"/&gt;
&lt;/serial&gt;
...</pre>
<h6><a name="elementsCharVC">Virtual console</a></h6>
<p>
Connects the character device to the graphical framebuffer in
a virtual console. This is typically accessed via a special
hotkey sequence such as "ctrl+alt+3".
</p>
<pre>
...
&lt;serial type='vc'&gt;
&lt;target port="1"/&gt;
&lt;/serial&gt;
...</pre>
<h6><a name="elementsCharNull">Null device</a></h6>
<p>
Connects the character device to the void. No data is ever
provided to the input. All data written is discarded.
</p>
<pre>
...
&lt;serial type='null'&gt;
&lt;target port="1"/&gt;
&lt;/serial&gt;
...</pre>
<h6><a name="elementsCharPTY">Pseudo TTY</a></h6>
<p>
A Pseudo TTY is allocated using /dev/ptmx. A suitable client
such as 'virsh console' can connect to interact with the
serial port locally.
</p>
<pre>
...
&lt;serial type="pty"&gt;
&lt;source path="/dev/pts/3"/&gt;
&lt;target port="1"/&gt;
&lt;/serial&gt;
...</pre>
<p>
NB: as a special case, if &lt;console type='pty'&gt; is used, then the TTY
path is also duplicated as an attribute tty='/dev/pts/3'
on the top level &lt;console&gt; tag. This provides compatibility
with the existing syntax for &lt;console&gt; tags.
</p>
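<p>
I.e. for the path used in the note above, the running domain's XML
will contain something like:
</p>
<pre>
...
&lt;console type='pty' tty='/dev/pts/3'&gt;
&lt;source path='/dev/pts/3'/&gt;
&lt;target port='0'/&gt;
&lt;/console&gt;
...</pre>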
<h6><a name="elementsCharHost">Host device proxy</a></h6>
<p>
The character device is passed through to the underlying
physical character device. The device types must match,
e.g. the emulated serial port should only be connected to
a host serial port - don't connect a serial port to a parallel
port.
</p>
<pre>
...
&lt;serial type="dev"&gt;
&lt;source path="/dev/ttyS0"/&gt;
&lt;target port="1"/&gt;
&lt;/serial&gt;
...</pre>
<h6><a name="elementsCharPipe">Named pipe</a></h6>
<p>
The character device writes output to a named pipe. See pipe(7) for
more info.
</p>
<pre>
...
&lt;serial type="pipe"&gt;
&lt;source path="/tmp/mypipe"/&gt;
&lt;target port="1"/&gt;
&lt;/serial&gt;
...</pre>
<h6><a name="elementsCharTCP">TCP client/server</a></h6>
<p>
The character device acts as a TCP client connecting to a
remote server.
</p>
<pre>
...
&lt;serial type="tcp"&gt;
&lt;source mode="connect" host="0.0.0.0" service="2445"/&gt;
&lt;protocol type="raw"/&gt;
&lt;target port="1"/&gt;
&lt;/serial&gt;
...</pre>
<p>
Or as a TCP server waiting for a client connection.
</p>
<pre>
...
&lt;serial type="tcp"&gt;
&lt;source mode="bind" host="127.0.0.1" service="2445"/&gt;
&lt;protocol type="raw"/&gt;
&lt;target port="1"/&gt;
&lt;/serial&gt;
...</pre>
<p>
Alternatively you can use telnet instead of raw TCP.
</p>
<pre>
...
&lt;serial type="tcp"&gt;
&lt;source mode="connect" host="0.0.0.0" service="2445"/&gt;
&lt;protocol type="telnet"/&gt;
&lt;target port="1"/&gt;
&lt;/serial&gt;
...
&lt;serial type="tcp"&gt;
&lt;source mode="bind" host="127.0.0.1" service="2445"/&gt;
&lt;protocol type="telnet"/&gt;
&lt;target port="1"/&gt;
&lt;/serial&gt;
...</pre>
<h6><a name="elementsCharUDP">UDP network console</a></h6>
<p>
The character device acts as a UDP netconsole service,
sending and receiving packets. This is a lossy service.
</p>
<pre>
...
&lt;serial type="udp"&gt;
&lt;source mode="bind" host="0.0.0.0" service="2445"/&gt;
&lt;source mode="connect" host="0.0.0.0" service="2445"/&gt;
&lt;target port="1"/&gt;
&lt;/serial&gt;
...</pre>
<h6><a name="elementsCharUNIX">UNIX domain socket client/server</a></h6>
<p>
The character device acts as a UNIX domain socket server,
accepting connections from local clients.
</p>
<pre>
...
&lt;serial type="unix"&gt;
&lt;source mode="bind" path="/tmp/foo"/&gt;
&lt;target port="1"/&gt;
&lt;/serial&gt;
...</pre>
<h4><a name="elementsSound">Sound devices</a></h4>
<p>
A virtual sound card can be attached to the guest via the
<code>sound</code> element. <span class="since">Since 0.4.3</span>
</p>
<pre>
...
&lt;sound model='es1370'/&gt;
...</pre>
<dl>
<dt><code>sound</code></dt>
<dd>
The <code>sound</code> element has one mandatory attribute,
<code>model</code>, which specifies what real sound device is emulated.
Valid values are specific to the underlying hypervisor, though typical
choices are 'es1370', 'sb16', and 'ac97'
(<span class="since">'ac97' only since 0.6.0</span>)
</dd>
</dl>
<h4><a name="elementsWatchdog">Watchdog device</a></h4>
<p>
A virtual hardware watchdog device can be added to the guest via
the <code>watchdog</code> element.
<span class="since">Since 0.7.3, QEMU and KVM only</span>
</p>
<p>
The watchdog device requires an additional driver and management
daemon in the guest. Just enabling the watchdog in the libvirt
configuration does not do anything useful on its own.
</p>
<p>
Currently libvirt does not support notification when the
watchdog fires. This feature is planned for a future version of
libvirt.
</p>
<pre>
...
&lt;watchdog model='i6300esb'/&gt;
...</pre>
<pre>
...
&lt;watchdog model='i6300esb' action='poweroff'/&gt;
...</pre>
<dl>
<dt><code>model</code></dt>
<dd>
<p>
The required <code>model</code> attribute specifies what real
watchdog device is emulated. Valid values are specific to the
underlying hypervisor.
</p>
<p>
QEMU and KVM support:
</p>
<ul>
<li> 'i6300esb' &mdash; the recommended device,
emulating a PCI Intel 6300ESB </li>
<li> 'ib700' &mdash; emulating an ISA iBase IB700 </li>
</ul>
</dd>
<dt><code>action</code></dt>
<dd>
<p>
The optional <code>action</code> attribute describes what
action to take when the watchdog expires. Valid values are
specific to the underlying hypervisor.
</p>
<p>
QEMU and KVM support:
</p>
<ul>
<li>'reset' &mdash; default, forcefully reset the guest</li>
<li>'shutdown' &mdash; gracefully shutdown the guest
(not recommended) </li>
<li>'poweroff' &mdash; forcefully power off the guest</li>
<li>'pause' &mdash; pause the guest</li>
<li>'none' &mdash; do nothing</li>
</ul>
<p>
Note that the 'shutdown' action requires that the guest
is responsive to ACPI signals. In the sort of situations
where the watchdog has expired, guests are usually unable
to respond to ACPI signals. Therefore using 'shutdown'
is not recommended.
</p>
</dd>
</dl>
<h2><a name="examples">Example configs</a></h2>
<p>
Example configurations for each driver are provided on the
driver-specific pages listed below:
</p>
<ul>
<li><a href="drvxen.html#xmlconfig">Xen examples</a></li>
<li><a href="drvqemu.html#xmlconfig">QEMU/KVM examples</a></li>
</ul>
</body>
</html>