diff --git a/ChangeLog b/ChangeLog index 0f2d2ad6e0..63d5734b9d 100644 --- a/ChangeLog +++ b/ChangeLog @@ -1,3 +1,9 @@ +Thu May 8 10:19:11 EST 2008 Daniel P. Berrange + + * docs/page.xsl: Fix detection of sub-headings + * docs/domain.html, docs/domain.html.in: Re-write content to + reflect current domain XML format + Thu May 8 07:51:11 EST 2008 Daniel P. Berrange * src/auth.html.in, src/auth.html: Fix policykit config docs diff --git a/docs/formatdomain.html b/docs/formatdomain.html index 5d5df66a5d..429d9a98cc 100644 --- a/docs/formatdomain.html +++ b/docs/formatdomain.html @@ -114,206 +114,718 @@

Domain XML format

-

This section describes the XML format used to represent domains, there are -variations on the format based on the kind of domains run and the options -used to launch them:

-

Normal paravirtualized Xen -guests:

-

The root element must be called domain with no namespace, the -type attribute indicates the kind of hypervisor used, 'xen' is -the default value. The id attribute gives the domain id at -runtime (not however that this may change, for example if the domain is saved -to disk and restored). The domain has a few children whose order is not -significant:

-
  • name: the domain name, preferably ASCII based
  • memory: the maximum memory allocated to the domain in kilobytes
  • vcpu: the number of virtual cpu configured for the domain
  • os: a block describing the Operating System, its content will be - dependent on the OS type -
    • type: indicate the OS type, always linux at this point
    • kernel: path to the kernel on the Domain 0 filesystem
    • initrd: an optional path for the init ramdisk on the Domain 0 - filesystem
    • cmdline: optional command line to the kernel
    • root: the root filesystem from the guest viewpoint, it may be - passed as part of the cmdline content too
  • devices: a list of disk, interface and - console descriptions in no special order
-

The format of the devices and their type may grow over time, but the -following should be sufficient for basic use:

-

A disk device indicates a block device, it can have two -values for the type attribute either 'file' or 'block' corresponding to the 2 -options available at the Xen layer. It has two mandatory children, and one -optional one in no specific order:

-
  • source with a file attribute containing the path in Domain 0 to the - file or a dev attribute if using a block device, containing the device - name ('hda5' or '/dev/hda5')
  • target indicates in a dev attribute the device where it is mapped in - the guest
  • readonly an optional empty element indicating the device is - read-only
  • shareable an optional empty element indicating the device - can be used read/write with other domains
-

An interface element describes a network device mapped on the -guest, it also has a type whose value is currently 'bridge', it also have a -number of children in no specific order:

-
  • source: indicating the bridge name
  • mac: the optional mac address provided in the address attribute
  • ip: the optional IP address provided in the address attribute
  • script: the script used to bridge the interface in the Domain 0
  • target: and optional target indicating the device name.
-

A console element describes a serial console connection to -the guest. It has no children, and a single attribute tty which -provides the path to the Pseudo TTY on which the guest console can be -accessed

-

Life cycle actions for the domain can also be expressed in the XML format, -they drive what should be happening if the domain crashes, is rebooted or is -poweroff. There is various actions possible when this happen:

-
  • destroy: The domain is cleaned up (that's the default normal processing - in Xen)
  • restart: A new domain is started in place of the old one with the same - configuration parameters
  • preserve: The domain will remain in memory until it is destroyed - manually, it won't be running but allows for post-mortem debugging
  • rename-restart: a variant of the previous one but where the old domain - is renamed before being saved to allow a restart
-

The following could be used for a Xen production system:

-
<domain>
-  ...
-  <on_reboot>restart</on_reboot>
-  <on_poweroff>destroy</on_poweroff>
-  <on_crash>rename-restart</on_crash>
-  ...
-</domain>
-

While the format may be extended in various ways as support for more -hypervisor types and features are added, it is expected that this core subset -will remain functional in spite of the evolution of the library.

-

- Fully virtualized guests -

-

There is a few things to notice specifically for HVM domains:

-
  • the optional <features> block is used to enable - certain guest CPU / system features. For HVM guests the following - features are defined: -
    • pae - enable PAE memory addressing
    • apic - enable IO APIC
    • acpi - enable ACPI bios
  • the optional <clock> element is used to specify - whether the emulated BIOS clock in the guest is synced to either - localtime or utc. In general Windows will - want localtime while all other operating systems will - want utc. The default is thus utc
  • the <os> block description is very different, first - it indicates that the type is 'hvm' for hardware virtualization, then - instead of a kernel, boot and command line arguments, it points to an os - boot loader which will extract the boot information from the boot device - specified in a separate boot element. The dev attribute on - the boot tag can be one of: -
    • fd - boot from first floppy device
    • hd - boot from first harddisk device
    • cdrom - boot from first cdrom device
  • the <devices> section includes an emulator entry - pointing to an additional program in charge of emulating the devices
  • the disk entry indicates in the dev target section that the emulation - for the drive is the first IDE disk device hda. The list of device names - supported is dependent on the Hypervisor, but for Xen it can be any IDE - device hda-hdd, or a floppy device - fda, fdb. The <disk> element - also supports a 'device' attribute to indicate what kinda of hardware to - emulate. The following values are supported: -
    • floppy - a floppy disk controller
    • disk - a generic hard drive (the default it - omitted)
    • cdrom - a CDROM device
    - For Xen 3.0.2 and earlier a CDROM device can only be emulated on the - hdc channel, while for 3.0.3 and later, it can be emulated - on any IDE channel.
  • the <devices> section also include at least one - entry for the graphic device used to render the os. Currently there is - just 2 types possible 'vnc' or 'sdl'. If the type is 'vnc', then an - additional port attribute will be present indicating the TCP - port on which the VNC server is accepting client connections.
-

It is likely that the HVM description gets additional optional elements -and attributes as the support for fully virtualized domain expands, -especially for the variety of devices emulated and the graphic support -options offered.

+ +

+ This section describes the XML format used to represent domains; there are + variations on the format based on the kind of domains run and the options + used to launch them. For hypervisor-specific details consult the + driver docs

+

+ Element and attribute overview +

+

+ The root element required for all virtual machines is + named domain. It has two attributes: the + type attribute specifies the hypervisor used for running + the domain. The allowed values are driver-specific, but + include "xen", "kvm", "qemu", "lxc" and "kqemu". The + second attribute is id, which is a unique + integer identifier for the running guest machine. Inactive + machines have no id value. +

- Networking interface options -

-

The networking support in the QEmu and KVM case is more flexible, and -support a variety of options:

-
  1. Userspace SLIRP stack -

    Provides a virtual LAN with NAT to the outside world. The virtual - network has DHCP & DNS services and will give the guest VM addresses - starting from 10.0.2.15. The default router will be - 10.0.2.2 and the DNS server will be 10.0.2.3. - This networking is the only option for unprivileged users who need their - VMs to have outgoing access. Example configs are:

    -
    <interface type='user'/>
    -
    -<interface type='user'>
    -  <mac address="11:22:33:44:55:66"/>
    -</interface>
    -    
    -
  2. Virtual network -

    Provides a virtual network using a bridge device in the host. - Depending on the virtual network configuration, the network may be - totally isolated, NAT'ing to an explicit network device, or NAT'ing to - the default route. DHCP and DNS are provided on the virtual network in - all cases and the IP range can be determined by examining the virtual - network config with 'virsh net-dumpxml <network - name>'. There is one virtual network called 'default' setup out - of the box which does NAT'ing to the default route and has an IP range of - 192.168.22.0/255.255.255.0. Each guest will have an - associated tun device created with a name of vnetN, which can also be - overridden with the <target> element. Example configs are:

    -
    <interface type='network'>
    -  <source network='default'/>
    -</interface>
    +          General metadata
    +        
    +        
    +      <domain type='xen' id='3'>
    +        <name>fv0</name>
    +        <uuid>4dea22b31d52d8f32516782e98ab3fa0</uuid>
    +        ...
    +
    name
    The content of the name element provides + a short name for the virtual machine. This name should + consist only of alpha-numeric characters and is required + to be unique within the scope of a single host. It is + often used to form the filename for storing the persistent + configuration file. Since 0.0.1
    uuid
    The content of the uuid element provides + a globally unique identifier for the virtual machine. + The format must be RFC 4122 compliant, eg 3e3fce45-4f53-4fa7-bb32-11f34168b82b. + If omitted when defining/creating a new machine, a random + UUID is generated. Since 0.0.1
    +

    + Operating system booting +

    +

+ There are a number of different ways to boot virtual machines, + each with their own pros and cons. +

    +

    + BIOS bootloader +

    +

    + Booting via the BIOS is available for hypervisors supporting + full virtualization. In this case the BIOS has a boot order + priority (floppy, harddisk, cdrom, network) determining where + to obtain/find the boot image. +

    +
    +        ...
    +        <os>
    +          <type>hvm</type>
    +          <loader>/usr/lib/xen/boot/hvmloader</loader>
    +          <boot dev='hd'/>
    +        </os>
    +        ...
    +
    type
The content of the type element specifies the + type of operating system to be booted in the virtual machine. + hvm indicates that the OS is one designed to run + on bare metal, so requires full virtualization. linux + (badly named!) refers to an OS that supports the Xen 3 hypervisor + guest ABI. There are also two optional attributes, arch + specifying the CPU architecture to virtualize, and machine + referring to the machine type. The Capabilities XML + provides details on allowed values for these. Since 0.0.1
    loader
The optional loader tag refers to a firmware blob + used to assist the domain creation process. At this time, it is + only needed by Xen fully virtualized domains. Since 0.1.0
    boot
The dev attribute takes one of the values "fd", "hd", + "cdrom" or "network" and is used to specify the next boot device + to consider. The boot element can be repeated multiple + times to set up a priority list of boot devices to try in turn + (see the sketch after this list). + Since 0.1.3 +
    +
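For instance, a minimal sketch of a boot order that tries the CDROM before the first hard disk (assuming the hypervisor honours repeated boot elements) might look like:

        ...
        <os>
          <type>hvm</type>
          <!-- try the CDROM first, then fall back to the first hard disk -->
          <boot dev='cdrom'/>
          <boot dev='hd'/>
        </os>
        ...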

    + Host bootloader +

    +

+ Hypervisors employing paravirtualization do not usually emulate + a BIOS, and instead the host is responsible for kicking off the + operating system boot. This may use a pseudo-bootloader in the + host to provide an interface to choose a kernel for the guest. + An example is pygrub with Xen. +

    +
    +        ...
    +	<bootloader>/usr/bin/pygrub</bootloader>
    +	<bootloader_args>--append single</bootloader_args>
    +        ...
    +
    bootloader
The content of the bootloader element provides + a fully-qualified path to the bootloader executable in the + host OS. This bootloader will be run to choose which kernel + to boot. The required output of the bootloader is dependent + on the hypervisor in use. Since 0.1.0
    bootloader_args
    The optional bootloader_args element allows + command line arguments to be passed to the bootloader. + Since 0.2.3 +
    +

    + Direct kernel boot +

    +

+ When installing a new guest OS it is often useful to boot directly + from a kernel and initrd stored in the host OS, allowing command + line arguments to be passed directly to the installer. This capability + is usually available for both paravirtualized and fully virtualized guests. +

    +
    +        ...
    +	<os>
    +          <type>hvm</type>
    +          <loader>/usr/lib/xen/boot/hvmloader</loader>
    +          <kernel>/root/f8-i386-vmlinuz</kernel>
    +          <initrd>/root/f8-i386-initrd</initrd>
    +          <cmdline>console=ttyS0 ks=http://example.com/f8-i386/os/</cmdline>
    +	</os>
    +	...
    +
    type
    This element has the same semantics as described earlier in the + BIOS boot section
loader
    This element has the same semantics as described earlier in the + BIOS boot section
    kernel
    The contents of this element specify the fully-qualified path + to the kernel image in the host OS.
    initrd
    The contents of this element specify the fully-qualified path + to the (optional) ramdisk image in the host OS.
    cmdline
The contents of this element specify arguments to be passed to + the kernel (or installer) at boot time. This is often used to + specify an alternate primary console (e.g. a serial port), or the + installation media source / kickstart file
    +

    + Basic resources +

    +
    +        ...
    +	<memory>524288</memory>
    +	<currentMemory>524288</currentMemory>
    +	<vcpu>1</vcpu>
    +	...
    +
    memory
The maximum allocation of memory for the guest at boot time. + The units for this value are kilobytes
    currentMemory
The actual allocation of memory for the guest. This value + may be less than the maximum allocation, to allow for ballooning + up the guest's memory on the fly. If this is omitted, it defaults + to the same value as the memory element
    vcpu
    The content of this element defines the number of virtual + CPUs allocated for the guest OS.
    +

    + Lifecycle control +

    +

+ It is sometimes necessary to override the default actions taken + when a guest OS triggers a lifecycle operation. The following + collection of elements allows the actions to be specified. A + common use case is to force a reboot to be treated as a poweroff + when doing the initial OS installation. This allows the VM to be + re-configured for the first post-install bootup. +

    +
    +        ...
    +	<on_poweroff>destroy</on_poweroff>
    +	<on_reboot>restart</on_reboot>
    +	<on_crash>restart</on_crash>
    +	...
    +
    on_poweroff
    The content of this element specifies the action to take when + the guest requests a poweroff.
on_reboot
    The content of this element specifies the action to take when + the guest requests a reboot.
on_crash
    The content of this element specifies the action to take when + the guest crashes.
    +

+ Each of these states allows for the same four possible actions. +

    +
    destroy
    The domain will be terminated completely and all resources + released
    restart
    The domain will be terminated, and then restarted with + the same configuration
    preserve
The domain will be terminated, and its resources preserved + to allow analysis.
    rename-restart
    The domain will be terminated, and then restarted with + a new name
    +
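As a sketch of the installation use case mentioned above, a guest-requested reboot can be treated as a poweroff by mapping every event to the destroy action (the particular mapping is only illustrative):

        ...
        <!-- during installation: any poweroff, reboot or crash tears the guest down -->
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>destroy</on_reboot>
        <on_crash>destroy</on_crash>
        ...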

    + Hypervisor features +

    +

    + Hypervisors may allow certain CPU / machine features to be + toggled on/off. +

    +
    +        ...
    +	<features>
    +	  <pae/>
    +	  <acpi/>
    +	  <apic/>
    +	</features>
    +	...
    +

+ All features are listed within the features + element; omitting a togglable feature tag turns it off. + The available features can be found by asking + for the capabilities XML, + but a common set for fully virtualized domains is: +

    +
    pae
    Physical address extension mode allows 32-bit guests + to address more than 4 GB of memory.
    acpi
    ACPI is useful for power management, for example, with + KVM guests it is required for graceful shutdown to work. +
    +

    + Time keeping +

    +

    + The guest clock is typically initialized from the host clock. + Most operating systems expect the hardware clock to be kept + in UTC, and this is the default. Windows, however, expects + it to be in so called 'localtime'. +

    +
    +        ...
    +        <clock sync="localtime"/>
    +	...
    +
    clock
    The sync attribute takes either "utc" or + "localtime" to specify how the guest clock is initialized + in relation to the host OS. +
    +

    + Devices +

    +

+ The final set of XML elements is used to describe devices + provided to the guest domain. All devices occur as children + of the main devices element. + Since 0.1.3 +

    +
    +        ...
    +        <devices>
    +	  <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
    +          ...
    +
    emulator
    + The contents of the emulator element specify + the fully qualified path to the device model emulator binary. + The capabilities XML specifies + the recommended default emulator to use for each particular + domain type / architecture combination. +
    +

    + Hard drives, floppy disks, CDROMs +

    +

+ Any device that looks like a disk, be it a floppy, harddisk, + cdrom, or paravirtualized driver, is specified via the disk + element. +

    +
    +          ...
    +	  <disk type='file'>
+	    <driver name="tap" type="aio"/>
    +	    <source file='/var/lib/xen/images/fv0'/>
    +	    <target dev='hda' bus='ide'/>
    +	  </disk>
    +	  ...
    +
    disk
The disk element is the main container for describing + disks. The type attribute is either "file" or "block" + and refers to the underlying source for the disk. The optional + device attribute indicates how the disk is to be exposed + to the guest OS. Possible values for this attribute are "floppy", "disk" + and "cdrom", defaulting to "disk" (see the sketch after this list). + Since 0.0.3; "device" attribute since 0.1.4
    source
    If the disk type is "file", then the file attribute + specifies the fully-qualified path to the file holding the disk. If the disk + type is "block", then the dev attribute specifies + the path to the host device to serve as the disk. Since 0.0.3
    target
The target element controls the bus / device under which the + disk is exposed to the guest OS. The dev attribute indicates + the "logical" device name. The actual device name specified is not guaranteed to map to + the device name in the guest OS. Treat it as a device ordering hint. + The optional bus attribute specifies the type of disk device + to emulate; possible values are driver-specific, with typical values being + "ide", "scsi", "virtio", "xen". If omitted, the bus type is inferred from + the style of the device name. e.g. a device named 'sda' will typically be + exported using a SCSI bus. + Since 0.0.3; bus attribute since 0.4.3
    driver
    If the hypervisor supports multiple backend drivers, then the optional + driver element allows them to be selected. The name + attribute is the primary backend driver name, while the optional type + attribute provides the sub-type. Since 0.1.8 +
    +
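As an additional sketch, a CDROM exposed from a host block device might be described as follows (the host path /dev/cdrom is purely illustrative):

          ...
          <disk type='block' device='cdrom'>
            <!-- /dev/cdrom is an illustrative host device path -->
            <source dev='/dev/cdrom'/>
            <target dev='hdc' bus='ide'/>
          </disk>
          ...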

    + Network interfaces +

    +
    +          ...
    +	  <interface type='bridge'>
    +	    <source bridge='xenbr0'/>
    +	    <mac address='00:16:3e:5d:c7:9e'/>
    +	    <script path='vif-bridge'/>
    +	  </interface>
    +	  ...
    +
    + Virtual network +
    +

    + + This is the recommended config for general guest connectivity on + hosts with dynamic / wireless networking configs + +

    +

+ Provides a virtual network using a bridge device in the host. + Depending on the virtual network configuration, the network may be + totally isolated, NAT'ing to an explicit network device, or NAT'ing to + the default route. DHCP and DNS are provided on the virtual network in + all cases and the IP range can be determined by examining the virtual + network config with 'virsh net-dumpxml [networkname]'. + There is one virtual network called 'default' set up out + of the box which does NAT'ing to the default route and has an IP range of + 192.168.122.0/255.255.255.0. Each guest will have an + associated tun device created with a name of vnetN, which can also be + overridden with the <target> element. +

    +
    +      ...
    +      <interface type='network'>
    +        <source network='default'/>
    +      </interface>
    +      ...
    +      <interface type='network'>
    +        <source network='default'/>
    +        <target dev='vnet7'/>
    +        <mac address="11:22:33:44:55:66"/>
    +      </interface>
    +      ...
    +
+ Bridge to LAN +
    +

    + + This is the recommended config for general guest connectivity on + hosts with static wired networking configs + +

    +

+ Provides a bridge from the VM directly onto the LAN. This assumes + there is a bridge device on the host which has one or more of the host's + physical NICs enslaved. The guest VM will have an associated tun device + created with a name of vnetN, which can also be overridden with the + <target> element. The tun device will be enslaved to the bridge. + The IP range / network configuration is whatever is used on the LAN. This + provides the guest VM full incoming & outgoing net access just like a + physical machine. +

    +
    +      ...
    +      <interface type='bridge'>
    +        <source bridge='br0'/>
    +      </interface>
     
    -<interface type='network'>
    -  <source network='default'/>
    -  <target dev='vnet7'/>
    -  <mac address="11:22:33:44:55:66"/>
    -</interface>
    -    
    -
  3. Bridge to to LAN -

    Provides a bridge from the VM directly onto the LAN. This assumes - there is a bridge device on the host which has one or more of the hosts - physical NICs enslaved. The guest VM will have an associated tun device - created with a name of vnetN, which can also be overridden with the - <target> element. The tun device will be enslaved to the bridge. - The IP range / network configuration is whatever is used on the LAN. This - provides the guest VM full incoming & outgoing net access just like a - physical machine. Examples include:

    -
    <interface type='bridge'>
    - <source bridge='br0'/>
    -</interface>
    -
    -<interface type='bridge'>
    -  <source bridge='br0'/>
    -  <target dev='vnet7'/>
    -  <mac address="11:22:33:44:55:66"/>
    -</interface>
    -
  4. Generic connection to LAN -

    Provides a means for the administrator to execute an arbitrary script - to connect the guest's network to the LAN. The guest will have a tun - device created with a name of vnetN, which can also be overridden with the - <target> element. After creating the tun device a shell script will - be run which is expected to do whatever host network integration is - required. By default this script is called /etc/qemu-ifup but can be - overridden.

    -
    <interface type='ethernet'/>
    -
    -<interface type='ethernet'>
    -  <target dev='vnet7'/>
    -  <script path='/etc/qemu-ifup-mynet'/>
    -</interface>
    -
  5. Multicast tunnel -

    A multicast group is setup to represent a virtual network. Any VMs - whose network devices are in the same multicast group can talk to each - other even across hosts. This mode is also available to unprivileged - users. There is no default DNS or DHCP support and no outgoing network - access. To provide outgoing network access, one of the VMs should have a - 2nd NIC which is connected to one of the first 4 network types and do the - appropriate routing. The multicast protocol is compatible with that used - by user mode linux guests too. The source address used must be from the - multicast address block.

    -
    <interface type='mcast'>
    -  <source address='230.0.0.1' port='5558'/>
    -</interface>
    -
  6. TCP tunnel -

    A TCP client/server architecture provides a virtual network. One VM - provides the server end of the network, all other VMS are configured as - clients. All network traffic is routed between the VMs via the server. - This mode is also available to unprivileged users. There is no default - DNS or DHCP support and no outgoing network access. To provide outgoing - network access, one of the VMs should have a 2nd NIC which is connected - to one of the first 4 network types and do the appropriate routing.

    -

    Example server config:

    -
    <interface type='server'>
    -  <source address='192.168.0.1' port='5558'/>
    -</interface>
    -

    Example client config:

    -
    <interface type='client'>
    -  <source address='192.168.0.1' port='5558'/>
    -</interface>
    -
-

To be noted, options 2, 3, 4 are also supported by Xen VMs, so it is -possible to use these configs to have networking with both Xen & -QEMU/KVMs connected to each other.

-

Example configs

+ <interface type='bridge'> + <source bridge='br0'/> + <target dev='vnet7'/> + <mac address="11:22:33:44:55:66"/> + </interface> + ... +
+ Userspace SLIRP stack +
+

+ Provides a virtual LAN with NAT to the outside world. The virtual + network has DHCP & DNS services and will give the guest VM addresses + starting from 10.0.2.15. The default router will be + 10.0.2.2 and the DNS server will be 10.0.2.3. + This networking is the only option for unprivileged users who need their + VMs to have outgoing access. +

+
+      ...
+      <interface type='user'/>
+      ...
+      <interface type='user'>
+        <mac address="11:22:33:44:55:66"/>
+      </interface>
+      ...
+
+ Generic ethernet connection +
+

+ Provides a means for the administrator to execute an arbitrary script + to connect the guest's network to the LAN. The guest will have a tun + device created with a name of vnetN, which can also be overridden with the + <target> element. After creating the tun device a shell script will + be run which is expected to do whatever host network integration is + required. By default this script is called /etc/qemu-ifup but can be + overridden. +

+
+      ...
+      <interface type='ethernet'/>
+      ...
+      <interface type='ethernet'>
+        <target dev='vnet7'/>
+        <script path='/etc/qemu-ifup-mynet'/>
+      </interface>
+      ...
+
+ Multicast tunnel +
+

+ A multicast group is set up to represent a virtual network. Any VMs + whose network devices are in the same multicast group can talk to each + other even across hosts. This mode is also available to unprivileged + users. There is no default DNS or DHCP support and no outgoing network + access. To provide outgoing network access, one of the VMs should have a + 2nd NIC which is connected to one of the first 4 network types and do the + appropriate routing. The multicast protocol is compatible with that used + by user mode linux guests too. The source address used must be from the + multicast address block. +

+
+      ...
+      <interface type='mcast'>
+        <source address='230.0.0.1' port='5558'/>
+      </interface>
+      ...
+
+ TCP tunnel +
+

+ A TCP client/server architecture provides a virtual network. One VM + provides the server end of the network, and all other VMs are configured as + clients. All network traffic is routed between the VMs via the server. + This mode is also available to unprivileged users. There is no default + DNS or DHCP support and no outgoing network access. To provide outgoing + network access, one of the VMs should have a 2nd NIC which is connected + to one of the first 4 network types and do the appropriate routing.

+
+      ...
+      <interface type='server'>
+        <source address='192.168.0.1' port='5558'/>
+      </interface>
+      ...
+      <interface type='client'>
+      <source address='192.168.0.1' port='5558'/>
+      </interface>
+      ...
+

+ Input devices +

+

+ Input devices allow interaction with the graphical framebuffer in the guest + virtual machine. When enabling the framebuffer, an input device is automatically + provided. It may be possible to add additional devices explicitly, for example, + to provide a graphics tablet for absolute cursor movement. +

+
+          ...
+	  <input type='mouse' bus='usb'/>
+	  ...
+
input
The input element has one mandatory attribute, the type + whose value can be either 'mouse' or 'tablet'. The latter provides absolute + cursor movement, while the former uses relative movement. The optional + bus attribute can be used to refine the exact device type. + It takes values "xen" (paravirtualized), "ps2" and "usb" (see the sketch below).
+
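For example, a sketch of a USB tablet added for absolute pointer positioning (assuming the hypervisor can expose an emulated USB tablet):

          ...
          <!-- absolute pointer positioning via an emulated USB tablet -->
          <input type='tablet' bus='usb'/>
          ...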

+ Graphical framebuffers +

+

+ A graphics device allows for graphical interaction with the + guest OS. A guest will typically have either a framebuffer + or a text console configured to allow interaction with the + admin. +

+
+          ...
+	  <graphics type='vnc' port='5904'/>
+	  ...
+
graphics
The graphics element has a mandatory type + attribute which takes the value "sdl" or "vnc". The former displays + a window on the host desktop, while the latter activates a VNC server. + If the latter is used, the port attribute specifies the + TCP port number (with -1 indicating that it should be auto-allocated; see the sketch below). + The listen attribute is an IP address for the server to + listen on. The password attribute provides a VNC password + in clear text.
+
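For example, a sketch of a VNC framebuffer with an auto-allocated port, listening on all host interfaces (the listen address is only illustrative):

          ...
          <!-- port='-1' requests auto-allocation of the VNC port -->
          <graphics type='vnc' port='-1' listen='0.0.0.0'/>
          ...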

+ Consoles, serial & parallel devices +

+

+ A character device provides a way to interact with the virtual machine. + Paravirtualized consoles, serial ports and parallel ports are all + classed as character devices and so represented using the same syntax. +

+
+        ...
+        <parallel type='pty'>
+	  <source path='/dev/pts/2'/>
+	  <target port='0'/>
+        </parallel>
+        <serial type='pty'>
+	  <source path='/dev/pts/3'/>
+	  <target port='0'/>
+        </serial>
+        <console type='pty'>
+	  <source path='/dev/pts/4'/>
+	  <target port='0'/>
+        </console>
+        </devices>
+      </domain>
+
parallel
Represents a parallel port
serial
Represents a serial port
console
Represents the primary console. This can be the paravirtualized + console with Xen guests, or duplicates the primary serial port + for fully virtualized guests without a paravirtualized console.
source
The attributes available for the source element + vary according to the type attribute on the parent + tag. Allowed variations will be described below
target
The port number of the character device is specified via the + port attribute, numbered starting from 1. There is + usually only one console device, and 0, 1 or 2 serial devices + or parallel devices. +
+
+ Domain logfile +
+

+ This disables all input on the character device, and sends output + into the virtual machine's logfile +

+
+      ...
+      <console type='stdio'>
+        <target port='1'/>
+      </console>
+      ...
+
+ Device logfile +
+

+ A file is opened and all data sent to the character + device is written to the file. +

+
+      ...
+      <serial type="file">
+        <source path="/var/log/vm/vm-serial.log"/>
+        <target port="1"/>
+      </serial>
+      ...
+
+ Virtual console +
+

+ Connects the character device to the graphical framebuffer in + a virtual console. This is typically accessed via a special + hotkey sequence such as "ctrl+alt+3" +

+
+      ...
+      <serial type='vc'>
+        <target port="1"/>
+      </serial>
+      ...
+
+ Null device +
+

+ Connects the character device to the void. No data is ever + provided to the input. All data written is discarded. +

+
+      ...
+      <serial type='null'>
+        <target port="1"/>
+      </serial>
+      ...
+
+ Pseudo TTY +
+

+ A Pseudo TTY is allocated using /dev/ptmx. A suitable client + such as 'virsh console' can connect to interact with the + serial port locally. +

+
+      ...
+      <serial type="pty">
+        <source path="/dev/pts/3"/>
+        <target port="1"/>
+      </serial>
+      ...
+

+ NB: as a special case, if <console type='pty'>, then the TTY + path is also duplicated as an attribute tty='/dev/pts/3' + on the top-level <console> tag, as sketched below. This provides compatibility + with existing syntax for <console> tags. +
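A sketch of the duplicated tty attribute (the paths are illustrative):

        ...
        <!-- the tty attribute mirrors the source path for older consumers -->
        <console type='pty' tty='/dev/pts/4'>
          <source path='/dev/pts/4'/>
          <target port='0'/>
        </console>
        ...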

+
+ Host device proxy +
+

+ The character device is passed through to the underlying + physical character device. The device types must match, + e.g. the emulated serial port should only be connected to + a host serial port; don't connect a serial port to a parallel + port. +

+
+      ...
+      <serial type="dev">
+        <source path="/dev/ttyS0"/>
+        <target port="1"/>
+      </serial>
+      ...
+
+ TCP client/server +
+

+ The character device acts as a TCP client connecting to a + remote server, or as a server waiting for a client connection. +

+
+      ...
+      <serial type="tcp">
+        <source mode="connect" host="0.0.0.0" service="2445"/>
+        <wiremode type="telnet"/>
+        <target port="1"/>
+      </serial>
+      ...
+
+ UDP network console +
+

+ The character device acts as a UDP netconsole service, + sending and receiving packets. This is a lossy service. +

+
+      ...
+      <serial type="udp">
+        <source mode="bind" host="0.0.0.0" service="2445"/>
+        <source mode="connect" host="0.0.0.0" service="2445"/>
+        <target port="1"/>
+      </serial>
+      ...
+
+ UNIX domain socket client/server +
+

+ The character device acts as a UNIX domain socket server, + accepting connections from local clients. +

+
+      ...
+      <serial type="unix">
+        <source mode="bind" path="/tmp/foo"/>
+        <target port="1"/>
+      </serial>
+      ...
+

+ Example configs +

Example configurations for each driver are provided on the driver-specific pages listed below diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in index aa4a9039fb..02ca509660 100644 --- a/docs/formatdomain.html.in +++ b/docs/formatdomain.html.in @@ -2,245 +2,789 @@

Domain XML format

-

This section describes the XML format used to represent domains, there are -variations on the format based on the kind of domains run and the options -used to launch them:

+
    -

    Normal paravirtualized Xen -guests:

    +

+ This section describes the XML format used to represent domains; there are + variations on the format based on the kind of domains run and the options + used to launch them. For hypervisor-specific details consult the + driver docs

    -

    The root element must be called domain with no namespace, the -type attribute indicates the kind of hypervisor used, 'xen' is -the default value. The id attribute gives the domain id at -runtime (not however that this may change, for example if the domain is saved -to disk and restored). The domain has a few children whose order is not -significant:

    -
      -
    • name: the domain name, preferably ASCII based
    • -
    • memory: the maximum memory allocated to the domain in kilobytes
    • -
    • vcpu: the number of virtual cpu configured for the domain
    • -
    • os: a block describing the Operating System, its content will be - dependent on the OS type -
      • type: indicate the OS type, always linux at this point
      • kernel: path to the kernel on the Domain 0 filesystem
      • initrd: an optional path for the init ramdisk on the Domain 0 - filesystem
      • cmdline: optional command line to the kernel
      • root: the root filesystem from the guest viewpoint, it may be - passed as part of the cmdline content too
    • -
    • devices: a list of disk, interface and - console descriptions in no special order
    • -
    -

    The format of the devices and their type may grow over time, but the -following should be sufficient for basic use:

    -

    A disk device indicates a block device, it can have two -values for the type attribute either 'file' or 'block' corresponding to the 2 -options available at the Xen layer. It has two mandatory children, and one -optional one in no specific order:

    -
      -
    • source with a file attribute containing the path in Domain 0 to the - file or a dev attribute if using a block device, containing the device - name ('hda5' or '/dev/hda5')
    • -
    • target indicates in a dev attribute the device where it is mapped in - the guest
    • -
    • readonly an optional empty element indicating the device is - read-only
    • -
    • shareable an optional empty element indicating the device - can be used read/write with other domains
    • -
    -

    An interface element describes a network device mapped on the -guest, it also has a type whose value is currently 'bridge', it also have a -number of children in no specific order:

    -
      -
    • source: indicating the bridge name
    • -
    • mac: the optional mac address provided in the address attribute
    • -
    • ip: the optional IP address provided in the address attribute
    • -
    • script: the script used to bridge the interface in the Domain 0
    • -
    • target: and optional target indicating the device name.
    • -
    -

    A console element describes a serial console connection to -the guest. It has no children, and a single attribute tty which -provides the path to the Pseudo TTY on which the guest console can be -accessed

    -

    Life cycle actions for the domain can also be expressed in the XML format, -they drive what should be happening if the domain crashes, is rebooted or is -poweroff. There is various actions possible when this happen:

    -
      -
    • destroy: The domain is cleaned up (that's the default normal processing - in Xen)
    • -
    • restart: A new domain is started in place of the old one with the same - configuration parameters
    • -
    • preserve: The domain will remain in memory until it is destroyed - manually, it won't be running but allows for post-mortem debugging
    • -
    • rename-restart: a variant of the previous one but where the old domain - is renamed before being saved to allow a restart
    • -
    -

    The following could be used for a Xen production system:

    -
    <domain>
    -  ...
    -  <on_reboot>restart</on_reboot>
    -  <on_poweroff>destroy</on_poweroff>
    -  <on_crash>rename-restart</on_crash>
    -  ...
    -</domain>
    -

    While the format may be extended in various ways as support for more -hypervisor types and features are added, it is expected that this core subset -will remain functional in spite of the evolution of the library.

    -

    Fully virtualized guests

    -

    There is a few things to notice specifically for HVM domains:

    -
      -
    • the optional <features> block is used to enable - certain guest CPU / system features. For HVM guests the following - features are defined: -
      • pae - enable PAE memory addressing
      • apic - enable IO APIC
      • acpi - enable ACPI bios
    • -
    • the optional <clock> element is used to specify - whether the emulated BIOS clock in the guest is synced to either - localtime or utc. In general Windows will - want localtime while all other operating systems will - want utc. The default is thus utc
    • -
    • the <os> block description is very different, first - it indicates that the type is 'hvm' for hardware virtualization, then - instead of a kernel, boot and command line arguments, it points to an os - boot loader which will extract the boot information from the boot device - specified in a separate boot element. The dev attribute on - the boot tag can be one of: -
      • fd - boot from first floppy device
      • hd - boot from first harddisk device
      • cdrom - boot from first cdrom device
    • -
    • the <devices> section includes an emulator entry - pointing to an additional program in charge of emulating the devices
    • -
    • the disk entry indicates in the dev target section that the emulation - for the drive is the first IDE disk device hda. The list of device names - supported is dependent on the Hypervisor, but for Xen it can be any IDE - device hda-hdd, or a floppy device - fda, fdb. The <disk> element - also supports a 'device' attribute to indicate what kinda of hardware to - emulate. The following values are supported: -
      • floppy - a floppy disk controller
      • disk - a generic hard drive (the default it - omitted)
      • cdrom - a CDROM device
      - For Xen 3.0.2 and earlier a CDROM device can only be emulated on the - hdc channel, while for 3.0.3 and later, it can be emulated - on any IDE channel.
    • -
    • the <devices> section also include at least one - entry for the graphic device used to render the os. Currently there is - just 2 types possible 'vnc' or 'sdl'. If the type is 'vnc', then an - additional port attribute will be present indicating the TCP - port on which the VNC server is accepting client connections.
    • -
    -

    It is likely that the HVM description gets additional optional elements -and attributes as the support for fully virtualized domain expands, -especially for the variety of devices emulated and the graphic support -options offered.

    +

    Element and attribute overview

    + +

+ The root element required for all virtual machines is + named domain. It has two attributes: the + type attribute specifies the hypervisor used for running + the domain. The allowed values are driver-specific, but + include "xen", "kvm", "qemu", "lxc" and "kqemu". The + second attribute is id, which is a unique + integer identifier for the running guest machine. Inactive + machines have no id value. +

    + + +

    General metadata

    -

    - Networking interface options -

    -

    The networking support in the QEmu and KVM case is more flexible, and -support a variety of options:

    -
      -
    1. Userspace SLIRP stack -

      Provides a virtual LAN with NAT to the outside world. The virtual - network has DHCP & DNS services and will give the guest VM addresses - starting from 10.0.2.15. The default router will be - 10.0.2.2 and the DNS server will be 10.0.2.3. - This networking is the only option for unprivileged users who need their - VMs to have outgoing access. Example configs are:

      -
      <interface type='user'/>
      -<interface type='user'>
      -  <mac address="11:22:33:44:55:66"/>
      -</interface>
      -    
      -
    2. -
    3. Virtual network -

      Provides a virtual network using a bridge device in the host. - Depending on the virtual network configuration, the network may be - totally isolated, NAT'ing to an explicit network device, or NAT'ing to - the default route. DHCP and DNS are provided on the virtual network in - all cases and the IP range can be determined by examining the virtual - network config with 'virsh net-dumpxml <network - name>'. There is one virtual network called 'default' setup out - of the box which does NAT'ing to the default route and has an IP range of - 192.168.22.0/255.255.255.0. Each guest will have an - associated tun device created with a name of vnetN, which can also be - overridden with the <target> element. Example configs are:

      -
      <interface type='network'>
      -  <source network='default'/>
      -</interface>
      +      <domain type='xen' id='3'>
      +        <name>fv0</name>
      +        <uuid>4dea22b31d52d8f32516782e98ab3fa0</uuid>
      +        ...
      -<interface type='network'> - <source network='default'/> - <target dev='vnet7'/> - <mac address="11:22:33:44:55:66"/> -</interface> - -
    4. -
    5. Bridge to to LAN -

      Provides a bridge from the VM directly onto the LAN. This assumes - there is a bridge device on the host which has one or more of the hosts - physical NICs enslaved. The guest VM will have an associated tun device - created with a name of vnetN, which can also be overridden with the - <target> element. The tun device will be enslaved to the bridge. - The IP range / network configuration is whatever is used on the LAN. This - provides the guest VM full incoming & outgoing net access just like a - physical machine. Examples include:

      -
      <interface type='bridge'>
      - <source bridge='br0'/>
      -</interface>
      +    
      +
      name
      +
      The content of the name element provides + a short name for the virtual machine. This name should + consist only of alpha-numeric characters and is required + to be unique within the scope of a single host. It is + often used to form the filename for storing the persistent + configuration file. Since 0.0.1
      +
      uuid
      +
      The content of the uuid element provides + a globally unique identifier for the virtual machine. + The format must be RFC 4122 compliant, eg 3e3fce45-4f53-4fa7-bb32-11f34168b82b. + If omitted when defining/creating a new machine, a random + UUID is generated. Since 0.0.1
      +
      -<interface type='bridge'> - <source bridge='br0'/> - <target dev='vnet7'/> - <mac address="11:22:33:44:55:66"/> -</interface>
      -
    6. -
    7. Generic connection to LAN -

      Provides a means for the administrator to execute an arbitrary script - to connect the guest's network to the LAN. The guest will have a tun - device created with a name of vnetN, which can also be overridden with the - <target> element. After creating the tun device a shell script will - be run which is expected to do whatever host network integration is - required. By default this script is called /etc/qemu-ifup but can be - overridden.

      -
      <interface type='ethernet'/>
      +    

      Operating system booting

      -<interface type='ethernet'> - <target dev='vnet7'/> - <script path='/etc/qemu-ifup-mynet'/> -</interface>
      -
    8. -
    9. Multicast tunnel -

      A multicast group is setup to represent a virtual network. Any VMs - whose network devices are in the same multicast group can talk to each - other even across hosts. This mode is also available to unprivileged - users. There is no default DNS or DHCP support and no outgoing network - access. To provide outgoing network access, one of the VMs should have a - 2nd NIC which is connected to one of the first 4 network types and do the - appropriate routing. The multicast protocol is compatible with that used - by user mode linux guests too. The source address used must be from the - multicast address block.

      -
      <interface type='mcast'>
      -  <source address='230.0.0.1' port='5558'/>
      -</interface>
      -
    10. -
    11. TCP tunnel -

      A TCP client/server architecture provides a virtual network. One VM - provides the server end of the network, all other VMS are configured as - clients. All network traffic is routed between the VMs via the server. - This mode is also available to unprivileged users. There is no default - DNS or DHCP support and no outgoing network access. To provide outgoing - network access, one of the VMs should have a 2nd NIC which is connected - to one of the first 4 network types and do the appropriate routing.

      -

      Example server config:

      -
      <interface type='server'>
      -  <source address='192.168.0.1' port='5558'/>
      -</interface>
      -

      Example client config:

      -
      <interface type='client'>
      -  <source address='192.168.0.1' port='5558'/>
      -</interface>
      -
    12. -
    -

    To be noted, options 2, 3, 4 are also supported by Xen VMs, so it is -possible to use these configs to have networking with both Xen & -QEMU/KVMs connected to each other.

    +

+ There are a number of different ways to boot virtual machines, + each with their own pros and cons. +

    -

    Example configs

    +

    BIOS bootloader

    + +

    + Booting via the BIOS is available for hypervisors supporting + full virtualization. In this case the BIOS has a boot order + priority (floppy, harddisk, cdrom, network) determining where + to obtain/find the boot image. +

    + +
    +        ...
    +        <os>
    +          <type>hvm</type>
    +          <loader>/usr/lib/xen/boot/hvmloader</loader>
    +          <boot dev='hd'/>
    +        </os>
    +        ...
    + +
    +
    type
    +
The content of the type element specifies the + type of operating system to be booted in the virtual machine. + hvm indicates that the OS is one designed to run + on bare metal, so requires full virtualization. linux + (badly named!) refers to an OS that supports the Xen 3 hypervisor + guest ABI. There are also two optional attributes, arch + specifying the CPU architecture to virtualize, and machine + referring to the machine type. The Capabilities XML + provides details on allowed values for these. Since 0.0.1
    +
    loader
    +
The optional loader tag refers to a firmware blob + used to assist the domain creation process. At this time, it is + only needed by Xen fully virtualized domains. Since 0.1.0
    +
    boot
    +
The dev attribute takes one of the values "fd", "hd", + "cdrom" or "network" and is used to specify the next boot device + to consider. The boot element can be repeated multiple + times to set up a priority list of boot devices to try in turn. + Since 0.1.3 +
    +
    + +

    Host bootloader

    + +

+ Hypervisors employing paravirtualization do not usually emulate + a BIOS, and instead the host is responsible for kicking off the + operating system boot. This may use a pseudo-bootloader in the + host to provide an interface to choose a kernel for the guest. + An example is pygrub with Xen. +

    + +
    +        ...
    +	<bootloader>/usr/bin/pygrub</bootloader>
    +	<bootloader_args>--append single</bootloader_args>
    +        ...
    + +
    +
    bootloader
    +
The content of the bootloader element provides + a fully-qualified path to the bootloader executable in the + host OS. This bootloader will be run to choose which kernel + to boot. The required output of the bootloader is dependent + on the hypervisor in use. Since 0.1.0
    +
    bootloader_args
    +
    The optional bootloader_args element allows + command line arguments to be passed to the bootloader. + Since 0.2.3 +
    + +
    + +

    Direct kernel boot

    + +

+ When installing a new guest OS it is often useful to boot directly + from a kernel and initrd stored in the host OS, allowing command + line arguments to be passed directly to the installer. This capability + is usually available for both paravirtualized and fully virtualized guests. +

    + +
    +        ...
    +	<os>
    +          <type>hvm</type>
    +          <loader>/usr/lib/xen/boot/hvmloader</loader>
    +          <kernel>/root/f8-i386-vmlinuz</kernel>
    +          <initrd>/root/f8-i386-initrd</initrd>
    +          <cmdline>console=ttyS0 ks=http://example.com/f8-i386/os/</cmdline>
    +	</os>
    +	...
    + +
    +
    type
    +
    This element has the same semantics as described earlier in the + BIOS boot section
    +
loader
    +
    This element has the same semantics as described earlier in the + BIOS boot section
    +
    kernel
    +
    The contents of this element specify the fully-qualified path + to the kernel image in the host OS.
    +
    initrd
    +
    The contents of this element specify the fully-qualified path + to the (optional) ramdisk image in the host OS.
    +
    cmdline
    +
The contents of this element specify arguments to be passed to + the kernel (or installer) at boot time. This is often used to + specify an alternate primary console (e.g. a serial port), or the + installation media source / kickstart file
    +
    + +

    Basic resources

    + +
    +        ...
    +	<memory>524288</memory>
    +	<currentMemory>524288</currentMemory>
    +	<vcpu>1</vcpu>
    +	...
    + +
    +
    memory
    +
The maximum allocation of memory for the guest at boot time. + The units for this value are kilobytes
    +
    currentMemory
    +
The actual allocation of memory for the guest. This value + may be less than the maximum allocation, to allow for ballooning + up the guest's memory on the fly. If this is omitted, it defaults + to the same value as the memory element
    +
    vcpu
    +
    The content of this element defines the number of virtual + CPUs allocated for the guest OS.
    +
    + +

    Lifecycle control

    + +

+ It is sometimes necessary to override the default actions taken + when a guest OS triggers a lifecycle operation. The following + collection of elements allows the actions to be specified. A + common use case is to force a reboot to be treated as a poweroff + when doing the initial OS installation. This allows the VM to be + re-configured for the first post-install bootup. +

    + +
    +        ...
    +	<on_poweroff>destroy</on_poweroff>
    +	<on_reboot>restart</on_reboot>
    +	<on_crash>restart</on_crash>
    +	...
    + +
    +
    on_poweroff
    +
    The content of this element specifies the action to take when + the guest requests a poweroff.
    +
on_reboot
    +
    The content of this element specifies the action to take when + the guest requests a reboot.
    +
on_crash
    +
    The content of this element specifies the action to take when + the guest crashes.
    +
    + +

+ Each of these states allows for the same four possible actions. +

    + +
    +
    destroy
    +
    The domain will be terminated completely and all resources + released
    +
    restart
    +
    The domain will be terminated, and then restarted with + the same configuration
    +
    preserve
    +
The domain will be terminated, and its resources preserved + to allow analysis.
    +
    rename-restart
    +
    The domain will be terminated, and then restarted with + a new name
    +
    + +

    Hypervisor features

    + +

    + Hypervisors may allow certain CPU / machine features to be + toggled on/off. +

    + +
    +        ...
    +	<features>
    +	  <pae/>
    +	  <acpi/>
    +	  <apic/>
    +	</features>
    +	...
    + +

+ All features are listed within the features + element; omitting a togglable feature tag turns it off. + The available features can be found by asking + for the capabilities XML, + but a common set for fully virtualized domains is: +

    + +
    +
    pae
    +
    Physical address extension mode allows 32-bit guests + to address more than 4 GB of memory.
    +
    acpi
    +
    ACPI is useful for power management, for example, with + KVM guests it is required for graceful shutdown to work. +
    +
    + +

    Time keeping

    + +

    + The guest clock is typically initialized from the host clock. + Most operating systems expect the hardware clock to be kept + in UTC, and this is the default. Windows, however, expects + it to be in so called 'localtime'. +

    + +
    +        ...
    +        <clock sync="localtime"/>
    +	...
    + +
    +
    clock
    +
    The sync attribute takes either "utc" or + "localtime" to specify how the guest clock is initialized + in relation to the host OS. +
    +
    + +

    Devices

    + +

+ The final set of XML elements is used to describe devices + provided to the guest domain. All devices occur as children + of the main devices element. + Since 0.1.3 +

    + +
    +        ...
    +        <devices>
    +	  <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
    +          ...
    + +
    +
    emulator
    +
    + The contents of the emulator element specify + the fully qualified path to the device model emulator binary. + The capabilities XML specifies + the recommended default emulator to use for each particular + domain type / architecture combination. +
    +
    + +

    Hard drives, floppy disks, CDROMs

    + +

+ Any device that looks like a disk, be it a floppy, harddisk, + cdrom, or paravirtualized driver, is specified via the disk + element. +

    + +
    +          ...
    +	  <disk type='file'>
+	    <driver name="tap" type="aio"/>
    +	    <source file='/var/lib/xen/images/fv0'/>
    +	    <target dev='hda' bus='ide'/>
    +	  </disk>
    +	  ...
    + +
    +
    disk
    +
    The disk element is the main container for describing + disks. The type attribute is either "file" or "block" + and refers to the underlying source for the disk. The optional + device attribute indicates how the disk is to be exposed + to the guest OS. Possible values for this attribute are "floppy", "disk" + and "cdrom", defaulting to "disk". + Since 0.0.3; "device" attribute since 0.1.4
    +
    source
    +
    If the disk type is "file", then the file attribute + specifies the fully-qualified path to the file holding the disk. If the disk + type is "block", then the dev attribute specifies + the path to the host device to serve as the disk. Since 0.0.3
    +
    target
    +
    The target element controls the bus / device under which the + disk is exposed to the guest OS. The dev attribute indicates + the "logical" device name. The actual device name specified is not guarenteed to map to + the device name in the guest OS. Treat it as a device ordering hint. + The optional bus attribute specifies the type of disk device + to emulate; possible values are driver specific, with typical values being + "ide", "scsi", "virtio", "xen". If omitted, the bus type is inferred from + the style of the device name. eg, a device named 'sda' will typically be + exported using a SCSI bus. + Since 0.0.3; bus attribute since 0.4.3
    +
    driver
    +
    If the hypervisor supports multiple backend drivers, then the optional + driver element allows them to be selected. The name + attribute is the primary backend driver name, while the optional type + attribute provides the sub-type. Since 0.1.8 +
    +
    + +
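As a further sketch (the image path is hypothetical, and this assumes
the optional readonly element is accepted alongside the attributes
described above), the device attribute might be used to expose a
CDROM image read-only to the guest:

          ...
          <disk type='file' device='cdrom'>
            <source file='/var/lib/xen/images/boot.iso'/>
            <target dev='hdc' bus='ide'/>
            <readonly/>
          </disk>
          ...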

    Network interfaces

          ...
          <interface type='bridge'>
            <source bridge='xenbr0'/>
            <mac address='00:16:3e:5d:c7:9e'/>
            <script path='vif-bridge'/>
          </interface>
          ...
    Virtual network
This is the recommended config for general guest connectivity on
hosts with dynamic / wireless networking configs.

Provides a virtual network using a bridge device in the host.
Depending on the virtual network configuration, the network may be
totally isolated, NAT'ing to an explicit network device, or NAT'ing
to the default route. DHCP and DNS are provided on the virtual
network in all cases, and the IP range can be determined by
examining the virtual network config with
'virsh net-dumpxml [networkname]'. There is one virtual network
called 'default' set up out of the box which does NAT'ing to the
default route and has an IP range of 192.168.122.0/255.255.255.0.
Each guest will have an associated tun device created with a name
of vnetN, which can also be overridden with the <target> element.

      ...
      <interface type='network'>
        <source network='default'/>
      </interface>
      ...
      <interface type='network'>
        <source network='default'/>
        <target dev='vnet7'/>
        <mac address="11:22:33:44:55:66"/>
      </interface>
      ...
Bridge to LAN

This is the recommended config for general guest connectivity on
hosts with static wired networking configs.

Provides a bridge from the VM directly onto the LAN. This assumes
there is a bridge device on the host which has one or more of the
host's physical NICs enslaved. The guest VM will have an associated
tun device created with a name of vnetN, which can also be
overridden with the <target> element. The tun device will be
enslaved to the bridge. The IP range / network configuration is
whatever is used on the LAN. This provides the guest VM full
incoming & outgoing net access, just like a physical machine.

      ...
      <interface type='bridge'>
        <source bridge='br0'/>
      </interface>

      <interface type='bridge'>
        <source bridge='br0'/>
        <target dev='vnet7'/>
        <mac address="11:22:33:44:55:66"/>
      </interface>
      ...
    Userspace SLIRP stack
Provides a virtual LAN with NAT to the outside world. The virtual
network has DHCP & DNS services and will give the guest VM addresses
starting from 10.0.2.15. The default router will be 10.0.2.2 and the
DNS server will be 10.0.2.3. This networking is the only option for
unprivileged users who need their VMs to have outgoing access.

      ...
      <interface type='user'/>
      ...
      <interface type='user'>
        <mac address="11:22:33:44:55:66"/>
      </interface>
      ...
    Generic ethernet connection
Provides a means for the administrator to execute an arbitrary
script to connect the guest's network to the LAN. The guest will
have a tun device created with a name of vnetN, which can also be
overridden with the <target> element. After creating the tun device
a shell script will be run which is expected to do whatever host
network integration is required. By default this script is called
/etc/qemu-ifup but can be overridden.

      ...
      <interface type='ethernet'/>
      ...
      <interface type='ethernet'>
        <target dev='vnet7'/>
        <script path='/etc/qemu-ifup-mynet'/>
      </interface>
      ...
    Multicast tunnel
A multicast group is set up to represent a virtual network. Any VMs
whose network devices are in the same multicast group can talk to
each other, even across hosts. This mode is also available to
unprivileged users. There is no default DNS or DHCP support and no
outgoing network access. To provide outgoing network access, one of
the VMs should have a second NIC which is connected to one of the
first four network types and do the appropriate routing. The
multicast protocol is compatible with that used by User Mode Linux
guests too. The source address used must be from the multicast
address block.

      ...
      <interface type='mcast'>
        <source address='230.0.0.1' port='5558'/>
      </interface>
      ...
    TCP tunnel
A TCP client/server architecture provides a virtual network. One VM
provides the server end of the network, all other VMs are configured
as clients. All network traffic is routed between the VMs via the
server. This mode is also available to unprivileged users. There is
no default DNS or DHCP support and no outgoing network access. To
provide outgoing network access, one of the VMs should have a second
NIC which is connected to one of the first four network types and do
the appropriate routing.

      ...
      <interface type='server'>
        <source address='192.168.0.1' port='5558'/>
      </interface>
      ...
      <interface type='client'>
        <source address='192.168.0.1' port='5558'/>
      </interface>
      ...

    Input devices

Input devices allow interaction with the graphical framebuffer in
the guest virtual machine. When enabling the framebuffer, an input
device is automatically provided. It may be possible to add
additional devices explicitly, for example, to provide a graphics
tablet for absolute cursor movement.

          ...
          <input type='mouse' bus='usb'/>
          ...

input
    The input element has one mandatory attribute, the type, whose
    value can be either 'mouse' or 'tablet'. The latter provides
    absolute cursor movement (as sketched below), while the former
    uses relative movement. The optional bus attribute can be used
    to refine the exact device type. It takes values "xen"
    (paravirtualized), "ps2" and "usb".
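As a sketch of the tablet case noted above (the bus value is
illustrative), an absolute-movement pointing device might be
declared as:

          ...
          <input type='tablet' bus='usb'/>
          ...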

    Graphical framebuffers

A graphics device allows for graphical interaction with the guest
OS. A guest will typically have either a framebuffer or a text
console configured to allow interaction with the admin.

          ...
          <graphics type='vnc' port='5904'/>
          ...

graphics
    The graphics element has a mandatory type attribute which takes
    the value "sdl" or "vnc". The former displays a window on the
    host desktop, while the latter activates a VNC server. If the
    latter is used, the port attribute specifies the TCP port number
    (with -1 indicating that it should be auto-allocated). The
    listen attribute is an IP address for the server to listen on.
    The passwd attribute provides a VNC password in clear text (see
    the sketch below).
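As a sketch only (the listen address and password value are
illustrative), a VNC server restricted to the loopback interface
with an auto-allocated port might be declared as:

          ...
          <graphics type='vnc' port='-1' listen='127.0.0.1' passwd='example'/>
          ...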

    Consoles, serial & parallel devices

A character device provides a way to interact with the virtual
machine. Paravirtualized consoles, serial ports and parallel ports
are all classed as character devices and so are represented using
the same syntax.

        ...
        <parallel type='pty'>
          <source path='/dev/pts/2'/>
          <target port='0'/>
        </parallel>
        <serial type='pty'>
          <source path='/dev/pts/3'/>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <source path='/dev/pts/4'/>
          <target port='0'/>
        </console>
        </devices>
      </domain>

parallel
    Represents a parallel port.
serial
    Represents a serial port.
console
    Represents the primary console. This can be the paravirtualized
    console with Xen guests, or a duplicate of the primary serial
    port for fully virtualized guests without a paravirtualized
    console.
source
    The attributes available for the source element vary according
    to the type attribute on the parent tag. Allowed variations are
    described below.
target
    The port number of the character device is specified via the
    port attribute, numbered starting from 1. There is usually only
    one console device, and 0, 1 or 2 serial or parallel devices.
    Domain logfile
This disables all input on the character device, and sends output
into the virtual machine's logfile.

      ...
      <console type='stdio'>
        <target port='1'/>
      </console>
      ...
    Device logfile
A file is opened and all data sent to the character device is
written to the file.

      ...
      <serial type="file">
        <source path="/var/log/vm/vm-serial.log"/>
        <target port="1"/>
      </serial>
      ...
    Virtual console
Connects the character device to the graphical framebuffer in a
virtual console. This is typically accessed via a special hotkey
sequence such as "ctrl+alt+3".

      ...
      <serial type='vc'>
        <target port="1"/>
      </serial>
      ...
    Null device
Connects the character device to the void. No data is ever provided
to the input. All data written is discarded.

      ...
      <serial type='null'>
        <target port="1"/>
      </serial>
      ...
    Pseudo TTY
A Pseudo TTY is allocated using /dev/ptmx. A suitable client such as
'virsh console' can connect to interact with the serial port
locally.

      ...
      <serial type="pty">
        <source path="/dev/pts/3"/>
        <target port="1"/>
      </serial>
      ...

NB: as a special case, if <console type='pty'>, then the TTY path is
also duplicated as an attribute tty='/dev/pts/3' on the top level
<console> tag. This provides compat with existing syntax for
<console> tags.
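As a sketch of that special case (the pty path shown is
illustrative), the duplicated attribute would look like:

      ...
      <console type='pty' tty='/dev/pts/3'>
        <source path='/dev/pts/3'/>
        <target port='0'/>
      </console>
      ...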
    Host device proxy
The character device is passed through to the underlying physical
character device. The device types must match, e.g. the emulated
serial port should only be connected to a host serial port - don't
connect a serial port to a parallel port.

      ...
      <serial type="dev">
        <source path="/dev/ttyS0"/>
        <target port="1"/>
      </serial>
      ...
    TCP client/server
The character device acts as a TCP client connecting to a remote
server, or as a server waiting for a client connection.

      ...
      <serial type="tcp">
        <source mode="connect" host="0.0.0.0" service="2445"/>
        <wiremode type="telnet"/>
        <target port="1"/>
      </serial>
      ...
    UDP network console
The character device acts as a UDP netconsole service, sending and
receiving packets. This is a lossy service.

      ...
      <serial type="udp">
        <source mode="bind" host="0.0.0.0" service="2445"/>
        <source mode="connect" host="0.0.0.0" service="2445"/>
        <target port="1"/>
      </serial>
      ...
    UNIX domain socket client/server
The character device acts as a UNIX domain socket server, accepting
connections from local clients.

      ...
      <serial type="unix">
        <source mode="bind" path="/tmp/foo"/>
        <target port="1"/>
      </serial>
      ...

    Example configs

Example configurations for each driver are provided on the

diff --git a/docs/page.xsl b/docs/page.xsl
index c306982be8..88b2fa7929 100644
--- a/docs/page.xsl
+++ b/docs/page.xsl
@@ -62,28 +62,30 @@
