diff --git a/ChangeLog b/ChangeLog index 1556b5e049..789737d761 100644 --- a/ChangeLog +++ b/ChangeLog @@ -1,3 +1,9 @@ +Wed Feb 20 10:50:27 EST 2008 Daniel P. Berrange + + * docs/libvir.html, docs/site.xsl: Added webpage describing + the storage management capabilities + * docs/*.html: Re-generate content + Wed Feb 20 10:49:27 EST 2008 Daniel P. Berrange * src/test.c: no-op stub impl of storage APIs diff --git a/docs/FAQ.html b/docs/FAQ.html index 5c3d47a00e..2186e98cf1 100644 --- a/docs/FAQ.html +++ b/docs/FAQ.html @@ -77,4 +77,4 @@ via the pkg-config command line tool, like:

pkg-config libvirt --libs

-

+

diff --git a/docs/architecture.html b/docs/architecture.html index e9aa01af97..7a8ca26512 100644 --- a/docs/architecture.html +++ b/docs/architecture.html @@ -64,4 +64,4 @@ drivers present in driver.h:

Note that a given driver may only implement a subset of those functions, (for example saving a Xen domain state to disk and restoring it is only possible though the Xen Daemon), in that case the driver entry points for -unsupported functions are initialized to NULL.

+unsupported functions are initialized to NULL.

diff --git a/docs/auth.html b/docs/auth.html index 6ba191aa5d..261a02b24e 100644 --- a/docs/auth.html +++ b/docs/auth.html @@ -140,4 +140,4 @@ Any client application wishing to connect to a Kerberos enabled libvirt server merely needs to run kinit to gain a user principle. This may well be done automatically when a user logs into a desktop session, if PAM is setup to authenticate against Kerberos. -

+

diff --git a/docs/bugs.html b/docs/bugs.html index 89fd3128ed..0b143273d7 100644 --- a/docs/bugs.html +++ b/docs/bugs.html @@ -14,4 +14,4 @@ network. Use the settings:

But there is no guarantee that someone will be watching or able to reply, -use the mailing-list if you don't get an answer there.

+use the mailing-list if you don't get an answer there.

diff --git a/docs/downloads.html b/docs/downloads.html index f5aad09051..e7727456bb 100644 --- a/docs/downloads.html +++ b/docs/downloads.html @@ -7,4 +7,4 @@ available, first register onto the server:

cvs -d :pserver:anoncvs@l checkout the development tree with:

cvs -d :pserver:anoncvs@libvirt.org:2401/data/cvs co libvirt

Use ./autogen.sh to configure the local checkout, then make and make install, as usual. All normal cvs commands are now -available except commiting to the base.

+available except committing to the base.

diff --git a/docs/errors.html b/docs/errors.html index 5d7bab3e79..110c1e9980 100644 --- a/docs/errors.html +++ b/docs/errors.html @@ -66,4 +66,4 @@ this point, see the error.py example about it:

def handler(ctxt, err):
 
 libvirt.registerErrorHandler(handler, 'context') 

the second argument to the registerErrorHandler function is passed as the first argument of the callback like in the C version. The error is a tuple -containing the same field as a virError in C, but cast to Python.

+containing the same field as a virError in C, but cast to Python.

diff --git a/docs/format.html b/docs/format.html index 29fb10a343..716c589c1b 100644 --- a/docs/format.html +++ b/docs/format.html @@ -418,4 +418,4 @@ Xen support, you will see the os_type of xen to indicate a paravirtual kernel, then architecture informations and potential features.

The third block (in green) gives similar informations but when running a 32 bit OS fully virtualized with Xen using the hvm support.

This section is likely to be updated and augmented in the future, see the discussion which led to the capabilities format in the mailing-list -archives.

+archives.

diff --git a/docs/hvsupport.html b/docs/hvsupport.html index e2b5521fad..618bd5faa3 100644 --- a/docs/hvsupport.html +++ b/docs/hvsupport.html @@ -392,4 +392,4 @@ first appeared in libvirt 0.2.0. virNetworkLookupByUUIDString 0.2.0 virNetworkSetAutostart 0.2.1 virNetworkUndefine 0.2.0 -

+

diff --git a/docs/index.html b/docs/index.html index 21b21528ca..6b4eca0273 100644 --- a/docs/index.html +++ b/docs/index.html @@ -77,6 +77,9 @@ virtualization mechanisms. It currently also supports Hypervisor support +
  • + Storage Management +
  • API Menu
  • diff --git a/docs/intro.html b/docs/intro.html index ff285d83d4..4b7c25816f 100644 --- a/docs/intro.html +++ b/docs/intro.html @@ -28,4 +28,4 @@ exception being domain migration between node capabilities which may need to be added at the libvirt level). Where possible libvirt should be extendable to be able to provide the same API for remote nodes, however this is not the case at the moment, the code currently handle only local node accesses -(extension for remote access support is being worked on, see the mailing list discussions about it).

    +(extension for remote access support is being worked on, see the mailing list discussions about it).

    diff --git a/docs/libvir.html b/docs/libvir.html index 7e464aeb45..ac5fe72291 100644 --- a/docs/libvir.html +++ b/docs/libvir.html @@ -3914,5 +3914,581 @@ first appeared in libvirt 0.2.0. +

    Storage Management

    + +

    +This page describes the storage management capabilities in +libvirt. +

    + +
      +
    • Core concepts
    • +
    • Storage pool XML + +
    • +
    • Storage volume XML + +
    • +
    • Storage backend drivers + + +

      Core concepts

      + +

+The storage management APIs are based around two core concepts: +

      + +
        +
      1. Volume - a single storage volume which can +be assigned to a guest, or used for creating further pools. A +volume is either a block device, a raw file, or a special format +file.
      2. +
3. Pool - provides a means for taking a chunk +of storage and carving it up into volumes. A pool can be used to +manage things such as a physical disk, an NFS server, an iSCSI target, +a host adapter, an LVM group.
      4. +
      + +

      +These two concepts are mapped through to two libvirt objects, a +virStorageVolPtr and a virStoragePoolPtr, +each with a collection of APIs for their management. +

      + + +

      Storage pool XML

      + +

      +Although all storage pool backends share the same public APIs and +XML format, they have varying levels of capabilities. Some may +allow creation of volumes, others may only allow use of pre-existing +volumes. Some may have constraints on volume size, or placement. +

      + +

The top level tag for a storage pool document is 'pool'. It has +a single attribute type, which is one of dir, +fs, netfs, disk, iscsi, or +logical. This corresponds to the storage backend drivers +listed further along in this document. +

      + + +
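The top level pool element and its type attribute can be checked with a few lines of standard-library Python; the minimal document below is a hypothetical example, sketched for illustration:

```python
import xml.etree.ElementTree as ET

# Backend driver names listed in this page; the XML document here
# is a hypothetical minimal pool definition.
POOL_TYPES = {"dir", "fs", "netfs", "disk", "iscsi", "logical"}

root = ET.fromstring('<pool type="dir"><name>virtimages</name></pool>')

assert root.tag == "pool"               # top level tag is 'pool'
assert root.get("type") in POOL_TYPES   # selects the backend driver
print(root.get("type"))
```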

      First level elements

      + +
      +
      name
      +
      Providing a name for the pool which is unique to the host. +This is mandatory when defining a pool
      + +
      uuid
      +
      Providing an identifier for the pool which is globally unique. +This is optional when defining a pool, a UUID will be generated if +omitted
      + +
      allocation
      +
      Providing the total storage allocation for the pool. This may +be larger than the sum of the allocation of all volumes due to +metadata overhead. This value is in bytes. This is not applicable +when creating a pool.
      + +
      capacity
      +
      Providing the total storage capacity for the pool. Due to +underlying device constraints it may not be possible to use the +full capacity for storage volumes. This value is in bytes. This +is not applicable when creating a pool.
      + +
      available
      +
Providing the free space available for allocating new volumes +in the pool. Due to underlying device constraints it may not be +possible to allocate the entire free space to a single volume. +This value is in bytes. This is not applicable when creating a +pool.
      + +
      source
      +
      Provides information about the source of the pool, such as +the underlying host devices, or remote server
      + +
      target
      +
      Provides information about the representation of the pool +on the local host.
      +
      + +
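A sketch of reading these first level elements with Python's standard library; the sample document and byte values are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Invented sample pool document using the first level elements
# described above; all sizes are in bytes.
POOL_XML = """
<pool type="dir">
  <name>virtimages</name>
  <uuid>3e3fce45-4f53-4fa7-bb32-11f34168b82b</uuid>
  <capacity>107374182400</capacity>
  <allocation>53687091200</allocation>
  <available>53687091200</available>
</pool>
"""
pool = ET.fromstring(POOL_XML)

name = pool.findtext("name")
capacity = int(pool.findtext("capacity"))    # total size, bytes
available = int(pool.findtext("available"))  # free space, bytes

print(f"{name}: {available} of {capacity} bytes free")
```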

      Source elements

      + +
      +
      device
      +
      Provides the source for pools backed by physical devices. +May be repeated multiple times depending on backend driver. Contains +a single attribute path which is the fully qualified +path to the block device node.
      +
      directory
      +
Provides the source for pools backed by directories. May +only occur once. Contains a single attribute path +which is the fully qualified path to the backing directory.
      +
      host
      +
      Provides the source for pools backed by storage from a +remote server. Will be used in combination with a directory +or device element. Contains an attribute name +which is the hostname or IP address of the server. May optionally +contain a port attribute for the protocol specific +port number.
      +
      format
      +
      Provides information about the format of the pool. This +contains a single attribute type whose value is +backend specific. This is typically used to indicate filesystem +type, or network filesystem type, or partition table type, or +LVM metadata type. All drivers are required to have a default +value for this, so it is optional.
      +
      + +

      Target elements

      + +
      +
      path
      +
Provides the location at which the pool will be mapped into +the local filesystem namespace. For a filesystem/directory based +pool it will be the name of the directory in which volumes will +be created. For device based pools it will be the directory in which +device nodes exist. For the latter /dev/ may seem +like the logical choice, however, device nodes there are not +guaranteed stable across reboots, since they are allocated on +demand. It is preferable to use a stable location such as one +of the /dev/disk/by-{path,id,uuid,label} locations. +
      +
      permissions
      +
      Provides information about the default permissions to use +when creating volumes. This is currently only useful for directory +or filesystem based pools, where the volumes allocated are simple +files. For pools where the volumes are device nodes, the hotplug +scripts determine permissions. It contains 4 child elements. The +mode element contains the octal permission set. The +owner element contains the numeric user ID. The group +element contains the numeric group ID. The label element +contains the MAC (eg SELinux) label string. +
      +
      + +
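The permissions block with its four child elements can be generated programmatically; the concrete values below (mode 0700, uid/gid 107, an SELinux label) are arbitrary examples:

```python
import xml.etree.ElementTree as ET

# Build a <permissions> element with the 4 children described above.
# The values are arbitrary examples, not defaults.
perms = ET.Element("permissions")
for tag, value in [("mode", "0700"),        # octal permission set
                   ("owner", "107"),        # numeric user ID
                   ("group", "107"),        # numeric group ID
                   ("label", "virt_image_t")]:  # MAC (eg SELinux) label
    ET.SubElement(perms, tag).text = value

xml = ET.tostring(perms, encoding="unicode")
print(xml)
```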

      Device extents

      + +

+If a storage pool exposes information about its underlying +placement / allocation scheme, the device element +within the source element may contain information +about its available extents. Some pools have a constraint that +a volume must be allocated entirely within a single free extent +(eg disk partition pools). Thus the extent information allows an +application to determine the maximum possible size for a new +volume. +

      + +

      +For storage pools supporting extent information, within each +device element there will be zero or more freeExtent +elements. Each of these elements contains two attributes, start +and end which provide the boundaries of the extent on the +device, measured in bytes. +

      + +
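Since start and end are byte offsets, the largest creatable volume in an extent-constrained pool is simply the largest end - start difference. A sketch, with invented offsets:

```python
import xml.etree.ElementTree as ET

# Hypothetical <device> element with freeExtent children; start/end
# are byte offsets, so each extent spans (end - start) bytes.
DEVICE_XML = """
<device path="/dev/sda">
  <freeExtent start="32256" end="1000202239"/>
  <freeExtent start="2000404480" end="4000808959"/>
</device>
"""
device = ET.fromstring(DEVICE_XML)

sizes = [int(e.get("end")) - int(e.get("start"))
         for e in device.findall("freeExtent")]

# Where a volume must fit in a single free extent (eg disk pools),
# the largest extent bounds the size of a new volume.
print(max(sizes))
```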

      Storage volume XML

      + +

      +A storage volume will be either a file or a device node. +

      + +

      First level elements

      + +
      +
      name
      +
Providing a name for the volume which is unique to the pool. +This is mandatory when creating a volume
      + +
      uuid
      +
Providing an identifier for the volume which is globally unique. +This is optional when creating a volume, a UUID will be generated if +omitted
      + +
      allocation
      +
      Providing the total storage allocation for the volume. This +may be smaller than the logical capacity if the volume is sparsely +allocated. It may also be larger than the logical capacity if the +volume has substantial metadata overhead. This value is in bytes. +If omitted when creating a volume, the volume will be fully +allocated at time of creation. If set to a value smaller than the +capacity, the pool has the option of deciding +to sparsely allocate a volume. It does not have to honour requests +for sparse allocation though.
      + +
      capacity
      +
Providing the logical capacity for the volume. This value is +in bytes. This is compulsory when creating a volume.
      + +
      source
      +
      Provides information about the underlying storage allocation +of the volume. This may not be available for some pool types.
      + +
      target
      +
      Provides information about the representation of the volume +on the local host.
      +
      + +
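The relationship between allocation and capacity described above determines whether a volume is sparse. A sketch over an invented volume document:

```python
import xml.etree.ElementTree as ET

# Invented volume document: capacity (logical size) larger than
# allocation (bytes actually consumed) indicates sparse allocation.
VOLUME_XML = """
<volume>
  <name>guest.img</name>
  <capacity>10737418240</capacity>
  <allocation>1073741824</allocation>
</volume>
"""
vol = ET.fromstring(VOLUME_XML)

capacity = int(vol.findtext("capacity"))      # logical size, bytes
allocation = int(vol.findtext("allocation"))  # bytes actually used

print("sparse" if allocation < capacity else "fully allocated")
```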

      Target elements

      + +
      +
      path
      +
Provides the location at which the volume can be accessed on +the local filesystem, as an absolute path. +
      +
      format
      +
Provides information about the pool specific volume format. +For disk pools it will provide the partition type. For filesystem +or directory pools it will provide the file format type, eg cow, +qcow, vmdk, raw. If omitted when creating a volume, the pool's +default format will be used. The actual format is specified via +the type attribute. Consult the pool-specific docs for the +list of valid values.
      +
      permissions
      +
      Provides information about the default permissions to use +when creating volumes. This is currently only useful for directory +or filesystem based pools, where the volumes allocated are simple +files. For pools where the volumes are device nodes, the hotplug +scripts determine permissions. It contains 4 child elements. The +mode element contains the octal permission set. The +owner element contains the numeric user ID. The group +element contains the numeric group ID. The label element +contains the MAC (eg SELinux) label string. +
      +
      + + + +

      Storage backend drivers

      + +

+This section illustrates the capabilities / format for each of +the different backend storage pool drivers. +

      + +

      Directory pool

      + +

+A pool with a type of dir provides the means to manage +files within a directory. The files can be fully allocated raw files, +sparsely allocated raw files, or one of the special disk formats +such as qcow, qcow2, vmdk, +cow, etc, as supported by the qemu-img +program. If the directory does not exist at the time the pool is +defined, the build operation can be used to create it. +

      + +
      Example pool input definition
      + +
      +<pool type="dir">
      +  <name>virtimages</name>
      +  <target>
      +    <path>/var/lib/virt/images</path>
      +  </target>
      +</pool>
      +
      + +
      Valid pool format types
      + +

      +The directory pool does not use the pool format type element. +

      + +
      Valid volume format types
      + +

      +One of the following options: +

      + +
        +
      • raw: a plain file
      • +
      • bochs: Bochs disk image format
      • +
      • cloop: compressed loopback disk image format
      • +
      • cow: User Mode Linux disk image format
      • +
      • dmg: Mac disk image format
      • +
      • iso: CDROM disk image format
      • +
      • qcow: QEMU v1 disk image format
      • +
      • qcow2: QEMU v2 disk image format
      • +
      • vmdk: VMWare disk image format
      • +
      • vpc: VirtualPC disk image format
      • +
      + +

+When listing existing volumes all these formats are supported +natively. When creating new volumes, only a subset may be +available. The raw type is guaranteed to always be +available. The qcow2 type can be created if +either the qemu-img or qcow-create tools +are present. The others are dependent on support of the +qemu-img tool. + +
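That fallback (raw always works, qcow2 only when a helper tool is on the PATH, everything else via qemu-img) can be sketched as a small probe; the helper function name is hypothetical, the tool names come from the text:

```python
import shutil

def usable_volume_formats():
    """Hypothetical helper: raw is always available; qcow2 needs
    qemu-img or qcow-create on PATH; other formats need qemu-img."""
    formats = {"raw"}
    if shutil.which("qemu-img"):
        formats |= {"qcow", "qcow2", "cow", "vmdk", "bochs",
                    "cloop", "dmg", "iso", "vpc"}
    elif shutil.which("qcow-create"):
        formats.add("qcow2")
    return formats

print(sorted(usable_volume_formats()))
```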

      Filesystem pool

      + +

      +This is a variant of the directory pool. Instead of creating a +directory on an existing mounted filesystem though, it expects +a source block device to be named. This block device will be +mounted and files managed in the directory of its mount point. +It will default to allowing the kernel to automatically discover +the filesystem type, though it can be specified manually if +required. +

      + +
      Example pool input
      + +
      +<pool type="fs">
      +  <name>virtimages</name>
      +  <source>
      +    <device path="/dev/VolGroup00/VirtImages"/>
      +  </source>
      +  <target>
      +    <path>/var/lib/virt/images</path>
      +  </target>
      +</pool>
      +
      + +
      Valid pool format types
      + +

+The filesystem pool supports the following formats: +

      + +
        +
      • auto - automatically determine format
      • +
      • ext2
      • +
      • ext3
      • +
      • ext4
      • +
      • ufs
      • +
      • iso9660
      • +
      • udf
      • +
      • gfs
      • +
      • gfs2
      • +
      • vfat
      • +
      • hfs+
      • +
      • xfs
      • +
      + +
      Valid volume format types
      + +

      +The valid volume types are the same as for the directory +pool type. +

      + +

      Network filesystem pool

      + +

      +This is a variant of the filesystem pool. Instead of requiring +a local block device as the source, it requires the name of a +host and path of an exported directory. It will mount this network +filesystem and manage files within the directory of its mount +point. It will default to using NFS as the protocol. +

      + +
      Example pool input
      + +
      +<pool type="netfs">
      +  <name>virtimages</name>
      +  <source>
      +    <host name="nfs.example.com"/>
      +    <dir path="/var/lib/virt/images"/>
      +  </source>
      +  <target>
      +    <path>/var/lib/virt/images</path>
      +  </target>
      +</pool>
      +
      + +
      Valid pool format types
      + +

+The network filesystem pool supports the following formats: +

      + +
        +
      • auto - automatically determine format
      • +
      • nfs
      • +
      + +
      Valid volume format types
      + +

      +The valid volume types are the same as for the directory +pool type. +

      + +

      Logical volume pools

      + +

      +This provides a pool based on an LVM volume group. For a +pre-defined LVM volume group, simply providing the group +name is sufficient, while to build a new group requires +providing a list of source devices to serve as physical +volumes. Volumes will be allocated by carving out chunks +of storage from the volume group. +

      + +
      Example pool input
      + +
      +<pool type="logical">
      +  <name>HostVG</name>
      +  <source>
      +    <device path="/dev/sda1"/>
      +    <device path="/dev/sdb1"/>
      +    <device path="/dev/sdc1"/>
      +  </source>
      +  <target>
      +    <path>/dev/HostVG</path>
      +  </target>
      +</pool>
      +
      + +
      Valid pool format types
      + +

      +The logical volume pool does not use the pool format type element. +

      + +
      Valid volume format types
      + +

      +The logical volume pool does not use the volume format type element. +

      + + +

      Disk volume pools

      + +

+This provides a pool based on a physical disk. Volumes are created +by adding partitions to the disk. Disk pools have constraints +on the size and placement of volumes. The 'free extents' +information will detail the regions which are available for creating +new volumes. A volume cannot span across two different free extents. +

      + +
      Example pool input
      + +
      +<pool type="disk">
      +  <name>sda</name>
      +  <source>
      +    <device path='/dev/sda'/>
      +  </source>
      +  <target>
      +    <path>/dev</path>
      +  </target>
      +</pool>
      +
      + +
      Valid pool format types
      + +

      +The disk volume pool accepts the following pool format types, representing +the common partition table types: +

      + +
        +
      • dos
      • +
      • dvh
      • +
      • gpt
      • +
      • mac
      • +
      • bsd
      • +
      • pc98
      • +
      • sun
      • +
      + +

      +The dos or gpt formats are recommended for +best portability - the latter is needed for disks larger than 2TB. +

      + +
      Valid volume format types
      + +

      +The disk volume pool accepts the following volume format types, representing +the common partition entry types: +

      + +
        +
      • none
      • +
      • linux
      • +
      • fat16
      • +
      • fat32
      • +
      • linux-swap
      • +
      • linux-lvm
      • +
      • linux-raid
      • +
      • extended
      • +
      + + +

      iSCSI volume pools

      + +

+This provides a pool based on an iSCSI target. Volumes must be +pre-allocated on the iSCSI server, and cannot be created via +the libvirt APIs. Since /dev/XXX names may change each time libvirt +logs into the iSCSI target, it is recommended to configure the pool +to use /dev/disk/by-path or /dev/disk/by-id +for the target path. These provide persistent stable naming for LUNs. +

      + +
      Example pool input
      + +
      +<pool type="iscsi">
      +  <name>virtimages</name>
      +  <source>
      +    <host name="iscsi.example.com"/>
      +    <device path="demo-target"/>
      +  </source>
      +  <target>
      +    <path>/dev/disk/by-path</path>
      +  </target>
      +</pool>
      +
      + +
      Valid pool format types
      + +

+The iSCSI volume pool does not use the pool format type element. +

      + +
      Valid volume format types
      + +

+The iSCSI volume pool does not use the volume format type element. +

      + + + diff --git a/docs/news.html b/docs/news.html index 1047b8ed66..19b8d10228 100644 --- a/docs/news.html +++ b/docs/news.html @@ -424,4 +424,4 @@ and check the ChangeLog to gauge progress.

      0

    0.0.1: Dec 19 2005

    • First release
    • Basic management of existing Xen domains
    • Minimal autogenerated Python bindings
    • -

    +

    diff --git a/docs/python.html b/docs/python.html index b129db5580..884c35f449 100644 --- a/docs/python.html +++ b/docs/python.html @@ -62,4 +62,4 @@ from the C API, the only points to notice are:

    • the import of the modu
    • extracting and printing some informations about the domain using various methods associated to the virDomain class.
    • -

    +

    diff --git a/docs/remote.html b/docs/remote.html index 48ca4365bc..bdc5087a49 100644 --- a/docs/remote.html +++ b/docs/remote.html @@ -650,4 +650,4 @@ also possible.

    The protocol contains support for multiple program types and protocol versioning, modelled after SunRPC. -

    +

    diff --git a/docs/site.xsl b/docs/site.xsl index 977432b447..2619d1fa69 100644 --- a/docs/site.xsl +++ b/docs/site.xsl @@ -66,6 +66,9 @@ windows.html + + storage.html + unknown.html diff --git a/docs/storage.html b/docs/storage.html new file mode 100644 index 0000000000..eb09b9db2b --- /dev/null +++ b/docs/storage.html @@ -0,0 +1,531 @@ + + +Storage Management

    Storage Management

    +This page describes the storage management capabilities in +libvirt. +

    • Core concepts
    • +
    • Storage pool XML +
    • +
    • Storage volume XML +
    • +
    • Storage backend drivers +

      Core concepts

      + +

+The storage management APIs are based around two core concepts: +

      + +
      1. Volume - a single storage volume which can +be assigned to a guest, or used for creating further pools. A +volume is either a block device, a raw file, or a special format +file.
      2. +
3. Pool - provides a means for taking a chunk +of storage and carving it up into volumes. A pool can be used to +manage things such as a physical disk, an NFS server, an iSCSI target, +a host adapter, an LVM group.
      4. +

      +These two concepts are mapped through to two libvirt objects, a +virStorageVolPtr and a virStoragePoolPtr, +each with a collection of APIs for their management. +

      + + +

      Storage pool XML

      + +

      +Although all storage pool backends share the same public APIs and +XML format, they have varying levels of capabilities. Some may +allow creation of volumes, others may only allow use of pre-existing +volumes. Some may have constraints on volume size, or placement. +

      + +

The top level tag for a storage pool document is 'pool'. It has +a single attribute type, which is one of dir, +fs, netfs, disk, iscsi, or +logical. This corresponds to the storage backend drivers +listed further along in this document. +

      + + +

      First level elements

      + +
      name
      +
      Providing a name for the pool which is unique to the host. +This is mandatory when defining a pool
      + +
      uuid
      +
      Providing an identifier for the pool which is globally unique. +This is optional when defining a pool, a UUID will be generated if +omitted
      + +
      allocation
      +
      Providing the total storage allocation for the pool. This may +be larger than the sum of the allocation of all volumes due to +metadata overhead. This value is in bytes. This is not applicable +when creating a pool.
      + +
      capacity
      +
      Providing the total storage capacity for the pool. Due to +underlying device constraints it may not be possible to use the +full capacity for storage volumes. This value is in bytes. This +is not applicable when creating a pool.
      + +
      available
      +
Providing the free space available for allocating new volumes +in the pool. Due to underlying device constraints it may not be +possible to allocate the entire free space to a single volume. +This value is in bytes. This is not applicable when creating a +pool.
      + +
      source
      +
      Provides information about the source of the pool, such as +the underlying host devices, or remote server
      + +
      target
      +
      Provides information about the representation of the pool +on the local host.
      +

      Source elements

      + +
      device
      +
      Provides the source for pools backed by physical devices. +May be repeated multiple times depending on backend driver. Contains +a single attribute path which is the fully qualified +path to the block device node.
      +
      directory
      +
Provides the source for pools backed by directories. May +only occur once. Contains a single attribute path +which is the fully qualified path to the backing directory.
      +
      host
      +
      Provides the source for pools backed by storage from a +remote server. Will be used in combination with a directory +or device element. Contains an attribute name +which is the hostname or IP address of the server. May optionally +contain a port attribute for the protocol specific +port number.
      +
      format
      +
      Provides information about the format of the pool. This +contains a single attribute type whose value is +backend specific. This is typically used to indicate filesystem +type, or network filesystem type, or partition table type, or +LVM metadata type. All drivers are required to have a default +value for this, so it is optional.
      +

      Target elements

      + +
      path
      +
Provides the location at which the pool will be mapped into +the local filesystem namespace. For a filesystem/directory based +pool it will be the name of the directory in which volumes will +be created. For device based pools it will be the directory in which +device nodes exist. For the latter /dev/ may seem +like the logical choice, however, device nodes there are not +guaranteed stable across reboots, since they are allocated on +demand. It is preferable to use a stable location such as one +of the /dev/disk/by-{path,id,uuid,label} locations. +
      +
      permissions
      +
      Provides information about the default permissions to use +when creating volumes. This is currently only useful for directory +or filesystem based pools, where the volumes allocated are simple +files. For pools where the volumes are device nodes, the hotplug +scripts determine permissions. It contains 4 child elements. The +mode element contains the octal permission set. The +owner element contains the numeric user ID. The group +element contains the numeric group ID. The label element +contains the MAC (eg SELinux) label string. +
      +

      Device extents

      + +

+If a storage pool exposes information about its underlying +placement / allocation scheme, the device element +within the source element may contain information +about its available extents. Some pools have a constraint that +a volume must be allocated entirely within a single free extent +(eg disk partition pools). Thus the extent information allows an +application to determine the maximum possible size for a new +volume. +

      + +

      +For storage pools supporting extent information, within each +device element there will be zero or more freeExtent +elements. Each of these elements contains two attributes, start +and end which provide the boundaries of the extent on the +device, measured in bytes. +

      + +

      Storage volume XML

      + +

      +A storage volume will be either a file or a device node. +

      + +

      First level elements

      + +
      name
      +
Providing a name for the volume which is unique to the pool. +This is mandatory when creating a volume
      + +
      uuid
      +
Providing an identifier for the volume which is globally unique. +This is optional when creating a volume, a UUID will be generated if +omitted
      + +
      allocation
      +
      Providing the total storage allocation for the volume. This +may be smaller than the logical capacity if the volume is sparsely +allocated. It may also be larger than the logical capacity if the +volume has substantial metadata overhead. This value is in bytes. +If omitted when creating a volume, the volume will be fully +allocated at time of creation. If set to a value smaller than the +capacity, the pool has the option of deciding +to sparsely allocate a volume. It does not have to honour requests +for sparse allocation though.
      + +
      capacity
      +
Providing the logical capacity for the volume. This value is +in bytes. This is compulsory when creating a volume.
      + +
      source
      +
      Provides information about the underlying storage allocation +of the volume. This may not be available for some pool types.
      + +
      target
      +
      Provides information about the representation of the volume +on the local host.
      +

      Target elements

      + +
      path
      +
Provides the location at which the volume can be accessed on +the local filesystem, as an absolute path. +
      +
      format
      +
Provides information about the pool specific volume format. +For disk pools it will provide the partition type. For filesystem +or directory pools it will provide the file format type, eg cow, +qcow, vmdk, raw. If omitted when creating a volume, the pool's +default format will be used. The actual format is specified via +the type attribute. Consult the pool-specific docs for the +list of valid values.
      +
      permissions
      +
      Provides information about the default permissions to use +when creating volumes. This is currently only useful for directory +or filesystem based pools, where the volumes allocated are simple +files. For pools where the volumes are device nodes, the hotplug +scripts determine permissions. It contains 4 child elements. The +mode element contains the octal permission set. The +owner element contains the numeric user ID. The group +element contains the numeric group ID. The label element +contains the MAC (e.g. SELinux) label string. +
      +
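Taken together, the volume elements described above form a complete volume document. A hypothetical input definition for a sparsely allocated raw volume follows; the name, sizes and path are illustrative only, not taken from a real deployment. An allocation of 0 requests sparse allocation, which the pool may or may not honour:

```xml
<volume>
  <name>disk1.raw</name>
  <capacity>10737418240</capacity>
  <allocation>0</allocation>
  <target>
    <path>/var/lib/virt/images/disk1.raw</path>
    <format type="raw"/>
    <permissions>
      <mode>0600</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</volume>
```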

      Storage backend drivers

      + +

      +This section describes the capabilities and format for each of +the different backend storage pool drivers. +

      + +

      Directory pool

      + +

      +A pool with a type of dir provides the means to manage +files within a directory. The files can be fully allocated raw files, +sparsely allocated raw files, or one of the special disk formats +such as qcow, qcow2, vmdk, +cow, etc. as supported by the qemu-img +program. If the directory does not exist at the time the pool is +defined, the build operation can be used to create it. +

      + +
      Example pool input definition
      + +
      +<pool type="dir">
      +  <name>virtimages</name>
      +  <target>
      +    <path>/var/lib/virt/images</path>
      +  </target>
      +</pool>
      +
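A pool document like the example above can also be generated programmatically before being handed to virStoragePoolDefineXML (or virsh pool-define). A minimal sketch using only the Python standard library; the helper name, pool name and path are illustrative assumptions:

```python
import xml.etree.ElementTree as ET

def make_dir_pool_xml(name, path):
    # Build a minimal <pool type="dir"> document: only the pool name
    # and the target path are required by the directory backend.
    pool = ET.Element("pool", type="dir")
    ET.SubElement(pool, "name").text = name
    target = ET.SubElement(pool, "target")
    ET.SubElement(target, "path").text = path
    return ET.tostring(pool, encoding="unicode")

xml = make_dir_pool_xml("virtimages", "/var/lib/virt/images")
# The resulting string could then be passed to the define API, e.g.
# conn.storagePoolDefineXML(xml, 0) with the libvirt Python bindings.
print(xml)
```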
      + +
      Valid pool format types
      + +

      +The directory pool does not use the pool format type element. +

      + +
      Valid volume format types
      + +

      +One of the following options: +

      + +
      • raw: a plain file
      • +
      • bochs: Bochs disk image format
      • +
      • cloop: compressed loopback disk image format
      • +
      • cow: User Mode Linux disk image format
      • +
      • dmg: Mac disk image format
      • +
      • iso: CDROM disk image format
      • +
      • qcow: QEMU v1 disk image format
      • +
      • qcow2: QEMU v2 disk image format
      • +
      • vmdk: VMWare disk image format
      • +
      • vpc: VirtualPC disk image format
      • +

      +When listing existing volumes all these formats are supported +natively. When creating new volumes, only a subset may be +available. The raw type is guaranteed to always be +available. The qcow2 type can be created if +either the qemu-img or qcow-create tool +is present. The others are dependent on support in the +qemu-img tool. + +
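The availability rules above can be probed from a script by checking which tools are on the host. A hedged sketch; the helper name is invented, and libvirt performs an equivalent check internally rather than exposing this function:

```python
import shutil

def creatable_volume_formats():
    # Mirror the documented rules: raw always works; qcow2 needs
    # qemu-img or qcow-create; all other formats need qemu-img.
    formats = {"raw"}
    have_qemu_img = shutil.which("qemu-img") is not None
    if have_qemu_img or shutil.which("qcow-create") is not None:
        formats.add("qcow2")
    if have_qemu_img:
        formats.update({"bochs", "cloop", "cow", "dmg", "iso",
                        "qcow", "vmdk", "vpc"})
    return formats

print(sorted(creatable_volume_formats()))
```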

      Filesystem pool

      + +

      +This is a variant of the directory pool. Instead of creating a +directory on an existing mounted filesystem though, it expects +a source block device to be named. This block device will be +mounted and files managed in the directory of its mount point. +It will default to allowing the kernel to automatically discover +the filesystem type, though it can be specified manually if +required. +

      + +
      Example pool input
      + +
      +<pool type="fs">
      +  <name>virtimages</name>
      +  <source>
      +    <device path="/dev/VolGroup00/VirtImages"/>
      +  </source>
      +  <target>
      +    <path>/var/lib/virt/images</path>
      +  </target>
      +</pool>
      +
      + +
      Valid pool format types
      + +

      +The filesystem pool supports the following formats: +

      + +
      • auto - automatically determine format
      • +
      • ext2
      • +
      • ext3
      • +
      • ext4
      • +
      • ufs
      • +
      • iso9660
      • +
      • udf
      • +
      • gfs
      • +
      • gfs2
      • +
      • vfat
      • +
      • hfs+
      • +
      • xfs
      • +
      Valid volume format types
      + +

      +The valid volume types are the same as for the directory +pool type. +

      + +

      Network filesystem pool

      + +

      +This is a variant of the filesystem pool. Instead of requiring +a local block device as the source, it requires the name of a +host and path of an exported directory. It will mount this network +filesystem and manage files within the directory of its mount +point. It will default to using NFS as the protocol. +

      + +
      Example pool input
      + +
      +<pool type="netfs">
      +  <name>virtimages</name>
      +  <source>
      +    <host name="nfs.example.com"/>
      +    <dir path="/var/lib/virt/images"/>
      +  </source>
      +  <target>
      +    <path>/var/lib/virt/images</path>
      +  </target>
      +</pool>
      +
      + +
      Valid pool format types
      + +

      +The network filesystem pool supports the following formats: +

      + +
      • auto - automatically determine format
      • +
      • nfs
      • +
      Valid volume format types
      + +

      +The valid volume types are the same as for the directory +pool type. +

      + +

      Logical volume pools

      + +

      +This provides a pool based on an LVM volume group. For a +pre-defined LVM volume group, simply providing the group +name is sufficient, while to build a new group requires +providing a list of source devices to serve as physical +volumes. Volumes will be allocated by carving out chunks +of storage from the volume group. +

      + +
      Example pool input
      + +
      +<pool type="logical">
      +  <name>HostVG</name>
      +  <source>
      +    <device path="/dev/sda1"/>
      +    <device path="/dev/sdb1"/>
      +    <device path="/dev/sdc1"/>
      +  </source>
      +  <target>
      +    <path>/dev/HostVG</path>
      +  </target>
      +</pool>
      +
      + +
      Valid pool format types
      + +

      +The logical volume pool does not use the pool format type element. +

      + +
      Valid volume format types
      + +

      +The logical volume pool does not use the volume format type element. +

      + + +

      Disk volume pools

      + +

      +This provides a pool based on a physical disk. Volumes are created +by adding partitions to the disk. Disk pools have constraints +on the size and placement of volumes. The 'free extents' +information will detail the regions which are available for creating +new volumes. A volume cannot span two different free extents. +

      + +
      Example pool input
      + +
      +<pool type="disk">
      +  <name>sda</name>
      +  <source>
      +    <device path='/dev/sda'/>
      +  </source>
      +  <target>
      +    <path>/dev</path>
      +  </target>
      +</pool>
      +
      + +
      Valid pool format types
      + +

      +The disk volume pool accepts the following pool format types, representing +the common partition table types: +

      + +
      • dos
      • +
      • dvh
      • +
      • gpt
      • +
      • mac
      • +
      • bsd
      • +
      • pc98
      • +
      • sun
      • +

      +The dos or gpt formats are recommended for +best portability - the latter is needed for disks larger than 2TB. +
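The 2TB cut-off for the dos format follows from its on-disk layout: the MBR partition table stores sector offsets and counts in 32-bit fields, so with the traditional 512-byte sector at most 2^32 × 512 bytes are addressable. A quick check of the arithmetic:

```python
SECTOR_SIZE = 512      # bytes: the traditional disk sector size
MAX_SECTORS = 2 ** 32  # dos/MBR uses 32-bit sector offset/count fields

# The largest byte range a dos partition table can address.
max_bytes = MAX_SECTORS * SECTOR_SIZE
print(max_bytes)  # 2199023255552 bytes, i.e. 2 TiB
```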

      + +
      Valid volume format types
      + +

      +The disk volume pool accepts the following volume format types, representing +the common partition entry types: +

      + +
      • none
      • +
      • linux
      • +
      • fat16
      • +
      • fat32
      • +
      • linux-swap
      • +
      • linux-lvm
      • +
      • linux-raid
      • +
      • extended
      • +

      iSCSI volume pools

      + +

      +This provides a pool based on an iSCSI target. Volumes must be +pre-allocated on the iSCSI server, and cannot be created via +the libvirt APIs. Since /dev/XXX names may change each time libvirt +logs into the iSCSI target, it is recommended to configure the pool +to use /dev/disk/by-path or /dev/disk/by-id +for the target path. These provide persistent, stable naming for LUNs. +

      + +
      Example pool input
      + +
      +<pool type="iscsi">
      +  <name>virtimages</name>
      +  <source>
      +    <host name="iscsi.example.com"/>
      +    <device path="demo-target"/>
      +  </source>
      +  <target>
      +    <path>/dev/disk/by-path</path>
      +  </target>
      +</pool>
      +
      + +
      Valid pool format types
      + +

      +The iSCSI volume pool does not use the pool format type element. +

      + +
      Valid volume format types
      + +

      +The iSCSI volume pool does not use the volume format type element. +

      + + + +

    diff --git a/docs/uri.html b/docs/uri.html index 332e69c8fb..adf0b23736 100644 --- a/docs/uri.html +++ b/docs/uri.html @@ -168,4 +168,4 @@ connection.

    You should consider using libvirt remote support in future. -

    +

    diff --git a/docs/windows.html b/docs/windows.html index 60eed9606d..18083ad284 100644 --- a/docs/windows.html +++ b/docs/windows.html @@ -230,4 +230,4 @@ python

    -

    +