<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" /><link rel="stylesheet" type="text/css" href="libvirt.css" /><link rel="SHORTCUT ICON" href="/32favicon.png" /><title>Storage Management</title></head><body><div id="container"><div id="intro"><div id="adjustments"></div><div id="pageHeader"></div><div id="content2"><h1 class="style1">Storage Management</h1><p>
This page describes the storage management capabilities in
libvirt.
</p><ul><li><a href="#StorageCore">Core concepts</a></li>
<li><a href="#StoragePool">Storage pool XML</a>
<ul><li><a href="#StoragePoolFirst">First level elements</a></li>
<li><a href="#StoragePoolSource">Source elements</a></li>
<li><a href="#StoragePoolTarget">Target elements</a></li>
<li><a href="#StoragePoolExtents">Device extents</a></li>
</ul></li>
<li><a href="#StorageVol">Storage volume XML</a>
<ul><li><a href="#StorageVolFirst">First level elements</a></li>
<li><a href="#StorageVolSource">Source elements</a></li>
<li><a href="#StorageVolTarget">Target elements</a></li>
</ul></li>
<li><a href="#StorageBackend">Storage backend drivers</a>
<ul><li><a href="#StorageBackendDir">Directory backend</a></li>
<li><a href="#StorageBackendFS">Local filesystem backend</a></li>
<li><a href="#StorageBackendNetFS">Network filesystem backend</a></li>
<li><a href="#StorageBackendLogical">Logical backend</a></li>
<li><a href="#StorageBackendDisk">Disk backend</a></li>
<li><a href="#StorageBackendISCSI">iSCSI backend</a></li>
</ul><h3><a name="StorageCore" id="StorageCore">Core concepts</a></h3>
<p>
The storage management APIs are based around 2 core concepts:
</p>
<ol><li><strong>Volume</strong> - a single storage volume which can
be assigned to a guest, or used for creating further pools. A
volume is either a block device, a raw file, or a special format
file.</li>
<li><strong>Pool</strong> - provides a means for taking a chunk
of storage and carving it up into volumes. A pool can be used to
manage things such as a physical disk, an NFS server, an iSCSI target,
a host adapter, or an LVM volume group.</li>
</ol><p>
These two concepts are mapped through to two libvirt objects, a
<code>virStorageVolPtr</code> and a <code>virStoragePoolPtr</code>,
each with a collection of APIs for their management.
</p>
<h3><a name="StoragePool" id="StoragePool">Storage pool XML</a></h3>
<p>
Although all storage pool backends share the same public APIs and
XML format, they have varying levels of capabilities. Some may
allow creation of volumes, others may only allow use of pre-existing
volumes. Some may have constraints on volume size, or placement.
</p>
<p>The top level tag for a storage pool document is 'pool'. It has
a single attribute <code>type</code>, which is one of <code>dir</code>,
<code>fs</code>, <code>netfs</code>, <code>disk</code>, <code>iscsi</code>,
or <code>logical</code>. These correspond to the storage backend drivers
listed further along in this document.
</p>
<h4><a name="StoragePoolFirst" id="StoragePoolFirst">First level elements</a></h4>
<dl><dt>name</dt>
<dd>Providing a name for the pool which is unique to the host.
This is mandatory when defining a pool</dd>
<dt>uuid</dt>
<dd>Providing an identifier for the pool which is globally unique.
This is optional when defining a pool, a UUID will be generated if
omitted</dd>
<dt>allocation</dt>
<dd>Providing the total storage allocation for the pool. This may
be larger than the sum of the allocation of all volumes due to
metadata overhead. This value is in bytes. This is not applicable
when creating a pool.</dd>
<dt>capacity</dt>
<dd>Providing the total storage capacity for the pool. Due to
underlying device constraints it may not be possible to use the
full capacity for storage volumes. This value is in bytes. This
is not applicable when creating a pool.</dd>
<dt>available</dt>
<dd>Providing the free space available for allocating new volumes
in the pool. Due to underlying device constraints it may not be
possible to allocate the entire free space to a single volume.
This value is in bytes. This is not applicable when creating a
pool.</dd>
<dt>source</dt>
<dd>Provides information about the source of the pool, such as
the underlying host devices, or remote server</dd>
<dt>target</dt>
<dd>Provides information about the representation of the pool
on the local host.</dd>
</dl><h4><a name="StoragePoolSource" id="StoragePoolSource">Source elements</a></h4>
<dl><dt>device</dt>
<dd>Provides the source for pools backed by physical devices.
May be repeated multiple times depending on backend driver. Contains
a single attribute <code>path</code> which is the fully qualified
path to the block device node.</dd>
<dt>directory</dt>
<dd>Provides the source for pools backed by directories. May
only occur once. Contains a single attribute <code>path</code>
which is the fully qualified path to the backing directory.</dd>
<dt>host</dt>
<dd>Provides the source for pools backed by storage from a
remote server. Will be used in combination with a <code>directory</code>
or <code>device</code> element. Contains an attribute <code>name</code>
which is the hostname or IP address of the server. May optionally
contain a <code>port</code> attribute for the protocol specific
port number.</dd>
<dt>format</dt>
<dd>Provides information about the format of the pool. This
contains a single attribute <code>type</code> whose value is
backend specific. This is typically used to indicate filesystem
type, or network filesystem type, or partition table type, or
LVM metadata type. All drivers are required to have a default
value for this, so it is optional.</dd>
</dl><h4><a name="StoragePoolTarget" id="StoragePoolTarget">Target elements</a></h4>
<dl><dt>path</dt>
<dd>Provides the location at which the pool will be mapped into
the local filesystem namespace. For a filesystem/directory based
pool it will be the name of the directory in which volumes will
be created. For device based pools it will be the directory in which
device nodes exist. For the latter <code>/dev/</code> may seem
like the logical choice, however, device nodes there are not
guaranteed stable across reboots, since they are allocated on
demand. It is preferable to use a stable location such as one
of the <code>/dev/disk/by-{path,id,uuid,label}</code> locations.
</dd>
<dt>permissions</dt>
<dd>Provides information about the default permissions to use
when creating volumes. This is currently only useful for directory
or filesystem based pools, where the volumes allocated are simple
files. For pools where the volumes are device nodes, the hotplug
scripts determine permissions. It contains 4 child elements. The
<code>mode</code> element contains the octal permission set. The
<code>owner</code> element contains the numeric user ID. The <code>group</code>
element contains the numeric group ID. The <code>label</code> element
contains the MAC (eg SELinux) label string.
</dd>
</dl><h4><a name="StoragePoolExtents" id="StoragePoolExtents">Device extents</a></h4>
<p>
If a storage pool exposes information about its underlying
placement / allocation scheme, the <code>device</code> element
within the <code>source</code> element may contain information
about its available extents. Some pools have a constraint that
a volume must be allocated entirely within a single extent
(eg disk partition pools). Thus the extent information allows an
application to determine the maximum possible size for a new
volume.
</p>
<p>
For storage pools supporting extent information, within each
<code>device</code> element there will be zero or more <code>freeExtent</code>
elements. Each of these elements contains two attributes, <code>start</code>
and <code>end</code> which provide the boundaries of the extent on the
device, measured in bytes.
</p>
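<p>
As a hypothetical illustration, a source device reporting two free
extents might look as follows; the byte offsets are purely
illustrative:
</p>
<pre>
&lt;source&gt;
  &lt;device path="/dev/sda"&gt;
    &lt;freeExtent start="1073741824" end="2147483648"/&gt;
    &lt;freeExtent start="4294967296" end="5368709120"/&gt;
  &lt;/device&gt;
&lt;/source&gt;
</pre>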
<h3><a name="StorageVol" id="StorageVol">Storage volume XML</a></h3>
<p>
A storage volume will be either a file or a device node.
</p>
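<p>
As an illustrative sketch (the name, sizes and path are hypothetical),
a sparsely allocated raw file volume in a directory based pool might
be described as:
</p>
<pre>
&lt;volume&gt;
  &lt;name&gt;guest1.img&lt;/name&gt;
  &lt;capacity&gt;10737418240&lt;/capacity&gt;
  &lt;allocation&gt;0&lt;/allocation&gt;
  &lt;target&gt;
    &lt;path&gt;/var/lib/virt/images/guest1.img&lt;/path&gt;
    &lt;format type="raw"/&gt;
    &lt;permissions&gt;
      &lt;mode&gt;0600&lt;/mode&gt;
      &lt;owner&gt;0&lt;/owner&gt;
      &lt;group&gt;0&lt;/group&gt;
    &lt;/permissions&gt;
  &lt;/target&gt;
&lt;/volume&gt;
</pre>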
<h4><a name="StorageVolFirst" id="StorageVolFirst">First level elements</a></h4>
<dl><dt>name</dt>
<dd>Providing a name for the volume which is unique to the pool.
This is mandatory when defining a volume</dd>
<dt>uuid</dt>
<dd>Providing an identifier for the volume which is globally unique.
This is optional when defining a volume, an identifier will be generated
if omitted</dd>
<dt>allocation</dt>
<dd>Providing the total storage allocation for the volume. This
may be smaller than the logical capacity if the volume is sparsely
allocated. It may also be larger than the logical capacity if the
volume has substantial metadata overhead. This value is in bytes.
If omitted when creating a volume, the volume will be fully
allocated at time of creation. If set to a value smaller than the
capacity, the pool has the <strong>option</strong> of deciding
to sparsely allocate a volume. It does not have to honour requests
for sparse allocation though.</dd>
<dt>capacity</dt>
<dd>Providing the logical capacity for the volume. This value is
in bytes. This is compulsory when creating a volume</dd>
<dt>source</dt>
<dd>Provides information about the underlying storage allocation
of the volume. This may not be available for some pool types.</dd>
<dt>target</dt>
<dd>Provides information about the representation of the volume
on the local host.</dd>
</dl><h4><a name="StorageVolTarget" id="StorageVolTarget">Target elements</a></h4>
<dl><dt>path</dt>
<dd>Provides the location at which the volume can be accessed on
the local host. For a filesystem/directory based pool this will be
the fully qualified path of the file backing the volume. For device
based pools it will be the path of the corresponding device node.
</dd>
<dt>format</dt>
<dd>Provides information about the pool specific volume format.
For disk pools it will provide the partition type. For filesystem
or directory pools it will provide the file format type, eg cow,
qcow, vmdk, raw. If omitted when creating a volume, the pool's
default format will be used. The actual format is specified via
the <code>type</code> attribute. Consult the pool-specific docs for the
list of valid values.</dd>
<dt>permissions</dt>
<dd>Provides information about the default permissions to use
when creating volumes. This is currently only useful for directory
or filesystem based pools, where the volumes allocated are simple
files. For pools where the volumes are device nodes, the hotplug
scripts determine permissions. It contains 4 child elements. The
<code>mode</code> element contains the octal permission set. The
<code>owner</code> element contains the numeric user ID. The <code>group</code>
element contains the numeric group ID. The <code>label</code> element
contains the MAC (eg SELinux) label string.
</dd>
</dl><h3><a name="StorageBackend" id="StorageBackend">Storage backend drivers</a></h3>
<p>
This section illustrates the capabilities / format for each of
the different backend storage pool drivers
</p>
<h4><a name="StorageBackendDir" id="StorageBackendDir">Directory pool</a></h4>
<p>
A pool with a type of <code>dir</code> provides the means to manage
files within a directory. The files can be fully allocated raw files,
sparsely allocated raw files, or one of the special disk formats
such as <code>qcow</code>, <code>qcow2</code>, <code>vmdk</code>,
<code>cow</code>, etc. as supported by the <code>qemu-img</code>
program. If the directory does not exist at the time the pool is
defined, the <code>build</code> operation can be used to create it.
</p>
<h5>Example pool input definition</h5>
<pre>
&lt;pool type="dir"&gt;
&lt;name&gt;virtimages&lt;/name&gt;
&lt;target&gt;
&lt;path&gt;/var/lib/virt/images&lt;/path&gt;
&lt;/target&gt;
&lt;/pool&gt;
</pre>
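<p>
As a hypothetical example (name and size are illustrative), a
<code>qcow2</code> format volume that could be created within this
pool, subject to the format support described below, might be
defined as:
</p>
<pre>
&lt;volume&gt;
  &lt;name&gt;demo.qcow2&lt;/name&gt;
  &lt;capacity&gt;5368709120&lt;/capacity&gt;
  &lt;target&gt;
    &lt;format type="qcow2"/&gt;
  &lt;/target&gt;
&lt;/volume&gt;
</pre>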
<h5>Valid pool format types</h5>
<p>
The directory pool does not use the pool format type element.
</p>
<h5>Valid volume format types</h5>
<p>
One of the following options:
</p>
<ul><li><code>raw</code>: a plain file</li>
<li><code>bochs</code>: Bochs disk image format</li>
<li><code>cloop</code>: compressed loopback disk image format</li>
<li><code>cow</code>: User Mode Linux disk image format</li>
<li><code>dmg</code>: Mac disk image format</li>
<li><code>iso</code>: CDROM disk image format</li>
<li><code>qcow</code>: QEMU v1 disk image format</li>
<li><code>qcow2</code>: QEMU v2 disk image format</li>
<li><code>vmdk</code>: VMware disk image format</li>
<li><code>vpc</code>: VirtualPC disk image format</li>
</ul><p>
When listing existing volumes all these formats are supported
natively. When creating new volumes, only a subset may be
available. The <code>raw</code> type is guaranteed always
available. The <code>qcow2</code> type can be created if
either <code>qemu-img</code> or <code>qcow-create</code> tools
are present. The others are dependent on support of the
<code>qemu-img</code> tool.
</p><h4><a name="StorageBackendFS" id="StorageBackendFS">Filesystem pool</a></h4>
<p>
This is a variant of the directory pool. Instead of creating a
directory on an existing mounted filesystem though, it expects
a source block device to be named. This block device will be
mounted and files managed in the directory of its mount point.
It will default to allowing the kernel to automatically discover
the filesystem type, though it can be specified manually if
required.
</p>
<h5>Example pool input</h5>
<pre>
&lt;pool type="fs"&gt;
&lt;name&gt;virtimages&lt;/name&gt;
&lt;source&gt;
&lt;device path="/dev/VolGroup00/VirtImages"/&gt;
&lt;/source&gt;
&lt;target&gt;
&lt;path&gt;/var/lib/virt/images&lt;/path&gt;
&lt;/target&gt;
&lt;/pool&gt;
</pre>
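<p>
The example above relies on filesystem auto-detection. To select a
format explicitly, the <code>source</code> element may also carry a
<code>format</code> element, using one of the format types listed
below; a hypothetical fragment:
</p>
<pre>
&lt;source&gt;
  &lt;device path="/dev/VolGroup00/VirtImages"/&gt;
  &lt;format type="ext3"/&gt;
&lt;/source&gt;
</pre>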
<h5>Valid pool format types</h5>
<p>
The filesystem pool supports the following formats:
</p>
<ul><li><code>auto</code> - automatically determine format</li>
<li><code>ext2</code></li>
<li><code>ext3</code></li>
<li><code>ext4</code></li>
<li><code>ufs</code></li>
<li><code>iso9660</code></li>
<li><code>udf</code></li>
<li><code>gfs</code></li>
<li><code>gfs2</code></li>
<li><code>vfat</code></li>
<li><code>hfs+</code></li>
<li><code>xfs</code></li>
</ul><h5>Valid volume format types</h5>
<p>
The valid volume types are the same as for the <code>directory</code>
pool type.
</p>
<h4><a name="StorageBackendNetFS" id="StorageBackendNetFS">Network filesystem pool</a></h4>
<p>
This is a variant of the filesystem pool. Instead of requiring
a local block device as the source, it requires the name of a
host and path of an exported directory. It will mount this network
filesystem and manage files within the directory of its mount
point. It will default to using NFS as the protocol.
</p>
<h5>Example pool input</h5>
<pre>
&lt;pool type="netfs"&gt;
&lt;name&gt;virtimages&lt;/name&gt;
&lt;source&gt;
&lt;host name="nfs.example.com"/&gt;
&lt;dir path="/var/lib/virt/images"/&gt;
&lt;/source&gt;
&lt;target&gt;
&lt;path&gt;/var/lib/virt/images&lt;/path&gt;
&lt;/target&gt;
&lt;/pool&gt;
</pre>
<h5>Valid pool format types</h5>
<p>
The network filesystem pool supports the following formats:
</p>
<ul><li><code>auto</code> - automatically determine format</li>
<li><code>nfs</code></li>
</ul><h5>Valid volume format types</h5>
<p>
The valid volume types are the same as for the <code>directory</code>
pool type.
</p>
<h4><a name="StorageBackendLogical" id="StorageBackendLogical">Logical volume pools</a></h4>
<p>
This provides a pool based on an LVM volume group. For a
pre-defined LVM volume group, simply providing the group
name is sufficient, while building a new group requires
providing a list of source devices to serve as physical
volumes. Volumes will be allocated by carving out chunks
of storage from the volume group.
</p>
<h5>Example pool input</h5>
<pre>
&lt;pool type="logical"&gt;
&lt;name&gt;HostVG&lt;/name&gt;
&lt;source&gt;
&lt;device path="/dev/sda1"/&gt;
&lt;device path="/dev/sdb1"/&gt;
&lt;device path="/dev/sdc1"/&gt;
&lt;/source&gt;
&lt;target&gt;
&lt;path&gt;/dev/HostVG&lt;/path&gt;
&lt;/target&gt;
&lt;/pool&gt;
</pre>
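<p>
For a volume group that already exists, providing the group name is
sufficient, so a minimal definition might omit the source devices
(a sketch, assuming the <code>HostVG</code> group is already defined
on the host):
</p>
<pre>
&lt;pool type="logical"&gt;
  &lt;name&gt;HostVG&lt;/name&gt;
  &lt;target&gt;
    &lt;path&gt;/dev/HostVG&lt;/path&gt;
  &lt;/target&gt;
&lt;/pool&gt;
</pre>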
<h5>Valid pool format types</h5>
<p>
The logical volume pool does not use the pool format type element.
</p>
<h5>Valid volume format types</h5>
<p>
The logical volume pool does not use the volume format type element.
</p>
<h4><a name="StorageBackendDisk" id="StorageBackendDisk">Disk volume pools</a></h4>
<p>
This provides a pool based on a physical disk. Volumes are created
by adding partitions to the disk. Disk pools have constraints
on the size and placement of volumes. The 'free extents'
information will detail the regions which are available for creating
new volumes. A volume cannot span across 2 different free extents.
</p>
<h5>Example pool input</h5>
<pre>
&lt;pool type="disk"&gt;
&lt;name&gt;sda&lt;/name&gt;
&lt;source&gt;
&lt;device path='/dev/sda'/&gt;
&lt;/source&gt;
&lt;target&gt;
&lt;path&gt;/dev&lt;/path&gt;
&lt;/target&gt;
&lt;/pool&gt;
</pre>
<h5>Valid pool format types</h5>
<p>
The disk volume pool accepts the following pool format types, representing
the common partition table types:
</p>
<ul><li><code>dos</code></li>
<li><code>dvh</code></li>
<li><code>gpt</code></li>
<li><code>mac</code></li>
<li><code>bsd</code></li>
<li><code>pc98</code></li>
<li><code>sun</code></li>
</ul><p>
The <code>dos</code> or <code>gpt</code> formats are recommended for
best portability - the latter is needed for disks larger than 2TB.
</p>
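<p>
A hypothetical source fragment selecting the <code>gpt</code>
partition table format for a disk pool:
</p>
<pre>
&lt;source&gt;
  &lt;device path='/dev/sda'/&gt;
  &lt;format type='gpt'/&gt;
&lt;/source&gt;
</pre>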
<h5>Valid volume format types</h5>
<p>
The disk volume pool accepts the following volume format types, representing
the common partition entry types:
</p>
<ul><li><code>none</code></li>
<li><code>linux</code></li>
<li><code>fat16</code></li>
<li><code>fat32</code></li>
<li><code>linux-swap</code></li>
<li><code>linux-lvm</code></li>
<li><code>linux-raid</code></li>
<li><code>extended</code></li>
</ul><h4><a name="StorageBackendISCSI" id="StorageBackendISCSI">iSCSI volume pools</a></h4>
<p>
This provides a pool based on an iSCSI target. Volumes must be
pre-allocated on the iSCSI server, and cannot be created via
the libvirt APIs. Since /dev/XXX names may change each time libvirt
logs into the iSCSI target, it is recommended to configure the pool
to use <code>/dev/disk/by-path</code> or <code>/dev/disk/by-id</code>
for the target path. These provide persistent stable naming for LUNs.
</p>
<h5>Example pool input</h5>
<pre>
&lt;pool type="iscsi"&gt;
&lt;name&gt;virtimages&lt;/name&gt;
&lt;source&gt;
&lt;host name="iscsi.example.com"/&gt;
&lt;device path="demo-target"/&gt;
&lt;/source&gt;
&lt;target&gt;
&lt;path&gt;/dev/disk/by-path&lt;/path&gt;
&lt;/target&gt;
&lt;/pool&gt;
</pre>
<h5>Valid pool format types</h5>
<p>
The iSCSI volume pool does not use the pool format type element.
</p>
<h5>Valid volume format types</h5>
<p>
The iSCSI volume pool does not use the volume format type element.
</p>
</div></div><div class="linkList2"><div class="llinks2"><h3 class="links2"><span>main menu</span></h3><ul><li><a href="index.html">Home</a></li><li><a href="news.html">Releases</a></li><li><a href="intro.html">Introduction</a></li><li><a href="architecture.html">libvirt architecture</a></li><li><a href="downloads.html">Downloads</a></li><li><a href="format.html">XML Format</a></li><li><a href="python.html">Bindings for other languages</a></li><li><a href="errors.html">Handling of errors</a></li><li><a href="FAQ.html">FAQ</a></li><li><a href="bugs.html">Reporting bugs and getting help</a></li><li><a href="windows.html">Windows support</a></li><li><a href="remote.html">Remote support</a></li><li><a href="auth.html">Access control</a></li><li><a href="uri.html">Connection URIs</a></li><li><a href="hvsupport.html">Hypervisor support</a></li><li><a href="storage.html">Storage Management</a></li><li><a href="html/index.html">API Menu</a></li><li><a href="examples/index.html">C code examples</a></li><li><a href="ChangeLog.html">Recent Changes</a></li></ul></div><div class="llinks2"><h3 class="links2"><span>related links</span></h3><ul><li><a href="https://www.redhat.com/archives/libvir-list/">Mail archive</a></li><li><a href="https://bugzilla.redhat.com/bugzilla/buglist.cgi?product=Fedora+Core&amp;component=libvirt&amp;bug_status=NEW&amp;bug_status=ASSIGNED&amp;bug_status=REOPENED&amp;bug_status=MODIFIED&amp;short_desc_type=allwordssubstr&amp;short_desc=&amp;long_desc_type=allwordssubstr">Open bugs</a></li><li><a href="http://virt-manager.et.redhat.com/">virt-manager</a></li><li><a href="http://search.cpan.org/~danberr/Sys-Virt-0.1.0/">Perl bindings</a></li><li><a href="http://libvirt.org/ocaml/">OCaml bindings</a></li><li><a href="http://libvirt.org/ruby/">Ruby bindings</a></li><li><a href="http://www.cl.cam.ac.uk/Research/SRG/netos/xen/index.html">Xen project</a></li><li><form action="search.php" enctype="application/x-www-form-urlencoded" method="get"><input name="query" type="text" size="12" value="Search..." /><input name="submit" type="submit" value="Go" /></form></li><li><a href="http://xmlsoft.org/"><img src="Libxml2-Logo-90x34.gif" alt="Made with Libxml2 Logo" /></a></li></ul><p class="credits">Graphics and design by <a href="mailto:dfong@redhat.com">Diana Fong</a></p></div></div><div id="bottom"><p class="p1"></p></div></div></body></html>