<?xml version="1.0"?>
<html>
<body>
<h1>Storage Management</h1>
<p>
Libvirt provides storage management on the physical host through
storage pools and volumes.
</p>
<p>
A storage pool is a quantity of storage set aside by an
administrator, often a dedicated storage administrator, for use
by virtual machines. Storage pools are divided into storage
volumes either by the storage administrator or the system
administrator, and the volumes are assigned to VMs as block
devices.
</p>
<p>
For example, the storage administrator responsible for an NFS
server creates a share to store virtual machines' data. The
system administrator defines a pool on the virtualization host
with the details of the share
(e.g. nfs.example.com:/path/to/share should be mounted on
/vm_data). When the pool is started, libvirt mounts the share
on the specified directory, just as if the system administrator
logged in and executed 'mount nfs.example.com:/path/to/share
/vm_data'. If the pool is configured to autostart, libvirt
ensures that the NFS share is mounted on the directory specified
when libvirt is started.
</p>
<p>
Once the pool is started, the files in the NFS share are
reported as volumes, and the storage volumes' paths may be
queried using the libvirt APIs. The volumes' paths can then be
copied into the section of a VM's XML definition describing the
source storage for the VM's block devices. In the case of NFS,
an application using the libvirt APIs can create and delete
volumes in the pool (files in the NFS share) up to the limit of
the size of the pool (the storage capacity of the share). Not
all pool types support creating and deleting volumes. Stopping
the pool (somewhat unfortunately referred to by virsh and the
API as "pool-destroy") undoes the start operation, in this case,
unmounting the NFS share. The data on the share is not modified
by the destroy operation, despite the name. See man virsh for
more details.
</p>
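<p>
For instance, a file volume from the NFS pool above might be
referenced from a domain definition along these lines (a minimal
sketch; the image name and target device are illustrative):
</p>
<pre>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/vm_data/guest1.img'/>
  <target dev='vda' bus='virtio'/>
</disk></pre>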
<p>
A second example is an iSCSI pool. A storage administrator
provisions an iSCSI target to present a set of LUNs to the host
running the VMs. When libvirt is configured to manage that
iSCSI target as a pool, libvirt will ensure that the host logs
into the iSCSI target and libvirt can then report the available
LUNs as storage volumes. The volumes' paths can be queried and
used in VM's XML definitions as in the NFS example. In this
case, the LUNs are defined on the iSCSI server, and libvirt
cannot create and delete volumes.
</p>
<p>
Storage pools and volumes are not required for the proper
operation of VMs. Pools and volumes provide a way for libvirt
to ensure that a particular piece of storage will be available
for a VM, but some administrators will prefer to manage their
own storage and VMs will operate properly without any pools or
volumes defined. On systems that do not use pools, system
administrators must ensure the availability of the VMs' storage
using whatever tools they prefer, for example, adding the NFS
share to the host's fstab so that the share is mounted at boot
time.
</p>
<p>
If at this point the value of pools and volumes over traditional
system administration tools is unclear, note that one of the
features of libvirt is its remote protocol, so it's possible to
manage all aspects of a virtual machine's lifecycle as well as
the configuration of the resources required by the VM. These
operations can be performed on a remote host entirely within the
libvirt API. In other words, a management application using
libvirt can enable a user to perform all the required tasks for
configuring the host for a VM: allocating resources, running the
VM, shutting it down and deallocating the resources, without
requiring shell access or any other control channel.
</p>
<p>
Libvirt supports the following storage pool types:
</p>
<ul>
<li><a href="#StorageBackendDir">Directory backend</a></li>
<li><a href="#StorageBackendFS">Local filesystem backend</a></li>
<li><a href="#StorageBackendNetFS">Network filesystem backend</a></li>
<li><a href="#StorageBackendLogical">Logical backend</a></li>
<li><a href="#StorageBackendDisk">Disk backend</a></li>
<li><a href="#StorageBackendISCSI">iSCSI backend</a></li>
<li><a href="#StorageBackendSCSI">SCSI backend</a></li>
<li><a href="#StorageBackendMultipath">Multipath backend</a></li>
<li><a href="#StorageBackendRBD">RBD (RADOS Block Device) backend</a></li>
<li><a href="#StorageBackendSheepdog">Sheepdog backend</a></li>
</ul>
<h2><a name="StorageBackendDir">Directory pool</a></h2>
<p>
A pool with a type of <code>dir</code> provides the means to manage
files within a directory. The files can be fully allocated raw files,
sparsely allocated raw files, or one of the special disk formats
such as <code>qcow</code>, <code>qcow2</code>, <code>vmdk</code>,
<code>cow</code>, etc. as supported by the <code>qemu-img</code>
program. If the directory does not exist at the time the pool is
defined, the <code>build</code> operation can be used to create it.
</p>

<h3>Example pool input definition</h3>
<pre>
<pool type="dir">
  <name>virtimages</name>
  <target>
    <path>/var/lib/virt/images</path>
  </target>
</pool></pre>
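<p>
Assuming the definition above is saved as <code>pool.xml</code>, a
typical virsh workflow to define the pool, build it (creating the
directory if missing) and start it looks like this:
</p>
<pre>
virsh pool-define pool.xml
virsh pool-build virtimages
virsh pool-start virtimages
virsh pool-autostart virtimages</pre>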
<h3>Valid pool format types</h3>
<p>
The directory pool does not use the pool format type element.
</p>
<h3>Valid volume format types</h3>
<p>
One of the following options:
</p>
<ul>
<li><code>raw</code>: a plain file</li>
<li><code>bochs</code>: Bochs disk image format</li>
<li><code>cloop</code>: compressed loopback disk image format</li>
<li><code>cow</code>: User Mode Linux disk image format</li>
<li><code>dmg</code>: Mac disk image format</li>
<li><code>iso</code>: CDROM disk image format</li>
<li><code>qcow</code>: QEMU v1 disk image format</li>
<li><code>qcow2</code>: QEMU v2 disk image format</li>
<li><code>vmdk</code>: VMware disk image format</li>
<li><code>vpc</code>: VirtualPC disk image format</li>
</ul>
<p>
When listing existing volumes all these formats are supported
natively. When creating new volumes, only a subset may be
available. The <code>raw</code> type is guaranteed to be always
available. The <code>qcow2</code> type can be created if
either the <code>qemu-img</code> or <code>qcow-create</code> tools
are present. The others depend on support from the
<code>qemu-img</code> tool.
</p>
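<p>
As an illustration, a qcow2 volume can be created in the directory
pool with <code>virsh vol-create-as</code> (the name and size here
are examples only):
</p>
<pre>
virsh vol-create-as virtimages guest1.qcow2 10G --format qcow2</pre>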
<h2><a name="StorageBackendFS">Filesystem pool</a></h2>
<p>
This is a variant of the directory pool. Instead of creating a
directory on an existing mounted filesystem though, it expects
a source block device to be named. This block device will be
mounted and files managed in the directory of its mount point.
It will default to allowing the kernel to automatically discover
the filesystem type, though it can be specified manually if
required.
</p>
<h3>Example pool input</h3>
<pre>
<pool type="fs">
  <name>virtimages</name>
  <source>
    <device path="/dev/VolGroup00/VirtImages"/>
  </source>
  <target>
    <path>/var/lib/virt/images</path>
  </target>
</pool></pre>
<h3>Valid pool format types</h3>
<p>
The filesystem pool supports the following formats:
</p>
<ul>
<li><code>auto</code> - automatically determine format</li>
<li><code>ext2</code></li>
<li><code>ext3</code></li>
<li><code>ext4</code></li>
<li><code>ufs</code></li>
<li><code>iso9660</code></li>
<li><code>udf</code></li>
<li><code>gfs</code></li>
<li><code>gfs2</code></li>
<li><code>vfat</code></li>
<li><code>hfs+</code></li>
<li><code>xfs</code></li>
</ul>
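<p>
When the filesystem type should not be auto-detected, the format is
given via a <code>format</code> element inside the pool's
<code>source</code>, for example (ext3 here is illustrative):
</p>
<pre>
<source>
  <device path="/dev/VolGroup00/VirtImages"/>
  <format type="ext3"/>
</source></pre>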
<h3>Valid volume format types</h3>
<p>
The valid volume types are the same as for the <code>directory</code>
pool type.
</p>
<h2><a name="StorageBackendNetFS">Network filesystem pool</a></h2>
<p>
This is a variant of the filesystem pool. Instead of requiring
a local block device as the source, it requires the name of a
host and path of an exported directory. It will mount this network
filesystem and manage files within the directory of its mount
point. It will default to using NFS as the protocol.
</p>
<h3>Example pool input</h3>
<pre>
<pool type="netfs">
  <name>virtimages</name>
  <source>
    <host name="nfs.example.com"/>
    <dir path="/var/lib/virt/images"/>
  </source>
  <target>
    <path>/var/lib/virt/images</path>
  </target>
</pool></pre>
<h3>Valid pool format types</h3>
<p>
The network filesystem pool supports the following formats:
</p>
<ul>
<li><code>auto</code> - automatically determine format</li>
<li><code>nfs</code></li>
</ul>
<h3>Valid volume format types</h3>
<p>
The valid volume types are the same as for the <code>directory</code>
pool type.
</p>
<h2><a name="StorageBackendLogical">Logical volume pools</a></h2>
<p>
This provides a pool based on an LVM volume group. For a
pre-defined LVM volume group, simply providing the group
name is sufficient, while to build a new group requires
providing a list of source devices to serve as physical
volumes. Volumes will be allocated by carving out chunks
of storage from the volume group.
</p>
<h3>Example pool input</h3>
<pre>
<pool type="logical">
  <name>HostVG</name>
  <source>
    <device path="/dev/sda1"/>
    <device path="/dev/sdb1"/>
    <device path="/dev/sdc1"/>
  </source>
  <target>
    <path>/dev/HostVG</path>
  </target>
</pool></pre>
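<p>
With source devices listed as above, the volume group itself can be
created through the <code>build</code> operation. A sketch of the
virsh steps, assuming the XML is saved as <code>lvmpool.xml</code>:
</p>
<pre>
virsh pool-define lvmpool.xml
virsh pool-build HostVG
virsh pool-start HostVG</pre>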
<h3>Valid pool format types</h3>
<p>
The logical volume pool does not use the pool format type element.
</p>
<h3>Valid volume format types</h3>
<p>
The logical volume pool does not use the volume format type element.
</p>
<h2><a name="StorageBackendDisk">Disk volume pools</a></h2>
<p>
This provides a pool based on a physical disk. Volumes are created
by adding partitions to the disk. Disk pools have constraints
on the size and placement of volumes. The 'free extents'
information will detail the regions which are available for creating
new volumes. A volume cannot span across two different free extents.
</p>
<h3>Example pool input</h3>
<pre>
<pool type="disk">
  <name>sda</name>
  <source>
    <device path='/dev/sda'/>
  </source>
  <target>
    <path>/dev</path>
  </target>
</pool></pre>
<h3>Valid pool format types</h3>
<p>
The disk volume pool accepts the following pool format types, representing
the common partition table types:
</p>
<ul>
<li><code>dos</code></li>
<li><code>dvh</code></li>
<li><code>gpt</code></li>
<li><code>mac</code></li>
<li><code>bsd</code></li>
<li><code>pc98</code></li>
<li><code>sun</code></li>
</ul>
<p>
The <code>dos</code> or <code>gpt</code> formats are recommended for
best portability - the latter is needed for disks larger than 2TB.
</p>
<h3>Valid volume format types</h3>
<p>
The disk volume pool accepts the following volume format types, representing
the common partition entry types:
</p>
<ul>
<li><code>none</code></li>
<li><code>linux</code></li>
<li><code>fat16</code></li>
<li><code>fat32</code></li>
<li><code>linux-swap</code></li>
<li><code>linux-lvm</code></li>
<li><code>linux-raid</code></li>
<li><code>extended</code></li>
</ul>
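<p>
For example, a new partition volume carrying a Linux partition entry
type might be requested with volume XML along these lines (a minimal
sketch; the name and capacity are illustrative):
</p>
<pre>
<volume>
  <name>sda1</name>
  <capacity unit='bytes'>1073741824</capacity>
  <target>
    <format type='linux'/>
  </target>
</volume></pre>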
<h2><a name="StorageBackendISCSI">iSCSI volume pools</a></h2>
<p>
This provides a pool based on an iSCSI target. Volumes must be
pre-allocated on the iSCSI server, and cannot be created via
the libvirt APIs. Since /dev/XXX names may change each time libvirt
logs into the iSCSI target, it is recommended to configure the pool
to use <code>/dev/disk/by-path</code> or <code>/dev/disk/by-id</code>
for the target path. These provide persistent stable naming for LUNs.
</p>
<h3>Example pool input</h3>
<pre>
<pool type="iscsi">
  <name>virtimages</name>
  <source>
    <host name="iscsi.example.com"/>
    <device path="demo-target"/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool></pre>
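<p>
A LUN from this pool can then be handed to a guest as a block
device; a sketch follows (the by-path name is purely illustrative
and will differ per target):
</p>
<pre>
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-path/...-lun-1'/>
  <target dev='vdb' bus='virtio'/>
</disk></pre>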
<h3>Valid pool format types</h3>
<p>
The iSCSI volume pool does not use the pool format type element.
</p>
<h3>Valid volume format types</h3>
<p>
The iSCSI volume pool does not use the volume format type element.
</p>
<h2><a name="StorageBackendSCSI">SCSI volume pools</a></h2>
<p>
This provides a pool based on a SCSI HBA. Volumes are preexisting SCSI
LUNs, and cannot be created via the libvirt APIs. Since /dev/XXX names
aren't generally stable, it is recommended to configure the pool
to use <code>/dev/disk/by-path</code> or <code>/dev/disk/by-id</code>
for the target path. These provide persistent stable naming for LUNs.
<span class="since">Since 0.6.2</span>
</p>
<h3>Example pool input</h3>
<pre>
<pool type="scsi">
  <name>virtimages</name>
  <source>
    <adapter name="host0"/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool></pre>
<h3>Valid pool format types</h3>
<p>
The SCSI volume pool does not use the pool format type element.
</p>
<h3>Valid volume format types</h3>
<p>
The SCSI volume pool does not use the volume format type element.
</p>
<h2><a name="StorageBackendMultipath">Multipath pools</a></h2>
<p>
This provides a pool that contains all the multipath devices on the
host. Volume creation is not supported via the libvirt APIs.
The target element is actually ignored, but one is required to appease
the libvirt XML parser.<br/>
<br/>
Configuring multipathing is not currently supported; this just covers
the case where users want to discover all the available multipath
devices, and assign them to guests.
<span class="since">Since 0.7.1</span>
</p>
<h3>Example pool input</h3>
<pre>
<pool type="mpath">
  <name>virtimages</name>
  <target>
    <path>/dev/mapper</path>
  </target>
</pool></pre>
<h3>Valid pool format types</h3>
<p>
The Multipath volume pool does not use the pool format type element.
</p>
<h3>Valid volume format types</h3>
<p>
The Multipath volume pool does not use the volume format type element.
</p>
<h2><a name="StorageBackendRBD">RBD pools</a></h2>
<p>
This storage driver provides a pool which contains all RBD
images in a RADOS pool. RBD (RADOS Block Device) is part
of the Ceph distributed storage project.<br/>
This backend <i>only</i> supports Qemu with RBD support. Kernel RBD,
which exposes RBD devices as block devices in /dev, is <i>not</i>
supported. RBD images created with this storage backend
can be accessed through kernel RBD if configured manually, but
this backend does not provide mapping for these images.<br/>
Images created with this backend can be attached to Qemu guests
when Qemu is built with RBD support (since Qemu 0.14.0). The
backend supports cephx authentication for communication with the
Ceph cluster. Storing the cephx authentication key is done with
the libvirt secret mechanism. The UUID in the example pool input
refers to the UUID of the stored secret.
<span class="since">Since 0.9.13</span>
</p>
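<p>
One way to store the cephx key is sketched below (the usage name is
arbitrary and the base64 key is a placeholder). The UUID reported by
<code>virsh secret-define</code> is what the pool's
<code>secret</code> element refers to:
</p>
<pre>
# secret.xml
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.admin secret</name>
  </usage>
</secret>

virsh secret-define secret.xml
virsh secret-set-value 2ec115d7-3a88-3ceb-bc12-0ac909a6fd87 BASE64KEY...</pre>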
<h3>Example pool input</h3>
<pre>
<pool type="rbd">
  <name>myrbdpool</name>
  <source>
    <name>rbdpool</name>
    <host name='1.2.3.4' port='6789'/>
    <host name='my.ceph.monitor' port='6789'/>
    <host name='third.ceph.monitor' port='6789'/>
    <auth username='admin' type='ceph'>
      <secret uuid='2ec115d7-3a88-3ceb-bc12-0ac909a6fd87'/>
    </auth>
  </source>
</pool></pre>
<h3>Example volume output</h3>
<pre>
<volume>
  <name>myvol</name>
  <key>rbd/myvol</key>
  <source>
  </source>
  <capacity unit='bytes'>53687091200</capacity>
  <allocation unit='bytes'>53687091200</allocation>
  <target>
    <path>rbd:rbd/myvol</path>
    <format type='unknown'/>
    <permissions>
      <mode>00</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</volume></pre>
<h3>Example disk attachment</h3>
<p>RBD images can be attached to Qemu guests when Qemu is built
with RBD support. Information about attaching an RBD image to a
guest can be found
at the <a href="formatdomain.html#elementsDisks">format domain</a>
page.</p>
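<p>
For convenience, a sketch of such a disk element is shown here
(host, image and secret values mirror the pool example above):
</p>
<pre>
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbdpool/myvol'>
    <host name='1.2.3.4' port='6789'/>
  </source>
  <auth username='admin'>
    <secret type='ceph' uuid='2ec115d7-3a88-3ceb-bc12-0ac909a6fd87'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk></pre>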
<h3>Valid pool format types</h3>
<p>
The RBD pool does not use the pool format type element.
</p>

<h3>Valid volume format types</h3>
<p>
The RBD pool does not use the volume format type element.
</p>
<h2><a name="StorageBackendSheepdog">Sheepdog pools</a></h2>
<p>
This provides a pool based on a Sheepdog Cluster.
Sheepdog is a distributed storage system for QEMU/KVM.
It provides highly available block level storage volumes that
can be attached to QEMU/KVM virtual machines.

The cluster must already be formatted.

<span class="since">Since 0.9.13</span>
</p>
<h3>Example pool input</h3>
<pre>
<pool type="sheepdog">
  <name>mysheeppool</name>
  <source>
    <name>mysheeppool</name>
    <host name='localhost' port='7000'/>
  </source>
</pool></pre>
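<h3>Example volume input</h3>
<p>
To create a new vdi, specify the pool, a name and the capacity; the
vdi name is placed in the volume's <code>target</code>
<code>path</code> element. A minimal sketch (name and size are
illustrative, mirroring the volume output below):
</p>
<pre>
<volume>
  <name>myvol</name>
  <capacity unit='bytes'>53687091200</capacity>
  <target>
    <path>sheepdog:myvol</path>
  </target>
</volume></pre>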
<h3>Example volume output</h3>
<pre>
<volume>
  <name>myvol</name>
  <key>sheep/myvol</key>
  <source>
  </source>
  <capacity unit='bytes'>53687091200</capacity>
  <allocation unit='bytes'>53687091200</allocation>
  <target>
    <path>sheepdog:myvol</path>
    <format type='unknown'/>
    <permissions>
      <mode>00</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</volume></pre>
<h3>Example disk attachment</h3>
<p>Sheepdog images can be attached to Qemu guests.
Information about attaching a Sheepdog image to a
guest can be found
at the <a href="formatdomain.html#elementsDisks">format domain</a>
page.</p>
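<p>
In the disk source the vdi name is given in the <code>name</code>
attribute, and the host and port of the cluster are given inside a
<code>host</code> tag, for example:
</p>
<pre>
<disk type='network'>
  ...
  <source protocol="sheepdog" name="vdi_name">
    <host name="localhost" port="7000"/>
  </source>
</disk></pre>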
<h3>Valid pool format types</h3>
<p>
The Sheepdog pool does not use the pool format type element.
</p>

<h3>Valid volume format types</h3>
<p>
The Sheepdog pool does not use the volume format type element.
</p>
</body>
</html>