<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <h1>Storage Management</h1>
    <p>
      Libvirt provides storage management on the physical host through
      storage pools and volumes.
    </p>
    <p>
      A storage pool is a quantity of storage set aside by an
      administrator, often a dedicated storage administrator, for use
      by virtual machines. Storage pools are divided into storage
      volumes either by the storage administrator or the system
      administrator, and the volumes are assigned to VMs as block
      devices.
    </p>
    <p>
      For example, the storage administrator responsible for an NFS
      server creates a share to store virtual machines' data. The
      system administrator defines a pool on the virtualization host
      with the details of the share
      (e.g. nfs.example.com:/path/to/share should be mounted on
      /vm_data). When the pool is started, libvirt mounts the share
      on the specified directory, just as if the system administrator
      logged in and executed 'mount nfs.example.com:/path/to/share
      /vm_data'. If the pool is configured to autostart, libvirt
      ensures that the NFS share is mounted on the directory specified
      when libvirt is started.
    </p>
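    <p>
      For illustration, the pool from this example could be defined
      with XML along the following lines (a sketch using the
      hypothetical host and paths named above; see
      the <a href="#StorageBackendNetFS">network filesystem pool</a>
      documentation below for details):
    </p>
    <pre>
<pool type="netfs">
  <name>vm_data</name>
  <source>
    <host name="nfs.example.com"/>
    <dir path="/path/to/share"/>
  </source>
  <target>
    <path>/vm_data</path>
  </target>
</pool></pre>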
    <p>
      Once the pool is started, the files in the NFS share are
      reported as volumes, and the storage volumes' paths may be
      queried using the libvirt APIs. The volumes' paths can then be
      copied into the section of a VM's XML definition describing the
      source storage for the VM's block devices. In the case of NFS,
      an application using the libvirt APIs can create and delete
      volumes in the pool (files in the NFS share) up to the limit of
      the size of the pool (the storage capacity of the share). Not
      all pool types support creating and deleting volumes. Stopping
      the pool (somewhat unfortunately referred to by virsh and the
      API as "pool-destroy") undoes the start operation, in this case
      unmounting the NFS share. The data on the share is not modified
      by the destroy operation, despite the name. See <code>man
      virsh</code> for more details.
    </p>
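    <p>
      As a sketch of how a volume path is consumed, a file volume
      from this pool could appear in a guest's definition as a disk
      element like the following (the file name is hypothetical):
    </p>
    <pre>
<disk type="file" device="disk">
  <driver name="qemu" type="raw"/>
  <source file="/vm_data/guest1.img"/>
  <target dev="vda" bus="virtio"/>
</disk></pre>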
    <p>
      A second example is an iSCSI pool. A storage administrator
      provisions an iSCSI target to present a set of LUNs to the host
      running the VMs. When libvirt is configured to manage that
      iSCSI target as a pool, libvirt will ensure that the host logs
      into the iSCSI target and libvirt can then report the available
      LUNs as storage volumes. The volumes' paths can be queried and
      used in VMs' XML definitions as in the NFS example. In this
      case, the LUNs are defined on the iSCSI server, and libvirt
      cannot create and delete volumes.
    </p>
    <p>
      Storage pools and volumes are not required for the proper
      operation of VMs. Pools and volumes provide a way for libvirt
      to ensure that a particular piece of storage will be available
      for a VM, but some administrators will prefer to manage their
      own storage and VMs will operate properly without any pools or
      volumes defined. On systems that do not use pools, system
      administrators must ensure the availability of the VMs' storage
      using whatever tools they prefer, for example, adding the NFS
      share to the host's fstab so that the share is mounted at boot
      time.
    </p>
    <p>
      If at this point the value of pools and volumes over traditional
      system administration tools is unclear, note that one of the
      features of libvirt is its remote protocol, so it's possible to
      manage all aspects of a virtual machine's lifecycle as well as
      the configuration of the resources required by the VM. These
      operations can be performed on a remote host entirely within the
      libvirt API. In other words, a management application using
      libvirt can enable a user to perform all the required tasks for
      configuring the host for a VM: allocating resources, running the
      VM, shutting it down and deallocating the resources, without
      requiring shell access or any other control channel.
    </p>
    <p>
      Libvirt supports the following storage pool types:
    </p>
    <ul>
      <li><a href="#StorageBackendDir">Directory backend</a></li>
      <li><a href="#StorageBackendFS">Local filesystem backend</a></li>
      <li><a href="#StorageBackendNetFS">Network filesystem backend</a></li>
      <li><a href="#StorageBackendLogical">Logical backend</a></li>
      <li><a href="#StorageBackendDisk">Disk backend</a></li>
      <li><a href="#StorageBackendISCSI">iSCSI backend</a></li>
      <li><a href="#StorageBackendSCSI">SCSI backend</a></li>
      <li><a href="#StorageBackendMultipath">Multipath backend</a></li>
      <li><a href="#StorageBackendRBD">RBD (RADOS Block Device) backend</a></li>
      <li><a href="#StorageBackendSheepdog">Sheepdog backend</a></li>
      <li><a href="#StorageBackendGluster">Gluster backend</a></li>
      <li><a href="#StorageBackendZFS">ZFS backend</a></li>
    </ul>
<h2><a name="StorageBackendDir">Directory pool</a></h2>
|
|
<p>
|
|
A pool with a type of <code>dir</code> provides the means to manage
|
|
files within a directory. The files can be fully allocated raw files,
|
|
sparsely allocated raw files, or one of the special disk formats
|
|
such as <code>qcow</code>,<code>qcow2</code>,<code>vmdk</code>,
|
|
<code>cow</code>, etc as supported by the <code>qemu-img</code>
|
|
program. If the directory does not exist at the time the pool is
|
|
defined, the <code>build</code> operation can be used to create it.
|
|
</p>
|
|
|
|
<h3>Example pool input definition</h3>
|
|
<pre>
|
|
<pool type="dir">
|
|
<name>virtimages</name>
|
|
<target>
|
|
<path>/var/lib/virt/images</path>
|
|
</target>
|
|
</pool></pre>
|
|
|
|
<h3>Valid pool format types</h3>
|
|
<p>
|
|
The directory pool does not use the pool format type element.
|
|
</p>
|
|
|
|
<h3>Valid volume format types</h3>
|
|
<p>
|
|
One of the following options:
|
|
</p>
|
|
<ul>
|
|
<li><code>raw</code>: a plain file</li>
|
|
<li><code>bochs</code>: Bochs disk image format</li>
|
|
<li><code>cloop</code>: compressed loopback disk image format</li>
|
|
<li><code>cow</code>: User Mode Linux disk image format</li>
|
|
<li><code>dmg</code>: Mac disk image format</li>
|
|
<li><code>iso</code>: CDROM disk image format</li>
|
|
<li><code>qcow</code>: QEMU v1 disk image format</li>
|
|
<li><code>qcow2</code>: QEMU v2 disk image format</li>
|
|
<li><code>qed</code>: QEMU Enhanced Disk image format</li>
|
|
<li><code>vmdk</code>: VMWare disk image format</li>
|
|
<li><code>vpc</code>: VirtualPC disk image format</li>
|
|
</ul>
|
|
<p>
|
|
When listing existing volumes all these formats are supported
|
|
natively. When creating new volumes, only a subset may be
|
|
available. The <code>raw</code> type is guaranteed always
|
|
available. The <code>qcow2</code> type can be created if
|
|
either <code>qemu-img</code> or <code>qcow-create</code> tools
|
|
are present. The others are dependent on support of the
|
|
<code>qemu-img</code> tool.
|
|
|
|
</p>
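    <p>
      For example, a new <code>qcow2</code> volume could be requested
      in this pool with volume XML along these lines (a sketch; the
      name and size are illustrative):
    </p>
    <pre>
<volume>
  <name>guest1.qcow2</name>
  <capacity unit="G">10</capacity>
  <target>
    <format type="qcow2"/>
  </target>
</volume></pre>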
    <h2><a name="StorageBackendFS">Filesystem pool</a></h2>
    <p>
      This is a variant of the directory pool. Instead of creating a
      directory on an existing mounted filesystem though, it expects
      a source block device to be named. This block device will be
      mounted and files managed in the directory of its mount point.
      It will default to allowing the kernel to automatically discover
      the filesystem type, though it can be specified manually if
      required.
    </p>

    <h3>Example pool input</h3>
    <pre>
<pool type="fs">
  <name>virtimages</name>
  <source>
    <device path="/dev/VolGroup00/VirtImages"/>
  </source>
  <target>
    <path>/var/lib/virt/images</path>
  </target>
</pool></pre>

    <h3>Valid pool format types</h3>
    <p>
      The filesystem pool supports the following formats:
    </p>
    <ul>
      <li><code>auto</code> - automatically determine format</li>
      <li><code>ext2</code></li>
      <li><code>ext3</code></li>
      <li><code>ext4</code></li>
      <li><code>ufs</code></li>
      <li><code>iso9660</code></li>
      <li><code>udf</code></li>
      <li><code>gfs</code></li>
      <li><code>gfs2</code></li>
      <li><code>vfat</code></li>
      <li><code>hfs+</code></li>
      <li><code>xfs</code></li>
      <li><code>ocfs2</code></li>
    </ul>
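    <p>
      When automatic detection is not wanted, the filesystem type can
      be stated explicitly with a <code>format</code> element in the
      pool source, e.g. (a sketch based on the example above):
    </p>
    <pre>
<source>
  <device path="/dev/VolGroup00/VirtImages"/>
  <format type="ext4"/>
</source></pre>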
    <h3>Valid volume format types</h3>
    <p>
      The valid volume types are the same as for the <code>directory</code>
      pool type.
    </p>

    <h2><a name="StorageBackendNetFS">Network filesystem pool</a></h2>
    <p>
      This is a variant of the filesystem pool. Instead of requiring
      a local block device as the source, it requires the name of a
      host and the path of an exported directory. It will mount this
      network filesystem and manage files within the directory of its
      mount point. It will default to using <code>auto</code> as the
      protocol, which generally tries a mount via NFS first.
    </p>

    <h3>Example pool input</h3>
    <pre>
<pool type="netfs">
  <name>virtimages</name>
  <source>
    <host name="nfs.example.com"/>
    <dir path="/var/lib/virt/images"/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/var/lib/virt/images</path>
  </target>
</pool></pre>

    <h3>Valid pool format types</h3>
    <p>
      The network filesystem pool supports the following formats:
    </p>
    <ul>
      <li><code>auto</code> - automatically determine format</li>
      <li><code>nfs</code></li>
      <li>
        <code>glusterfs</code> - use the glusterfs FUSE file system.
        For now, the <code>dir</code> specified as the source can only
        be a gluster volume name, as gluster does not provide a way to
        directly mount subdirectories within a volume. (To bypass the
        file system completely, see
        the <a href="#StorageBackendGluster">gluster</a> pool.)
      </li>
      <li>
        <code>cifs</code> - use the SMB (samba) or CIFS file system.
        The mount will use "-o guest" to mount the directory
        anonymously. A minimal example follows this list.
      </li>
    </ul>
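    <p>
      As promised above, a minimal CIFS variant of the netfs pool
      (a sketch; the server name and share path are hypothetical):
    </p>
    <pre>
<pool type="netfs">
  <name>winshare</name>
  <source>
    <host name="smb.example.com"/>
    <dir path="/share"/>
    <format type="cifs"/>
  </source>
  <target>
    <path>/var/lib/virt/winshare</path>
  </target>
</pool></pre>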
    <h3>Valid volume format types</h3>
    <p>
      The valid volume types are the same as for the <code>directory</code>
      pool type.
    </p>

    <h2><a name="StorageBackendLogical">Logical volume pools</a></h2>
    <p>
      This provides a pool based on an LVM volume group. For a
      pre-defined LVM volume group, simply providing the group
      name is sufficient, while building a new group requires
      providing a list of source devices to serve as physical
      volumes. Volumes will be allocated by carving out chunks
      of storage from the volume group.
    </p>

    <h3>Example pool input</h3>
    <pre>
<pool type="logical">
  <name>HostVG</name>
  <source>
    <device path="/dev/sda1"/>
    <device path="/dev/sdb1"/>
    <device path="/dev/sdc1"/>
  </source>
  <target>
    <path>/dev/HostVG</path>
  </target>
</pool></pre>
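    <p>
      For a volume group that already exists, a sketch of the
      name-only variant described above:
    </p>
    <pre>
<pool type="logical">
  <name>HostVG</name>
  <source>
    <name>HostVG</name>
    <format type="lvm2"/>
  </source>
  <target>
    <path>/dev/HostVG</path>
  </target>
</pool></pre>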
    <h3>Valid pool format types</h3>
    <p>
      The logical volume pool supports only the <code>lvm2</code> format,
      although not supplying a format value will result in automatic
      selection of the <code>lvm2</code> format.
    </p>

    <h3>Valid volume format types</h3>
    <p>
      The logical volume pool does not use the volume format type element.
    </p>
    <h2><a name="StorageBackendDisk">Disk volume pools</a></h2>
    <p>
      This provides a pool based on a physical disk. Volumes are created
      by adding partitions to the disk. Disk pools have constraints
      on the size and placement of volumes. The 'free extents'
      information will detail the regions which are available for creating
      new volumes. A volume cannot span two different free extents.
      It will default to using <code>msdos</code> as the pool source format.
    </p>

    <h3>Example pool input</h3>
    <pre>
<pool type="disk">
  <name>sda</name>
  <source>
    <device path='/dev/sda'/>
  </source>
  <target>
    <path>/dev</path>
  </target>
</pool></pre>

    <h3>Valid pool format types</h3>
    <p>
      The disk volume pool accepts the following pool format types,
      representing the common partition table types:
    </p>
    <ul>
      <li><code>dos</code></li>
      <li><code>dvh</code></li>
      <li><code>gpt</code></li>
      <li><code>mac</code></li>
      <li><code>bsd</code></li>
      <li><code>pc98</code></li>
      <li><code>sun</code></li>
      <li><code>lvm2</code></li>
    </ul>
    <p>
      The <code>dos</code> or <code>gpt</code> formats are recommended for
      best portability; the latter is needed for disks larger than 2TB.
    </p>

    <h3>Valid volume format types</h3>
    <p>
      The disk volume pool accepts the following volume format types,
      representing the common partition entry types:
    </p>
    <ul>
      <li><code>none</code></li>
      <li><code>linux</code></li>
      <li><code>fat16</code></li>
      <li><code>fat32</code></li>
      <li><code>linux-swap</code></li>
      <li><code>linux-lvm</code></li>
      <li><code>linux-raid</code></li>
      <li><code>extended</code></li>
    </ul>
    <h2><a name="StorageBackendISCSI">iSCSI volume pools</a></h2>
    <p>
      This provides a pool based on an iSCSI target. Volumes must be
      pre-allocated on the iSCSI server, and cannot be created via
      the libvirt APIs. Since /dev/XXX names may change each time libvirt
      logs into the iSCSI target, it is recommended to configure the pool
      to use <code>/dev/disk/by-path</code> or <code>/dev/disk/by-id</code>
      for the target path. These provide persistent stable naming for LUNs.
    </p>
    <p>
      The libvirt iSCSI storage backend does not resolve the provided
      host name or IP address when finding the available target IQNs
      on the host; therefore, defining two pools to use the same IQN
      on the same host will fail the duplicate source pool checks.
    </p>
    <h3>Example pool input</h3>
    <pre>
<pool type="iscsi">
  <name>virtimages</name>
  <source>
    <host name="iscsi.example.com"/>
    <device path="iqn.2013-06.com.example:iscsi-pool"/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool></pre>
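    <p>
      A sketch of handing one of the pool's LUNs to a guest as a
      block device; the by-path name shown is hypothetical, so query
      the actual volume path via the libvirt APIs:
    </p>
    <pre>
<disk type="block" device="disk">
  <driver name="qemu" type="raw"/>
  <source dev="/dev/disk/by-path/ip-192.0.2.1:3260-iscsi-iqn.2013-06.com.example:iscsi-pool-lun-1"/>
  <target dev="vdb" bus="virtio"/>
</disk></pre>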
    <h3>Valid pool format types</h3>
    <p>
      The iSCSI volume pool does not use the pool format type element.
    </p>

    <h3>Valid volume format types</h3>
    <p>
      The iSCSI volume pool does not use the volume format type element.
    </p>
<h2><a name="StorageBackendSCSI">SCSI volume pools</a></h2>
|
|
<p>
|
|
This provides a pool based on a SCSI HBA. Volumes are preexisting SCSI
|
|
LUNs, and cannot be created via the libvirt APIs. Since /dev/XXX names
|
|
aren't generally stable, it is recommended to configure the pool
|
|
to use <code>/dev/disk/by-path</code> or <code>/dev/disk/by-id</code>
|
|
for the target path. These provide persistent stable naming for LUNs
|
|
<span class="since">Since 0.6.2</span>
|
|
</p>
|
|
|
|
<h3>Example pool input</h3>
|
|
<pre>
|
|
<pool type="scsi">
|
|
<name>virtimages</name>
|
|
<source>
|
|
<adapter name="host0"/>
|
|
</source>
|
|
<target>
|
|
<path>/dev/disk/by-path</path>
|
|
</target>
|
|
</pool></pre>
|
|
|
|
<h3>Valid pool format types</h3>
|
|
<p>
|
|
The SCSI volume pool does not use the pool format type element.
|
|
</p>
|
|
|
|
<h3>Valid volume format types</h3>
|
|
<p>
|
|
The SCSI volume pool does not use the volume format type element.
|
|
</p>
|
|
|
|
<h2><a name="StorageBackendMultipath">Multipath pools</a></h2>
|
|
<p>
|
|
This provides a pool that contains all the multipath devices on the
|
|
host. Therefore, only one Multipath pool may be configured per host.
|
|
Volume creating is not supported via the libvirt APIs.
|
|
The target element is actually ignored, but one is required to appease
|
|
the libvirt XML parser.<br/>
|
|
<br/>
|
|
Configuring multipathing is not currently supported, this just covers
|
|
the case where users want to discover all the available multipath
|
|
devices, and assign them to guests.
|
|
<span class="since">Since 0.7.1</span>
|
|
</p>
|
|
|
|
<h3>Example pool input</h3>
|
|
<pre>
|
|
<pool type="mpath">
|
|
<name>virtimages</name>
|
|
<target>
|
|
<path>/dev/mapper</path>
|
|
</target>
|
|
</pool></pre>
|
|
|
|
<h3>Valid pool format types</h3>
|
|
<p>
|
|
The Multipath volume pool does not use the pool format type element.
|
|
</p>
|
|
|
|
<h3>Valid volume format types</h3>
|
|
<p>
|
|
The Multipath volume pool does not use the volume format type element.
|
|
</p>
|
|
|
|
<h2><a name="StorageBackendRBD">RBD pools</a></h2>
|
|
<p>
|
|
This storage driver provides a pool which contains all RBD
|
|
images in a RADOS pool. RBD (RADOS Block Device) is part
|
|
of the Ceph distributed storage project.<br/>
|
|
This backend <i>only</i> supports Qemu with RBD support. Kernel RBD
|
|
which exposes RBD devices as block devices in /dev is <i>not</i>
|
|
supported. RBD images created with this storage backend
|
|
can be accessed through kernel RBD if configured manually, but
|
|
this backend does not provide mapping for these images.<br/>
|
|
Images created with this backend can be attached to Qemu guests
|
|
when Qemu is build with RBD support (Since Qemu 0.14.0). The
|
|
backend supports cephx authentication for communication with the
|
|
Ceph cluster. Storing the cephx authentication key is done with
|
|
the libvirt secret mechanism. The UUID in the example pool input
|
|
refers to the UUID of the stored secret.
|
|
<span class="since">Since 0.9.13</span>
|
|
</p>
|
|
|
|
<h3>Example pool input</h3>
|
|
<pre>
|
|
<pool type="rbd">
|
|
<name>myrbdpool</name>
|
|
<source>
|
|
<name>rbdpool</name>
|
|
<host name='1.2.3.4' port='6789'/>
|
|
<host name='my.ceph.monitor' port='6789'/>
|
|
<host name='third.ceph.monitor' port='6789'/>
|
|
<auth username='admin' type='ceph'>
|
|
<secret uuid='2ec115d7-3a88-3ceb-bc12-0ac909a6fd87'/>
|
|
</auth>
|
|
</source>
|
|
</pool></pre>
|
|
|
|
<h3>Example volume output</h3>
|
|
<pre>
|
|
<volume>
|
|
<name>myvol</name>
|
|
<key>rbd/myvol</key>
|
|
<source>
|
|
</source>
|
|
<capacity unit='bytes'>53687091200</capacity>
|
|
<allocation unit='bytes'>53687091200</allocation>
|
|
<target>
|
|
<path>rbd:rbd/myvol</path>
|
|
<format type='unknown'/>
|
|
<permissions>
|
|
<mode>00</mode>
|
|
<owner>0</owner>
|
|
<group>0</group>
|
|
</permissions>
|
|
</target>
|
|
</volume></pre>
|
|
|
|
<h3>Example disk attachment</h3>
|
|
<p>RBD images can be attached to Qemu guests when Qemu is built
|
|
with RBD support. Information about attaching a RBD image to a
|
|
guest can be found
|
|
at <a href="formatdomain.html#elementsDisks">format domain</a>
|
|
page.</p>
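    <p>
      As a sketch, the volume from the output above could be attached
      with a network disk element like this (reusing the monitor and
      auth details from the example pool):
    </p>
    <pre>
<disk type="network" device="disk">
  <driver name="qemu" type="raw"/>
  <source protocol="rbd" name="rbdpool/myvol">
    <host name="1.2.3.4" port="6789"/>
  </source>
  <auth username="admin">
    <secret type="ceph" uuid="2ec115d7-3a88-3ceb-bc12-0ac909a6fd87"/>
  </auth>
  <target dev="vdb" bus="virtio"/>
</disk></pre>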
    <h3>Valid pool format types</h3>
    <p>
      The RBD pool does not use the pool format type element.
    </p>

    <h3>Valid volume format types</h3>
    <p>
      The RBD pool does not use the volume format type element.
    </p>
<h2><a name="StorageBackendSheepdog">Sheepdog pools</a></h2>
|
|
<p>
|
|
This provides a pool based on a Sheepdog Cluster.
|
|
Sheepdog is a distributed storage system for QEMU/KVM.
|
|
It provides highly available block level storage volumes that
|
|
can be attached to QEMU/KVM virtual machines.
|
|
|
|
The cluster must already be formatted.
|
|
|
|
<span class="since">Since 0.9.13</span>
|
|
</p>
|
|
|
|
<h3>Example pool input</h3>
|
|
<pre>
|
|
<pool type="sheepdog">
|
|
<name>mysheeppool</name>
|
|
<source>
|
|
<name>mysheeppool</name>
|
|
<host name='localhost' port='7000'/>
|
|
</source>
|
|
</pool></pre>
|
|
|
|
<h3>Example volume output</h3>
|
|
<pre>
|
|
<volume>
|
|
<name>myvol</name>
|
|
<key>sheep/myvol</key>
|
|
<source>
|
|
</source>
|
|
<capacity unit='bytes'>53687091200</capacity>
|
|
<allocation unit='bytes'>53687091200</allocation>
|
|
<target>
|
|
<path>sheepdog:myvol</path>
|
|
<format type='unknown'/>
|
|
<permissions>
|
|
<mode>00</mode>
|
|
<owner>0</owner>
|
|
<group>0</group>
|
|
</permissions>
|
|
</target>
|
|
</volume></pre>
|
|
|
|
<h3>Example disk attachment</h3>
|
|
<p>Sheepdog images can be attached to Qemu guests.
|
|
Information about attaching a Sheepdog image to a
|
|
guest can be found
|
|
at the <a href="formatdomain.html#elementsDisks">format domain</a>
|
|
page.</p>
|
|
|
|
<h3>Valid pool format types</h3>
|
|
<p>
|
|
The Sheepdog pool does not use the pool format type element.
|
|
</p>
|
|
|
|
<h3>Valid volume format types</h3>
|
|
<p>
|
|
The Sheepdog pool does not use the volume format type element.
|
|
</p>
|
|
|
|
<h2><a name="StorageBackendGluster">Gluster pools</a></h2>
|
|
<p>
|
|
This provides a pool based on native Gluster access. Gluster is
|
|
a distributed file system that can be exposed to the user via
|
|
FUSE, NFS or SMB (see the <a href="#StorageBackendNetfs">netfs</a>
|
|
pool for that usage); but for minimal overhead, the ideal access
|
|
is via native access (only possible for QEMU/KVM compiled with
|
|
libgfapi support).
|
|
|
|
The cluster and storage volume must already be running, and it
|
|
is recommended that the volume be configured with <code>gluster
|
|
volume set $volname storage.owner-uid=$uid</code>
|
|
and <code>gluster volume set $volname
|
|
storage.owner-gid=$gid</code> for the uid and gid that qemu will
|
|
be run as. It may also be necessary to
|
|
set <code>rpc-auth-allow-insecure on</code> for the glusterd
|
|
service, as well as <code>gluster set $volname
|
|
server.allow-insecure on</code>, to allow access to the gluster
|
|
volume.
|
|
|
|
<span class="since">Since 1.2.0</span>
|
|
</p>
|
|
|
|
<h3>Example pool input</h3>
|
|
<p>A gluster volume corresponds to a libvirt storage pool. If a
|
|
gluster volume could be mounted as <code>mount -t glusterfs
|
|
localhost:/volname /some/path</code>, then the following example
|
|
will describe the same pool without having to create a local
|
|
mount point. Remember that with gluster, the mount point can be
|
|
through any machine in the cluster, and gluster will
|
|
automatically pick the ideal transport to the actual bricks
|
|
backing the gluster volume, even if on a different host than the
|
|
one named in the <code>host</code> designation.
|
|
The <code><name></code> element is always the volume name
|
|
(no slash). The pool source also supports an
|
|
optional <code><dir></code> element with
|
|
a <code>path</code> attribute that lists the absolute name of a
|
|
subdirectory relative to the gluster volume to use instead of
|
|
the top-level directory of the volume.</p>
|
|
<pre>
|
|
<pool type="gluster">
|
|
<name>myglusterpool</name>
|
|
<source>
|
|
<name>volname</name>
|
|
<host name='localhost'/>
|
|
<dir path='/'/>
|
|
</source>
|
|
</pool></pre>
|
|
|
|
<h3>Example volume output</h3>
|
|
<p>Libvirt storage volumes associated with a gluster pool
|
|
correspond to the files that can be found when mounting the
|
|
gluster volume. The <code>name</code> is the path relative to
|
|
the effective mount specified for the pool; and
|
|
the <code>key</code> is a string that identifies a single volume
|
|
uniquely. Currently the <code>key</code> attribute consists of the
|
|
URI of the volume but it may be changed to a UUID of the volume
|
|
in the future.</p>
|
|
<pre>
|
|
<volume>
|
|
<name>myfile</name>
|
|
<key>gluster://localhost/volname/myfile</key>
|
|
<source>
|
|
</source>
|
|
<capacity unit='bytes'>53687091200</capacity>
|
|
<allocation unit='bytes'>53687091200</allocation>
|
|
</volume></pre>
|
|
|
|
<h3>Example disk attachment</h3>
|
|
<p>Files within a gluster volume can be attached to Qemu guests.
|
|
Information about attaching a Gluster image to a
|
|
guest can be found
|
|
at the <a href="formatdomain.html#elementsDisks">format domain</a>
|
|
page.</p>
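    <p>
      A sketch of such a disk element, using the volume from the
      output above:
    </p>
    <pre>
<disk type="network" device="disk">
  <driver name="qemu" type="raw"/>
  <source protocol="gluster" name="volname/myfile">
    <host name="localhost"/>
  </source>
  <target dev="vdb" bus="virtio"/>
</disk></pre>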
    <h3>Valid pool format types</h3>
    <p>
      The Gluster pool does not use the pool format type element.
    </p>

    <h3>Valid volume format types</h3>
    <p>
      The valid volume types are the same as for the <code>directory</code>
      pool type.
    </p>

    <h2><a name="StorageBackendZFS">ZFS pools</a></h2>
    <p>
      This provides a pool based on the ZFS filesystem. It is currently
      supported on FreeBSD only.
    </p>

    <p>A pool can either be created manually using the <code>zpool
      create</code> command, with its name then specified in the source
      section, or <span class="since">since 1.2.9</span> source devices
      can be specified so that libvirt creates the pool.
    </p>

    <p>Please refer to the ZFS documentation for details on pool creation.</p>

    <p><span class="since">Since 1.2.8</span></p>

    <h3>Example pool input</h3>
    <pre>
<pool type="zfs">
  <name>myzfspool</name>
  <source>
    <name>zpoolname</name>
    <device path="/dev/ada1"/>
    <device path="/dev/ada2"/>
  </source>
</pool></pre>
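    <p>
      For a zpool created manually beforehand, a sketch of the
      name-only variant described above:
    </p>
    <pre>
<pool type="zfs">
  <name>myzfspool</name>
  <source>
    <name>zpoolname</name>
  </source>
</pool></pre>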
    <h3>Valid pool format types</h3>
    <p>
      The ZFS volume pool does not use the pool format type element.
    </p>

    <h3>Valid volume format types</h3>
    <p>
      The ZFS volume pool does not use the volume format type element.
    </p>
  </body>
</html>