Add support for a new <pool type='gluster'>, similar to
RBD and Sheepdog. Terminology-wise, a gluster volume
forms a libvirt storage pool; within the gluster volume,
individual files are treated as libvirt storage volumes.
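A minimal pool definition, as a sketch (the pool name, host, and
gluster volume name below are illustrative):
<pool type='gluster'>
  <name>myglusterpool</name>
  <source>
    <name>volname</name> <!-- illustrative gluster volume name -->
    <host name='localhost'/>
    <dir path='/'/>
  </source>
</pool>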
* docs/schemas/storagepool.rng (poolgluster): New pool type.
* docs/formatstorage.html.in: Document gluster.
* docs/storage.html.in: Likewise, and contrast it with netfs.
* tests/storagepoolxml2xmlin/pool-gluster.xml: New test.
* tests/storagepoolxml2xmlout/pool-gluster.xml: Likewise.
* tests/storagepoolxml2xmltest.c (mymain): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
While trying to compare netfs against my new gluster pool, I
discovered two things:
virt-xml-validate chokes on valid XML produced by 'virsh pool-dumpxml'
[yet another reason that ALL patches that add new XML should also
add corresponding tests]
When using glusterfs FUSE mounts, you cannot access a subdirectory
of a gluster volume. The recommended workaround in the gluster
community is to mount the volume to an intermediate location, then
bind-mount the desired subdirectory to the final location. Maybe
we should teach libvirt to do bind-mounting, but for now I chose to
just document the limitation.
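For comparison, a netfs pool using the glusterfs format looks roughly
like this (pool name, host, volume, and target path are placeholders):
<pool type='netfs'>
  <name>mygluster</name>
  <source>
    <host name='localhost'/>
    <dir path='/volname'/> <!-- the gluster volume; subdirectories won't work -->
    <format type='glusterfs'/>
  </source>
  <target>
    <path>/mnt/gluster</path>
  </target>
</pool>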
* docs/storage.html.in: Improve documentation.
* docs/schemas/storagepool.rng (sourcefmtnetfs): Allow all
formats, and drop redundant info-vendor.
* tests/storagepoolxml2xmltest.c (mymain): New test.
* tests/storagepoolxml2xmlin/pool-netfs-gluster.xml: New file.
* tests/storagepoolxml2xmlout/pool-netfs-gluster.xml: Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
The rule generating the HTML docs passes the --html flag
to xsltproc. This makes it use the legacy HTML parser, which
either ignores or tries to fix all sorts of broken XML tags.
There's no reason why we should be writing broken XML in
the first place, so removing --html and adding the XHTML
doctype to all files forces us to create good XML.
This adds the XHTML doctype and fixes the many, many XML tag
problems it exposes.
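For reference, an XHTML doctype declaration looks like the following
(shown here as XHTML 1.0 Strict; the exact declaration used may differ):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">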
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Add qed for dir/fs pool.
Add ocfs2 for disk pool.
Add lvm2 for disk and logical pool.
Add cifs and glusterfs for netfs pool.
Note: POOL_DISK_LVM2 cannot be created by "parted mklabel", but is only
returned from auto-detection on disk pools.
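For illustration, a volume format such as qed is selected in the
volume's <target> element; a sketch with placeholder name and size:
<volume>
  <name>myimage.qed</name> <!-- placeholder volume name -->
  <capacity>10737418240</capacity>
  <target>
    <format type='qed'/>
  </target>
</volume>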
Signed-off-by: Philipp Hahn <hahn@univention.de>
This patch adds support for managing sheepdog pools and volumes to libvirt.
It uses the "collie" command-line utility that ships with sheepdog.
A sheepdog pool in libvirt maps to a sheepdog cluster.
It needs a host and port to connect to, which in most cases
is just going to be the default of localhost on port 7000.
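A minimal pool definition is then little more than the host and port
(the pool name here is illustrative):
<pool type='sheepdog'>
  <name>mysheeppool</name>
  <source>
    <host name='localhost' port='7000'/>
  </source>
</pool>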
A sheepdog volume in libvirt maps to a sheepdog vdi.
To create one specify the pool, a name and the capacity.
Volumes can also be resized later.
In the volume XML the vdi name has to be put into the <target><path>.
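A volume definition following that description might look like this
(vdi name and capacity are placeholders):
<volume>
  <name>myvdi</name> <!-- placeholder vdi name -->
  <capacity>10737418240</capacity>
  <target>
    <path>myvdi</path>
  </target>
</volume>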
To use the volume as a disk source for virtual machines, specify
the vdi name as the "name" attribute of the <source> element.
The host and port information from the pool are specified inside the host tag.
<disk type='network'>
  ...
  <source protocol="sheepdog" name="vdi_name">
    <host name="localhost" port="7000"/>
  </source>
</disk>
To work correctly, this patch parses the output of collie,
so it relies on the raw output option. There was recently a bug
that caused size information to be reported incorrectly; this is
already fixed upstream and will be in the next release.
Signed-off-by: Sebastian Wiedenroth <wiedi@frubar.net>
This patch adds a new storage backend with RBD support.
RBD is the RADOS Block Device and is part of the Ceph distributed storage
system.
It comes in two flavours, Qemu-RBD and Kernel RBD; this storage backend
only supports Qemu-RBD, thus limiting the use of this storage driver to
Qemu.
To function, this backend relies on librbd and librados being present on
the local system.
The backend also supports Cephx authentication for secure authentication
with the Ceph cluster.
For storing credentials it uses libvirt's built-in secret mechanism.
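Putting that together, a pool definition could look roughly like this
(monitor host, pool name, and secret UUID are placeholders):
<pool type='rbd'>
  <name>myrbdpool</name>
  <source>
    <name>rbdpool</name> <!-- the RADOS pool to use -->
    <host name='monitor.example.org' port='6789'/>
    <auth type='ceph' username='admin'>
      <secret uuid='00000000-0000-0000-0000-000000000000'/> <!-- placeholder -->
    </auth>
  </source>
</pool>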
Signed-off-by: Wido den Hollander <wido@widodh.nl>
The storage pools page contains details about the capabilities of the
various pool types, but not an overview of how they are intended to be
used. This patch adds some explanation of what pools and volumes can
be used for and why an administrator might want to use them.