Fix the ZFS 'Valid Volume Format Types' label and add the valid
pool format types for Vstorage pools.
Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
Found that it was missing from formatstorage and that the storage
driver page had a few typos.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
The default disk storage pool type in XML is 'dos', not 'msdos'.
But tweak wording to keep the term 'msdos' in the text for the
sake of grep searches.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Introduce the pool as a no-op and integrate it into the build
system. The implementation will come in the following commits.
Signed-off-by: Clementine Hayat <clem@lse.epita.fr>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The HTML5 doctype is simply

  <!DOCTYPE html>

No DTD is present because HTML5 is no longer defined as an
extension of SGML.
XSL has no way to natively output a doctype without a public
or system identifier, so we have to use an <xsl:text> hack
instead.
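For illustration, a minimal sketch of the kind of <xsl:text> hack
meant here (the template context is assumed, not taken from the
actual stylesheet):

<xsl:template match="/">
  <!-- emit the doctype as escaped text, since xsl:output cannot
       produce a doctype without a public/system identifier -->
  <xsl:text disable-output-escaping="yes">&lt;!DOCTYPE html&gt;
</xsl:text>
  <xsl:apply-templates/>
</xsl:template>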
See also
https://dev.w3.org/html5/html-author/#doctype-declaration
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The 'name' attribute on <a...> elements is deprecated in favour
of the 'id' attribute which is allowed on any element. HTML5
drops 'name' support entirely.
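For example (anchor name hypothetical), the change amounts to
rewriting

  <a name="pool-types">Pool types</a>

as

  <a id="pool-types">Pool types</a>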
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Since we have this template at hand, why not use it wherever
possible (list of supported pool types and remote access section).
Also, perform some stylistic micro-adjustments.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
The <pre/> section is rendered as-is on the page. That is, if all
the lines are prefixed with 4 spaces, the rendered page will also
have them. The problem is if we put a box around such a <pre/>,
because the content might not fit into it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If no port number was provided for a storage pool libvirt defaults to
port 6789; however, librbd/librados already default to 6789 when no port
number is provided.
In the future Ceph will switch to a new port for the Ceph monitors since
port 6789 is already assigned to a different application by IANA.
Port 6789 is assigned to SMC-HTTPS and Ceph now has port 3300 assigned as
the 'Ceph monitor' port.
In this case the best solution is to not hardcode any port number
into libvirt and let librados handle the connection. Only if a user
specifies a different port number do we pass it down to librados;
otherwise we leave it blank.
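A sketch of a pool definition that leaves the port to librados
(host and pool names are hypothetical):

<pool type='rbd'>
  <name>cephpool</name>
  <source>
    <name>libvirt-pool</name>
    <host name='mon1.example.com'/>
  </source>
</pool>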
Signed-off-by: Wido den Hollander <wido@widodh.nl>
https://bugzilla.redhat.com/show_bug.cgi?id=1232606
Since an mpath pool contains all the multipath devices on a host,
defining more than one on a host at a time should be disallowed under
the policy of disallowing duplicate source pools for the host.
Adjust the docs to clarify the multipath target path value usage for
both the storage driver (only 1 pool per host) and formatstorage
references (ignore the target element in favor of the default target
mapping of /dev/mapper).
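For reference, such a pool definition stays minimal; a sketch (pool
name hypothetical):

<pool type='mpath'>
  <name>mpath</name>
  <target>
    <path>/dev/mapper</path>
  </target>
</pool>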
https://bugzilla.redhat.com/show_bug.cgi?id=1186969
When generating the path to the dir for a CIFS/Samba driver, the code
would generate a source path for the mount using "%s:%s", while
mount.cifs expects to see "//%s/%s". So check for the cifs filesystem
and format the source path appropriately.
Additionally, since there is no means to authenticate, the mount
needs a "-o guest" on the command line in order to mount the Samba
directory anonymously.
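A sketch of a netfs pool using the cifs format (host, share and
target path are hypothetical):

<pool type='netfs'>
  <name>samba-share</name>
  <source>
    <host name='fileserver.example.com'/>
    <dir path='/share'/>
    <format type='cifs'/>
  </source>
  <target>
    <path>/mnt/samba-share</path>
  </target>
</pool>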
https://bugzilla.redhat.com/show_bug.cgi?id=1171984
https://bugzilla.redhat.com/show_bug.cgi?id=1188463
Remove the check on the source host name from iSCSI source XML
processing; instead, declare duplicate sources when the source device
path, and the initiator if present, of a proposed storage pool match
those of an existing storage pool.
The backend iSCSI storage driver uses 'iscsiadm --mode session' to query
available iscsid target sessions. The output displayed is the IP address
and the IQN (target path) of known targets. The displayed IP address
is a resolved address based on the session --login. Additionally, iscsid
keeps track of the various ways to define the host name (IPv4 Address,
IPv6 Address, /etc/hosts, etc.) for that IQN (see output of an 'iscsiadm
--mode node'). If an incoming IQN matches and the host name provided by
libvirt is resolved to the existing IQN, then iscsid will "reuse" the
session. Although libvirt could do the same name resolution, if there
were a difference, iscsid could still declare two seemingly different
sources to be the same and not create a new session, which means
libvirt would now have two storage pools looking at the same source.
Thus, to avoid any strange host name resolution issues, just rely on
iscsid for that and do not allow multiple pools on the same host to
use the same device path (IQN).
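For reference, a sketch of an iSCSI pool source carrying the device
path (IQN) and an optional initiator (all names hypothetical):

<pool type='iscsi'>
  <name>iscsipool</name>
  <source>
    <host name='iscsi.example.com'/>
    <device path='iqn.2013-06.com.example:iscsi-pool'/>
    <initiator>
      <iqn name='iqn.2013-06.com.example:initiator0'/>
    </initiator>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>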
https://bugzilla.redhat.com/show_bug.cgi?id=1181062
According to the formatstorage.html description for <source> element
and "format" attribute: "All drivers are required to have a default
value for this, so it is optional."
As it turns out the disk backend did not choose a default value, so I
added a default of "msdos" if the source type is "unknown" and
updated the storage.html backend disk volume driver documentation to
indicate that the default format is dos.
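A sketch of a disk pool spelling the format out explicitly (device
path hypothetical; leaving <format> out now yields the same 'dos'
default):

<pool type='disk'>
  <name>sda</name>
  <source>
    <device path='/dev/sda'/>
    <format type='dos'/>
  </source>
  <target>
    <path>/dev</path>
  </target>
</pool>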
According to our documentation the logical pool supports formats
'auto' and 'lvm2'. However, in storage_conf.c we previously defined
the storage pool formats: unknown, lvm2. For backward compatibility
reasons we must continue to refer to pool format type 'unknown'
instead of 'auto'.
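For reference, a sketch of a logical pool using the 'lvm2' format
(volume group name hypothetical):

<pool type='logical'>
  <name>vg0</name>
  <source>
    <name>vg0</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/vg0</path>
  </target>
</pool>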
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1123767
- docs/formatstorage.html.in: document the 'zfs' pool type, add it
  to the list of pool types that can use source physical devices
- docs/storage.html.in: update the ZFS pool example XML with
  source physical devices, mention that starting from 1.2.9 a
  pool can be created from these devices by libvirt, while in
  earlier versions the user still has to create the pool manually
- docs/drvbhyve.html.in: add an example with ZFS pools
Implement ZFS storage backend driver. Currently supported
only on FreeBSD because of ZFS limitations on Linux.
Features supported:
- pool-start, pool-stop
- pool-info
- vol-list
- vol-create / vol-delete
A pool definition looks like this:
<pool type='zfs'>
  <name>myzfspool</name>
  <source>
    <name>actualpoolname</name>
  </source>
</pool>
The 'actualpoolname' value is the name of the pool on the system,
as shown by the 'zpool list' command. A target makes no sense
here because volume paths are always /dev/zvol/$poolname/$volname.
The user has to create the pool on their own; this driver does not
currently support pool creation.
A volume can be used with QEMU by adding an entry like this:
<disk type='volume' device='disk'>
  <driver name='qemu' type='raw'/>
  <source pool='myzfspool' volume='vol5'/>
  <target dev='hdc' bus='ide'/>
</disk>
According to our documentation the "key" value has the following
meaning: "Providing an identifier for the volume which identifies a
single volume." The currently used keys for gluster volumes consist of
the gluster volume name and file path. This can't be considered unique
as a different storage server can serve a volume with the same name.
Unfortunately I wasn't able to figure out a way to retrieve the gluster
volume UUID which would avoid the possibility of having two distinct
keys identifying a single volume.
Use the full URI as the key for the volume to avoid the more critical
ambiguity problem and document the possible change to UUID.
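Such a key might then look like this (server, volume and file names
hypothetical):

gluster://gluster.example.com/volname/images/disk.img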
Add support for a new <pool type='gluster'>, similar to
RBD and Sheepdog. Terminology-wise, a gluster volume
forms a libvirt storage pool; within the gluster volume,
individual files are treated as libvirt storage volumes.
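A sketch of such a pool definition (host and volume names
hypothetical):

<pool type='gluster'>
  <name>glusterpool</name>
  <source>
    <name>volname</name>
    <host name='gluster.example.com'/>
    <dir path='/'/>
  </source>
</pool>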
* docs/schemas/storagepool.rng (poolgluster): New pool type.
* docs/formatstorage.html.in: Document gluster.
* docs/storage.html.in: Likewise, and contrast it with netfs.
* tests/storagepoolxml2xmlin/pool-gluster.xml: New test.
* tests/storagepoolxml2xmlout/pool-gluster.xml: Likewise.
* tests/storagepoolxml2xmltest.c (mymain): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
While trying to compare netfs against my new gluster pool, I
discovered two things:
virt-xml-validate chokes on valid XML produced by 'virsh pool-dumpxml'
[yet another reason that ALL patches that add new XML should be adding
corresponding tests]
When using glusterfs FUSE mounts, you cannot access a subdirectory
of a gluster volume. The recommended workaround in the gluster
community is to mount the volume to an intermediate location, then
bind-mount the desired subdirectory to the final location. Maybe
we should teach libvirt to do bind-mounting, but for now I chose to
just document the limitation.
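For comparison, a sketch of a netfs pool mounting a gluster volume
via FUSE (names hypothetical; note the dir path can only name the
volume itself, not a subdirectory):

<pool type='netfs'>
  <name>gluster-netfs</name>
  <source>
    <host name='gluster.example.com'/>
    <dir path='/volname'/>
    <format type='glusterfs'/>
  </source>
  <target>
    <path>/mnt/gluster</path>
  </target>
</pool>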
* docs/storage.html.in: Improve documentation.
* docs/schemas/storagepool.rng (sourcefmtnetfs): Allow all
formats, and drop redundant info-vendor.
* tests/storagepoolxml2xmltest.c (mymain): New test.
* tests/storagepoolxml2xmlin/pool-netfs-gluster.xml: New file.
* tests/storagepoolxml2xmlout/pool-netfs-gluster.xml: Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
The rule generating the HTML docs passes the --html flag
to xsltproc. This makes it use the legacy HTML parser, which
either ignores or tries to fix all sorts of broken XML tags.
There's no reason why we should be writing broken XML in
the first place, so removing --html and adding the XHTML
doctype to all files forces us to create good XML.
This adds the XHTML doctype and fixes the many, many XML tag
problems it exposes.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Add qed for dir and fs pools.
Add ocfs2 for disk pool.
Add lvm2 for disk and logical pool.
Add cifs and glusterfs for netfs pool.
Note: POOL_DISK_LVM2 cannot be created by "parted mklabel", but is
only returned from auto-detection on disk pools.
Signed-off-by: Philipp Hahn <hahn@univention.de>
This patch brings support for managing sheepdog pools and volumes to
libvirt. It uses the "collie" command-line utility that comes with
sheepdog for that.
A sheepdog pool in libvirt maps to a sheepdog cluster.
It needs a host and port to connect to, which in most cases
is just going to be the default of localhost on port 7000.
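A sketch of a sheepdog pool definition (pool name hypothetical):

<pool type='sheepdog'>
  <name>mysheeppool</name>
  <source>
    <host name='localhost' port='7000'/>
  </source>
</pool>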
A sheepdog volume in libvirt maps to a sheepdog vdi.
To create one specify the pool, a name and the capacity.
Volumes can also be resized later.
In the volume XML the vdi name has to be put into the <target><path>.
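A sketch of such a volume definition (vdi name and capacity
hypothetical):

<volume>
  <name>myvdi</name>
  <capacity>10737418240</capacity>
  <target>
    <path>myvdi</path>
  </target>
</volume>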
To use the volume as a disk source for virtual machines, specify
the vdi name as the "name" attribute of the <source>. The host and
port information from the pool are specified inside the host tag.
<disk type='network'>
  ...
  <source protocol="sheepdog" name="vdi_name">
    <host name="localhost" port="7000"/>
  </source>
</disk>
To work correctly, this patch parses the output of collie, so it
relies on the raw output option. There was recently a bug which
caused size information to be reported wrongly. This is already
fixed upstream and will be in the next release.
Signed-off-by: Sebastian Wiedenroth <wiedi@frubar.net>
This patch adds support for a new storage backend with RBD support.
RBD is the RADOS Block Device and is part of the Ceph distributed storage
system.
It comes in two flavours: Qemu-RBD and Kernel RBD. This storage
backend only supports Qemu-RBD, thus limiting the use of this storage
driver to Qemu.
To function, this backend relies on librbd and librados being present
on the local system.
The backend also supports Cephx authentication for safe authentication with
the Ceph cluster.
For storing credentials it uses the built-in secret mechanism of libvirt.
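A sketch of an RBD pool using cephx authentication (host, user and
secret UUID are hypothetical placeholders):

<pool type='rbd'>
  <name>rbdpool</name>
  <source>
    <name>libvirt-pool</name>
    <host name='mon1.example.com' port='6789'/>
    <auth username='libvirt' type='ceph'>
      <secret uuid='2ec115d7-3a88-3ceb-bc12-0ac909a6fd87'/>
    </auth>
  </source>
</pool>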
Signed-off-by: Wido den Hollander <wido@widodh.nl>
The storage pools page contains details about the capabilities of the
various pool types, but not an overview of how they are intended to be
used. This patch adds some explanation of what pools and volumes can
be used for and why an administrator might want to use them.