https://bugzilla.redhat.com/show_bug.cgi?id=1581670
During storage driver backend initialization, let's save
which backends are available in the storage pool capabilities.
In order to format those, we need to add a connectGetCapabilities
processor to the storageHypervisorDriver. This allows a storage
connection, such as "storage:///system", to find the API and
format the results, such as:
virsh -c storage:///system capabilities
<capabilities>
  <pool>
    <enum name='type'>
      <value>dir</value>
      <value>fs</value>
      <value>netfs</value>
      <value>logical</value>
      <value>iscsi</value>
      <value>iscsi-direct</value>
      <value>scsi</value>
      <value>mpath</value>
      <value>disk</value>
      <value>rbd</value>
      <value>sheepdog</value>
      <value>gluster</value>
      <value>zfs</value>
    </enum>
  </pool>
</capabilities>
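A minimal sketch of the wiring (the handler body is illustrative;
virStoragePoolCapsFormat stands in for whatever formats the saved
backend list, and "driver" for the storage driver state):

  static char *
  storageConnectGetCapabilities(virConnectPtr conn)
  {
      if (virConnectGetCapabilitiesEnsureACL(conn) < 0)
          return NULL;

      /* format the pool capabilities saved during storage driver
       * backend initialization (hypothetical helper) */
      return virStoragePoolCapsFormat(driver->caps);
  }

  static virHypervisorDriver storageHypervisorDriver = {
      .name = "storage",
      .connectGetCapabilities = storageConnectGetCapabilities,
      /* ... remaining callbacks ... */
  };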
Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
Require that all headers are guarded by a symbol named
LIBVIRT_$FILENAME
where $FILENAME is the uppercased filename, with all characters
outside a-z changed into '_'.
Note we do not use a leading __ because that is technically a
namespace reserved for the toolchain.
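For example, a hypothetical header src/util/virexample.h would be
guarded as:

  #ifndef LIBVIRT_VIREXAMPLE_H
  # define LIBVIRT_VIREXAMPLE_H

  /* declarations ... */

  #endif /* LIBVIRT_VIREXAMPLE_H */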
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
As advertised in the previous commit, there are three APIs that might
run for quite some time (because they read/write data from/to a
volume): downloadVol, uploadVol, and wipeVol.
Release pool object lock and reacquire it later to allow more
concurrency.
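The pattern looks roughly like this (argument list and the in-use
marking are illustrative; the marking is what keeps the volume from
being deleted while the lock is dropped):

  /* mark the volume busy so no concurrent API removes it */
  voldef->in_use++;
  virObjectUnlock(obj);

  ret = backend->downloadVol(obj, voldef, stream, offset, length, flags);

  virObjectLock(obj);
  voldef->in_use--;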
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The storage driver backends serve the public storage pools API, while
the storage file backends serve the internal QEMU driver and/or
libvirt utility code.
To prep for moving this storage file backend framework into the utility
code, split out the backend definitions.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Now that we can open connections to the secondary drivers on demand,
there is no need to pass a virConnectPtr into all the backend
functions.
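As an illustration (the backend function name is hypothetical), a
backend that needs the secret driver now opens a connection on demand
instead of receiving one from the caller:

  static int
  virStorageBackendFooStartPool(virStoragePoolObjPtr pool)
  {
      virConnectPtr conn = virGetConnectSecret();

      if (!conn)
          return -1;
      /* ... look up secrets via conn ... */
      virObjectUnref(conn);
      return 0;
  }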
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Move all the StoragePoolObj-related APIs from storage_conf into their
own module, virstorageobj.
Purely code motion at this point, plus adjustments to cleanly build.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Add a new storage driver registration function that forces the
backend code to fail if any of the storage backend modules can't be
loaded. This makes sure that they are present and work.
Add APIs that allow dynamically registering driver backends so that
the list of available drivers does not need to be known at compile
time. This will allow us to modularize the storage driver at runtime.
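A sketch of how a backend module registers itself at runtime, modelled
on the dir/fs backend (treat the exact names as illustrative):

  int
  virStorageBackendFsRegister(void)
  {
      if (virStorageBackendRegister(&virStorageBackendDirectory) < 0)
          return -1;
      if (virStorageBackendRegister(&virStorageBackendFileSystem) < 0)
          return -1;
      return 0;
  }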
The file became a garbage dump for all kinds of utility functions over
time. Move them to a separate file so that the files can become a clean
interface for the storage backends.
For volume processing in virStorageBackendUpdateVolTargetInfo, commit
id 'a760ba3a7' added the ability to probe a volume that didn't list a
target format in order to get the capacity. Unfortunately, the code
used the virStorageSource (e.g. target->type - virStorageType) rather
than the virStorageVolDef (e.g. vol->type - virStorageVolType) to
make the comparison. As it turns out, target->type is not filled in
at all for a voldef, since the code relies on vol->type. Ironically,
the result was that only VIR_STORAGE_VOL_BLOCK volumes would get
their capacity updated.
This patch adjusts the code to pass and check the vol->type field
instead, so that for a voldef the correct comparison is made.
Additionally, for a backingStore the 'type' field is never filled in;
however, since we know that the provided path is a location at which
the backing store can be accessed on the local filesystem, just pass
VIR_STORAGE_VOL_FILE in order to satisfy the adjusted voltype check.
Whether it's a FILE or a BLOCK only matters if we're trying to get
more data based on the target->format.
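A sketch of the adjusted entry point (parameter list abridged); the
key point is that the probe decision now keys off the
virStorageVolType supplied by the caller rather than the never-set
target->type:

  int
  virStorageBackendUpdateVolTargetInfo(virStorageVolType voltype,
                                       virStorageSourcePtr target,
                                       bool withBlockVolFormat,
                                       unsigned int openflags)
  {
      /* voltype is vol->type for a voldef; a backingStore caller
       * passes VIR_STORAGE_VOL_FILE explicitly since its path is
       * known to be reachable on the local filesystem */
      if (voltype == VIR_STORAGE_VOL_FILE &&
          target->format == VIR_STORAGE_FILE_NONE) {
          /* probe the format so the capacity can be read */
      }
      /* remainder unchanged */
      return 0;
  }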
The tool is used for pool discovery. Since we call an external binary we
don't really need to compile out the code that uses it. We can check
whether it exists at runtime.
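A sketch of such a runtime check using libvirt's virFindFileInPath
helper (the binary name here is just an example):

  char *path = NULL;

  if (!(path = virFindFileInPath("gluster"))) {
      virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
                     _("gluster binary not found in $PATH"));
      return -1;
  }
  VIR_FREE(path);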
It's possible that the API could be called from a startup path in
order to check whether the label on the device matches what our
format is. In order to handle that condition, add a 'writelabel'
boolean to the API in order to indicate whether a write or just
read is about to happen.
This alters two "error" conditions that care about the distinction.
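A sketch of the adjusted signature, using the
virStorageBackendDeviceIsEmpty helper named in the next commit as the
example (treat the exact API as an assumption):

  /* writelabel: true when the caller is about to write a label,
   * false when it only wants to verify the existing label */
  bool
  virStorageBackendDeviceIsEmpty(const char *devpath,
                                 const char *format,
                                 bool writelabel);

A startup path would pass writelabel = false to merely check the
on-disk label against the configured format.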
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rename virStorageBackendFileSystemProbe to virStorageBackendBLKIDFindFS
and move it to the more common storage_backend module.
Create a shim virStorageBackendDeviceIsEmpty which will call
virStorageBackendBLKIDFindFS and check the return value.
Signed-off-by: John Ferlan <jferlan@redhat.com>
The code at the very bottom of the DAC secdriver that calls
chown() should be fine with read-only data. If something needs to
be prepared it should have been done beforehand.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Partially resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=1301021
If the volume XML requests a luks volume, take the necessary steps to
make that happen.
The processing will be:
1. create a temporary file (virStorageBackendCreateQemuImgSecretPath)
1a. use the storage driver state dir path that uses the pool and
volume name as a base.
2. create a secret object (virStorageBackendCreateQemuImgSecretObject)
2a. use an alias combining the volume name and "_luks0"
2b. add the file to the object
3. create/add luks options to the commandline (virQEMUBuildLuksOpts)
3a. at the very least a "key-secret=%s" using the secret object alias
3b. if found in the XML the various "cipher" and "ivgen" options
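With those steps in place, the generated qemu-img invocation looks
roughly like this (paths and alias illustrative, for a volume named
"demo"):

  qemu-img create -f luks \
     --object secret,id=demo_luks0,file=/var/lib/libvirt/storage/demo.secret \
     -o key-secret=demo_luks0 \
     /var/lib/libvirt/images/demo 5242880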
Signed-off-by: John Ferlan <jferlan@redhat.com>
Remove all the plumbing needed for the different qcow-create/kvm-img
non-raw file creation.
We can drop the error messages because CreateQemuImg will throw an
error for us, albeit with slightly less fidelity (unable to find
qemu-img), which I think is acceptable given the unlikeliness of that
error in practice.
Refreshes meta-information such as allocation, capacity, format, etc.
Ploop volumes differ from other volume types: the path to a volume is
the path to a directory containing the image file root.hds and
DiskDescriptor.xml.
https://openvz.org/Ploop/format
Due to this, the volume has to be opened once again to get the
information.
To decide whether a given volume is a ploop one, it is necessary to
check for the presence of the root.hds and DiskDescriptor.xml files
in the volume's directory. Only in this case can the volume be
manipulated as a ploop volume. This strategy helps us resolve
problems that might occur when we upload some other volume type from
a ploop source.
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
These callbacks let us create ploop volumes in dir, fs and similar
pools.
If a ploop volume was created via the buildVol callback, then this
volume is an empty ploop device with a DiskDescriptor.xml.
If the volume was created via buildFrom, then its content matches the
input volume's content.
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Similar to the openflags VIR_STORAGE_VOL_OPEN_NOERROR processing, if
some read processing operation fails, check the readflags for the
corresponding error flag being set. If so, rather than causing an
error, use VIR_WARN to flag the error, but return -2, which some
callers can use to perform specific actions. Use a new
VIR_STORAGE_VOL_READ_NOERROR flag in a new VolReadErrorMode enum.
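A sketch of the intended handling (the enclosing read helper is
abridged):

  if ((nread = saferead(fd, buf, sizeof(buf))) < 0) {
      if (readflags & VIR_STORAGE_VOL_READ_NOERROR) {
          VIR_WARN("ignoring failed read of file '%s'", path);
          return -2;  /* callers may take volume-specific action */
      }
      virReportSystemError(errno, _("cannot read file '%s'"), path);
      return -1;
  }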
Similar to the openflags which allow VIR_STORAGE_VOL_OPEN_NOERROR to be
passed to avoid open errors, add a 'readflags' variable so that in the
future read failures could also be ignored.
https://bugzilla.redhat.com/show_bug.cgi?id=1233003
Commit id 'fdda3760' only managed a symptom where it was possible to
create a file in a pool without libvirt's knowledge, so it was reverted.
The real fix is to have all the createVol APIs which actually create
a volume (disk, logical, zfs) and the buildVol APIs which handle the
real creation of some volume file (fs, rbd, sheepdog) manage deleting
any volume which they create when there is some sort of error in
processing the volume.
This way the onus isn't left up to the storage_driver to determine
whether the buildVol failure was due to some failure as a result of
adjustments made to the volume after creation (such as getting sizes,
changing ownership, or changing volume protections) or simply a
failure in creation.
Without needing to consider that the volume has to be removed, the
buildVol failure path only needs to remove the volume object from the
pool's volume list.
This way if a creation failed due to duplicate name, libvirt wouldn't
remove a volume that it didn't create in the pool target.
The XML parser sets a default <mode> if none is explicitly passed in.
This is then used at pool/vol creation time, and unconditionally reported
in the XML.
The problem with this approach is that it's impossible for other code
to determine if the user explicitly requested a storage mode. There
are some cases where we want to make this distinction, but we currently
can't.
Handle <mode> parsing like we handle <owner>/<group>: if no value is
passed in, set it to -1, and adjust the internal consumers to handle
it.
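A sketch of the parsing change (XPath expression illustrative; error
handling abridged):

  char *mode = virXPathString("string(./mode)", ctxt);

  if (!mode) {
      perms->mode = (mode_t) -1;  /* no explicit <mode>: consumers decide */
  } else {
      int tmp;

      if (virStrToLong_i(mode, NULL, 8, &tmp) < 0) {
          VIR_FREE(mode);
          return -1;
      }
      perms->mode = tmp;
      VIR_FREE(mode);
  }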
For virStorageBackendStablePath, split out the checks regarding
whether the pool's target is empty, is using /dev, is using /dev/, or
doesn't start with /dev, so that other code can make decisions based
on them.
In order to be able to use 'checkPool' inside functions which do not
have any connection reference, the 'conn' parameter needs to be
dropped from checkPool's signature, since it's not used by any
storage backend anyway.
Gluster storage works on a similar principle to NFS where it takes the
uid and gid of the actual process and uses it to access the storage
volume on the remote server. This introduces a need to chown storage
files on gluster via native API.
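A minimal sketch of the native chown this requires (error handling
abridged; fs is an initialized glfs_t handle and the path is
illustrative):

  #include <glusterfs/api/glfs.h>

  /* make the file owned by the uid/gid that the accessing process
   * will present to the gluster server */
  if (glfs_chown(fs, "images/disk.img", uid, gid) < 0) {
      virReportSystemError(errno, _("cannot chown '%s'"),
                           "images/disk.img");
      return -1;
  }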
For non-local storage drivers we can't expect to use the "scrub" tool to
wipe the volume. Split the code into a separate backend function so that
we can add protocol specific code later.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1118710
The next patch will move the storage volume wiping code into the
individual backends. This patch splits out the common code to wipe a
local volume into a separate backend helper so that the next patch is
simpler.
For non-local storage drivers we can't expect to use the FDStream
backend for up/downloading volumes. Split the code into a separate
backend function so that we can add protocol specific code later.
Add a new function wrapper and tweak the storage file backend lookup
function so that it can be used without reporting an error. This will
be useful in the metadata crawler code, where we need to silently
break out if metadata retrieval is not supported for the current
storage type.
Different protocols have different means to uniquely identify a storage
file. This patch implements a storage driver API to retrieve a unique
string describing a volume. The current implementation works for local
storage only and returns the canonical path of the volume.
To add caching support the local filesystem driver now has a private
structure holding the cached string, which is created only when it's
initially accessed.
This patch provides the implementation for local files only, for a
start.
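A sketch of the local implementation with its lazily filled cache
(struct and function names modelled on the description above):

  typedef struct _virStorageFileBackendFsPriv virStorageFileBackendFsPriv;
  struct _virStorageFileBackendFsPriv {
      char *canonpath;  /* unique identifier: the canonical path */
  };

  static const char *
  virStorageFileBackendFileGetUniqueIdentifier(virStorageSourcePtr src)
  {
      virStorageFileBackendFsPriv *priv = src->drv->priv;

      /* created only on first access, then cached */
      if (!priv->canonpath)
          priv->canonpath = canonicalize_file_name(src->path);

      return priv->canonpath;
  }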
Add storage driver based functions to access headers of storage files
for metadata extraction. Along with this patch, implementations for
the local filesystem and for gluster via libgfapi are provided. The
gluster implementation is based on the code of the saferead_lim
function.
To allow using the storage driver APIs to access files on various
storage sources in a universal fashion possibly on storage such as nfs
with root squash we'll need to store the desired uid/gid in the
metadata.
Add new initialisation API that will store the desired uid/gid and a
wrapper for the current use. Additionally add docs for the two APIs.
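The relationship between the two APIs (the wrapper keeps existing
callers unchanged; -1 means "the current user"):

  int
  virStorageFileInitAs(virStorageSourcePtr src, uid_t uid, gid_t gid);

  int
  virStorageFileInit(virStorageSourcePtr src)
  {
      return virStorageFileInitAs(src, -1, -1);
  }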
Commit id 'ac9a0963' refactored out the 'withCapacity' flag from the
virStorageBackendUpdateVolInfo() API. See:
http://www.redhat.com/archives/libvir-list/2014-April/msg00043.html
This resulted in a difference in how 'virsh vol-info --pool <poolName>
<volume>' or 'virsh vol-list --pool <poolName> --details' outputs the
capacity information for a directory pool with a qcow2 sparse file.
For example, using the following XML
mkdir /home/TestPool
cat testpool.xml
<pool type='dir'>
  <name>TestPool</name>
  <uuid>6bf80895-10b6-75a6-6059-89fdea2aefb7</uuid>
  <source>
  </source>
  <target>
    <path>/home/TestPool</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
virsh pool-create testpool.xml
virsh vol-create-as --pool TestPool temp_vol_1 \
--capacity 1048576 --allocation 1048576 --format qcow2
virsh vol-info --pool TestPool temp_vol_1
Results in listing a Capacity value. Prior to the commit, the value
would be '1.0 MiB' (1048576 bytes). However, after the commit the
output would be (for example) '192.50 KiB', which for my system was
the size of the volume in my file system (e.g. 'ls -l
TestPool/temp_vol_1' results in '197120' bytes or 192.50 KiB). While
perhaps technically correct, it's not necessarily what the user
expected (certainly virt-test didn't expect it).
This patch restores the code to not update the target capacity for
this path.
Now that we store all metadata about a storage image in a
virStorageSource struct let's use it also to store information needed by
the storage driver to access and do operations on the files.
Currently VolOpen notifies the user of a potentially non-fatal failure by
returning -2 and logging a VIR_WARN or VIR_INFO. Unfortunately most
callers treat -2 as fatal but don't actually report any message with
the error APIs.
Rename the VOL_OPEN_ERROR flag to VOL_OPEN_NOERROR. If NOERROR is specified,
we preserve the current behavior of returning -2 (there's only one caller
that wants this).
However in the default case, only return -1, and actually use the error
APIs. Fix up a couple callers as a result.
Now that each virStorageSource can track allocation information,
and given that we already have the information without extra
syscalls, it's easier to just always populate the information
directly into the struct than it is to sometimes pass the address
of the struct members down the call chain.
* src/storage/storage_backend.h (virStorageBackendUpdateVolInfo)
(virStorageBackendUpdateVolTargetInfo)
(virStorageBackendUpdateVolTargetInfoFD): Update signature.
* src/storage/storage_backend.c (virStorageBackendUpdateVolInfo)
(virStorageBackendUpdateVolTargetInfo)
(virStorageBackendUpdateVolTargetInfoFD): Always populate struct
members instead.
* src/storage/storage_backend_disk.c
(virStorageBackendDiskMakeDataVol): Update client.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget)
(virStorageBackendFileSystemRefresh)
(virStorageBackendFileSystemVolRefresh): Likewise.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
* src/storage/storage_backend_logical.c
(virStorageBackendLogicalMakeVol): Likewise.
* src/storage/storage_backend_mpath.c
(virStorageBackendMpathNewVol): Likewise.
* src/storage/storage_backend_scsi.c
(virStorageBackendSCSINewLun): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
A fairly smooth transition. And now that domain disks and
storage volumes share a common struct, it opens the doors for
a future patch to expose more details in the XML for both
objects.
* src/conf/storage_conf.h (_virStorageVolTarget): Delete.
(_virStorageVolDef): Use common type.
* src/conf/storage_conf.c (virStorageVolDefFree)
(virStorageVolTargetDefFormat): Update clients.
* src/storage/storage_backend.h: Likewise.
* src/storage/storage_backend.c
(virStorageBackendDetectBlockVolFormatFD)
(virStorageBackendUpdateVolTargetInfo)
(virStorageBackendUpdateVolTargetInfoFD): Likewise.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1072714
Use the "gluster" command line tool to retrieve information about remote
volumes on a gluster server to allow storage pool source lookup.
Unfortunately gluster doesn't provide a management library that we
could use directly; instead, the RPC calls are hardcoded in the
command line tool.
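The lookup boils down to running the CLI and parsing its XML output,
roughly (host name and exact flags illustrative):

  gluster --xml volume info all --remote-host=gluster.example.com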
Without this, using /dev/mapper as a directory pool
fails in virStorageBackendUpdateVolTargetInfoFD:
cannot seek to end of file '/dev/mapper/control': Illegal seek
Skip over character devices by default.
https://bugzilla.redhat.com/show_bug.cgi?id=710866
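A sketch of the skip, assuming it lives in
virStorageBackendUpdateVolTargetInfoFD as the message above suggests:

  struct stat sb;

  if (fstat(fd, &sb) < 0) {
      virReportSystemError(errno, _("cannot stat file '%s'"), path);
      return -1;
  }

  /* character devices such as /dev/mapper/control can't be seeked;
   * skip them instead of failing the whole pool refresh */
  if (S_ISCHR(sb.st_mode))
      return -2;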
In commit e32268184b I accidentally added a typedef for
virStorageFileBackend twice when I moved it between files across
patch iterations. The double declaration breaks the build with older
compilers on RHEL5 and FreeBSD.
Remove the spurious definition.