Extract out command line setup and run from storageBackendCreateQemuImg
as we'll need to run it twice soon.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Split up virStorageBackendCreateQemuImgCmdFromVol into two parts.
It's too long anyway and virStorageBackendCreateQemuImgCmdFromVol
should just handle the command line processing.
NB: Requires changing info.* into info->* references.
Signed-off-by: John Ferlan <jferlan@redhat.com>
The only way preallocate could be set is if the info->format was
not RAW (see storageBackendCreateQemuImgSetBacking), so let's just
extract it from the if/else surrounding the application of the
encryption options.
Signed-off-by: John Ferlan <jferlan@redhat.com>
The only way backing_fmts could be set is if the info->format was
not RAW (see storageBackendCreateQemuImgSetBacking), so let's just
extract it from the if/else surrounding the application of the
encryption options.
Signed-off-by: John Ferlan <jferlan@redhat.com>
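For illustration, a minimal generic sketch of the hoisting pattern described
in the previous two notes (names invented, not the libvirt code): since the
option can only ever be set for non-RAW formats, testing the value itself is
enough and the emission can move out of the format if/else.

    /* Illustrative sketch only: an option that can only be non-NULL for
     * non-RAW formats no longer needs to live inside the format if/else;
     * it can be appended based on the value alone. */
    #include <stdio.h>

    static void
    append_opts(const char *backing_fmt, int is_raw)
    {
        /* Format-specific handling stays in the branch... */
        if (is_raw)
            printf("raw options\n");
        else
            printf("qcow2 options\n");

        /* ...but backing_fmt is only ever set for non-RAW volumes, so
         * checking the value itself is sufficient, outside the branch. */
        if (backing_fmt)
            printf("-o backing_fmt=%s\n", backing_fmt);
    }

    int main(void)
    {
        append_opts("qcow2", 0);
        append_opts(NULL, 1);
        return 0;
    }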
Remove the "luks" distinction as the code is about to become more
generic and be able to support qcow encryption as well.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Move generation of secretPath to storageBackendGenerateSecretData
and simplify a bit since we know vol->target.encryption is set and
we have a local @enc.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than having storageBackendCreateQemuImgCheckEncryption
perform the virStorageGenerateQcowEncryption, let's just do that
earlier during storageBackendCreateQemuImg so that the check
helper is just a check helper rather than doing something different
based on whether the format uses qcow[2] or raw-based encryption.
This fixes an issue in the storageBackendResizeQemuImg processing
for qcow encryption where, if a secret was not available for a
volume, a new secret would not be generated and an error message
would be generated instead.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id 'a48c71411' altered the logic a bit, but didn't
remove an unnecessary check: info.encryption is true when
vol->target.encryption != NULL, so if we enter the if segment
with info.format == VIR_STORAGE_FILE_RAW && vol->target.encryption
!= NULL, then there's no way info.encryption could be false.
Signed-off-by: John Ferlan <jferlan@redhat.com>
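A rough sketch of the redundancy (simplified stand-ins, not the actual
libvirt structures): once the outer condition already requires
vol->target.encryption to be non-NULL, and info.encryption was derived from
that same pointer, the inner test can never be false.

    /* Approximate sketch; the struct is a simplified stand-in. */
    #include <stdbool.h>
    #include <stdio.h>

    struct info_s { int format; bool encryption; };
    #define FILE_RAW 0

    int main(void)
    {
        int dummy = 0;
        void *target_encryption = &dummy;   /* stands in for vol->target.encryption */
        struct info_s info = { FILE_RAW, target_encryption != NULL };

        if (info.format == FILE_RAW && target_encryption != NULL) {
            /* Before the cleanup an extra "if (info.encryption)" sat here,
             * which can never be false given how it was derived. */
            printf("apply raw encryption options\n");
        }
        return 0;
    }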
We have been checking whether qemu-img supports the -o compat
option by scraping the -help output.
Since we require QEMU 1.5.0 now and this option was introduced in 1.1,
assume it is supported and ditch the help parsing code along with the
extra qemu-img invocation.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
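For illustration only, a sketch of the simplified behaviour: the compat
option is emitted unconditionally for qcow2 instead of probing the
qemu-img -help output first. The argument values and the tiny builder are
invented; only the -o compat option itself comes from the note above.

    /* Sketch: build a qemu-img argument list assuming -o compat support. */
    #include <stdio.h>

    int main(void)
    {
        const char *args[] = {
            "qemu-img", "create", "-f", "qcow2",
            "-o", "compat=0.10",        /* always emitted, no -help scraping */
            "/var/lib/libvirt/images/demo.qcow2", "1G",
        };
        for (size_t i = 0; i < sizeof(args) / sizeof(args[0]); i++)
            printf("%s ", args[i]);
        printf("\n");
        return 0;
    }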
The latter is impossible to mock on platforms that use the
gnulib implementation, such as FreeBSD, while the former
doesn't suffer from this limitation.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The storage file drivers are currently loaded as a side effect of
loading the storage driver. This is a bogus dependency because the
storage file code has no interaction with the storage drivers, and
may even ultimately run in a completely separate daemon.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The storage file code needs to be run in the hypervisor drivers, while
the storage backend code needs to be run in the storage driver. Split
the source code as a preparatory step for creating separate loadable
modules.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The storage file code needs to be run in the hypervisor drivers, while
the storage backend code needs to be run in the storage driver. Split
the source code as a preparatory step for creating separate loadable
modules.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The driver.{c,h} files are primarily targeted at loading hypervisor
drivers and some helper functions in that area. It also, however,
contains a generically useful function for loading extension modules
that is called by the storage driver. Split that functionality off
into a new virmodule.{c,h} file to isolate it.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Currently the driver module loading code does not report an error if the
driver module is physically missing on disk. This is useful for distros
packaging optional pieces. When the daemons are split up into one daemon
per driver, we will expect module loading to always succeed. If a driver
is not desired, the entire daemon should not be installed.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The virFileFindResource method merely builds up the expected fully
qualified path to the resource. It does not actually check if it exists
on disk. The loadable module callers were mistakenly thinking a NULL
indicates the file doesn't exist on disk, whereas it in fact indicates
an out of memory error.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
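A generic sketch of the corrected interpretation (the helper, directory and
module names below are invented for illustration): building the path and
checking that the file exists are separate steps, so a NULL return from the
builder means an allocation failure, while absence on disk has to be tested
explicitly.

    /* Generic sketch: distinguish "could not build the path" (allocation
     * failure) from "the module file does not exist on disk". */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static char *
    build_module_path(const char *dir, const char *name)
    {
        size_t len = strlen(dir) + strlen(name) + 2;
        char *path = malloc(len);
        if (!path)
            return NULL;                 /* out of memory, not "missing" */
        snprintf(path, len, "%s/%s", dir, name);
        return path;
    }

    int main(void)
    {
        char *path = build_module_path("/usr/lib/libvirt/storage-backend",
                                       "libvirt_storage_backend_fs.so");
        if (!path)
            return 1;                    /* OOM: report, don't treat as absent */
        if (access(path, F_OK) != 0)
            printf("module %s is not installed\n", path);
        free(path);
        return 0;
    }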
Now that we've activated two hacks to prevent unloading of modules,
there is no point passing back a pointer to the loaded library handle.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Ensuring that we don't call the virDrvConnectOpen method with a NULL URI
means that the drivers can drop various checks for NULL URIs. These are
no longer needed since the probe functionality was split out.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Declare what URI schemes a driver supports in its virConnectDriver
struct. This allows us to skip trying to open the driver entirely
if the URI scheme doesn't match.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Add a localOnly flag to the virConnectDriver struct which allows a
driver to indicate whether it is local-only, or permits remote
connections. Stateful drivers running inside libvirtd are generally
local only. This allows us to remove the check for uri->server != NULL
from most drivers.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
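A combined sketch of the two checks from the previous two notes (struct and
field names are simplified stand-ins, not the real virConnectDriver layout):
a driver is skipped if the URI scheme is not in its list, or if it is marked
local-only and the URI names a remote server.

    /* Simplified stand-in for the driver matching logic; not libvirt code. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    struct fake_driver {
        const char *name;
        const char **uri_schemes;   /* NULL-terminated, hypothetical field */
        bool local_only;            /* hypothetical flag */
    };

    static bool
    driver_accepts(const struct fake_driver *drv,
                   const char *scheme, const char *server)
    {
        bool scheme_ok = false;
        for (size_t i = 0; drv->uri_schemes[i]; i++) {
            if (strcmp(drv->uri_schemes[i], scheme) == 0)
                scheme_ok = true;
        }
        if (!scheme_ok)
            return false;               /* wrong scheme: skip this driver */
        if (drv->local_only && server != NULL)
            return false;               /* remote URI for a local-only driver */
        return true;
    }

    int main(void)
    {
        const char *schemes[] = { "qemu", NULL };
        struct fake_driver qemu = { "QEMU", schemes, true };
        printf("%d %d\n",
               driver_accepts(&qemu, "qemu", NULL),
               driver_accepts(&qemu, "qemu", "example.org"));
        return 0;
    }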
This patch adds support for qcow2-formatted filesystem object storage by
instructing qemu-img to build such volumes with preallocation=falloc whenever
the storage <allocation> described in the XML matches its <capacity>. In all
other cases the filesystem-stored objects are built with preallocation=metadata.
Signed-off-by: Wim ten Have <wim.ten.have@oracle.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
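A minimal sketch of the decision described above (illustrative code, not the
patch itself): fallocate-based preallocation is requested only when the
volume's allocation equals its capacity, otherwise metadata preallocation is
used.

    /* Illustrative only: picking the qemu-img preallocation mode for a
     * qcow2 volume from the XML <capacity>/<allocation> values. */
    #include <stdio.h>

    static const char *
    pick_preallocation(unsigned long long capacity,
                       unsigned long long allocation)
    {
        return (allocation == capacity) ? "falloc" : "metadata";
    }

    int main(void)
    {
        printf("-o preallocation=%s\n", pick_preallocation(1 << 30, 1 << 30));
        printf("-o preallocation=%s\n", pick_preallocation(1 << 30, 0));
        return 0;
    }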
The storagePoolLookupByTargetPath() method in the storage driver is used
by the QEMU driver during block migration. If there's a valid use case
for this in the QEMU driver, then external apps likely have similar
needs. Exposing it in the public API removes the direct dependency from
the QEMU driver to the storage driver.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The virStorageTranslateDiskSourcePool method modifies a virDomainDiskDef
to resolve any storage pool reference. For some reason this was added
into the storage driver code, despite working entirely in terms of the
public APIs. Move it into the domain conf file and rename it to match the
object it modifies.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The QEMU driver loadable module needs to be able to resolve all ELF
symbols it references against libvirt.so. Some of its symbols can only
be resolved against the storage_driver.so loadable module which creates
a hard dependency between them. By moving the storage file backend
framework into the util directory, this gets included directly in the
libvirt.so library. The actual backend implementations are still done as
loadable modules, so this doesn't re-add deps on gluster libraries.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The storage driver backends are serving the public storage pools API,
while the storage file backends are serving the internal QEMU driver and
/ or libvirt utility code.
To prep for moving this storage file backend framework into the utility
code, split out the backend definitions.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Now that we can open connections to the secondary drivers on demand,
there is no need to pass a virConnectPtr into all the backend
functions.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Instead of passing around a virConnectPtr object, just open a connection
to the nodedev driver at time of use. Opening connections on demand will
be beneficial when the nodedev driver is in a separate daemon. It also
solves the problem that a number of callers just pass in a NULL
connection today which prevents nodedev lookup working at all.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Instead of passing around a virConnectPtr object, just open a connection
to the secret driver at time of use. Opening connections on demand will
be beneficial when the secret driver is in a separate daemon. It also
solves the problem that a number of callers just pass in a NULL
connection today which prevents secret lookup working at all.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Allow the possibility of opening a connection to only the storage
driver, by defining storage:///system and storage:///session URIs
and registering a fake hypervisor driver that supports them.
The hypervisor drivers can now directly open a storage driver
connection at time of need, instead of having to pass around a
virConnectPtr through many functions. This will facilitate the later
change to support separate daemons for each driver.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
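As a usage illustration (the URI is the one this note introduces; the
surrounding program is just an example client), a caller can now open the
storage driver directly:

    /* Example client: open only the storage driver via its dedicated URI.
     * Build with: cc demo.c $(pkg-config --cflags --libs libvirt) */
    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("storage:///system");
        if (!conn) {
            fprintf(stderr, "failed to open storage driver connection\n");
            return 1;
        }
        char *uri = virConnectGetURI(conn);
        printf("connected to %s\n", uri ? uri : "(unknown)");
        free(uri);
        virConnectClose(conn);
        return 0;
    }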
By convention the last thing in the driver.c files should be the driver
callback table and function to register it.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Commit 000e950455 tried to fix improper bracketing when refreshing disk
volume stats for a backing volume. Unfortunately the condition is still
wrong: in cases such as the backing store being inaccessible,
storageBackendUpdateVolTargetInfo returns -2 if instructed to ignore
errors, and the condition does not take this into account.
Dumping the XML of a volume which has an inaccessible backing store
would then result in:
# virsh vol-dumpxml http.img --pool default
error: An error occurred, but the cause is unknown
Properly ignore -2 for backing volumes.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1540022
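A simplified sketch of the corrected handling (the return values are from
the note; the surrounding code is invented): a -2 return means the backing
file could not be accessed while errors were to be ignored, and must not be
treated as a hard failure.

    /* Simplified sketch of handling the three possible return values:
     *   0  -> stats refreshed
     *  -1  -> real error, propagate
     *  -2  -> inaccessible but errors ignored (backing store case), skip */
    #include <stdio.h>

    static int
    update_backing_store(int rc_from_update)
    {
        if (rc_from_update == -2)
            return 0;          /* backing file unreachable: not fatal here */
        if (rc_from_update < 0)
            return -1;         /* genuine failure */
        return 0;
    }

    int main(void)
    {
        printf("%d %d %d\n",
               update_backing_store(0),
               update_backing_store(-1),
               update_backing_store(-2));
        return 0;
    }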
Alter the logic such that we only add the volume to the pool once
we've filled in all the information, and make failures go to a
common error: label. Patches to place the @vol into a few hash tables
will soon "require" that at least the keys (name, target.path, and key)
be populated with valid data.
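The ordering described above, as a generic goto-error sketch (names invented,
not the actual volume code): populate every key field first, send failures to
a single error label, and only treat the object as ready for the pool once
everything is filled in.

    /* Generic goto-error sketch; not the actual libvirt volume code. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct vol { char *name; char *path; char *key; };

    static struct vol *
    create_vol(const char *name, const char *path, const char *key)
    {
        struct vol *v = calloc(1, sizeof(*v));
        if (!v)
            return NULL;
        if (!(v->name = strdup(name)) ||
            !(v->path = strdup(path)) ||
            !(v->key = strdup(key)))
            goto error;                /* single cleanup point */
        return v;                      /* fully populated: safe to add to the pool */

     error:
        free(v->name);
        free(v->path);
        free(v->key);
        free(v);
        return NULL;
    }

    int main(void)
    {
        struct vol *v = create_vol("vol1", "/var/lib/libvirt/images/vol1", "key1");
        printf("%s\n", v ? v->name : "failed");
        return 0;
    }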
For a disk backend, the deleteVol code will clear all the
volumes in the pool and perform a pool refresh, thus
storageVolDeleteInternal should not access @voldef
after deleteVol succeeds.
After commit a693fdb, 'vol-dumpxml' lost the ability to show backingStore
information. This commit adds a volume type for files that fixes this
problem.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1529663
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Now that we have a private storage pool list, we can take the next
step and convert to using objects. In this case, we're going to use
RWLockable objects (just like every other driver) with two hash
tables for lookup by UUID or Name.
Along the way the ForEach and Search APIs will be adjusted to use
the related Hash APIs, and the various FindBy functions altered and
augmented to allow for HashLookup with and without the pool lock
already taken.
After virStoragePoolObjRemove we will need to call virObjectUnref(obj)
to indicate the caller is "done" with its reference. The
Unlock occurs during the Remove.
The NumOf, GetNames, and Export functions all have their own callback
functions to return the required data and the FindDuplicate code
can use the HashSearch function callbacks.
Commit id '5ab746b8' introduced the function as perhaps a copy
of storageVolLookupByPath; however, it did not use the @cleanpath
variable even though it called virFileSanitizePath. So in essence
the only "check" being done for failure is whether it was possible
to strdup the path.
Looking at virStoragePoolDefParseXML, one will note that the
target.path is stored using the result of virFileSanitizePath.
Therefore, this function should sanitize the input @path and use the
result as the argument to storagePoolLookupByTargetPathCallback, which
compares against stored target.path values.
Additionally, if there was an error we should use the proper error
of VIR_ERR_NO_STORAGE_POOL (instead of VIR_ERR_NO_STORAGE_VOL).
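A generic sketch of why the sanitization matters for the comparison (the
helper below is invented, not virFileSanitizePath itself): both sides of the
comparison need the same normalization, for example collapsing duplicate
slashes and dropping a trailing slash.

    /* Generic path normalization sketch: collapse "//" runs and strip a
     * trailing slash so "/var//lib/" compares equal to "/var/lib". */
    #include <stdio.h>

    static void
    sanitize_path(char *path)
    {
        char *src = path, *dst = path;
        while (*src) {
            *dst++ = *src;
            if (*src == '/')
                while (*src == '/')
                    src++;
            else
                src++;
        }
        if (dst > path + 1 && dst[-1] == '/')
            dst--;                      /* drop trailing slash, keep "/" itself */
        *dst = '\0';
    }

    int main(void)
    {
        char path[] = "/var//lib/libvirt//images/";
        sanitize_path(path);
        printf("%s\n", path);           /* /var/lib/libvirt/images */
        return 0;
    }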
virStorageFileReportBrokenChain uses data from the driver private data
pointer to print the user and group. This would lead to a crash in call
paths where we did not initialize the storage backend as recently added
in commit 24e47ee2b9 to qemuDomainDetermineDiskChain.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1522682
Commit id '5d5c732d7' had an incorrect assignment, which was found
by a Travis build:
storage/storage_driver.c:1668:14: error: equality comparison with extraneous
parentheses [-Werror,-Wparentheses-equality]
if ((obj == virStoragePoolObjListSearch(&driver->pools,
~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~