After a restart of libvirtd the 'checkPool' method is supposed to validate
that the pool is online. Since libvirt then refreshes the pool contents
anyway, just return whether the pool was supposed to be online so that
the rest of the code can be reached. This is necessary because a pool that
does not implement the method is automatically considered inactive.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1436065
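A minimal sketch of what such a simplified checkPool can boil down to (the signature is illustrative, not libvirt's actual backend callback):
#include <stdbool.h>

/* Illustrative only: report the pool as active when it was marked active
 * before the restart; the subsequent pool refresh validates the contents. */
static int
checkPoolSketch(bool wasActive, bool *isActive)
{
    *isActive = wasActive;
    return 0;
}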
Use the relative lookup specifier rather than the global one. Otherwise
only the first name would be looked up. Add a test case to cover the
scenario.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1436574
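Assuming the lookup in question is an XPath expression (the element name below is purely illustrative), the difference between the two specifiers is:
/* With the context node set to one particular entry in the document:
 *   "./name"  (relative)  looks up the name of this entry, while
 *   "//name"  (global)    always returns the first <name> in the whole
 *                         document, so every entry after the first one
 *                         would receive the wrong value. */
static const char *relativeLookup = "./name";
static const char *globalLookup = "//name";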
The native gluster pool source list data differs from the data used for
attaching gluster volumes as netfs pools. Currently the only difference
is the format. Since native pools don't use it, and there will be more
differences later, add a more deterministic way to switch between the
types instead.
https://bugzilla.redhat.com/show_bug.cgi?id=1371892
The 'capacity' value (i.e. the guest-visible logical size) for a LUKS
volume is smaller than the 'physical' value of the file in the file
system, so we need to account for that.
When peeking at the encryption information about the volume, add a fetch
of the payload_offset, which is described as the offset to the start of
the volume data (in 512-byte sectors) in QEMU's QCryptoBlockLUKSHeader.
Then adjust the ->capacity appropriately when we determine that the
volume target encryption has a payload_offset value.
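A minimal sketch of the adjustment described above, assuming payload_offset is given in 512-byte sectors as in QEMU's header (not libvirt's exact code):
#include <stdint.h>

/* The guest-visible capacity is the physical size minus the LUKS header
 * area that precedes the payload. */
uint64_t
luksCapacityFromPhysical(uint64_t physical, uint64_t payload_offset)
{
    uint64_t header_bytes = payload_offset * 512;

    if (header_bytes >= physical)
        return 0;  /* header claims more space than the file has */

    return physical - header_bytes;
}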
If a transient storage pool is deemed inactive after a libvirtd restart,
it would not be deleted from the list. Reuse virStoragePoolUpdateInactive,
along with the refactoring necessary to properly update the state.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1242801
After a pool is made inactive the definition objects need to be updated
(if a new definition is prepared) and transient pools need to be
completely removed. Split out the code doing these steps into a separate
function for later reuse.
When registering a storage pool backend, the code would use
virStorageTypeToString instead of virStoragePoolTypeToString. The
following message would be logged:
virDriverLoadModuleFunc:71 : Lookup function 'virStorageBackendSCSIRegister'
virStorageBackendRegister:174 : Registering storage backend '(null)'
off_t is signed and its size is the same as long only on 64-bit archs.
Thus it cannot be formatted as %lu.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
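For illustration, one portable way to print such a value is to cast it explicitly (the helper below is hypothetical; intmax_t with %jd would work as well):
#include <stdio.h>
#include <sys/types.h>

/* off_t may be only 32 bits wide and is always signed, so %lu is wrong on
 * 32-bit architectures; cast to long long and use a matching conversion. */
void
reportOffset(off_t value)
{
    fprintf(stderr, "offset is %lld\n", (long long)value);
}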
https://bugzilla.redhat.com/show_bug.cgi?id=1430679
As it turns out some file headers (e.g. ext4) may be larger than
the 512 bytes of zeros being written prior to a pvcreate, so let's write
out 2048 bytes, similar to how the pvcreate sources peek at the first
4 sectors of the device.
Make sure there are enough bytes on the device to clear before doing
the clear - just to be sure.
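A sketch of that approach, with the device handling simplified compared to libvirt's code (the helper name is illustrative):
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define PV_ZERO_BYTES 2048  /* the first 4 x 512-byte sectors */

/* Check that the device holds at least PV_ZERO_BYTES, then overwrite that
 * prefix with zeros so a stale filesystem header such as ext4's cannot
 * confuse pvcreate. */
int
zeroDeviceStart(const char *path)
{
    char buf[PV_ZERO_BYTES];
    int fd;

    if ((fd = open(path, O_WRONLY)) < 0)
        return -1;

    if (lseek(fd, 0, SEEK_END) < PV_ZERO_BYTES ||
        lseek(fd, 0, SEEK_SET) != 0)
        goto error;

    memset(buf, 0, sizeof(buf));
    if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
        goto error;

    return close(fd);

 error:
    close(fd);
    return -1;
}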
There is no reason for it not to be in the utils: all global symbols
in that file already have the vir* prefix, and there is no reason for it
to be part of DRIVER_SOURCES, which is just a leftover from
older days (the pre-driver-modules era, I believe).
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Use "virStoragePoolObj" as a prefix for any external API in virstorageobj.
Also a couple of functions were local to virstorageobj.c, so remove their
external defs in virstorageobj.h.
NB: The virStorageVolDef* API's won't change.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Move all the StoragePoolObj related API's into their own module,
virstorageobj, from storage_conf.
Purely code motion at this point, plus adjustments to build cleanly.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than returning true/false and having the caller check if the
vHBA was actually created, let's do that check within the CreateVport
function. That way the caller can faithfully assume success based
on a returned name and start the thread looking for the LUNs. Prior
to this change it was possible that the vHBA wasn't really created
(e.g. if the call to virVHBAGetHostByWWN returned NULL): we'd claim
success, but in reality there'd be no vHBA for the pool. This also
fixes a second, not yet seen, issue: if the nodedev was present, but
the parent by name wasn't provided (perhaps parent by wwnn/wwpn or by
fabric_name), then a failure would be returned. For this path it
shouldn't be an error - we should just be happy that something else is
managing the device and we don't have to create/delete it.
The end result is that the createVport code can now just start the
refresh thread once it gets a non-NULL name back.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Move the bulk of createVport and rename to virNodeDeviceCreateVport.
Remove the deleteVport entirely and replace with virNodeDeviceDeleteVport
Signed-off-by: John Ferlan <jferlan@redhat.com>
The function is actually in virutil.c, but prototyped in virfile.h.
This patch fixes that by renaming the function to virWaitForDevices,
adding the prototype in virutil.h and libvirt_private.syms, and then
changing the callers to use the new name.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Move the virStoragePoolSourceAdapter from storage_conf.h and rename
to virStorageAdapter.
Continue with code realignment for brevity and flow.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rework the code to use the new FCHost specific adapter structures.
Also rework the parameters to only pass what's needed and leave logic in
the caller for the adapter type and the need to call the helpers.
Signed-off-by: John Ferlan <jferlan@redhat.com>
$ virsh vol-clone /tmp/test.iso new.iso
error: Failed to clone vol from test.iso
error: internal error: Child process (/bin/qemu-img convert -f iso -O iso /tmp/test.iso /tmp/new.iso) unexpected exit status 1: qemu-img: Could not open '/tmp/test.iso': Unknown driver 'iso'
Map iso->raw before sending the format value to qemu-img
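A minimal sketch of that mapping (the helper name is illustrative, not libvirt's):
#include <string.h>

/* qemu-img has no 'iso' driver; an ISO image is just raw data to it, so map
 * the iso format name to raw before building the qemu-img command line. */
const char *
qemuImgFormatFor(const char *volFormat)
{
    if (strcmp(volFormat, "iso") == 0)
        return "raw";

    return volFormat;
}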
https://bugzilla.redhat.com/show_bug.cgi?id=972784
https://bugzilla.redhat.com/show_bug.cgi?id=1419395
The build system for libvirt correctly detects the location of blkid
using the PKG_CONFIG_PATH environment variable. The file blkid.pc states
that the include flags should be: 'Cflags: -I${includedir}/blkid' but
libvirt searches for blkid.h inside ${includedir}/blkid/blkid, which is
wrong. Until now, the compilation of libvirt succeeded by pure
luck, as it had -I/usr/include as a CFLAG. This issue was hit while
compiling libvirt on Ubuntu 16.04.2 with bare-minimum dev packages and a
custom-compiled blkid kept in a non-standard $prefix.
Signed-off-by: Nehal J Wani <nehaljw.kkd1@gmail.com>
Add a new storage driver registration function that will force the
backend code to fail if any of the storage backend modules can't be
loaded. This will make sure that they work and are present.
If driver modules are enabled, turn storage driver backends into
dynamically loadable objects. This will allow greater modularity for
binary distributions, where heavyweight dependencies such as rbd and
gluster can be avoided by selecting only a subset of drivers if the rest
is not necessary.
The storage modules are installed into 'LIBDIR/libvirt/storage-backend/'
and users can override the location by using the
'LIBVIRT_STORAGE_BACKEND_DIR' environment variable.
rpm-based distros will at this point install all the backends when the
libvirt-daemon-driver-storage package is installed.
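A rough sketch of what loading such a backend module can look like at runtime; the default directory, module file name pattern and error handling here are assumptions for illustration rather than libvirt's exact conventions:
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

typedef int (*storageBackendRegisterFn)(void);

/* Open a storage backend module from the backend directory (honouring the
 * LIBVIRT_STORAGE_BACKEND_DIR override mentioned above) and invoke the
 * registration entry point whose symbol name the caller supplies, e.g.
 * "virStorageBackendSCSIRegister". */
int
loadStorageBackendModule(const char *modname, const char *regsym)
{
    const char *dir = getenv("LIBVIRT_STORAGE_BACKEND_DIR");
    char path[1024];
    void *handle;
    storageBackendRegisterFn reg;

    if (!dir)
        dir = "/usr/lib/libvirt/storage-backend";  /* assumed LIBDIR default */

    snprintf(path, sizeof(path), "%s/libvirt_storage_backend_%s.so",
             dir, modname);

    if (!(handle = dlopen(path, RTLD_NOW | RTLD_GLOBAL))) {
        fprintf(stderr, "failed to load %s: %s\n", path, dlerror());
        return -1;
    }

    if (!(reg = (storageBackendRegisterFn)dlsym(handle, regsym))) {
        fprintf(stderr, "missing symbol %s in %s\n", regsym, path);
        dlclose(handle);
        return -1;
    }

    return reg();
}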
Add APIs that allow driver backends to be registered dynamically so that
the list of available drivers does not need to be known at compile time.
This will allow us to modularize the storage driver on runtime.
Create a virscsihost.c and place the functions there. That removes the
last #ifdef __linux__ from virutil.c.
Take the opportunity to also change the function names and, in one case,
slightly adjust the parameters.
Use the new virNodeDeviceGetParentName instead. Modify the callers to
build the node device scsi_host# name string in order to call the new
function so that proper lookup occurs.
Rather than have them mixed in with the virutil APIs, create a separate
virvhba.c module and move the vHBA related calls into there. Soon there
will be more added.
Also modify the names of the functions and some arguments to be more
indicative of what is really happening. Adjust the callers respectively.
While changing fchosttest, rename the non-descriptive test names
test1...test6 to match what each test is doing.
Right now, we use simple string comparison both on the source paths
(mount's output vs pool's source) and the target (mount's mnt_dir vs
pool's target). The problem is symlinks: mount returns symlinks in its
output, e.g. /dev/mapper/lvm_symlink, and the same goes for the pool's
source/target. In order to successfully compare these two, replace the
plain string comparison with virFileComparePaths, which resolves all
symlinks and canonicalizes the paths prior to comparison.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1417203
Signed-off-by: Erik Skultety <eskultet@redhat.com>
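The gist of that comparison, sketched with plain realpath rather than the actual virFileComparePaths implementation:
#include <limits.h>
#include <stdlib.h>
#include <string.h>

/* Canonicalize both paths so that a symlink such as /dev/mapper/lvm_symlink
 * and its target compare as equal. Returns 1 when both names resolve to the
 * same path, 0 when they do not, -1 when either path cannot be resolved. */
int
comparePathsResolved(const char *p1, const char *p2)
{
    char r1[PATH_MAX];
    char r2[PATH_MAX];

    if (!realpath(p1, r1) || !realpath(p2, r2))
        return -1;

    return strcmp(r1, r2) == 0;
}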
When an FS pool's source is already mounted on the target location,
instead of simply marking the pool as active and thus starting it, we
fail with an error stating that the source is indeed already mounted on
the target.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Commit id '5f07c3c07' broke the freebsd build in the libvirt CI test
environment because the UMOUNT was not defined unless WITH_STORAGE_FS
is defined.
So remove the virStorageBackendUmountLocal from storage_util.c,h and
restore the code back in the storage_backend_fs.c and _vstorage.c
modules.
Added create/define/etc pool operations for the vstorage backend.
Used the common/local pool API's from storage_util for operations
that are not specific to vstorage, in particular the Refresh and Delete
Pool operations as well as all the Volume operations.
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
Added general definitions for the vstorage pool backend, including
the build option to add --with-storage-vstorage checking.
In order to use vstorage as a backend for a storage pool the
vstorage tools (vstorage and vstorage-mount) need to be installed.
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
Move all the volume functions to storage_util to create local/common helpers
using the same naming syntax as the existing upload, download, and wipe
virStorageBackend*Local API's.
In the process of doing so, found more API's that can now become local
to storage_util. In order to distinguish between local/external - I
changed the names of the now local only ones from "virStorageBackend..."
to just "storageBackend..."
Signed-off-by: John Ferlan <jferlan@redhat.com>
Move some pool functions to storage_util to create local/common helpers
using the same naming syntax as the existing upload, download, and wipe
virStorageBackend*Local API's.
In the process of doing so, found a few API's that can now become local
to storage_util. In order to distinguish between local/external - I
changed the names of the now local only ones from "virStorageBackend..."
to just "storageBackend..."
Signed-off-by: John Ferlan <jferlan@redhat.com>
Just moving code around, with a minor adjustment to combine the Stop
code with the Unmount code, since all the Stop code did was call the
Unmount code.
Previous commit tried to change configure logic such that the
GLUSTER_CLI parameter would always be set:
commit 9e97c8c0f0
Author: Peter Krempa <pkrempa@redhat.com>
Date: Mon Jan 9 15:56:12 2017 +0100
storage: gluster: Remove build-time dependency on the 'gluster' cli tool
This missed the fact that the AC_PATH_PROG call was itself inside an 'if'
conditional that would not be run if with_storage_gluster was false. As
a result, GLUSTER_CLI was still only conditionally defined.
Just kill the GLUSTER_CLI parameter and AC_PATH_PROG call entirely and pass a
bare "gluster" string to virFindFileInPath instead.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The iSCSI backend driver was using code from the SCSI driver without
making sure that it's compiled in. Move the common code into
storage_util.c since it does not contain anything backend-specific.
The file backend code was mistakenly put into #if WITH_STORAGE_FS. This
is not necessary since all the backends just access files on disk, and
thus the code for WITH_STORAGE_DIR is sufficient to compile everything.
The file became a garbage dump for all kinds of utility functions over
time. Move them to a separate file so that the files can become a clean
interface for the storage backends.
https://bugzilla.redhat.com/show_bug.cgi?id=1346566
If libvirt_parthelper is erroneously told to append the partition
separator 'p' onto the generated output for a disk pool using device
mapper that has 'user_friendly_names' set to true, then the error
recovery path will fail to find the volume, resulting in the pool being
in an unusable state.
So, augment the documentation to provide a better hint that
part_separator='yes' should be set when user_friendly_names are not
being used. Additionally, once we're in the error path where the
returned name doesn't match the expected partition name, try to see
if the reason is that the 'p' was erroneously added. If so, alter
the about-to-be-removed vol->target.path so that the DiskDeleteVol
code can find the partition that was created and remove it.
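A simplified sketch of that error-path adjustment; the helper name and the exact matching rule are illustrative, not the actual libvirt code:
#include <stdio.h>
#include <string.h>

/* If device mapper reported the partition with the extra 'p' separator
 * (e.g. "mpathap1" instead of the expected "mpatha1"), rewrite the path
 * that is about to be deleted so the cleanup code can still find and
 * remove the partition that was actually created. */
int
fixupPartitionPath(char *volpath, size_t volpathlen,
                   const char *device, unsigned int partnum)
{
    char withSep[256];

    if (snprintf(withSep, sizeof(withSep), "%sp%u", device, partnum)
        >= (int)sizeof(withSep))
        return -1;

    if (strcmp(volpath, withSep) != 0 &&
        snprintf(volpath, volpathlen, "%s", withSep) >= (int)volpathlen)
        return -1;

    return 0;
}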
If the voldef type is VIR_STORAGE_VOL_BLOCK, then as long as the
format is known, let's allow the probe to happen - it gets a truer value
and the same probe/update would be allowed for the same volume defined
in a domain.
For volume processing in virStorageBackendUpdateVolTargetInfo to get
the capacity, commit id 'a760ba3a7' added the ability to probe a volume
that didn't list a target format. Unfortunately, the code used the
virStorageSource (e.g. target->type - virStorageType) rather than the
virStorageVolDef (e.g. vol->type - virStorageVolType) in order to
make the comparison. As it turns out, target->type for a volume is
not filled in at all for a voldef, as the code relies on vol->type.
Ironically, the result is that only VIR_STORAGE_VOL_BLOCK volumes would
get their capacity updated.
This patch adjusts the code to pass and check the vol->type field
instead. This way the correct comparison is made for a voldef.
Additionally, for a backingStore the 'type' field is never filled in;
however, since we know that the provided path is a location at which
the backing store can be accessed on the local filesystem, just pass
VIR_STORAGE_VOL_FILE in order to satisfy the adjusted voltype check.
Whether it's a FILE or a BLOCK only matters if we're trying to get
more data based on the target->format.
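A minimal sketch of the adjusted check, with stand-in enums in place of libvirt's virStorageVolType and virStorageFileFormat values (illustrative, not the real virStorageBackendUpdateVolTargetInfo code):
/* Stand-ins for the libvirt enum values involved. */
typedef enum { VOL_TYPE_FILE, VOL_TYPE_BLOCK } VolType;
typedef enum { FILE_FORMAT_NONE, FILE_FORMAT_RAW } FileFormat;

/* The caller now passes the volume type explicitly: vol->type for regular
 * volumes, the FILE value for backing stores. A block volume may only be
 * probed for its capacity when its format is already known. */
int
capacityProbeAllowed(VolType voltype, FileFormat format)
{
    if (voltype == VOL_TYPE_BLOCK)
        return format != FILE_FORMAT_NONE;

    return 1;
}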
The tool is used for pool discovery. Since we call an external binary we
don't really need to compile out the code that uses it. We can check
whether it exists at runtime.
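For illustration, a self-contained version of such a runtime check; libvirt itself uses virFindFileInPath, this stand-alone helper just walks $PATH:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Return 1 if an executable with the given name is found in $PATH. */
int
toolAvailable(const char *name)
{
    const char *path = getenv("PATH");
    char candidate[1024];
    char *copy, *dir, *saveptr = NULL;
    int found = 0;

    if (!path || !(copy = strdup(path)))
        return 0;

    for (dir = strtok_r(copy, ":", &saveptr); dir;
         dir = strtok_r(NULL, ":", &saveptr)) {
        snprintf(candidate, sizeof(candidate), "%s/%s", dir, name);
        if (access(candidate, X_OK) == 0) {
            found = 1;
            break;
        }
    }

    free(copy);
    return found;
}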