While trying to build with -Os, a couple of compile errors showed
up.
conf/domain_conf.c: In function 'virDomainChrRemove':
conf/domain_conf.c:13666:24: error: 'ret' may be used uninitialized in this function [-Werror=maybe-uninitialized]
virDomainChrDefPtr ret, **arrPtr = NULL;
^
The compiler fails to see that @ret is used only if set in the loop,
but whatever, there's no harm in initializing the variable.
In vboxAttachDrivesNew and _vboxAttachDrivesOld the compiler thinks
that @rc may be used uninitialized. Well, not directly, but maybe
after some optimization. Yet again, there's no harm in initializing
the variable.
In file included from ./util/virthread.h:26:0,
from ./datatypes.h:28,
from vbox/vbox_tmpl.c:43,
from vbox/vbox_V3_1.c:37:
vbox/vbox_tmpl.c: In function '_vboxAttachDrivesOld':
./util/virerror.h:181:5: error: 'rc' may be used uninitialized in this function [-Werror=maybe-uninitialized]
virReportErrorHelper(VIR_FROM_THIS, code, __FILE__, \
^
In file included from vbox/vbox_V3_1.c:37:0:
vbox/vbox_tmpl.c:1041:14: note: 'rc' was declared here
nsresult rc;
^
Yet again, one uninitialized variable:
qemu/qemu_driver.c: In function 'qemuDomainBlockCommit':
qemu/qemu_driver.c:17194:9: error: 'baseSource' may be used uninitialized in this function [-Werror=maybe-uninitialized]
qemuDomainPrepareDiskChainElement(driver, vm, baseSource,
^
And another one:
storage/storage_backend_logical.c: In function 'virStorageBackendLogicalMatchPoolSource.isra.2':
storage/storage_backend_logical.c:618:33: error: 'thisSource' may be used uninitialized in this function [-Werror=maybe-uninitialized]
thisSource->devices[j].path))
^
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Found by inspection - after calling virStoragePoolObjAssignDef the
pool is part of the driver->pools.objs list and the failure path
for virStoragePoolObjSaveDef will use virStoragePoolObjRemove
to remove the pool from the objs list which will unlock and free
the pool pointer (as pools->objs[i] during the loop). Since the call
doesn't clear the pool address from the callee, we need to set it
to NULL; otherwise, the virStoragePoolObjUnlock in the cleanup: code
will fail miserably.
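A minimal sketch of the idea (surrounding code elided; 'pool' and 'driver' as in the storage driver):

  /* failure path: ObjRemove unlocks and frees the pool object */
  virStoragePoolObjRemove(&driver->pools, pool);
  pool = NULL;              /* the callee cannot clear our local pointer */
  goto cleanup;
  ...
 cleanup:
  if (pool)
      virStoragePoolObjUnlock(pool);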
Generates a false positive for Coverity, but it turns out there's no need
to check ret == -1 since if VIR_APPEND_ELEMENT is successful, the local
vol pointer is cleared anyway.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Found by my Coverity checker - the virCheckFlags call could return -1
without calling virCommandFree(destroy_cmd).
Signed-off-by: John Ferlan <jferlan@redhat.com>
%zu is not always synonymous with uint64_t; on 32-bit machines,
size_t is only 32 bits. Prefer "%lld"/'unsigned long long' when
the variable is under our control, and "%"PRIu64 when we are
stuck with 'uint64_t' from RBD.
Fixes errors such as:
../../src/storage/storage_backend_rbd.c: In function 'virStorageBackendRBDVolWipe':
../../src/storage/storage_backend_rbd.c:1281:15: error: format '%zu' expects argument of type 'size_t', but argument 8 has type 'uint64_t {aka long long unsigned int}' [-Werror=format=]
VIR_DEBUG("Need to wipe %zu bytes from RBD image %s/%s",
^
../../src/util/virlog.h:90:73: note: in definition of macro 'VIR_DEBUG_INT'
virLogMessage(src, VIR_LOG_DEBUG, filename, linenr, funcname, NULL, __VA_ARGS__)
^
../../src/storage/storage_backend_rbd.c:1281:5: note: in expansion of macro 'VIR_DEBUG'
VIR_DEBUG("Need to wipe %zu bytes from RBD image %s/%s",
^
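For illustration only, a self-contained example of the portable format strings (not code from the patch itself):

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint64_t rbd_size = 4194304;              /* e.g. from rbd_image_info_t */
      unsigned long long capacity = 1ULL << 30; /* a value under our control */

      /* "%zu" would be wrong for either of these on 32-bit hosts */
      printf("wiping %" PRIu64 " bytes, capacity %llu\n", rbd_size, capacity);
      return 0;
  }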
Signed-off-by: Eric Blake <eblake@redhat.com>
Checking whether x > 0 before looping over [0..x] items doesn't make
sense, and a multi-line body must have curly brackets around it.
Best viewed with '-w'.
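Roughly the kind of cleanup being done (illustrative names, not the actual hunk):

  /* before: the guard is redundant */
  if (nitems > 0) {
      for (i = 0; i < nitems; i++)
          process(items[i]);
  }

  /* after: the loop condition already handles nitems == 0 */
  for (i = 0; i < nitems; i++)
      process(items[i]);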
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
This reverts commit 611a278fa4.
According to the original commit message, this is dead code:
It is highly unlikely that a backend will know how to create a
volume from a different volume (buildVolFrom) and not know how to
create an empty volume (createVol).
Since Ceph version Infernalis (9.2.0) the new fast-diff mechanism
of RBD allows for querying actual volume usage.
Prior to this version there was no easy and fast way to query how
much allocation a RBD volume had inside a Ceph cluster.
To use the fast-diff feature it needs to be enabled per RBD image,
and it is only supported by Ceph clusters running version Infernalis
(9.2.0) or newer.
Without the fast-diff feature enabled libvirt will report an allocation
identical to the image capacity. This is how libvirt behaves currently.
'virsh vol-info rbd/image2' might output for example:
Name: image2
Type: network
Capacity: 1,00 GiB
Allocation: 124,00 MiB
Newly created volumes will have the fast-diff feature enabled if the
backing Ceph cluster supports it.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
In commit 0b15f920 there is a #ifdef which requires LIBRBD_VERSION_CODE
266 or newer for rbd_diff_iterate2().
rbd_diff_iterate2() is available since 266, so this if-statement should
require anything newer than 265.
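The corrected guard then looks roughly like this (LIBRBD_VERSION_CODE is librbd's):

  #if LIBRBD_VERSION_CODE > 265
      /* rbd_diff_iterate2() is available (librbd >= 266) */
  #else
      /* fall back to rbd_diff_iterate() */
  #endif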
Signed-off-by: Wido den Hollander <wido@widodh.nl>
As more and more features are added to RBD volumes we will need to
call this method more often.
By moving it into an internal function we can re-use code inside the
storage backend.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
It is highly unlikely that a backend will know how to create a
volume from a different volume (buildVolFrom) and not know how to
create an empty volume (createVol). But:
1) we call the function without any prior check, so if that's the
case we would SIGSEGV immediately;
2) it's better to be safe than sorry.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Firstly, we realloc the internal list to hold the new item (= the
volume that will potentially be created) and only then do we check
whether we actually know how to create it. If we don't, we consume
more memory than we really need for no good reason.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Do not store the return value of called functions in the same variable
as the (future) return value of the current function.
This makes tracking the origin of the value easier and reduces
the chance of introducing a new point of exit without resetting
the return value back to -1.
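The pattern, sketched with hypothetical helpers:

  static int
  frobnicate(void)
  {
      int ret = -1;          /* the function's own (future) return value */
      int rc;                /* scratch variable for callee results */

      rc = do_step();        /* hypothetical helper */
      if (rc < 0)
          goto cleanup;      /* 'ret' is untouched and still -1 */

      ret = 0;
   cleanup:
      return ret;
  }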
The virStringListLength function does not ever modify the passed
string list. It merely counts the items in it. Make sure that we
reflect this bit in the function header.
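The gist of the change is const-qualifying the parameter, along these lines:

  /* the list is only read, never modified */
  size_t virStringListLength(const char * const *strings);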
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(crobinso: fix up spacing and squash in sheepdog bit suggested
by Andrea)
This was only used in debugging messages and not in any real code.
Ceph/RBD uses uint64_t for sizes internally and they can be printed
with %zu without any need for casting.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Through the years the RBD storage pool code hasn't maintained the
coding standards which apply to libvirt.
This patch doesn't change any logic in the code; it only applies
the proper coding standards to the code where possible without
making large changes.
This way the code style used in this storage pool is consistent
throughout the whole file.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Rather than have an unwieldy regex string, split it up into its components,
each having its own #define, and then combine them in a different #define.
Signed-off-by: John Ferlan <jferlan@redhat.com>
There are slight differences in various ZFS implementations.
Specifically, ZFS on FreeBSD requires the 'volmode' option to be set
to 'dev' to expose volumes as raw disk devices (that's what
we need) rather than geom providers, for example.
With ZFS on Linux, however, such an option is not available and
volumes are exposed the way we need by default.
To make our implementation more flexible, only pass 'volmode'
when it's supported. Support is checked by parsing usage
information of the 'zfs get' command.
Rather than a loop reallocating space to build the regex, just allocate
it once up front, then if there's more than 1 nextent, append a comma and
another regex_unit string.
Signed-off-by: John Ferlan <jferlan@redhat.com>
The 'stripes' value is described as the "Number of stripes or mirrors in
a logical volume". So add "mirror" and anything that starts with "raid"
to the list of segtypes that can have an 'nextents' value greater than one.
Use of raid segtypes (raid1, raid4, raid5*, raid6*, and raid10) is favored
over mirror in more recent lvm code.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than preallocating a set number of elements, then walking through
the extents and adjusting the specific element in place, use the APPEND
macros to handle that chore.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Create a helper routine in order to parse any extents information,
including the extent size, length, and the device string contained
within the generated 'lvs' output string.
A future patch would then be able to avoid this code more cleanly.
Signed-off-by: John Ferlan <jferlan@redhat.com>
By opening an RBD volume in Read-Only mode we do not register a
watcher on the header object inside the Ceph cluster.
Refreshing a volume only calls rbd_stat(), which is an operation
that does not write to an RBD image.
This allows us to use a cephx user which has no write
permissions if we would want to use the libvirt storage pool
for informational purposes only.
It also saves us a write into the Ceph cluster which should
speed up refreshing a RBD pool.
rbd_open_read_only() is available in all librbd versions which
also support rbd_open().
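A minimal sketch of the read-only refresh path, assuming an established rados_ioctx_t 'ioctx' and eliding error reporting:

  rbd_image_t image = NULL;
  rbd_image_info_t info;

  if (rbd_open_read_only(ioctx, volname, &image, NULL) == 0) {
      rbd_stat(image, &info, sizeof(info));  /* read-only operation */
      rbd_close(image);
  }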
Signed-off-by: Wido den Hollander <wido@widodh.nl>
RBD supports cloning by creating a snapshot, protecting it, and creating
a child image based on that snapshot afterwards.
The RBD storage driver will try to find a snapshot with zero deltas between
the current state of the original volume and the snapshot.
If such a snapshot is found, a clone/child image will be created using
the rbd_clone2() function from librbd.
rbd_clone2() is available in librbd since Ceph version Dumpling (0.67) which
dates back to August 2013.
It will use the same features, strip size and stripe count as the parent image.
This implementation will only create a single snapshot on the parent image if
it never changes. This reduces the number of snapshots created for that RBD
image, which benefits the performance of the Ceph cluster.
During build the decision will be made to use either rbd_diff_iterate() or
rbd_diff_iterate2().
The latter is faster, but only available on Ceph versions after 0.94 (Hammer).
Cloning is only supported if RBD format 2 is used. All images created by libvirt
are already format 2.
If a RBD format 1 image is used as the original volume the backend will report
a VIR_ERR_OPERATION_UNSUPPORTED error.
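The clone sequence looks roughly like this (snapshot and image names are illustrative; real code checks every return value and copies features/striping from the parent):

  rbd_snap_create(parent, "libvirt-snap");    /* snapshot the parent */
  rbd_snap_protect(parent, "libvirt-snap");   /* clones require protection */
  rbd_clone2(ioctx, "parent-vol", "libvirt-snap",
             ioctx, "child-vol",
             features, &order, stripe_unit, stripe_count);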
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Using VIR_STORAGE_VOL_WIPE_ALG_TRIM an RBD volume can be trimmed down
to 0 bytes using rbd_discard().
Effectively all the data on the volume will be lost/gone, but the volume
remains available for use afterwards.
Starting at offset 0 the storage pool will call rbd_discard() in increments
of stripe size * stripe count, which is usually 4MB (a stripe size of 4MB
and a count of 1).
rbd_discard() is available since Ceph version Dumpling (0.67) which dates
back to August 2013.
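Sketched out, the wipe loop is something like (names illustrative):

  uint64_t offset = 0;
  uint64_t stripe = 4 * 1024 * 1024;   /* stripe size * stripe count */

  while (offset < length) {
      uint64_t chunk = stripe < length - offset ? stripe : length - offset;
      if (rbd_discard(image, offset, chunk) < 0)
          break;                       /* real code reports the error */
      offset += chunk;
  }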
Signed-off-by: Wido den Hollander <wido@widodh.nl>
This new algorithm adds support for wiping volumes using TRIM.
It does not overwrite all the data in a volume, but it tells the
backing storage pool/driver that all bytes in a volume can be
discarded.
It depends on the backing storage pool how this is handled.
A SCSI backend might send UNMAP commands to remove all data present
on a LUN.
A Ceph backend might use rbd_discard() to instruct the Ceph cluster
that all data on that RBD volume can be discarded.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
When wiping, the RBD image will be filled with zeros starting
at offset 0 and running until the end of the volume.
This will result in the RBD volume growing to its full allocation
on the Ceph cluster. All data on the volume will be overwritten,
however, making it unavailable.
It does NOT take any RBD snapshots into account. The original data
might still be in a snapshot of that RBD volume.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Use a cast to (virStorageVolWipeAlgorithm) and add the missing case:'s
(VIR_STORAGE_VOL_WIPE_ALG_ZERO and VIR_STORAGE_VOL_WIPE_ALG_LAST).
Additionally, the old code would also still run the SCRUB command in the
default case since it didn't go to cleanup when an invalid flag was supplied.
We now go to cleanup and exit if an invalid flag is provided.
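The shape of the change, with the other algorithms elided; the cast lets the compiler warn about any enum value missing a case:

  switch ((virStorageVolWipeAlgorithm) algorithm) {
  case VIR_STORAGE_VOL_WIPE_ALG_ZERO:
      /* zero-fill path */
      break;
  case VIR_STORAGE_VOL_WIPE_ALG_TRIM:
      /* rbd_discard() path */
      break;
  case VIR_STORAGE_VOL_WIPE_ALG_LAST:
  default:
      /* report the unsupported algorithm */
      goto cleanup;
  }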
Signed-off-by: Wido den Hollander <wido@widodh.nl>
When commit id '82c1740a' made changes to the output format (changing from
using a ',' separator to '#'), the examples in the lvs output in the
comments weren't changed.
Additionally, the two new fields added ('segtype' and 'stripes') were
not included in the output, leaving it rather confusing.
This patch fixes the sample output, adds a 'striped' example, and makes
other comment-related adjustments for long lines and spacing between
follow-up 'NB' remarks (while I'm there).
Signed-off-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1265694
In order to be able to process disk storage pools using a multipath
device to handle the partitions, libvirt_parthelper will need a way to
not automatically add a partition separator "p" to the generated device
name for each partition found. This is designed to mimic the multipath
features known as 'user_friendly_names' and custom 'alias' names.
If the part_separator attribute is set to "no", then generation of the
multipath partition name will not include the "p" partition separator
unless the source device path name ends with a number. The generated
partition names that get passed back to libvirt are processed in order
to find the device mapper multipath (dm-#) path device.
For example, device path "/dev/mapper/mpatha" would create partitions
"/dev/mapper/mpatha1", "/dev/mapper/mpatha2", etc. instead of
"/dev/mapper/mpathap1", "/dev/mapper/mpathap2", etc. If the device
path ends with a number "/dev/mapper/mpatha1", then the algorithm
to generate names "/dev/mapper/mpatha1p1", "/dev/mapper/mpatha1p2", etc.
would be utilized.
Signed-off-by: John Ferlan <jferlan@redhat.com>
This was reported in bug #1298024, where 'r' would be filled with the
return code of rbd_open().
Should rbd_snap_unprotect() fail for any reason, the virReportSystemError
call would report 'Success' since rbd_open() succeeded.
https://bugzilla.redhat.com/show_bug.cgi?id=1298024
Signed-off-by: Wido den Hollander <wido@widodh.nl>
If no port number was provided for a storage pool libvirt defaults to
port 6789; however, librbd/librados already default to 6789 when no port
number is provided.
In the future Ceph will switch to a new port for the Ceph monitors since
port 6789 is already assigned to a different application by IANA.
Port 6789 is assigned to SMC-HTTPS and Ceph now has port 3300 assigned as
the 'Ceph monitor' port.
In this case the best solution is to not hardcode any port number into
libvirt and let librados handle the connection.
Only if a user specifies a different port number do we pass it down to
librados; otherwise we leave it blank.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
It could happen that rbd_list() returns X names, but that while
refreshing the pool one of those RBD images is removed from Ceph
through a different route than libvirt.
We do not need to error out in such a case; we can simply ignore the
volume and continue.
error : volStorageBackendRBDRefreshVolInfo:289 :
failed to open the RBD image 'vol-998': No such file or directory
It could also be that one or more Placement Groups (PGs) inside Ceph
are inactive due to a system failure.
If that happens it could be that some RBD images can not be refreshed
and a timeout will be raised by librados.
error : volStorageBackendRBDRefreshVolInfo:289 :
failed to open the RBD image 'vol-893': Connection timed out
Ignore the error and continue to refresh the rest of the pool's
contents.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
It could be that we error out while the RBD image has not been
opened yet. This would cause us to call rbd_close() on a pointer
which has not been initialized.
Set it to NULL by default and only close if it is not NULL.
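In outline:

  rbd_image_t image = NULL;

  if (rbd_open(ioctx, volname, &image, NULL) < 0)
      goto cleanup;
  ...
 cleanup:
  if (image)
      rbd_close(image);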
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Commit id 'aeb1078ab' added a buildPool option and failure path which
calls virStoragePoolObjRemove, which unlocks the pool, clears the 'pool'
variable, and does a goto cleanup. However, at cleanup virStoragePoolObjUnlock
is called without checking whether pool is non-NULL.
This used to return 'unknown' and that was not correct.
A vol-dumpxml now returns:
<volume type='network'>
<name>image3</name>
<key>libvirt/image3</key>
<source>
</source>
<capacity unit='bytes'>10737418240</capacity>
<allocation unit='bytes'>10737418240</allocation>
<target>
<path>libvirt/image3</path>
<format type='raw'/>
</target>
</volume>
The RBD driver will now error out if a different format than RAW
is provided when creating a volume.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Valgrind complained:
==28277== 38 bytes in 1 blocks are definitely lost in loss record 298 of 957
==28277== at 0x4A06A2E: malloc (vg_replace_malloc.c:270)
==28277== by 0x82D7F57: __vasprintf_chk (in /lib64/libc-2.12.so)
==28277== by 0x52EF16A: virVasprintfInternal (stdio2.h:199)
==28277== by 0x52EF25C: virAsprintfInternal (virstring.c:514)
==28277== by 0x52B1FA9: virFileBuildPath (virfile.c:2831)
==28277== by 0x19B1947C: storageDriverAutostart (storage_driver.c:191)
==28277== by 0x19B196A7: storageStateAutoStart (storage_driver.c:307)
==28277== by 0x538527E: virStateInitialize (libvirt.c:793)
==28277== by 0x11D7CF: daemonRunStateInit (libvirtd.c:947)
==28277== by 0x52F4694: virThreadHelper (virthread.c:206)
==28277== by 0x6E08A50: start_thread (in /lib64/libpthread-2.12.so)
==28277== by 0x82BE93C: clone (in /lib64/libc-2.12.so)
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
The initial commit '74951eade' did not include the proper check for whether
any flags are supported by the driver.
Even though the driver doesn't support VIR_STORAGE_VOL_DELETE_ZEROED,
it still checks and allows the processing to continue.
Also add the new VIR_STORAGE_VOL_DELETE_WITH_SNAPSHOTS since it is handled
as of commit id '3c7590e0a'.
https://bugzilla.redhat.com/show_bug.cgi?id=830056
Add flags handling to the virStoragePoolCreate and virStoragePoolCreateXML
API's which will allow the caller to provide the capability for the storage
pool create API's to also perform a pool build during creation rather than
requiring the additional buildPool step. This will allow transient pools
to be defined, built, and started.
The new flags are:
* VIR_STORAGE_POOL_CREATE_WITH_BUILD
Perform buildPool without any flags passed.
* VIR_STORAGE_POOL_CREATE_WITH_BUILD_OVERWRITE
Perform buildPool using VIR_STORAGE_POOL_BUILD_OVERWRITE flag.
* VIR_STORAGE_POOL_CREATE_WITH_BUILD_NO_OVERWRITE
Perform buildPool using VIR_STORAGE_POOL_BUILD_NO_OVERWRITE flag.
It is up to the backend to handle the processing of build flags. The
overwrite and no-overwrite flags are mutually exclusive.
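A sketch of the mutual-exclusion check (the exact error message is illustrative):

  if ((flags & VIR_STORAGE_POOL_CREATE_WITH_BUILD_OVERWRITE) &&
      (flags & VIR_STORAGE_POOL_CREATE_WITH_BUILD_NO_OVERWRITE)) {
      virReportError(VIR_ERR_INVALID_ARG, "%s",
                     _("overwrite and no-overwrite flags are mutually exclusive"));
      return NULL;
  }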
NB:
This patch is loosely based upon code originally authored by Osier
Yang that was not reviewed and pushed, see:
https://www.redhat.com/archives/libvir-list/2012-July/msg01328.html
Commit id '71b803ac' assumed that the storage pool source device path
was required for a 'logical' pool. This resulted in a failure to start
a pool without any device path defined.
So, adjust the virStorageBackendLogicalMatchPoolSource logic to
return success if at least the pool name matches the vgs output
when no pool source device path is provided.
https://bugzilla.redhat.com/show_bug.cgi?id=1270709
When a volume wipe is successful, perform a volume refresh afterwards to
update any volume data that may be used in future volume commands, such as
volume resize. For a raw file volume, a wipe could truncate the file, and
a followup volume resize of the capacity may fail because the volume target
allocation isn't updated to reflect the wipe activity.
The only caller always passes 0 for the extent start.
Drop the 'extent_start' parameter, as well as the mention of extents
from the function name.
Change off_t extent_length to unsigned long long wipe_len, as well as the
'remain' variable.
Return -1:
* on all failures of fdatasync. Instead of propagating -errno
all the way up to the virStorageVolWipe API, which is documented
to return 0 or -1.
* after a partial wipe. If safewrite failed, we would re-use the
non-negative return value of lseek (which should be 0 in this case,
because that's the only offset we seek to).
https://bugzilla.redhat.com/show_bug.cgi?id=1025230
Add a new helper virStorageBackendLogicalMatchPoolSource to compare the
pool's source name against the output from a 'pvs' command to list all
volume group physical volume data on the host. In addition, compare the
pool's source device list against the particular volume group's device
list to ensure the source device(s) listed for the pool match what was
listed for the volume group.
Then for pool startup or check API's we need to call this new API in
order to ensure that the pool we're about to start or declare active
during checkPool has a valid definition vs. the running host.
Rework virStorageBackendLogicalFindPoolSources a bit to create a
helper virStorageBackendLogicalGetPoolSources that will make the
pvs call in order to generate a list of associated pv_name and vg_name's.
A future patch will make use of this for start/check processing to
ensure the storage pool source definition matches expectations.
https://bugzilla.redhat.com/show_bug.cgi?id=1025230
When determining whether a FS pool is mounted, rather than assuming that
the FS pool is mounted just because the target.path is in the mount list,
let's make sure that the FS pool source matches what is mounted
Refactor the code that builds the pool source string during the FS
storage pool mount to be a separate helper.
A future patch will use the helper in order to validate the mounted
FS matches the pool's expectation during poolCheck processing
The libvirt file system storage driver determines what file to
act on by concatenating the pool location with the volume name.
If a user is able to pick names like "../../../etc/passwd", then
they can escape the bounds of the pool. For that matter,
virStoragePoolListVolumes() doesn't descend into subdirectories,
so a user really shouldn't use a name with a slash.
Normally, only privileged users can coerce libvirt into creating
or opening existing files using the virStorageVol APIs; and such
users already have full privilege to create any domain XML (so it
is not an escalation of privilege). But in the case of
fine-grained ACLs, it is feasible that a user can be granted
storage_vol:create but not domain:write, and it violates
assumptions if such a user can abuse libvirt to access files
outside of the storage pool.
Therefore, prevent all use of volume names that contain "/",
whether or not such a name is actually attempting to escape the
pool.
This changes things from:
$ virsh vol-create-as default ../../../../../../etc/haha --capacity 128
Vol ../../../../../../etc/haha created
$ rm /etc/haha
to:
$ virsh vol-create-as default ../../../../../../etc/haha --capacity 128
error: Failed to create vol ../../../../../../etc/haha
error: Requested operation is not valid: volume name '../../../../../../etc/haha' cannot contain '/'
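The check itself amounts to something like this at volume definition time (message wording illustrative):

  if (strchr(def->name, '/')) {
      virReportError(VIR_ERR_OPERATION_INVALID,
                     _("volume name '%s' cannot contain '/'"), def->name);
      goto cleanup;
  }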
Signed-off-by: Eric Blake <eblake@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1276198
Prior to commit id '98322052' failure to saferead the block device would
cause an error to be logged and the device to be skipped while attempting
to discover/create a stable target path for a new LUN (NPIV).
This was because virStorageBackendSCSIFindLUs ignored errors from
processLU and virStorageBackendSCSINewLun.
Ignoring the failure allowed a multipath device with an "active" and a
"ghost" path to be present on the host, with the "ghost" block device being
ignored. This patch will return a -2 to the caller indicating the desire
to ignore the block device since it cannot be used directly rather than
fail the pool startup.
I found this useful while processing a volume that wouldn't end up
showing up in the resulting list of block volumes. In this case, the
partition type wasn't found in the disk_types table.
Similar to the openflags VIR_STORAGE_VOL_OPEN_NOERROR processing, if some
read processing operation fails, check the readflags for the corresponding
error flag being set. If so, rather than causing an error - use VIR_WARN
to flag the error, but return -2 which some callers can use to perform
specific actions. Use a new VIR_STORAGE_VOL_READ_NOERROR flag in a new
VolReadErrorMode enum.
While processing the volume for lseek, virFileReadHeaderFD, and
virStorageFileGetMetadataFromBuf, a failure would cause an error,
but ret would not be set. That would result in an error message being
sent, but successful status being returned.
Just so it's clearer what to expect upon input and what types of return
values could be generated. These were loosely copied from existing
virStorageBackendUpdateVolTargetInfoFD.
Similar to the openflags which allow VIR_STORAGE_VOL_OPEN_NOERROR to be
passed to avoid open errors, add a 'readflags' variable so that in the
future read failures could also be ignored.
https://bugzilla.redhat.com/show_bug.cgi?id=1282288
Rather than using just open on the path, allow for the possibility that
the path to be opened resides on an NFS root-squash target and was created
under a different uid/gid.
Without using virFileOpenAs an attempt to get the volume size data may fail
if the current user doesn't have permissions to read the volume, such as
would be the case if mode wasn't supplied in the volume XML and the default
VIR_STORAGE_DEFAULT_VOL_PERM_MODE (e.g. 0600) was used. Under this scenario
the owner/group is not root:root, thus this path run under root would fail
to open/read the volume.
NB: The virFileOpenAs code using OPEN_FORK will only work when the failure
is not EACCES/EPERM and the path resolves to a shared file system.
https://bugzilla.redhat.com/show_bug.cgi?id=1282288
Although commit id '77346f27' resolves part of the problem regarding creating
a qemu-img image in an NFS root-squash environment, it didn't really fix the
entire problem; unfortunately it only masked it. It seems qemu-img
must open/create the image using 0644, which if used by target.perms would
result in the chmod not being called since the desired and the set mode match.
Although qemu-img could conceivably ignore the mode when creating, libvirt
has more knowledge of the environment and can make the adjustment to the
mode far more easily by using virFileOpenAs with VIR_FILE_OPEN_FORCE_MODE.
If that's successful, then we know on return the file will have the right
owner and mode, so we can declare success
https://bugzilla.redhat.com/show_bug.cgi?id=1277781
The virStoragePoolFCRefreshThread had passed a pointer to the pool obj
in the virStoragePoolFCRefreshInfoPtr; however, we cannot assume that
the pool still exists since we don't keep the pool lock throughout
the duration of the thread.
Therefore, instead of passing the pool obj pointer, pass the UUID of
the pool and perform a lookup. If found, then we can perform the
refresh using the locked pool obj pointer; otherwise, we just exit
the thread since the pool is now gone.
https://bugzilla.redhat.com/show_bug.cgi?id=1233003
Commit id 'fdda3760' only managed a symptom where it was possible to
create a file in a pool without libvirt's knowledge, so it was reverted.
The real fix is to have all the createVol API's which actually create
a volume (disk, logical, zfs) and the buildVol API's which handle the
real creation of some volume file (fs, rbd, sheepdog) manage deleting
any volume which they create when there is some sort of error in
processing the volume.
This way the onus isn't left up to the storage_driver to determine whether
the buildVol failure was due to some failure as a result of adjustments
made to the volume after creation, such as getting sizes, changing ownership,
changing volume protections, etc., or simply a failure in creation.
Without needing to consider that the volume has to be removed, the
buildVol failure path only needs to remove the volume from the pool.
This way if a creation failed due to duplicate name, libvirt wouldn't
remove a volume that it didn't create in the pool target.
This reverts commit fdda37608a.
This commit only manages a symptom of finding a buildRet failure
where a volume was not listed in the pool, but someone created the
volume outside of libvirt in the pool being managed by libvirt.
After successfully returning from virFileOpenAs, if subsequent calls fail,
then we need to remove the file since our caller expects that failures after
creation will remove the created file.
After a successful qemu-img/qcow-create of the backing file, if we
fail to stat the file, change it owner/group, or mode, then the
cleanup path should remove the file.
Currently the code does not handle the NFS root squash environment
properly since if the file gets created, then the subsequent chmod
will fail in a root squash environment where we're creating a file
in the pool with qemu tools, such as seen via:
$ virsh vol-create-from $pool $file.xml file.img --inputpool $pool
assuming $file.xml is creating a file of "<format type='qcow2'"> from
an existing file.img in the pool of "<format type='raw'>".
This patch will utilize the virCommandSetUmask when creating the file
in the NETFS pool. The virCommandSetUmask API was added in commit id
'0e1a1a8c4', which was after the original code was developed in commit
id 'e1f27784' to attempt to handle the root squash environment.
Also, rather than blindly attempting to chmod, check to see if the
st_mode bits from the stat match what we're trying to set and only
make the chmod if they don't.
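Roughly (error reporting elided; the 0777 masking is illustrative):

  struct stat st;

  if (stat(path, &st) == 0 &&
      (st.st_mode & 0777) != (mode & 0777)) {
      if (chmod(path, mode) < 0)
          goto cleanup;
  }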
Also, a slight adjustment to the fallback algorithm to move the
virCommandSetUID/virCommandSetGID inside the if (!filecreated) since
they're only useful if we need to attempt to create the file again.
When an RBD volume has snapshots it cannot be removed.
This patch introduces a new flag to force volume removal,
VIR_STORAGE_VOL_DELETE_WITH_SNAPSHOTS.
With this flag any existing snapshots will be removed prior to
removing the volume.
No existing mechanism in libvirt allowed us to pass such information,
so that's why a new flag was introduced.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
We have macros for both positive and negative string matching.
Therefore there is no need to use !STREQ or !STRNEQ. At the same
time as we are dropping this, a new syntax-check rule is
introduced to make sure we won't introduce it again.
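That is, prefer the dedicated macro:

  if (STRNEQ(a, b))        /* rather than: if (!STREQ(a, b)) */
      handle_mismatch();   /* hypothetical handler */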
Signed-off-by: Ishmanpreet Kaur Khera <khera.ishman@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1233003
Track when the logical volume was successfully created in order to
properly handle the call to virStorageBackendLogicalDeleteVol. It's
possible that the failure to create was because someone created an
LV in the pool outside of libvirt's knowledge. In this case, we don't
want to delete that LV. A subsequent or future refresh of the pool
will find the volume and cause an earlier failure
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id '1b5685da' refactored the code to move buildvoldef inside
the buildVol conditional; however, the VIR_FREE of the memory was
done only when 'buildret' failed, thus we're leaking memory on success.
Signed-off-by: John Ferlan <jferlan@redhat.com>
As of commit id '155ca616' a 'refreshVol' is called after a buildVol
succeeds in storageVolCreateXML, thus a volStorageBackendSheepdogRefreshVolInfo
call in virStorageBackendSheepdogBuildVol is no longer necessary.
Additionally, the 'conn' parameter becomes unused.
Signed-off-by: John Ferlan <jferlan@redhat.com>
As of commit id '155ca616' a 'refreshVol' is called after the buildVol
succeeds in storageVolCreateXML, thus the volStorageBackendRBDRefreshVolInfo
call in virStorageBackendRBDBuildVol is no longer necessary.
Signed-off-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1256999
After creating a copy of the 'authdef' in a pool -> disk translation,
unconditionally clear the 'authType' in the resulting disk auth def
structure since that's used for a storage pool and not a disk. This
ensures virStorageAuthDefFormat will properly format the <auth> XML
for a <disk> (e.g. it won't have a <auth type='%s'.../>).
https://bugzilla.redhat.com/show_bug.cgi?id=1247987
Calculation of the extended and logical partition values for the disk
pool is complex. As the bz points out, an extended partition should have
its allocation initialized to 0 (zero) and keep the capacity as the size
dictated by the extents read. Then for each logical partition found,
adjust the allocation of the extended partition.
Finally, previous logic tried to avoid recalculating things if a logical
partition was deleted; however, since we now have special logic to handle
the allocation of the extended partition, just make life easier by reading
the partition table again - rather than doing the reverse adjustment.
https://bugzilla.redhat.com/show_bug.cgi?id=1251461
When 'starting' up a disk pool, we need to make sure the label on the
device is valid; otherwise, the followup refreshPool will assume the
disk has been properly formatted for use. If we don't find the valid
label, then refuse the start and give a proper reason.
Let's check to ensure we can find the Partition Table in the label
and that libvirt actually recognizes that type; otherwise, when we
go to read the partitions during a refresh operation we may not be
reading what we expect.
This will expand upon the types of errors or reasons that a build
would fail, so we can create more direct error messages.
Modify virStorageBackendDiskValidLabel to add a 'writelabel' parameter.
While initially for the purpose of determining whether the label should
be written during DiskBuild, a future use during DiskStart could determine
whether the pool should be started using the label found. Augment the
error messages also to give a hint as to what someone may need to do
or why the command failed.
Create a new function virStorageBackendDiskValidLabel to handle checking
whether there is a label on the device and whether it's valid or not.
While initially for the purpose of determining whether the label can be
overwritten during DiskBuild, a future use during DiskStart could determine
whether the pool should be started using the label found.
https://bugzilla.redhat.com/show_bug.cgi?id=1233003
Although perhaps bordering on a 'don't do that' type of scenario, if
someone creates a volume in a pool outside of libvirt, then uses that
same name to create a volume in the pool via libvirt, then the creation
will fail and in some cases cause the same name volume to be deleted.
This patch will refresh the pool just prior to checking whether the
named volume exists prior to creating the volume in the pool. While
it's still possible to have a timing window to create a file after the
check - at least we tried. At that point, someone is being malicious.
Since commit e0139e3, we update the pool allocation with
the user-provided allocation values.
For qcow2, the allocation is ignored for volume building,
but we still subtracted it from pool's allocation.
This can result in interesting values if the user-provided
allocation is large enough:
Capacity: 104.71 GiB
Allocation: 109.13 GiB
Available: 16.00 EiB
We already do a VolRefresh on volume creation. Also refresh
the volume after creating and use the new value to update the pool.
https://bugzilla.redhat.com/show_bug.cgi?id=1163091
Similar to commit id '35847860', it's possible to attempt to create
a 'netfs' directory in an NFS root-squash environment which will cause
the 'vol-delete' command to fail. It's also possible error paths from
the 'vol-create' would result in an error to remove a created directory
if the permissions were incorrect (and disallowed root access).
Thus rename the virFileUnlink to be virFileRemove to match the C API
functionality, adjust the code to use rmdir or unlink depending on the
path type, and then use/call it for the VIR_STORAGE_VOL_DIR.
Commit id '155ca616' added the 'refreshVol' API. In an NFS root-squash
environment it was possible that if the just created volume from XML wasn't
properly created with the right uid/gid and/or mode, then the followup
refreshVol will fail to open the volume in order to get the allocation/
capacity values. This would leave the volume still on the server and
cause a libvirtd crash because 'voldef' would be in the pool list, but
the cleanup code would free it.
Commit id '7c2d65dde2' changed the default value of mode to be -1 if not
supplied in the XML, which should cause creation of the volume using the
default mode of VIR_STORAGE_DEFAULT_VOL_PERM_MODE; however, the check
made was whether mode was '0' or not in order to use the default or the
provided value.
This patch fixes the issue to check if the 'mode' was provided in the XML
and use that value.
In an NFS root-squashed environment the 'vol-delete' command will fail to
'unlink' the target volume since it was created under a different uid:gid.
This code continues the concepts introduced in virFileOpenForked and
virDirCreate[NoFork] with respect to running the unlink command under
the uid/gid of the child. Unlike the other two, don't retry on EACCES
(that's why we're here doing this now).
While a zero allocation in safezero should be fine, it isn't when we use
posix_fallocate, which returns EINVAL on a zero allocation.
While we could skip the zero allocation in safezero_posix_fallocate, it's
an optimization to do it for all allocations.
This fixes vm installation via virtinst for me which otherwise aborts
like:
Starting install...
Retrieving file linux... | 5.9 MB 00:01 ...
Retrieving file initrd.gz... | 29 MB 00:07 ...
ERROR Couldn't create storage volume 'virtinst-linux.sBgds4': 'cannot fill file '/var/lib/libvirt/boot/virtinst-linux.sBgds4': Invalid argument'
The error was introduced by e30297b0 as spotted by Chunyan Liu
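The essence of the fix, as a sketch:

  /* at the top of safezero(), before any backend is tried */
  if (len == 0)
      return 0;   /* posix_fallocate() would fail with EINVAL */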
In commit 155ca616e, a change was introduced that no longer allowed defining
volumes via XML with a capacity of '0'. Because we check for info.size_arg
to be non-zero, this use-case fails. This patch allows info.size_arg to be
zero if no backing store is specified.
Signed-off-by: Chris J Arges <chris.j.arges@canonical.com>
Currently, when trying to virsh pool-define/virsh pool-build a new
'dir' pool, if the target directory already exists, virsh
pool-build/virStoragePoolBuild will error out. This is a change of
behaviour compared to eg libvirt 1.2.13
This is caused by the wrong type being used for the dir_create_flags
variable in virStorageBackendFileSystemBuild; it's defined as a bool
but is used as a flag bit field, so it should be unsigned int (this matches
the type virDirCreate expects for this variable).
This should fix https://bugzilla.gnome.org/show_bug.cgi?id=752417 (GNOME
Boxes) and https://bugzilla.redhat.com/show_bug.cgi?id=1244080
(downstream virt-manager).
Resolving an error reporting bug introduced by commit id '761491e', which
just took the return of virStorageBackendRBDCreateImage and used it as
the basis for the message generated. This would generate EPERM regardless
of the error seen.
We used to look at the librbd code version and depending on that
we would invoke rbd_create3() or rbd_create().
Since librbd version 0.67.9 we can however tell RBD that it should
create rbd format 2 images even if we invoke rbd_create().
The fewer options we pass to librbd, the more we can lean on the sane
defaults it uses.
For rbd_create3() we had things like the stripe count and unit hardcoded
in libvirt and that might cause problems down the road.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
When virsh vol-clone is attempted on a raw file where capacity > allocation,
the resulting cloned volume has a size that matches the virtual size of
the parent, instead of matching its actual disk size.
This patch fixes the cloned disk to have same _allocated_size_ as
the parent file from which it was cloned.
Ref: http://www.redhat.com/archives/libvir-list/2015-May/msg00050.html
Also fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1130739
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Instead of storing the remaining bytes, store the position of the first
unallocated byte. This will allow changing the amount of bytes copied
by virStorageBackendCopyToFD without changing the safezero call.
No functional impact.
This patch reverts commit 4749d82a which tried to tweak the logic in
volume creation. We did realloc and update our object list before we executed
volume building within a specific storage backend. If that failed, we
had to update (again) our object list to the original state as it was before the
build and delete the volume from the pool (even though it didn't exist - this
truly depends on the backend).
I misunderstood the base idea, which is to be able to poll the status of the
volume creation using vol-info. After commit 4749d82a this wasn't possible
anymore, although no BZ has been reported yet.
Commit 4749d82a also claimed to fix
https://bugzilla.redhat.com/show_bug.cgi?id=1223177, but commit c8be606b of the
same series as 4749d82ad (which was more of a refactor than a fix)
fixes the same issue so the revert should be pretty straightforward.
Furthermore, BZ https://bugzilla.redhat.com/show_bug.cgi?id=1241454 can be
fixed with this revert.
Commit 2a31c5f0 introduced support for storage pool state XMLs, however
it also introduced a regression:
if (!virStoragePoolObjIsActive(pool)) {
virStoragePoolObjUnlock(pool);
continue;
}
The idea behind this was that since we've got state XMLs and the pool
wasn't marked as active by autostart routine (if the autostart flag had been
set earlier), the pool is inactive and we can leave it be and continue with
other pools. However, filesystem type pools like fs, dir, and possibly netfs are
supposed to be active if the filesystem is mounted on the host. And this is
exactly where the regression occurs, e.g. pool type 'dir' which has been
previously destroyed and marked as !autostart gets filtered out
by the condition above.
The resolution should be simply to remove the condition completely;
all pools will get their 'active' flag updated by the check callback, and if
they do not support such a callback, the logic doesn't change and such
pools will be inactive by default (e.g. RBD, even if a state XML exists).
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1238610
https://bugzilla.redhat.com/show_bug.cgi?id=1230664
Per the devmapper docs, use "/dev/mapper" or "/dev/dm-n" in order to
determine if a device is under control of DM Multipath.
So add "/dev/mapper" to the virFileExists check, leaving "/dev/mpath"
as a "legacy" option since it appears that for a while it was the preferred
mechanism, but it is no longer maintained.
Libvirt periodically refreshes all volumes in a storage pool, including
the volumes being cloned.
While cloning a storage volume from parent, we drop pool locks. Subsequent
volume refresh sometimes changes allocation for an ongoing copy, and leads
to corrupt images.
Fix: Introduce a shadow volume that isolates the volume object under refresh
from the base which has a copy ongoing.
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1227664
If the requested format type for the new entry in the file system pool
is a 'dir', then be sure to set the vol->type correctly as would be done
when the pool is refreshed.
Related to:
https://bugzilla.redhat.com/show_bug.cgi?id=1171933
Rather than ignore the return status from virStorageBackendSCSIFindLUs,
cause a failure to start the pool if a -1 is returned. The issue was noted
during testing of the bz for iscsi, where 'scsi' and 'fc' pools didn't fail.
Commit id '832a9256' adjusted the code to recognize when the default
type of "unknown" was provided as the format type and to use "dos" if
found. Since the pool is built with "dos" and it could cause some
confusion when formatting the XML after building by seeing "unknown"
in the output, let's just adjust the pool's setting to "dos" so that
subsequent formats will see the value.
https://bugzilla.redhat.com/show_bug.cgi?id=1224233
Currently it's not possible to determine the difference between a fatal
error (a memory allocation failure or a failure to open/read the directory)
and a perhaps less fatal one: not finding the "block" device in the
directory (which may be a disk entry without a block device).
In the case of the latter, we shouldn't cause a failure to continue
searching in the caller (virStorageBackendSCSIFindLUs); rather we
should allow trying to read the next directory entry.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1186969
When generating the path to the dir for a CIFS/Samba driver, the code
would generate a source path for the mount using "%s:%s" while the
mount.cifs expects to see "//%s/%s". So check for the cifsfs and
format the source path appropriately.
Additionally, since there is no means to authenticate, the mount
needs a "-o guest" on the command line in order to anonymously mount
the Samba directory.
In order for the glusterfs boolean to be set, the pool->def->type must be
VIR_STORAGE_POOL_NETFS, thus the check within virCommandNewArgList whether
pool->def->type is VIR_STORAGE_POOL_FS will never be true, so remove it
Instead of initializing return value to zero (success) and overwriting
it on every failure just before the control jumps onto 'out' label,
let's initialize to an error value and set to zero only when we are
sure about the success. Just follow the pattern we have in the rest of
the code.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Instead of initializing return value to zero (success) and overwriting
it on every failure just before the control jumps onto 'out' label,
let's initialize to an error value and set to zero only when we are
sure about the success. Just follow the pattern we have in the rest of
the code.
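The pattern being adopted, in outline (hypothetical helpers):

  int ret = -1;               /* pessimistic default */

  if (first_step() < 0)
      goto out;
  if (second_step() < 0)
      goto out;

  ret = 0;                    /* success declared in exactly one place */
 out:
  return ret;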
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1225694
Check if the disk partition to be wiped is the extended partition; if
so, then disallow it. Do this via changing the wipeVol backend to check
the volume before passing it to the common virStorageBackendVolWipeLocal.
https://bugzilla.redhat.com/show_bug.cgi?id=1200206
Commit id '1b4eaa61' added the ability to have a mode='direct' for
an iscsi disk volume. It relied on virStorageTranslateDiskSourcePool
in order to copy any disk source pool authentication information to
the direct disk volume, but it neglected to also copy the 'secrettype'
field which ends up being used in the domain volume formatting code.
Adding a secrettype for this case will allow for proper formatting later
and allow disk snapshotting to work properly
Additionally libvirtd restart processing would fail to find the domain
since the translation processing code is run after domain xml processing,
so handle the case where the authdef could have an empty secrettype
field when processing the auth and additionally ignore performing the
actual and expected auth secret type checks for a DISK_VOLUME since that
data will be reassembled later during translation processing of the
running domain.
https://bugzilla.redhat.com/show_bug.cgi?id=1181087
The virStorageBackendFileSystemIsMounted is called from three source paths:
checkPool, startPool, and stopPool. Both start and stop validate the FS
fields before calling *IsMounted; however, the check path makes no such call.
This could lead the code into returning a true in "isActive" if for some
reason the target path for the pool was mounted. The assumption being
that if it was mounted, then we believe we started/mounted it.
It's also of note that commit id '81165294' added an error message for
the start/mount path regarding that the target is already mounted so
fail the start. That check was adjusted by commit id '13fde7ce' to
only message if actually mounted.
At one time this led the libvirtd restart autostart code to declare
that the pool was active even though the startPool would inhibit startup
and the stopPool would inhibit shutdown. The autostart path changed as
of commit id '2a31c5f0' as part of the keep storage pools started between
libvirtd restarts.
This patch adds the same check made prior to start/mount and stop/unmount
to ensure we have a valid configuration before attempting to see if the
target is already mounted to declare "isActive" or not. Finding an improper
configuration will now cause an error at checkPool, which should make it
so we can no longer be left in a situation where the pool was started and
we have no way to stop it.
https://bugzilla.redhat.com/show_bug.cgi?id=1181087
Currently the assumption on the error message is that there are
no source device paths defined when the number of devices check
fails, but in reality the XML could have had none or it could have
had more than the value supported. Adjust the error message accordingly
to make it clearer what the error really is.
We do update the pool volume object list before we actually create any
volume. If buildVol fails, we then try to delete the volume in the
storage as well as remove it from our structures. The problem is that
any backend that supports both buildVol and deleteVol would fail in this
case, which is completely unnecessary. This patch causes the update to
take place after we know a volume has been created successfully, thus no
removal in case of a buildVol failure is necessary.
https://bugzilla.redhat.com/show_bug.cgi?id=1223177
https://bugzilla.redhat.com/show_bug.cgi?id=1224018
The disk pool recalculates the pool allocation, capacity, and available
values each time through processing a newly created disk partition. This
created an issue with the allocation setting since the code used is shared
with the refresh path. Each path calls virStorageBackendDiskReadPartitions
which initializes the pool values and then processes the partition table
from the 'libvirt_parthelper' utility output with the only difference being
create passes a specific volume to be processed while refresh passes a NULL
indicating to process all volumes. That passed volume is checked during the
virStorageBackendDiskMakeVol call to see if the current partition described
by the volume key already exists. If it exists, then no adjustments are
made to the allocation and the next entry in the output is checked.
For the create path this resulted in only the most recently created
partition's size being accounted for in the 'allocation' setting. This
patch thus checks whether the incoming volume is NULL before clearing
the pool allocation value.
Commit id '2ac0e647' for https://bugzilla.redhat.com/show_bug.cgi?id=1206521
was meant to be a generic check for the CreateVol, CreateVolFrom, and
DeleteVol paths to check if the storage backend changed the pool's view
of allocation or available values.
Unfortunately, as it turns out, this caused a side effect: when the disk
backend created an extended partition there would be no actual storage
removed from the pool, thus the checks would not find any change in
allocation or available and would incorrectly update the pool values using
the size of the extended partition. A subsequent refresh of the pool would
reset the values appropriately.
This patch modifies those checks in order to specifically not update the
pool allocation and available for only the disk backend rather than be
generic before and after checks.
This never worked.
In 0.9.10 when this API was introduced, it was intended that
the SHRINK flag combined with DELTA would shrink the volume by
the specified capacity (to avoid passing negative numbers).
See commit 055bbf4.
When the SHRINK flag was finally implemented for the first backend
in 1.2.13 (commit aa9aa6a), it was only implemented for the absolute
values and with the delta flag the volume is always extended,
regardless of the SHRINK flag.
Treat the SHRINK flag as a minus sign when used together with DELTA,
to allow shrinking volumes as was documented in the API since 0.9.10.
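Conceptually, the capacity computation becomes (a sketch using the public flag names; MIN as in libvirt's internal helpers):

  if (flags & VIR_STORAGE_VOL_RESIZE_DELTA) {
      if (flags & VIR_STORAGE_VOL_RESIZE_SHRINK)
          abs_capacity = vol->target.capacity
                         - MIN(capacity, vol->target.capacity);
      else
          abs_capacity = vol->target.capacity + capacity;
  }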
https://bugzilla.redhat.com/show_bug.cgi?id=1220213
Since shrinking a volume below existing allocation is not allowed,
it is not possible for a successful resize with VOL_RESIZE_ALLOCATE
to increase the pool's available value.
Even with the SHRINK flag it is possible to extend the current
allocation or even the capacity. Remove the overflow when
computing delta with this flag and do the check even if the
flag was specified.
https://bugzilla.redhat.com/show_bug.cgi?id=1073305
The code already exists there; it just modified different flags. I just
noticed this when looking at the code. This patch is better viewed
with bigger context or '-W'.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Only set directory permissions at pool build time, if:
- User explicitly requested a mode via the XML
- The directory needs to be created
- We need to do the crazy NFS root-squash workaround
This allows qemu:///session to call build on an existing directory
like /tmp.
The XML parser sets a default <mode> if none is explicitly passed in.
This is then used at pool/vol creation time, and unconditionally reported
in the XML.
The problem with this approach is that it's impossible for other code
to determine if the user explicitly requested a storage mode. There
are some cases where we want to make this distinction, but we currently
can't.
Handle <mode> parsing like we handle <owner>/<group>: if no value is
passed in, set it to -1, and adjust the internal consumers to handle
it.
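Consumers then apply the default lazily, along these lines (field spelling may differ across versions):

  mode_t mode = (perms->mode == (mode_t) -1)
                ? VIR_STORAGE_DEFAULT_VOL_PERM_MODE
                : perms->mode;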
Coverity points out it's possible for one of the virCommand{Output|Error}*
API's to have not allocated 'output' and/or 'error' in which case the
strstr comparison will cause a NULL deref
Signed-off-by: John Ferlan <jferlan@redhat.com>
Just as we allow stopping filesystem pools when they were unmounted
externally, do not fail to stop an iscsi pool when someone else
closed the session externally.
Reported at:
https://bugzilla.redhat.com/show_bug.cgi?id=1171984
Trying to use qemu:///session to create a storage pool pointing at
/tmp will usually fail with something like:
$ virsh pool-start tmp
error: Failed to start pool tmp
error: cannot open volume '/tmp/systemd-private-c38cf0418d7a4734a66a8175996c384f-colord.service-kEyiTA': Permission denied
If any volume in an FS pool can't be opened by the daemon, the refresh
fails, and the pool can't be used.
This causes pain for virt-install/virt-manager though. Imagine a user
downloads a disk image to /tmp. virt-manager wants to import /tmp as
a storage pool, so we can detect what disk format it is, and set the
XML correctly. However this case will likely fail as explained above.
Change the logic here to skip volumes that fail to open. This could
conceivably cause user complaints along the lines of 'why doesn't
libvirt show $ROOT-OWNED-VOLUME-FOO', but figuring that currently
the pool won't even startup, I don't think there are any current
users that care about that case.
https://bugzilla.redhat.com/show_bug.cgi?id=1103308
If you end up with a state file for a pool that no longer starts up
or refreshes correctly, the state file is never removed and adds
noise to the logs every time libvirtd is started.
If the initial state syncing fails, delete the statefile.
After pool startup we call refreshPool(). If that fails, we leave
a stale pool state file hanging around.
Hit this trying to create a pool with qemu:///session containing
root owned files.
https://bugzilla.redhat.com/show_bug.cgi?id=1171933
Adjust the processLU error returns to be a bit more logical. Currently,
the calling code cannot determine the difference between a non disk/lun
volume and a processed/found disk/lun. It can also not differentiate
between a perhaps real/fatal error and one that won't necessarily stop
the code from finding other volumes.
After this patch virStorageBackendSCSIFindLUsInternal will stop processing
as soon as a "fatal" message occurs rather than continuing processing
for no apparent reason. It will also only set the *found value when
at least one of the processLU's was successful.
With the failed return, if the reason for the stop was that the pool
target path did not exist, was /dev, was /dev/, or did not start with
/dev, then iSCSI pool startup and refresh will fail.
Rather than passing/returning a pointer to a boolean to indicate that
perhaps we should try again - adjust the return of the call to return
the count of LU's found during processing, then let the caller decide
what to do with that value.
Use the virStorageBackendPoolUseDevPath API to determine whether creation
of a stable target path is possible for the volume.
This will differentiate a failed virStorageBackendStablePath which won't
need to be fatal. Thus, we'll add a -2 return value to indicate that
the failure was a result of either the inability to find the symlink for
the device or a failure to open the target path directory.
For virStorageBackendStablePath, in order to make decisions in other code,
split out the checks regarding whether the pool's target is empty, is using
/dev, is using /dev/, or doesn't start with /dev.