We update the pool's volume object list before we actually create any
volume. If buildVol fails, we then try to delete the volume in the
storage backend as well as remove it from our structures. The problem is
that any backend that supports both buildVol and deleteVol would fail in
this case, which is completely unnecessary. This patch makes the update
take place after we know a volume has been created successfully, so no
removal is necessary in case of a buildVol failure.
https://bugzilla.redhat.com/show_bug.cgi?id=1223177
https://bugzilla.redhat.com/show_bug.cgi?id=1224018
The disk pool recalculates the pool allocation, capacity, and available
values each time through processing a newly created disk partition. This
created an issue with the allocation setting since the code used is shared
with the refresh path. Each path calls virStorageBackendDiskReadPartitions
which initializes the pool values and then processes the partition table
from the 'libvirt_parthelper' utility output with the only difference being
that create passes a specific volume to be processed while refresh passes
NULL, indicating that all volumes should be processed. That passed volume is checked during the
virStorageBackendDiskMakeVol call to see if the current partition described
by the volume key already exists. If it exists, then no adjustments are
made to the allocation and the next entry in the output is checked.
For the create path this resulted in only the most recently created
partition's size being accounted for in the 'allocation' setting. This
patch thus checks whether the incoming volume is NULL before clearing
the pool allocation value.
Commit id '2ac0e647' for https://bugzilla.redhat.com/show_bug.cgi?id=1206521
was meant to be a generic check for the CreateVol, CreateVolFrom, and
DeleteVol paths to determine whether the storage backend changed the pool's
view of the allocation or available values.
Unfortunately, as it turns out, this caused a side effect: when the disk
backend created an extended partition, no actual storage was removed from
the pool, so the checks would not find any change in allocation or
available and would incorrectly update the pool values using the size of
the extended partition. A subsequent refresh of the pool would reset the
values appropriately.
This patch modifies those checks so that the pool allocation and available
values are specifically not updated for the disk backend, rather than
relying on generic before-and-after checks.
This never worked.
In 0.9.10 when this API was introduced, it was intended that
the SHRINK flag combined with DELTA would shrink the volume by
the specified capacity (to avoid passing negative numbers).
See commit 055bbf4.
When the SHRINK flag was finally implemented for the first backend
in 1.2.13 (commit aa9aa6a), it was only implemented for absolute
values; with the DELTA flag the volume was always extended,
regardless of the SHRINK flag.
Treat the SHRINK flag as a minus sign when used together with DELTA,
to allow shrinking volumes as was documented in the API since 0.9.10.
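As a rough illustration of the intended semantics (a minimal sketch with
simplified names, not the actual libvirt code), the absolute target capacity
can be derived as follows:
    #include <stdbool.h>
    /* Sketch: with DELTA, SHRINK acts as a minus sign on the requested
     * value; without DELTA the requested value is taken as absolute. */
    static unsigned long long
    resize_abs_capacity(unsigned long long current, unsigned long long requested,
                        bool delta, bool shrink)
    {
        if (!delta)
            return requested;
        if (shrink)
            return requested > current ? 0 : current - requested;
        return current + requested;
    }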
https://bugzilla.redhat.com/show_bug.cgi?id=1220213
Since shrinking a volume below existing allocation is not allowed,
it is not possible for a successful resize with VOL_RESIZE_ALLOCATE
to increase the pool's available value.
Even with the SHRINK flag it is possible to extend the current
allocation or even the capacity. Remove the overflow when
computing delta with this flag and do the check even if the
flag was specified.
https://bugzilla.redhat.com/show_bug.cgi?id=1073305
The code already exists there, it just modified different flags. I just
noticed this when looking at the code. This patch is better viewed
with bigger context or '-W'.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Only set directory permissions at pool build time, if:
- User explicitly requested a mode via the XML
- The directory needs to be created
- We need to do the crazy NFS root-squash workaround
This allows qemu:///session to call build on an existing directory
like /tmp.
The XML parser sets a default <mode> if none is explicitly passed in.
This is then used at pool/vol creation time, and unconditionally reported
in the XML.
The problem with this approach is that it's impossible for other code
to determine if the user explicitly requested a storage mode. There
are some cases where we want to make this distinction, but we currently
can't.
Handle <mode> parsing like we handle <owner>/<group>: if no value is
passed in, set it to -1, and adjust the internal consumers to handle
it.
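A minimal sketch of that convention (simplified, not the actual parser code;
parse_mode is a hypothetical helper):
    #include <stdlib.h>
    /* Return -1 when <mode> is absent so consumers can tell "unset" apart
     * from a real mode value and apply their own default. */
    static int
    parse_mode(const char *mode_str)
    {
        if (mode_str == NULL)
            return -1;                           /* user did not specify a mode */
        return (int) strtol(mode_str, NULL, 8);  /* octal, e.g. "0755" */
    }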
Coverity points out it's possible for one of the virCommand{Output|Error}*
APIs to have not allocated 'output' and/or 'error', in which case the
strstr comparison would cause a NULL dereference.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Just as we allow stopping filesystem pools when they were unmounted
externally, do not fail to stop an iscsi pool when someone else
closed the session externally.
Reported at:
https://bugzilla.redhat.com/show_bug.cgi?id=1171984
Trying to use qemu:///session to create a storage pool pointing at
/tmp will usually fail with something like:
$ virsh pool-start tmp
error: Failed to start pool tmp
error: cannot open volume '/tmp/systemd-private-c38cf0418d7a4734a66a8175996c384f-colord.service-kEyiTA': Permission denied
If any volume in an FS pool can't be opened by the daemon, the refresh
fails, and the pool can't be used.
This causes pain for virt-install/virt-manager though. Imagine a user
downloads a disk image to /tmp. virt-manager wants to import /tmp as
a storage pool, so we can detect what disk format it is and set the
XML correctly. However, this case will likely fail as explained above.
Change the logic here to skip volumes that fail to open. This could
conceivably cause user complaints along the lines of 'why doesn't
libvirt show $ROOT-OWNED-VOLUME-FOO', but given that currently
the pool won't even start up, I don't think there are any current
users that care about that case.
https://bugzilla.redhat.com/show_bug.cgi?id=1103308
If you end up with a state file for a pool that no longer starts up
or refreshes correctly, the state file is never removed and adds
noise to the logs every time libvirtd is started.
If the initial state syncing fails, delete the statefile.
After pool startup we call refreshPool(). If that fails, we leave
a stale pool state file hanging around.
Hit this trying to create a pool with qemu:///session containing
root owned files.
https://bugzilla.redhat.com/show_bug.cgi?id=1171933
Adjust the processLU error returns to be a bit more logical. Currently,
the calling code cannot determine the difference between a non disk/lun
volume and a processed/found disk/lun. It also cannot differentiate
between a real/fatal error and one that won't necessarily stop
the code from finding other volumes.
After this patch virStorageBackendSCSIFindLUsInternal will stop processing
as soon as a "fatal" message occurs rather than continuing processing
for no apparent reason. It will also only set the *found value when
at least one of the processLU calls was successful.
With the failed return, if the reason for the stop was that the pool
target path did not exist, was /dev, was /dev/, or did not start with
/dev, then iSCSI pool startup and refresh will fail.
Rather than passing/returning a pointer to a boolean to indicate that
perhaps we should try again, adjust the call to return the count
of LUs found during processing, then let the caller decide
what to do with that value.
Use the virStorageBackendPoolUseDevPath API to determine whether creation of
a stable target path is possible for the volume.
This will differentiate a failed virStorageBackendStablePath which doesn't
need to be fatal. Thus, we'll add a -2 return value to indicate that
the failure was a result of either the inability to find the symlink for
the device or a failure to open the target path directory.
For virStorageBackendStablePath, in order to make decisions in other code,
split out the checks regarding whether the pool's target is empty, is /dev,
is /dev/, or doesn't start with /dev.
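A sketch of what such a check might amount to (hypothetical helper,
simplified from the conditions listed above):
    #include <stdbool.h>
    #include <string.h>
    /* Stable target paths only make sense when the pool target is a
     * directory under /dev, but not /dev itself. */
    static bool
    pool_use_dev_path(const char *target)
    {
        if (target == NULL || *target == '\0')
            return false;
        if (strcmp(target, "/dev") == 0 || strcmp(target, "/dev/") == 0)
            return false;
        return strncmp(target, "/dev", 4) == 0;
    }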
https://bugzilla.redhat.com/show_bug.cgi?id=1206521
If the backend driver updates the pool available and/or allocation values,
then the storage_driver VolCreateXML, VolCreateXMLFrom, and VolDelete APIs
should not change the value; otherwise, it will appear as if the values
were "doubled" for each change. Additionally since unsigned arithmetic will
be used depending on the size and operation, either or both values could be
appear to be much larger than they should be (in the EiB range).
Currently only the disk pool updates the values, but other pools could.
Assume a "fresh" disk pool of 500 MiB using /dev/sde:
$ virsh pool-info disk-pool
...
Capacity: 509.88 MiB
Allocation: 0.00 B
Available: 509.84 MiB
$ virsh vol-create-as disk-pool sde1 --capacity 300M
$ virsh pool-info disk-pool
...
Capacity: 509.88 MiB
Allocation: 600.47 MiB
Available: 16.00 EiB
The following assumes the disk backend has been updated to refresh the disk
pool at deletion of a primary partition as well as an extended partition:
$ virsh vol-delete --pool disk-pool sde1
Vol sde1 deleted
$ virsh pool-info disk-pool
...
Capacity: 509.88 MiB
Allocation: 9.73 EiB
Available: 6.27 EiB
This patch will check if the backend updated the pool values and honor that
update.
Commit id '471e1c4e' only considered updating the pool if the extended
partition was removed. As it turns out removing a primary partition
would also need to update the freeExtent list otherwise the following
sequence would fail (assuming a "fresh" disk pool for /dev/sde of 500M):
$ virsh pool-info disk-pool
...
Capacity: 509.88 MiB
Allocation: 0.00 B
Available: 509.84 MiB
$ virsh vol-create-as disk-pool sde1 --capacity 300M
$ virsh vol-delete --pool disk-pool sde1
$ virsh vol-create-as disk-pool sde1 --capacity 300M
error: Failed to create vol sde1
error: internal error: no large enough free extent
$
This patch will refresh the pool, rereading the partitions, so the
freeExtent list reflects the removed partition.
https://bugzilla.redhat.com/show_bug.cgi?id=1073305
When creating a volume in a pool, the creation allows the 'capacity'
value to be larger than the available space in the pool. As long as
the 'allocation' value will fit in the space, the volume will be created.
However, the resize checks compared the new absolute capacity value against
the existing capacity plus the available space, without regard for whether
the new absolute capacity was actually allocating space or not. For example,
in a pool with 75G of available space, creating a volume with a capacity of
100G and an allocation of 10G will succeed; however, if the volume was
instead created with a capacity of 10G and then resized to a capacity of
100G, the code would refuse to let the backend even try the resize.
Furthermore, when updating the pool "available" and "allocation" values,
the resize code would just "blindly" adjust them regardless of whether
space was "allocated" or just "capacity" was being adjusted. This left
a scenario whereby a resize to 100G would fail; however, a resize to 50G
followed by one to 100G would both succeed. Again, neither was adjusting
the allocation value, just the "capacity" value.
This patch adds more logic to the resize code to understand whether the
new capacity value is actually "allocating" space as well, and whether it is
shrinking or expanding. Since unsigned arithmetic is involved, it is quite
possible to adjust the pool size values incorrectly otherwise.
This patch also ensures that updates to the pool values only occur if we
actually performed the allocation.
NB: storageVolDelete, storageVolCreateXML, and storageVolCreateXMLFrom
each update the pool allocation/availability values only by the target
volume's allocation value.
The 'checkPool' callback was originally part of the storageDriverAutostart
function, but the pools need to be checked earlier, during the initialization
phase; otherwise we can't start a domain which mounts a volume after the
libvirtd daemon restarted. This is because qemuProcessReconnect is called
earlier than storageDriverAutostart. Therefore the 'checkPool' logic has been
moved to storagePoolUpdateAllState which is called inside storageDriverInitialize.
We also need a valid 'conn' reference to be able to execute 'refreshPool'
during the initialization phase. Though it isn't available until
storageDriverAutostart, all of our storage backends ignore the 'conn' pointer
except for RBD, and RBD doesn't support the 'checkPool' callback, so it's
safe to pass conn = NULL in this case.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1177733
This patch introduces a new virStorageDriverState element, stateDir.
It also adds the necessary changes to storageStateInitialize so that
directory initialization becomes more generic.
If the call to virStorageBackendISCSIGetHostNumber failed, we set
retval = -1, but still called virStorageBackendSCSIFindLUs.
Add a goto cleanup; while at it, adjust the logic to
initialize retval to -1 and only change it to 0 (zero) on success.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Don't supersede the error message from virStorageBackendSCSIFindLUs, as a
message such as "error: Failed to find LUs on host 60: ..." is not overly
clear as to what the real problem might be.
Signed-off-by: John Ferlan <jferlan@redhat.com>
In order to be able to use 'checkPool' inside functions which do not
have any connection reference, the 'conn' attribute needs to be dropped
from checkPool's signature, since it's not used by any storage backend
anyway.
A helper that never returns an error and treats bits out of bitmap range
as false.
Use it everywhere we use ignore_value on virBitmapGetBit, or loop over
the bitmap size.
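The semantics can be sketched roughly like this (a simplified illustration,
not the libvirt implementation):
    #include <stdbool.h>
    #include <stddef.h>
    /* Bits past the end of the map read as false instead of being an error. */
    static bool
    bitmap_is_bit_set(const unsigned long *map, size_t nbits, size_t bit)
    {
        const size_t per_word = 8 * sizeof(unsigned long);
        if (bit >= nbits)
            return false;
        return (map[bit / per_word] >> (bit % per_word)) & 1;
    }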
The virStorageBackendISCSIFindPoolSources API only needs the 'host' name
in order to discover iSCSI pools, it returns the various device paths.
On input, it's also possible to further restrict a search by providing the
port attribute for the host element and the (undocumented) initiator element.
For example:
$ virsh find-storage-pool-sources-as iscsi
error: Failed to find any iscsi pool sources
error: invalid argument: hostname and device path must be specified for iscsi sources
$ virsh find-storage-pool-sources-as iscsi 192.168.122.1
<sources>
  <source>
    <host name='192.168.122.1' port='3260'/>
    <device path='iqn.2013-12.com.example:iscsi-chap-lclpool'/>
  </source>
</sources>
https://bugzilla.redhat.com/show_bug.cgi?id=1181062
According to the formatstorage.html description for <source> element
and "format" attribute: "All drivers are required to have a default
value for this, so it is optional."
As it turns out the disk backend did not choose a default value, so I
added a default of "msdos" if the source type is "unknown" as well as
updating the storage.html backend disk volume driver documentation to
indicate the default format is dos.
Instead of just looking at the output of fstat, call
virStorageFileGetMetadata to get the full capacity from
image headers.
Note that the capacity is probed unconditionally. The updateCapacity
bool parameter is ignored and will be removed in the following commit.
In virStorageVolCreateXML, add VIR_VOL_XML_PARSE_NO_CAPACITY
to the call parsing the XML of the new volume to make the capacity
optional.
If the capacity is omitted, use the capacity of the old volume.
We already do that for values that are less than the original
volume capacity.
Not all files we want to find using virFileFindResource{,Full} are
generated when libvirt is built; some of them (such as RNG schemas) are
distributed with the sources. The current API was not able to find source
files if libvirt was built in VPATH.
Both RNG schemas and cpu_map.xml are distributed in source tarball.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
While the main storage driver code allows the flag
VIR_STORAGE_VOL_RESIZE_SHRINK to be set, none of the backend
drivers are supporting it. At the very least this can work
for plain file based volumes since we just ftruncate() them
to the new size. It does not work with qcow2 volumes, but we
can arguably delegate to qemu-img for error reporting for that
instead of second guessing this for ourselves:
$ virsh vol-resize --shrink /home/berrange/VirtualMachines/demo.qcow2 2G
error: Failed to change size of volume 'demo.qcow2' to 2G
error: internal error: Child process (/usr/bin/qemu-img resize /home/berrange/VirtualMachines/demo.qcow2 2147483648) unexpected exit status 1: qemu-img: qcow2 doesn't support shrinking images yet
qemu-img: This image does not support resize
See also https://bugzilla.redhat.com/show_bug.cgi?id=1021802
https://bugzilla.redhat.com/show_bug.cgi?id=1176510
When storageDriverAutostart is called via virStateReload during a 'service
libvirtd reload', then because the volume list in the pool wasn't cleared
prior to the call, each volume would be listed multiple times (as many
times as we reload). I believe the issue was introduced by commit
id '9e093f0b' at least for the libvirtd reload path, although I suppose
the introduction of virStateReload (commit id '70da0494') could be a
different cause.
Thus, like other places prior to calling refreshPool, we need to call
virStoragePoolObjClearVols.
https://bugzilla.redhat.com/show_bug.cgi?id=1138516
If the provided volume name doesn't match what parted generated as the
partition name, then return a failure.
Update virsh.pod and formatstorage.html.in to describe the 'name' restriction
for disk pools as well as the usage of the <target>'s <format type='value'>.
When removing a volume that is the extended partition, all the logical
partitions that exist within the extended partition will also be
removed, so we need to refresh the pool to have the updated list.
During virStorageBackendDiskMakeDataVol processing, if we find an extended
partition, then handle it specially when updating the capacity/allocation
rather than calling virStorageBackendUpdateVolInfo.
As it turns out, once a logical partition exists, any attempt to refresh
the pool, or a libvirtd restart/reload, will result in a failure to open
the extended partition device, making it impossible to start the pool.
The downside to this is we will lose the <permissions> and <timestamps> for
the extended partition upon subsequent restart, refresh, reload since the
stat() in virStorageBackendUpdateVolTargetInfoFD will not be called. However,
since it's really only a container and shouldn't directly be used for
storage that seems reasonable.
Therefore, only use the existing code that already had a comment about
getting the allocation wrong for extended partitions for just the setting
of the extended partition data.
While checking the existing partitions in virStorageBackendDiskPartFormat,
the code would erroneously compare the volume target format type (eg, the
virStoragePartedFsType) rather than the source partition type (eg, the
virStorageVolTypeDisk) which is set during virStorageBackendDiskReadPartitions.
During virStorageBackendDiskCreateVol, if virStorageBackendDiskReadPartitions
fails, then we were leaving with an error, a partition on the disk for
which there was no corresponding volume, and used space on the disk which
could only be reclaimed through direct parted activity. On a subsequent
restart, reload, or refresh the volume may magically appear too.
Move the API to before virStorageBackendDiskCreateVol in order to be
able to call the DeleteVol API when virStorageBackendDiskReadPartitions
fails so that we don't by chance leave a partition on the disk.
When creating a RAW file, we don't take advantage
of btrfs cloning.
Add a VIR_STORAGE_VOL_CREATE_REFLINK flag to request
a reflink copy.
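On btrfs, such a copy can be requested with the clone ioctl; a rough sketch
(an illustration only, the actual implementation may differ):
    #include <sys/ioctl.h>
    #include <linux/btrfs.h>   /* BTRFS_IOC_CLONE */
    /* Ask the filesystem to share the source file's extents with the
     * destination instead of copying the data. */
    static int
    reflink_file(int dstfd, int srcfd)
    {
        return ioctl(dstfd, BTRFS_IOC_CLONE, srcfd);
    }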
Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
For stateless, client side drivers, it is never correct to
probe for secondary drivers. It is only ever appropriate to
use the secondary driver that is associated with the
hypervisor in question. As a result the ESX & HyperV drivers
have both been forced to do hacks where they register no-op
drivers for the ones they don't implement.
For stateful, server side drivers, we always just want to
use the same built-in shared driver. The exception is
virtualbox which is really a stateless driver and so wants
to use its own server side secondary drivers. To deal with
this virtualbox has to be built as 3 separate loadable
modules to allow registration to work in the right order.
This can all be simplified by introducing a new struct
recording the precise set of secondary drivers each
hypervisor driver wants:
struct _virConnectDriver {
    virHypervisorDriverPtr hypervisorDriver;
    virInterfaceDriverPtr interfaceDriver;
    virNetworkDriverPtr networkDriver;
    virNodeDeviceDriverPtr nodeDeviceDriver;
    virNWFilterDriverPtr nwfilterDriver;
    virSecretDriverPtr secretDriver;
    virStorageDriverPtr storageDriver;
};
Instead of registering the hypervisor driver, we now
just register a virConnectDriver instead. This allows
us to remove all probing of secondary drivers. Once we
have chosen the primary driver, we immediately know the
correct secondary drivers to use.
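For illustration only (the foo* driver tables below are placeholders, not
real libvirt symbols), a driver would then declare something like:
    static virConnectDriver fooConnectDriver = {
        .hypervisorDriver = &fooHypervisorDriver,
        .storageDriver = &fooStorageDriver,
        /* members left NULL mean no secondary driver of that kind */
    };
and register that single struct instead of the individual drivers.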
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Remove the resize flag and use the same code path for all callers.
This flag was added by commit 18f0316 to allow virStorageFileResize
to use 'safezero' while preserving the behavior.
Explicitly return -2 when a fallback to a different method should
be done, to make the code path more obvious.
Fail immediately when ftruncate fails in the mmap method,
as we did before commit 18f0316.
A recent lvm change has resulted in a change to the "default" type of
logical volume created when "--virtualsize" or "-V" is supplied on
the command line (e.g. when the allocation and capacity values of a
to-be-created volume differ). It seems that at the very least the following
change adjusts the default type:
https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=e0164f21
and the following may also have some impact.
https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=87fc3b71
When using the virsh vol-create-as or vol-create xmlfile commands, the
result is that libvirt will now create a "thin logical volume" and a
"thin logical volume pool" rather than just a "thin snapshot logical
volume". For example the following sequence:
# lvcreate --name test -L 2M -V 5M lvm_test
Rounding up size to full physical extent 4.00 MiB
Rounding up size to full physical extent 8.00 MiB
Logical volume "test" created.
# lvs lvm_test
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lvol1 lvm_test twi-a-tz-- 4.00m 0.00 0.98
test lvm_test Vwi-a-tz-- 8.00m lvol1 0.00
compared to the former code which had the following:
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
test LVM_Test swi-a-s--- 4.00m [test_vorigin] 0.00
Since libvirt doesn't know how to parse the thin logical volume
and pool, it will fail to find the newly created volume and pool
even though it exists in the volume group.
It cannot find since the command used to find/parse returns a thin volume
'test' with no associated device, for example the output is:
lvol1##UgUwkp-fTFP-C0rc-ufue-xrYh-dkPr-FGPFPx#lvol1_tdata(0)#thin-pool#1#4194304#4194304#4194304#twi-a-tz--
test##NcaIoH-4YWJ-QKu3-sJc3-EOcS-goff-cThLIL##thin#0#8388608#4194304#8388608#Vwi-a-tz--
as compared to the former which had the following:
test#[test_vorigin]#Dt5Of3-4WE6-buvw-CWJ4-XOiz-ywOU-YULYw6#/dev/sda3(1300)#linear#1#4194304#4194304#4194304#swi-a-s---
While it's possible to generate code to handle the new thin lv and pool, this
patch will add "--type snapshot" onto the lvcreate command libvirt uses
in order to, for now, continue to be able to utilize thin snapshots.
Currently the virStorageFileResize() function uses build conditionals to
choose either posix_fallocate() or syscall(SYS_fallocate), with no
fallback, in order to preallocate space in the newly resized file.
Since the safezero code has a similar set of conditionals, modify the
resize and safezero code in order to allow the resize logic to make use
of safezero and unify the look/feel of the code paths.
Add a new boolean (resize) to safezero() to make the optional decision
whether to try syscall(SYS_fallocate) if posix_fallocate fails because
HAVE_POSIX_FALLOCATE is not defined (eg, returns -1 with errno == 0).
Create a local safezero_sys_fallocate in order to handle the resize
code paths that support that. If not present, then set errno = ENOSYS
in order to allow the caller to handle the failure scenarios.
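A simplified sketch of the described fallback behavior (names and structure
simplified, not the actual code):
    #include <errno.h>
    #include <fcntl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    /* Returns 0 on success, -1 with errno set on failure; ENOSYS tells
     * the caller that no allocation primitive was available. */
    static int
    safezero_sketch(int fd, off_t offset, off_t len, int resize)
    {
    #ifdef HAVE_POSIX_FALLOCATE
        int rc = posix_fallocate(fd, offset, len);
        if (rc == 0)
            return 0;
        errno = rc;
        return -1;
    #else
        if (resize) {
    # ifdef SYS_fallocate
            if (syscall(SYS_fallocate, fd, 0, offset, len) == 0)
                return 0;
            return -1;
    # endif
        }
        errno = ENOSYS;
        return -1;
    #endif
    }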
Signed-off-by: John Ferlan <jferlan@redhat.com>
In old versions of parted like parted-2.1-25, the error message is shown on
stdout when printing disk info for a disk without a disk label.
Error: /dev/sda: unrecognised disk label
This line has been moved to stderr in newer versions of parted, so we
should check both stdout and stderr when locating this message.
This should fix bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1172468
Signed-off-by: Hao Liu <hliu@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1087104#c5
When trying to use an invalid offset with virStorageVolUpload(), libvirt
fails in virFDStreamOpenFileInternal(), but libvirt does
not check the return value in storageVolUpload() and calls
virFDStreamSetInternalCloseCb() right after. The stream doesn't have
privateData yet (it is NULL), so the daemon then crashes.
0 0x00007f09429a9c10 in pthread_mutex_lock () from /lib64/libpthread.so.0
1 0x00007f094514dbf5 in virMutexLock (m=<optimized out>) at util/virthread.c:88
2 0x00007f09451cb211 in virFDStreamSetInternalCloseCb at fdstream.c:795
3 0x00007f092ff2c9eb in storageVolUpload at storage/storage_driver.c:2098
4 0x00007f09451f46e0 in virStorageVolUpload at libvirt.c:14000
5 0x00007f0945c78fa1 in remoteDispatchStorageVolUpload at remote_dispatch.h:14339
6 remoteDispatchStorageVolUploadHelper at remote_dispatch.h:14309
7 0x00007f094524a192 in virNetServerProgramDispatchCall at rpc/virnetserverprogram.c:437
Signed-off-by: Luyao Huang <lhuang@redhat.com>
While this could be exposed as a public API, it's not done yet as
there's no demand for that yet. Anyway, this is just preparing
the environment for easier volume creation on the destination.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Since virSecretFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
Since virStoragePoolFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
Since virStorageVolFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
https://bugzilla.redhat.com/show_bug.cgi?id=1159180
The virStoragePoolSourceFindDuplicate only checks the incoming definition
against the same type of pool as the def; however, for "scsi_host" and
"fc_host" adapter pools, it's possible that either some pool "scsi_host"
adapter definition is already using the scsi_hostN that the "fc_host"
adapter definition wants to use or some "fc_host" pool adapter definition
is using a vHBA scsi_hostN or parent scsi_hostN that an incoming "scsi_host"
definition is trying to use.
This patch adds the mismatched type checks and adds explanatory comments
to describe what each check is determining.
This patch also modifies the documentation to describe which scsi_hostN
devices a "scsi_host" source adapter should use and which to avoid. It also
updates the parent definition to specifically call out that for mixed
environments it's better to define which parent to use so that the duplicate
pool checks can be done properly.
https://bugzilla.redhat.com/show_bug.cgi?id=1159180
Move the API from the backend to storage_conf and rename it to
virStoragePoolGetVhbaSCSIHostParent. A future patch will need to
use this functionality from storage_conf.
https://bugzilla.redhat.com/show_bug.cgi?id=1152382
When libvirt creates the vport (VPORT_CREATE) for the vHBA, there isn't
enough "time" between the creation and the running of the following
backend->refreshPool after a backend->startPool in order to find the LUs.
Population of LUs happens asynchronously when udevEventHandleCallback
discovers the "new" vHBA port. Creation of the infrastructure by udev
is an iterative process creating and discovering actual storage devices and
adjusting the environment.
Because of the time it takes to discover and set things up, the backend
refreshPool call done after the startPool call will generally fail to
find any devices. This leaves the newly started pool appearing empty when
querying via 'vol-list' after startup. The "workaround" has always been
to run pool-refresh after startup (or any time thereafter) in order to
find the LUs. Depending on how quickly it is run after startup, this too may
not find any LUs in the pool. Eventually though, given enough time and
retries, it will find something if LUs exist for the vHBA.
This patch adds a thread to be executed after the VPORT_CREATE which will
attempt to find the LUs without requiring an external run of pool-refresh.
It does this by waiting for 5 seconds and searching for the LUs. If any
are found, then the thread completes; otherwise, it will retry once more
in another 5 seconds. If none are found in that second pass, the thread
gives up.
Things learned while investigating this... No need to try and fill the
pool too quickly or too many times. Over the course of creation, the udev
code may 'add', 'change', and 'delete' the same device. So if the refresh
code runs and finds something, it may display it only to have a subsequent
refresh appear to "lose" the device. The udev processing doesn't seem to
have a way to indicate that it's all done with the creation processing of a
newly found vHBA. Only the Lone Ranger has silver bullets to fix everything.
Fix a problem in getBlockDevice and processLU where retval was initialized
to zero, causing some failures to erroneously continue through to
virStorageBackendSCSINewLun with an attempt to find a path for "/dev/(null)".
This would fail approximately 10 seconds later with the debug message:
virStorageBackendSCSINewLun:203 :
No stable path found for '/dev/(null)' in '/dev/disk/by-path'
The root cause of the issue is for many /sys/bus/scsi/devices/<lun path>
there is no "block*" device found for the vHBA's, where "<lun path>" are
the various paths created for the vHBA, such as "17:0:0:0", "17:0:1:0",
"17:0:2:0", "17:0:3:0", etc. If the block device isn't found, then the
directory should just be ignored rather than attempting to process it.
The bug was that in getBlockDevice the assumption was "block" would exist
and either getNewStyleBlockDevice or getOldStyleBlockDevice would fill in
@block_device. However, if 'block*' doesn't exist, then the code returned
NULL for block_device *and* a good (zero) retval value. This caused the
processLU code to attempt the virStorageBackendSCSINewLun which failed
"at some point in time" in the future.
After this change, on the test system the refresh-pool no longer had a
noticeable pause of about 20 seconds; it completed within a second since
there was no longer an attempt/need to find "/dev/(null)".
Additionally, virStorageBackendSCSIFindLUs shouldn't declare 'found'
unless processLU actually returns success. This will be
important in the followup patch which relies on whether an LU was found.
https://bugzilla.redhat.com/show_bug.cgi?id=1160926
Introduce a 'managed' attribute to allow libvirt to decide whether to
delete a vHBA vport created via external means such as nodedev-create.
The code currently decides whether to delete the vHBA based solely on
whether the parent was provided at creation time. However, that may not
be the desired action, so rather than delete and force someone to create
another vHBA via an additional nodedev-create allow the configuration of
the storage pool to decide the desired action.
During createVport when libvirt does the VPORT_CREATE, set the managed
value to YES if not already set to indicate to the deleteVport code that
it should delete the vHBA when the pool is destroyed.
If libvirtd is restarted, all the memory-only state is lost, so for a
persistent storage pool, use virStoragePoolSaveConfig in order to
write out the managed value.
Because we're now saving the current configuration, we need to be sure
to not save the parent in the output XML if it was undefined at start.
Saving the name would cause future starts to always use the same parent
which is not the expected result when not providing a parent. By not
providing a parent, libvirt is expected to find the best available
vHBA port for each subsequent (re)start.
At deleteVport, use the new managed value to decide whether to execute
the VPORT_DELETE. Since we no longer save the parent in memory or in
XML when provided, if it was not provided, then we have to look it up.
https://bugzilla.redhat.com/show_bug.cgi?id=1160926
Passing a copy of the storage pool adapter to a function just changes the
copy of the fields in the particular function and then when returning to
the caller those changes are discarded. While not yet biting us in the
storage clean-up case, it did cause an issue for the fchost storage pool
startup case, createVport. The issue was at startup, if no parent is found
in the XML, the code will search for the 'best available' parent and then
store that in the in memory copy of the adapter. Of course, in this case
it was a copy, so when returning to the virStorageBackendSCSIStartPool that
change was discarded (or lost) from the pool->def->source.adapter which
meant at shutdown (deleteVport), the code assumed no adapter was passed
and skipped the deletion, leaving the vHBA created by libvirt still defined
requiring an additional stop of a nodedev-destroy to remove.
Adjusted the createVport to take virStoragePoolDefPtr instead of the
adapter copy. Then use the virStoragePoolSourceAdapterPtr when processing.
A future patch will need the 'def' anyway, so this just sets up for that.
https://bugzilla.redhat.com/show_bug.cgi?id=1160565
The existing code assumed that the configuration of a 'parent' attribute
was correct for the createVport path. As it turns out, that may not be
the case, which leads to errors during the deleteVport path because the
wwnn/wwpn isn't associated with the parent.
With this change the following is reported:
error: Failed to start pool fc_pool_host3
error: XML error: Parent attribute 'scsi_host4' does not match parent 'scsi_host3' determined for the 'scsi_host16' wwnn/wwpn lookup.
for XML as follows:
<pool type='scsi'>
  <name>fc_pool</name>
  <source>
    <adapter type='fc_host' parent='scsi_host4' wwnn='5001a4aaf3ca174b' wwpn='5001a4a77192b864'/>
  </source>
Where 'nodedev-dumpxml scsi_host16' provides:
<device>
  <name>scsi_host16</name>
  <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3/vport-3:0-11/host16</path>
  <parent>scsi_host3</parent>
  <capability type='scsi_host'>
    <host>16</host>
    <unique_id>13</unique_id>
    <capability type='fc_host'>
      <wwnn>5001a4aaf3ca174b</wwnn>
      <wwpn>5001a4a77192b864</wwpn>
      ...
The patch also adjusts the description of the storage pool to describe the
restrictions.
Signed-off-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1160565
If a 'parent' attribute is provided for the fchost, then at startup
time check to ensure it is a vport capable scsi_host. If the parent
is not vport capable, then disallow the startup. The following is the
expected result:
error: Failed to start pool fc_pool
error: XML error: parent 'scsi_host2' specified for vHBA is not vport capable
where the XML for the fc_pool is:
<pool type='scsi'>
  <name>fc_pool</name>
  <source>
    <adapter type='fc_host' parent='scsi_host2' wwnn='5001a4aaf3ca174b' wwpn='5001a4a77192b864'/>
  </source>
  ...
and 'scsi_host2' is not vport capable.
Providing an incorrect parent and a correct wwnn/wwpn could lead to
failures at shutdown (deleteVport) where the assumption is the parent
is for the fchost.
NOTE: If the provided wwnn/wwpn doesn't resolve to an existing scsi_host,
then we will be creating one with code (virManageVport) which
assumes the parent is vport capable.
Signed-off-by: John Ferlan <jferlan@redhat.com>
The shared storage driver is stateful and inside the daemon so
there is no need to use the storagePrivateData field to get the
driver handle. Just access the global driver handle directly.
Add a new parameter to virStorageFileGetMetadata that will break the
backing chain detection process and report a useful error message rather
than having to use virStorageFileChainGetBroken.
This patch just introduces the option, usage will be provided
separately.
- Provide an implementation for buildPool and deletePool operations
for the ZFS storage backend.
- Add VIR_STORAGE_POOL_SOURCE_DEVICE flag to ZFS pool poolOptions
as now we can specify devices to build pool from
- storagepool.rng: add an optional 'sourceinfodev' to 'sourcezfs' and
add an optional 'target' to 'poolzfs' entity
- Add a couple of tests to storagepoolxml2xmltest
Coverity complains that when multiplying two 32-bit values that will
eventually be stored in a 64-bit value, it's possible the math could
overflow unless one of the values being multiplied is type cast to
the proper size.
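For example (illustrative only, not the actual libvirt code), the fix
amounts to casting one operand so the whole multiplication is done in
64 bits:
    /* Without the cast, cyls * heads * sectors * 512 is computed in
     * 32 bits and can overflow before being widened. */
    static unsigned long long
    disk_capacity(unsigned int cyls, unsigned int heads, unsigned int sectors)
    {
        return (unsigned long long) cyls * heads * sectors * 512;
    }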
Signed-off-by: John Ferlan <jferlan@redhat.com>
Since commit cd4d547576,
Coverity notes that setting 'ret = -3' prior to the unconditional
setting of 'ret = 0' causes the value to be UNUSED.
Since the comment indicates that it is expected to allow the code
to continue, just remove the ret = -3 setting.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Currently, after calling commands to create a new volumes,
virStorageBackendZFSCreateVol calls virStorageBackendZFSFindVols that
calls virStorageBackendZFSParseVol.
virStorageBackendZFSParseVol checks if a volume already exists by
trying to get it using virStorageVolDefFindByName.
For a just-created volume it returns NULL, so the volume is reported as
new and appended to pool->volumes. This causes the volume to be listed
twice, as storageVolCreateXML appends this new volume to the list as
well.
Fix that by passing a new volume definition to
virStorageBackendZFSParseVol so it could determine if it needs to add
this volume to the list.
There were two occurrences of attempting to initialize actualType by
calling virStorageSourceGetActualType(src) prior to a check if (!src),
resulting in Coverity complaining about the possible NULL dereference
of src in virStorageSourceGetActualType().
Resolve by moving the actualType setting to after the !src check.
virStorageBackendVolDownloadLocal and virStorageBackendVolUploadLocal
use virFDStreamOpenFile function to work with the volume fd.
virFDStreamOpenFile calls virFDStreamOpenFileInternal that implements
handling of the non-blocking I/O. If a file is not a character device and
not a fifo, it uses libvirt_iohelper.
On FreeBSD, it doesn't work as expected because disk devices (including
ZFS volumes) are exposed as character devices, and ZFS volumes do not
support open(2) with O_NONBLOCK.
To overcome this, introduce a forceIOHelper flag to
virFDStreamOpenFileInternal that forces using libvirt_iohelper. And
introduce virFDStreamOpenBlockDevice that calls
virFDStreamOpenFileInternal with the forceIOHelper set to true.
In some places in the libvirt code we have:
f(a,z)
instead of
f(a, z)
This trivial patch fixes a couple of such occurrences.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Currently, the qemu driver uses qemuTranslateDiskSourcePool()
to translate disk volume information. This function is
general enough and could be used by other drivers as well,
so move it to storage/storage_driver.c along with its helpers:
- qemuTranslateDiskSourcePool: move to storage/storage_driver.c
and rename to virStorageTranslateDiskSourcePool,
- qemuAddISCSIPoolSourceHost: move to storage/storage_driver.c
and rename to virStorageAddISCSIPoolSourceHost,
- qemuTranslateDiskSourcePoolAuth: move to storage/storage_driver.c
and rename to virStorageTranslateDiskSourcePoolAuth,
- Update users of qemuTranslateDiskSourcePool to use a
new name.
Implement ZFS storage backend driver. Currently supported
only on FreeBSD because of ZFS limitations on Linux.
Features supported:
- pool-start, pool-stop
- pool-info
- vol-list
- vol-create / vol-delete
Pool definition looks like this:
<pool type='zfs'>
  <name>myzfspool</name>
  <source>
    <name>actualpoolname</name>
  </source>
</pool>
The 'actualpoolname' value is the name of the pool on the system,
such as shown by the 'zpool list' command. Target makes no sense
here because volume paths are always /dev/zvol/$poolname/$volname.
The user has to create a pool on their own; this driver doesn't
support pool creation currently.
A volume could be used with Qemu by adding an entry like this:
<disk type='volume' device='disk'>
  <driver name='qemu' type='raw'/>
  <source pool='myzfspool' volume='vol5'/>
  <target dev='hdc' bus='ide'/>
</disk>
https://bugzilla.redhat.com/show_bug.cgi?id=1072653
Upon successful upload of a volume, the target volume and storage pool
were not updated to reflect any changes as a result of the upload. Make
use of the existing stream close callback mechanism to force a backend
pool refresh to occur in a separate thread once the stream closes. The
separate thread should avoid potential deadlocks if the refresh needed
to wait on some event from the event loop which is used to perform
the stream callback.
Use the correct mode when pre-creating files (for snapshots). The refactor
changing to storage driver usage caused a regression, as some systems
created the file with 000 permissions, forbidding qemu from writing to the
file. Pass the mode to the creating functions to avoid the problem.
Regression since 185e07a5f8.
With my intended use of the storage driver to assist with chown'ing files
on remote storage, we will need a witness that tells us whether a given
storage volume supports the operations needed by the storage driver.
Gluster storage works on a similar principle to NFS where it takes the
uid and gid of the actual process and uses it to access the storage
volume on the remote server. This introduces a need to chown storage
files on gluster via native API.
virStorageBackendLogicalCreateVol contains a piece like:
    if (vol->target.path != NULL) {
        /* A target path passed to CreateVol has no meaning */
        VIR_FREE(vol->target.path);
    }
The 'if' is useless here, but 'syntax-check' doesn't catch that
because of the comment, so drop the 'if'.
If a parentaddr was provided in the XML, have getAdapterName look up
the stable address. This allows virStorageBackendSCSICheckPool() and
virStorageBackendSCSIRefreshPool() to automagically find the scsi_host
by its PCI address and unique_id.
Rather than assume that NOT FC_HOST is SCSI_HOST, let's call them out
specifically. Makes it easier to find SCSI_HOST code/structs and ensures
something isn't missed in the future.
https://bugzilla.redhat.com/show_bug.cgi?id=1091866
Add a new boolean 'sparse'. This will be used by the logical backend
storage driver to determine whether the target volume is sparse or not
(also known as a snapshot or thin logical volume). Although setting sparse
to true at creation could be seen as duplicative of setting it during
virStorageBackendLogicalMakeVol(), it covers the case where other code paths
between Create and FindLVs need to know that the volume is sparse.
Use the 'sparse' in a new virStorageBackendLogicalVolWipe() to decide whether
to attempt to wipe the logical volume or not. For now, I have found no
means to wipe the volume without writing to it. Writing to the sparse
volume causes it to be filled. A sparse logical volume is not completely
writeable, as there exists metadata which, if overwritten, will cause the
sparse lv to go INACTIVE, which means pool-refresh will not find it.
Access to whatever lvm uses to manage data blocks is not provided by
any API I could find.
Coverity complains about the return value of ioctl not being checked.
Even though we carry on when this fails (just like qemu-img does),
we can log an error.
For non-local storage drivers we can't expect to use the "scrub" tool to
wipe the volume. Split the code into a separate backend function so that
we can add protocol specific code later.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1118710
The next patch will move the storage volume wiping code into the
individual backends. This patch splits out the common code to wipe a
local volume into a separate backend helper so that the next patch is
simpler.
Add 'nocow' to the storage volume XML so that the user has an option
to set the NOCOW flag on the newly created volume. It's useful on the btrfs
file system to enhance performance.
Btrfs has low performance when hosting VM images, even more so when the
guests in those VMs are also using btrfs as their file system. One way to
mitigate this bad performance is to turn off COW attributes on VM files.
Generally, there are two ways to turn off COW on btrfs: a) by mounting the
fs with nodatacow, then all newly created files will be NOCOW; b) per file,
by adding the NOCOW file attribute, which can only be done to empty or new
files.
This patch takes the second way: according to the 'nocow' option, it sets
the NOCOW flag per file. For raw file images, 'nocow' is handled in libvirt
code; for non-raw file images, the 'nocow=on' option is passed to qemu-img,
letting qemu-img handle it (requires qemu-img version >= 2.1).
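For the raw-file case, setting the flag on Linux roughly amounts to the
following sketch (an illustration only; the real code likely differs):
    #include <sys/ioctl.h>
    #include <linux/fs.h>   /* FS_IOC_GETFLAGS, FS_IOC_SETFLAGS, FS_NOCOW_FL */
    /* Must be applied to an empty or newly created file. */
    static int
    set_nocow(int fd)
    {
        int attr = 0;
        if (ioctl(fd, FS_IOC_GETFLAGS, &attr) < 0)
            return -1;
        attr |= FS_NOCOW_FL;
        return ioctl(fd, FS_IOC_SETFLAGS, &attr);
    }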
Signed-off-by: Chunyan Liu <cyliu@suse.com>
When the backing store of a volume wasn't accessible while updating the
volume definition, the call would fail altogether. In cases where we
currently (incorrectly) treat remote backing stores as local ones this
might lead to strange errors.
Ignore the opening errors until we figure out how to track proper volume
metadata.
Use the backing store parser to properly create the information about a
volume's backing store. Unfortunately as the storage driver isn't
prepared to allow volumes backed by networked filesystems add a
workaround that will avoid changing the XML output.
For non-local storage drivers we can't expect to use the FDStream
backend for up/downloading volumes. Split the code into a separate
backend function so that we can add protocol specific code later.
To allow reusing this function in the qemu driver we need to allow
specifying the storage format. Also, separately returning the backing
store path is no longer necessary.
Replace the authType, chap, and cephx unions in virStoragePoolSource
with a single pointer to a virStorageAuthDefPtr. Adjust all users of
the previous chap/cephx and secret unions with the source->auth data.
Replace:
    if (virBufferError(&buf)) {
        virBufferFreeAndReset(&buf);
        virReportOOMError();
        ...
    }
with:
    if (virBufferCheckError(&buf) < 0)
        ...
This should not be a functional change (unless some callers
misused the virBuffer APIs - a different error would be reported
then)
The parent directory doesn't need to be stored now that we don't
mangle the path stored in the image. Remove it and tweak the code
to avoid using it.
Due to various refactors and compatibility with virstoragetest, the
relPath field of the virStorageSource structure was always filled either
with the relative name or with the full path in case of absolutely backed
storage. Restore its original purpose of storing only the relative name of
the disk if it is backed relatively, and tweak the tests.
Report VIR_ERR_NO_STORAGE_VOL instead of a system error when lstat
fails because the file doesn't exist.
Fixes this problem in virt-install:
https://bugzilla.redhat.com/show_bug.cgi?id=1108922
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Rework internal pool lookup code to avoid printing the raw UUID buffer
in the case a storage pool can't be found:
$ virsh pool-name e012ace0-0460-5810-39ef-1bce5fa5a4dd
error: failed to get pool 'e012ace0-0460-5810-39ef-1bce5fa5a4dd'
error: Storage pool not found: no storage pool with matching uuid à¬à`X9ï_¥¤Ý
The rework is mostly done by switching the lookup code to the newly
introduced helper virStoragePoolObjFromStoragePool
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1104993
Use the new backing store parser in the backing chain crawler. This
change needs one test change where information about the NBD image are
now parsed differently.
Use virStorageFileReadHeader() to read headers of storage files possibly
on remote storage to retrieve the image metadata.
The backend information is now parsed by
virStorageFileGetMetadataInternal which is now exported from the util
source and virStorageFileGetMetadataFromFDInternal now doesn't need to
be exported.
Use the virStorageFileGetUniqueIdentifier() function to get a unique
identifier regardless of the target storage type instead of relying on
canonicalize_path().
A new function that checks whether we support a given image is
introduced to avoid errors for unimplemented backends.
Add a new function wrapper and tweak the storage file backend lookup
function so that it can be used without reporting an error. This will be
useful in the metadata crawler code where we need to silently break if
metadata retrieval is not supported for the current storage type.
When walking the backing chain we previously set the storage type to
_FILE and let the virStorageFileGetMetadataFromFDInternal update it to
the correct type later on.
This patch moves the actual storage type determination to the place
where we parse the backing store name so that the code can later be
switched to use virStorageFileReadHeader() directly.
My future work will modify the metadata crawler function to use the
storage driver file APIs to access the files instead of accessing them
directly so that we will be able to request the metadata for remote
files too. To avoid linking the storage driver to every helper file
using the utils code, the backing chain traversal function needs to be
moved to the storage driver source.
Additionally the virt-aa-helper and virstoragetest programs need to be
linked with the storage driver as a result of this change.
Different protocols have different means to uniquely identify a storage
file. This patch implements a storage driver API to retrieve a unique
string describing a volume. The current implementation works for local
storage only and returns the canonical path of the volume.
To add caching support the local filesystem driver now has a private
structure holding the cached string, which is created only when it's
initially accessed.
This patch provides the implementation only for local files, for a start.
Use virStorageFileGetMetadataFromFD instead in
virStorageBackendProbeTarget as it now returns all required data and the
storage file is already open in a file descriptor.
Also fix improper error code being returned when virFileReadHeaderFD
would fail as virStorageBackendUpdateVolTargetInfoFD would set the
return code to 0.
Add storage driver based functions to access headers of storage files
for metadata extraction. Along with this patch a local filesystem and
gluster via libgfapi implementation is provided. The gluster
implementation is based on code of the saferead_lim function.
To allow using the storage driver APIs to access files on various
storage sources in a universal fashion possibly on storage such as nfs
with root squash we'll need to store the desired uid/gid in the
metadata.
Add new initialisation API that will store the desired uid/gid and a
wrapper for the current use. Additionally add docs for the two APIs.
Print the debug statements of individual file access functions from the
main API functions instead of the individual backend functions.
Also enhance initialization debug messages on a per-backend basis.
The gluster volume name was previously stored as part of the source path
string. This is unfortunate when we want to do operations on the path as
the volume is used separately.
Parse and store the volume name separately for gluster storage volumes
and use the newly stored variable appropriately.
The VIR_ENUM_DECL/VIR_ENUM_IMPL helper macros already append 'Type'
to the enum name being converted; it looks silly to have functions
with 'TypeType' in their name. Even though some of our enums have
to have a 'Type' suffix, the corresponding string conversion
functions do not.
* src/conf/secret_conf.h (VIR_ENUM_DECL): Rename virSecretUsageType.
* src/conf/storage_conf.h (VIR_ENUM_DECL): Rename
virStoragePoolAuthType, virStoragePoolSourceAdapterType,
virStoragePartedFsType.
* src/conf/domain_conf.c (virDomainDiskDefParseXML)
(virDomainFSDefParseXML, virDomainFSDefFormat): Update callers.
* src/conf/secret_conf.c (virSecretDefParseUsage)
(virSecretDefFormatUsage): Likewise.
* src/conf/storage_conf.c (virStoragePoolDefParseAuth)
(virStoragePoolDefParseSource, virStoragePoolSourceFormat):
Likewise.
* src/lxc/lxc_controller.c (virLXCControllerSetupLoopDevices):
Likewise.
* src/storage/storage_backend_disk.c
(virStorageBackendDiskPartFormat): Likewise.
* src/util/virstorageencryption.c (virStorageEncryptionSecretParse)
(virStorageEncryptionSecretFormat): Likewise.
* tools/virsh-secret.c (cmdSecretList): Likewise.
* src/libvirt_private.syms (secret_conf.h, storage_conf.h): Export
corrected names.
Signed-off-by: Eric Blake <eblake@redhat.com>
In "src/conf/" there are many enumeration (enum) declarations.
Similar to the recent cleanup of the "src/util" directory, it's
better to use a typedef for variable types, function types and
other usages. Other enumerations and folders will be changed to
typedefs in the future. Most of the files changed in this
commit are related to storage (storage_conf) enums.
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1092882
Refactoring in commit id '0c2305b3' resulted in the wrong storage
volume object being passed to the new storageVolDeleteInternal().
It should have passed 'voldef' which is the address found in the
pool->volumes.objs[i] array. By passing 'voldef', the DeleteInternal
code will find and remove the voldef from the volumes.objs[] list.
When creating a new volume, it is possible to copy data into it from
another already existing volume (referred to as @origvol). Obviously,
the read-only access to @origvol is required, which is thread safe
(probably not performance-wise though). However, with current code
both @newvol and @origvol are marked as building for the time of
copying data from the @origvol to @newvol. The rationale behind
is to disallow some operations on both @origvol and @newvol, e.g.
vol-wipe, vol-delete, vol-download. While it makes sense to not allow
such operations on a partly copied mirror, it doesn't make sense to
disallow vol-create or vol-download on the source (@origvol).
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In "src/util/" there are many enumeration (enum) declarations.
Sometimes it's better to use a typedef for variable types,
function types and other usages. Other enumerations will be
changed to typedefs in the future.
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
All callers of virStorageFileGetMetadataFromBuf were first calling
virStorageFileProbeFormatFromBuf, to learn what format to pass in.
But this function is already wired to do the exact same probe if
the incoming format is VIR_STORAGE_FILE_AUTO, so it's simpler to
just refactor the probing into the central function.
* src/util/virstoragefile.h (virStorageFileGetMetadataFromBuf):
Drop parameter.
(virStorageFileProbeFormatFromBuf): Drop declaration.
* src/util/virstoragefile.c (virStorageFileGetMetadataFromBuf):
Do probe here instead of in callers.
(virStorageFileProbeFormatFromBuf): Make static.
* src/libvirt_private.syms (virstoragefile.h): Drop function.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Update caller.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit id 'ac9a0963' refactored out the 'withCapacity' for the
virStorageBackendUpdateVolInfo() API. See:
http://www.redhat.com/archives/libvir-list/2014-April/msg00043.html
This resulted in a difference in how 'virsh vol-info --pool <poolName>
<volume>' or 'virsh vol-list --pool <poolName> --details' outputs
the capacity information for a directory pool with a qcow2 sparse file.
For example, using the following XML
mkdir /home/TestPool
cat testpool.xml
<pool type='dir'>
  <name>TestPool</name>
  <uuid>6bf80895-10b6-75a6-6059-89fdea2aefb7</uuid>
  <source>
  </source>
  <target>
    <path>/home/TestPool</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
virsh pool-create testpool.xml
virsh vol-create-as --pool TestPool temp_vol_1 \
--capacity 1048576 --allocation 1048576 --format qcow2
virsh vol-info --pool TestPool temp_vol_1
Results in listing a Capacity value. Prior to the commit, the value would
be '1.0 MiB' (1048576 bytes). However, after the commit the output would be
(for example) '192.50 KiB', which for my system was the size of the volume
in my file system (eg 'ls -l TestPool/temp_vol_1' results in '197120' bytes
or 192.50 KiB). While perhaps technically correct, it's not necessarily
what the user expected (certainly virt-test didn't expect it).
This patch restores the code to not update the target capacity for this path.
The stripe_unit and stripe_count arguments are passed to rbd_create3 in
the wrong order, resulting in a stripe size of 1 byte with 4194304
stripes on newly created RBD volumes.
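For reference, librbd declares the call as rbd_create3(io, name, size,
features, *order, stripe_unit, stripe_count), so the fix is simply
swapping the last two arguments (variable names below are
illustrative):

    /* fix: stripe_unit must come before stripe_count */
    if (rbd_create3(ioctx, volname, capacity, features, &order,
                    stripe_unit, stripe_count) < 0)
        goto cleanup;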
https://bugzilla.redhat.com/show_bug.cgi?id=1092208
Signed-off-by: Steven McDonald <steven.mcdonald@anchor.net.au>
More instances of failure to report (unlikely) readdir errors.
In one case, I chose to ignore them, given that a readdir error
would be no different than timing out on the loop, where the
fallback path behaves correctly either way.
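For reference, the general shape of reporting a readdir() failure
(plain POSIX pattern, not the exact libvirt hunks):

    struct dirent *ent;
    errno = 0;
    while ((ent = readdir(dir)) != NULL) {
        /* ... process ent->d_name ... */
        errno = 0;                     /* reset before the next readdir() */
    }
    if (errno != 0) {
        virReportSystemError(errno, _("cannot read directory '%s'"), path);
        goto error;
    }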
* src/storage/storage_backend.c (virStorageBackendStablePath):
Ignore readdir errors.
* src/storage/storage_backend_fs.c
(virStorageBackendFileSystemRefresh): Report readdir errors.
* src/storage/storage_backend_iscsi.c
(virStorageBackendISCSIGetHostNumber): Likewise.
* src/storage/storage_backend_scsi.c (getNewStyleBlockDevice)
(getBlockDevice, virStorageBackendSCSIFindLUs): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Instead of hardcoding LIBEXECDIR as the location of the libvirt_parthelper
binary, use virFileFindResource to optionally find it in the current
build directory.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Commit id '18642d10' caused a virt-test regression for NFS backend
storage error path checks when running the command:
'virsh find-storage-pool-sources-as netfs Unknown '
when the host did not have Gluster installed. Prior to the commit,
the test would fail with the error:
error: internal error: Child process (/usr/sbin/showmount --no-headers
--exports Unknown) unexpected exit status 1: clnt_create: RPC: Unknown host
After the commit, the error would be ignored, the call would succeed,
and an empty list of pool sources returned. This was tucked into the
commit message as an expected outcome.
When the target host does not have a GLUSTER_CLI, this is a regression
from the previous release. Furthermore, even if the Gluster CLI was
present but failed to get devices, the API would return a failure even
though the NFS backend had found devices.
Modify the logic to return failure when the NFS backend check fails and
there's no GLUSTER_CLI, or when both backend checks fail.
If either returns success and GLUSTER_CLI is defined, then fetch and
return a list of source devices, even if it's empty.
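A sketch of the resulting control flow (variable names simplified;
GLUSTER_CLI is treated here as a build-time define, as the text above
implies):

    /* retNFS / retGluster: 0 when that backend's detection succeeded */
    #ifdef GLUSTER_CLI
        if (retNFS < 0 && retGluster < 0)
            goto cleanup;              /* both backends failed            */
    #else
        if (retNFS < 0)
            goto cleanup;              /* NFS failed, no gluster fallback */
    #endif
        /* at least one backend succeeded: format and return the list,
         * which may legitimately be empty */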
A couple pieces of virStorageFileMetadata are used only while
collecting information about the chain, and don't need to
live permanently in the struct. This patch refactors external
callers to collect the information separately, so that the
next patch can remove the fields.
* src/util/virstoragefile.h (virStorageFileGetMetadataFromBuf):
Alter signature.
* src/util/virstoragefile.c (virStorageFileGetMetadataInternal):
Likewise.
(virStorageFileGetMetadataFromFDInternal): Adjust callers.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Likewise.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Deciding if a user string represents a local file instead of a
network path is an operation worth exposing directly, particularly
since the next patch will be removing a redundant variable that
was caching the information.
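Usage then reduces to a single predicate (sketch; the example paths in
the comments are not from the patch):

    if (virStorageIsFile(disk->src)) {
        /* plain local path, e.g. "/var/lib/libvirt/images/a.img":
         * safe to stat()/open() directly */
    } else {
        /* network path, e.g. "nbd:example:1234": handled by a
         * protocol-specific backend instead */
    }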
* src/util/virstoragefile.h (virStorageIsFile): New declaration.
* src/util/virstoragefile.c (virBackingStoreIsFile): Rename...
(virStorageIsFile): ...export, and allow NULL input.
(virStorageFileGetMetadataInternal)
(virStorageFileGetMetadataRecurse, virStorageFileGetMetadata):
Update callers.
* src/conf/domain_conf.c (virDomainDiskDefForeachPath): Use it.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Likewise.
* src/libvirt_private.syms (virstoragefile.h): Export function.
Signed-off-by: Eric Blake <eblake@redhat.com>
Now that we store all metadata about a storage image in a
virStorageSource struct let's use it also to store information needed by
the storage driver to access and do operations on the files.
https://bugzilla.redhat.com/show_bug.cgi?id=1024159
If adding a volume to a storage pool fails during the CreateXML or
CreateXMLFrom APIs, we don't want to adjust the available and
allocation values for the storage pool during storageVolDelete
since we haven't adjusted the values for the create.
Refactor storageVolDelete() a bit to create a storageVolDeleteInternal()
which will handle the primary deletion activities. Add a parameter
updateMeta which will signify whether to update the values or not.
Adjust the calls from CreateXML and CreateXMLFrom to directly call the
DeleteInternal with the pool lock held. This does bypass the call
to virStorageVolDeleteEnsureACL().
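Conceptually, the failure path in the create APIs then looks like this
(the helper's exact parameter list is an assumption based on the
description above):

    /* creation failed before the pool's available/allocation were
     * adjusted, so delete the half-created volume without touching
     * the pool metadata */
    storageVolDeleteInternal(volobj, backend, pool, voldef,
                             0, false /* updateMeta */);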
Commit id '18642d10' refactored the call to virCommandRunRegex()
inside a new function virStorageBackendFileSystemNetFindNFSPoolSources(),
but the cut-and-paste didn't remove the "&state". Since 'state' is
already passed by reference there, this results in a libvirtd core
dump with a messages entry:
"...internal error: unknown storage pool type Unknow"
Currently VolOpen notifies the user of a potentially non-fatal failure by
returning -2 and logging a VIR_WARN or VIR_INFO. Unfortunately most
callers treat -2 as fatal but don't actually report any message with
the error APIs.
Rename the VOL_OPEN_ERROR flag to VOL_OPEN_NOERROR. If NOERROR is specified,
we preserve the current behavior of returning -2 (there's only one caller
that wants this).
However in the default case, only return -1, and actually use the error
APIs. Fix up a couple callers as a result.
A future patch will merge virStorageFileMetadata and virStorageSource,
but I found it easier to do if both structs use the same information
for tracking whether a source file needs encryption keys.
* src/util/virstoragefile.h (_virStorageFileMetadata): Prepare
full encryption struct instead of just a bool.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Use transfer semantics.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
* src/util/virstoragefile.c (virStorageFileGetMetadataInternal):
Populate struct.
(virStorageFileFreeMetadata): Adjust clients.
* tests/virstoragetest.c (testStorageChain): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Now that each virStorageSource can track allocation information,
and given that we already have the information without extra
syscalls, it's easier to just always populate the information
directly into the struct than it is to sometimes pass the address
of the struct members down the call chain.
* src/storage/storage_backend.h (virStorageBackendUpdateVolInfo)
(virStorageBackendUpdateVolTargetInfo)
(virStorageBackendUpdateVolTargetInfoFD): Update signature.
* src/storage/storage_backend.c (virStorageBackendUpdateVolInfo)
(virStorageBackendUpdateVolTargetInfo)
(virStorageBackendUpdateVolTargetInfoFD): Always populate struct
members instead.
* src/storage/storage_backend_disk.c
(virStorageBackendDiskMakeDataVol): Update client.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget)
(virStorageBackendFileSystemRefresh)
(virStorageBackendFileSystemVolRefresh): Likewise.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
* src/storage/storage_backend_logical.c
(virStorageBackendLogicalMakeVol): Likewise.
* src/storage/storage_backend_mpath.c
(virStorageBackendMpathNewVol): Likewise.
* src/storage/storage_backend_scsi.c
(virStorageBackendSCSINewLun): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
One of the features of qcow2 is that a wrapper file can have
more capacity than its backing file from the guest's perspective;
what's more, sparse files make tracking allocation of both
the active and backing file worthwhile. As such, it makes
more sense to show allocation numbers for each file in a chain,
and not just the top-level file. This sets up the fields for
the tracking, although it does not modify XML to display any
new information.
* src/util/virstoragefile.h (_virStorageSource): Add fields.
* src/conf/storage_conf.h (_virStorageVolDef): Drop redundant
fields.
* src/storage/storage_backend.c (virStorageBackendCreateBlockFrom)
(createRawFile, virStorageBackendCreateQemuImgCmd)
(virStorageBackendCreateQcowCreate): Update clients.
* src/storage/storage_driver.c (storageVolDelete)
(storageVolCreateXML, storageVolCreateXMLFrom, storageVolResize)
(storageVolWipeInternal, storageVolGetInfo): Likewise.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget)
(virStorageBackendFileSystemRefresh)
(virStorageBackendFileSystemVolResize)
(virStorageBackendFileSystemVolRefresh): Likewise.
* src/storage/storage_backend_logical.c
(virStorageBackendLogicalMakeVol)
(virStorageBackendLogicalCreateVol): Likewise.
* src/storage/storage_backend_scsi.c
(virStorageBackendSCSINewLun): Likewise.
* src/storage/storage_backend_mpath.c
(virStorageBackendMpathNewVol): Likewise.
* src/storage/storage_backend_rbd.c
(volStorageBackendRBDRefreshVolInfo)
(virStorageBackendRBDCreateImage): Likewise.
* src/storage/storage_backend_disk.c
(virStorageBackendDiskMakeDataVol)
(virStorageBackendDiskCreateVol): Likewise.
* src/storage/storage_backend_sheepdog.c
(virStorageBackendSheepdogBuildVol)
(virStorageBackendSheepdogParseVdiList): Likewise.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
* src/conf/storage_conf.c (virStorageVolDefFormat)
(virStorageVolDefParseXML): Likewise.
* src/test/test_driver.c (testOpenVolumesForPool)
(testStorageVolCreateXML, testStorageVolCreateXMLFrom)
(testStorageVolDelete, testStorageVolGetInfo): Likewise.
* src/esx/esx_storage_backend_iscsi.c (esxStorageVolGetXMLDesc):
Likewise.
* src/esx/esx_storage_backend_vmfs.c (esxStorageVolGetXMLDesc)
(esxStorageVolCreateXML): Likewise.
* src/parallels/parallels_driver.c (parallelsAddHddByVolume):
Likewise.
* src/parallels/parallels_storage.c (parallelsDiskDescParseNode)
(parallelsStorageVolDefineXML, parallelsStorageVolCreateXMLFrom)
(parallelsStorageVolDefRemove, parallelsStorageVolGetInfo):
Likewise.
* src/vbox/vbox_tmpl.c (vboxStorageVolCreateXML)
(vboxStorageVolGetXMLDesc): Likewise.
* tests/storagebackendsheepdogtest.c (test_vdi_list_parser):
Likewise.
* src/phyp/phyp_driver.c (phypStorageVolCreateXML): Likewise.
A fairly smooth transition. And now that domain disks and
storage volumes share a common struct, it opens the door for
a future patch to expose more details in the XML for both
objects.
* src/conf/storage_conf.h (_virStorageVolTarget): Delete.
(_virStorageVolDef): Use common type.
* src/conf/storage_conf.c (virStorageVolDefFree)
(virStorageVolTargetDefFormat): Update clients.
* src/storage/storage_backend.h: Likewise.
* src/storage/storage_backend.c
(virStorageBackendDetectBlockVolFormatFD)
(virStorageBackendUpdateVolTargetInfo)
(virStorageBackendUpdateVolTargetInfoFD): Likewise.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Some preparatory work before consolidating storage volume
structs with the rest of virstoragefile. Making these
changes allows a volume target to be much closer to (a
subset of) the virStorageSource struct.
Making perms be a pointer allows it to be optional if we
have a storage pool that doesn't expose permissions in a
way we can access. It also allows future patches to
optionally expose permissions details learned about a disk
image via domain <disk> listings, rather than just
limiting it to storage volume listings.
The disk partition type was only used by internal code to
control what type of partition to create when carving up
an MS-DOS partition table storage pool (it is not used
for GPT partition tables or other storage pools). It was
not exposed in volume XML, and it is more closely related
to the extent information of the overall block device
than to the <target> information describing the host
file. Besides, if we ever decide to expose it in XML down
the road, we can move it back as needed.
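A sketch of the resulting shape (names abbreviated; these are not the
verbatim libvirt structs):

    #include <sys/types.h>

    typedef struct _virExamplePerms virExamplePerms;
    struct _virExamplePerms {
        mode_t mode;
        uid_t uid;
        gid_t gid;
    };

    struct _virExampleVolTarget {
        char *path;
        int format;
        virExamplePerms *perms;   /* now a pointer: NULL when the backend
                                     cannot report ownership/permissions */
    };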
* src/conf/storage_conf.h (_virStorageVolTarget): Change perms to
pointer, enhance comments. Move partition type...
(_virStorageVolSource): ...here.
* src/conf/storage_conf.c (virStorageVolDefFree)
(virStorageVolDefParseXML, virStorageVolTargetDefFormat): Update
clients.
* src/storage/storage_backend_fs.c (createFileDir): Likewise.
* src/storage/storage_backend.c (virStorageBackendCreateBlockFrom)
(virStorageBackendCreateRaw, virStorageBackendCreateExecCommand)
(virStorageBackendUpdateVolTargetInfoFD): Likewise.
* src/storage/storage_backend_logical.c
(virStorageBackendLogicalCreateVol): Likewise.
* src/storage/storage_backend_disk.c
(virStorageBackendDiskMakeDataVol)
(virStorageBackendDiskPartTypeToCreate): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Noticed during my work on storage struct cleanups.
* src/storage/storage_backend_disk.c
(virStorageBackendDiskPartBoundaries): Fix spelling errors.
Signed-off-by: Eric Blake <eblake@redhat.com>
Now that we have a common struct, it's time to start using it!
Since external snapshots make a longer backing chain, it is
only natural to use the same struct for the file created by
the snapshot as what we use for <domain> disks.
* src/conf/snapshot_conf.h (_virDomainSnapshotDiskDef): Use common
struct instead of open-coded duplicate fields.
* src/conf/snapshot_conf.c (virDomainSnapshotDiskDefClear)
(virDomainSnapshotDiskDefParseXML, virDomainSnapshotAlignDisks)
(virDomainSnapshotDiskDefFormat)
(virDomainSnapshotDiskGetActualType): Adjust clients.
* src/qemu/qemu_conf.c (qemuTranslateSnapshotDiskSourcePool):
Likewise.
* src/qemu/qemu_driver.c (qemuDomainSnapshotDiskGetSourceString)
(qemuDomainSnapshotCreateInactiveExternal)
(qemuDomainSnapshotPrepareDiskExternalOverlayActive)
(qemuDomainSnapshotPrepareDiskExternal)
(qemuDomainSnapshotPrepare)
(qemuDomainSnapshotCreateSingleDiskActive): Likewise.
* src/storage/storage_driver.c
(virStorageFileInitFromSnapshotDef): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1072714
Use the "gluster" command line tool to retrieve information about remote
volumes on a gluster server to allow storage pool source lookup.
Unfortunately gluster doesn't provide a management library that we
could use directly; instead, the RPC calls are hardcoded in the
command line tool.
Extract the NFS related stuff into a separate function and tidy up the
rest of the code so we can reuse it to add gluster backend detection.
Additionally avoid reporting of errors from "showmount" and return an
empty source list instead. This will help when adding other detection
backends.
According to our documentation the "key" value has the following
meaning: "Providing an identifier for the volume which identifies a
single volume." The currently used keys for gluster volumes consist of
the gluster volume name and file path. This can't be considered unique
as a different storage server can serve a volume with the same name.
Unfortunately I wasn't able to figure out a way to retrieve the gluster
volume UUID which would avoid the possibility of having two distinct
keys identifying a single volume.
Use the full URI as the key for the volume to avoid the more critical
ambiguity problem and document the possible change to UUID.
The libgfapi function glfs_fini doesn't tolerate NULL pointers. Add a
check on the error paths as it's possible to crash libvirtd if the
gluster volume can't be initialized.
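The guard is a one-liner on the error/cleanup paths (sketch; assuming
the private data keeps its glfs_t handle in priv->vol):

    /* glfs_fini() does not tolerate NULL, so never hand it an
     * uninitialized handle */
    if (priv->vol)
        glfs_fini(priv->vol);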
It's finally time to start tracking disk backing chains in
<domain> XML. The first step is to start refactoring code
so that we have an object more convenient for representing
each host source resource in the context of a single guest
<disk>. Ultimately, I plan to move the new type into src/util
where it can be reused by virStorageFile, but to make the
transition easier to review, this patch just creates the
new type then fixes everything until it compiles again.
* src/conf/domain_conf.h (_virDomainDiskDef): Split...
(_virDomainDiskSourceDef): ...to new struct.
(virDomainDiskAuthClear): Use new type.
* src/conf/domain_conf.c (virDomainDiskDefFree): Split...
(virDomainDiskSourceDefClear): ...to new function.
(virDomainDiskGetType, virDomainDiskSetType)
(virDomainDiskGetSource, virDomainDiskSetSource)
(virDomainDiskGetDriver, virDomainDiskSetDriver)
(virDomainDiskGetFormat, virDomainDiskSetFormat)
(virDomainDiskAuthClear, virDomainDiskGetActualType)
(virDomainDiskDefParseXML, virDomainDiskSourceDefFormat)
(virDomainDiskDefFormat, virDomainDiskDefForeachPath)
(virDomainDiskDefGetSecurityLabelDef)
(virDomainDiskSourceIsBlockType): Adjust all users.
* src/lxc/lxc_controller.c (virLXCControllerSetupDisk):
Likewise.
* src/lxc/lxc_driver.c (lxcDomainAttachDeviceMknodHelper):
Likewise.
* src/qemu/qemu_command.c (qemuAddRBDHost, qemuParseRBDString)
(qemuParseDriveURIString, qemuParseGlusterString)
(qemuParseISCSIString, qemuParseNBDString)
(qemuDomainDiskGetSourceString, qemuBuildDriveStr)
(qemuBuildCommandLine, qemuParseCommandLineDisk)
(qemuParseCommandLine): Likewise.
* src/qemu/qemu_conf.c (qemuCheckSharedDevice)
(qemuAddISCSIPoolSourceHost, qemuTranslateDiskSourcePool):
Likewise.
* src/qemu/qemu_driver.c (qemuDomainUpdateDeviceConfig)
(qemuDomainPrepareDiskChainElement)
(qemuDomainSnapshotCreateInactiveExternal)
(qemuDomainSnapshotPrepareDiskExternalBackingInactive)
(qemuDomainSnapshotPrepareDiskInternal)
(qemuDomainSnapshotPrepare)
(qemuDomainSnapshotCreateSingleDiskActive)
(qemuDomainSnapshotUndoSingleDiskActive)
(qemuDomainBlockPivot, qemuDomainBlockJobImpl)
(qemuDomainBlockCopy, qemuDomainBlockCommit): Likewise.
* src/qemu/qemu_migration.c (qemuMigrationIsSafe): Likewise.
* src/qemu/qemu_process.c (qemuProcessGetVolumeQcowPassphrase)
(qemuProcessInitPasswords): Likewise.
* src/security/security_selinux.c
(virSecuritySELinuxSetSecurityFileLabel): Likewise.
* src/storage/storage_driver.c (virStorageFileInitFromDiskDef):
Likewise.
* tests/securityselinuxlabeltest.c (testSELinuxLoadDef):
Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
If we cannot stat/open a file on pool refresh, returning -1 aborts
the refresh and the pool is undefined.
Only treat missing files as fatal when VolOpenCheckMode is called
with the VIR_STORAGE_VOL_OPEN_ERROR flag. If this flag is missing
(when it's called from virStorageBackendProbeTarget in
virStorageBackendFileSystemRefresh), only emit a warning and return
-2 to let the caller skip over the file.
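The caller then distinguishes the two return values roughly like this
(argument list abbreviated, a sketch only):

    int fd = virStorageBackendVolOpenCheckMode(path, flags);
    if (fd == -2)
        continue;        /* unreadable/vanished file: warned, skip the volume */
    if (fd < 0)
        goto error;      /* real error: abort the pool refresh */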
https://bugzilla.redhat.com/show_bug.cgi?id=977706
Without this, using /dev/mapper as a directory pool
fails in virStorageBackendUpdateVolTargetInfoFD:
cannot seek to end of file '/dev/mapper/control': Illegal seek
Skip over character devices by default.
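A minimal sketch of the skip (assuming the stat buffer is already
available as 'sb'):

    if (S_ISCHR(sb.st_mode)) {
        VIR_INFO("Skipping char device '%s' in a directory pool", path);
        return -2;       /* non-fatal: the caller simply skips this entry */
    }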
https://bugzilla.redhat.com/show_bug.cgi?id=710866
virStorageBackendISCSISession only needs the path of the source
device and virStorageBackendISCSIRescanLUNs doesn't need the pool
at all.
This will allow the functions to be moved to src/util.
Any source file which calls the logging APIs now needs
to have a VIR_LOG_INIT("source.name") declaration at
the start of the file. This provides a static variable
of the virLogSource type.
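For example (the source name conventionally mirrors the file's
location; the file picked here is just an example):

    #include "virlog.h"

    VIR_LOG_INIT("storage.storage_backend");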
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Coverity found an issue in lxc_driver and uml_driver: we don't check
the return value of the register functions.
I've also updated all other places and unified the way we check the
return value.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
From commit id 'd53bbfd1':
Found one core dump and one possible memory leak. The core was seen
during a local virt-test/tp_libvirt run of the vol_create_from test.
The memory leak was seen by inspection during a review of all
VIR_APPEND_ELEMENT changes.
In storage_backend_disk/virStorageBackendDiskMakeDataVol(), the 'vol'
needs to be kept around since it's used later, so use the _COPY macro.
This caused a segv in libvirtd:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffe87c3700 (LWP 6919)]
virStorageBackendDiskMakeDataVol (vol=0x0, groups=0x7fffc8000d70, pool=0x7fffc8002460) at storage/storage_backend_disk.c:66
66 if (vol->target.path == NULL) {
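The distinction, in short: VIR_APPEND_ELEMENT steals its element (it
clears the caller's pointer after appending), while
VIR_APPEND_ELEMENT_COPY leaves it intact. Roughly:

    if (VIR_APPEND_ELEMENT_COPY(pool->volumes.objs,
                                pool->volumes.count, vol) < 0)
        goto error;
    /* 'vol' is still valid here; with plain VIR_APPEND_ELEMENT it
     * would have been set to NULL, leading to the SIGSEGV above */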
In storage_backend_rbd/virStorageBackendRBDRefreshPool() there's a failure
path where the 'vol' needs to go through virStorageVolDefFree() since it
wouldn't be appended.
In storageVolLookupByPath the provided path is "sanitized" at first.
This removes some extra slashes and stuff. When the lookup of the volume
fails, the original path is used, which makes it hard to trace errors in
some cases.
Improve the error message to print the sanitized path along with the
user provided path if they are not equal.
When looking up a volume by path on a non-local filesystem don't use the
"cleaned" path that might be mangled in such a way that it will differ
from a path provided by a storage backend.
Skip the cleanup step for gluster, sheepdog and RBD.
Pools that are not backed by files in the filesystem cause problems with
some APIs. Error out when attempting to upload a volume in such a pool
as currently we expect a local file representation for it.
Auditing all callers of virCommandRun and virCommandWait that
passed a non-NULL pointer for exit status turned up some
interesting observations. Many callers were merely passing
a pointer to avoid the overall command dying, but without
caring what the exit status was - but these callers would
be better off treating a child death by signal as an abnormal
exit. Other callers were actually acting on the status, but
not all of them remembered to filter by WIFEXITED and convert
with WEXITSTATUS; depending on the platform, this can result
in a status being reported as 256 times too big. And among
those that correctly parse the output, it gets rather verbose.
Finally, there were the callers that explicitly checked that
the status was 0, and gave their own message, but with fewer
details than what virCommand gives for free.
So the best idea is to move the complexity out of callers and
into virCommand - by default, we return the actual exit status
already cleaned through WEXITSTATUS and treat signals as a
failed command; but the few callers that care can ask for raw
status and act on it themselves.
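A caller sketch of the two modes (error handling abbreviated; 'cmd' is
an already-built virCommand):

    int status;

    /* default: status is already WEXITSTATUS()-cleaned and death by
     * signal makes virCommandRun() itself fail with a useful message */
    if (virCommandRun(cmd, &status) < 0)
        goto cleanup;
    if (status != 0)
        VIR_DEBUG("command exited with status %d", status);

    /* callers that really want the raw waitpid() value opt in: */
    virCommandRawStatus(cmd);
    if (virCommandRun(cmd, &status) < 0)
        goto cleanup;
    if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
        VIR_DEBUG("command exited normally");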
* src/util/vircommand.h (virCommandRawStatus): New prototype.
* src/libvirt_private.syms (util/command.h): Export it.
* docs/internals/command.html.in: Document it.
* src/util/vircommand.c (virCommandRawStatus): New function.
(virCommandWait): Adjust semantics.
* tests/commandtest.c (test1): Test it.
* daemon/remote.c (remoteDispatchAuthPolkit): Adjust callers.
* src/access/viraccessdriverpolkit.c (virAccessDriverPolkitCheck):
Likewise.
* src/fdstream.c (virFDStreamCloseInt): Likewise.
* src/lxc/lxc_process.c (virLXCProcessStart): Likewise.
* src/qemu/qemu_command.c (qemuCreateInBridgePortWithHelper):
Likewise.
* src/xen/xen_driver.c (xenUnifiedXendProbe): Simplify.
* tests/reconnect.c (mymain): Likewise.
* tests/statstest.c (mymain): Likewise.
* src/bhyve/bhyve_process.c (virBhyveProcessStart)
(virBhyveProcessStop): Don't overwrite virCommand error.
* src/libvirt.c (virConnectAuthGainPolkit): Likewise.
* src/openvz/openvz_driver.c (openvzDomainGetBarrierLimit)
(openvzDomainSetBarrierLimit): Likewise.
* src/util/virebtables.c (virEbTablesOnceInit): Likewise.
* src/util/viriptables.c (virIpTablesOnceInit): Likewise.
* src/util/virnetdevveth.c (virNetDevVethCreate): Fix debug
message.
* src/qemu/qemu_capabilities.c (virQEMUCapsInitQMP): Add comment.
* src/storage/storage_backend_iscsi.c
(virStorageBackendISCSINodeUpdate): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
These timeout values make librados/librbd return -ETIMEDOUT when an
operation blocks because of a failing/unreachable Ceph cluster.
By having the operations time out, libvirt will not block.
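The knobs are ordinary librados configuration options set before
connecting; the option names and values below are illustrative and may
differ from the actual patch or your Ceph version:

    rados_conf_set(cluster, "client_mount_timeout", "30");
    rados_conf_set(cluster, "rados_mon_op_timeout", "30");
    rados_conf_set(cluster, "rados_osd_op_timeout", "30");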
The internal pools were an idea in one of the first iterations of the
gluster series, which we decided not to use. Somehow the patch still
got pushed. Remove it as the internal flag isn't needed.
This reverts commit 362da8209d.
In a44b7b87bc I've introduced a function
that initializes a storage file wrapper object on gluster based volumes.
The initialization function leaks the private data pointer in case of
failure. This patch fixes it.
Reported by John Ferlan.
In commit e32268184b I accidentally added a typedef for
virStorageFileBackend twice when I moved it between files
across patch iterations. The double declaration breaks the build on older
compilers in RHEL5 and FreeBSD.
Remove the spurious definition.
Add APIs that will allow using the storage driver to assist in
operations on files, even for remote filesystems without a native
representation as files in the host.
virGetStorageVol can return NULL on out-of-memory. If it does, cleanly
abort the volume clone operation.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
This reverts commit 67ccf91bf2.
We only generate the volume key after we've built it, but the storage
driver expects it to be filled after createVol finishes.
Squash the volume building step back into creation to fulfill this
expectation.
This new RBD format supports snapshotting and cloning. By having
libvirt create images in format 2, end-users of the created images
can benefit from the new RBD format.
Older versions of libvirt can work with this new RBD format as long
as librbd supports format 2. RBD format 2 has been supported by
librbd since version 0.56 (Ceph Bobtail).
Signed-off-by: Wido den Hollander <wido@widodh.nl>
When restarting a sheepdog pool, all volumes are missing.
This patch automatically adds all volumes from the added pool.
It also incorporates Daniel P. Berrange's latest syntax corrections.
Volumes are added in a separate function, 'inspired' by
parallels_storage's parallelsAddDiskVolume().
The "checkPool" is a bit different for pool with "fc_host"
type source adapter, since the vHBA it's based on might be
not created yet (it's created by "startPool", which is
involked after "checkPool" in storageDriverAutostart). So it
should not fail, otherwise the "autostart" of the pool will
fail either.
The problem is easy to reproduce:
* Enable "autostart" for the pool
* Restart libvirtd service
* Check the pool's state
For a pool which relies on remote resources, such as an "iscsi" type
pool, how long it takes to export the corresponding devices to the
host's sysfs really varies: it can depend on the network connection
and it can also depend on the host's udev procedures. So it's likely
that the volumes cannot be detected during the pool starting process.
Polling the sysfs doesn't work either, since we don't know how much
time is best for the polling, and even worse, the volumes could still
be undetected, or only partly detected, even after the polling. So we
end up documenting this fact in the virsh manual.
And as a small improvement, let's explicitly say that no LUNs were
found in the debug log in that case.
The public virConnectRef and virConnectClose APIs are just thin
wrappers around virObjectRef/virObjectUnref, with added object
validation and an error reset. Within our backend drivers, use
of the object validation is just an inefficiency since we always
pass valid objects. More important to think about is what
happens with the error reset; our uses of virConnectRef happened
to be safe (since we hadn't encountered any earlier errors), but
in several cases the use of virConnectClose could lose a real
error.
Ideally, we should also avoid calling virConnectOpen() from
within backend drivers - but that is a known situation that
needs much more design work.
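Inside a backend driver the replacement is mechanical (sketch):

    virObjectRef(conn);          /* instead of virConnectRef(conn)   */
    /* ... use conn; no risk of an unwanted error reset ... */
    virObjectUnref(conn);        /* instead of virConnectClose(conn) */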
* src/qemu/qemu_process.c (qemuProcessReconnectHelper)
(qemuProcessReconnect): Avoid nested public API call.
* src/qemu/qemu_driver.c (qemuAutostartDomains)
(qemuStateInitialize, qemuStateStop): Likewise.
* src/qemu/qemu_migration.c (doPeer2PeerMigrate): Likewise.
* src/storage/storage_driver.c (storageDriverAutostart):
Likewise.
* src/uml/uml_driver.c (umlAutostartConfigs): Likewise.
* src/lxc/lxc_process.c (virLXCProcessAutostartAll): Likewise.
(virLXCProcessReboot): Likewise, and avoid leaking conn on error.
Signed-off-by: Eric Blake <eblake@redhat.com>
To allow using the storage driver APIs to do operations on generic
domain disks, we will need to introduce internal storage pools that
will give us a base to support this even on files that weren't
originally defined as part of a pool.
This patch introduces the 'internal' flag for a storage pool that will
prevent it from being listed along with the user-defined storage pools.