Commit Graph

618 Commits

Ján Tomko
9146fd7aa9 Remove overengineered loop
Do not loop over an enum with only one value.
2015-04-13 15:59:20 +02:00
Ján Tomko
fbcf7da95b Introduce struct _virStorageBackendQemuImgInfo
This will contain the data required for creating the qemu-img
command line without having access to the volume definition.
2015-04-13 15:59:20 +02:00
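
To make the intent of the commit above concrete, here is a rough, hypothetical sketch of such a structure (the field names are illustrative assumptions, not the actual libvirt definition): it simply bundles everything the qemu-img command builder needs so the builder no longer has to reach into the volume definition.

#include <stdbool.h>

/* Hypothetical sketch of the idea behind _virStorageBackendQemuImgInfo:
 * gather everything the qemu-img command builder needs so it no longer
 * has to dereference the volume definition directly. Field names are
 * illustrative only. */
struct _virStorageBackendQemuImgInfo {
    int format;                  /* target volume format (e.g. raw, qcow2) */
    const char *path;            /* path of the volume being created */
    unsigned long long size_arg; /* size to pass to qemu-img */
    bool encryption;             /* whether to add encryption options */
    bool preallocate;            /* whether allocation == capacity */
    const char *backingPath;     /* optional backing store path */
    int backingFormat;           /* format of the backing store */
    const char *compat;          /* qcow2 compat level, if any */
};
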
Ján Tomko
a832735df9 Rename virStorageBackendCreateQemuImgCmd
Add FromVol at the end. This function will create the qemu-img
command line from volume definitions and check them.
2015-04-13 15:59:20 +02:00
John Ferlan
2ac0e647bd storage: Don't duplicate efforts of backend driver
https://bugzilla.redhat.com/show_bug.cgi?id=1206521

If the backend driver updates the pool available and/or allocation values,
then the storage_driver VolCreateXML, VolCreateXMLFrom, and VolDelete APIs
should not change the value; otherwise, it will appear as if the values
were "doubled" for each change.  Additionally since unsigned arithmetic will
be used depending on the size and operation, either or both values could be
appear to be much larger than they should be (in the EiB range).

Currently only the disk pool updates the values, but other pools could.
Assume a "fresh" disk pool of 500 MiB using /dev/sde:

$ virsh pool-info disk-pool
...
Capacity:       509.88 MiB
Allocation:     0.00 B
Available:      509.84 MiB

$ virsh vol-create-as disk-pool sde1 --capacity 300M

$ virsh pool-info disk-pool
...
Capacity:       509.88 MiB
Allocation:     600.47 MiB
Available:      16.00 EiB

The following assumes the disk backend has been updated to refresh the disk pool
at deletion of a primary partition as well as an extended partition:

$ virsh vol-delete --pool disk-pool sde1
Vol sde1 deleted

$ virsh pool-info disk-pool
...
Capacity:       509.88 MiB
Allocation:     9.73 EiB
Available:      6.27 EiB

This patch will check if the backend updated the pool values and honor that
update.
2015-04-09 19:04:18 -04:00
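
A minimal, self-contained sketch of the accounting idea from the commit above (names and types are hypothetical, not the libvirt code): snapshot the pool counters before handing off to the backend and only apply the generic adjustment if the backend left them untouched, so the accounting is never applied twice.

#include <stdio.h>

/* Illustrative sketch only (hypothetical names, not the libvirt code). */
struct pool_stats {
    unsigned long long allocation;
    unsigned long long available;
};

static void
account_new_volume(struct pool_stats *pool,
                   const struct pool_stats *before_backend,
                   unsigned long long vol_allocation)
{
    /* Backend already refreshed the numbers: trust it, do nothing. */
    if (pool->allocation != before_backend->allocation ||
        pool->available != before_backend->available)
        return;

    pool->allocation += vol_allocation;
    pool->available -= vol_allocation;
}

int main(void)
{
    struct pool_stats pool = { 0, 500ULL << 20 };
    struct pool_stats before = pool;

    /* Pretend the disk backend refreshed the pool itself... */
    pool.allocation = 300ULL << 20;
    pool.available = 200ULL << 20;

    /* ...so the generic driver must not add the 300 MiB a second time. */
    account_new_volume(&pool, &before, 300ULL << 20);
    printf("allocation=%llu available=%llu\n",
           pool.allocation, pool.available);
    return 0;
}
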
John Ferlan
1ffd82bb89 storage: Need to update freeExtent at delete primary partition
Commit id '471e1c4e' only considered updating the pool if the extended
partition was removed. As it turns out, removing a primary partition
also needs to update the freeExtent list; otherwise the following
sequence would fail (assuming a "fresh" disk pool for /dev/sde of 500M):

$  virsh pool-info disk-pool
...
Capacity:       509.88 MiB
Allocation:     0.00 B
Available:      509.84 MiB

$ virsh vol-create-as disk-pool sde1 --capacity 300M
$ virsh vol-delete --pool disk-pool sde1
$ virsh vol-create-as disk-pool sde1 --capacity 300M
error: Failed to create vol sde1
error: internal error: no large enough free extent

$

This patch will refresh the pool, rereading the partitions so that the
freeExtent list is updated.
2015-04-09 19:04:18 -04:00
John Ferlan
1095230dee storage: Fix issues in storageVolResize
https://bugzilla.redhat.com/show_bug.cgi?id=1073305

When creating a volume in a pool, the creation allows the 'capacity'
value to be larger than the available space in the pool. As long as
the 'allocation' value will fit in the space, the volume will be created.

However, the resize checks compared the new absolute capacity value
against the existing capacity plus the available space, without regard
for whether the new absolute capacity was actually allocating space or
not. For example, in a pool with 75G of available space, creating a
volume with a capacity of 100G and an allocation of 10G will succeed;
however, if the volume had instead been created with a capacity of 10G
and then resized to a capacity of 100G, the code would fail to allow
the backend to try the resize.

Furthermore, when updating the pool "available" and "allocation" values,
the resize code would just "blindly" adjust them regardless of whether
space was "allocated" or just "capacity" was being adjusted.  This left
a scenario whereby a resize to 100G would fail; however, a resize to 50G
followed by one to 100G would both succeed.  Again, neither was adjusting
the allocation value, just the "capacity" value.

This patch adds more logic to the resize code to understand whether the
new capacity value is actually "allocating" space as well as whether it
is shrinking or expanding. Since unsigned arithmetic is involved, it would
otherwise be quite easy to adjust the pool size values incorrectly.

This patch also ensures that updates to the pool values only occur if we
actually performed the allocation.

NB: storageVolDelete, storageVolCreateXML, and storageVolCreateXMLFrom
each update the pool allocation/availability values only by the target
volume's allocation value.
2015-04-09 19:04:18 -04:00
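
A self-contained sketch of the resize accounting described above (hypothetical names, not the libvirt implementation): compute how much additional space the new capacity actually allocates, guard the unsigned subtraction, and only adjust the pool counters when space really was allocated.

#include <stdbool.h>

/* Hypothetical sketch of the resize accounting, not the libvirt code.
 * All sizes in bytes. */
struct vol_stats { unsigned long long capacity, allocation; };
struct pool_space { unsigned long long available, allocation; };

/* Returns -1 if the resize would not fit, 0 on success. */
int
resize_volume(struct pool_space *pool, struct vol_stats *vol,
              unsigned long long new_capacity, bool allocate)
{
    /* Only the bytes beyond the current allocation need new space, and
     * only when the caller asked for an allocating resize. Guarding the
     * unsigned subtraction keeps a shrink from wrapping around. */
    unsigned long long delta = 0;

    if (allocate && new_capacity > vol->allocation)
        delta = new_capacity - vol->allocation;

    if (delta > pool->available)
        return -1;  /* would not fit */

    vol->capacity = new_capacity;
    if (delta > 0) {
        vol->allocation += delta;
        pool->allocation += delta;
        pool->available -= delta;
    }
    return 0;
}
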
Erik Skultety
2a31c5f030 storage: Introduce storagePoolUpdateAllState function
The 'checkPool' callback was originally part of the storageDriverAutostart function,
but the pools need to be checked earlier, during the initialization phase,
otherwise we can't start a domain which mounts a volume after the
libvirtd daemon has restarted. This is because qemuProcessReconnect is called
earlier than storageDriverAutostart. Therefore the 'checkPool' logic has been
moved to storagePoolUpdateAllState, which is called inside storageDriverInitialize.

We also need a valid 'conn' reference to be able to execute 'refreshPool'
during the initialization phase. Though it isn't available until storageDriverAutostart,
all of our storage backends ignore the 'conn' pointer except for RBD,
and RBD doesn't support the 'checkPool' callback, so it's safe to pass
conn = NULL in this case.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1177733
2015-04-07 16:22:40 +02:00
Erik Skultety
a9700771f5 conf: Introduce virStoragePoolLoadAllState && virStoragePoolLoadState
These functions operate exactly the same as their network equivalents
virNetworkLoadAllState, virNetworkLoadState.
2015-04-07 16:22:40 +02:00
Erik Skultety
723143a19c storage: Add support for storage pool state XML
This patch introduces a new virStorageDriverState element, stateDir.
It also adds the necessary changes to storageStateInitialize, so that
directory initialization becomes more generic.
2015-04-07 16:22:40 +02:00
John Ferlan
f51fbdd19d scsi: Remove unused 'type_path' in processLU
Seems to be a remnant that was never cleaned up from the original submission...

Signed-off-by: John Ferlan <jferlan@redhat.com>
2015-04-02 08:46:30 -04:00
John Ferlan
f9efcd9218 iscsi: Fix exit path for virStorageBackendISCSIFindLUs failure
If the call to virStorageBackendISCSIGetHostNumber failed, we set
retval = -1, but still called virStorageBackendSCSIFindLUs. We need to
add a goto cleanup - while at it, adjust the logic to initialize retval
to -1 and only change it to 0 (zero) on success.

Signed-off-by: John Ferlan <jferlan@redhat.com>
2015-04-02 08:46:26 -04:00
John Ferlan
d9ece06526 iscsi: Use error message from virStorageBackendSCSIFindLUs
Don't supersede the error message from virStorageBackendSCSIFindLUs, as a
message such as "error: Failed to find LUs on host 60: ..." is not overly
clear as to what the real problem might be.

Signed-off-by: John Ferlan <jferlan@redhat.com>
2015-04-02 08:46:23 -04:00
Erik Skultety
cf7392a0d2 storage: Remove unused attribute conn from 'checkPool' callback
In order to be able to use 'checkPool' inside functions which do not
have any connection reference, the 'conn' attribute needs to be dropped
from checkPool's signature, since it's not used by any storage backend
anyway.
2015-04-02 11:57:07 +02:00
Ján Tomko
22fd3ac38f Introduce virBitmapIsBitSet
A helper that never returns an error and treats bits out of bitmap range
as false.

Use it everywhere we use ignore_value on virBitmapGetBit, or loop over
the bitmap size.
2015-03-13 15:31:33 +01:00
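
A minimal sketch of such a helper over a plain word-array bitmap (illustrative only, not libvirt's virBitmap implementation): bits outside the map simply read as false, so callers never have to handle an error.

#include <stdbool.h>
#include <stddef.h>
#include <limits.h>

/* "Never fails" bit test over a plain unsigned long array: bits outside
 * the map are reported as clear rather than as an error. */
bool
bitmap_is_bit_set(const unsigned long *map, size_t nbits, size_t bit)
{
    const size_t bits_per_word = sizeof(unsigned long) * CHAR_BIT;

    if (bit >= nbits)
        return false;  /* out of range: treated as unset, not an error */

    return (map[bit / bits_per_word] >> (bit % bits_per_word)) & 1;
}
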
John Ferlan
30f69ae86b iscsi: Adjust error message for findStorageSources backend
The virStorageBackendISCSIFindPoolSources API only needs the 'host' name
in order to discover iSCSI pools; it returns the various device paths.
On input, it's also possible to further restrict a search by providing the
port attribute for the host element and the (undocumented) initiator element.

For example:

$  virsh find-storage-pool-sources-as iscsi
error: Failed to find any iscsi pool sources
error: invalid argument: hostname and device path must be specified for iscsi sources

$ virsh find-storage-pool-sources-as iscsi 192.168.122.1
<sources>
  <source>
    <host name='192.168.122.1' port='3260'/>
    <device path='iqn.2013-12.com.example:iscsi-chap-lclpool'/>
  </source>
</sources>
2015-03-02 22:57:27 -05:00
John Ferlan
832a9256b2 disk: Provide a default storage source format type.
https://bugzilla.redhat.com/show_bug.cgi?id=1181062

According to the formatstorage.html description for the <source> element
and its "format" attribute: "All drivers are required to have a default
value for this, so it is optional."

As it turns out, the disk backend did not choose a default value, so I
added a default of "msdos" if the source type is "unknown", as well as
updating the storage.html backend disk volume driver documentation to
indicate that the default format is dos.
2015-03-02 22:42:25 -05:00
Peter Krempa
1fcb9351d7 storage: sheepdog: Avoid skipping variable initialization
Commit 155ca616eb added an error message
that skips initialization of the 'cmd' variable. Fortunately it was not
released.
2015-03-02 10:09:49 +01:00
Ján Tomko
155ca616eb Allow creating volumes with a backing store but no capacity
The tool creating the image can get the capacity from the backing
storage. Just refresh the volume afterwards.

https://bugzilla.redhat.com/show_bug.cgi?id=958510
2015-03-02 08:07:11 +01:00
Ján Tomko
d3452a3f73 Revert "Restore skipping of setting capacity"
This reverts commit f1856eb622.

Now that we can update capacity from image metadata,
we don't need to skip the update.
2015-03-02 08:07:11 +01:00
Ján Tomko
a760ba3a7f Probe for capacity in virStorageBackendUpdateVolTargetInfo
Instead of just looking at the output of fstat, call
virStorageFileGetMetadata to get the full capacity from
image headers.

Note that the capacity is probed unconditionally. The updateCapacity
bool parameter is ignored and will be removed in the following commit.
2015-03-02 08:07:11 +01:00
Ján Tomko
e3f1d2a820 Allow cloning volumes with no capacity specified
In virStorageVolCreateXML, add VIR_VOL_XML_PARSE_NO_CAPACITY
to the call parsing the XML of the new volume to make the capacity
optional.

If the capacity is omitted, use the capacity of the old volume.
We already do that for values that are less than the original
volume capacity.
2015-03-02 08:07:11 +01:00
Ján Tomko
cbd788eba6 Add flags argument to virStorageVolDefParse*
Allow the callers to pass down libvirt-internal flags.
2015-03-02 08:07:11 +01:00
Jiri Denemark
bc6e206322 Search for schemas and cpu_map.xml in source tree
Not all files we want to find using virFileFindResource{,Full} are
generated when libvirt is built; some of them (such as RNG schemas) are
distributed with the sources. The current API was not able to find source
files if libvirt was built in VPATH.

Both RNG schemas and cpu_map.xml are distributed in the source tarball.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-02-19 15:25:04 +01:00
Daniel P. Berrange
aa9aa6a975 Allow shrinking of file based volumes
While the main storage driver code allows the flag
VIR_STORAGE_VOL_RESIZE_SHRINK to be set, none of the backend
drivers support it. At the very least this can work
for plain file-based volumes since we just ftruncate() them
to the new size. It does not work with qcow2 volumes, but we
can arguably delegate to qemu-img for error reporting for that
instead of second guessing this for ourselves:

$ virsh vol-resize --shrink /home/berrange/VirtualMachines/demo.qcow2 2G
error: Failed to change size of volume 'demo.qcow2' to 2G

error: internal error: Child process (/usr/bin/qemu-img resize /home/berrange/VirtualMachines/demo.qcow2 2147483648) unexpected exit status 1: qemu-img: qcow2 doesn't support shrinking images yet
qemu-img: This image does not support resize

See also https://bugzilla.redhat.com/show_bug.cgi?id=1021802
2015-02-12 11:11:52 +00:00
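
For raw files the shrink really is just a truncation; a minimal illustrative sketch of that path (not the libvirt backend code):

#include <sys/types.h>
#include <unistd.h>

/* Illustrative only: shrinking a plain-file volume is a truncate to the
 * requested size; format-aware shrinking (e.g. qcow2) is left to
 * qemu-img, which reports its own error as shown above. */
int
shrink_raw_volume(int fd, off_t new_size)
{
    return ftruncate(fd, new_size);  /* 0 on success, -1 + errno on error */
}
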
John Ferlan
1d2e4d8ca2 storage: Need to clear pool prior to refreshPool during Autostart
https://bugzilla.redhat.com/show_bug.cgi?id=1176510

When storageDriverAutostart is called via the virStateReload path (a 'service
libvirtd reload'), then because the volume list in the pool wasn't cleared
prior to the call, each volume would be listed multiple times (as many
times as we reload). I believe the issue was introduced by commit
id '9e093f0b', at least for the libvirtd reload path, although I suppose
the introduction of virStateReload (commit id '70da0494') could be a
different cause.

Thus, like other places prior to calling refreshPool, we need to call
virStoragePoolObjClearVols.
2015-01-31 07:56:15 -05:00
John Ferlan
9bbbb91216 storage: Check the partition name against provided name
https://bugzilla.redhat.com/show_bug.cgi?id=1138516

If the provided volume name doesn't match what parted generated as the
partition name, then return a failure.

Update virsh.pod and formatstorage.html.in to describe the 'name' restriction
for disk pools as well as the usage of the <target>'s <format type='value'>.
2015-01-28 17:28:03 -05:00
John Ferlan
471e1c4e2a storage: When delete extended partition, need to refresh pool
When removing a volume that is the extended partition, all the logical
volume partitions that exist within the extended partition will also be
removed, so we need to refresh the pool to have the updated list
2015-01-28 17:28:03 -05:00
John Ferlan
bce671b731 storage: Adjust how to refresh extended partition disk data
During virStorageBackendDiskMakeDataVol processing, if we find an extended
partition, then handle it specially when updating the capacity/allocation
rather than calling virStorageBackendUpdateVolInfo.

As it turns out, once a logical partition exists, any attempt to refresh
the pool, or a libvirtd restart/reload, will result in a failure to open
the extended partition device, resulting in the inability to start the pool.
The downside to this is that we will lose the <permissions> and <timestamps> for
the extended partition upon subsequent restart, refresh, or reload, since the
stat() in virStorageBackendUpdateVolTargetInfoFD will not be called. However,
since it's really only a container and shouldn't directly be used for
storage, that seems reasonable.

Therefore, only use the existing code that already had a comment about
getting the allocation wrong for extended partitions for just the setting
of the extended partition data.
2015-01-28 17:28:03 -05:00
John Ferlan
a0d88ed4e7 storage: Fix check for partition type for disk backing volumes
While checking the existing partitions in virStorageBackendDiskPartFormat,
the code would erroneously compare the volume target format type (eg, the
virStoragePartedFsType) rather than the source partition type (eg, the
virStorageVolTypeDisk) which is set during virStorageBackendDiskReadPartitions.
2015-01-28 17:28:03 -05:00
John Ferlan
290ffcfbbc storage: Attempt error recovery in virStorageBackendDiskCreateVol
During virStorageBackendDiskCreateVol, if virStorageBackendDiskReadPartitions
fails, then we were leaving with an error, a partition on the disk for
which there was no corresponding volume, and used space on the disk which
could only be reclaimed through direct parted activity. On a subsequent restart,
reload, or refresh the volume may magically appear too.
2015-01-28 17:28:03 -05:00
John Ferlan
1e79ad6d35 storage: Move virStorageBackendDiskDeleteVol
Move the API to before virStorageBackendDiskCreateVol in order to be
able to call the DeleteVol API when virStorageBackendDiskReadPartitions
fails so that we don't by chance leave a partition on the disk.
2015-01-28 17:28:03 -05:00
Chen Hanxiao
95da191376 storage: add a flag to clone files on btrfs
When creating a RAW file, we don't take advantage
of btrfs cloning.

Add a VIR_STORAGE_VOL_CREATE_REFLINK flag to request
a reflink copy.

Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2015-01-27 13:41:14 +01:00
Chen Hanxiao
466b29c8c3 storage: introduce btrfsCloneFile() for COW copy
Add a wrapper for BTRFS_IOC_CLONE ioctl.

Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2015-01-27 13:24:10 +01:00
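
The ioctl at the heart of this is small; a hedged sketch of such a wrapper, assuming linux/btrfs.h provides BTRFS_IOC_CLONE and both descriptors refer to files on the same btrfs filesystem:

#include <sys/ioctl.h>
#include <linux/btrfs.h>   /* BTRFS_IOC_CLONE */

/* Ask btrfs to make 'dest_fd' a reflink (shared-extent copy) of 'src_fd'.
 * Both files must live on the same btrfs filesystem; returns 0 on success,
 * -1 with errno on failure (e.g. on non-btrfs filesystems, where callers
 * would fall back to a regular copy). Illustrative sketch only. */
int
btrfs_clone_file(int dest_fd, int src_fd)
{
    return ioctl(dest_fd, BTRFS_IOC_CLONE, src_fd);
}
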
Daniel P. Berrange
55ea7be7d9 Removing probing of secondary drivers
For stateless, client side drivers, it is never correct to
probe for secondary drivers. It is only ever appropriate to
use the secondary driver that is associated with the
hypervisor in question. As a result the ESX & HyperV drivers
have both been forced to do hacks where they register no-op
drivers for the ones they don't implement.

For stateful, server side drivers, we always just want to
use the same built-in shared driver. The exception is
virtualbox which is really a stateless driver and so wants
to use its own server side secondary drivers. To deal with
this virtualbox has to be built as 3 separate loadable
modules to allow registration to work in the right order.

This can all be simplified by introducing a new struct
recording the precise set of secondary drivers each
hypervisor driver wants:

struct _virConnectDriver {
    virHypervisorDriverPtr hypervisorDriver;
    virInterfaceDriverPtr interfaceDriver;
    virNetworkDriverPtr networkDriver;
    virNodeDeviceDriverPtr nodeDeviceDriver;
    virNWFilterDriverPtr nwfilterDriver;
    virSecretDriverPtr secretDriver;
    virStorageDriverPtr storageDriver;
};

Instead of registering the hypervisor driver, we now
just register a virConnectDriver. This allows
us to remove all probing of secondary drivers. Once we
have chosen the primary driver, we immediately know the
correct secondary drivers to use.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
2015-01-27 12:02:04 +00:00
Ján Tomko
1390c26847 safezero: fall back to writing zeroes even when resizing
Remove the resize flag and use the same code path for all callers.
This flag was added by commit 18f0316 to allow virStorageFileResize
to use 'safezero' while preserving the behavior.

Explicitly return -2 when a fallback to a different method should
be done, to make the code path more obvious.

Fail immediately when ftruncate fails in the mmap method,
as we did before commit 18f0316.
2015-01-09 13:48:23 +01:00
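
A simplified, self-contained sketch of the "-2 means fall back" convention described above (hypothetical helper names, not the actual safezero implementation): try posix_fallocate() when it was built in, and otherwise report -2 so the caller drops to writing zero blocks.

#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Try the fallocate-based method; -2 means "not available, try another
 * method", matching the convention described in the commit above. */
static int
zero_range_fallocate(int fd, off_t offset, off_t len)
{
#ifdef HAVE_POSIX_FALLOCATE
    int rc = posix_fallocate(fd, offset, len);
    if (rc == 0)
        return 0;
    errno = rc;
    return -1;
#else
    (void)fd; (void)offset; (void)len;
    return -2;               /* not built in: fall back to another method */
#endif
}

/* Fallback: write zero-filled blocks by hand. */
static int
zero_range_write(int fd, off_t offset, off_t len)
{
    char buf[4096];
    memset(buf, 0, sizeof(buf));

    if (lseek(fd, offset, SEEK_SET) < 0)
        return -1;
    while (len > 0) {
        ssize_t chunk = len > (off_t)sizeof(buf) ? (off_t)sizeof(buf) : len;
        ssize_t done = write(fd, buf, chunk);
        if (done < 0)
            return -1;
        len -= done;
    }
    return 0;
}

int
zero_range(int fd, off_t offset, off_t len)
{
    int ret = zero_range_fallocate(fd, offset, len);
    if (ret == -2)                     /* method unavailable: fall back */
        ret = zero_range_write(fd, offset, len);
    return ret;
}
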
John Ferlan
cafb934db8 logical: Add "--type snapshot" to lvcreate command
A recent lvm change has resulted in a change to the "default" type of
logical volume created when "--virtualsize" or "-V" is supplied on
the command line (e.g. when the allocation and capacity values of a
to-be-created volume differ). It seems that at the very least the following
change adjusts the default type:

https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=e0164f21

and the following may also have some impact.

https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=87fc3b71

When using the virsh vol-create-as or vol-create xmlfile commands, the
result is that libvirt will now create a "thin logical volume" and a
"thin logical volume pool" rather than just a "thin snapshot logical
volume". For example the following sequence:

  # lvcreate --name test -L 2M -V 5M lvm_test
    Rounding up size to full physical extent 4.00 MiB
    Rounding up size to full physical extent 8.00 MiB
    Logical volume "test" created.
  # lvs lvm_test
    LV    VG       Attr       LSize Pool  Origin Data%  Meta%  Move Log Cpy%Sync Convert
    lvol1 lvm_test twi-a-tz-- 4.00m              0.00   0.98
    test  lvm_test Vwi-a-tz-- 8.00m lvol1        0.00

compared to the former code which had the following:

    LV   VG       Attr       LSize  Pool Origin         Data%  Move Log Cpy%Sync Convert
    test LVM_Test swi-a-s---  4.00m      [test_vorigin]   0.00

Since libvirt doesn't know how to parse the thin logical volume
and pool, it will fail to find the newly created volume and pool
even though they exist in the volume group.

It cannot find it since the command used to find/parse returns a thin volume
'test' with no associated device; for example, the output is:

  lvol1##UgUwkp-fTFP-C0rc-ufue-xrYh-dkPr-FGPFPx#lvol1_tdata(0)#thin-pool#1#4194304#4194304#4194304#twi-a-tz--
  test##NcaIoH-4YWJ-QKu3-sJc3-EOcS-goff-cThLIL##thin#0#8388608#4194304#8388608#Vwi-a-tz--

as compared to the former which had the following:

      test#[test_vorigin]#Dt5Of3-4WE6-buvw-CWJ4-XOiz-ywOU-YULYw6#/dev/sda3(1300)#linear#1#4194304#4194304#4194304#swi-a-s---

While it's possible to generate code to handle the new thin lv and pool, this
patch will add "--type snapshot" onto the lvcreate command libvirt uses
in order to, for now, be able to continue to utilize thin snapshots.
2014-12-17 06:14:21 -05:00
John Ferlan
18f03166fd virstoragefile: Have virStorageFileResize use safezero
Currently the virStorageFileResize() function uses build conditionals to
choose either posix_fallocate() or syscall(SYS_fallocate), with no
fallback, in order to preallocate the space in the newly resized file.

Since the safezero code has a similar set of conditionals modify the
resize and safezero code in order to allow the resize logic to make use
of safezero to unify the look/feel of the code paths.

Add a new boolean (resize) to safezero() to make the optional decision
whether to try syscall(SYS_fallocate) if the posix_fallocate fails because
HAVE_POSIX_FALLOCATE is not defined (eg, return -1 and errno == 0).

Create a local safezero_sys_fallocate in order to handle the resize
code paths that support that. If it is not present, then set errno = ENOSYS
in order to allow the caller to handle the failure scenarios.

Signed-off-by: John Ferlan <jferlan@redhat.com>
2014-12-16 13:11:35 -05:00
Hao Liu
9788007892 storage: Check stderr when matching parted output
In old versions of parted, like parted-2.1-25, the error message is shown on
stdout when printing info for a disk without a disk label.

    Error: /dev/sda: unrecognised disk label

This line has been moved to stderr in newer versions of parted, so we
should check both stdout and stderr when locating this message.

This should fix bug:
    https://bugzilla.redhat.com/show_bug.cgi?id=1172468

Signed-off-by: Hao Liu <hliu@redhat.com>
2014-12-10 10:55:23 +01:00
Peter Krempa
8ef4f598f1 storage: Fix printing/casting of uid_t/gid_t
Other parts of libvirt use "%u" for formatting uid/gid and typecast to
unsigned int. The storage driver used the signed variant.
2014-12-08 11:36:29 +01:00
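
The portable pattern is a one-liner; a small illustrative example (not the patched libvirt code):

#include <stdio.h>
#include <sys/types.h>

/* uid_t/gid_t have no dedicated printf conversion, so cast to unsigned
 * int and format with "%u". Illustrative only. */
void
report_owner(uid_t uid, gid_t gid)
{
    printf("owner %u:%u\n", (unsigned int)uid, (unsigned int)gid);
}
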
Peter Krempa
3b31cbc558 storage: backend: Log uid/gid when initializing storage file backend
To ease debugging permission problems add uid/gid values to the debug
message when initializing a storage file backend.
2014-12-05 10:07:17 +01:00
Luyao Huang
87b9437f89 storage: fix crash caused by no check return before set close
https://bugzilla.redhat.com/show_bug.cgi?id=1087104#c5

When trying to use an invalid offset with virStorageVolUpload(), libvirt
fails in virFDStreamOpenFileInternal(); however, libvirt does
not check the return value in storageVolUpload() and calls
virFDStreamSetInternalCloseCb() right after. But the stream doesn't have
privateData yet (it is NULL), and the daemon then crashes.

0  0x00007f09429a9c10 in pthread_mutex_lock () from /lib64/libpthread.so.0
1  0x00007f094514dbf5 in virMutexLock (m=<optimized out>) at util/virthread.c:88
2  0x00007f09451cb211 in virFDStreamSetInternalCloseCb at fdstream.c:795
3  0x00007f092ff2c9eb in storageVolUpload at storage/storage_driver.c:2098
4  0x00007f09451f46e0 in virStorageVolUpload at libvirt.c:14000
5  0x00007f0945c78fa1 in remoteDispatchStorageVolUpload at remote_dispatch.h:14339
6  remoteDispatchStorageVolUploadHelper at remote_dispatch.h:14309
7  0x00007f094524a192 in virNetServerProgramDispatchCall at rpc/virnetserverprogram.c:437

Signed-off-by: Luyao Huang <lhuang@redhat.com>
2014-12-03 17:36:07 +01:00
Michal Privoznik
5ab746b83a storage: Introduce storagePoolLookupByTargetPath
While this could be exposed as a public API, it's not done yet as
there's no demand for it. Anyway, this is just preparing
the environment for easier volume creation on the destination.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2014-12-02 17:51:57 +01:00
John Ferlan
a0b13d35e7 Replace virSecretFree with virObjectUnref
Since virSecretFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
2014-12-02 11:03:41 -05:00
John Ferlan
adbbff5fb7 Replace virStoragePoolFree with virObjectUnref
Since virStoragePoolFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
2014-12-02 11:03:40 -05:00
John Ferlan
d1219054e3 Replace virStorageVolFree with virObjectUnref
Since virStorageVolFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
2014-12-02 11:03:40 -05:00
John Ferlan
b09ff13848 storage: Add mixed fc_host/scsi_host duplicate adapter source checks
https://bugzilla.redhat.com/show_bug.cgi?id=1159180

The virStoragePoolSourceFindDuplicate only checks the incoming definition
against the same type of pool as the def; however, for "scsi_host" and
"fc_host" adapter pools, it's possible that either some pool "scsi_host"
adapter definition is already using the scsi_hostN that the "fc_host"
adapter definition wants to use or some "fc_host" pool adapter definition
is using a vHBA scsi_hostN or parent scsi_hostN that an incoming "scsi_host"
definition is trying to use.

This patch adds the mismatched type checks and adds additional comments
to describe what each check is determining.

This patch also modifies the documentation to describe which scsi_hostN
devices a "scsi_host" source adapter should use and which to avoid. It also
updates the parent definition to specifically call out that for mixed
environments it's better to define which parent to use so that the duplicate
pool checks can be done properly.
2014-12-01 10:04:25 -05:00
John Ferlan
7b4cdb6eaa storage: Move and rename getVhbaSCSIHostParent
https://bugzilla.redhat.com/show_bug.cgi?id=1159180

Move the API from the backend to storage_conf and rename it to
virStoragePoolGetVhbaSCSIHostParent. A future patch will need to
use this functionality from storage_conf.
2014-12-01 10:04:19 -05:00
Chen Hanxiao
3a04f73997 storage_driver: fix a comment typo
s/rereshed/refreshed

Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
2014-11-24 11:57:52 -07:00
John Ferlan
512b8747d8 storage: Add thread to refresh for createVport
https://bugzilla.redhat.com/show_bug.cgi?id=1152382

When libvirt creates the vport (VPORT_CREATE) for the vHBA, there isn't
enough "time" between the creation and the running of the following
backend->refreshPool after a backend->startPool in order to find the LUs.
Population of LU's happens asynchronously when udevEventHandleCallback
discovers the "new" vHBA port.  Creation of the infrastructure by udev
is an iterative process creating and discovering actual storage devices and
adjusting the environment.

Because of the time it takes to discover and set things up, the backend
refreshPool call done after the startPool call will generally fail to
find any devices. This leaves the newly started pool appearing empty when
queried via 'vol-list' after startup. The "workaround" has always been
to run pool-refresh after startup (or any time thereafter) in order to
find the LUs. Depending on how quickly it is run after startup, this too may
not find any LUs in the pool. Eventually, though, given enough time and
retries, it will find something if LUs exist for the vHBA.

This patch adds a thread to be executed after the VPORT_CREATE which will
attempt to find the LU's without requiring the external run of refresh-pool.
It does this by waiting for 5 seconds and searching for the LU's. If any
are found, then the thread completes; otherwise, it will retry once more
in another 5 seconds.  If none are found in that second pass, the thread
gives up.

Things learned while investigating this... No need to try and fill the
pool too quickly or too many times. Over the course of creation, the udev
code may 'add', 'change', and 'delete' the same device. So if the refresh
code runs and finds something, it may display it only to have a subsequent
refresh appear to "lose" the device. The udev processing doesn't seem to
have a way to indicate that it's all done with the creation processing of a
newly found vHBA. Only the Lone Ranger has silver bullets to fix everything.
2014-11-20 10:24:11 -05:00
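
A self-contained sketch of the retry shape described above, using plain pthreads with hypothetical helper names (the real patch operates on libvirt's pool objects): wait five seconds for udev to settle, look for LUs, and retry exactly once before giving up.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical stand-in for scanning the new vHBA for LUs; returns true
 * once at least one LU has shown up. */
static bool
find_lus(const char *scsi_host)
{
    printf("scanning %s for LUs\n", scsi_host);
    return false;  /* pretend nothing has appeared yet */
}

/* Deferred-refresh thread: wait 5 seconds for udev to populate the vHBA,
 * look for LUs, and retry exactly once before giving up. Illustrative
 * only, not the libvirt implementation. */
static void *
refresh_vport_thread(void *opaque)
{
    const char *scsi_host = opaque;

    for (int attempt = 0; attempt < 2; attempt++) {
        sleep(5);
        if (find_lus(scsi_host))
            break;
    }
    return NULL;
}

int main(void)
{
    pthread_t thr;

    if (pthread_create(&thr, NULL, refresh_vport_thread, "host17") != 0)
        return 1;
    pthread_join(thr, NULL);
    return 0;
}
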
John Ferlan
2087041732 storage: Fix issue finding LU's when block doesn't exist
Fix a problem in getBlockDevice and processLU where retval was initialized
to zero, causing some failures to erroneously continue through to
virStorageBackendSCSINewLun with an attempt to find a path for "/dev/(null)".
This would fail approximately 10 seconds later with debug message:
This would fail approximately 10 seconds later with debug message:

virStorageBackendSCSINewLun:203 :
     No stable path found for '/dev/(null)' in '/dev/disk/by-path'

The root cause of the issue is that for many /sys/bus/scsi/devices/<lun path>
entries there is no "block*" device found for the vHBA, where "<lun path>" is
one of the various paths created for the vHBA, such as "17:0:0:0", "17:0:1:0",
"17:0:2:0", "17:0:3:0", etc. If the block device isn't found, then the
directory should just be ignored rather than attempting to process it.

The bug was that in getBlockDevice the assumption was "block" would exist
and either getNewStyleBlockDevice or getOldStyleBlockDevice would fill in
@block_device. However, if 'block*' doesn't exist, then the code returned
NULL for block_device *and* a good (zero) retval value.  This caused the
processLU code to attempt the virStorageBackendSCSINewLun which failed
"at some point in time" in the future.

After this change, on the test system the refresh-pool no longer had a
noticeable pause of about 20 seconds - it completed within a second since
there was no longer an attempt/need to find "/dev/(null)".

Additionally, virStorageBackendSCSIFindLUs shouldn't declare a LU
found unless processLU actually returns success. This will be
important in the followup patch, which relies on whether a LU was found.
2014-11-20 10:24:11 -05:00
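
An illustrative sketch of the "skip LUN directories without a block entry" check described above (hypothetical helper, not the actual getBlockDevice code):

#include <dirent.h>
#include <stdbool.h>
#include <string.h>

/* Decide whether a LUN directory such as /sys/bus/scsi/devices/17:0:0:0
 * has any "block*" entry at all; if not, the caller should skip it
 * instead of trying to build a /dev path from a missing name. */
bool
lun_has_block_device(const char *lun_dir)
{
    DIR *dir = opendir(lun_dir);
    struct dirent *ent;
    bool found = false;

    if (!dir)
        return false;

    while ((ent = readdir(dir)) != NULL) {
        if (strncmp(ent->d_name, "block", 5) == 0) {
            found = true;
            break;
        }
    }
    closedir(dir);
    return found;
}
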