Commit Graph

328 Commits

Author SHA1 Message Date
John Ferlan
2dd70901db storage: Resolve storage driver crash
Resolve a storage driver crash that occurred when a long-running
storageVolCreateXML overlapped with the virStorageVolPoolRefreshThread:
after a storageVolUpload completed, the refresh thread ran
virStoragePoolObjClearVols without checking whether the creation
code was still processing a buildVol after incrementing
the driver->asyncjob count.

The refreshThread will now check the pool asyncjob count before
attempting to pursue the pool refresh. Adjust the documentation
to describe the condition.
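
As an aside for readers, a minimal, self-contained model of that guard
(libvirt's real types and locking are replaced by hypothetical stand-ins;
only the accessor role echoes the asyncjob helpers introduced below):

    /* Stand-in types; the point is the ordering: skip the refresh (and the
     * volume-list clear) while a volume build still holds an async job. */
    #include <stddef.h>

    typedef struct {
        int asyncjobs;   /* incremented around buildVol, decremented after */
        size_t nvols;    /* volume list that ClearVols would empty */
    } PoolObj;

    static void
    poolRefreshWorker(PoolObj *pool)
    {
        if (pool->asyncjobs > 0)
            return;        /* a storageVolCreateXML build is still in flight */

        pool->nvols = 0;   /* stands in for virStoragePoolObjClearVols() */
        /* ... re-enumerate the volumes here ... */
    }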

Crash from valgrind is as follows (with a bit of editing):

==21309== Invalid read of size 8
==21309==    at 0x153E47AF: storageBackendUpdateVolTargetInfo
==21309==    by 0x153E4C30: virStorageBackendUpdateVolInfo
==21309==    by 0x153E52DE: virStorageBackendVolRefreshLocal
==21309==    by 0x153DE29E: storageVolCreateXML
==21309==    by 0x562035B: virStorageVolCreateXML
==21309==    by 0x147366: remoteDispatchStorageVolCreateXML
...
==21309==  Address 0x2590a720 is 64 bytes inside a block of size 336 free'd
==21309==    at 0x4C2F2BB: free
==21309==    by 0x54CB9FA: virFree
==21309==    by 0x55BC800: virStorageVolDefFree
==21309==    by 0x55BF1D8: virStoragePoolObjClearVols
==21309==    by 0x153D967E: virStorageVolPoolRefreshThread
...
==21309==  Block was alloc'd at
==21309==    at 0x4C300A5: calloc
==21309==    by 0x54CB483: virAlloc
==21309==    by 0x55BDC1F: virStorageVolDefParseXML
==21309==    by 0x55BDC1F: virStorageVolDefParseNode
==21309==    by 0x55BE5A4: virStorageVolDefParse
==21309==    by 0x153DDFF1: storageVolCreateXML
==21309==    by 0x562035B: virStorageVolCreateXML
==21309==    by 0x147366: remoteDispatchStorageVolCreateXML
...
2017-11-16 11:34:26 -05:00
John Ferlan
ba6575e2f6 storage: Use virStoragePoolObjGetDef accessor for new driver events
A couple of obj->def-> references were missed in the merge of commit
id 'b0652192' into commit id 'bfcd8fc92'.
2017-11-07 14:12:52 -05:00
John Ferlan
bfcd8fc924 storage: Use virStoragePoolObjGetDef accessor for driver
In preparation for privatizing the object, use the accessor to fetch
the obj->def instead of the direct reference.
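
For illustration only, a tiny stand-alone sketch of the accessor pattern this
series applies (stand-in names; only virStoragePoolObjGetDef is the accessor
referenced above):

    typedef struct PoolDef { char *name; } PoolDef;
    typedef struct PoolObj { PoolDef *def; } PoolObj;   /* to become opaque */

    /* mirrors the role of virStoragePoolObjGetDef() */
    static PoolDef *
    poolObjGetDef(PoolObj *obj)
    {
        return obj->def;
    }

    static const char *
    poolGetName(PoolObj *obj)
    {
        PoolDef *def = poolObjGetDef(obj);   /* instead of obj->def->name */
        return def->name;
    }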

Signed-off-by: John Ferlan <jferlan@redhat.com>
2017-10-06 06:53:05 -04:00
John Ferlan
2dd024754e util: Move virSecretUsageType to virsecret.h
Move the virSecretUsageType enum into the util module.
2017-09-21 15:46:48 -04:00
Julio Faracco
b06521928c storage: Add new events for *PoolBuild() and *PoolDelete().
This commit adds new events for two operations: *PoolBuild() and
*PoolDelete(). Using the event-test program and the set of commands below,
we get the following output:

$ sudo ./event-test
Registering event callbacks
myStoragePoolEventCallback EVENT: Storage pool test Defined 0
myStoragePoolEventCallback EVENT: Storage pool test Created 0
myStoragePoolEventCallback EVENT: Storage pool test Started 0
myStoragePoolEventCallback EVENT: Storage pool test Stopped 0
myStoragePoolEventCallback EVENT: Storage pool test Deleted 0
myStoragePoolEventCallback EVENT: Storage pool test Undefined 0

Another terminal:
$ sudo virsh pool-define test.xml
Pool test defined from test.xml

$ sudo virsh pool-build test
Pool test built

$ sudo virsh pool-start test
Pool test started

$ sudo virsh pool-destroy test
Pool test destroyed

$ sudo virsh pool-delete test
Pool test deleted

$ sudo virsh pool-undefine test
Pool test has been undefined

This commit can be a solution for RHBZ #1475227.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1475227

Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2017-09-20 11:52:56 +02:00
John Ferlan
d085197ce2 storage: Use virStoragePoolObjDefUseNewDef
Use the new accessor API for storage_driver.
2017-09-19 08:31:56 -04:00
John Ferlan
67129bb435 storage: Use virStoragePoolObj{Get|Incr|Decr}Asyncjobs
Use the new accessor APIs for storage_driver.
2017-09-19 08:31:05 -04:00
John Ferlan
ccc8c311b2 storage: Internally represent @autostart as bool
Since it's been used that way anyway, let's just convert it to a bool
internally and keep only the external representation as an int.
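
A hedged sketch of that internal-bool/external-int split (field and function
names are illustrative, not libvirt's):

    #include <stdbool.h>

    typedef struct { bool autostart; } PoolObj;   /* internal representation */

    /* the public API keeps its historical int out-parameter */
    static int
    poolGetAutostart(PoolObj *obj, int *autostart)
    {
        *autostart = obj->autostart ? 1 : 0;
        return 0;
    }
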
2017-09-19 08:30:37 -04:00
John Ferlan
bb15e65af2 storage: Use virStoragePoolObj{Is|Set}Autostart
Use the new accessor APIs for storage_driver and test_driver.
2017-09-19 08:30:19 -04:00
John Ferlan
0147f72741 storage: Use virStoragePoolObj{Is|Set}Active
Use the new accessor APIs for storage_driver, test_driver, and
gluster backend.
2017-09-19 08:30:19 -04:00
John Ferlan
1bd4349671 storage: Use virStoragePoolObjGetAutostartLink
Use the new accessor API for storage_driver.
2017-09-19 08:28:50 -04:00
John Ferlan
8603d848a3 storage: Use virStoragePoolObj{Get|Set}ConfigFile
Use the new accessor APIs for storage_driver and test_driver.
2017-09-19 08:28:50 -04:00
John Ferlan
40630a8e45 storage: Introduce storage volume add, delete, count APIs
Create/use virStoragePoolObjAddVol in order to add volumes onto list.

Create/use virStoragePoolObjRemoveVol in order to remove volumes from list.

Create/use virStoragePoolObjGetVolumesCount to get count of volumes on list.

For the storage driver, the logic changes so the volumes.objs list only
grows after we've fetched the volobj. This is an optimization of sorts,
but it also avoids needlessly growing the volumes.objs list and then
decrementing the count if virGetStorageVol fails.

Signed-off-by: John Ferlan <jferlan@redhat.com>
2017-09-19 08:28:50 -04:00
John Ferlan
d16f803d78 storage: Use virStorageBackendRefreshVolTargetUpdate after wipeVol
https://bugzilla.redhat.com/show_bug.cgi?id=1437797

Rather than using refreshVol, which essentially only updates the
allocation, capacity, and permissions for the volume but not the
format (which does get updated in a pool refresh), let's use
the same helper that pool refresh uses in order to update the
volume target.
2017-08-30 15:32:13 -04:00
Peter Krempa
97ea8da183 virStorageNetHostDef: Turn @port into integer
Currently, @port is a string. That's overkill and a waste of
memory; a port is always an integer. Use it as such.
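
As a sketch of what the conversion implies (a generic helper, not libvirt's
actual parser): the XML still carries the port as text, so it gets parsed and
range-checked once into an int instead of being stored as a string:

    #include <errno.h>
    #include <stdlib.h>

    static int
    parsePort(const char *str, int *port)
    {
        char *end = NULL;
        long val;

        errno = 0;
        val = strtol(str, &end, 10);
        if (errno != 0 || end == str || *end != '\0' || val < 1 || val > 65535)
            return -1;

        *port = (int)val;
        return 0;
    }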

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2017-07-24 10:55:44 +02:00
John Ferlan
f36f2e463f storage: Fix editing mistake in storagePoolSetAutostart
Commit id '905f1024b' had a rogue editing mistake that inadvertently
dropped a goto cleanup in storagePoolSetAutostart, but Coverity noted it.
2017-07-22 07:11:28 -04:00
John Ferlan
7cc30e0ed7 storage: Alter volume num, name, and export API's to just take obj
Alter the virStoragePoolObjNumOfVolumes, virStoragePoolObjVolumeGetNames,
and virStoragePoolObjVolumeListExport APIs to take a virStoragePoolObjPtr
instead of the &obj->volumes and obj->def.

Signed-off-by: John Ferlan <jferlan@redhat.com>
2017-07-21 14:51:47 -04:00
John Ferlan
905f1024bd storage: Use consistent variable names for driver
A virStoragePoolObjPtr will be an 'obj'.

A virStoragePoolPtr will be a 'pool'.

A virStorageVolPtr will be a 'vol'.

A virStorageVolDefPtr will be a 'voldef'.

Signed-off-by: John Ferlan <jferlan@redhat.com>
2017-07-21 14:51:47 -04:00
Peter Krempa
9506bd25a3 storage: Split out virStorageSource accessors to separate file
The helper methods for actually accessing the storage objects don't
really belong to the main storage driver implementation file. Split them
out.
2017-07-11 17:07:04 +02:00
John Ferlan
2065499b60 events: Avoid double free possibility on remote call failure
If a remote call fails during event registration (more than likely from
a network failure or a remote libvirtd restart timed just right), then when
calling virObjectEventStateDeregisterID we don't want to call the
registered @freecb function because that breaks our contract that we
would only call it after successfully returning.  If the @freecb routine
were called, it could result in a double free from properly coded
applications that free their opaque data on failure to register, as seen
in the following details:

    Program terminated with signal 6, Aborted.
    #0  0x00007fc45cba15d7 in raise
    #1  0x00007fc45cba2cc8 in abort
    #2  0x00007fc45cbe12f7 in __libc_message
    #3  0x00007fc45cbe86d3 in _int_free
    #4  0x00007fc45d8d292c in PyDict_Fini
    #5  0x00007fc45d94f46a in Py_Finalize
    #6  0x00007fc45d960735 in Py_Main
    #7  0x00007fc45cb8daf5 in __libc_start_main
    #8  0x0000000000400721 in _start

The double dereference of 'pyobj_cbData' is triggered in the following way:

    (1) libvirt_virConnectDomainEventRegisterAny is invoked.
    (2) the event is successfully added to the event callback list
        (virDomainEventStateRegisterClient in
        remoteConnectDomainEventRegisterAny returns 1 which means ok).
    (3) when function remoteConnectDomainEventRegisterAny is hit,
        network connection disconnected coincidently (or libvirtd is
        restarted) in the context of function 'call' then the connection
        is lost and the function 'call' failed, the branch
        virObjectEventStateDeregisterID is therefore taken.
    (4) 'pyobj_conn' is dereferenced the 1st time in
        libvirt_virConnectDomainEventFreeFunc.
    (5) 'pyobj_cbData' (which refers to pyobj_conn) is dereferenced the
         2nd time in libvirt_virConnectDomainEventRegisterAny.
    (6) the double free error is triggered.

Resolve this by adding a @doFreeCb boolean in order to avoid calling the
freeCb in virObjectEventStateDeregisterID for any remote call failure in
a remoteConnect*EventRegister* API. For remoteConnect*EventDeregister* calls,
the passed value would be true indicating they should run the freecb if it
exists; whereas, it's false for the remote call failure path.
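
A self-contained model of that @doFreeCb contract (types are stand-ins): the
free callback only runs when the caller opts in, never on the
register-failure path:

    #include <stdbool.h>

    typedef void (*FreeCb)(void *opaque);

    static int
    eventStateDeregisterID(int callbackID, FreeCb freecb, void *opaque,
                           bool doFreeCb)
    {
        (void)callbackID;   /* ... remove callbackID from the list here ... */

        if (doFreeCb && freecb)
            freecb(opaque); /* skipped when a failed remote register unwinds */
        return 0;
    }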

Patch based on the investigation and initial patch posted by
fangying <fangying1@huawei.com>.
2017-06-25 08:16:04 -04:00
Peter Krempa
d97cfdc891 storage: Add helper to retrieve the backing store string of a storage volume
It is necessary for some parts of the code to refresh just the data
based on the backing store string. Add a convenience function that
will retrieve this data.
2017-06-20 13:25:55 +02:00
Marc Hartmayer
adf846d3c9 Use ATTRIBUTE_FALLTHROUGH
Use ATTRIBUTE_FALLTHROUGH, introduced by commit
5d84f5961b, instead of comments to
indicate that the fall through is an intentional behavior.
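
For readers unfamiliar with the macro, a simplified approximation of how such
a fall-through marker is typically defined and used (libvirt's actual
definition may differ):

    #if defined(__GNUC__) && __GNUC__ >= 7
    # define ATTRIBUTE_FALLTHROUGH __attribute__((fallthrough))
    #else
    # define ATTRIBUTE_FALLTHROUGH do {} while (0)
    #endif

    static int
    classify(int c)
    {
        switch (c) {
        case 0:
            c += 1;
            ATTRIBUTE_FALLTHROUGH;   /* intentional: also run the next case */
        case 1:
            c += 2;
            break;
        default:
            break;
        }
        return c;
    }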

Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
2017-06-12 19:11:30 -04:00
Michal Privoznik
1f43aa67c5 Introduce virStorageVol{Download,Upload}Flags
These API flags will tell whether the caller wants to use a sparse
stream for the storage transfer. At the same time, it's safe to
enable them in the storage driver frontend and rely on our backends
checking the flags. This way we can enable specific flags only on
some specific backends, e.g. enable
VIR_STORAGE_VOL_DOWNLOAD_SPARSE_STREAM for the filesystem backend but
not the iSCSI backend.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2017-05-18 07:42:13 +02:00
John Ferlan
4a5d191e2a storage: Create helpers to perform FindByUUID and FindByName
Create a couple of helpers that will perform the same call sequence.

Signed-off-by: John Ferlan <jferlan@redhat.com>
2017-04-11 08:49:51 -04:00
John Ferlan
babf148a94 storage: Pass driver arg by ref
Alter virStoragePoolObjListExport in order to pass the drivers->pools
by reference

Signed-off-by: John Ferlan <jferlan@redhat.com>
2017-04-11 08:49:51 -04:00
John Ferlan
50e6d4e8e1 storage: Introduce virStoragePoolObjGetNames
Mostly code motion to move storageConnectList[Defined]StoragePools
and similar test driver code into virstorageobj.c and rename to
virStoragePoolObjGetNames.

Also includes a couple of variable name adjustments to keep code consistent
with other drivers.

Signed-off-by: John Ferlan <jferlan@redhat.com>
2017-04-11 08:49:51 -04:00
John Ferlan
2fae7c7fb2 storage: Introduce virStoragePoolObjNumOfStoragePools
Unify the NumOf[Defined]StoragePools API into virstorageobj.c from
storage_driver and test_driver.  The only real difference between the
two is that the test driver doesn't make the call using the aclfilter API.

Signed-off-by: John Ferlan <jferlan@redhat.com>
2017-04-11 08:49:51 -04:00
John Ferlan
96155c6994 storage: Introduce virStoragePoolObjVolumeListExport
Essentially code motion to move the storage/test driver ListAllVolumes
logic into virstorageobj.c

Signed-off-by: John Ferlan <jferlan@redhat.com>
2017-04-11 08:49:51 -04:00
John Ferlan
7e94830f07 storage: Introduce virStoragePoolObjVolumeGetNames
Mostly code motion to move storagePoolListVolumes code into virstorageobj.c
and rename to virStoragePoolObjVolumeGetNames.

Also includes a couple of variable name adjustments to keep code consistent
with other drivers.

Signed-off-by: John Ferlan <jferlan@redhat.com>
2017-04-11 08:49:51 -04:00
John Ferlan
4a440e4366 storage: Introduce virStoragePoolObjNumOfVolumes
Unify the NumOfVolumes API into virstorageobj.c from storage_driver and
test_driver.  The only real difference between the two is that the test
driver doesn't make the call using the aclfilter API.

Signed-off-by: John Ferlan <jferlan@redhat.com>
2017-04-11 08:49:51 -04:00
Peter Krempa
f3a8e80c13 storage: driver: Remove unavailable transient pools after restart
If a transient storage pool is deemed inactive after libvirtd restart it
would not be deleted from the list. Reuse virStoragePoolUpdateInactive
along with a refactor necessary to properly update the state.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1242801
2017-04-03 08:42:09 +02:00
Peter Krempa
aced6b2356 storage: driver: Split out code fixing pool state after deactivation
After a pool is made inactive the definition objects need to be updated
(if a new definition is prepared) and transient pools need to be
completely removed. Split out the code doing these steps into a separate
function for later reuse.
2017-04-03 08:42:09 +02:00
Martin Kletzander
bdcb199532 Move src/fdstream to src/util/virfdstream
There is no reason for it not to be in util: all global symbols
in that file already have the vir* prefix, and there is no reason for it
to be part of DRIVER_SOURCES, which is just a leftover from
older days (the pre-driver-modules era, I believe).

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
2017-03-27 13:13:29 +02:00
John Ferlan
a6d681485f conf: Use consistent function name prefixes for virstorageobj
Use "virStoragePoolObj" as a prefix for any external API in virstorageobj.

Also, a couple of functions were local to virstorageobj.c, so remove their
external definitions in virstorageobj.h.

NB: The virStorageVolDef* API's won't change.
Signed-off-by: John Ferlan <jferlan@redhat.com>
2017-03-17 14:40:09 -04:00
Peter Krempa
93a3f516ee tests: drivermodule: Make sure that all compiled storage backends can be loaded
Add a new storage driver registration function that will force the
backend code to fail if any of the storage backend modules can't be
loaded. This will make sure that they work and are present.
2017-02-22 09:31:33 +01:00
Peter Krempa
f813fe810f storage: backend: Refactor registration of the backend drivers
Add APIs that allow driver backends to be registered dynamically so that
the list of available drivers does not need to be known at compile time.

This will allow us to modularize the storage driver at runtime.
2017-02-21 09:34:30 +01:00
Olga Krishtal
e590d5301e storage: Introduce Virtuozzo vstorage backend
Added general definitions for the vstorage pool backend, including
the build option --with-storage-vstorage and its configure checks.
In order to use vstorage as a backend for a storage pool, the
vstorage tools (vstorage and vstorage-mount) need to be installed.

Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
2017-01-26 10:43:42 -05:00
Peter Krempa
46e8049c15 storage: Split utility functions from storage_backend.(ch)
The file became a garbage dump for all kinds of utility functions over
time. Move them to a separate file so that the files can become a clean
interface for the storage backends.
2017-01-19 09:25:51 +01:00
Michal Privoznik
39779eb195 security_dac: Resolve virSecurityDACSetOwnershipInternal const correctness
The code at the very bottom of the DAC secdriver that calls
chown() should be fine with read-only data. If something needs to
be prepared it should have been done beforehand.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2017-01-10 12:49:59 +01:00
John Ferlan
0c234889c4 storage: Introduce virStorageVolInfoFlags
https://bugzilla.redhat.com/show_bug.cgi?id=1332019

This function will essentially be a wrapper around virStorageVolInfo in order
to provide a mechanism to have the "physical" size of the volume returned
instead of the "allocation" size. This will provide similar capabilities to
virDomainBlockInfo, which can return both the allocation and physical
values of a domain storage volume.

NB: Since we're reusing the _virStorageVolInfo and not creating a new
_virStorageVolInfoFlags structure, we'll need to generate the rpc APIs
remoteStorageVolGetInfoFlags and remoteDispatchStorageVolGetInfoFlags
(although both were originally created from gendispatch.pl and then
just copied into daemon/remote.c and src/remote/remote_driver.c).

The new API will allow the usage of a VIR_STORAGE_VOL_GET_PHYSICAL flag
and will decide whether to return the physical or the allocation value
in the allocation field.

In order to get that physical value, virStorageBackendUpdateVolTargetInfoFD
adds logic to fill in the physical value, matching the logic in
qemuStorageLimitsRefresh used by virDomainBlockInfo when the domain is inactive.

Signed-off-by: John Ferlan <jferlan@redhat.com>
2016-12-20 13:52:39 -05:00
John Ferlan
d3bba70771 storage: Fix type PLOOP type check for storageVolUpload
Commit id '03e750f3' added support for checking the PLOOP type; however,
it used 'target.type' which no storage code ever fills in, so it will
never be set.  Change to just vol->type (could use vol->target.format
as well).
2016-12-05 06:44:04 -05:00
Michal Privoznik
c2a5a4e7ea virstring: Unify string list function names
We have a couple of functions that operate over NULL-terminated
lists of strings. However, our naming sucks:

virStringJoin
virStringFreeList
virStringFreeListCount
virStringArrayHasString
virStringGetFirstWithPrefix

We can do better:

virStringListJoin
virStringListFree
virStringListFreeCount
virStringListHasString
virStringListGetFirstWithPrefix

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2016-11-25 13:54:05 +01:00
Sławek Kapłoński
ae381879f3 Forbid new-line char in name of new storagepool
A new-line character in the name of a storage pool is now forbidden
because it messes up virsh output and can be confusing for users.
Validation of the name is done in the driver, after parsing the XML, to
avoid problems with pools that were already created with a new-line
character in the name and would otherwise disappear.
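
A minimal sketch of the kind of check this adds (illustrative name, performed
in the driver after XML parsing):

    #include <string.h>

    static int
    validatePoolName(const char *name)
    {
        if (strchr(name, '\n'))
            return -1;   /* a new-line would corrupt virsh's table output */
        return 0;
    }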

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2016-11-22 14:36:47 +01:00
Martin Kletzander
1827f2ac5d Change virDomainEventState to virObjectLockable
This way we get reference counting and we can get rid of the locking
function.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
2016-10-12 12:54:47 +02:00
John Ferlan
fbfd6f2103 storage: Don't remove the pool for buildPool failure in storagePoolCreate
https://bugzilla.redhat.com/show_bug.cgi?id=1362349

When adding the ability to build the pool during the start pool processing,
using similar flags to those that buildPool processing would use, the code was
essentially cut-n-pasted from storagePoolCreateXML.  However, that included
a call to virStoragePoolObjRemove, which shouldn't happen within the
storagePoolCreate path since that removes the pool from the list of
pools only for it to be rediscovered if libvirtd restarts.

So on failure, just fail and return, as one should expect.
2016-08-05 09:30:54 -04:00
Olga Krishtal
3dd50be7ca vz: support filesystem type volume
Vz containers are able to work on top of ploop volumes from storage
pools.

To use a filesystem type volume, the pool name and volume name should be
specified in <source>:
   <filesystem type='volume' accessmode='passthrough'>
      <driver type='ploop' format='ploop'/>
      <source pool='guest_images' volume='TEST_POOL_CT'/>
      <target dir='/'/>
   </filesystem>

The information about pool and volume is stored in ct dom configuration:
<StorageURL>libvirt://localhost/pool_name/vol_name</StorageURL>
and can be easily obtained via PrlVmDevHd_GetStorageURL sdk call.

The only shortcoming: if the storage pool is moved somewhere else, the ct
should be redefined in order to refresh the information about the path
to root.hdd.

Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
2016-07-18 23:39:57 +03:00
Michal Privoznik
ca5d51df27 virStorageTranslateDiskSourcePool: Avoid double free
https://bugzilla.redhat.com/show_bug.cgi?id=1316370

Consider the following disk for a domain:

    <disk type='volume' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <auth username='libvirt'>
        <secret type='iscsi' usage='libvirtiscsi'/>
      </auth>
      <source pool='iscsi-secret-pool' volume='unit:0:0:0' mode='direct' startupPolicy='optional'/>
      <target dev='sda' bus='scsi'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

Now, startupPolicy is currently not allowed for iscsi disks, so
one would expect an error message to be thrown. But what a
surprise is waiting for users if they try to start up such a
domain:

==15724== Invalid free() / delete / delete[] / realloc()
==15724==    at 0x4C2B1F0: free (vg_replace_malloc.c:473)
==15724==    by 0x54B7A69: virFree (viralloc.c:582)
==15724==    by 0x552DC90: virStorageAuthDefFree (virstoragefile.c:1549)
==15724==    by 0x552F023: virStorageSourceClear (virstoragefile.c:2055)
==15724==    by 0x552F054: virStorageSourceFree (virstoragefile.c:2067)
==15724==    by 0x55556AA: virDomainDiskDefFree (domain_conf.c:1562)
==15724==    by 0x5557ABE: virDomainDefFree (domain_conf.c:2547)
==15724==    by 0x1B43CC42: qemuProcessStop (qemu_process.c:5918)
==15724==    by 0x1B43BA2E: qemuProcessStart (qemu_process.c:5511)
==15724==    by 0x1B48993E: qemuDomainObjStart (qemu_driver.c:7050)
==15724==    by 0x1B489B9A: qemuDomainCreateWithFlags (qemu_driver.c:7104)
==15724==    by 0x1B489C01: qemuDomainCreate (qemu_driver.c:7122)
==15724==  Address 0x21cfbb90 is 0 bytes inside a block of size 48 free'd
==15724==    at 0x4C2B1F0: free (vg_replace_malloc.c:473)
==15724==    by 0x54B7A69: virFree (viralloc.c:582)
==15724==    by 0x552DC90: virStorageAuthDefFree (virstoragefile.c:1549)
==15724==    by 0x12D1C8D4: virStorageTranslateDiskSourcePool (storage_driver.c:3475)
==15724==    by 0x1B4396E4: qemuProcessPrepareDomain (qemu_process.c:4896)
==15724==    by 0x1B43B880: qemuProcessStart (qemu_process.c:5466)
==15724==    by 0x1B48993E: qemuDomainObjStart (qemu_driver.c:7050)
==15724==    by 0x1B489B9A: qemuDomainCreateWithFlags (qemu_driver.c:7104)
==15724==    by 0x1B489C01: qemuDomainCreate (qemu_driver.c:7122)
==15724==    by 0x561CA97: virDomainCreate (libvirt-domain.c:6787)
==15724==    by 0x12B6FD: remoteDispatchDomainCreate (remote_dispatch.h:4116)
==15724==    by 0x12B61A: remoteDispatchDomainCreateHelper (remote_dispatch.h:4092)

The problem is that in virStorageTranslateDiskSourcePool the disk's
def->src->auth is freed, but the pointer is not set to NULL. So
later, when qemuProcessStop starts to free the domain definition,
virStorageAuthDefFree() tries to free the memory again, instead
of jumping out immediately.
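
A minimal model of the fix pattern (stand-in types): clear the owning pointer
right after freeing, so a later *DefFree of the containing structure cannot
free it a second time:

    #include <stdlib.h>

    typedef struct { char *username; } AuthDef;
    typedef struct { AuthDef *auth; } Source;

    static void
    authDefFree(AuthDef *auth)
    {
        if (!auth)
            return;
        free(auth->username);
        free(auth);
    }

    static void
    translateSource(Source *src)
    {
        authDefFree(src->auth);
        src->auth = NULL;   /* the missing step that caused the double free */
    }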

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2016-06-28 15:02:16 +02:00
John Ferlan
01f4a4a070 storage: Introduce virStoragePoolObjBuildTempFilePath
Create a function to return a temporary file path to be used in a mkostemp
type call using the path to the stateDir + pool->def->name + vol->name

Signed-off-by: John Ferlan <jferlan@redhat.com>
2016-06-24 13:42:38 -04:00
Daniel P. Berrange
0330848207 Promote storage pool refresh lifecycle event to top level event
The VIR_STORAGE_POOL_EVENT_REFRESHED constant does not
reflect any change in the lifecycle of the storage pool.

It should thus not be part of the storage pool lifecycle
event set, but rather be a top level event in its own
right. Thus we introduce VIR_STORAGE_POOL_EVENT_ID_REFRESH
to replace it.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
2016-06-24 18:26:11 +01:00
Cole Robinson
bdb868101b storage: Fix coverity warning
After commit e808d3f227 cbdata is always available here, so the
check is pointless
2016-06-24 07:31:47 -04:00
Cole Robinson
e808d3f227 storage: Remove redundant refreshPool check
Every driver provides a refreshPool impl, and many other critical
places in the code unconditionally call it without checking if
it exists, so this check is pointless
2016-06-23 09:29:54 -04:00
Jovanka Gulicoska
41b2f108d5 storage: implement storage lifecycle event APIs
Implement storage pool event callbacks for START, STOP, DEFINE, UNDEFINED
and REFRESHED in the functions where a storage pool is created/started/stopped
etc. accordingly
2016-06-16 12:22:11 -04:00
Jovanka Gulicoska
580dbf06a4 storage: Replace VIR_ERROR with standard vir*Error in state driver init
Replace VIR_ERROR with virReportError and virReportSystemError
2016-05-23 15:42:46 -04:00
Ján Tomko
21fdb4fe70 storage: do not clear vols before volume upload
Commit 5e54361c added virStoragePoolObjClearVols before refreshPool
to prevent duplicate volume entries.

However it is not needed here because we're not refreshing the pool yet,
just checking for the existence of the refresh callback.

The actual refresh is done via virStorageVolFDStreamCloseCb
in virStorageVolPoolRefreshThread, which already calls
virStoragePoolObjClearVols.
2016-05-23 10:47:32 +02:00
Jovanka Gulicoska
b29e08dbe3 More usage of virGetLastErrorMessage
Convert to virGetLastErrorMessage() in the rest of the code
2016-05-19 15:17:03 -04:00
John Ferlan
5e54361c9d storage: Need to clear pool prior to calling the refreshPool
Prior to calling the 'refreshPool' during CreatePool or UploadPool
operations, we need to clear the pool; otherwise, the pool will
have duplicated entries.
2016-05-11 09:23:31 -04:00
John Ferlan
2c52ec43aa storage: Fix regression cloning volume into a logical pool
https://bugzilla.redhat.com/show_bug.cgi?id=1318993

Commit id 'dd519a294' caused a regression cloning a volume into a
logical pool by removing just the 'allocation' adjustment during
storageVolCreateXMLFrom. Combined with the change to not require the
new volume input XML to have a capacity listed (commit id 'e3f1d2a8'),
this left the possibility that a zero allocation value (e.g., not provided)
would create a thin/sparse logical volume. When a thin lv becomes fully
populated, LVM sets the partition 'inactive' and the subsequent
fdatasync() fails.

Add a new 'has_allocation' flag to be set at XML parse time to indicate
that allocation was provided. This is done so that if it's not provided
the create-from code uses the capacity value since we document that if
omitted, the volume will be fully allocated at time of creation.

For a logical backend, that creation time is 'createVol', while for a
file backend, creation doesn't set the size, but the 'createRaw' called
during buildVolFrom will decide whether the file is sparse or not based
on the provided capacity and allocation value.

For volume clones that provide different allocation and capacity values
to allow for sparse files, there is no change.
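
A hedged sketch of the create-from decision described above (the field layout
is illustrative, not libvirt's exact structure):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint64_t capacity;
        uint64_t allocation;
        bool has_allocation;   /* set at XML parse time if <allocation> given */
    } VolTarget;

    /* clone path: an omitted allocation means "fully allocate", per the docs */
    static void
    fixupCloneAllocation(VolTarget *vol)
    {
        if (!vol->has_allocation)
            vol->allocation = vol->capacity;
    }
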
2016-05-11 09:06:26 -04:00
Yuri Chornoivan
50fc4b4bdd Fix minor typos in messages
Signed-off-by: Yuri Chornoivan <yurchor@ukr.net>
2016-04-30 15:37:31 +02:00
Olga Krishtal
03e750f35d storage: dir: adapt .uploadVol .dowloadVol for ploop volume
In the case of a ploop volume, the target path of the volume is the path to
the directory that contains the image file named root.hds and
DiskDescriptor.xml. When using the uploadVol and downloadVol callbacks we
need to open root.hds itself.
Upload and download operations with a ploop volume are only allowed when
the image does not have snapshots; otherwise the operation fails.

Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2016-04-15 17:27:32 +02:00
John Ferlan
ee67069c73 storage: Fix error path in storagePoolDefineXML
Found by inspection - after calling virStoragePoolObjAssignDef the
pool is part of the driver->pools.objs list and the failure path
for the virStoragePoolObjSaveDef will use virStoragePoolObjRemove
to remove the pool from the objs list which will unlock and free
the pool pointer (as pools->objs[i] during the loop). Since the call
doesn't clear the caller's pool pointer, we need to set it
to NULL; otherwise, the virStoragePoolObjUnlock in the cleanup: code
will fail miserably.
2016-02-26 07:23:05 -05:00
Ján Tomko
cdb757c970 Revert "storageVolCreateXMLFrom: Check if backend knows how to createVol"
This reverts commit 611a278fa4.

According to the original commit message, this is dead code:

  It is highly unlikely that a backend will know how to create a
  volume from a different volume (buildVolFrom) and not know how to
  create an empty volume (createVol).
2016-02-17 13:29:41 +01:00
Michal Privoznik
611a278fa4 storageVolCreateXMLFrom: Check if backend knows how to createVol
It is highly unlikely that a backend will know how to create a
volume from a different volume (buildVolFrom) and not know how to
create an empty volume (createVol). But:
1) we call the function without any prior check so if that's the
case we would SIGSEGV immediatelly
2) it's better to be safe than sorry.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2016-02-12 16:16:58 +01:00
Michal Privoznik
78490acc39 storageVolCreateXML: Swap order of two operations
Firstly, we realloc the internal list to hold the new item (the volume
that will potentially be created) and only then do we check whether we
actually know how to create it. If we don't, we consume more
memory than we really need for no good reason.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2016-02-12 16:16:46 +01:00
Michal Privoznik
d1a7102389 virStringListLength: Ensure const correctness
The virStringListLength function does not ever modify the passed
string list. It merely counts the items in it. Make sure that we
reflect this bit in the function header.
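
A stand-alone sketch of a const-correct counting helper in the same spirit
(the exact libvirt prototype may differ):

    #include <stddef.h>

    static size_t
    stringListLength(const char * const *strings)
    {
        size_t i = 0;
        while (strings && strings[i])
            i++;
        return i;
    }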

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>

(crobinso: fix up spacing and squash in sheepdog bit suggested
 by Andrea)
2016-02-09 15:44:58 -05:00
John Ferlan
dc77344a8e storage: Clean up error path for create buildPool failure
Commit id 'aeb1078ab' added a buildPool option and a failure path which
calls virStoragePoolObjRemove; that unlocks the pool, clears the 'pool'
variable, and goes to cleanup.  However, at cleanup virStoragePoolObjUnlock
is called without checking whether pool is non-NULL.
2016-01-05 09:08:02 -05:00
Michael Chapman
c494db8fd6 storage: do not leak storage pool XML filename
Valgrind complained:

==28277== 38 bytes in 1 blocks are definitely lost in loss record 298 of 957
==28277==    at 0x4A06A2E: malloc (vg_replace_malloc.c:270)
==28277==    by 0x82D7F57: __vasprintf_chk (in /lib64/libc-2.12.so)
==28277==    by 0x52EF16A: virVasprintfInternal (stdio2.h:199)
==28277==    by 0x52EF25C: virAsprintfInternal (virstring.c:514)
==28277==    by 0x52B1FA9: virFileBuildPath (virfile.c:2831)
==28277==    by 0x19B1947C: storageDriverAutostart (storage_driver.c:191)
==28277==    by 0x19B196A7: storageStateAutoStart (storage_driver.c:307)
==28277==    by 0x538527E: virStateInitialize (libvirt.c:793)
==28277==    by 0x11D7CF: daemonRunStateInit (libvirtd.c:947)
==28277==    by 0x52F4694: virThreadHelper (virthread.c:206)
==28277==    by 0x6E08A50: start_thread (in /lib64/libpthread-2.12.so)
==28277==    by 0x82BE93C: clone (in /lib64/libc-2.12.so)

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
2016-01-04 14:54:23 +01:00
John Ferlan
aeb1078ab5 storage: Add flags to allow building pool during create processing
https://bugzilla.redhat.com/show_bug.cgi?id=830056

Add flags handling to the virStoragePoolCreate and virStoragePoolCreateXML
APIs which will allow the caller to request that the storage
pool create APIs also perform a pool build during creation rather than
requiring the additional buildPool step. This will allow transient pools
to be defined, built, and started.

The new flags are:

    * VIR_STORAGE_POOL_CREATE_WITH_BUILD
      Perform buildPool without any flags passed.

    * VIR_STORAGE_POOL_CREATE_WITH_BUILD_OVERWRITE
      Perform buildPool using VIR_STORAGE_POOL_BUILD_OVERWRITE flag.

    * VIR_STORAGE_POOL_CREATE_WITH_BUILD_NO_OVERWRITE
      Perform buildPool using VIR_STORAGE_POOL_BUILD_NO_OVERWRITE flag.

It is up to the backend to handle the processing of build flags. The
overwrite and no-overwrite flags are mutually exclusive.
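
A hedged sketch of the mutual-exclusion check implied above, using the public
flag names from this commit (error reporting omitted):

    #include <libvirt/libvirt.h>

    static int
    checkCreateBuildFlags(unsigned int flags)
    {
        if ((flags & VIR_STORAGE_POOL_CREATE_WITH_BUILD_OVERWRITE) &&
            (flags & VIR_STORAGE_POOL_CREATE_WITH_BUILD_NO_OVERWRITE))
            return -1;   /* overwrite and no-overwrite are mutually exclusive */
        return 0;
    }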

NB:
This patch is loosely based upon code originally authored by Osier
Yang that was not reviewed or pushed; see:

https://www.redhat.com/archives/libvir-list/2012-July/msg01328.html
2015-12-17 11:56:18 -05:00
John Ferlan
80ca86e54d storage: Attempt to refresh volume after successful wipe volume
https://bugzilla.redhat.com/show_bug.cgi?id=1270709

When a volume wipe is successful, perform a volume refresh afterwards to
update any volume data that may be used in future volume commands, such as
volume resize.  For a raw file volume, a wipe could truncate the file, and
a follow-up resize of the volume capacity may fail because the volume target
allocation isn't updated to reflect the wipe activity.
2015-12-17 07:30:03 -05:00
John Ferlan
c3afa6a9a3 storage: Introduce virStoragePoolObjFindPoolByUUID
Add a new API to search the currently defined pool list for a pool with
a matching UUID and return the locked pool object pointer.
2015-11-12 06:30:32 -05:00
John Ferlan
0c7a9b994c storage: Make active boolean
Since we treat it like a boolean, let's store it that way. At least one
path had already treated it as true/false anyway.
2015-11-12 06:30:32 -05:00
John Ferlan
4cd7d220c9 storage: On 'buildVol' failure don't delete the volume
https://bugzilla.redhat.com/show_bug.cgi?id=1233003

Commit id 'fdda3760' only managed a symptom where it was possible to
create a file in a pool without libvirt's knowledge, so it was reverted.

The real fix is to have all the createVol APIs which actually create
a volume (disk, logical, zfs) and the buildVol APIs which handle the
real creation of some volume file (fs, rbd, sheepdog) manage deleting
any volume which they create when there is some sort of error in
processing the volume.

This way the onus isn't left up to the storage_driver to determine whether
the buildVol failure was due to some failure in the adjustments made to
the volume after creation (such as getting sizes, changing ownership, or
changing volume protections) or simply a failure in creation.

Without needing to consider that the volume has to be removed, the
buildVol failure path only needs to remove the volume from the pool.
This way if a creation failed due to a duplicate name, libvirt wouldn't
remove a volume that it didn't create in the pool target.
2015-11-04 07:21:11 -05:00
John Ferlan
0a6e709c95 Revert "storage: Prior to creating a volume, refresh the pool"
This reverts commit fdda37608a.

This commit only managed a symptom of finding a buildRet failure
where a volume was not listed in the pool, but someone had created the
volume outside of libvirt in the pool being managed by libvirt.
2015-11-04 07:21:11 -05:00
John Ferlan
a1703557fd storage: Pull volume removal from pool in storageVolDeleteInternal
Create a helper function to remove volume from the pool.
2015-11-04 07:21:11 -05:00
John Ferlan
27d2d99fe7 storage: Fix a resource leak in storageVolCreateXML
Commit id '1b5685da' refactored the code to move buildvoldef inside
the buildVol conditional; however, the VIR_FREE of the memory was
done only when 'buildret' failed, thus we're leaking memory.

Signed-off-by: John Ferlan <jferlan@redhat.com>
2015-10-13 18:03:55 -04:00
John Ferlan
5275c0f4a1 storage: Fix incorrect format for <disk> <auth> XML
https://bugzilla.redhat.com/show_bug.cgi?id=1256999

After creating a copy of the 'authdef' in a pool -> disk translation,
unconditionally clear the 'authType' in the resulting disk auth def
structure since that's used for a storage pool and not a disk.  This
ensures virStorageAuthDefFormat will properly format the <auth> XML
for a <disk> (e.g. it won't have a <auth type='%s'.../>).
2015-10-12 09:46:59 -04:00
John Ferlan
fdda37608a storage: Prior to creating a volume, refresh the pool
https://bugzilla.redhat.com/show_bug.cgi?id=1233003

Although perhaps bordering on a "don't do that" type of scenario: if
someone creates a volume in a pool outside of libvirt, then uses that
same name to create a volume in the pool via libvirt, the creation
will fail and in some cases cause the same-named volume to be deleted.

This patch will refresh the pool just prior to checking whether the
named volume exists, before creating the volume in the pool. While
it's still possible to have a timing window in which a file is created
after the check - at least we tried.  At that point, someone is being malicious.
2015-10-05 08:14:44 -04:00
Ján Tomko
1b5685dada Create a shallow copy for volume building only if supported
Since the previous commit, the shallow copy is only used inside
the if (backend->buildVol) block.
2015-09-29 10:45:01 +02:00
Ján Tomko
56a4e9cb61 Update pool allocation with new values on volume creation
Since commit e0139e3, we update the pool allocation with
the user-provided allocation values.

For qcow2, the allocation is ignored for volume building,
but we still subtracted it from the pool's allocation.
This can result in interesting values if the user-provided
allocation is large enough:

Capacity:       104.71 GiB
Allocation:     109.13 GiB
Available:      16.00 EiB

We already do a VolRefresh on volume creation. Also refresh
the volume after creating and use the new value to update the pool.

https://bugzilla.redhat.com/show_bug.cgi?id=1163091
2015-09-29 10:45:01 +02:00
John Ferlan
db9277a39b storage: Handle failure from refreshVol
Commit id '155ca616' added the 'refreshVol' API. In an NFS root-squash
environment it was possible that, if the volume just created from XML wasn't
properly created with the right uid/gid and/or mode, the follow-up
refreshVol would fail to open the volume in order to get the allocation/
capacity values. This would leave the volume still on the server and
cause a libvirtd crash because 'voldef' would be in the pool list, but
the cleanup code would free it.
2015-09-02 08:59:53 -04:00
Prerna Saxena
dd519a294b Fix cloning of raw, sparse volumes
When virsh vol-clone is attempted on a raw file where capacity > allocation,
the resulting cloned volume has a size that matches the virtual size of
the parent, instead of matching its actual allocated disk size.
This patch fixes the cloned disk to have the same _allocated_ size as
the parent file from which it was cloned.

Ref: http://www.redhat.com/archives/libvir-list/2015-May/msg00050.html

Also fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1130739

Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2015-07-10 08:54:10 +02:00
Erik Skultety
b563787192 storage: Revert volume obj list updating after volume creation (4749d82a)
This patch reverts commit 4749d82a, which tried to tweak the logic in
volume creation. We did a realloc and updated our object list before we
executed the volume build within a specific storage backend. If that failed,
we had to update (again) our object list back to the original state as it was
before the build and delete the volume from the pool (even though it might not
exist - this truly depends on the backend).
I misunderstood the base idea, which is to be able to poll the status of the
volume creation using vol-info. After commit 4749d82a this wasn't possible
anymore, although no BZ has been reported yet.

Commit 4749d82a also claimed to fix
https://bugzilla.redhat.com/show_bug.cgi?id=1223177, but commit c8be606b from
the same series as 4749d82a (which was more of a refactor than a fix)
fixes the same issue, so the revert should be pretty straightforward.
Furthermore, BZ https://bugzilla.redhat.com/show_bug.cgi?id=1241454 can be
fixed with this revert.
2015-07-09 13:23:27 +02:00
Erik Skultety
f92f31213a storage: Fix regression in storagePoolUpdateAllState
Commit 2a31c5f0 introduced support for storage pool state XMLs, however
it also introduced a regression:

if (!virStoragePoolObjIsActive(pool)) {
    virStoragePoolObjUnlock(pool);
    continue;
}

The idea behind this was that since we've got state XMLs and the pool
wasn't marked as active by the autostart routine (if the autostart flag had
been set earlier), the pool is inactive and we can leave it be and continue
with other pools. However, filesystem type pools like fs, dir, and possibly
netfs are supposed to be active if the filesystem is mounted on the host.
And this is exactly where the regression occurs: e.g. a pool of type 'dir'
which has previously been destroyed and marked as !autostart gets filtered
out by the condition above.
The resolution is simply to remove the condition completely:
all pools will get their 'active' flag updated by the check callback, and if
they do not support such a callback, the logic doesn't change and such
pools will be inactive by default (e.g. RBD, even if a state XML exists).

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1238610
2015-07-08 12:21:25 +02:00
Prerna Saxena
7e7dee4389 Storage: Introduce shadow vol for refresh while the main vol builds.
Libvirt periodically refreshes all volumes in a storage pool, including
the volumes being cloned.
While cloning a storage volume from its parent, we drop the pool locks. A
subsequent volume refresh sometimes changes the allocation for an ongoing
copy, and leads to corrupt images.
Fix: Introduce a shadow volume that isolates the volume object under refresh
from the base volume which has a copy ongoing.

Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2015-06-30 14:29:38 +02:00
John Ferlan
1feaccf000 storage: Need to set secrettype for direct iscsi disk volume
https://bugzilla.redhat.com/show_bug.cgi?id=1200206

Commit id '1b4eaa61' added the ability to have a mode='direct' for
an iscsi disk volume.  It relied on virStorageTranslateDiskSourcePool
in order to copy any disk source pool authentication information to
the direct disk volume, but it neglected to also copy the 'secrettype'
field which ends up being used in the domain volume formatting code.
Adding a secrettype for this case will allow for proper formatting later
and allow disk snapshotting to work properly

Additionally, libvirtd restart processing would fail to find the domain
since the translation processing code is run after the domain XML processing.
So handle the case where the authdef could have an empty secrettype
field when processing the auth, and additionally skip the
actual and expected auth secret type checks for a DISK_VOLUME, since that
data will be reassembled later during translation processing of the
running domain.
2015-06-15 07:14:40 -04:00
Erik Skultety
4749d82a8b storage: Don't update volume objs list before we successfully create one
We update the pool's volume object list before we actually create any
volume. If buildVol fails, we then try to delete the volume in the
storage as well as remove it from our structures. The problem is that
any backend that supports both buildVol and deleteVol would fail in this
case, which is completely unnecessary. This patch causes the update to
take place after we know a volume has been created successfully, so no
removal is necessary in case of a buildVol failure.

https://bugzilla.redhat.com/show_bug.cgi?id=1223177
2015-06-02 15:02:02 +02:00
John Ferlan
48809204d1 storage: Don't adjust pool alloc/avail values for disk backend
Commit id '2ac0e647' for https://bugzilla.redhat.com/show_bug.cgi?id=1206521
was meant to be a generic check in the CreateVol, CreateVolFrom, and
DeleteVol paths to determine whether the storage backend changed the pool's
view of the allocation or available values.

Unfortunately, as it turns out, this caused a side effect: when the disk
backend created an extended partition there would be no actual storage
removed from the pool, thus the checks would not find any change in
allocation or available and would incorrectly update the pool values using
the size of the extended partition. A subsequent refresh of the pool would
reset the values appropriately.

This patch modifies those checks in order to specifically not update the
pool allocation and available for only the disk backend, rather than being
generic before-and-after checks.
2015-05-28 13:32:16 -04:00
John Ferlan
6727bfd728 Revert "storage: Don't duplicate efforts of backend driver"
This reverts commit 2ac0e647bd.
2015-05-28 13:32:16 -04:00
Ján Tomko
8b316fe5da Fix shrinking volumes with the delta flag
This never worked.

In 0.9.10 when this API was introduced, it was intended that
the SHRINK flag combined with DELTA would shrink the volume by
the specified capacity (to avoid passing negative numbers).
See commit 055bbf4.

When the SHRINK flag was finally implemented for the first backend
in 1.2.13 (commit aa9aa6a), it was only implemented for absolute
values, and with the delta flag the volume was always extended,
regardless of the SHRINK flag.

Treat the SHRINK flag as a minus sign when used together with DELTA,
to allow shrinking volumes as was documented in the API since 0.9.10.

https://bugzilla.redhat.com/show_bug.cgi?id=1220213
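
A self-contained model of the semantics described above (flag and field names
are stand-ins for VIR_STORAGE_VOL_RESIZE_DELTA/SHRINK; underflow checks
omitted):

    #include <stdint.h>

    enum { RESIZE_DELTA = 1 << 0, RESIZE_SHRINK = 1 << 1 };

    static uint64_t
    newCapacity(uint64_t current, uint64_t arg, unsigned int flags)
    {
        if (!(flags & RESIZE_DELTA))
            return arg;                            /* absolute value */
        return (flags & RESIZE_SHRINK) ? current - arg   /* DELTA + SHRINK */
                                       : current + arg;  /* DELTA alone */
    }
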
2015-05-28 14:10:32 +02:00
Ján Tomko
7211f66ad7 Simplify allocation check in storageVolResize
Since shrinking a volume below existing allocation is not allowed,
it is not possible for a successful resize with VOL_RESIZE_ALLOCATE
to increase the pool's available value.

Even with the SHRINK flag it is possible to extend the current
allocation or even the capacity. Remove the overflow when
computing delta with this flag and do the check even if the
flag was specified.

https://bugzilla.redhat.com/show_bug.cgi?id=1073305
2015-05-28 14:10:09 +02:00
Cole Robinson
65fc824666 storage: If driver startup state syncing fails, delete statefile
If you end up with a state file for a pool that no longer starts up
or refreshes correctly, the state file is never removed and adds
noise to the logs every time libvirtd is started.

If the initial state syncing fails, delete the statefile.
2015-04-28 09:37:58 -04:00
Cole Robinson
af9dc75c1f storage: Break out storageDriverLoadPoolState
Will simplify a future patch
2015-04-28 09:37:57 -04:00
Cole Robinson
c180a3dcf7 storage: Don't leave stale state file if pool startup fails
After pool startup we call refreshPool(). If that fails, we leave
a stale pool state file hanging around.

Hit this trying to create a pool with qemu:///session containing
root-owned files.
2015-04-28 09:37:57 -04:00
Cole Robinson
b29aff322f storage: Fix autostart dir for qemu:///session 2015-04-28 09:37:57 -04:00
John Ferlan
2ac0e647bd storage: Don't duplicate efforts of backend driver
https://bugzilla.redhat.com/show_bug.cgi?id=1206521

If the backend driver updates the pool available and/or allocation values,
then the storage_driver VolCreateXML, VolCreateXMLFrom, and VolDelete APIs
should not change the value; otherwise, it will appear as if the values
were "doubled" for each change.  Additionally since unsigned arithmetic will
be used depending on the size and operation, either or both values could be
appear to be much larger than they should be (in the EiB range).

Currently only the disk pool updates the values, but other pools could.
Assume a "fresh" disk pool of 500 MiB using /dev/sde:

$ virsh pool-info disk-pool
...
Capacity:       509.88 MiB
Allocation:     0.00 B
Available:      509.84 MiB

$ virsh vol-create-as disk-pool sde1 --capacity 300M

$ virsh pool-info disk-pool
...
Capacity:       509.88 MiB
Allocation:     600.47 MiB
Available:      16.00 EiB

The following assumes the disk backend has been updated to refresh the disk
pool at deletion of a primary partition as well as an extended partition:

$ virsh vol-delete --pool disk-pool sde1
Vol sde1 deleted

$ virsh pool-info disk-pool
...
Capacity:       509.88 MiB
Allocation:     9.73 EiB
Available:      6.27 EiB

This patch will check if the backend updated the pool values and honor that
update.
2015-04-09 19:04:18 -04:00
John Ferlan
1095230dee storage: Fix issues in storageVolResize
https://bugzilla.redhat.com/show_bug.cgi?id=1073305

When creating a volume in a pool, the creation allows the 'capacity'
value to be larger than the available space in the pool. As long as
the 'allocation' value will fit in the space, the volume will be created.

However, the volume resize checks were made with the new absolute
capacity value against the existing capacity + the available space, without
regard for whether the new absolute capacity was actually allocating
space or not.  For example, in a pool with 75G of available space, creating
a volume of 10G using a capacity of 100G and an allocation of 10G will
succeed; however, if the volume had instead been created with a capacity of
10G and one then tried to resize the capacity to 100G, the code would fail
to allow the backend to try the resize.

Furthermore, when updating the pool "available" and "allocation" values,
the resize code would just "blindly" adjust them regardless of whether
space was "allocated" or just "capacity" was being adjusted.  This left
a scenario whereby a resize to 100G would fail; however, a resize to 50G
followed by one to 100G would both succeed.  Again, neither was adjusting
the allocation value, just the "capacity" value.

This patch adds more logic to the resize code to understand whether the
new capacity value is actually "allocating" space as well, and whether it
is shrinking or expanding. Since unsigned arithmetic is involved, the
possibility of adjusting the pool size values incorrectly is real.

This patch also ensures that updates to the pool values only occur if we
actually performed the allocation.

NB: storageVolDelete, storageVolCreateXML, and storageVolCreateXMLFrom
each only update the pool allocation/availability values by the target
volume's allocation value.
2015-04-09 19:04:18 -04:00
Erik Skultety
2a31c5f030 storage: Introduce storagePoolUpdateAllState function
The 'checkPool' callback was originally part of the storageDriverAutostart
function, but the pools need to be checked earlier, during the initialization
phase; otherwise we can't start a domain which mounts a volume after the
libvirtd daemon has restarted. This is because qemuProcessReconnect is called
earlier than storageDriverAutostart. Therefore the 'checkPool' logic has been
moved to storagePoolUpdateAllState, which is called inside storageDriverInitialize.

We also need a valid 'conn' reference to be able to execute 'refreshPool'
during the initialization phase. Though it isn't available until
storageDriverAutostart, all of our storage backends ignore the 'conn'
pointer except for RBD; but RBD doesn't support the 'checkPool' callback,
so it's safe to pass conn = NULL in this case.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1177733
2015-04-07 16:22:40 +02:00
Erik Skultety
a9700771f5 conf: Introduce virStoragePoolLoadAllState && virStoragePoolLoadState
These functions operate exactly the same as their network equivalents,
virNetworkLoadAllState and virNetworkLoadState.
2015-04-07 16:22:40 +02:00
Erik Skultety
723143a19c storage: Add support for storage pool state XML
This patch introduces a new virStorageDriverState element, stateDir.
It also adds the necessary changes to storageStateInitialize, so that
directory initialization becomes more generic.
2015-04-07 16:22:40 +02:00
Erik Skultety
cf7392a0d2 storage: Remove unused attribute conn from 'checkPool' callback
In order to be able to use 'checkPool' inside functions which do not
have any connection reference, the 'conn' attribute needs to be discarded
from checkPool's signature, since it's not used by any storage backend
anyway.
2015-04-02 11:57:07 +02:00
Ján Tomko
155ca616eb Allow creating volumes with a backing store but no capacity
The tool creating the image can get the capacity from the backing
storage. Just refresh the volume afterwards.

https://bugzilla.redhat.com/show_bug.cgi?id=958510
2015-03-02 08:07:11 +01:00