Commit Graph

221 Commits

Author SHA1 Message Date
Yuri Chornoivan
50fc4b4bdd Fix minor typos in messages
Signed-off-by: Yuri Chornoivan <yurchor@ukr.net>
2016-04-30 15:37:31 +02:00
Olga Krishtal
03e750f35d storage: dir: adapt .uploadVol .downloadVol for ploop volume
In the case of a ploop volume, the target path of the volume is the path
to the directory that contains the image file named root.hds and
DiskDescriptor.xml. When using the uploadVol and downloadVol callbacks
we need to open root.hds itself.
Upload and download operations on a ploop volume are only allowed when
the image has no snapshots; otherwise the operation fails.

Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2016-04-15 17:27:32 +02:00
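
A rough sketch of the path handling described above (the helper below is
illustrative, not the actual patch): the callbacks have to descend from
the volume's directory to the image file itself.

static char *
ploopGetImagePath(const char *volTargetPath)    /* hypothetical helper */
{
    char *path = NULL;

    /* For a ploop volume, the target path names a directory holding
     * root.hds and DiskDescriptor.xml; upload/download must operate
     * on the image file. virAsprintf reports OOM on failure. */
    if (virAsprintf(&path, "%s/root.hds", volTargetPath) < 0)
        return NULL;
    return path;
}
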
John Ferlan
ee67069c73 storage: Fix error path in storagePoolDefineXML
Found by inspection - after calling virStoragePoolObjAssignDef the
pool is part of the driver->pools.objs list, and the failure path
for virStoragePoolObjSaveDef will use virStoragePoolObjRemove
to remove the pool from the objs list, which unlocks and frees
the pool pointer (as pools->objs[i] during the loop). Since the call
doesn't clear the pool address for the caller, we need to set it
to NULL ourselves; otherwise, the virStoragePoolObjUnlock in the
cleanup: code will fail miserably.
2016-02-26 07:23:05 -05:00
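
A condensed sketch of the pattern being fixed (simplified control flow,
not the literal diff):

if (virStoragePoolObjSaveDef(driver, pool, def) < 0) {
    virStoragePoolObjRemove(&driver->pools, pool);
    def = NULL;
    pool = NULL;    /* ObjRemove unlocked and freed the pool; clear the
                     * dangling pointer so cleanup: won't unlock it */
    goto cleanup;
}
...
cleanup:
    if (pool)
        virStoragePoolObjUnlock(pool);
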
Ján Tomko
cdb757c970 Revert "storageVolCreateXMLFrom: Check if backend knows how to createVol"
This reverts commit 611a278fa4.

According to the original commit message, this is dead code:

  It is highly unlikely that a backend will know how to create a
  volume from a different volume (buildVolFrom) and not know how to
  create an empty volume (createVol).
2016-02-17 13:29:41 +01:00
Michal Privoznik
611a278fa4 storageVolCreateXMLFrom: Check if backend knows how to createVol
It is highly unlikely that a backend will know how to create a
volume from a different volume (buildVolFrom) and not know how to
create an empty volume (createVol). But:
1) we call the function without any prior check, so if that's the
case we would SIGSEGV immediately;
2) it's better to be safe than sorry.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2016-02-12 16:16:58 +01:00
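
A minimal sketch of the added guard (condensed from the description
above):

/* refuse early instead of dereferencing a NULL callback below */
if (backend->createVol == NULL) {
    virReportError(VIR_ERR_NO_SUPPORT, "%s",
                   _("storage pool does not support volume creation"));
    goto cleanup;
}

if (backend->createVol(conn, pool, voldef) < 0)
    goto cleanup;
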
Michal Privoznik
78490acc39 storageVolCreateXML: Swap order of two operations
First, we realloc the internal list to hold a new item (= the volume
that will potentially be created) and only then check whether we
actually know how to create it. If we don't, we consume more memory
than we really need for no good reason.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2016-02-12 16:16:46 +01:00
Michal Privoznik
d1a7102389 virStringListLength: Ensure const correctness
The virStringListLength function does not ever modify the passed
string list. It merely counts the items in it. Make sure that we
reflect this bit in the function header.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>

(crobinso: fix up spacing and squash in sheepdog bit suggested
 by Andrea)
2016-02-09 15:44:58 -05:00
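
The change amounts to constifying the parameter; a before/after sketch
of the declaration (the exact const placement in the patch may differ):

/* before: callers holding const lists needed a cast */
size_t virStringListLength(char **strings);

/* after: the signature documents that the list is only read */
size_t virStringListLength(const char * const *strings);
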
John Ferlan
dc77344a8e storage: Clean up error path for create buildPool failure
Commit id 'aeb1078ab' added a buildPool option and a failure path which
calls virStoragePoolObjRemove (unlocking the pool), clears the 'pool'
variable, and does a goto cleanup. However, at cleanup
virStoragePoolObjUnlock is called without checking that pool is
non-NULL.
2016-01-05 09:08:02 -05:00
Michael Chapman
c494db8fd6 storage: do not leak storage pool XML filename
Valgrind complained:

==28277== 38 bytes in 1 blocks are definitely lost in loss record 298 of 957
==28277==    at 0x4A06A2E: malloc (vg_replace_malloc.c:270)
==28277==    by 0x82D7F57: __vasprintf_chk (in /lib64/libc-2.12.so)
==28277==    by 0x52EF16A: virVasprintfInternal (stdio2.h:199)
==28277==    by 0x52EF25C: virAsprintfInternal (virstring.c:514)
==28277==    by 0x52B1FA9: virFileBuildPath (virfile.c:2831)
==28277==    by 0x19B1947C: storageDriverAutostart (storage_driver.c:191)
==28277==    by 0x19B196A7: storageStateAutoStart (storage_driver.c:307)
==28277==    by 0x538527E: virStateInitialize (libvirt.c:793)
==28277==    by 0x11D7CF: daemonRunStateInit (libvirtd.c:947)
==28277==    by 0x52F4694: virThreadHelper (virthread.c:206)
==28277==    by 0x6E08A50: start_thread (in /lib64/libpthread-2.12.so)
==28277==    by 0x82BE93C: clone (in /lib64/libc-2.12.so)

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
2016-01-04 14:54:23 +01:00
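
The fix is the usual "free what virFileBuildPath returned" pattern; a
simplified sketch (the variable name is illustrative):

char *stateFile = NULL;

if (!(stateFile = virFileBuildPath(driver->stateDir,
                                   pool->def->name, ".xml")))
    goto cleanup;
...
cleanup:
    VIR_FREE(stateFile);    /* previously missing, leaking the filename */
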
John Ferlan
aeb1078ab5 storage: Add flags to allow building pool during create processing
https://bugzilla.redhat.com/show_bug.cgi?id=830056

Add flags handling to the virStoragePoolCreate and virStoragePoolCreateXML
APIs which allows the caller to request that the storage pool create
APIs also perform a pool build during creation rather than requiring the
additional buildPool step. This allows transient pools to be defined,
built, and started.

The new flags are:

    * VIR_STORAGE_POOL_CREATE_WITH_BUILD
      Perform buildPool without any flags passed.

    * VIR_STORAGE_POOL_CREATE_WITH_BUILD_OVERWRITE
      Perform buildPool using VIR_STORAGE_POOL_BUILD_OVERWRITE flag.

    * VIR_STORAGE_POOL_CREATE_WITH_BUILD_NO_OVERWRITE
      Perform buildPool using VIR_STORAGE_POOL_BUILD_NO_OVERWRITE flag.

It is up to the backend to handle the processing of build flags. The
overwrite and no-overwrite flags are mutually exclusive.

NB:
This patch is loosely based upon code originally authored by Osier
Yang that was not reviewed and pushed, see:

https://www.redhat.com/archives/libvir-list/2012-July/msg01328.html
2015-12-17 11:56:18 -05:00
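
A sketch of how the new flags might be validated and mapped onto the
buildPool flags (condensed; surrounding code and labels omitted):

if ((flags & VIR_STORAGE_POOL_CREATE_WITH_BUILD_OVERWRITE) &&
    (flags & VIR_STORAGE_POOL_CREATE_WITH_BUILD_NO_OVERWRITE)) {
    virReportError(VIR_ERR_INVALID_ARG, "%s",
                   _("overwrite and no-overwrite flags are mutually "
                     "exclusive"));
    return NULL;
}

unsigned int build_flags = 0;
if (flags & VIR_STORAGE_POOL_CREATE_WITH_BUILD_OVERWRITE)
    build_flags |= VIR_STORAGE_POOL_BUILD_OVERWRITE;
else if (flags & VIR_STORAGE_POOL_CREATE_WITH_BUILD_NO_OVERWRITE)
    build_flags |= VIR_STORAGE_POOL_BUILD_NO_OVERWRITE;

if (flags && backend->buildPool &&
    backend->buildPool(conn, pool, build_flags) < 0)
    goto error;
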
John Ferlan
80ca86e54d storage: Attempt to refresh volume after successful wipe volume
https://bugzilla.redhat.com/show_bug.cgi?id=1270709

When a volume wipe is successful, perform a volume refresh afterwards to
update any volume data that may be used in future volume commands, such as
volume resize. For a raw file volume, a wipe could truncate the file, and
a follow-up volume resize of the capacity may fail because the volume
target allocation isn't updated to reflect the wipe activity.
2015-12-17 07:30:03 -05:00
John Ferlan
c3afa6a9a3 storage: Introduce virStoragePoolObjFindPoolByUUID
Add a new API to search the currently defined pool list for a pool with
a matching UUID and return the locked pool object pointer.
2015-11-12 06:30:32 -05:00
John Ferlan
0c7a9b994c storage: Make active boolean
Since we treat it like a boolean, let's store it that way. At least one
path had already treated it as true/false anyway.
2015-11-12 06:30:32 -05:00
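
In code terms the change is just the stored type plus its uses, e.g.:

/* before */ int active;
/* after  */ bool active;

pool->active = true;    /* rather than pool->active = 1 */
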
John Ferlan
4cd7d220c9 storage: On 'buildVol' failure don't delete the volume
https://bugzilla.redhat.com/show_bug.cgi?id=1233003

Commit id 'fdda3760' only managed a symptom where it was possible to
create a file in a pool without libvirt's knowledge, so it was reverted.

The real fix is to have all the createVol APIs which actually create
a volume (disk, logical, zfs) and the buildVol APIs which handle the
real creation of some volume file (fs, rbd, sheepdog) manage deleting
any volume which they create when there is some sort of error in
processing the volume.

This way the onus isn't left up to the storage_driver to determine whether
the buildVol failure was due to some failure as a result of adjustments
made to the volume after creation, such as getting sizes, changing
ownership, changing volume protections, etc., or simply a failure in
creation.

Without needing to consider that the volume has to be removed, the
buildVol failure path only needs to remove the volume from the pool.
This way if a creation failed due to duplicate name, libvirt wouldn't
remove a volume that it didn't create in the pool target.
2015-11-04 07:21:11 -05:00
John Ferlan
0a6e709c95 Revert "storage: Prior to creating a volume, refresh the pool"
This reverts commit fdda37608a.

This commit only managed a symptom: a buildRet failure when a volume
was not listed in the pool because someone had created the volume
outside of libvirt in the pool being managed by libvirt.
2015-11-04 07:21:11 -05:00
John Ferlan
a1703557fd storage: Pull volume removal from pool in storageVolDeleteInternal
Create a helper function to remove volume from the pool.
2015-11-04 07:21:11 -05:00
John Ferlan
27d2d99fe7 storage: Fix a resource leak in storageVolCreateXML
Commit id '1b5685da' refactored the code to move buildvoldef inside
the buildVol conditional; however, the VIR_FREE of the memory was
done only when 'buildret' failed, thus we're leaking memory on the
success path.

Signed-off-by: John Ferlan <jferlan@redhat.com>
2015-10-13 18:03:55 -04:00
John Ferlan
5275c0f4a1 storage: Fix incorrect format for <disk> <auth> XML
https://bugzilla.redhat.com/show_bug.cgi?id=1256999

After creating a copy of the 'authdef' in a pool -> disk translation,
unconditionally clear the 'authType' in the resulting disk auth def
structure since that's used for a storage pool and not a disk.  This
ensures virStorageAuthDefFormat will properly format the <auth> XML
for a <disk> (e.g. it won't have a <auth type='%s'.../>).
2015-10-12 09:46:59 -04:00
John Ferlan
fdda37608a storage: Prior to creating a volume, refresh the pool
https://bugzilla.redhat.com/show_bug.cgi?id=1233003

Although perhaps bordering on a 'don't do that' type of scenario: if
someone creates a volume in a pool outside of libvirt, then uses that
same name to create a volume in the pool via libvirt, the creation
will fail and in some cases cause the same-named volume to be deleted.

This patch refreshes the pool just before checking whether the
named volume exists prior to creating it in the pool. While
it's still possible to have a timing window to create a file after the
check - at least we tried. At that point, someone is being malicious.
2015-10-05 08:14:44 -04:00
Ján Tomko
1b5685dada Create a shallow copy for volume building only if supported
Since the previous commit, the shallow copy is only used inside
the if (backend->buildVol) block.
2015-09-29 10:45:01 +02:00
Ján Tomko
56a4e9cb61 Update pool allocation with new values on volume creation
Since commit e0139e3, we update the pool allocation with
the user-provided allocation values.

For qcow2, the allocation is ignored for volume building,
but we still subtracted it from the pool's allocation.
This can result in interesting values if the user-provided
allocation is large enough:

Capacity:       104.71 GiB
Allocation:     109.13 GiB
Available:      16.00 EiB

We already do a VolRefresh on volume creation. Also refresh
the volume after creating it and use the new value to update the pool.

https://bugzilla.redhat.com/show_bug.cgi?id=1163091
2015-09-29 10:45:01 +02:00
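
A sketch of the corrected accounting (field names follow libvirt's
pool/volume defs; the flow is condensed): refresh the volume after
building it, then charge the pool with the real allocation rather than
the user-provided one.

if (backend->refreshVol(conn, pool, voldef) < 0)
    goto cleanup;

/* use the refreshed allocation, which for qcow2 is the actual
 * on-disk size, not the ignored user-provided value */
pool->def->allocation += voldef->target.allocation;
pool->def->available -= voldef->target.allocation;
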
John Ferlan
db9277a39b storage: Handle failure from refreshVol
Commit id '155ca616' added the 'refreshVol' API. In an NFS root-squash
environment it was possible that, if the volume just created from XML
wasn't properly created with the right uid/gid and/or mode, the follow-up
refreshVol would fail to open the volume in order to get the allocation/
capacity values. This would leave the volume still on the server and
cause a libvirtd crash, because 'voldef' would be in the pool list but
the cleanup code would free it.
2015-09-02 08:59:53 -04:00
Prerna Saxena
dd519a294b Fix cloning of raw, sparse volumes
When virsh vol-clone is attempted on a raw file where capacity > allocation,
the resulting cloned volume has a size that matches the virtual size of
the parent instead of matching its actual disk size.
This patch fixes the cloned disk to have the same _allocated_ size as
the parent file from which it was cloned.

Ref: http://www.redhat.com/archives/libvir-list/2015-May/msg00050.html

Also fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1130739

Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2015-07-10 08:54:10 +02:00
Erik Skultety
b563787192 storage: Revert volume obj list updating after volume creation (4749d82a)
This patch reverts commit 4749d82a, which tried to tweak the logic in
volume creation. We did a realloc and updated our object list before we
executed volume building within a specific storage backend. If that failed,
we had to update (again) our object list back to the original state as it
was before the build and delete the volume from the pool (even though it
didn't exist; this truly depends on the backend).
I had misunderstood the base idea, which is to be able to poll the status
of the volume creation using vol-info. After commit 4749d82a this wasn't
possible anymore, although no BZ has been reported yet.

Commit 4749d82a also claimed to fix
https://bugzilla.redhat.com/show_bug.cgi?id=1223177, but commit c8be606b of
the same series as 4749d82a (which was more of a refactor than a fix)
fixes the same issue, so the revert should be pretty straightforward.
Furthermore, BZ https://bugzilla.redhat.com/show_bug.cgi?id=1241454 can be
fixed with this revert.
2015-07-09 13:23:27 +02:00
Erik Skultety
f92f31213a storage: Fix regression in storagePoolUpdateAllState
Commit 2a31c5f0 introduced support for storage pool state XMLs, however
it also introduced a regression:

if (!virStoragePoolObjIsActive(pool)) {
    virStoragePoolObjUnlock(pool);
    continue;
}

The idea behind this was that since we've got state XMLs and the pool
wasn't marked as active by the autostart routine (if the autostart flag
had been set earlier), the pool is inactive and we can leave it be and
continue with other pools. However, filesystem-type pools like fs, dir,
and possibly netfs are supposed to be active if the filesystem is mounted
on the host. And this is exactly where the regression occurs: e.g. a pool
of type 'dir' which has been previously destroyed and marked as !autostart
gets filtered out by the condition above.
The resolution is simply to remove the condition completely:
all pools will get their 'active' flag updated by the check callback, and
if they do not support such a callback, the logic doesn't change and such
pools will be inactive by default (e.g. RBD, even if a state XML exists).

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1238610
2015-07-08 12:21:25 +02:00
Prerna Saxena
7e7dee4389 Storage: Introduce shadow vol for refresh while the main vol builds.
Libvirt periodically refreshes all volumes in a storage pool, including
the volumes being cloned.
While cloning a storage volume from its parent, we drop the pool locks.
A subsequent volume refresh can then change the allocation for an ongoing
copy and lead to corrupt images.
Fix: introduce a shadow volume that isolates the volume object under
refresh from the base volume which has a copy ongoing.

Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2015-06-30 14:29:38 +02:00
John Ferlan
1feaccf000 storage: Need to set secrettype for direct iscsi disk volume
https://bugzilla.redhat.com/show_bug.cgi?id=1200206

Commit id '1b4eaa61' added the ability to have mode='direct' for
an iscsi disk volume. It relied on virStorageTranslateDiskSourcePool
to copy any disk source pool authentication information to
the direct disk volume, but it neglected to also copy the 'secrettype'
field, which ends up being used in the domain volume formatting code.
Adding a secrettype for this case allows proper formatting later
and allows disk snapshotting to work properly.

Additionally, libvirtd restart processing would fail to find the domain
since the translation processing code is run after domain XML processing,
so handle the case where the authdef could have an empty secrettype
field when processing the auth, and additionally skip the
actual vs. expected auth secret type checks for a DISK_VOLUME, since that
data will be reassembled later during translation processing of the
running domain.
2015-06-15 07:14:40 -04:00
Erik Skultety
4749d82a8b storage: Don't update volume objs list before we successfully create one
We update the pool's volume object list before we actually create any
volume. If buildVol fails, we then try to delete the volume in the
storage as well as remove it from our structures. The problem is that
any backend that supports both buildVol and deleteVol would fail in this
case, which is completely unnecessary. This patch makes the update
take place after we know a volume has been created successfully, so no
removal is necessary in case of a buildVol failure.

https://bugzilla.redhat.com/show_bug.cgi?id=1223177
2015-06-02 15:02:02 +02:00
John Ferlan
48809204d1 storage: Don't adjust pool alloc/avail values for disk backend
Commit id '2ac0e647' for https://bugzilla.redhat.com/show_bug.cgi?id=1206521
was meant to be a generic check for the CreateVol, CreateVolFrom, and
DeleteVol paths to check if the storage backend changed the pool's view
of allocation or available values.

Unfortunately, as it turns out, this caused a side effect: when the disk
backend created an extended partition, there would be no actual storage
removed from the pool, thus the checks would not find any change in
allocation or available and would incorrectly update the pool values
using the size of the extended partition. A subsequent refresh of the
pool would reset the values appropriately.

This patch modifies those checks in order to specifically not update the
pool allocation and available for only the disk backend rather than be
generic before and after checks.
2015-05-28 13:32:16 -04:00
John Ferlan
6727bfd728 Revert "storage: Don't duplicate efforts of backend driver"
This reverts commit 2ac0e647bd.
2015-05-28 13:32:16 -04:00
Ján Tomko
8b316fe5da Fix shrinking volumes with the delta flag
This never worked.

In 0.9.10 when this API was introduced, it was intended that
the SHRINK flag combined with DELTA would shrink the volume by
the specified capacity (to avoid passing negative numbers).
See commit 055bbf4.

When the SHRINK flag was finally implemented for the first backend
in 1.2.13 (commit aa9aa6a), it was only implemented for absolute
values, and with the delta flag the volume was always extended,
regardless of the SHRINK flag.

Treat the SHRINK flag as a minus sign when used together with DELTA,
to allow shrinking volumes as was documented in the API since 0.9.10.

https://bugzilla.redhat.com/show_bug.cgi?id=1220213
2015-05-28 14:10:32 +02:00
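
A condensed sketch of the resulting flag semantics (based on the
description above, not the verbatim patch):

unsigned long long abs_capacity;

if (flags & VIR_STORAGE_VOL_RESIZE_DELTA) {
    if (flags & VIR_STORAGE_VOL_RESIZE_SHRINK)
        /* delta + shrink: treat the delta as a minus sign */
        abs_capacity = vol->target.capacity
                       - MIN(capacity, vol->target.capacity);
    else
        abs_capacity = vol->target.capacity + capacity;
} else {
    abs_capacity = capacity;
}
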
Ján Tomko
7211f66ad7 Simplify allocation check in storageVolResize
Since shrinking a volume below existing allocation is not allowed,
it is not possible for a successful resize with VOL_RESIZE_ALLOCATE
to increase the pool's available value.

Even with the SHRINK flag it is possible to extend the current
allocation or even the capacity. Remove the overflow when
computing delta with this flag and do the check even if the
flag was specified.

https://bugzilla.redhat.com/show_bug.cgi?id=1073305
2015-05-28 14:10:09 +02:00
Cole Robinson
65fc824666 storage: If driver startup state syncing fails, delete statefile
If you end up with a state file for a pool that no longer starts up
or refreshes correctly, the state file is never removed and adds
noise to the logs every time libvirtd is started.

If the initial state syncing fails, delete the statefile.
2015-04-28 09:37:58 -04:00
Cole Robinson
af9dc75c1f storage: Break out storageDriverLoadPoolState
Will simplify a future patch
2015-04-28 09:37:57 -04:00
Cole Robinson
c180a3dcf7 storage: Don't leave stale state file if pool startup fails
After pool startup we call refreshPool(). If that fails, we leave
a stale pool state file hanging around.

Hit this trying to create a pool with qemu:///session containing
root-owned files.
2015-04-28 09:37:57 -04:00
Cole Robinson
b29aff322f storage: Fix autostart dir for qemu:///session
2015-04-28 09:37:57 -04:00
John Ferlan
2ac0e647bd storage: Don't duplicate efforts of backend driver
https://bugzilla.redhat.com/show_bug.cgi?id=1206521

If the backend driver updates the pool available and/or allocation values,
then the storage_driver VolCreateXML, VolCreateXMLFrom, and VolDelete APIs
should not change the values; otherwise, it will appear as if the values
were "doubled" for each change. Additionally, since unsigned arithmetic
will be used depending on the size and operation, either or both values
could appear to be much larger than they should be (in the EiB range).

Currently only the disk pool updates the values, but other pools could.
Assume a "fresh" disk pool of 500 MiB using /dev/sde:

$ virsh pool-info disk-pool
...
Capacity:       509.88 MiB
Allocation:     0.00 B
Available:      509.84 MiB

$ virsh vol-create-as disk-pool sde1 --capacity 300M

$ virsh pool-info disk-pool
...
Capacity:       509.88 MiB
Allocation:     600.47 MiB
Available:      16.00 EiB

The following assumes the disk backend was updated to refresh the disk
pool at deletion of a primary partition as well as an extended partition:

$ virsh vol-delete --pool disk-pool sde1
Vol sde1 deleted

$ virsh pool-info disk-pool
...
Capacity:       509.88 MiB
Allocation:     9.73 EiB
Available:      6.27 EiB

This patch will check if the backend updated the pool values and honor that
update.
2015-04-09 19:04:18 -04:00
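
A sketch of the check this patch describes (condensed; names follow
libvirt's pool/volume defs):

/* remember the pool's view before invoking the backend */
unsigned long long orig_alloc = pool->def->allocation;
unsigned long long orig_avail = pool->def->available;

if (backend->createVol(conn, pool, voldef) < 0)
    goto cleanup;

/* only apply the driver's own accounting if the backend left the
 * values untouched; otherwise honor the backend's update */
if (pool->def->allocation == orig_alloc &&
    pool->def->available == orig_avail) {
    pool->def->allocation += voldef->target.allocation;
    pool->def->available -= voldef->target.allocation;
}
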
John Ferlan
1095230dee storage: Fix issues in storageVolResize
https://bugzilla.redhat.com/show_bug.cgi?id=1073305

When creating a volume in a pool, the creation allows the 'capacity'
value to be larger than the available space in the pool. As long as
the 'allocation' value will fit in the space, the volume will be created.

However, the resize checks compared the new absolute capacity
value against the existing capacity plus the available space, without
regard for whether the new absolute capacity was actually allocating
space or not. For example, in a pool with 75G of available space,
creating a volume with a capacity of 100G and an allocation of 10G will
succeed; however, if the volume had instead been created with a capacity
of 10G and then resized to a capacity of 100G, the code would fail to
allow the backend to even try the resize.

Furthermore, when updating the pool "available" and "allocation" values,
the resize code would just "blindly" adjust them regardless of whether
space was "allocated" or just "capacity" was being adjusted.  This left
a scenario whereby a resize to 100G would fail; however, a resize to 50G
followed by one to 100G would both succeed.  Again, neither was adjusting
the allocation value, just the "capacity" value.

This patch adds more logic to the resize code to understand whether the
new capacity value is actually "allocating" space as well, and whether it
is shrinking or expanding. Since unsigned arithmetic is involved, it is
quite possible that we would otherwise adjust the pool size values
incorrectly.

This patch also ensures that updates to the pool values only occur if we
actually performed the allocation.

NB: The storageVolDelete, storageVolCreateXML, and storageVolCreateXMLFrom
each only updates the pool allocation/availability values by the target
volume allocation value.
2015-04-09 19:04:18 -04:00
Erik Skultety
2a31c5f030 storage: Introduce storagePoolUpdateAllState function
The 'checkPool' callback was originally part of the storageDriverAutostart
function, but the pools need to be checked earlier, during the
initialization phase, otherwise we can't start a domain which mounts a
volume after the libvirtd daemon restarted. This is because
qemuProcessReconnect is called earlier than storageDriverAutostart.
Therefore the 'checkPool' logic has been moved to
storagePoolUpdateAllState, which is called inside storageDriverInitialize.

We also need a valid 'conn' reference to be able to execute 'refreshPool'
during the initialization phase. Though it isn't available until
storageDriverAutostart, all of our storage backends ignore the 'conn'
pointer except for RBD, and RBD doesn't support the 'checkPool' callback,
so it's safe to pass conn = NULL in this case.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1177733
2015-04-07 16:22:40 +02:00
Erik Skultety
a9700771f5 conf: Introduce virStoragePoolLoadAllState && virStoragePoolLoadState
These functions operate exactly the same as their network equivalents
virNetworkLoadAllState, virNetworkLoadState.
2015-04-07 16:22:40 +02:00
Erik Skultety
723143a19c storage: Add support for storage pool state XML
This patch introduces a new virStorageDriverState element, stateDir.
It also adds the necessary changes to storageStateInitialize so that
directory initialization becomes more generic.
2015-04-07 16:22:40 +02:00
Erik Skultety
cf7392a0d2 storage: Remove unused attribute conn from 'checkPool' callback
In order to be able to use 'checkPool' inside functions which do not
have any connection reference, the 'conn' attribute needs to be dropped
from checkPool's signature, since it's not used by any storage backend
anyway.
2015-04-02 11:57:07 +02:00
Ján Tomko
155ca616eb Allow creating volumes with a backing store but no capacity
The tool creating the image can get the capacity from the backing
storage. Just refresh the volume afterwards.

https://bugzilla.redhat.com/show_bug.cgi?id=958510
2015-03-02 08:07:11 +01:00
Ján Tomko
e3f1d2a820 Allow cloning volumes with no capacity specified
In virStorageVolCreateXML, add VIR_VOL_XML_PARSE_NO_CAPACITY
to the call parsing the XML of the new volume to make the capacity
optional.

If the capacity is omitted, use the capacity of the old volume.
We already do that for values that are less than the original
volume capacity.
2015-03-02 08:07:11 +01:00
Ján Tomko
cbd788eba6 Add flags argument to virStorageVolDefParse*
Allow the callers to pass down libvirt-internal flags.
2015-03-02 08:07:11 +01:00
John Ferlan
1d2e4d8ca2 storage: Need to clear pool prior to refreshPool during Autostart
https://bugzilla.redhat.com/show_bug.cgi?id=1176510

When storageDriverAutostart is called path virStateReload via a 'service
libvirtd reload', then because the volume list in the pool wasn't cleared
prior to the call, each volume would be listed multiple times (as many
times as we reload). I believe the issue would be introduced by commit
id '9e093f0b' at least for the libvirtd reload path, although I suppose
the introduction of virStateReload (commit id '70da0494') could be a
different cause.

Thus like other places prior to calling refreshPool, we need to call
virStoragePoolObjClearVols
2015-01-31 07:56:15 -05:00
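
The shape of the fix, per the description above (a sketch, not the
literal hunk):

/* drop the stale volume list before re-enumerating, so a reload
 * doesn't append a second copy of every volume */
virStoragePoolObjClearVols(pool);
if (backend->refreshPool(conn, pool) < 0)
    continue;
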
Chen Hanxiao
95da191376 storage: add a flag to clone files on btrfs
When creating a RAW file, we don't take advantage
of btrfs cloning (reflinks).

Add a VIR_STORAGE_VOL_CREATE_REFLINK flag to request
a reflink copy.

Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2015-01-27 13:41:14 +01:00
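
On btrfs a reflink clone is a single ioctl that shares the data blocks
copy-on-write instead of copying bytes; a self-contained sketch of what
the flag enables (the helper name is illustrative):

#include <sys/ioctl.h>
#include <linux/btrfs.h>    /* BTRFS_IOC_CLONE */

static int
cloneFileReflink(int destfd, int srcfd)    /* hypothetical helper */
{
    /* shares extents between the files instead of copying bytes;
     * fails (e.g. EXDEV) for cross-filesystem or non-btrfs clones */
    return ioctl(destfd, BTRFS_IOC_CLONE, srcfd);
}
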
Daniel P. Berrange
55ea7be7d9 Removing probing of secondary drivers
For stateless, client side drivers, it is never correct to
probe for secondary drivers. It is only ever appropriate to
use the secondary driver that is associated with the
hypervisor in question. As a result the ESX & HyperV drivers
have both been forced to do hacks where they register no-op
drivers for the ones they don't implement.

For stateful, server side drivers, we always just want to
use the same built-in shared driver. The exception is
virtualbox which is really a stateless driver and so wants
to use its own server side secondary drivers. To deal with
this virtualbox has to be built as 3 separate loadable
modules to allow registration to work in the right order.

This can all be simplified by introducing a new struct
recording the precise set of secondary drivers each
hypervisor driver wants:

struct _virConnectDriver {
    virHypervisorDriverPtr hypervisorDriver;
    virInterfaceDriverPtr interfaceDriver;
    virNetworkDriverPtr networkDriver;
    virNodeDeviceDriverPtr nodeDeviceDriver;
    virNWFilterDriverPtr nwfilterDriver;
    virSecretDriverPtr secretDriver;
    virStorageDriverPtr storageDriver;
};

Instead of registering the hypervisor driver, we now
just register a virConnectDriver instead. This allows
us to remove all probing of secondary drivers. Once we
have chosen the primary driver, we immediately know the
correct secondary drivers to use.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
2015-01-27 12:02:04 +00:00
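
A sketch of what registration looks like under this scheme (the "foo"
driver names are placeholders):

static virConnectDriver fooConnectDriver = {
    .hypervisorDriver = &fooHypervisorDriver,
    .storageDriver = &fooStorageDriver,
    /* members left NULL simply mean "no secondary driver" */
};

if (virRegisterConnectDriver(&fooConnectDriver,
                             false /* setSharedDrivers */) < 0)
    return -1;
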
Peter Krempa
8ef4f598f1 storage: Fix printing/casting of uid_t/gid_t
Other parts of libvirt use "%u" for formatting uid/gid and typecast to
unsigned int. Storage driver used the signed variant.
2014-12-08 11:36:29 +01:00
Luyao Huang
87b9437f89 storage: fix crash caused by unchecked return before setting close callback
https://bugzilla.redhat.com/show_bug.cgi?id=1087104#c5

When trying to use an invalid offset with virStorageVolUpload(), libvirt
fails in virFDStreamOpenFileInternal(); however, storageVolUpload() does
not check the return value and calls
virFDStreamSetInternalCloseCb() right after. The stream doesn't have
privateData yet (it is NULL), and the daemon then crashes.

0  0x00007f09429a9c10 in pthread_mutex_lock () from /lib64/libpthread.so.0
1  0x00007f094514dbf5 in virMutexLock (m=<optimized out>) at util/virthread.c:88
2  0x00007f09451cb211 in virFDStreamSetInternalCloseCb at fdstream.c:795
3  0x00007f092ff2c9eb in storageVolUpload at storage/storage_driver.c:2098
4  0x00007f09451f46e0 in virStorageVolUpload at libvirt.c:14000
5  0x00007f0945c78fa1 in remoteDispatchStorageVolUpload at remote_dispatch.h:14339
6  remoteDispatchStorageVolUploadHelper at remote_dispatch.h:14309
7  0x00007f094524a192 in virNetServerProgramDispatchCall at rpc/virnetserverprogram.c:437

Signed-off-by: Luyao Huang <lhuang@redhat.com>
2014-12-03 17:36:07 +01:00
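
The shape of the fix is simply to test the return value before touching
the stream (a sketch; the callback name here is hypothetical):

if (backend->uploadVol(conn, pool, vol, stream, offset, length) < 0)
    goto cleanup;    /* stream setup failed; privateData is still NULL */

/* only now is the stream fully initialized and safe to hook */
virFDStreamSetInternalCloseCb(stream, volUploadCloseCb, cbdata, NULL);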