https://bugzilla.redhat.com/show_bug.cgi?id=1256999
After creating a copy of the 'authdef' in a pool -> disk translation,
unconditionally clear the 'authType' in the resulting disk auth def
structure since that's used for a storage pool and not a disk. This
ensures virStorageAuthDefFormat will properly format the <auth> XML
for a <disk> (e.g. it won't have a <auth type='%s'.../>).
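Roughly, right after the copy the translation path ends up doing something like this (a sketch only; 'def' is the disk definition being translated and VIR_STORAGE_AUTH_TYPE_NONE is the existing "unset" enum value):
    /* The <auth type='...'> attribute only applies to storage pools, so
     * clear it in the copy that will be used for the <disk>. */
    authdef->authType = VIR_STORAGE_AUTH_TYPE_NONE;
    def->src->auth = authdef;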
https://bugzilla.redhat.com/show_bug.cgi?id=1233003
Although this perhaps borders on a 'don't do that' kind of scenario: if
someone creates a volume in a pool outside of libvirt and then uses that
same name to create a volume in the pool via libvirt, the creation
will fail and in some cases cause the identically named volume to be deleted.
This patch refreshes the pool just prior to checking whether the
named volume exists before creating the volume in the pool. While
there is still a timing window in which a file could be created after the
check - at least we tried. At that point, someone is being malicious.
Since commit e0139e3, we update the pool allocation with
the user-provided allocation values.
For qcow2, the allocation is ignored when building the volume,
but we still subtracted it from the pool's allocation.
This can result in interesting values if the user-provided
allocation is large enough:
Capacity: 104.71 GiB
Allocation: 109.13 GiB
Available: 16.00 EiB
We already do a VolRefresh on volume creation. Also refresh
the volume after building it and use the new value to update the pool.
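A rough sketch of the intended flow (names follow the storage driver; error handling trimmed):
    /* Refresh the volume after building it, so that for formats like qcow2,
     * where the user-provided allocation is ignored, the pool accounting
     * uses the allocation actually present on disk. */
    if (backend->refreshVol &&
        backend->refreshVol(conn, pool, voldef) < 0)
        goto cleanup;

    pool->def->allocation += voldef->target.allocation;
    pool->def->available -= voldef->target.allocation;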
https://bugzilla.redhat.com/show_bug.cgi?id=1163091
Commit id '155ca616' added the 'refreshVol' API. In an NFS root-squash
environment it was possible that if the volume just created from XML wasn't
properly created with the right uid/gid and/or mode, then the follow-up
refreshVol would fail to open the volume in order to get the allocation/
capacity values. This would leave the volume still on the server and
cause a libvirtd crash because 'voldef' would be in the pool list, but
the cleanup code would free it.
When virsh vol-clone is attempted on a raw file where capacity > allocation,
the resulting cloned volume has a size that matches the virtual size of
the parent, instead of matching its actual on-disk size.
This patch fixes the cloned disk to have the same _allocated_size_ as
the parent file from which it was cloned.
Ref: http://www.redhat.com/archives/libvir-list/2015-May/msg00050.html
Also fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1130739
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
This patch reverts commit 4749d82a which tried to tweak the logic in
volume creation. We realloc'd and updated our object list before we executed
volume building within a specific storage backend. If that failed, we
had to update (again) our object list back to the original state as it was before the
build and delete the volume from the pool (even though it didn't exist - this
truly depends on the backend).
I misunderstood the base idea, which was to be able to poll the status of the
volume creation using vol-info. After commit 4749d82a this wasn't possible
anymore, although no BZ has been reported yet.
Commit 4749d82a also claimed to fix
https://bugzilla.redhat.com/show_bug.cgi?id=1223177, but commit c8be606b of the
same series as 4749d82ad (which was more of a refactor than a fix)
fixes the same issue, so the revert should be pretty straightforward.
Furthermore, BZ https://bugzilla.redhat.com/show_bug.cgi?id=1241454 can be
fixed with this revert.
Commit 2a31c5f0 introduced support for storage pool state XMLs; however,
it also introduced a regression:
    if (!virStoragePoolObjIsActive(pool)) {
        virStoragePoolObjUnlock(pool);
        continue;
    }
The idea behind this was that since we've got state XMLs and the pool
wasn't marked as active by the autostart routine (if the autostart flag had been
set earlier), the pool is inactive and we can leave it be and continue with
other pools. However, filesystem-type pools like fs, dir, and possibly netfs are
supposed to be active if the filesystem is mounted on the host. And this is
exactly where the regression occurs: e.g. a pool of type 'dir' which has
previously been destroyed and marked as !autostart gets filtered out
by the condition above.
The resolution should simply be to remove the condition completely:
all pools will get their 'active' flag updated by the check callback, and if
they do not support such a callback, the logic doesn't change and such
pools will be inactive by default (e.g. RBD, even if a state XML exists).
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1238610
Libvirt periodically refreshes all volumes in a storage pool, including
the volumes being cloned.
While cloning a storage volume from its parent, we drop the pool locks. A subsequent
volume refresh sometimes changes the allocation for an ongoing copy, and leads
to corrupt images.
Fix: Introduce a shadow volume that isolates the volume object under refresh
from the base volume which has a copy ongoing.
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1200206
Commit id '1b4eaa61' added the ability to have a mode='direct' for
an iscsi disk volume. It relied on virStorageTranslateDiskSourcePool
in order to copy any disk source pool authentication information to
the direct disk volume, but it neglected to also copy the 'secrettype'
field which ends up being used in the domain volume formatting code.
Adding a secrettype for this case will allow for proper formatting later
and allow disk snapshotting to work properly.
Additionally, libvirtd restart processing would fail to find the domain
since the translation processing code is run after domain XML processing,
so handle the case where the authdef could have an empty secrettype
field when processing the auth, and additionally skip the
actual and expected auth secret type checks for a DISK_VOLUME since that
data will be reassembled later during translation processing of the
running domain.
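A sketch of the secrettype handling during translation (the particular helper used to derive the usage type name is an assumption here):
    /* A disk translated from a source pool may arrive without a secrettype,
     * so fill one in before it is needed for formatting. */
    if (!authdef->secrettype &&
        VIR_STRDUP(authdef->secrettype,
                   virSecretUsageTypeToString(VIR_SECRET_USAGE_TYPE_ISCSI)) < 0)
        goto cleanup;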
We update the pool volume object list before we actually create any
volume. If buildVol fails, we then try to delete the volume in the
storage as well as remove it from our structures. The problem is that
any backend that supports both buildVol and deleteVol would fail in this
case, which is completely unnecessary. This patch causes the update to
take place after we know a volume has been created successfully, thus no
removal in case of a buildVol failure is necessary.
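In other words, the ordering becomes roughly (a sketch with error paths trimmed):
    /* Build the volume first ... */
    if (backend->buildVol &&
        backend->buildVol(conn, pool, voldef, flags) < 0)
        goto cleanup;

    /* ... and only then publish it in the pool's volume object list, so a
     * buildVol failure requires no removal from our structures. */
    if (VIR_APPEND_ELEMENT_COPY(pool->volumes.objs,
                                pool->volumes.count, voldef) < 0)
        goto cleanup;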
https://bugzilla.redhat.com/show_bug.cgi?id=1223177
Commit id '2ac0e647' for https://bugzilla.redhat.com/show_bug.cgi?id=1206521
was meant to be a generic check for the CreateVol, CreateVolFrom, and
DeleteVol paths to check whether the storage backend changed the pool's view
of the allocation or available values.
Unfortunately, as it turns out, this caused a side effect: when the disk backend
created an extended partition, there would be no actual storage removed from
the pool, thus the checks would not find any change in allocation or
available and would incorrectly update the pool values using the size of the
extended partition. A subsequent refresh of the pool would reset the
values appropriately.
This patch modifies those checks so that, for the disk backend only, the
pool allocation and available values are specifically not updated, rather
than relying on generic before-and-after checks.
This never worked.
In 0.9.10 when this API was introduced, it was intended that
the SHRINK flag combined with DELTA would shrink the volume by
the specified capacity (to avoid passing negative numbers).
See commit 055bbf4.
When the SHRINK flag was finally implemented for the first backend
in 1.2.13 (commit aa9aa6a), it was only implemented for absolute
values, and with the DELTA flag the volume was always extended,
regardless of the SHRINK flag.
Treat the SHRINK flag as a minus sign when used together with DELTA,
to allow shrinking volumes as was documented in the API since 0.9.10.
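The resulting flag handling is roughly (a sketch of the delta-to-absolute conversion):
    if (flags & VIR_STORAGE_VOL_RESIZE_DELTA) {
        if (flags & VIR_STORAGE_VOL_RESIZE_SHRINK)
            /* SHRINK acts as a minus sign: shrink by the given delta,
             * clamping at zero. */
            abs_capacity = vol->target.capacity - MIN(capacity, vol->target.capacity);
        else
            abs_capacity = vol->target.capacity + capacity;
        flags &= ~VIR_STORAGE_VOL_RESIZE_DELTA;
    }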
https://bugzilla.redhat.com/show_bug.cgi?id=1220213
Since shrinking a volume below existing allocation is not allowed,
it is not possible for a successful resize with VOL_RESIZE_ALLOCATE
to increase the pool's available value.
Even with the SHRINK flag it is possible to extend the current
allocation or even the capacity. Remove the overflow from the
delta computation with this flag and perform the check even if the
flag was specified.
https://bugzilla.redhat.com/show_bug.cgi?id=1073305
If you end up with a state file for a pool that no longer starts up
or refreshes correctly, the state file is never removed and adds
noise to the logs every time libvirtd is started.
If the initial state syncing fails, delete the statefile.
After pool startup we call refreshPool(). If that fails, we leave
a stale pool state file hanging around.
Hit this trying to create a pool with qemu:///session containing
root-owned files.
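A rough sketch of the cleanup in the state-sync loop (assuming the state file path is composed with virFileBuildPath from the driver's stateDir):
    if (backend->refreshPool(conn, pool) < 0) {
        /* The pool could not be synced; drop its stale state file so it
         * doesn't generate noise on every subsequent daemon start. */
        char *stateFile = virFileBuildPath(driver->stateDir,
                                           pool->def->name, ".xml");
        if (stateFile)
            unlink(stateFile);
        VIR_FREE(stateFile);
        virStoragePoolObjUnlock(pool);
        continue;
    }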
https://bugzilla.redhat.com/show_bug.cgi?id=1206521
If the backend driver updates the pool available and/or allocation values,
then the storage_driver VolCreateXML, VolCreateXMLFrom, and VolDelete APIs
should not change the value; otherwise, it will appear as if the values
were "doubled" for each change. Additionally since unsigned arithmetic will
be used depending on the size and operation, either or both values could be
appear to be much larger than they should be (in the EiB range).
Currently only the disk pool updates the values, but other pools could.
Assume a "fresh" disk pool of 500 MiB using /dev/sde:
$ virsh pool-info disk-pool
...
Capacity: 509.88 MiB
Allocation: 0.00 B
Available: 509.84 MiB
$ virsh vol-create-as disk-pool sde1 --capacity 300M
$ virsh pool-info disk-pool
...
Capacity: 509.88 MiB
Allocation: 600.47 MiB
Available: 16.00 EiB
The following assumes the disk backend has been updated to refresh the disk
pool at deletion of a primary partition as well as an extended partition:
$ virsh vol-delete --pool disk-pool sde1
Vol sde1 deleted
$ virsh pool-info disk-pool
...
Capacity: 509.88 MiB
Allocation: 9.73 EiB
Available: 6.27 EiB
This patch will check if the backend updated the pool values and honor that
update.
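Conceptually the check looks like this (a sketch using the delete path; the create paths are analogous):
    /* Remember the pool's values before calling into the backend and only
     * adjust them ourselves if the backend left them untouched. */
    unsigned long long origAlloc = pool->def->allocation;
    unsigned long long origAvail = pool->def->available;

    if (backend->deleteVol(conn, pool, voldef, 0) < 0)
        goto cleanup;

    if (pool->def->allocation == origAlloc &&
        pool->def->available == origAvail) {
        pool->def->allocation -= voldef->target.allocation;
        pool->def->available += voldef->target.allocation;
    }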
https://bugzilla.redhat.com/show_bug.cgi?id=1073305
When creating a volume in a pool, the creation allows the 'capacity'
value to be larger than the available space in the pool. As long as
the 'allocation' value will fit in the space, the volume will be created.
However, resizing the volume checks were made with the new absolute
capacity value against existing capacity + the available space without
regard for whether the new absolute capacity was actually allocating
space or not. For example, in a pool with 75G of available space, creating
a volume with a capacity of 100G and an allocation of 10G will succeed;
however, if the volume instead used a capacity of 10G and was then resized
to a capacity of 100G, the code would fail to allow the backend
to try the resize.
Furthermore, when updating the pool "available" and "allocation" values,
the resize code would just "blindly" adjust them regardless of whether
space was "allocated" or just "capacity" was being adjusted. This left
a scenario whereby a resize to 100G would fail; however, a resize to 50G
followed by one to 100G would both succeed. Again, neither was adjusting
the allocation value, just the "capacity" value.
This patch adds more logic to the resize code to understand whether the
new capacity value is actually "allocating" space as well and whether it
is shrinking or expanding. Since unsigned arithmetic is involved, it is quite
possible that we would otherwise adjust the pool size values incorrectly.
This patch also ensures that updates to the pool values only occur if we
actually performed the allocation.
NB: The storageVolDelete, storageVolCreateXML, and storageVolCreateXMLFrom
each only updates the pool allocation/availability values by the target
volume allocation value.
The 'checkPool' callback was originally part of the storageDriverAutostart function,
but the pools need to be checked earlier, during the initialization phase,
otherwise we can't start a domain which mounts a volume after the
libvirtd daemon has restarted. This is because qemuProcessReconnect is called
earlier than storageDriverAutostart. Therefore the 'checkPool' logic has been
moved to storagePoolUpdateAllState, which is called inside storageDriverInitialize.
We also need a valid 'conn' reference to be able to execute 'refreshPool'
during the initialization phase. Though it isn't available until storageDriverAutostart,
all of our storage backends ignore the 'conn' pointer anyway, except for RBD;
but RBD doesn't support the 'checkPool' callback, so it's safe to pass
conn = NULL in this case.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1177733
This patch introduces a new virStorageDriverState element, stateDir.
It also adds the necessary changes to storageStateInitialize so that
directory initialization becomes more generic.
In order to be able to use 'checkPool' inside functions which do not
have any connection reference, the 'conn' attribute needs to be dropped
from checkPool's signature, since it's not used by any storage backend
anyway.
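The callback signature therefore changes roughly as follows (a sketch of the typedef in the backend table):
    /* before */
    typedef int (*virStorageBackendCheckPool)(virConnectPtr conn,
                                              virStoragePoolObjPtr pool,
                                              bool *isActive);

    /* after */
    typedef int (*virStorageBackendCheckPool)(virStoragePoolObjPtr pool,
                                              bool *isActive);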
In virStorageVolCreateXML, add VIR_VOL_XML_PARSE_NO_CAPACITY
to the call parsing the XML of the new volume to make the capacity
optional.
If the capacity is omitted, use the capacity of the old volume.
We already do that for values that are less than the original
volume capacity.
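A sketch of the resulting logic (assuming the parser leaves the capacity at 0 when the element is omitted; 'origvol' is the volume being cloned):
    newvol = virStorageVolDefParseString(pool->def, xmldesc,
                                         VIR_VOL_XML_PARSE_NO_CAPACITY);
    if (!newvol)
        goto cleanup;

    /* Capacity omitted: default to the capacity of the volume being cloned. */
    if (newvol->target.capacity == 0)
        newvol->target.capacity = origvol->target.capacity;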
https://bugzilla.redhat.com/show_bug.cgi?id=1176510
When storageDriverAutostart is called via the virStateReload path (a 'service
libvirtd reload'), then because the volume list in the pool wasn't cleared
prior to the call, each volume would be listed multiple times (as many
times as we reload). I believe the issue was introduced by commit
id '9e093f0b', at least for the libvirtd reload path, although I suppose
the introduction of virStateReload (commit id '70da0494') could be a
different cause.
Thus, as in other places, prior to calling refreshPool we need to call
virStoragePoolObjClearVols.
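That is, roughly (a sketch of the reload path):
    /* Drop any previously loaded volume objects before refreshing, so a
     * reload doesn't accumulate duplicate volume entries. */
    virStoragePoolObjClearVols(pool);
    if (backend->refreshPool(conn, pool) < 0) {
        virStoragePoolObjUnlock(pool);
        continue;
    }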
When creating a RAW file, we don't take advantage of btrfs's
clone (reflink) capability.
Add a VIR_STORAGE_VOL_CREATE_REFLINK flag to request
a reflink copy.
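From the caller's side, requesting a reflink clone would look roughly like this (pool, volume, and XML variable names are made up for illustration):
    /* Clone 'origvol' into a new volume described by newvol_xml, asking the
     * backend to use a btrfs reflink copy. */
    virStorageVolPtr clone =
        virStorageVolCreateXMLFrom(pool, newvol_xml, origvol,
                                   VIR_STORAGE_VOL_CREATE_REFLINK);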
Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
For stateless, client side drivers, it is never correct to
probe for secondary drivers. It is only ever appropriate to
use the secondary driver that is associated with the
hypervisor in question. As a result the ESX & HyperV drivers
have both been forced to do hacks where they register no-op
drivers for the ones they don't implement.
For stateful, server side drivers, we always just want to
use the same built-in shared driver. The exception is
virtualbox which is really a stateless driver and so wants
to use its own server side secondary drivers. To deal with
this virtualbox has to be built as 3 separate loadable
modules to allow registration to work in the right order.
This can all be simplified by introducing a new struct
recording the precise set of secondary drivers each
hypervisor driver wants:
    struct _virConnectDriver {
        virHypervisorDriverPtr hypervisorDriver;
        virInterfaceDriverPtr interfaceDriver;
        virNetworkDriverPtr networkDriver;
        virNodeDeviceDriverPtr nodeDeviceDriver;
        virNWFilterDriverPtr nwfilterDriver;
        virSecretDriverPtr secretDriver;
        virStorageDriverPtr storageDriver;
    };
Instead of registering the hypervisor driver, we now
just register a virConnectDriver instead. This allows
us to remove all probing of secondary drivers. Once we
have chosen the primary driver, we immediately know the
correct secondary drivers to use.
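For a hypothetical stateless driver 'foo', the registration data would then look something like this sketch (only the struct shown above is taken from the patch; the 'foo' names are made up):
    static virConnectDriver fooConnectDriver = {
        .hypervisorDriver = &fooHypervisorDriver,
        .networkDriver = &fooNetworkDriver,
        .storageDriver = &fooStorageDriver,
        /* Any secondary driver left NULL simply isn't provided by 'foo';
         * no no-op placeholder drivers are needed anymore. */
    };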
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1087104#c5
When trying to use an invalid offset with virStorageVolUpload(), libvirt
fails in virFDStreamOpenFileInternal(); however, libvirt does not check the
return value in storageVolUpload() and calls
virFDStreamSetInternalCloseCb() right after. But the stream doesn't have
privateData yet (it is NULL), so the daemon then crashes:
    #0 0x00007f09429a9c10 in pthread_mutex_lock () from /lib64/libpthread.so.0
    #1 0x00007f094514dbf5 in virMutexLock (m=<optimized out>) at util/virthread.c:88
    #2 0x00007f09451cb211 in virFDStreamSetInternalCloseCb at fdstream.c:795
    #3 0x00007f092ff2c9eb in storageVolUpload at storage/storage_driver.c:2098
    #4 0x00007f09451f46e0 in virStorageVolUpload at libvirt.c:14000
    #5 0x00007f0945c78fa1 in remoteDispatchStorageVolUpload at remote_dispatch.h:14339
    #6 remoteDispatchStorageVolUploadHelper at remote_dispatch.h:14309
    #7 0x00007f094524a192 in virNetServerProgramDispatchCall at rpc/virnetserverprogram.c:437
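The fix is essentially to check the return value before touching the stream again; roughly (a sketch with abridged, partly hypothetical helper names and argument lists):
    /* Don't register the close callback on a stream whose setup failed and
     * whose privateData is therefore still NULL. */
    if (backend->uploadVol(conn, pool, voldef, stream, offset, length) < 0)
        goto cleanup;

    virFDStreamSetInternalCloseCb(stream, storageVolUploadCloseCb, cbdata, NULL);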
Signed-off-by: Luyao Huang <lhuang@redhat.com>
While this could be exposed as a public API, that hasn't been done yet as
there's no demand for it. Anyway, this just prepares
the environment for easier volume creation on the destination.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Since virStoragePoolFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
Since virStorageVolFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
https://bugzilla.redhat.com/show_bug.cgi?id=1159180
The virStoragePoolSourceFindDuplicate function only checks the incoming definition
against the same type of pool as the def; however, for "scsi_host" and
"fc_host" adapter pools, it's possible that either some pool "scsi_host"
adapter definition is already using the scsi_hostN that the "fc_host"
adapter definition wants to use or some "fc_host" pool adapter definition
is using a vHBA scsi_hostN or parent scsi_hostN that an incoming "scsi_host"
definition is trying to use.
This patch adds the mismatched type checks and adds extraneous comments
to describe what each check is determining.
This patch also modifies the documentation to describe which scsi_hostN
devices a "scsi_host" source adapter should use and which to avoid. It also
updates the parent definition to specifically call out that for mixed
environments it's better to define which parent to use so that the duplicate
pool checks can be done properly.
The shared storage driver is stateful and inside the daemon so
there is no need to use the storagePrivateData field to get the
driver handle. Just access the global driver handle directly.
Add a new parameter to virStorageFileGetMetadata that will break the
backing chain detection process and report a useful error message rather
than having to use virStorageFileChainGetBroken.
This patch just introduces the option, usage will be provided
separately.
There were two occurrences of attempting to initialize actualType by
calling virStorageSourceGetActualType(src) prior to an if (!src) check,
resulting in Coverity complaining about the possible NULL dereference
of src in virStorageSourceGetActualType().
Resolve this by moving the actualType assignment until after the !src check.
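E.g. (a trivial sketch of the reordering):
    /* Check src before dereferencing it. */
    if (!src)
        return -1;

    actualType = virStorageSourceGetActualType(src);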
Currently, the qemu driver uses qemuTranslateDiskSourcePool()
to translate disk volume information. This function is general
enough that it could be used by other drivers as well,
so move it to storage/storage_driver.c along with its helpers:
- qemuTranslateDiskSourcePool: move to storage/storage_driver.c
and rename to virStorageTranslateDiskSourcePool,
- qemuAddISCSIPoolSourceHost: move to storage/storage_driver.c
and rename to virStorageAddISCSIPoolSourceHost,
- qemuTranslateDiskSourcePoolAuth: move to storage/storage_driver.c
and rename to virStorageTranslateDiskSourcePoolAuth,
- Update users of qemuTranslateDiskSourcePool to use a
new name.
Implement a ZFS storage backend driver. It is currently supported
only on FreeBSD because of ZFS limitations on Linux.
Features supported:
- pool-start, pool-stop
- pool-info
- vol-list
- vol-create / vol-delete
A pool definition looks like this:
    <pool type='zfs'>
      <name>myzfspool</name>
      <source>
        <name>actualpoolname</name>
      </source>
    </pool>
The 'actualpoolname' value is the name of the pool on the system,
as shown by the 'zpool list' command. A target element makes no sense
here because the volume path is always /dev/zvol/$poolname/$volname.
The user has to create the pool on their own; this driver doesn't
currently support pool creation.
A volume can be used with QEMU by adding an entry like this:
    <disk type='volume' device='disk'>
      <driver name='qemu' type='raw'/>
      <source pool='myzfspool' volume='vol5'/>
      <target dev='hdc' bus='ide'/>
    </disk>
https://bugzilla.redhat.com/show_bug.cgi?id=1072653
Upon successful upload of a volume, the target volume and storage pool
were not updated to reflect any changes as a result of the upload. Make
use of the existing stream close callback mechanism to force a backend
pool refresh to occur in a separate thread once the stream closes. The
separate thread should avoid potential deadlocks if the refresh needed
to wait on some event from the event loop which is used to perform
the stream callback.
With my intended use of storage driver assistance to chown files on remote
storage, we will need a witness that will tell us whether the given
storage volume supports the operations needed by the storage driver.
Gluster storage works on a similar principle to NFS, where it takes the
uid and gid of the actual process and uses them to access the storage
volume on the remote server. This introduces the need to chown storage
files on gluster via its native API.