We have macros for both positive and negative string matching, so there
is no need to use !STREQ or !STRNEQ. While dropping these, a new
syntax-check rule is introduced to make sure we don't reintroduce them.
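For illustration, libvirt's internal.h defines the pairs along these lines,
which is what makes the negated forms redundant (the call sites shown are
illustrative):

    #define STREQ(a, b) (strcmp((a), (b)) == 0)
    #define STRNEQ(a, b) (strcmp((a), (b)) != 0)

    /* instead of the now-forbidden negation ... */
    if (!STREQ(def->name, "default"))
        return -1;
    /* ... use the equivalent positive macro: */
    if (STRNEQ(def->name, "default"))
        return -1;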
Signed-off-by: Ishmanpreet Kaur Khera <khera.ishman@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1233003
Track when the logical volume was successfully created in order to
properly handle the call to virStorageBackendLogicalDeleteVol. It's
possible that the failure to create was because someone created an
LV in the pool outside of libvirt's knowledge. In this case, we don't
want to delete that LV. A subsequent or future refresh of the pool
will find the volume and cause an earlier failure.
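A minimal sketch of the tracking, with illustrative names (the actual patch
wires this through the logical backend's buildVol path):

    bool created = false;
    ...
    if (virCommandRun(cmd, NULL) < 0)   /* lvcreate failed */
        goto error;
    created = true;                     /* the LV exists because we made it */
    ...
 error:
    if (created)
        virStorageBackendLogicalDeleteVol(conn, pool, vol, 0);
    return -1;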
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id '1b5685da' refactored the code to move buildvoldef inside
the buildVol conditional; however, the VIR_FREE of the memory was
left only when 'buildret' failed, thus we're leaking memory.
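The shape of the fix, sketched (the free must happen on both paths):

    buildret = backend->buildVol(conn, pool, buildvoldef, flags);
    ...
    if (buildret < 0) {
        /* error handling as before */
    }
    VIR_FREE(buildvoldef);   /* previously only inside the buildret < 0 branch */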
Signed-off-by: John Ferlan <jferlan@redhat.com>
As of commit id '155ca616' a 'refreshVol' is called after a buildVol
succeeds in storageVolCreateXML, thus a volStorageBackendSheepdogRefreshVolInfo
call in virStorageBackendSheepdogBuildVol is no longer necessary.
Additionally, the 'conn' parameter becomes unused.
Signed-off-by: John Ferlan <jferlan@redhat.com>
As of commit id '155ca616' a 'refreshVol' is called after the buildVol
succeeds in storageVolCreateXML, thus the volStorageBackendRBDRefreshVolInfo
call in virStorageBackendRBDBuildVol is no longer necessary.
Signed-off-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1256999
After creating a copy of the 'authdef' in a pool -> disk translation,
unconditionally clear the 'authType' in the resulting disk auth def
structure since that's used for a storage pool and not a disk. This
ensures virStorageAuthDefFormat will properly format the <auth> XML
for a <disk> (e.g. it won't have a <auth type='%s'.../>).
https://bugzilla.redhat.com/show_bug.cgi?id=1247987
Calculation of the extended and logical partition values for the disk
pool is complex. As the bz points out, an extended partition should have
its allocation initialized to 0 (zero) and keep the capacity as the size
dictated by the extents read. Then for each logical partition found,
adjust the allocation of the extended partition.
Finally, previous logic tried to avoid recalculating things if a logical
partition was deleted; however, since we now have special logic to handle
the allocation of the extended partition, just make life easier by reading
the partition table again - rather than doing the reverse adjustment.
https://bugzilla.redhat.com/show_bug.cgi?id=1251461
When 'starting' up a disk pool, we need to make sure the label on the
device is valid; otherwise, the followup refreshPool will assume the
disk has been properly formatted for use. If we don't find the valid
label, then refuse the start and give a proper reason.
Let's check to ensure we can find the Partition Table in the label
and that libvirt actually recognizes that type; otherwise, when we
go to read the partitions during a refresh operation we may not be
reading what we expect.
This will expand upon the types of errors or reasons for which a build
would fail, so we can create more direct error messages.
Modify virStorageBackendDiskValidLabel to add a 'writelabel' parameter.
While initially for the purpose of determining whether the label should
be written during DiskBuild, a future use during DiskStart could determine
whether the pool should be started using the label found. Also augment
the error messages to give a hint as to what someone may need to do
or why the command failed.
Create a new function virStorageBackendDiskValidLabel to handle checking
whether there is a label on the device and whether it's valid or not.
While initially for the purpose of determining whether the label can be
overwritten during DiskBuild, a future use during DiskStart could determine
whether the pool should be started using the label found.
https://bugzilla.redhat.com/show_bug.cgi?id=1233003
Although perhaps bordering on a 'don't do that' type of scenario: if
someone creates a volume in a pool outside of libvirt, then uses that
same name to create a volume in the pool via libvirt, the creation
will fail and in some cases cause the same-named volume to be deleted.
This patch refreshes the pool just prior to checking whether the
named volume exists before creating the volume in the pool. While
there is still a timing window in which a file could be created after
the check, at least we tried; at that point, someone is being malicious.
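A sketch of the reordering in storageVolCreateXML, assuming the backend
provides a refreshPool callback:

    /* re-read the pool so volumes created behind libvirt's back are known */
    virStoragePoolObjClearVols(pool);
    if (backend->refreshPool(conn, pool) < 0)
        goto cleanup;

    if (virStorageVolDefFindByName(pool, voldef->name)) {
        virReportError(VIR_ERR_STORAGE_VOL_EXIST,
                       _("'%s' already exists"), voldef->name);
        goto cleanup;
    }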
Since commit e0139e3, we update the pool allocation with
the user-provided allocation values.
For qcow2, the allocation is ignored for volume building,
but we still subtracted it from the pool's allocation.
This can result in interesting values if the user-provided
allocation is large enough:
Capacity: 104.71 GiB
Allocation: 109.13 GiB
Available: 16.00 EiB
We already do a VolRefresh on volume creation. Also refresh
the volume after creating and use the new value to update the pool.
https://bugzilla.redhat.com/show_bug.cgi?id=1163091
Similar to commit id '35847860', it's possible to attempt to create
a 'netfs' directory in an NFS root-squash environment which will cause
the 'vol-delete' command to fail. It's also possible that error paths
from 'vol-create' would hit an error removing a created directory
if the permissions were incorrect (and disallowed root access).
Thus rename the virFileUnlink to virFileRemove to match the C API
functionality, adjust the code to use rmdir or unlink depending on
the path type, and then use/call it for the VIR_STORAGE_VOL_DIR
volume type.
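A self-contained sketch of the path-type selection (helper name
hypothetical; the real code also carries the root-squash fork handling):

    #include <sys/stat.h>
    #include <unistd.h>

    static int
    remove_path(const char *path)
    {
        struct stat sb;

        if (stat(path, &sb) < 0)
            return -1;
        if (S_ISDIR(sb.st_mode))
            return rmdir(path);    /* directory volumes */
        return unlink(path);       /* plain files */
    }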
Commit id '155ca616' added the 'refreshVol' API. In an NFS root-squash
environment it was possible that if the just created volume from XML wasn't
properly created with the right uid/gid and/or mode, then the followup
refreshVol will fail to open the volume in order to get the allocation/
capacity values. This would leave the volume still on the server and
cause a libvirtd crash because 'voldef' would be in the pool list, but
the cleanup code would free it.
Commit id '7c2d65dde2' changed the default value of mode to be -1 if not
supplied in the XML, which should cause creation of the volume using the
default mode of VIR_STORAGE_DEFAULT_VOL_PERM_MODE; however, the check
used to decide between the default and the provided value tested whether
mode was '0'. This patch fixes the issue by checking whether the 'mode'
was provided in the XML and using that value if so.
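Roughly, the corrected consumer check (a sketch):

    /* (mode_t) -1 now means "not supplied in the XML" */
    mode_t mode = (vol->target.perms->mode == (mode_t) -1)
        ? VIR_STORAGE_DEFAULT_VOL_PERM_MODE
        : vol->target.perms->mode;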
In an NFS root-squashed environment the 'vol-delete' command will fail to
'unlink' the target volume since it was created under a different uid:gid.
This code continues the concepts introduced in virFileOpenForked and
virDirCreate[NoFork] with respect to running the unlink command under
the uid/gid of the child. Unlike the other two, don't retry on EACCES
(that's why we're here doing this now).
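A self-contained sketch of the fork-and-drop-privileges idea (helper name
hypothetical; the real code reuses libvirt's virFork machinery):

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int
    unlink_as(const char *path, uid_t uid, gid_t gid)
    {
        pid_t pid = fork();
        int status;

        if (pid < 0)
            return -1;
        if (pid == 0) {                    /* child: become uid:gid, unlink */
            if (setgid(gid) < 0 || setuid(uid) < 0)
                _exit(1);
            _exit(unlink(path) < 0 ? 1 : 0);
        }
        if (waitpid(pid, &status, 0) < 0)  /* parent: reap and report */
            return -1;
        return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
    }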
While a zero allocation in safezero should be fine, it isn't when we use
posix_fallocate, which returns EINVAL on a zero allocation.
While we could skip the zero allocation only in safezero_posix_fallocate,
it's an optimization to do it for all allocations.
This fixes vm installation via virtinst for me which otherwise aborts
like:
Starting install...
Retrieving file linux... | 5.9 MB 00:01 ...
Retrieving file initrd.gz... | 29 MB 00:07 ...
ERROR Couldn't create storage volume 'virtinst-linux.sBgds4': 'cannot fill file '/var/lib/libvirt/boot/virtinst-linux.sBgds4': Invalid argument'
The error was introduced by e30297b0, as spotted by Chunyan Liu.
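A sketch of the guard; per the above, it lives in the generic safezero()
so every implementation benefits:

    static int
    safezero(int fd, off_t offset, off_t len)
    {
        /* posix_fallocate() would return EINVAL for len == 0 */
        if (len == 0)
            return 0;
        return safezero_posix_fallocate(fd, offset, len);
    }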
In commit 155ca616e, a change was introduced that no longer allowed defining
volumes via XML with a capacity of '0'. Because we check for info.size_arg
to be non-zero, this use-case fails. This patch allows info.size_arg to be
zero if no backing store is specified.
Signed-off-by: Chris J Arges <chris.j.arges@canonical.com>
Currently, when trying to virsh pool-define/virsh pool-build a new
'dir' pool, if the target directory already exists, virsh
pool-build/virStoragePoolBuild will error out. This is a change of
behaviour compared to e.g. libvirt 1.2.13.
This is caused by the wrong type being used for the dir_create_flags
variable in virStorageBackendFileSystemBuild: it's defined as a bool
but used as a flag bit field, so it should be unsigned int (this matches
the type virDirCreate expects for this parameter).
This should fix https://bugzilla.gnome.org/show_bug.cgi?id=752417 (GNOME
Boxes) and https://bugzilla.redhat.com/show_bug.cgi?id=1244080
(downstream virt-manager).
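The essence of the one-line type fix (flag value shown for illustration):

    /* before: flag bits silently collapse to 0 or 1 */
    bool dir_create_flags = VIR_DIR_CREATE_ALLOW_EXIST;

    /* after: the type virDirCreate actually expects */
    unsigned int dir_create_flags = VIR_DIR_CREATE_ALLOW_EXIST;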
Resolve an error reporting bug introduced by commit id '761491e', which
just took the return of virStorageBackendRBDCreateImage and used it as
the basis for the message generated. This would report EPERM regardless
of the error seen.
We used to look at the librbd code version and depending on that
we would invoke rbd_create3() or rbd_create().
Since librbd version 0.67.9 we can however tell RBD that it should
create rbd format 2 images even if we invoke rbd_create().
The fewer options we pass to librbd, the more we can lean on the sane
defaults it uses.
For rbd_create3() we had things like the stripe count and unit hardcoded
in libvirt and that might cause problems down the road.
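A hedged sketch of how that can look with librados/librbd (the exact
option plumbing in the patch may differ):

    /* ask for format 2 images without resorting to rbd_create3() */
    if (rados_conf_set(cluster, "rbd_default_format", "2") < 0)
        goto cleanup;

    if (rbd_create(ioctx, name, capacity, &order) < 0)
        goto cleanup;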
Signed-off-by: Wido den Hollander <wido@widodh.nl>
When virsh vol-clone is attempted on a raw file where capacity > allocation,
the resulting cloned volume has a size that matches the virtual size of
the parent, instead of matching its actual disk size.
This patch fixes the cloned disk to have same _allocated_size_ as
the parent file from which it was cloned.
Ref: http://www.redhat.com/archives/libvir-list/2015-May/msg00050.html
Also fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1130739
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Instead of storing the remaining bytes, store the position of the first
unallocated byte. This will allow changing the number of bytes copied
by virStorageBackendCopyToFD without changing the safezero call.
No functional impact.
This patch reverts commit 4749d82a which tried to tweak the logic in
volume creation. We did realloc and update our object list before we executed
volume building within a specific storage backend. If that failed, we
had to update (again) our object list to the original state as it was before the
build and delete the volume from the pool (even though it didn't exist - this
truly depends on the backend).
I misunderstood the base idea, which was to be able to poll the status of
the volume creation using vol-info. After commit 4749d82a this wasn't
possible anymore, although no BZ has been reported yet.
Commit 4749d82a also claimed to fix
https://bugzilla.redhat.com/show_bug.cgi?id=1223177, but commit c8be606b of the
same series as 4749d82a (which was more of a refactor than a fix)
fixes the same issue, so the revert should be pretty straightforward.
Furthermore, BZ https://bugzilla.redhat.com/show_bug.cgi?id=1241454 can be
fixed with this revert.
Commit 2a31c5f0 introduced support for storage pool state XMLs, however
it also introduced a regression:
    if (!virStoragePoolObjIsActive(pool)) {
        virStoragePoolObjUnlock(pool);
        continue;
    }
The idea behind this was that since we've got state XMLs and the pool
wasn't marked as active by autostart routine (if the autostart flag had been
set earlier), the pool is inactive and we can leave it be and continue with
other pools. However, filesystem-type pools like fs, dir, and possibly netfs are
supposed to be active if the filesystem is mounted on the host. And this is
exactly where the regression occurs, e.g. pool type 'dir' which has been
previously destroyed and marked as !autostart gets filtered out
by the condition above.
The resolution is simply to remove the condition completely:
all pools will get their 'active' flag updated by the check callback, and if
they do not support such a callback, the logic doesn't change and such
pools will be inactive by default (e.g. RBD, even if a state XML exists).
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1238610
https://bugzilla.redhat.com/show_bug.cgi?id=1230664
Per the devmapper docs, use "/dev/mapper" or "/dev/dm-n" in order to
determine if a device is under control of DM Multipath.
So add "/dev/mapper" to the virFileExists, leaving the "/dev/mpath"
as a "legacy" option since it appears for a while it was the preferred
mechanism, but is no longer maintained
Libvirt periodically refreshes all volumes in a storage pool, including
the volumes being cloned.
While cloning a storage volume from parent, we drop pool locks. Subsequent
volume refresh sometimes changes allocation for an ongoing copy, and leads
to corrupt images.
Fix: Introduce a shadow volume that isolates the volume object under refresh
from the base which has a copy ongoing.
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1227664
If the requested format type for the new entry in the file system pool
is a 'dir', then be sure to set the vol->type correctly as would be done
when the pool is refreshed.
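The gist of the fix, mirroring what a pool refresh would set (a sketch):

    /* a 'dir' format entry is a directory volume, not a plain file */
    if (vol->target.format == VIR_STORAGE_FILE_DIR)
        vol->type = VIR_STORAGE_VOL_DIR;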
Related to:
https://bugzilla.redhat.com/show_bug.cgi?id=1171933
Rather than ignore the return status from virStorageBackendSCSIFindLUs,
cause a failure to start the pool if -1 is returned. The issue was noted
while testing the bz for iscsi: 'scsi' and 'fc' pools don't fail.
Commit id '832a9256' adjusted the code to recognize when the default
type of "unknown" was provided as the format type and to use "dos" if
found. Since the pool is built with "dos" and it could cause some
confusion when formatting the XML after building by seeing "unknown"
in the output, let's just adjust the pool's setting to "dos" so that
subsequent formats will see the value.
https://bugzilla.redhat.com/show_bug.cgi?id=1224233
Currently it's not possible to distinguish a fatal error (memory
allocation, or failure to open/read the directory) from a perhaps less
fatal one: not finding the "block" device in the directory (which may
be a disk entry without a block device).
In the case of the latter, we shouldn't cause the caller
(virStorageBackendSCSIFindLUs) to stop searching; rather, we
should allow it to try reading the next directory entry.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1186969
When generating the path to the dir for a CIFS/Samba driver, the code
would generate a source path for the mount using "%s:%s" while the
mount.cifs expects to see "//%s/%s". So check for the cifsfs and
format the source path appropriately.
Additionally, since there is no means to authenticate, the mount
needs a "-o guest" on the command line in order to anonymously mount
the Samba directory.
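A sketch of both adjustments, assuming a 'cifsfs' boolean derived from the
pool format (variable names illustrative):

    if (cifsfs) {
        /* mount.cifs expects //HOST/DIR, not HOST:DIR */
        if (virAsprintf(&src, "//%s/%s",
                        pool->def->source.hosts[0].name,
                        pool->def->source.dir) < 0)
            goto cleanup;
        /* no authentication support, so mount anonymously */
        virCommandAddArgList(cmd, "-o", "guest", NULL);
    } else {
        if (virAsprintf(&src, "%s:%s",
                        pool->def->source.hosts[0].name,
                        pool->def->source.dir) < 0)
            goto cleanup;
    }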
In order for the glusterfs boolean to be set, the pool->def->type must be
VIR_STORAGE_POOL_NETFS, thus the check within virCommandNewArgList whether
pool->def->type is VIR_STORAGE_POOL_FS will never be true, so remove it.
Instead of initializing return value to zero (success) and overwriting
it on every failure just before the control jumps onto 'out' label,
let's initialize to an error value and set to zero only when we are
sure about the success. Just follow the pattern we have in the rest of
the code.
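The pattern in question, sketched generically (function names hypothetical):

    int ret = -1;

    if (do_something() < 0)       /* every failure path just jumps out */
        goto out;
    if (do_something_else() < 0)
        goto out;

    ret = 0;                      /* only now are we sure about success */
 out:
    cleanup();
    return ret;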
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1225694
Check if the disk partition to be wiped is the extended partition, if
so then disallow it. Do this via changing the wipeVol backend to check
the volume before passing it to the common virStorageBackendVolWipeLocal.
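A sketch of the wipeVol wrapper (error text illustrative):

    /* refuse to wipe the extended partition itself */
    if (vol->source.partType == VIR_STORAGE_VOL_DISK_TYPE_EXTENDED) {
        virReportError(VIR_ERR_NO_SUPPORT,
                       _("cannot wipe extended partition '%s'"),
                       vol->target.path);
        return -1;
    }
    return virStorageBackendVolWipeLocal(conn, pool, vol, algorithm, flags);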
https://bugzilla.redhat.com/show_bug.cgi?id=1200206
Commit id '1b4eaa61' added the ability to have a mode='direct' for
an iscsi disk volume. It relied on virStorageTranslateDiskSourcePool
in order to copy any disk source pool authentication information to
the direct disk volume, but it neglected to also copy the 'secrettype'
field which ends up being used in the domain volume formatting code.
Adding a secrettype for this case will allow for proper formatting later
and allow disk snapshotting to work properly.
Additionally, libvirtd restart processing would fail to find the domain
since the translation processing code is run after domain XML processing.
So handle the case where the authdef could have an empty secrettype
field when processing the auth, and additionally skip the actual and
expected auth secret type checks for a DISK_VOLUME, since that
data will be reassembled later during translation processing of the
running domain.
https://bugzilla.redhat.com/show_bug.cgi?id=1181087
The virStorageBackendFileSystemIsMounted is called from three source paths:
checkPool, startPool, and stopPool. Both start and stop validate the FS
fields before calling *IsMounted; however, the check path makes no such call.
This could lead the code into returning true in "isActive" if for some
reason the target path for the pool was mounted. The assumption being
that if it was mounted, then we believe we started/mounted it.
It's also of note that commit id '81165294' added an error message for
the start/mount path regarding that the target is already mounted so
fail the start. That check was adjusted by commit id '13fde7ce' to
only message if actually mounted.
At one time this led the libvirtd restart autostart code to declare
that the pool was active even though the startPool would inhibit startup
and the stopPool would inhibit shutdown. The autostart path changed as
of commit id '2a31c5f0' as part of keeping storage pools started between
libvirtd restarts.
This patch adds the same check made prior to start/mount and stop/unmount
to ensure we have a valid configuration before attempting to see if the
target is already mounted to declare "isActive" or not. Finding an improper
configuration will now cause an error at checkPool, which should make it
so we can no longer be left in a situation where the pool was started and
we have no way to stop it.
https://bugzilla.redhat.com/show_bug.cgi?id=1181087
Currently the assumption on the error message is that there are
no source device paths defined when the number of devices check
fails, but in reality the XML could have had none or it could have
had more than the value supported. Adjust the error message accordingly
to make it clearer what the error really is.
We update the pool volume object list before we actually create any
volume. If buildVol fails, we then try to delete the volume both in the
storage and in our structures. The problem is that any backend that
supports both buildVol and deleteVol would fail in this case, which is
completely unnecessary. This patch causes the update to take place after
we know a volume has been created successfully, thus no removal in case
of a buildVol failure is necessary.
https://bugzilla.redhat.com/show_bug.cgi?id=1223177
https://bugzilla.redhat.com/show_bug.cgi?id=1224018
The disk pool recalculates the pool allocation, capacity, and available
values each time through processing a newly created disk partition. This
created an issue with the allocation setting since the code used is shared
with the refresh path. Each path calls virStorageBackendDiskReadPartitions
which initializes the pool values and then processes the partition table
from the 'libvirt_parthelper' utility output, with the only difference being
that create passes a specific volume to be processed while refresh passes a
NULL, indicating to process all volumes. That passed volume is checked during
the virStorageBackendDiskMakeVol call to see if the current partition described
by the volume key already exists. If it exists, then no adjustments are
made to the allocation and the next entry in the output is checked.
For the create path this resulted in only the most recently created
partition's size being accounted for in the 'allocation' setting. This
patch thus checks whether the incoming volume is NULL before clearing
the pool allocation value.
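The shape of the fix in virStorageBackendDiskReadPartitions (sketch):

    /* capacity and available are recalculated either way ... */
    pool->def->capacity = pool->def->available = 0;
    /* ... but only a full refresh (vol == NULL) may reset allocation */
    if (!vol)
        pool->def->allocation = 0;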
Commit id '2ac0e647' for https://bugzilla.redhat.com/show_bug.cgi?id=1206521
was meant to be a generic check for the CreateVol, CreateVolFrom, and
DeleteVol paths to check if the storage backend changed the pool's view
of allocation or available values.
Unfortunately, as it turns out, this caused a side effect: when the disk
backend created an extended partition, there would be no actual storage
removed from the pool, thus the checks would not find any change in
allocation or available and would incorrectly update the pool values using
the size of the extended partition. A subsequent refresh of the pool would
reset the values appropriately.
This patch modifies those checks in order to specifically not update the
pool allocation and available for only the disk backend rather than be
generic before and after checks.
This never worked.
In 0.9.10 when this API was introduced, it was intended that
the SHRINK flag combined with DELTA would shrink the volume by
the specified capacity (to avoid passing negative numbers).
See commit 055bbf4.
When the SHRINK flag was finally implemented for the first backend
in 1.2.13 (commit aa9aa6a), it was only implemented for the absolute
values and with the delta flag the volume is always extended,
regardless of the SHRINK flag.
Treat the SHRINK flag as a minus sign when used together with DELTA,
to allow shrinking volumes as was documented in the API since 0.9.10.
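A sketch of the delta handling in storageVolResize, clamping at zero so the
unsigned subtraction cannot wrap:

    if (flags & VIR_STORAGE_VOL_RESIZE_DELTA) {
        if (flags & VIR_STORAGE_VOL_RESIZE_SHRINK)
            abs_capacity = vol->target.capacity
                           - MIN(capacity, vol->target.capacity);
        else
            abs_capacity = vol->target.capacity + capacity;
    } else {
        abs_capacity = capacity;
    }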
https://bugzilla.redhat.com/show_bug.cgi?id=1220213
Since shrinking a volume below existing allocation is not allowed,
it is not possible for a successful resize with VOL_RESIZE_ALLOCATE
to increase the pool's available value.
Even with the SHRINK flag it is possible to extend the current
allocation or even the capacity. Remove the overflow when
computing delta with this flag and do the check even if the
flag was specified.
https://bugzilla.redhat.com/show_bug.cgi?id=1073305
The code already exists there, it just modified different flags. I just
noticed this when looking at the code. This patch is better viewed
with bigger context or '-W'.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Only set directory permissions at pool build time, if:
- User explicitly requested a mode via the XML
- The directory needs to be created
- We need to do the crazy NFS root-squash workaround
This allows qemu:///session to call build on an existing directory
like /tmp.
The XML parser sets a default <mode> if none is explicitly passed in.
This is then used at pool/vol creation time, and unconditionally reported
in the XML.
The problem with this approach is that it's impossible for other code
to determine if the user explicitly requested a storage mode. There
are some cases where we want to make this distinction, but we currently
can't.
Handle <mode> parsing like we handle <owner>/<group>: if no value is
passed in, set it to -1, and adjust the internal consumers to handle
it.
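Parser side, the same convention already used for owner/group (a sketch,
error handling trimmed):

    char *mode = virXPathString("string(./mode)", ctxt);
    if (!mode) {
        perms->mode = (mode_t) -1;     /* not specified in the XML */
    } else {
        unsigned int tmp;
        if (virStrToLong_ui(mode, NULL, 8, &tmp) < 0)
            goto error;                /* malformed octal mode */
        perms->mode = tmp;
    }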
Coverity points out it's possible for one of the virCommand{Output|Error}*
APIs to have not allocated 'output' and/or 'error', in which case the
strstr comparison will cause a NULL dereference.
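The guard is a plain NULL test before the substring search; a generic
sketch (the needle is illustrative):

    /* 'output'/'error' may be NULL if the command never produced them */
    if ((output && strstr(output, needle)) ||
        (error && strstr(error, needle)))
        handle_match();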
Signed-off-by: John Ferlan <jferlan@redhat.com>
Just as we allow stopping filesystem pools when they were unmounted
externally, do not fail to stop an iscsi pool when someone else
closed the session externally.
Reported at:
https://bugzilla.redhat.com/show_bug.cgi?id=1171984
Trying to use qemu:///session to create a storage pool pointing at
/tmp will usually fail with something like:
$ virsh pool-start tmp
error: Failed to start pool tmp
error: cannot open volume '/tmp/systemd-private-c38cf0418d7a4734a66a8175996c384f-colord.service-kEyiTA': Permission denied
If any volume in an FS pool can't be opened by the daemon, the refresh
fails, and the pool can't be used.
This causes pain for virt-install/virt-manager though. Imagine a user
downloads a disk image to /tmp. virt-manager wants to import /tmp as
a storage pool, so we can detect what disk format it is, and set the
XML correctly. However this case will likely fail as explained above.
Change the logic here to skip volumes that fail to open. This could
conceivably cause user complaints along the lines of 'why doesn't
libvirt show $ROOT-OWNED-VOLUME-FOO', but given that currently
the pool won't even start up, I don't think there are any current
users that care about that case.
https://bugzilla.redhat.com/show_bug.cgi?id=1103308
If you end up with a state file for a pool that no longer starts up
or refreshes correctly, the state file is never removed and adds
noise to the logs every time libvirtd is started.
If the initial state syncing fails, delete the statefile.
After pool startup we call refreshPool(). If that fails, we leave
a stale pool state file hanging around.
Hit this trying to create a pool with qemu:///session containing
root owned files.
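A sketch of the cleanup, assuming the state file lives under the driver's
stateDir introduced earlier in this series:

    if (backend->refreshPool(conn, pool) < 0) {
        /* refresh failed: don't leave a stale state file behind */
        char *stateFile = virFileBuildPath(driver->stateDir,
                                           pool->def->name, ".xml");
        if (stateFile)
            ignore_value(unlink(stateFile));
        VIR_FREE(stateFile);
    }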
https://bugzilla.redhat.com/show_bug.cgi?id=1171933
Adjust the processLU error returns to be a bit more logical. Currently,
the calling code cannot determine the difference between a non-disk/lun
volume and a processed/found disk/lun. It can also not differentiate
between a perhaps real/fatal error and one that won't necessarily stop
the code from finding other volumes.
After this patch virStorageBackendSCSIFindLUsInternal will stop processing
as soon as a "fatal" message occurs rather than continuting processing
for no apparent reason. It will also only set the *found value when
at least one of the processLU's was successful.
With the failed return, if the reason for the stop was that the pool
target path did not exist, was /dev, was /dev/, or did not start with
/dev, then iSCSI pool startup and refresh will fail.
Rather than passing/returning a pointer to a boolean to indicate that
perhaps we should try again - adjust the return of the call to return
the count of LU's found during processing, then let the caller decide
what to do with that value.
Use virStorageBackendPoolUseDevPath API to determine whether creation of
stable target path is possible for the volume.
This will differentiate a failed virStorageBackendStablePath, which won't
need to be fatal. Thus, we'll add a -2 return value to indicate that
the failure was a result of either the inability to find the symlink for
the device or a failure to open the target path directory.
For virStorageBackendStablePath, in order to make decisions in other code,
split out the checks regarding whether the pool's target is empty, is /dev,
is /dev/, or doesn't start with /dev.
https://bugzilla.redhat.com/show_bug.cgi?id=1206521
If the backend driver updates the pool available and/or allocation values,
then the storage_driver VolCreateXML, VolCreateXMLFrom, and VolDelete APIs
should not change the value; otherwise, it will appear as if the values
were "doubled" for each change. Additionally since unsigned arithmetic will
be used depending on the size and operation, either or both values could be
appear to be much larger than they should be (in the EiB range).
Currently only the disk pool updates the values, but other pools could.
Assume a "fresh" disk pool of 500 MiB using /dev/sde:
$ virsh pool-info disk-pool
...
Capacity: 509.88 MiB
Allocation: 0.00 B
Available: 509.84 MiB
$ virsh vol-create-as disk-pool sde1 --capacity 300M
$ virsh pool-info disk-pool
...
Capacity: 509.88 MiB
Allocation: 600.47 MiB
Available: 16.00 EiB
The following assumes the disk backend has been updated to refresh the disk
pool at deletion of a primary partition as well as an extended partition:
$ virsh vol-delete --pool disk-pool sde1
Vol sde1 deleted
$ virsh pool-info disk-pool
...
Capacity: 509.88 MiB
Allocation: 9.73 EiB
Available: 6.27 EiB
This patch will check if the backend updated the pool values and honor that
update.
Commit id '471e1c4e' only considered updating the pool if the extended
partition was removed. As it turns out, removing a primary partition
also needs to update the freeExtent list, otherwise the following
sequence would fail (assuming a "fresh" disk pool for /dev/sde of 500M):
$ virsh pool-info disk-pool
...
Capacity: 509.88 MiB
Allocation: 0.00 B
Available: 509.84 MiB
$ virsh vol-create-as disk-pool sde1 --capacity 300M
$ virsh vol-delete --pool disk-pool sde1
$ virsh vol-create-as disk-pool sde1 --capacity 300M
error: Failed to create vol sde1
error: internal error: no large enough free extent
$
This patch will refresh the pool, rereading the partitions.
https://bugzilla.redhat.com/show_bug.cgi?id=1073305
When creating a volume in a pool, the creation allows the 'capacity'
value to be larger than the available space in the pool. As long as
the 'allocation' value will fit in the space, the volume will be created.
However, resizing the volume checks were made with the new absolute
capacity value against existing capacity + the available space without
regard for whether the new absolute capacity was actually allocating
space or not. For example, in a pool with 75G of available space, creating
a volume with a capacity of 100G and an allocation of 10G will succeed;
however, if the volume instead used a capacity of 10G and then tried
to resize to 100G, the code would fail to allow the backend
to try the resize.
Furthermore, when updating the pool "available" and "allocation" values,
the resize code would just "blindly" adjust them regardless of whether
space was "allocated" or just "capacity" was being adjusted. This left
a scenario whereby a resize to 100G would fail; however, a resize to 50G
followed by one to 100G would both succeed. Again, neither was adjusting
the allocation value, just the "capacity" value.
This patch adds more logic to the resize code to understand whether the
new capacity value is actually "allocating" space as well and whether it
is shrinking or expanding. Since unsigned arithmetic is involved, the
possibility of adjusting the pool size values incorrectly is quite real.
This patch also ensures that updates to the pool values only occur if we
actually performed the allocation.
NB: storageVolDelete, storageVolCreateXML, and storageVolCreateXMLFrom
each only update the pool allocation/availability values by the target
volume's allocation value.
The 'checkPool' callback was originally part of the storageDriverAutostart
function, but the pools need to be checked earlier, during the initialization
phase, otherwise we can't start a domain which mounts a volume after the
libvirtd daemon restarted. This is because qemuProcessReconnect is called
earlier than storageDriverAutostart. Therefore the 'checkPool' logic has been
moved to storagePoolUpdateAllState, which is called inside storageDriverInitialize.
We also need a valid 'conn' reference to be able to execute 'refreshPool'
during the initialization phase. Though it isn't available until
storageDriverAutostart, all of our storage backends ignore the 'conn' pointer
except for RBD; but RBD doesn't support the 'checkPool' callback, so it's
safe to pass conn = NULL in this case.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1177733
This patch introduces a new virStorageDriverState element, stateDir.
It also adds the necessary changes to storageStateInitialize, so that
directory initialization becomes more generic.
If the call to virStorageBackendISCSIGetHostNumber failed, we set
retval = -1, yet still called virStorageBackendSCSIFindLUs.
Add the missing goto cleanup - and while at it, adjust the logic to
initialize retval to -1 and only change it to 0 (zero) on success.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Don't supersede the error message from virStorageBackendSCSIFindLUs, as
a message such as "error: Failed to find LUs on host 60: ..." is not overly
clear as to what the real problem might be.
Signed-off-by: John Ferlan <jferlan@redhat.com>
In order to be able to use 'checkPool' inside functions which do not
have any connection reference, the 'conn' attribute needs to be dropped
from checkPool's signature, since it's not used by any storage backend
anyway.
A helper that never returns an error and treats bits out of bitmap range
as false.
Use it everywhere we use ignore_value on virBitmapGetBit, or loop over
the bitmap size.
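Such a helper can be a thin wrapper over virBitmapGetBit; a sketch:

    bool
    virBitmapIsBitSet(virBitmapPtr bitmap, size_t b)
    {
        bool result;

        if (virBitmapGetBit(bitmap, b, &result) < 0)
            return false;       /* bits out of range read as false */
        return result;
    }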
The virStorageBackendISCSIFindPoolSources API only needs the 'host' name
in order to discover iSCSI pools; it returns the various device paths.
On input, it's also possible to further restrict a search by providing the
port attribute for the host element and the (undocumented) initiator element.
For example:
$ virsh find-storage-pool-sources-as iscsi
error: Failed to find any iscsi pool sources
error: invalid argument: hostname and device path must be specified for iscsi sources
$ virsh find-storage-pool-sources-as iscsi 192.168.122.1
<sources>
<source>
<host name='192.168.122.1' port='3260'/>
<device path='iqn.2013-12.com.example:iscsi-chap-lclpool'/>
</source>
</sources>
https://bugzilla.redhat.com/show_bug.cgi?id=1181062
According to the formatstorage.html description for <source> element
and "format" attribute: "All drivers are required to have a default
value for this, so it is optional."
As it turns out, the disk backend did not choose a default value, so I
added a default of "msdos" if the source type is "unknown", as well as
updating the storage.html backend disk volume driver documentation to
indicate that the default format is dos.
Instead of just looking at the output of fstat, call
virStorageFileGetMetadata to get the full capacity from
image headers.
Note that the capacity is probed unconditionally. The updateCapacity
bool parameter is ignored and will be removed in the following commit.
In virStorageVolCreateXML, add VIR_VOL_XML_PARSE_NO_CAPACITY
to the call parsing the XML of the new volume to make the capacity
optional.
If the capacity is omitted, use the capacity of the old volume.
We already do that for values that are less than the original
volume capacity.
Not all files we want to find using virFileFindResource{,Full} are
generated when libvirt is built, some of them (such as RNG schemas) are
distributed with sources. The current API was not able to find source
files if libvirt was built in VPATH.
Both RNG schemas and cpu_map.xml are distributed in source tarball.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
While the main storage driver code allows the flag
VIR_STORAGE_VOL_RESIZE_SHRINK to be set, none of the backend
drivers are supporting it. At the very least this can work
for plain file based volumes since we just ftruncate() them
to the new size. It does not work with qcow2 volumes, but we
can arguably delegate to qemu-img for error reporting for that
instead of second guessing this for ourselves:
$ virsh vol-resize --shrink /home/berrange/VirtualMachines/demo.qcow2 2G
error: Failed to change size of volume 'demo.qcow2' to 2G
error: internal error: Child process (/usr/bin/qemu-img resize /home/berrange/VirtualMachines/demo.qcow2 2147483648) unexpected exit status 1: qemu-img: qcow2 doesn't support shrinking images yet
qemu-img: This image does not support resize
See also https://bugzilla.redhat.com/show_bug.cgi?id=1021802