Add a new parameter to virStorageFileGetMetadata that will break the
backing chain detection process and report a useful error message
rather than requiring callers to use virStorageFileChainGetBroken.
This patch just introduces the option; its usage will be added
separately.
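A minimal sketch of a caller opting in, assuming a boolean
'report_broken' parameter (the exact name and signature are
illustrative):

    /* fail hard with a useful error when the chain is broken,
     * instead of silently stopping at the broken element */
    if (virStorageFileGetMetadata(disk->src, uid, gid,
                                  allow_probe,
                                  true /* report_broken */) < 0)
        goto error;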
- Provide an implementation for buildPool and deletePool operations
for the ZFS storage backend.
- Add the VIR_STORAGE_POOL_SOURCE_DEVICE flag to ZFS pool poolOptions
as now we can specify devices to build the pool from
- storagepool.rng: add an optional 'sourceinfodev' to 'sourcezfs' and
add an optional 'target' to 'poolzfs' (see the example after this list)
- Add a couple of tests to storagepoolxml2xmltest
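A definition exercising the new schema bits might look like this
(values are illustrative):

    <pool type='zfs'>
      <name>myzfspool</name>
      <source>
        <name>actualpoolname</name>
        <device path='/dev/ada1'/>
      </source>
      <target>
        <path>/dev/zvol/actualpoolname</path>
      </target>
    </pool>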
Coverity complains that when multiplying two 32-bit values that will
eventually be stored in a 64-bit value, the math could overflow unless
one of the values being multiplied is cast to the proper size.
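Schematically:

    unsigned int a = 70000, b = 70000;  /* illustrative values */
    unsigned long long total;

    total = a * b;                      /* 32-bit multiply; can
                                         * overflow before widening */
    total = (unsigned long long)a * b;  /* widened first: correct */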
Signed-off-by: John Ferlan <jferlan@redhat.com>
Since commit cd4d547576a4f0371d1d4d4e0ca6db124c5ba257
Coverity notes that setting 'ret = -3' prior to the unconditional
setting of 'ret = 0' causes the value to be UNUSED.
Since the comment indicates that the code is expected to continue,
just remove the 'ret = -3' assignment.
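The flagged pattern, schematically:

    int ret;

    ret = -3;   /* dead store that Coverity flags as UNUSED */
    /* ... code that is expected to continue regardless ... */
    ret = 0;    /* unconditionally overwrites the value above */
    return ret;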
Signed-off-by: John Ferlan <jferlan@redhat.com>
Currently, after running the commands to create a new volume,
virStorageBackendZFSCreateVol calls virStorageBackendZFSFindVols that
calls virStorageBackendZFSParseVol.
virStorageBackendZFSParseVol checks if a volume already exists by
trying to get it using virStorageVolDefFindByName.
For a just-created volume it returns NULL, so the volume is reported
as new and appended to pool->volumes. This causes the volume to be
listed twice, as storageVolCreateXML appends the new volume to the
list as well.
Fix that by passing the new volume's definition to
virStorageBackendZFSParseVol so that it can determine whether it needs
to add the volume to the list.
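Schematically, with the added parameter (illustrative):

    /* 'vol' is the definition of the volume just created by
     * CreateVol, or NULL on a plain pool refresh */
    volume = virStorageVolDefFindByName(pool, name);
    if (!volume && vol && STREQ(vol->name, name))
        volume = vol;   /* already appended by storageVolCreateXML,
                         * don't add it to pool->volumes again */
    if (!volume) {
        /* genuinely new volume: allocate it and append to
         * pool->volumes as before */
    }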
There were two occurrences of initializing actualType by calling
virStorageSourceGetActualType(src) prior to the check if (!src),
leading Coverity to complain about a possible NULL dereference of src
in virStorageSourceGetActualType().
Resolve by moving the actualType assignment until after the !src check.
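Schematically:

    int actualType;

    if (!src)
        return -1;

    /* dereference src only once we know it is non-NULL */
    actualType = virStorageSourceGetActualType(src);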
virStorageBackendVolDownloadLocal and virStorageBackendVolUploadLocal
use the virFDStreamOpenFile function to work with the volume fd.
virFDStreamOpenFile calls virFDStreamOpenFileInternal, which implements
the handling of non-blocking I/O. If a file is neither a character
device nor a fifo, it uses libvirt_iohelper.
On FreeBSD, it doesn't work as expected because disk devices (including
ZFS volumes) are exposed as character devices, and ZFS volumes do not
support open(2) with O_NONBLOCK.
To overcome this, introduce a forceIOHelper flag to
virFDStreamOpenFileInternal that forces using libvirt_iohelper, and
introduce virFDStreamOpenBlockDevice, which calls
virFDStreamOpenFileInternal with forceIOHelper set to true.
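A sketch of the new entry point (the internal function's real argument
list is abbreviated here):

    int
    virFDStreamOpenBlockDevice(virStreamPtr st,
                               const char *path,
                               unsigned long long offset,
                               unsigned long long length,
                               int oflags)
    {
        /* force the iohelper: on FreeBSD, zvols are character
         * devices that reject open(2) with O_NONBLOCK */
        return virFDStreamOpenFileInternal(st, path, offset, length,
                                           oflags,
                                           true /* forceIOHelper */);
    }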
In some places in the libvirt code we have:
f(a,z)
instead of
f(a, z)
This trivial patch fixes a couple of such occurrences.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Currently, qemu driver uses qemuTranslateDiskSourcePool()
to translate disk volume information. This function is
general enough and could be used by other drivers as well,
so move it to storage/storage_driver.c along with its helpers:
- qemuTranslateDiskSourcePool: move to storage/storage_driver.c
and rename to virStorageTranslateDiskSourcePool,
- qemuAddISCSIPoolSourceHost: move to storage/storage_driver.c
and rename to virStorageAddISCSIPoolSourceHost,
- qemuTranslateDiskSourcePoolAuth: move to storage/storage_driver.c
and rename to virStorageTranslateDiskSourcePoolAuth,
- Update users of qemuTranslateDiskSourcePool to use the
new names.
Implement ZFS storage backend driver. Currently supported
only on FreeBSD because of ZFS limitations on Linux.
Features supported:
- pool-start, pool-stop
- pool-info
- vol-list
- vol-create / vol-delete
A pool definition looks like this:
<pool type='zfs'>
  <name>myzfspool</name>
  <source>
    <name>actualpoolname</name>
  </source>
</pool>
The 'actualpoolname' value is the name of the pool on the system,
such as shown by the 'zpool list' command. A target makes no sense
here because volume paths are always /dev/zvol/$poolname/$volname.
The user has to create the pool on their own; this driver doesn't
currently support pool creation.
A volume can be used with qemu by adding an entry like this:
<disk type='volume' device='disk'>
  <driver name='qemu' type='raw'/>
  <source pool='myzfspool' volume='vol5'/>
  <target dev='hdc' bus='ide'/>
</disk>
https://bugzilla.redhat.com/show_bug.cgi?id=1072653
Upon successful upload of a volume, the target volume and storage pool
were not updated to reflect any changes as a result of the upload. Make
use of the existing stream close callback mechanism to force a backend
pool refresh to occur in a separate thread once the stream closes. The
separate thread should avoid potential deadlocks if the refresh needed
to wait on some event from the event loop which is used to perform
the stream callback.
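Schematically (helper names are illustrative, not necessarily the
exact ones in the patch):

    static void
    virStorageVolFDStreamCloseCb(virStreamPtr st ATTRIBUTE_UNUSED,
                                 void *opaque)
    {
        virThread thread;

        /* run the refresh in its own thread so it cannot deadlock
         * the event loop that delivered this close event */
        if (virThreadCreate(&thread, false,
                            virStorageVolPoolRefreshThread, opaque) < 0)
            VIR_WARN("Unable to create thread to handle pool refresh");
    }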
Use the correct mode when pre-creating files (for snapshots). The
refactor changing to storage driver usage caused a regression: some
systems created the file with 000 permissions, forbidding qemu from
writing to it.
Pass the mode to the creating functions to avoid the problem.
Regression since 185e07a5f82bc0692324f3ee13b4816d71b653c1.
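Schematically (the extra parameter is the point; the helper name is
hypothetical):

    /* before: the file ends up with whatever mode the backend
     * defaulted to, on some systems 000 */
    if (qemuDomainSnapshotCreateFile(driver, vm, disk) < 0)
        goto cleanup;

    /* after: the caller passes the mode the snapshot file needs */
    if (qemuDomainSnapshotCreateFile(driver, vm, disk, 0644) < 0)
        goto cleanup;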
With my intended use of storage driver assistance to chown files on
remote storage, we will need a witness that tells us whether a given
storage volume supports the operations needed by the storage driver.
Gluster storage works on a similar principle to NFS where it takes the
uid and gid of the actual process and uses it to access the storage
volume on the remote server. This introduces a need to chown storage
files on gluster via native API.
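glfs_chown() from libgfapi is the native call in question; a minimal
sketch, assuming 'fs' is an initialized and connected glfs_t:

    #include <glusterfs/api/glfs.h>

    /* chown a file inside the gluster volume without a FUSE mount */
    if (glfs_chown(fs, path, uid, gid) < 0) {
        virReportSystemError(errno,
                             _("failed to chown '%s' on gluster"),
                             path);
        return -1;
    }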
virStorageBackendLogicalCreateVol contains a piece like:
if (vol->target.path != NULL) {
    /* A target path passed to CreateVol has no meaning */
    VIR_FREE(vol->target.path);
}
The 'if' is useless here, but 'syntax-check' doesn't catch that
because of the comment, so drop the 'if'.
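The result is simply:

    /* A target path passed to CreateVol has no meaning */
    VIR_FREE(vol->target.path);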
If a parentaddr was provided in the XML, have getAdapterName look up
the stable address. This allows virStorageBackendSCSICheckPool() and
virStorageBackendSCSIRefreshPool() to automagically find the scsi_host
by its PCI address and unique_id.
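For reference, such an adapter source looks roughly like this:

    <adapter type='scsi_host'>
      <parentaddr unique_id='1'>
        <address domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
      </parentaddr>
    </adapter>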
Rather than assume that NOT FC_HOST is SCSI_HOST, let's call them out
specifically. This makes it easier to find SCSI_HOST code/structs and
ensures something isn't missed in the future.
https://bugzilla.redhat.com/show_bug.cgi?id=1091866
Add a new boolean 'sparse'. This will be used by the logical backend
storage driver to determine whether the target volume is sparse or not
(also known as a snapshot or thin logical volume). Although setting
sparse to true at creation could be seen as redundant with setting it
during virStorageBackendLogicalMakeVol(), it covers the case where
other code paths between Create and FindLVs ever need to know that
the volume is sparse.
Use 'sparse' in a new virStorageBackendLogicalVolWipe() to decide
whether to attempt to wipe the logical volume or not. For now, I have
found no means to wipe the volume without writing to it. Writing to
the sparse volume causes it to be filled. A sparse logical volume is
not completely writeable, as there exists metadata which, if
overwritten, will cause the sparse lv to go INACTIVE, which means
pool-refresh will not find it. Access to whatever lvm uses to manage
data blocks is not provided by any API I could find.
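Schematically (field placement is illustrative):

    static int
    virStorageBackendLogicalVolWipe(virConnectPtr conn,
                                    virStoragePoolObjPtr pool,
                                    virStorageVolDefPtr vol,
                                    unsigned int algorithm,
                                    unsigned int flags)
    {
        if (vol->target.sparse) {
            /* wiping writes to the LV; on a thin/snapshot LV that
             * fills it and can clobber the metadata that keeps it
             * active, so refuse */
            virReportError(VIR_ERR_NO_SUPPORT,
                           _("logical volume '%s' is sparse, volume "
                             "wipe not supported"), vol->name);
            return -1;
        }
        return virStorageBackendVolWipeLocal(conn, pool, vol,
                                             algorithm, flags);
    }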
Coverity complains about the return value of ioctl not being checked.
Even though we carry on when this fails (just like qemu-img does),
we can log an error.
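Schematically, assuming the FS_IOC_SETFLAGS call used to set the
NOCOW flag (see the 'nocow' patch below):

    int attr = 0;

    /* best effort, mirroring qemu-img: carry on if this fails, but
     * leave a trace in the logs instead of ignoring the result */
    if (ioctl(fd, FS_IOC_GETFLAGS, &attr) < 0) {
        virReportSystemError(errno, "%s", _("Failed to get fs flags"));
    } else {
        attr |= FS_NOCOW_FL;
        if (ioctl(fd, FS_IOC_SETFLAGS, &attr) < 0)
            virReportSystemError(errno, "%s",
                                 _("Failed to set NOCOW flag"));
    }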
For non-local storage drivers we can't expect to use the "scrub" tool to
wipe the volume. Split the code into a separate backend function so that
we can add protocol specific code later.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1118710
The next patch will move the storage volume wiping code into the
individual backends. This patch splits out the common code to wipe a
local volume into a separate backend helper so that the next patch is
simpler.
Add 'nocow' to the storage volume xml so that the user has an option
to set the NOCOW flag on the newly created volume. It's useful on the
btrfs file system to enhance performance.
Btrfs has low performance when hosting VM images, even more so when
the guests in those VMs also use btrfs as their file system. One way
to mitigate this bad performance is to turn off COW attributes on VM
files. Generally, there are two ways to turn off COW on btrfs: a) by
mounting the fs with nodatacow, so that all newly created files are
NOCOW; b) per file, by setting the NOCOW file attribute, which can
only be done on empty or new files.
This patch takes the second way. Depending on the 'nocow' option, it
sets the NOCOW flag per file:
for raw file images, handle 'nocow' in libvirt code; for non-raw file
images, pass the 'nocow=on' option to qemu-img and let qemu-img handle
it (requires qemu-img version >= 2.1).
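The new element sits in the volume target, e.g.:

    <volume>
      <name>vm1.img</name>
      <capacity unit='G'>10</capacity>
      <target>
        <nocow/>
      </target>
    </volume>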
Signed-off-by: Chunyan Liu <cyliu@suse.com>
When the backing store of a volume wasn't accessible while updating
the volume definition, the call would fail altogether. In cases where
we currently (incorrectly) treat remote backing stores as local ones,
this might lead to strange errors.
Ignore the opening errors until we figure out how to track proper
volume metadata.
Use the backing store parser to properly create the information about
a volume's backing store. Unfortunately, as the storage driver isn't
prepared to allow volumes backed by networked filesystems, add a
workaround that avoids changing the XML output.
For non-local storage drivers we can't expect to use the FDStream
backend for up/downloading volumes. Split the code into a separate
backend function so that we can add protocol specific code later.
To allow reusing this function in the qemu driver we need to allow
specifying the storage format. Also, the separate return of the
backing store path is no longer necessary.
Replace the authType, chap, and cephx unions in virStoragePoolSource
with a single pointer to a virStorageAuthDefPtr. Adjust all users of
the previous chap/cephx and secret unions to use the source->auth
data.
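Schematically (simplified):

    /* before, in virStoragePoolSource: */
    int authType;       /* virStoragePoolAuthType */
    union {
        virStoragePoolAuthChap chap;
        virStoragePoolAuthCephx cephx;
    } auth;

    /* after: one discriminated definition shared by all backends */
    virStorageAuthDefPtr auth;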
Replace:

    if (virBufferError(&buf)) {
        virBufferFreeAndReset(&buf);
        virReportOOMError();
        ...
    }

with:

    if (virBufferCheckError(&buf) < 0)
        ...
This should not be a functional change (unless some callers misused
the virBuffer APIs, in which case a different error would be
reported).
The parent directory doesn't need to be stored now that we no longer
mangle the path stored in the image. Remove it and tweak the code to
avoid using it.
Due to various refactors and compatibility with virstoragetest, the
relPath field of the virStorageSource structure was always filled
either with the relative name or with the full path in the case of
absolutely backed storage. Restore its original purpose: store only
the relative name of the disk if it is backed relatively, and tweak
the tests.