In an NFS root-squashed environment the 'vol-delete' command will fail to
'unlink' the target volume, since it was created under a different uid:gid.
This code continues the concepts introduced in virFileOpenForked and
virDirCreate[NoFork] with respect to running the unlink operation under
the uid/gid of a forked child. Unlike the other two, there is no retry on
EACCES, since that failure is precisely the case this code is here to handle.
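A rough sketch of the forked-unlink approach, with a hypothetical helper name
and simplified error handling (not the actual libvirt code):

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical sketch: unlink 'path' as uid:gid by forking a child that
 * drops privileges first, so a root-squashing NFS server sees the file's
 * owner rather than the squashed root user. */
static int unlink_as(const char *path, uid_t uid, gid_t gid)
{
    pid_t pid = fork();
    int status;

    if (pid < 0)
        return -1;
    if (pid == 0) {
        /* Order matters: drop the group first, then the user. */
        if (setgid(gid) < 0 || setuid(uid) < 0)
            _exit(1);
        _exit(unlink(path) == 0 ? 0 : 1);
    }
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
}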
Currently, when trying to virsh pool-define/virsh pool-build a new
'dir' pool, virsh pool-build/virStoragePoolBuild will error out if the
target directory already exists. This is a change of behaviour compared
to e.g. libvirt 1.2.13.
This is caused by the wrong type being used for the dir_create_flags
variable in virStorageBackendFileSystemBuild: it's defined as a bool
but is used as a flag bit field, so it should be an unsigned int (this
matches the type virDirCreate expects for this parameter).
This should fix https://bugzilla.gnome.org/show_bug.cgi?id=752417 (GNOME
Boxes) and https://bugzilla.redhat.com/show_bug.cgi?id=1244080
(downstream virt-manager).
https://bugzilla.redhat.com/show_bug.cgi?id=1227664
If the requested format type for the new entry in the file system pool
is a 'dir', then be sure to set the vol->type correctly as would be done
when the pool is refreshed.
https://bugzilla.redhat.com/show_bug.cgi?id=1186969
When generating the source path for a CIFS/Samba pool mount, the code
would format it using "%s:%s" while mount.cifs expects to see
"//%s/%s". So check for the cifs filesystem type and format the source
path appropriately.
Additionally, since there is no means to authenticate, the mount
needs a "-o guest" on the command line in order to anonymously mount
the Samba directory.
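A hedged sketch of the source-path piece; the helper name is hypothetical and
the real code builds the mount command through libvirt's virCommand helpers:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>

/* Sketch: build the mount source depending on the filesystem type.
 * mount.cifs wants "//host/dir", while NFS-style mounts use "host:/dir".
 * For cifs the caller would also append "-o guest" so the share can be
 * mounted anonymously. */
static char *build_mount_source(const char *fstype,
                                const char *host, const char *dir)
{
    char *src = NULL;
    int rc;

    if (strcmp(fstype, "cifs") == 0)
        rc = asprintf(&src, "//%s/%s", host, dir);   /* what mount.cifs wants */
    else
        rc = asprintf(&src, "%s:%s", host, dir);     /* nfs/glusterfs style   */

    return rc < 0 ? NULL : src;
}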
In order for the glusterfs boolean to be set, pool->def->type must be
VIR_STORAGE_POOL_NETFS; thus the check within virCommandNewArgList for
whether pool->def->type is VIR_STORAGE_POOL_FS can never be true, so
remove it.
https://bugzilla.redhat.com/show_bug.cgi?id=1181087
virStorageBackendFileSystemIsMounted is called from three source paths:
checkPool, startPool, and stopPool. Both start and stop validate the FS
fields before calling *IsMounted; however, the check path performs no such
validation. This could lead the code into returning true for "isActive" if
for some reason the target path for the pool was mounted, the assumption
being that if it was mounted, then we must have started/mounted it.
It's also of note that commit id '81165294' added an error message for
the start/mount path stating that the target is already mounted, and
failing the start. That check was adjusted by commit id '13fde7ce' to
only emit the message if the target is actually mounted.
At one time this led the libvirtd restart autostart code to declare
that the pool was active, even though startPool would inhibit startup
and stopPool would inhibit shutdown. The autostart path changed as
of commit id '2a31c5f0' as part of the work to keep storage pools started
between libvirtd restarts.
This patch adds the same check made prior to start/mount and stop/unmount
to ensure we have a valid configuration before checking whether the
target is already mounted when declaring "isActive" or not. Finding an
improper configuration will now cause an error at checkPool, so we can no
longer be left in a situation where the pool was started but we have no
way to stop it.
https://bugzilla.redhat.com/show_bug.cgi?id=1181087
Currently the error message assumes that there are no source device
paths defined when the number-of-devices check fails, but in reality
the XML could have had none, or it could have had more than the
supported value. Adjust the error message accordingly to make it
clearer what the error really is.
The code already exists there; it just modified different flags. I
noticed this when looking at the code. This patch is better viewed
with bigger context or '-W'.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Only set directory permissions at pool build time, if:
- User explicitly requested a mode via the XML
- The directory needs to be created
- We need to do the crazy NFS root-squash workaround
This allows qemu:///session to call build on an existing directory
like /tmp.
The XML parser sets a default <mode> if none is explicitly passed in.
This is then used at pool/vol creation time, and unconditionally reported
in the XML.
The problem with this approach is that it's impossible for other code
to determine if the user explicitly requested a storage mode. There
are some cases where we want to make this distinction, but we currently
can't.
Handle <mode> parsing like we handle <owner>/<group>: if no value is
passed in, set it to -1, and adjust the internal consumers to handle
it.
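A rough sketch of the idea with a hypothetical struct and default value (the
real parsing and consumers live in the storage XML and backend code):

#include <sys/stat.h>
#include <sys/types.h>

#define DEFAULT_VOL_PERM_MODE 0600  /* illustrative default, not libvirt's */

/* Sketch: keep -1 as "not specified" and only fall back to a default at
 * the point of use, so callers can tell whether the user explicitly
 * requested a <mode> in the XML. */
struct perms {
    mode_t mode;   /* (mode_t) -1 == not specified in the XML */
    uid_t owner;   /* (uid_t) -1  == not specified */
    gid_t group;   /* (gid_t) -1  == not specified */
};

static mode_t effective_mode(const struct perms *p)
{
    return p->mode == (mode_t) -1 ? DEFAULT_VOL_PERM_MODE : p->mode;
}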
In order to be able to use 'checkPool' inside functions which do not
have any connection reference, the 'conn' attribute needs to be dropped
from checkPool's signature, since it's not used by any storage backend
anyway.
While the main storage driver code allows the flag
VIR_STORAGE_VOL_RESIZE_SHRINK to be set, none of the backend
drivers are supporting it. At the very least this can work
for plain file based volumes since we just ftruncate() them
to the new size. It does not work with qcow2 volumes, but we can
arguably delegate error reporting for that case to qemu-img instead of
second-guessing it ourselves:
$ virsh vol-resize --shrink /home/berrange/VirtualMachines/demo.qcow2 2G
error: Failed to change size of volume 'demo.qcow2' to 2G
error: internal error: Child process (/usr/bin/qemu-img resize /home/berrange/VirtualMachines/demo.qcow2 2147483648) unexpected exit status 1: qemu-img: qcow2 doesn't support shrinking images yet
qemu-img: This image does not support resize
See also https://bugzilla.redhat.com/show_bug.cgi?id=1021802
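For the plain-file case the shrink is essentially just a truncation; a minimal
sketch under that assumption (not the actual backend implementation):

#include <sys/types.h>
#include <unistd.h>
#include <fcntl.h>

/* Sketch: shrink (or grow) a raw volume file to 'capacity' bytes. */
static int raw_resize(const char *path, off_t capacity)
{
    int fd = open(path, O_WRONLY);
    int ret = -1;

    if (fd < 0)
        return -1;
    if (ftruncate(fd, capacity) == 0)
        ret = 0;
    close(fd);  /* a real implementation would also check close()/fsync() */
    return ret;
}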
When creating a RAW file, we don't take advantage
of btrfs's clone (reflink) capability.
Add a VIR_STORAGE_VOL_CREATE_REFLINK flag to request
a reflink copy.
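A minimal sketch of a reflink copy, assuming the clone ioctl is available
(FICLONE here; older kernels expose it as BTRFS_IOC_CLONE); the helper name
is made up:

#include <sys/ioctl.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <linux/fs.h>   /* FICLONE */

/* Sketch: reflink-copy srcpath to dstpath; fails (e.g. EOPNOTSUPP/EXDEV)
 * on filesystems that cannot clone, in which case the caller would fall
 * back to a plain byte copy. */
static int reflink_copy(const char *srcpath, const char *dstpath, mode_t mode)
{
    int src = open(srcpath, O_RDONLY);
    int dst = open(dstpath, O_WRONLY | O_CREAT | O_TRUNC, mode);
    int ret = -1;

    if (src >= 0 && dst >= 0 && ioctl(dst, FICLONE, src) == 0)
        ret = 0;
    if (src >= 0)
        close(src);
    if (dst >= 0)
        close(dst);
    return ret;
}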
Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Since virSecretFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
Since commit cd4d547576,
Coverity notes that setting 'ret = -3' prior to the unconditional
setting of 'ret = 0' will cause the value to be UNUSED.
Since the comment indicates that the intent is to allow the code
to continue, just remove the ret = -3 setting.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Use the correct mode when pre-creating files (for snapshots). The refactor
switching to storage driver usage caused a regression, as some systems
created the file with 000 permissions, preventing qemu from writing to it.
Pass the mode to the creating functions to avoid the problem.
Regression since 185e07a5f8.
Gluster storage works on a similar principle to NFS: it takes the
uid and gid of the actual process and uses them to access the storage
volume on the remote server. This introduces a need to chown storage
files on gluster via the native API.
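A hedged sketch of what the native chown amounts to with libgfapi;
glfs_chown() mirrors chown(2) but operates over the gluster connection, and
the wrapper name and setup here are simplified assumptions:

#include <sys/types.h>
#include <glusterfs/api/glfs.h>

/* Sketch: chown a file on a gluster volume through libgfapi instead of
 * relying on a FUSE mount; 'fs' is an already-initialized glfs_t handle. */
static int gluster_chown(glfs_t *fs, const char *path, uid_t uid, gid_t gid)
{
    return glfs_chown(fs, path, uid, gid);
}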
For non-local storage drivers we can't expect to use the "scrub" tool to
wipe the volume. Split the code into a separate backend function so that
we can add protocol specific code later.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1118710
Use the backing store parser to properly create the information about a
volume's backing store. Unfortunately, as the storage driver isn't
prepared to allow volumes backed by networked filesystems, add a
workaround that avoids changing the XML output.
For non-local storage drivers we can't expect to use the FDStream
backend for up/downloading volumes. Split the code into a separate
backend function so that we can add protocol specific code later.
Different protocols have different means to uniquely identify a storage
file. This patch implements a storage driver API to retrieve a unique
string describing a volume. The current implementation works for local
storage only and returns the canonical path of the volume.
To add caching support the local filesystem driver now has a private
structure holding the cached string, which is created only when it's
initially accessed.
This patch provides the implementation for local files only, for a start.
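A simplified sketch of the caching idea with hypothetical names; as described
above, the local backend keeps a private structure holding the cached string,
created on first access:

#include <stdlib.h>

/* Sketch: per-file private data caching the canonical path, computed
 * lazily the first time the unique identifier is requested. */
struct local_file_priv {
    char *canonpath;   /* NULL until first access */
};

static const char *local_unique_id(struct local_file_priv *priv,
                                   const char *path)
{
    if (!priv->canonpath)
        priv->canonpath = realpath(path, NULL);  /* malloc'd, freed on close */
    return priv->canonpath;
}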
Use virStorageFileGetMetadataFromFD instead in
virStorageBackendProbeTarget, as it now returns all required data and the
storage file is already open in a file descriptor.
Also fix an improper error code being returned when virFileReadHeaderFD
fails, as virStorageBackendUpdateVolTargetInfoFD would reset the
return code to 0.
Add storage driver based functions to access headers of storage files
for metadata extraction. Along with this patch, local filesystem and
gluster (via libgfapi) implementations are provided. The gluster
implementation is based on the code of the saferead_lim function.
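Both implementations boil down to a bounded read loop in the spirit of
saferead_lim; a rough sketch (gluster would use glfs_read() in place of
read(), and the names here are made up):

#include <unistd.h>
#include <stdlib.h>
#include <errno.h>

/* Sketch: read up to 'maxlen' header bytes from 'fd', retrying on EINTR
 * and short reads, returning the number of bytes placed in *bufp. */
static ssize_t read_header(int fd, size_t maxlen, char **bufp)
{
    char *buf = malloc(maxlen);
    size_t got = 0;

    if (!buf)
        return -1;
    while (got < maxlen) {
        ssize_t r = read(fd, buf + got, maxlen - got);
        if (r < 0 && errno == EINTR)
            continue;
        if (r < 0) {
            free(buf);
            return -1;
        }
        if (r == 0)
            break;              /* EOF: file shorter than maxlen */
        got += r;
    }
    *bufp = buf;
    return (ssize_t) got;
}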
Print the debug statements of individual file access functions from the
main API functions instead of the individual backend functions.
Also enhance initialization debug messages on a per-backend basis.
All callers of virStorageFileGetMetadataFromBuf were first calling
virStorageFileProbeFormatFromBuf, to learn what format to pass in.
But this function is already wired to do the exact same probe if
the incoming format is VIR_STORAGE_FILE_AUTO, so it's simpler to
just refactor the probing into the central function.
* src/util/virstoragefile.h (virStorageFileGetMetadataFromBuf):
Drop parameter.
(virStorageFileProbeFormatFromBuf): Drop declaration.
* src/util/virstoragefile.c (virStorageFileGetMetadataFromBuf):
Do probe here instead of in callers.
(virStorageFileProbeFormatFromBuf): Make static.
* src/libvirt_private.syms (virstoragefile.h): Drop function.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Update caller.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit id 'ac9a0963' refactored out the 'withCapacity' for the
virStorageBackendUpdateVolInfo() API. See:
http://www.redhat.com/archives/libvir-list/2014-April/msg00043.html
This resulted in a difference in how 'virsh vol-info --pool <poolName>
<volume>' or 'virsh vol-list --pool <poolName> --details' outputs
the capacity information for a directory pool with a qcow2 sparse file.
For example, using the following XML:
mkdir /home/TestPool
cat testpool.xml
<pool type='dir'>
  <name>TestPool</name>
  <uuid>6bf80895-10b6-75a6-6059-89fdea2aefb7</uuid>
  <source>
  </source>
  <target>
    <path>/home/TestPool</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
virsh pool-create testpool.xml
virsh vol-create-as --pool TestPool temp_vol_1 \
--capacity 1048576 --allocation 1048576 --format qcow2
virsh vol-info --pool TestPool temp_vol_1
Results in listing a Capacity value. Prior to the commit, the value would
be '1.0 MiB' (1048576 bytes). However, after the commit the output would be
(for example) '192.50 KiB', which for my system was the size of the volume
in my file system (e.g. 'ls -l TestPool/temp_vol_1' results in '197120' bytes
or 192.50 KiB). While perhaps technically correct, it's not necessarily
what the user expected (certainly virt-test didn't expect it).
This patch restores the code to not update the target capacity for this path.
More instances of failure to report (unlikely) readdir errors.
In one case, I chose to ignore them, given that a readdir error
would be no different than timing out on the loop, where the
fallback path behaves correctly either way.
* src/storage/storage_backend.c (virStorageBackendStablePath):
Ignore readdir errors.
* src/storage/storage_backend_fs.c
(virStorageBackendFileSystemRefresh): Report readdir errors.
* src/storage/storage_backend_iscsi.c
(virStorageBackendISCSIGetHostNumber): Likewise.
* src/storage/storage_backend_scsi.c (getNewStyleBlockDevice)
(getBlockDevice, virStorageBackendSCSIFindLUs): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit id '18642d10' caused a virt-test regression for NFS backend
storage error path checks when running the command:
'virsh find-storage-pool-sources-as netfs Unknown '
when the host did not have Gluster installed. Prior to the commit,
the test would fail with the error:
error: internal error: Child process (/usr/sbin/showmount --no-headers
--exports Unknown) unexpected exit status 1: clnt_create: RPC: Unknown host
After the commit, the error would be ignored, the call would succeed,
and an empty list of pool sources returned. This was tucked into the
commit message as an expected outcome.
When the target host does not have a GLUSTER_CLI this is a regression
from the previous release. Furthermore, even if the Gluster CLI was present
but failed to get devices, the API would return a failure even if
the NFS backend had found devices.
Modify the logic to return failure when the NFS backend check fails and
there's no GLUSTER_CLI, or when both backend checks fail.
If either returns success and GLUSTER_CLI is defined, then fetch and return
a list of source devices, even if it's empty.
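Sketched with hypothetical variable names, the resulting decision logic is
roughly:

/* Sketch: combine the NFS and gluster detection results.  Fail only when
 * the NFS check failed and there is no gluster CLI to fall back to, or
 * when both backends failed; otherwise return the (possibly empty) list. */
static int combine_results(int nfs_ret, int gluster_ret, int have_gluster_cli)
{
    if (nfs_ret < 0 && (!have_gluster_cli || gluster_ret < 0))
        return -1;
    return 0;
}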
A couple pieces of virStorageFileMetadata are used only while
collecting information about the chain, and don't need to
live permanently in the struct. This patch refactors external
callers to collect the information separately, so that the
next patch can remove the fields.
* src/util/virstoragefile.h (virStorageFileGetMetadataFromBuf):
Alter signature.
* src/util/virstoragefile.c (virStorageFileGetMetadataInternal):
Likewise.
(virStorageFileGetMetadataFromFDInternal): Adjust callers.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Likewise.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Deciding if a user string represents a local file instead of a
network path is an operation worth exposing directly, particularly
since the next patch will be removing a redundant variable that
was caching the information.
* src/util/virstoragefile.h (virStorageIsFile): New declaration.
* src/util/virstoragefile.c (virBackingStoreIsFile): Rename...
(virStorageIsFile): ...export, and allow NULL input.
(virStorageFileGetMetadataInternal)
(virStorageFileGetMetadataRecurse, virStorageFileGetMetadata):
Update callers.
* src/conf/domain_conf.c (virDomainDiskDefForeachPath): Use it.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Likewise.
* src/libvirt_private.syms (virstoragefile.h): Export function.
Signed-off-by: Eric Blake <eblake@redhat.com>
Now that we store all metadata about a storage image in a
virStorageSource struct, let's use it also to store the information needed
by the storage driver to access and operate on the files.
Commit id '18642d10' refactored the call to virCommandRunRegex()
into a new function virStorageBackendFileSystemNetFindNFSPoolSources(),
but the cut-n-paste didn't remove the "&state". Since state is now passed
by reference, this results in a libvirtd core dump with a messages entry:
"...internal error: unknown storage pool type Unknow"
Currently VolOpen notifies the user of a potentially non-fatal failure by
returning -2 and logging a VIR_WARN or VIR_INFO. Unfortunately most
callers treat -2 as fatal but don't actually report any message with
the error APIs.
Rename the VOL_OPEN_ERROR flag to VOL_OPEN_NOERROR. If NOERROR is specified,
we preserve the current behavior of returning -2 (there's only one caller
that wants this).
However in the default case, only return -1, and actually use the error
APIs. Fix up a couple callers as a result.
A future patch will merge virStorageFileMetadata and virStorageSource,
but I found it easier to do if both structs use the same information
for tracking whether a source file needs encryption keys.
* src/util/virstoragefile.h (_virStorageFileMetadata): Prepare
full encryption struct instead of just a bool.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Use transfer semantics.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
* src/util/virstoragefile.c (virStorageFileGetMetadataInternal):
Populate struct.
(virStorageFileFreeMetadata): Adjust clients.
* tests/virstoragetest.c (testStorageChain): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Now that each virStorageSource can track allocation information,
and given that we already have the information without extra
syscalls, it's easier to just always populate the information
directly into the struct than it is to sometimes pass the address
of the struct members down the call chain.
* src/storage/storage_backend.h (virStorageBackendUpdateVolInfo)
(virStorageBackendUpdateVolTargetInfo)
(virStorageBackendUpdateVolTargetInfoFD): Update signature.
* src/storage/storage_backend.c (virStorageBackendUpdateVolInfo)
(virStorageBackendUpdateVolTargetInfo)
(virStorageBackendUpdateVolTargetInfoFD): Always populate struct
members instead.
* src/storage/storage_backend_disk.c
(virStorageBackendDiskMakeDataVol): Update client.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget)
(virStorageBackendFileSystemRefresh)
(virStorageBackendFileSystemVolRefresh): Likewise.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
* src/storage/storage_backend_logical.c
(virStorageBackendLogicalMakeVol): Likewise.
* src/storage/storage_backend_mpath.c
(virStorageBackendMpathNewVol): Likewise.
* src/storage/storage_backend_scsi.c
(virStorageBackendSCSINewLun): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
One of the features of qcow2 is that a wrapper file can have
more capacity than its backing file from the guest's perspective;
what's more, sparse files make tracking allocation of both
the active and backing file worthwhile. As such, it makes
more sense to show allocation numbers for each file in a chain,
and not just the top-level file. This sets up the fields for
the tracking, although it does not modify XML to display any
new information.
* src/util/virstoragefile.h (_virStorageSource): Add fields.
* src/conf/storage_conf.h (_virStorageVolDef): Drop redundant
fields.
* src/storage/storage_backend.c (virStorageBackendCreateBlockFrom)
(createRawFile, virStorageBackendCreateQemuImgCmd)
(virStorageBackendCreateQcowCreate): Update clients.
* src/storage/storage_driver.c (storageVolDelete)
(storageVolCreateXML, storageVolCreateXMLFrom, storageVolResize)
(storageVolWipeInternal, storageVolGetInfo): Likewise.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget)
(virStorageBackendFileSystemRefresh)
(virStorageBackendFileSystemVolResize)
(virStorageBackendFileSystemVolRefresh): Likewise.
* src/storage/storage_backend_logical.c
(virStorageBackendLogicalMakeVol)
(virStorageBackendLogicalCreateVol): Likewise.
* src/storage/storage_backend_scsi.c
(virStorageBackendSCSINewLun): Likewise.
* src/storage/storage_backend_mpath.c
(virStorageBackendMpathNewVol): Likewise.
* src/storage/storage_backend_rbd.c
(volStorageBackendRBDRefreshVolInfo)
(virStorageBackendRBDCreateImage): Likewise.
* src/storage/storage_backend_disk.c
(virStorageBackendDiskMakeDataVol)
(virStorageBackendDiskCreateVol): Likewise.
* src/storage/storage_backend_sheepdog.c
(virStorageBackendSheepdogBuildVol)
(virStorageBackendSheepdogParseVdiList): Likewise.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
* src/conf/storage_conf.c (virStorageVolDefFormat)
(virStorageVolDefParseXML): Likewise.
* src/test/test_driver.c (testOpenVolumesForPool)
(testStorageVolCreateXML, testStorageVolCreateXMLFrom)
(testStorageVolDelete, testStorageVolGetInfo): Likewise.
* src/esx/esx_storage_backend_iscsi.c (esxStorageVolGetXMLDesc):
Likewise.
* src/esx/esx_storage_backend_vmfs.c (esxStorageVolGetXMLDesc)
(esxStorageVolCreateXML): Likewise.
* src/parallels/parallels_driver.c (parallelsAddHddByVolume):
Likewise.
* src/parallels/parallels_storage.c (parallelsDiskDescParseNode)
(parallelsStorageVolDefineXML, parallelsStorageVolCreateXMLFrom)
(parallelsStorageVolDefRemove, parallelsStorageVolGetInfo):
Likewise.
* src/vbox/vbox_tmpl.c (vboxStorageVolCreateXML)
(vboxStorageVolGetXMLDesc): Likewise.
* tests/storagebackendsheepdogtest.c (test_vdi_list_parser):
Likewise.
* src/phyp/phyp_driver.c (phypStorageVolCreateXML): Likewise.
A fairly smooth transition. And now that domain disks and
storage volumes share a common struct, it opens the doors for
a future patch to expose more details in the XML for both
objects.
* src/conf/storage_conf.h (_virStorageVolTarget): Delete.
(_virStorageVolDef): Use common type.
* src/conf/storage_conf.c (virStorageVolDefFree)
(virStorageVolTargetDefFormat): Update clients.
* src/storage/storage_backend.h: Likewise.
* src/storage/storage_backend.c
(virStorageBackendDetectBlockVolFormatFD)
(virStorageBackendUpdateVolTargetInfo)
(virStorageBackendUpdateVolTargetInfoFD): Likewise.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Some preparatory work before consolidating storage volume
structs with the rest of virstoragefile. Making these
changes allows a volume target to be much closer to (a
subset of) the virStorageSource struct.
Making perms be a pointer allows it to be optional if we
have a storage pool that doesn't expose permissions in a
way we can access. It also allows future patches to
optionally expose permissions details learned about a disk
image via domain <disk> listings, rather than just
limiting it to storage volume listings.
The disk partition type was only used by internal code to
control what type of partition to create when carving up
an MS-DOS partition table storage pool (and is not used
for GPT partition tables or other storage pools). It was
not exposed in volume XML, and it is more closely
related to the extent information of the overall block device
than to the <target> information describing the host
file. Besides, if we ever decide to expose it in XML down
the road, we can move it back as needed.
* src/conf/storage_conf.h (_virStorageVolTarget): Change perms to
pointer, enhance comments. Move partition type...
(_virStorageVolSource): ...here.
* src/conf/storage_conf.c (virStorageVolDefFree)
(virStorageVolDefParseXML, virStorageVolTargetDefFormat): Update
clients.
* src/storage/storage_backend_fs.c (createFileDir): Likewise.
* src/storage/storage_backend.c (virStorageBackendCreateBlockFrom)
(virStorageBackendCreateRaw, virStorageBackendCreateExecCommand)
(virStorageBackendUpdateVolTargetInfoFD): Likewise.
* src/storage/storage_backend_logical.c
(virStorageBackendLogicalCreateVol): Likewise.
* src/storage/storage_backend_disk.c
(virStorageBackendDiskMakeDataVol)
(virStorageBackendDiskPartTypeToCreate): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1072714
Use the "gluster" command line tool to retrieve information about remote
volumes on a gluster server to allow storage pool source lookup.
Unfortunately gluster doesn't provide a management library that we
could use directly; instead, the RPC calls are hardcoded in the
command line tool.
Extract the NFS related stuff into a separate function and tidy up the
rest of the code so we can reuse it to add gluster backend detection.
Additionally avoid reporting of errors from "showmount" and return an
empty source list instead. This will help when adding other detection
backends.
Any source file which calls the logging APIs now needs
to have a VIR_LOG_INIT("source.name") declaration at
the start of the file. This provides a static variable
of the virLogSource type.
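For example, a storage backend source file would now start with something
like the following (the module name shown is an assumption, not necessarily
the exact string used):

#include "virlog.h"

/* Declares the static virLogSource for this file's log messages. */
VIR_LOG_INIT("storage.storage_backend_fs");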
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently, 'vol-resize --allocate' allocates new space at the
vol->capacity offset. But vol->capacity is not necessarily the same
as vol->allocation. For instance:
[root@localhost ~]# virsh vol-list --pool tmp-pool --details
 Name     Path                    Type  Capacity  Allocation
-------------------------------------------------------------
 tmp-vol  /root/tmp-pool/tmp-vol  file  1.00 GiB  1.00 GiB
[root@localhost ~]# virsh vol-resize tmp-vol --pool tmp-pool 2G
[root@localhost ~]# virsh vol-list --pool tmp-pool --details
 Name     Path                    Type  Capacity  Allocation
-------------------------------------------------------------
 tmp-vol  /root/tmp-pool/tmp-vol  file  2.00 GiB  1.00 GiB
So, if we want to allocate more bytes so that the file is, say, 3G big,
the real allocated size is actually only 2G:
[root@localhost ~]# virsh vol-resize tmp-vol --pool tmp-pool 3G --allocate
[root@localhost ~]# virsh vol-list --pool tmp-pool --details
 Name     Path                    Type  Capacity  Allocation
-------------------------------------------------------------
 tmp-vol  /root/tmp-pool/tmp-vol  file  3.00 GiB  2.00 GiB
This commit uses the correct vol->allocation instead of the incorrect
vol->capacity, so the output of the commands above looks like this:
[root@localhost ~]# virsh vol-resize tmp-vol --pool tmp-pool 3G --allocate
[root@localhost ~]# virsh vol-list --pool tmp-pool --details
 Name     Path                    Type  Capacity  Allocation
-------------------------------------------------------------
 tmp-vol  /root/tmp-pool/tmp-vol  file  3.00 GiB  3.00 GiB
Moreover, if the '--allocate' flag was used, we must update the
vol->allocation member in the storageVolResize API too, not just
vol->capacity.
Reported-by: Wang Sen <wangsen@linux.vnet.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In the 'directory' and 'netfs' storage pools, a user can see
both 'file' and 'dir' storage volume types, to know when they
can descend into a subdirectory. But in a network-based storage
pool, such as the upcoming 'gluster' pool, we use 'network'
instead of 'file', and did not have any counterpart for a
directory until this patch. Adding a new volume type
'network-dir' is better than reusing 'dir', because it makes
it clear that the only way to access 'network' volumes within
that container is through network mounting (leaving 'dir'
for something accessible in the local file system).
* include/libvirt/libvirt.h.in (virStorageVolType): Expand enum.
* docs/formatstorage.html.in: Document it.
* docs/schemas/storagevol.rng (vol): Allow new value.
* src/conf/storage_conf.c (virStorageVol): Use new value.
* src/qemu/qemu_command.c (qemuBuildVolumeString): Fix client.
* src/qemu/qemu_conf.c (qemuTranslateDiskSourcePool): Likewise.
* tools/virsh-volume.c (vshVolumeTypeToString): Likewise.
* src/storage/storage_backend_fs.c
(virStorageBackendFileSystemVolDelete): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
This gets rid of another stat() per volume, as well as cutting
bytes read in half, when populating the volumes of a directory
pool during a pool refresh. Not to mention that it provides an
interface that can let gluster pools also probe file types.
* src/util/virstoragefile.h (virStorageFileProbeFormatFromFD):
Delete.
(virStorageFileProbeFormatFromBuf): New prototype.
(VIR_STORAGE_MAX_HEADER): New constant, based on...
* src/util/virstoragefile.c (STORAGE_MAX_HEAD): ...old name.
(vmdk4GetBackingStore, virStorageFileGetMetadataInternal)
(virStorageFileProbeFormat): Adjust clients.
(virStorageFileProbeFormatFromFD): Delete.
(virStorageFileProbeFormatFromBuf): Export.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Adjust client.
* src/libvirt_private.syms (virstoragefile.h): Adjust exports.
Signed-off-by: Eric Blake <eblake@redhat.com>
We are calling fstat() at least twice per storage volume in
a directory storage pool; this is rather wasteful. Refactoring
this is also a step towards making code reusable for gluster,
where gluster can provide struct stat but cannot use fstat().
* src/storage/storage_backend.h
(virStorageBackendVolOpenCheckMode)
(virStorageBackendUpdateVolTargetInfoFD): Update signature.
* src/storage/storage_backend.c
(virStorageBackendVolOpenCheckMode): Pass stat results back.
(virStorageBackendUpdateVolTargetInfoFD): Use existing stats.
(virStorageBackendVolOpen, virStorageBackendUpdateVolTargetInfo):
Update callers.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Likewise.
* src/storage/storage_backend_scsi.c
(virStorageBackendSCSIUpdateVolTargetInfo): Likewise.
* src/storage/storage_backend_mpath.c
(virStorageBackendMpathUpdateVolTargetInfo): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Most of the usage of getuid()/getgid() is in cases where we are
considering what privileges we have. As such the code should be
using the effective IDs, not real IDs.
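In other words, privilege checks should look roughly like this trivial sketch
rather than testing getuid():

#include <unistd.h>
#include <stdbool.h>

/* Sketch: the effective IDs are what access control actually checks, so
 * use geteuid()/getegid() rather than getuid()/getgid() when deciding
 * what privileges we currently hold. */
static bool running_privileged(void)
{
    return geteuid() == 0;
}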
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Don't reuse the return value of virStorageBackendFileSystemIsMounted.
If it's 0, we'd return it even if the mount command failed.
Also, don't report another error if it's -1, since one has already
been reported.
Introduced by 258e06c.
https://bugzilla.redhat.com/show_bug.cgi?id=981251
Convert the type of loop iterators named 'i', 'j', 'k',
'ii', 'jj', 'kk' to be 'size_t' instead of 'int' or
'unsigned int', also sanitizing 'ii', 'jj', 'kk' to use
the normal 'i', 'j', 'k' naming.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Add <features> and <compat> elements to volume target XML.
<compat> is a string which for qcow2 represents the QEMU version
it should be compatible with. Valid values are 0.10 and 1.1.
1.1 is implicit if the <features> element is present; otherwise the
qemu-img default is used. 0.10 can be specified to explicitly
create older images after the qemu-img default changes.
<features> contains optional features; so far only
<lazy_refcounts/> is available, which enables caching of reference
counters, improving performance for snapshots.
The documentation for "vol-resize" says the new capacity will be sparse
unless "--allocate" is specified; however, the "--allocate" flag
was never implemented. This implements the "--allocate" flag for
the fs backend's raw volume type, based on posix_fallocate() and the
SYS_fallocate syscall.
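A minimal sketch of the allocation step, assuming posix_fallocate() is
available (the real patch also falls back to the raw SYS_fallocate syscall,
and the helper name is made up):

#include <sys/types.h>
#include <fcntl.h>

/* Sketch: make sure bytes in [offset, offset + len) are really allocated
 * on disk rather than left sparse.  posix_fallocate() returns an errno
 * value directly (0 on success) instead of setting errno. */
static int allocate_range(int fd, off_t offset, off_t len)
{
    if (len == 0)
        return 0;
    return posix_fallocate(fd, offset, len) == 0 ? 0 : -1;
}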
qemu-img resize will fail with "The new size must be a multiple of 512"
if libvirt doesn't round it first.
This fixes rhbz#951495.
Signed-off-by: Christophe Fergeau <cfergeau@redhat.com>
The source code base needs to be adapted as well. Some files
include virutil.h just for the string related functions (here,
the include is substituted to match the new file), some include
virutil.h without any need (here, the include is removed), and
some require both.
The bfree and blocks fields are supposed to be in units of frsize. We were
calculating capacity correctly using those units, but the available
calculation was using bsize instead. Most file systems report these as the
same value specifically because many programs are buggy, but that is no
reason to rely on that behavior, or to behave inconsistently.
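Concretely, both values should be scaled by f_frsize; a small sketch using
statvfs(), with a made-up helper name (field usage only, not the libvirt
refresh code):

#include <sys/statvfs.h>

/* Sketch: compute pool capacity and available space in bytes.  Both
 * f_blocks and f_bfree are counted in units of f_frsize, so neither
 * should be multiplied by f_bsize. */
static int pool_space(const char *path,
                      unsigned long long *capacity,
                      unsigned long long *available)
{
    struct statvfs sb;

    if (statvfs(path, &sb) < 0)
        return -1;
    *capacity  = (unsigned long long) sb.f_frsize * sb.f_blocks;
    *available = (unsigned long long) sb.f_frsize * sb.f_bfree;
    return 0;
}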
This bug has been present since e266ded (2008) and aa296e6c, when the code
was originally introduced (the latter via cut and paste).
Signed-off-by: Sage Weil <sage@newdream.net>