The storage file drivers are currently loaded as a side effect of
loading the storage driver. This is a bogus dependency because the
storage file code has no interaction with the storage drivers, and
could ultimately even be running in a completely separate daemon.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The storage file code needs to be run in the hypervisor drivers, while
the storage backend code needs to be run in the storage driver. Split
the source code as a preparatory step for creating separate loadable
modules.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The QEMU driver loadable module needs to be able to resolve all ELF
symbols it references against libvirt.so. Some of its symbols can only
be resolved against the storage_driver.so loadable module which creates
a hard dependency between them. By moving the storage file backend
framework into the util directory, this gets included directly in the
libvirt.so library. The actual backend implementations are still done as
loadable modules, so this doesn't re-add deps on gluster libraries.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The storage driver backends are serving the public storage pools API,
while the storage file backends are serving the internal QEMU driver
and/or libvirt utility code.
To prepare for moving this storage file backend framework into the utility
code, split out the backend definitions.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Now that we can open connections to the secondary drivers on demand,
there is no need to pass a virConnectPtr into all the backend
functions.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The storage driver uses virStorageSource only partially to store its
configuration, but fully when parsing backing files of storage volumes.
This patch sets the 'type' field to a value other than
VIR_STORAGE_TYPE_NONE so that further patches can add a terminator
element to backing chains without breaking iteration.
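A minimal sketch of the iteration this enables (field names are assumptions,
not the exact libvirt code): once every real element carries a concrete type,
a trailing element with VIR_STORAGE_TYPE_NONE can act as the chain terminator.

  virStorageSourcePtr n;

  for (n = src; n && n->type != VIR_STORAGE_TYPE_NONE; n = n->backingStore) {
      /* operate on one element of the backing chain */
  }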
Create/use virStoragePoolObjAddVol in order to add volumes onto the list.
Create/use virStoragePoolObjRemoveVol in order to remove volumes from the list.
Create/use virStoragePoolObjGetVolumesCount to get the count of volumes on the list.
For the storage driver, the logic changes so that the volumes.objs list only
grows after we've fetched the volobj. This is an optimization of sorts, but it
also avoids "needlessly" growing the volumes.objs list and then just
decrementing the count if virGetStorageVol fails.
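Hypothetical prototypes illustrating the accessor pattern (the exact
signatures are assumptions, not copied from the tree):

  int virStoragePoolObjAddVol(virStoragePoolObjPtr obj,
                              virStorageVolDefPtr voldef);
  void virStoragePoolObjRemoveVol(virStoragePoolObjPtr obj,
                                  virStorageVolDefPtr voldef);
  size_t virStoragePoolObjGetVolumesCount(virStoragePoolObjPtr obj);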
Signed-off-by: John Ferlan <jferlan@redhat.com>
Currently, @port is of type string. Well, that's overkill and a
waste of memory. A port is always an integer, so use it as such.
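A hedged sketch of the parsing side, assuming the usual libvirt XML helpers
(the source->hosts[n].port field name and the cleanup label are assumptions):

  char *port = virXMLPropString(node, "port");

  if (port &&
      virStrToLong_i(port, NULL, 10, &source->hosts[n].port) < 0) {
      virReportError(VIR_ERR_XML_ERROR,
                     _("Invalid port number: %s"), port);
      goto cleanup;
  }
  VIR_FREE(port);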
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
After a restart of libvirtd the 'checkPool' method is supposed to validate
that the pool is online. Since libvirt then refreshes the pool contents
anyway, just return whether the pool was supposed to be online so that
the refresh code can be reached. This is necessary since a pool that does
not implement the method is automatically considered inactive.
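A minimal sketch of the simplified contract for a hypothetical backend
(checkPool is only invoked for pools that were recorded as active, so
confirming that state is enough):

  static int
  virStorageBackendExampleCheckPool(virStoragePoolObjPtr pool ATTRIBUTE_UNUSED,
                                    bool *isActive)
  {
      /* report the intended state; the subsequent refresh repopulates
       * the volume list anyway */
      *isActive = true;
      return 0;
  }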
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1436065
The native gluster pool source list data differs from the data used for
attaching gluster volumes as netfs pools. Currently the only difference
is the format. Since native pools don't use it, and later there will be
more differences, add a more deterministic way to switch between the
types instead.
Add APIs that allow dynamically registering driver backends so that the
list of available drivers does not need to be known at compile time.
This will allow us to modularize the storage driver at runtime.
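A hedged sketch of what such a registration entry point could look like
(the exact signature is an assumption):

  int virStorageBackendRegister(virStorageBackendPtr backend);

  int
  virStorageBackendGlusterRegister(void)
  {
      return virStorageBackendRegister(&virStorageBackendGluster);
  }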
The file became a garbage dump for all kinds of utility functions over
time. Move them to a separate file so that the files can become a clean
interface for the storage backends.
The tool is used for pool discovery. Since we call an external binary we
don't really need to compile out the code that uses it. We can check
whether it exists at runtime.
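A sketch of the runtime check, assuming virFindFileInPath ("showmount" is
only an illustrative binary name):

  char *tool = virFindFileInPath("showmount");

  if (!tool) {
      virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
                     _("pool discovery tool is not installed"));
      return -1;
  }
  VIR_FREE(tool);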
The code at the very bottom of the DAC secdriver that calls
chown() should be fine with read-only data. If something needs to
be prepared it should have been done beforehand.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The current LUKS support has a "luks" volume type which has
a "luks" encryption format.
This partially makes sense if you consider the QEMU shorthand
syntax only requires you to specify a format=luks, and it'll
automagically use "raw" as the next level driver. QEMU will,
however, let you override the "raw" with any other driver it
supports (vmdk, qcow, rbd, iscsi, etc.).
IOW, the intention is that the "luks" encryption format
is applied to all disk formats (whether raw, qcow2, rbd, gluster
or whatever). As such it doesn't make much sense for libvirt
to say the volume type is "luks" - we should be saying that it
is a "raw" file, but with "luks" encryption applied.
IOW, when creating a storage volume we should use this XML
  <volume>
    <name>demo.raw</name>
    <capacity>5368709120</capacity>
    <target>
      <format type='raw'/>
      <encryption format='luks'>
        <secret type='passphrase' uuid='0a81f5b2-8403-7b23-c8d6-21ccd2f80d6f'/>
      </encryption>
    </target>
  </volume>
and when configuring a guest disk we should use
  <disk type='file' device='disk'>
    <driver name='qemu' type='raw'/>
    <source file='/home/berrange/VirtualMachines/demo.raw'/>
    <target dev='sda' bus='scsi'/>
    <encryption format='luks'>
      <secret type='passphrase' uuid='0a81f5b2-8403-7b23-c8d6-21ccd2f80d6f'/>
    </encryption>
  </disk>
This commit thus removes the "luks" storage volume type added in

  commit 318ebb36f1
  Author: John Ferlan <jferlan@redhat.com>
  Date:   Tue Jun 21 12:59:54 2016 -0400

      util: Add 'luks' to the FileTypeInfo
The storage file probing code is modified so that it can probe
the actual encryption formats explicitly, rather than merely
probing the existence of encryption and letting the storage driver
guess the format.
The rest of the code is then adapted to deal with
VIR_STORAGE_FILE_RAW w/ VIR_STORAGE_ENCRYPTION_FORMAT_LUKS
instead of just VIR_STORAGE_FILE_LUKS.
The commit mentioned above was included in libvirt v2.0.0.
So when querying volume XML this will be a change in behaviour
vs the 2.0.0 release - it'll report 'raw' instead of 'luks'
for the volume format, but still report 'luks' for encryption
format. I think this change is OK because the storage driver
did not include any support for creating volumes, nor starting
guests with luks volumes, in v2.0.0 - that support only arrived since then.
Clearly if we change this we must do it before v2.1.0 though.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
As gluster natively supports multiple hosts for failover reasons, we can
easily add support for this to the storage driver code in libvirt.
Extract the code setting an individual host into a separate function and
call it in a loop. The new code also tries to keep the debug log
entries sane.
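A hedged sketch of the per-host helper plus loop (helper and field names are
assumptions; glfs_set_volfile_server is the libgfapi call that adds a server):

  static int
  virStorageBackendGlusterSetHost(glfs_t *fs,
                                  virStoragePoolSourceHostPtr host)
  {
      VIR_DEBUG("adding gluster host '%s' port %d", host->name, host->port);
      return glfs_set_volfile_server(fs, "tcp", host->name, host->port);
  }

  /* in the caller: */
  for (i = 0; i < source->nhost; i++) {
      if (virStorageBackendGlusterSetHost(fs, &source->hosts[i]) < 0)
          goto error;
  }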
Use the correct mode when pre-creating files (for snapshots). The refactor
changing to storage driver usage caused a regression, as some systems
created the file with 000 permissions, forbidding qemu from writing to the
file. Pass the mode to the creating functions to avoid the problem.
Regression since 185e07a5f8.
Gluster storage works on a similar principle to NFS where it takes the
uid and gid of the actual process and uses it to access the storage
volume on the remote server. This introduces a need to chown storage
files on gluster via the native API.
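A hedged sketch of the native chown call (the error reporting wrapper is an
assumption; glfs_chown is the libgfapi primitive):

  if (glfs_chown(fs, path, uid, gid) < 0) {
      virReportSystemError(errno,
                           _("failed to chown gluster file '%s'"), path);
      return -1;
  }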
To allow reusing this function in the qemu driver we need to allow
specifying the storage format. Also, returning the backing store
path separately is no longer necessary.
Add storage driver based functions to access headers of storage files
for metadata extraction. Along with this patch a local filesystem
implementation and a gluster implementation via libgfapi are provided. The
gluster implementation is based on the code of the saferead_lim function.
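A hedged sketch of the saferead_lim-style read loop over libgfapi (function
name and buffer handling simplified; names are assumptions):

  static ssize_t
  virStorageFileBackendGlusterReadHeader(glfs_fd_t *fd,
                                         char *buf,
                                         size_t maxlen)
  {
      size_t total = 0;

      while (total < maxlen) {
          ssize_t got = glfs_read(fd, buf + total, maxlen - total, 0);

          if (got < 0)
              return -1;  /* read error */
          if (got == 0)
              break;      /* EOF before the limit was reached */
          total += got;
      }
      return total;
  }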
Print the debug statements of individual file access functions from the
main API functions instead of the individual backend functions.
Also enhance initialization debug messages on a per-backend basis.
The gluster volume name was previously stored as part of the source path
string. This is unfortunate when we want to do operations on the path, as
the volume name is used separately.
Parse and store the volume name separately for gluster storage volumes
and use the newly stored variable appropriately.
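An illustrative split of a combined "volume/dir" source into a separate
volume name and path, using plain libc (variable names are assumptions):

  const char *slash = strchr(src, '/');

  if (!slash) {
      volname = strdup(src);
      path = strdup("/");
  } else {
      volname = strndup(src, slash - src);
      path = strdup(slash);
  }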
All callers of virStorageFileGetMetadataFromBuf were first calling
virStorageFileProbeFormatFromBuf, to learn what format to pass in.
But this function is already wired to do the exact same probe if
the incoming format is VIR_STORAGE_FILE_AUTO, so it's simpler to
just refactor the probing into the central function.
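A simplified view of the centralised probing inside
virStorageFileGetMetadataFromBuf (control flow condensed):

  if (format == VIR_STORAGE_FILE_AUTO)
      format = virStorageFileProbeFormatFromBuf(path, buf, len);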
* src/util/virstoragefile.h (virStorageFileGetMetadataFromBuf):
Drop parameter.
(virStorageFileProbeFormatFromBuf): Drop declaration.
* src/util/virstoragefile.c (virStorageFileGetMetadataFromBuf):
Do probe here instead of in callers.
(virStorageFileProbeFormatFromBuf): Make static.
* src/libvirt_private.syms (virstoragefile.h): Drop function.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Update caller.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit id 'ac9a0963' refactored out the 'withCapacity' flag from the
virStorageBackendUpdateVolInfo() API. See:
http://www.redhat.com/archives/libvir-list/2014-April/msg00043.html
This resulted in a difference in how 'virsh vol-info --pool <poolName>
<volume>' or 'virsh vol-list --pool <poolName> --details' output
the capacity information for a directory pool with a qcow2 sparse file.
For example, using the following XML
mkdir /home/TestPool
cat testpool.xml
  <pool type='dir'>
    <name>TestPool</name>
    <uuid>6bf80895-10b6-75a6-6059-89fdea2aefb7</uuid>
    <source>
    </source>
    <target>
      <path>/home/TestPool</path>
      <permissions>
        <mode>0755</mode>
        <owner>0</owner>
        <group>0</group>
      </permissions>
    </target>
  </pool>
virsh pool-create testpool.xml
virsh vol-create-as --pool TestPool temp_vol_1 \
--capacity 1048576 --allocation 1048576 --format qcow2
virsh vol-info --pool TestPool temp_vol_1
Results in listing a Capacity value. Prior to the commit, the value would
be '1.0 MiB' (1048576 bytes). However, after the commit the output would be
(for example) '192.50 KiB', which for my system was the size of the volume
in my file system (e.g. 'ls -l TestPool/temp_vol_1' results in '197120' bytes
or 192.50 KiB). While perhaps technically correct, it's not necessarily
what the user expected (certainly virt-test didn't expect it).
This patch restores the code to not update the target capacity in this path.
A couple of pieces of virStorageFileMetadata are used only while
collecting information about the chain, and don't need to
live permanently in the struct. This patch refactors external
callers to collect the information separately, so that the
next patch can remove the fields.
* src/util/virstoragefile.h (virStorageFileGetMetadataFromBuf):
Alter signature.
* src/util/virstoragefile.c (virStorageFileGetMetadataInternal):
Likewise.
(virStorageFileGetMetadataFromFDInternal): Adjust callers.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Likewise.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Now that we store all metadata about a storage image in a
virStorageSource struct, let's also use it to store information needed by
the storage driver to access and operate on the files.
A future patch will merge virStorageFileMetadata and virStorageSource,
but I found it easier to do if both structs use the same information
for tracking whether a source file needs encryption keys.
* src/util/virstoragefile.h (_virStorageFileMetadata): Prepare
full encryption struct instead of just a bool.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Use transfer semantics.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
* src/util/virstoragefile.c (virStorageFileGetMetadataInternal):
Populate struct.
(virStorageFileFreeMetadata): Adjust clients.
* tests/virstoragetest.c (testStorageChain): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Now that each virStorageSource can track allocation information,
and given that we already have the information without extra
syscalls, it's easier to just always populate the information
directly into the struct than it is to sometimes pass the address
of the struct members down the call chain.
* src/storage/storage_backend.h (virStorageBackendUpdateVolInfo)
(virStorageBackendUpdateVolTargetInfo)
(virStorageBackendUpdateVolTargetInfoFD): Update signature.
* src/storage/storage_backend.c (virStorageBackendUpdateVolInfo)
(virStorageBackendUpdateVolTargetInfo)
(virStorageBackendUpdateVolTargetInfoFD): Always populate struct
members instead.
* src/storage/storage_backend_disk.c
(virStorageBackendDiskMakeDataVol): Update client.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget)
(virStorageBackendFileSystemRefresh)
(virStorageBackendFileSystemVolRefresh): Likewise.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
* src/storage/storage_backend_logical.c
(virStorageBackendLogicalMakeVol): Likewise.
* src/storage/storage_backend_mpath.c
(virStorageBackendMpathNewVol): Likewise.
* src/storage/storage_backend_scsi.c
(virStorageBackendSCSINewLun): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
One of the features of qcow2 is that a wrapper file can have
more capacity than its backing file from the guest's perspective;
what's more, sparse files make tracking allocation of both
the active and backing file worthwhile. As such, it makes
more sense to show allocation numbers for each file in a chain,
and not just the top-level file. This sets up the fields for
the tracking, although it does not modify XML to display any
new information.
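A hedged illustration of the new per-image tracking fields on
virStorageSource (member names inferred from the change; exact layout is an
assumption):

  struct _virStorageSource {
      /* ... existing members ... */
      unsigned long long capacity;    /* logical size in bytes */
      unsigned long long allocation;  /* host storage in bytes consumed */
      unsigned long long physical;    /* host physical size in bytes */
  };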
* src/util/virstoragefile.h (_virStorageSource): Add fields.
* src/conf/storage_conf.h (_virStorageVolDef): Drop redundant
fields.
* src/storage/storage_backend.c (virStorageBackendCreateBlockFrom)
(createRawFile, virStorageBackendCreateQemuImgCmd)
(virStorageBackendCreateQcowCreate): Update clients.
* src/storage/storage_driver.c (storageVolDelete)
(storageVolCreateXML, storageVolCreateXMLFrom, storageVolResize)
(storageVolWipeInternal, storageVolGetInfo): Likewise.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget)
(virStorageBackendFileSystemRefresh)
(virStorageBackendFileSystemVolResize)
(virStorageBackendFileSystemVolRefresh): Likewise.
* src/storage/storage_backend_logical.c
(virStorageBackendLogicalMakeVol)
(virStorageBackendLogicalCreateVol): Likewise.
* src/storage/storage_backend_scsi.c
(virStorageBackendSCSINewLun): Likewise.
* src/storage/storage_backend_mpath.c
(virStorageBackendMpathNewVol): Likewise.
* src/storage/storage_backend_rbd.c
(volStorageBackendRBDRefreshVolInfo)
(virStorageBackendRBDCreateImage): Likewise.
* src/storage/storage_backend_disk.c
(virStorageBackendDiskMakeDataVol)
(virStorageBackendDiskCreateVol): Likewise.
* src/storage/storage_backend_sheepdog.c
(virStorageBackendSheepdogBuildVol)
(virStorageBackendSheepdogParseVdiList): Likewise.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
* src/conf/storage_conf.c (virStorageVolDefFormat)
(virStorageVolDefParseXML): Likewise.
* src/test/test_driver.c (testOpenVolumesForPool)
(testStorageVolCreateXML, testStorageVolCreateXMLFrom)
(testStorageVolDelete, testStorageVolGetInfo): Likewise.
* src/esx/esx_storage_backend_iscsi.c (esxStorageVolGetXMLDesc):
Likewise.
* src/esx/esx_storage_backend_vmfs.c (esxStorageVolGetXMLDesc)
(esxStorageVolCreateXML): Likewise.
* src/parallels/parallels_driver.c (parallelsAddHddByVolume):
Likewise.
* src/parallels/parallels_storage.c (parallelsDiskDescParseNode)
(parallelsStorageVolDefineXML, parallelsStorageVolCreateXMLFrom)
(parallelsStorageVolDefRemove, parallelsStorageVolGetInfo):
Likewise.
* src/vbox/vbox_tmpl.c (vboxStorageVolCreateXML)
(vboxStorageVolGetXMLDesc): Likewise.
* tests/storagebackendsheepdogtest.c (test_vdi_list_parser):
Likewise.
* src/phyp/phyp_driver.c (phypStorageVolCreateXML): Likewise.