Take the format of the backing store from the 'meta' object directly and
use g_steal_pointer to steal the path.
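An illustrative sketch of the pattern (structure and field names
hypothetical):

    src->format = meta->format;
    src->path = g_steal_pointer(&meta->path);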
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Replace all occurrences of

    if (VIR_STRDUP(a, b) < 0)
        /* effectively dead code */

with:

    a = g_strdup(b);
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Now that all the types using VIR_AUTOUNREF have a cleanup func defined
as virObjectUnref, use g_autoptr instead of VIR_AUTOUNREF.
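For illustration, a sketch (virIdentity used as an example type; the
cleanup function declaration exists once per type):

    /* declared once per type, wiring g_autoptr to virObjectUnref: */
    G_DEFINE_AUTOPTR_CLEANUP_FUNC(virIdentity, virObjectUnref);

    /* at use sites, replacing VIR_AUTOUNREF(virIdentityPtr): */
    g_autoptr(virIdentity) ident = virIdentityGetCurrent();
    /* ident is unref'd automatically when it goes out of scope */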
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since commit 44e7f02915

    util: rewrite auto cleanup macros to use glib's equivalent

VIR_AUTOPTR aliases to g_autoptr. Replace all of its uses with the GLib
macro version.
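The replacement is purely mechanical, e.g. (illustrative hunk):

    -    VIR_AUTOPTR(virJSONValue) json = NULL;
    +    g_autoptr(virJSONValue) json = NULL;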
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since commit 44e7f02915

    util: rewrite auto cleanup macros to use glib's equivalent

VIR_AUTOFREE is just an alias for g_autofree. Use the GLib macros
directly instead of our custom aliases.
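E.g. (illustrative hunk):

    -    VIR_AUTOFREE(char *) tmp = NULL;
    +    g_autofree char *tmp = NULL;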
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
If pool refresh failed, then the internal table of volumes is
probably left in an inconsistent or incomplete state anyway. Clear
it out then. This has the advantage that we can move the
virStoragePoolObjClearVols() call out of those very few backends
that do call it.
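A sketch of the resulting pattern in the refresh path (surrounding
names illustrative):

    if (backend->refreshPool(obj) < 0) {
        /* the volume list may now be inconsistent or incomplete;
         * clear it in one central place instead of in each backend */
        virStoragePoolObjClearVols(obj);
        goto error;
    }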
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Since virStorageSource is now a subclass of virObject, we can use
VIR_AUTOUNREF instead.
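E.g. (sketch):

    VIR_AUTOUNREF(virStorageSourcePtr) src = virStorageSourceNew();
    /* unref'd automatically when src goes out of scope */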
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Add virStorageSourceNew and refactor places allocating that structure to
use the helper.
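A minimal sketch of the helper's shape (not the exact hunk):

    virStorageSourcePtr
    virStorageSourceNew(void)
    {
        virStorageSourcePtr def;

        if (VIR_ALLOC(def) < 0)
            return NULL;

        return def;
    }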
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Let's make use of the auto __cleanup capabilities, cleaning up any
now-unnecessary goto paths.
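As an illustration of the transformation (hypothetical code):

    /* before: the error path funnels through a cleanup label */
    int ret = -1;
    char *tmp = NULL;

    if (virAsprintf(&tmp, "%s/%s", dir, name) < 0)
        goto cleanup;
    ret = 0;
 cleanup:
    VIR_FREE(tmp);
    return ret;

    /* after: tmp is freed automatically on every return path */
    VIR_AUTOFREE(char *) tmp = NULL;

    if (virAsprintf(&tmp, "%s/%s", dir, name) < 0)
        return -1;
    return 0;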
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Let's make use of the auto __cleanup capabilities. This also allows
for the cleanup of some goto paths.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Let's make use of the auto __cleanup capabilities, cleaning up any
now-unnecessary goto paths.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Let's make use of the auto __cleanup capabilities, cleaning up any
now-unnecessary goto paths.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The storage file drivers are currently loaded as a side effect of
loading the storage driver. This is a bogus dependency because the
storage file code has no interaction with the storage drivers, and
may even ultimately run in a completely separate daemon.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The storage file code needs to be run in the hypervisor drivers, while
the storage backend code needs to be run in the storage driver. Split
the source code as a preparatory step for creating separate loadable
modules.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The QEMU driver loadable module needs to be able to resolve all ELF
symbols it references against libvirt.so. Some of its symbols can only
be resolved against the storage_driver.so loadable module which creates
a hard dependency between them. By moving the storage file backend
framework into the util directory, this gets included directly in the
libvirt.so library. The actual backend implementations are still done as
loadable modules, so this doesn't re-add deps on gluster libraries.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The storage driver backends are serving the public storage pools API,
while the storage file backends are serving the internal QEMU driver and
/ or libvirt utility code.
To prep for moving this storage file backend framework into the utility
code, split out the backend definitions.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Now that we can open connections to the secondary drivers on demand,
there is no need to pass a virConnectPtr into all the backend
functions.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The storage driver uses virStorageSource only partially to store its
configuration, but fully when parsing backing files of storage volumes.
This patch sets the 'type' field to a value other than
VIR_STORAGE_TYPE_NONE so that further patches can add a terminator
element to backing chains without breaking iteration.
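An illustrative sketch of the idea (not the exact hunk):

    /* tag sources parsed by the storage driver explicitly so that a
     * VIR_STORAGE_TYPE_NONE element can later act as a terminator */
    if (src->type == VIR_STORAGE_TYPE_NONE)
        src->type = VIR_STORAGE_TYPE_FILE;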
Create/use virStoragePoolObjAddVol in order to add volumes onto the list.
Create/use virStoragePoolObjRemoveVol in order to remove volumes from the list.
Create/use virStoragePoolObjGetVolumesCount to get the count of volumes on the list.
For the storage driver, the logic changes so that the volumes.objs list
only grows after we've fetched the volobj. This is an optimization of
sorts, but it also avoids "needlessly" growing the volumes.objs list
only to decrement the count again if virGetStorageVol fails.
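A plausible shape for the add helper (sketch):

    int
    virStoragePoolObjAddVol(virStoragePoolObjPtr obj,
                            virStorageVolDefPtr voldef)
    {
        return VIR_APPEND_ELEMENT(obj->volumes.objs,
                                  obj->volumes.count, voldef);
    }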
Signed-off-by: John Ferlan <jferlan@redhat.com>
Currently, @port is typed as a string. Well, that's overkill and a
waste of memory. A port is always an integer, so use it as such.
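The parsing side then becomes something like (sketch):

    if (virStrToLong_i(portstr, NULL, 10, &def->port) < 0) {
        virReportError(VIR_ERR_XML_ERROR,
                       _("Invalid port number: %s"), portstr);
        return -1;
    }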
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
After a restart of libvirtd the 'checkPool' method is supposed to validate
that the pool is online. Since libvirt then refreshes the pool contents
anyway, just return whether the pool was supposed to be online so that
the refresh code can be reached. This is necessary since a pool that does
not implement the method is automatically considered inactive.
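A minimal sketch of such a callback (assuming the pool object remembers
its previous state):

    static int
    virStorageBackendExampleCheckPool(virStoragePoolObjPtr pool,
                                      bool *isActive)
    {
        /* the contents are refreshed right afterwards anyway; if the
         * pool became unreachable, the refresh fails and deactivates it */
        *isActive = pool->active;
        return 0;
    }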
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1436065
The native gluster pool source list data differs from the data used for
attaching gluster volumes as netfs pools. Currently the only difference
is the format. Since native pools don't use it, and there will be more
differences later, add a more deterministic way to switch between the
types instead.
Add APIs that allow driver backends to be registered dynamically so
that the list of available drivers does not need to be known at compile
time. This will allow us to modularize the storage driver at runtime.
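The registration API can be as simple as appending to a table (sketch):

    int
    virStorageBackendRegister(virStorageBackendPtr backend)
    {
        if (virStorageBackendsCount >= VIR_STORAGE_BACKENDS_MAX) {
            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                           _("Too many drivers, cannot register storage backend"));
            return -1;
        }

        virStorageBackends[virStorageBackendsCount++] = backend;
        return 0;
    }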
The file became a garbage dump for all kinds of utility functions over
time. Move them to a separate file so that the files can become a clean
interface for the storage backends.
The tool is used for pool discovery. Since we call an external binary we
don't really need to compile out the code that uses it. We can check
whether it exists at runtime.
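A runtime check is enough, e.g. (sketch; 'showmount' stands in for the
actual binary):

    char *tool = virFindFileInPath("showmount");
    bool available = tool != NULL;

    /* if not installed, report the failure at runtime instead of
     * compiling the discovery code out */
    VIR_FREE(tool);
    if (!available)
        return -1;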
The code at the very bottom of the DAC secdriver that calls
chown() should be fine with read-only data. If something needs to
be prepared it should have been done beforehand.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The current LUKS support has a "luks" volume type which has
a "luks" encryption format.
This partially makes sense if you consider that the QEMU shorthand
syntax only requires you to specify a format=luks, and it'll
automagically use "raw" as the next level driver. QEMU will
however let you override the "raw" with any other driver it
supports (vmdk, qcow, rbd, iscsi, etc, etc)
IOW the intention though is that the "luks" encryption format
is applied to all disk formats (whether raw, qcow2, rbd, gluster
or whatever). As such it doesn't make much sense for libvirt
to say the volume type is "luks" - we should be saying that it
is a "raw" file, but with "luks" encryption applied.
IOW, when creating a storage volume we should use this XML
    <volume>
      <name>demo.raw</name>
      <capacity>5368709120</capacity>
      <target>
        <format type='raw'/>
        <encryption format='luks'>
          <secret type='passphrase' uuid='0a81f5b2-8403-7b23-c8d6-21ccd2f80d6f'/>
        </encryption>
      </target>
    </volume>
and when configuring a guest disk we should use
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/home/berrange/VirtualMachines/demo.raw'/>
      <target dev='sda' bus='scsi'/>
      <encryption format='luks'>
        <secret type='passphrase' uuid='0a81f5b2-8403-7b23-c8d6-21ccd2f80d6f'/>
      </encryption>
    </disk>
This commit thus removes the "luks" storage volume type added in

    commit 318ebb36f1
    Author: John Ferlan <jferlan@redhat.com>
    Date:   Tue Jun 21 12:59:54 2016 -0400

        util: Add 'luks' to the FileTypeInfo
The storage file probing code is modified so that it can probe
the actual encryption formats explicitly, rather than merely
probing the existence of encryption and letting the storage driver
guess the format.
The rest of the code is then adapted to deal with
VIR_STORAGE_FILE_RAW w/ VIR_STORAGE_ENCRYPTION_FORMAT_LUKS
instead of just VIR_STORAGE_FILE_LUKS.
The commit mentioned above was included in libvirt v2.0.0.
So when querying volume XML this will be a change in behaviour
vs the 2.0.0 release - it'll report 'raw' instead of 'luks'
for the volume format, but still report 'luks' for encryption
format. I think this change is OK because the storage driver
did not include any support for creating volumes, nor for starting
guests with luks volumes, in v2.0.0 - that has only arrived since then.
Clearly if we change this we must do it before v2.1.0 though.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
As gluster natively supports multiple hosts for failover reasons we can
easily add the support to the storage driver code in libvirt.
Extract the code setting an individual host into a separate function and
call it in a loop. The new code also tries to keep the debug log
entries sane.
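The resulting shape (helper name illustrative; the helper wraps a
single glfs_set_volfile_server() call):

    size_t i;

    for (i = 0; i < src->nhosts; i++) {
        if (virStorageBackendGlusterSetHost(fs, &src->hosts[i]) < 0)
            return -1;
    }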
Use the correct mode when pre-creating files (for snapshots). The refactor
changing to storage driver usage caused a regression: some systems
created the file with 000 permissions, preventing qemu from writing to
it. Pass the mode to the creating functions to avoid the problem.
Regression since 185e07a5f8.
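I.e. the creating helper now receives the intended mode explicitly
(sketch):

    if ((fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, mode)) < 0) {
        virReportSystemError(errno,
                             _("failed to create file '%s'"), path);
        return -1;
    }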
Gluster storage works on a similar principle to NFS where it takes the
uid and gid of the actual process and uses it to access the storage
volume on the remote server. This introduces a need to chown storage
files on gluster via native API.
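The native API in question is gfapi's glfs_chown(); the wrapper is
roughly (sketch):

    if (glfs_chown(fs, path, uid, gid) < 0) {
        virReportSystemError(errno,
                             _("unable to chown '%s' to (%u, %u)"),
                             path, (unsigned int)uid, (unsigned int)gid);
        return -1;
    }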
To allow reusing this function in the qemu driver we need to allow
specifying the storage format. Also, separately returning the backing
store path is no longer necessary.
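The signature change might look like this (purely illustrative;
parameter placement hypothetical):

    -virStorageFileGetMetadataFromBuf(const char *path, char *buf, size_t len,
    -                                 char **backingStore)
    +virStorageFileGetMetadataFromBuf(const char *path, char *buf, size_t len,
    +                                 int format)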