The storage file code needs to be run in the hypervisor drivers, while
the storage backend code needs to be run in the storage driver. Split
the source code as a preparatory step for creating separate loadable
modules.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The QEMU driver loadable module needs to be able to resolve all ELF
symbols it references against libvirt.so. Some of its symbols can only
be resolved against the storage_driver.so loadable module, which creates
a hard dependency between them. By moving the storage file backend
framework into the util directory, this gets included directly in the
libvirt.so library. The actual backend implementations are still done as
loadable modules, so this doesn't re-add deps on gluster libraries.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The storage driver backends are serving the public storage pools API,
while the storage file backends are serving the internal QEMU driver and
/ or libvirt utility code.
To prep for moving this storage file backend framework into the utility
code, split out the backend definitions.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Now that we can open connections to the secondary drivers on demand,
there is no need to pass a virConnectPtr into all the backend
functions.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The native gluster pool source list data differs from the data used for
attaching gluster volumes as netfs pools. Currently the only difference
is the format. Since native pools don't use it, and there will be more
differences later, add a more deterministic way to switch between the
types instead.
Add APIs that allow driver backends to be registered dynamically, so
that the list of available drivers does not need to be known at compile
time. This will allow us to modularize the storage driver at runtime.
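A minimal sketch of what such runtime registration can look like
(simplified; the real libvirt API differs in its types and error
reporting):

  #include <stddef.h>

  typedef struct _virStorageBackend {
      int type;                  /* pool type this backend handles */
      /* ... backend callbacks ... */
  } virStorageBackend;

  #define STORAGE_BACKEND_TABLE_MAX 20
  static const virStorageBackend *backendTable[STORAGE_BACKEND_TABLE_MAX];
  static size_t backendCount;

  /* Called by a loadable module at load time, instead of the backend
   * appearing in a compile-time array. */
  int
  virStorageBackendRegister(const virStorageBackend *backend)
  {
      size_t i;

      for (i = 0; i < backendCount; i++) {
          if (backendTable[i]->type == backend->type)
              return -1;         /* duplicate registration */
      }
      if (backendCount >= STORAGE_BACKEND_TABLE_MAX)
          return -1;             /* table full */
      backendTable[backendCount++] = backend;
      return 0;
  }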
Right now, we use simple string comparison both on the source paths
(mount's output vs pool's source) and the target (mount's mnt_dir vs
pool's target). The problem is symlinks: mount does return symlinks in
its output, e.g. /dev/mapper/lvm_symlink, and the same goes for the
pool's source/target. In order to compare these two successfully,
replace the plain string comparison with virFileComparePaths, which
resolves all symlinks and canonicalizes the paths prior to comparison.
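Conceptually (a simplified sketch of the comparison; the actual
virFileComparePaths helper lives in the util code):

  #include <limits.h>
  #include <stdlib.h>
  #include <string.h>

  /* Returns 1 if both paths resolve to the same file, 0 if not,
   * -1 if either path cannot be canonicalized. */
  static int
  comparePathsCanonical(const char *p1, const char *p2)
  {
      char r1[PATH_MAX], r2[PATH_MAX];

      if (!realpath(p1, r1) || !realpath(p2, r2))
          return -1;
      return strcmp(r1, r2) == 0 ? 1 : 0;
  }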
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1417203
Signed-off-by: Erik Skultety <eskultet@redhat.com>
When an FS pool's source is already mounted on the target location, we
fail with an error stating that the source is indeed already mounted on
the target, instead of simply marking the pool as active and thus
starting it.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Commit id '5f07c3c07' broke the FreeBSD build in the libvirt CI test
environment because UMOUNT was not defined unless WITH_STORAGE_FS
is defined.
So remove virStorageBackendUmountLocal from storage_util.{c,h} and
restore the code in the storage_backend_fs.c and _vstorage.c
modules.
Move all the volume functions to storage_util to create local/common helpers
using the same naming syntax as the existing upload, download, and wipe
virStorageBackend*Local APIs.
In the process of doing so, I found more APIs that can now become local
to storage_util. In order to distinguish between local/external, I
changed the names of the now local-only ones from "virStorageBackend..."
to just "storageBackend..."
Signed-off-by: John Ferlan <jferlan@redhat.com>
Move some pool functions to storage_util to create local/common helpers
using the same naming syntax as the existing upload, download, and wipe
virStorageBackend*Local APIs.
In the process of doing so, I found a few APIs that can now become local
to storage_util. In order to distinguish between local/external, I
changed the names of the now local-only ones from "virStorageBackend..."
to just "storageBackend..."
Signed-off-by: John Ferlan <jferlan@redhat.com>
Just moving code around, with a minor adjustment to combine the Stop
code with the Unmount code, since all the Stop code did was call the
Unmount code.
The file backend code was mistakenly put into #if WITH_STORAGE_FS. This
is not necessary since all the backends just access files on disk, and
thus the code for WITH_STORAGE_DIR is sufficient to compile everything.
The file became a garbage dump for all kinds of utility functions over
time. Move them to a separate file so that the original file can become
a clean interface for the storage backends.
For volume processing in virStorageBackendUpdateVolTargetInfo to get
the capacity commit id 'a760ba3a7' added the ability to probe a volume
that didn't list a target format. Unfortunately, the code used the
virStorageSource (e.g. target->type - virStorageType) rather than
virStorageVolDef (e.g. vol->type - virStorageVolType) in order to
make the comparison. As it turns out, target->type is not filled in at
all for a voldef, since the code relies on vol->type instead.
Ironically, the result is that only VIR_STORAGE_VOL_BLOCK volumes would
get their capacity updated.
This patch adjusts the code to take the "vol->type" field as an
argument and check that instead. This way the correct comparison is
made for a voldef.
Additionally, for a backingStore the 'type' field is never filled in;
however, since we know that the provided path is a location at which
the backing store can be accessed on the local filesystem, just pass
VIR_STORAGE_VOL_FILE in order to satisfy the adjusted voltype
check. Whether it's a FILE or a BLOCK only matters if we're trying to
get more data based on the target->format.
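Schematically (an illustrative sketch, not the actual libvirt code),
the fix amounts to passing the volume type explicitly rather than
reading the never-populated target->type:

  typedef enum {
      VIR_STORAGE_VOL_FILE,
      VIR_STORAGE_VOL_BLOCK,
  } virStorageVolType;           /* abbreviated for this sketch */

  /* The caller now supplies vol->type for a voldef, or
   * VIR_STORAGE_VOL_FILE for a backing store path. */
  static int
  volTypeIsKnown(virStorageVolType voltype)
  {
      return voltype == VIR_STORAGE_VOL_FILE ||
             voltype == VIR_STORAGE_VOL_BLOCK;
  }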
The tool is used for pool discovery. Since we call an external binary we
don't really need to compile out the code that uses it. We can check
whether it exists at runtime.
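For example (a hedged sketch; libvirt's own helper for locating a
binary in $PATH is virFindFileInPath):

  #include <unistd.h>

  /* Decide at runtime whether the discovery tool is usable,
   * instead of compiling the calling code out. */
  static int
  toolIsAvailable(const char *path)
  {
      return access(path, X_OK) == 0;
  }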
In commit 4090e15399 we went back from reporting no errors if no
storage pools were found on a given host to reporting a bad error, and
only in cases when gluster was not installed.
Report a less bad error in case there are no volumes. Also report the
error when gluster is installed but no volumes were found, since
virStorageBackendFindGlusterPoolSources would return success in that
case.
The check does not work properly (crashes) with netfs filesystems.
Also, checking that a device is not empty when attempting to mount a
filesystem is not very useful, since the mount will fail anyway.
As the code would improve only a very minor corner case, I don't really
see a reason to have this code at all.
This code would also fail if libvirt is compiled without support for
blkid and without parted.
This reverts commit a11fd69735.
Prior to starting up, let's be sure the target volume device is
formatted as we expect; otherwise, inhibit the start.
Signed-off-by: John Ferlan <jferlan@redhat.com>
It's possible that the API could be called from a startup path in
order to check whether the label on the device matches our configured
format. In order to handle that condition, add a 'writelabel' boolean
to the API to indicate whether a write or just a read is about to
happen.
This alters the two "error" conditions that care about whether a write
will happen.
Signed-off-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1363586
Commit id '27758859' introduced the "NO_OVERWRITE" flag check for
file system backends; however, the implementation, documentation,
and algorithm were inconsistent. The "flag" description for the
API described the flag as "Do not overwrite existing pool";
however, within the storage backend code the flag is described
as "it probes to determine if filesystem already exists on the
target device, returning an error if one exists".
The code itself was implemented using the paradigm of setting up the
superblock probe with a filter that caused the code to search only for
the provided format type. If that type wasn't found, then the
algorithm would return success, allowing the caller to format the
device. If the format type already existed on the device, then the
code would fail, indicating that a filesystem of the same type existed
on the device.
The result is that if someone had a file system of one type on the
device, it was possible to overwrite it if a different format type
was specified in updated XML effectively trashing whatever was on
the device already.
This patch alters what NO_OVERWRITE does for a file system backend
to be more realistic and consistent with what should be expected when
the caller requests to not overwrite the data on the disk.
Rather than filter results based on the expected format type, the
code will allow success/failure to be determined solely by whether the
blkid_do_probe call finds some known format on the device. This
adjustment also allows removal of the virStoragePoolProbeResult
enum, which was underutilized.
If it does find a formatted file system, different errors will be
generated indicating either that a file system of the specific type
already exists or that a file system of some other type already exists.
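For reference, a minimal libblkid sketch of such a probe (simplified;
the real code also distinguishes which filesystem was found):

  #include <blkid/blkid.h>     /* link with -lblkid */

  /* Returns 1 if some known filesystem exists on the device,
   * 0 if nothing was recognized, -1 on probe failure. */
  static int
  deviceHasKnownFilesystem(const char *device)
  {
      blkid_probe probe = blkid_new_probe_from_filename(device);
      int rc;

      if (!probe)
          return -1;
      blkid_probe_enable_superblocks(probe, 1);
      rc = blkid_do_probe(probe);   /* 0 = found, 1 = nothing found */
      blkid_free_probe(probe);
      if (rc < 0)
          return -1;
      return rc == 0 ? 1 : 0;
  }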
In the original virsh support commit id 'ddcd5674', the description
for '--no-overwrite' within the 'pool-build' command help output
has an ambiguous "of this type" included in the short description.
Compared to the longer description within the "Build a given pool."
section of the virsh.pod file, it's more apparent that this flag
causes a failure if a probe of the target already finds a filesystem.
So this patch also modifies the short description to just be the
opposite of the 'overwrite' flag, which matches the API description.
This patch also modifies the grammar in virsh.pod for no-overwrite
as well as reworking the paragraph formats to make it easier to read.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rename virStorageBackendFileSystemProbe to virStorageBackendBLKIDFindFS
and move it to the more common storage_backend module.
Create a shim virStorageBackendDeviceIsEmpty which will make the call
to virStorageBackendBLKIDFindFS and check the return value.
Signed-off-by: John Ferlan <jferlan@redhat.com>
The code at the very bottom of the DAC secdriver that calls
chown() should be fine with read-only data. If something needs to
be prepared it should have been done beforehand.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1366460
When using the --overwrite switch on a pool-build or pool-create, the
mkfs.ext{2|3|4} commands use mke2fs, which requires the '-F' switch in
order to force overwriting the current filesystem on the whole disk.
Likewise, the mkfs.vfat command uses mkfs.fat, which requires the '-I'
switch in order to force overwriting the current filesystem on the
whole disk.
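Schematically (a sketch; the flag letters come from the mke2fs and
mkfs.fat man pages, the helper name is hypothetical):

  #include <string.h>

  /* Pick the filesystem-specific force flag to append when the
   * --overwrite switch was given. */
  static const char *
  mkfsForceFlag(const char *format)
  {
      return strcmp(format, "vfat") == 0 ? "-I" : "-F";
  }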
A LUKS volume uses the volume secret type just like the QCOW2 secret,
so adjust the loading of the default secrets to handle any volume that
the virStorageFileGetMetadataFromBuf code has deemed to be an encrypted
volume and search for the volume's secret. This lookup is done by
volume usage, where the usage is expected to be the path to the volume.
The current LUKS support has a "luks" volume type which has
a "luks" encryption format.
This partially makes sense if you consider that the QEMU shorthand
syntax only requires you to specify format=luks, and it'll
automagically use "raw" as the next level driver. QEMU will,
however, let you override the "raw" with any other driver it
supports (vmdk, qcow, rbd, iscsi, etc).
IOW, the intention is that the "luks" encryption format
is applied to all disk formats (whether raw, qcow2, rbd, gluster
or whatever). As such it doesn't make much sense for libvirt
to say the volume type is "luks" - we should be saying that it
is a "raw" file, but with "luks" encryption applied.
IOW, when creating a storage volume we should use this XML
<volume>
  <name>demo.raw</name>
  <capacity>5368709120</capacity>
  <target>
    <format type='raw'/>
    <encryption format='luks'>
      <secret type='passphrase' uuid='0a81f5b2-8403-7b23-c8d6-21ccd2f80d6f'/>
    </encryption>
  </target>
</volume>
and when configuring a guest disk we should use
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/home/berrange/VirtualMachines/demo.raw'/>
  <target dev='sda' bus='scsi'/>
  <encryption format='luks'>
    <secret type='passphrase' uuid='0a81f5b2-8403-7b23-c8d6-21ccd2f80d6f'/>
  </encryption>
</disk>
This commit thus removes the "luks" storage volume type added
in
commit 318ebb36f1
Author: John Ferlan <jferlan@redhat.com>
Date: Tue Jun 21 12:59:54 2016 -0400
util: Add 'luks' to the FileTypeInfo
The storage file probing code is modified so that it can probe
the actual encryption formats explicitly, rather than merely
probing for the existence of encryption and letting the storage
driver guess the format.
The rest of the code is then adapted to deal with
VIR_STORAGE_FILE_RAW w/ VIR_STORAGE_ENCRYPTION_FORMAT_LUKS
instead of just VIR_STORAGE_FILE_LUKS.
The commit mentioned above was included in libvirt v2.0.0.
So when querying volume XML this will be a change in behaviour
vs the 2.0.0 release - it'll report 'raw' instead of 'luks'
for the volume format, but still report 'luks' for encryption
format. I think this change is OK because the storage driver
did not include any support for creating volumes, nor starting
guests with luks volumes in v2.0.0 - that has only arrived since then.
Clearly if we change this we must do it before v2.1.0 though.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
In order to use more common code and set up for a future type, modify the
encryption secret to allow the "usage" attribute or the "uuid" attribute
to define the secret. The "usage" in the case of a volume secret would be
the path to the volume as dictated by the backwards compatibility brought
on by virStorageGenerateQcowEncryption where it set up the usage field as
the vol->target.path and didn't allow someone to provide it. This carries
into virSecretObjListFindByUsageLocked, which takes the secret usage attribute
value from the domain disk definition and compares it against the
usage type from the secret definition. Since none of the code dealing
with qcow/qcow2 encryption secrets uses usage for lookup, it's a mostly
cosmetic change. The real usage comes in a future patch where the encryption
is expanded to be a luks volume and the secret will allow definition of
the usage field.
This code will make use of the virSecretLookup{Parse|Format}Secret common code.
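For illustration, an encryption element may now reference the secret by
usage, where the usage value is the volume path (the path below is
illustrative; the uuid form shown elsewhere still works):

  <encryption format='qcow'>
    <secret type='passphrase' usage='/var/lib/libvirt/images/demo.img'/>
  </encryption>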
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than have virCommandRun just spit out the error, allow callers
to pass the exitstatus, so the caller can make intelligent
decisions based on the error.
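A sketch of the resulting calling pattern (virCommand is libvirt's
internal helper API; the wrapper function here is hypothetical):

  #include "vircommand.h"      /* libvirt-internal */

  static int
  runToolCheckingStatus(const char *binary)
  {
      virCommandPtr cmd = virCommandNewArgList(binary, NULL);
      int status = 0;
      int ret = -1;

      /* With a non-NULL exitstatus, virCommandRun only fails on
       * exec/setup problems; a non-zero exit code is left for the
       * caller to interpret. */
      if (virCommandRun(cmd, &status) < 0)
          goto cleanup;
      ret = status;
   cleanup:
      virCommandFree(cmd);
      return ret;
  }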
Remove all the plumbing needed for the different qcow-create/kvm-img
non-raw file creation.
We can drop the error messages because CreateQemuImg will throw an
error for us, albeit with slightly less fidelity (unable to find
qemu-img), which I think is acceptable given the unlikeliness of that
error in practice.
This is an Ubuntu/Debian packaging convention. At one point it may have
been an actually different binary, but at least as of Ubuntu precise
(the oldest supported Ubuntu distro, released April 2012) kvm-img is
just a symlink to qemu-img for backwards compatibility.
I think it's safe to drop support for it.
qcow-create was a crippled qemu-img impl that shipped with Xen. I
think supporting this was only relevant for really old distros
that didn't have a proper qemu package, like early RHEL5. I think
it's fair to drop support.
Refreshes meta-information such as allocation, capacity, format, etc.
Ploop volumes differ from other volume types: the path to a volume is
the path to a directory containing the image file root.hds and
DiskDescriptor.xml.
https://openvz.org/Ploop/format
Due to this fact, the operations that open the volume have to be done
once again to get the information.
To decide whether a given volume is a ploop one, it is necessary to
check for the presence of the root.hds and DiskDescriptor.xml files in
the volume's directory. Only in this case can the volume be manipulated
as a ploop volume.
This strategy helps us resolve problems that might occur when we
upload some other volume type from a ploop source.
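A minimal sketch of that detection (plain POSIX stat(); the helper
name is hypothetical):

  #include <stdbool.h>
  #include <stdio.h>
  #include <sys/stat.h>

  /* A ploop volume directory must contain both root.hds and
   * DiskDescriptor.xml as regular files. */
  static bool
  isPloopVolumeDir(const char *volpath)
  {
      char path[4096];
      struct stat sb;

      snprintf(path, sizeof(path), "%s/root.hds", volpath);
      if (stat(path, &sb) < 0 || !S_ISREG(sb.st_mode))
          return false;
      snprintf(path, sizeof(path), "%s/DiskDescriptor.xml", volpath);
      return stat(path, &sb) == 0 && S_ISREG(sb.st_mode);
  }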
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Recursively deletes the whole directory of a ploop volume.
To delete a ploop image, it has to be unmounted first.
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
These callbacks let us create ploop volumes in dir, fs, and similar
pools.
If a ploop volume was created via the buildVol callback, then this
volume is an empty ploop device with a DiskDescriptor.xml.
If the volume was created via .buildFrom, then its content matches the
input volume's content.
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
A ploop image consists of a directory with two files: the ploop image
itself, called root.hds, and DiskDescriptor.xml, which contains
information about the ploop device: https://openvz.org/Ploop/format.
Such volumes are difficult to manipulate in terms of the existing
volume types because they are neither single files nor directories.
This patch introduces a new volume type - ploop. This volume type is
used by ploop volumes exclusively.
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1025230
When determining whether a FS pool is mounted, rather than assuming that
the FS pool is mounted just because the target.path is in the mount list,
let's make sure that the FS pool source matches what is mounted.
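Conceptually (a simplified sketch using getmntent(); the real code
canonicalizes the paths as in the virFileComparePaths change):

  #include <mntent.h>
  #include <stdbool.h>
  #include <stdio.h>
  #include <string.h>

  /* Match both the mount source and the mount target, not just
   * the target. */
  static bool
  poolIsMounted(const char *src, const char *dst)
  {
      FILE *mtab = setmntent("/proc/mounts", "r");
      struct mntent *ent;
      bool found = false;

      if (!mtab)
          return false;
      while ((ent = getmntent(mtab))) {
          if (strcmp(ent->mnt_fsname, src) == 0 &&
              strcmp(ent->mnt_dir, dst) == 0) {
              found = true;
              break;
          }
      }
      endmntent(mtab);
      return found;
  }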
Refactor the code that builds the pool source string during the FS
storage pool mount into a separate helper.
A future patch will use the helper in order to validate that the
mounted FS matches the pool's expectation during poolCheck processing.
The libvirt file system storage driver determines what file to
act on by concatenating the pool location with the volume name.
If a user is able to pick names like "../../../etc/passwd", then
they can escape the bounds of the pool. For that matter,
virStoragePoolListVolumes() doesn't descend into subdirectories,
so a user really shouldn't use a name with a slash.
Normally, only privileged users can coerce libvirt into creating
or opening existing files using the virStorageVol APIs; and such
users already have full privilege to create any domain XML (so it
is not an escalation of privilege). But in the case of
fine-grained ACLs, it is feasible that a user can be granted
storage_vol:create but not domain:write, and it violates
assumptions if such a user can abuse libvirt to access files
outside of the storage pool.
Therefore, prevent all use of volume names that contain "/",
whether or not such a name is actually attempting to escape the
pool.
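The check itself is trivial (a sketch; the resulting virsh behaviour
is shown in the transcript below):

  #include <string.h>

  /* Reject any volume name containing a path separator. */
  static int
  volNameIsValid(const char *name)
  {
      return strchr(name, '/') == NULL;
  }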
This changes things from:
$ virsh vol-create-as default ../../../../../../etc/haha --capacity 128
Vol ../../../../../../etc/haha created
$ rm /etc/haha
to:
$ virsh vol-create-as default ../../../../../../etc/haha --capacity 128
error: Failed to create vol ../../../../../../etc/haha
error: Requested operation is not valid: volume name '../../../../../../etc/haha' cannot contain '/'
Signed-off-by: Eric Blake <eblake@redhat.com>
Similar to the openflags, which allow VIR_STORAGE_VOL_OPEN_NOERROR to
be passed to avoid open errors, add a 'readflags' variable so that in
the future read failures can also be ignored.
Similar to commit id '35847860', it's possible to attempt to create
a 'netfs' directory in an NFS root-squash environment which will cause
the 'vol-delete' command to fail. It's also possible that error paths
from 'vol-create' would fail to remove a created directory
if the permissions were incorrect (and disallowed root access).
Thus rename virFileUnlink to virFileRemove to match the C API
functionality, adjust the code to use rmdir or unlink depending on the
path type, and then use/call it for the VIR_STORAGE_VOL_DIR