The VIR_ENUM_DECL/VIR_ENUM_IMPL helper macros already append 'Type'
to the enum name being converted; it looks silly to have functions
with 'TypeType' in their name. Even though some of our enums have
to have a 'Type' suffix, the corresponding string conversion
functions do not.
* src/conf/secret_conf.h (VIR_ENUM_DECL): Rename virSecretUsageType.
* src/conf/storage_conf.h (VIR_ENUM_DECL): Rename
virStoragePoolAuthType, virStoragePoolSourceAdapterType,
virStoragePartedFsType.
* src/conf/domain_conf.c (virDomainDiskDefParseXML)
(virDomainFSDefParseXML, virDomainFSDefFormat): Update callers.
* src/conf/secret_conf.c (virSecretDefParseUsage)
(virSecretDefFormatUsage): Likewise.
* src/conf/storage_conf.c (virStoragePoolDefParseAuth)
(virStoragePoolDefParseSource, virStoragePoolSourceFormat):
Likewise.
* src/lxc/lxc_controller.c (virLXCControllerSetupLoopDevices):
Likewise.
* src/storage/storage_backend_disk.c
(virStorageBackendDiskPartFormat): Likewise.
* src/util/virstorageencryption.c (virStorageEncryptionSecretParse)
(virStorageEncryptionSecretFormat): Likewise.
* tools/virsh-secret.c (cmdSecretList): Likewise.
* src/libvirt_private.syms (secret_conf.h, storage_conf.h): Export
corrected names.
Signed-off-by: Eric Blake <eblake@redhat.com>
In "src/conf/" there are many enumeration (enum) declarations.
Similar to the recent cleanup to "src/util" directory, it's
better to use a typedef for variable types, function types and
other usages. Other enumeration and folders will be changed to
typedef's in the future. Most of the files changed in this
commit are related to storage (storage_conf) enums.
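As a minimal sketch of the pattern being applied (the names below are illustrative, not the actual enums converted by this commit):

    /* Before the cleanup, code used a bare enum and had to declare
     * variables and parameters as plain 'int':
     *
     *     enum virExampleState { ... };
     *     int state;
     *
     * After the cleanup, the enum carries a typedef, so variables,
     * struct fields and function parameters can use the type itself: */
    typedef enum {
        VIR_EXAMPLE_STATE_INACTIVE = 0,
        VIR_EXAMPLE_STATE_RUNNING,

        VIR_EXAMPLE_STATE_LAST
    } virExampleState;

    void virExampleSetState(virExampleState state);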
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1092882
Refactoring in commit id '0c2305b3' resulted in the wrong storage
volume object being passed to the new storageVolDeleteInternal().
It should have passed 'voldef' which is the address found in the
pool->volumes.objs[i] array. By passing 'voldef', the DeleteInternal
code will find and remove the voldef from the volumes.objs[] list.
When creating a new volume, it is possible to copy data into it from
another already existing volume (referred to as @origvol). Obviously,
read-only access to @origvol is required, which is thread safe
(probably not performance-wise though). However, with the current code
both @newvol and @origvol are marked as building for the time of
copying data from @origvol to @newvol. The rationale behind this
is to disallow some operations on both @origvol and @newvol, e.g.
vol-wipe, vol-delete, vol-download. While it makes sense not to allow
such operations on a partly copied mirror, it doesn't make sense to
disallow vol-create or vol-download on the source (@origvol).
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In "src/util/" there are many enumeration (enum) declarations.
Sometimes, it's better using a typedef for variable types,
function types and other usages. Other enumeration will be
changed to typedef's in the future.
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
All callers of virStorageFileGetMetadataFromBuf were first calling
virStorageFileProbeFormatFromBuf, to learn what format to pass in.
But this function is already wired to do the exact same probe if
the incoming format is VIR_STORAGE_FILE_AUTO, so it's simpler to
just refactor the probing into the central function.
* src/util/virstoragefile.h (virStorageFileGetMetadataFromBuf):
Drop parameter.
(virStorageFileProbeFormatFromBuf): Drop declaration.
* src/util/virstoragefile.c (virStorageFileGetMetadataFromBuf):
Do probe here instead of in callers.
(virStorageFileProbeFormatFromBuf): Make static.
* src/libvirt_private.syms (virstoragefile.h): Drop function.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Update caller.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit id 'ac9a0963' refactored out the 'withCapacity' parameter for the
virStorageBackendUpdateVolInfo() API. See:
http://www.redhat.com/archives/libvir-list/2014-April/msg00043.html
This resulted in a difference in how 'virsh vol-info --pool <poolName>
<volume>' or 'virsh vol-list --pool <poolName> --details' outputs
the capacity information for a directory pool with a qcow2 sparse file.
For example, using the following XML
mkdir /home/TestPool
cat testpool.xml
<pool type='dir'>
  <name>TestPool</name>
  <uuid>6bf80895-10b6-75a6-6059-89fdea2aefb7</uuid>
  <source>
  </source>
  <target>
    <path>/home/TestPool</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
virsh pool-create testpool.xml
virsh vol-create-as --pool TestPool temp_vol_1 \
--capacity 1048576 --allocation 1048576 --format qcow2
virsh vol-info --pool TestPool temp_vol_1
Results in listing a Capacity value. Prior to the commit, the value would
be '1.0 MiB' (1048576 bytes). However, after the commit the output would be
(for example) '192.50 KiB', which for my system was the size of the volume
in my file system (e.g. 'ls -l TestPool/temp_vol_1' results in '197120' bytes
or 192.50 KiB). While perhaps technically correct, it's not necessarily
what the user expected (certainly virt-test didn't expect it).
This patch restores the code to not update the target capacity for this path.
The stripe_unit and stripe_count arguments are passed to rbd_create3 in
the wrong order, resulting in a stripe size of 1 byte with 4194304
stripes on newly created RBD volumes.
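A sketch of the corrected call, assuming librbd's rbd_create3() signature and using placeholder values for the size, features and order arguments:

    #include <stdint.h>
    #include <rbd/librbd.h>

    /* rbd_create3() takes stripe_unit before stripe_count; swapping the
     * two is what produced 1-byte stripes on newly created volumes. */
    static int
    example_rbd_create(rados_ioctx_t ioctx, const char *name, uint64_t size)
    {
        uint64_t features = 0;            /* placeholder */
        uint64_t stripe_unit = 4194304;   /* 4 MiB per stripe */
        uint64_t stripe_count = 1;
        int order = 0;                    /* let librbd pick the object size */

        return rbd_create3(ioctx, name, size, features, &order,
                           stripe_unit, stripe_count);
    }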
https://bugzilla.redhat.com/show_bug.cgi?id=1092208
Signed-off-by: Steven McDonald <steven.mcdonald@anchor.net.au>
More instances of failure to report (unlikely) readdir errors.
In one case, I chose to ignore them, given that a readdir error
would be no different than timing out on the loop, where the
fallback path behaves correctly either way.
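For reference, the usual POSIX pattern for noticing (rather than silently swallowing) a readdir failure looks roughly like this sketch:

    #include <dirent.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    static int
    example_scan_dir(const char *path)
    {
        DIR *dh = opendir(path);
        struct dirent *ent;

        if (!dh) {
            fprintf(stderr, "cannot open %s: %s\n", path, strerror(errno));
            return -1;
        }

        errno = 0;                 /* readdir() only sets errno on failure */
        while ((ent = readdir(dh))) {
            /* ... process ent->d_name ... */
            errno = 0;
        }
        if (errno != 0) {          /* NULL plus errno set means a real error */
            fprintf(stderr, "cannot read %s: %s\n", path, strerror(errno));
            closedir(dh);
            return -1;
        }

        closedir(dh);
        return 0;
    }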
* src/storage/storage_backend.c (virStorageBackendStablePath):
Ignore readdir errors.
* src/storage/storage_backend_fs.c
(virStorageBackendFileSystemRefresh): Report readdir errors.
* src/storage/storage_backend_iscsi.c
(virStorageBackendISCSIGetHostNumber): Likewise.
* src/storage/storage_backend_scsi.c (getNewStyleBlockDevice)
(getBlockDevice, virStorageBackendSCSIFindLUs): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Instead of hardcoding LIBEXECDIR as the location of the libvirt_parthelper
binary, use virFileFindResource to optionally find it in the current
build directory.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Commit id '18642d10' caused a virt-test regression for NFS backend
storage error path checks when running the command:
'virsh find-storage-pool-sources-as netfs Unknown '
when the host did not have Gluster installed. Prior to the commit,
the test would fail with the error:
error: internal error: Child process (/usr/sbin/showmount --no-headers
--exports Unknown) unexpected exit status 1: clnt_create: RPC: Unknown host
After the commit, the error would be ignored, the call would succeed,
and an empty list of pool sources returned. This was tucked into the
commit message as an expected outcome.
When the target host does not have a GLUSTER_CLI this is a regression
over the previous release. Furthermore, even if the Gluster CLI was present
but failed to get devices, the API would return a failure even if
the NFS backend had found devices.
Modify the logic to return failure when the NFS backend check fails and
there's no GLUSTER_CLI, or when both backend checks fail.
If either returns success and GLUSTER_CLI is defined, then fetch and return
a list of source devices, even if it's empty.
A couple pieces of virStorageFileMetadata are used only while
collecting information about the chain, and don't need to
live permanently in the struct. This patch refactors external
callers to collect the information separately, so that the
next patch can remove the fields.
* src/util/virstoragefile.h (virStorageFileGetMetadataFromBuf):
Alter signature.
* src/util/virstoragefile.c (virStorageFileGetMetadataInternal):
Likewise.
(virStorageFileGetMetadataFromFDInternal): Adjust callers.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Likewise.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Deciding if a user string represents a local file instead of a
network path is an operation worth exposing directly, particularly
since the next patch will be removing a redundant variable that
was caching the information.
* src/util/virstoragefile.h (virStorageIsFile): New declaration.
* src/util/virstoragefile.c (virBackingStoreIsFile): Rename...
(virStorageIsFile): ...export, and allow NULL input.
(virStorageFileGetMetadataInternal)
(virStorageFileGetMetadataRecurse, virStorageFileGetMetadata):
Update callers.
* src/conf/domain_conf.c (virDomainDiskDefForeachPath): Use it.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Likewise.
* src/libvirt_private.syms (virstoragefile.h): Export function.
Signed-off-by: Eric Blake <eblake@redhat.com>
Now that we store all metadata about a storage image in a
virStorageSource struct, let's use it also to store the information needed by
the storage driver to access and do operations on the files.
https://bugzilla.redhat.com/show_bug.cgi?id=1024159
If adding a volume to a storage pool fails during the CreateXML or
CreateXMLFrom APIs, we don't want to adjust the available and
allocation values for the storage pool during storageVolDelete
since we haven't adjusted the values for the create.
Refactor storageVolDelete() a bit to create a storageVolDeleteInternal()
which will handle the primary deletion activities. Add a parameter
updateMeta which will signify whether to update the values or not.
Adjust the calls from CreateXML and CreateXMLFrom to directly call the
DeleteInternal with the pool lock held. This does bypass the call
to virStorageVolDeleteEnsureACL().
Commit id '18642d10' refactored the call to virCommandRunRegex()
inside a new function virStorageBackendFileSystemNetFindNFSPoolSources(),
but the cut-n-paste didn't remove the "&state". Now that state is passed
by reference, it results in a libvirtd core dump with a messages entry:
"...internal error: unknown storage pool type Unknow"
Currently VolOpen notifies the user of a potentially non-fatal failure by
returning -2 and logging a VIR_WARN or VIR_INFO. Unfortunately most
callers treat -2 as fatal but don't actually report any message with
the error APIs.
Rename the VOL_OPEN_ERROR flag to VOL_OPEN_NOERROR. If NOERROR is specified,
we preserve the current behavior of returning -2 (there's only one caller
that wants this).
However in the default case, only return -1, and actually use the error
APIs. Fix up a couple callers as a result.
A future patch will merge virStorageFileMetadata and virStorageSource,
but I found it easier to do if both structs use the same information
for tracking whether a source file needs encryption keys.
* src/util/virstoragefile.h (_virStorageFileMetadata): Prepare
full encryption struct instead of just a bool.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Use transfer semantics.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
* src/util/virstoragefile.c (virStorageFileGetMetadataInternal):
Populate struct.
(virStorageFileFreeMetadata): Adjust clients.
* tests/virstoragetest.c (testStorageChain): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Now that each virStorageSource can track allocation information,
and given that we already have the information without extra
syscalls, it's easier to just always populate the information
directly into the struct than it is to sometimes pass the address
of the struct members down the call chain.
* src/storage/storage_backend.h (virStorageBackendUpdateVolInfo)
(virStorageBackendUpdateVolTargetInfo)
(virStorageBackendUpdateVolTargetInfoFD): Update signature.
* src/storage/storage_backend.c (virStorageBackendUpdateVolInfo)
(virStorageBackendUpdateVolTargetInfo)
(virStorageBackendUpdateVolTargetInfoFD): Always populate struct
members instead.
* src/storage/storage_backend_disk.c
(virStorageBackendDiskMakeDataVol): Update client.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget)
(virStorageBackendFileSystemRefresh)
(virStorageBackendFileSystemVolRefresh): Likewise.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
* src/storage/storage_backend_logical.c
(virStorageBackendLogicalMakeVol): Likewise.
* src/storage/storage_backend_mpath.c
(virStorageBackendMpathNewVol): Likewise.
* src/storage/storage_backend_scsi.c
(virStorageBackendSCSINewLun): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
One of the features of qcow2 is that a wrapper file can have
more capacity than its backing file from the guest's perspective;
what's more, sparse files make tracking allocation of both
the active and backing file worthwhile. As such, it makes
more sense to show allocation numbers for each file in a chain,
and not just the top-level file. This sets up the fields for
the tracking, although it does not modify XML to display any
new information.
* src/util/virstoragefile.h (_virStorageSource): Add fields.
* src/conf/storage_conf.h (_virStorageVolDef): Drop redundant
fields.
* src/storage/storage_backend.c (virStorageBackendCreateBlockFrom)
(createRawFile, virStorageBackendCreateQemuImgCmd)
(virStorageBackendCreateQcowCreate): Update clients.
* src/storage/storage_driver.c (storageVolDelete)
(storageVolCreateXML, storageVolCreateXMLFrom, storageVolResize)
(storageVolWipeInternal, storageVolGetInfo): Likewise.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget)
(virStorageBackendFileSystemRefresh)
(virStorageBackendFileSystemVolResize)
(virStorageBackendFileSystemVolRefresh): Likewise.
* src/storage/storage_backend_logical.c
(virStorageBackendLogicalMakeVol)
(virStorageBackendLogicalCreateVol): Likewise.
* src/storage/storage_backend_scsi.c
(virStorageBackendSCSINewLun): Likewise.
* src/storage/storage_backend_mpath.c
(virStorageBackendMpathNewVol): Likewise.
* src/storage/storage_backend_rbd.c
(volStorageBackendRBDRefreshVolInfo)
(virStorageBackendRBDCreateImage): Likewise.
* src/storage/storage_backend_disk.c
(virStorageBackendDiskMakeDataVol)
(virStorageBackendDiskCreateVol): Likewise.
* src/storage/storage_backend_sheepdog.c
(virStorageBackendSheepdogBuildVol)
(virStorageBackendSheepdogParseVdiList): Likewise.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Likewise.
* src/conf/storage_conf.c (virStorageVolDefFormat)
(virStorageVolDefParseXML): Likewise.
* src/test/test_driver.c (testOpenVolumesForPool)
(testStorageVolCreateXML, testStorageVolCreateXMLFrom)
(testStorageVolDelete, testStorageVolGetInfo): Likewise.
* src/esx/esx_storage_backend_iscsi.c (esxStorageVolGetXMLDesc):
Likewise.
* src/esx/esx_storage_backend_vmfs.c (esxStorageVolGetXMLDesc)
(esxStorageVolCreateXML): Likewise.
* src/parallels/parallels_driver.c (parallelsAddHddByVolume):
Likewise.
* src/parallels/parallels_storage.c (parallelsDiskDescParseNode)
(parallelsStorageVolDefineXML, parallelsStorageVolCreateXMLFrom)
(parallelsStorageVolDefRemove, parallelsStorageVolGetInfo):
Likewise.
* src/vbox/vbox_tmpl.c (vboxStorageVolCreateXML)
(vboxStorageVolGetXMLDesc): Likewise.
* tests/storagebackendsheepdogtest.c (test_vdi_list_parser):
Likewise.
* src/phyp/phyp_driver.c (phypStorageVolCreateXML): Likewise.
A fairly smooth transition. And now that domain disks and
storage volumes share a common struct, it opens the door for
a future patch to expose more details in the XML for both
objects.
* src/conf/storage_conf.h (_virStorageVolTarget): Delete.
(_virStorageVolDef): Use common type.
* src/conf/storage_conf.c (virStorageVolDefFree)
(virStorageVolTargetDefFormat): Update clients.
* src/storage/storage_backend.h: Likewise.
* src/storage/storage_backend.c
(virStorageBackendDetectBlockVolFormatFD)
(virStorageBackendUpdateVolTargetInfo)
(virStorageBackendUpdateVolTargetInfoFD): Likewise.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Some preparatory work before consolidating storage volume
structs with the rest of virstoragefile. Making these
changes allows a volume target to be much closer to (a
subset of) the virStorageSource struct.
Making perms be a pointer allows it to be optional if we
have a storage pool that doesn't expose permissions in a
way we can access. It also allows future patches to
optionally expose permissions details learned about a disk
image via domain <disk> listings, rather than just
limiting it to storage volume listings.
The disk partition type was only used by internal code to
control what type of partition to create when carving up
an MS-DOS partition table storage pool (and is not used
for GPT partition tables or other storage pools). It was
not exposed in volume XML, and it is more closely
related to the extent information of the overall block device
than to the <target> information describing the host
file. Besides, if we ever decide to expose it in XML down
the road, we can move it back as needed.
* src/conf/storage_conf.h (_virStorageVolTarget): Change perms to
pointer, enhance comments. Move partition type...
(_virStorageVolSource): ...here.
* src/conf/storage_conf.c (virStorageVolDefFree)
(virStorageVolDefParseXML, virStorageVolTargetDefFormat): Update
clients.
* src/storage/storage_backend_fs.c (createFileDir): Likewise.
* src/storage/storage_backend.c (virStorageBackendCreateBlockFrom)
(virStorageBackendCreateRaw, virStorageBackendCreateExecCommand)
(virStorageBackendUpdateVolTargetInfoFD): Likewise.
* src/storage/storage_backend_logical.c
(virStorageBackendLogicalCreateVol): Likewise.
* src/storage/storage_backend_disk.c
(virStorageBackendDiskMakeDataVol)
(virStorageBackendDiskPartTypeToCreate): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Noticed during my work on storage struct cleanups.
* src/storage/storage_backend_disk.c
(virStorageBackendDiskPartBoundaries): Fix spelling errors.
Signed-off-by: Eric Blake <eblake@redhat.com>
Now that we have a common struct, it's time to start using it!
Since external snapshots make a longer backing chain, it is
only natural to use the same struct for the file created by
the snapshot as what we use for <domain> disks.
* src/conf/snapshot_conf.h (_virDomainSnapshotDiskDef): Use common
struct instead of open-coded duplicate fields.
* src/conf/snapshot_conf.c (virDomainSnapshotDiskDefClear)
(virDomainSnapshotDiskDefParseXML, virDomainSnapshotAlignDisks)
(virDomainSnapshotDiskDefFormat)
(virDomainSnapshotDiskGetActualType): Adjust clients.
* src/qemu/qemu_conf.c (qemuTranslateSnapshotDiskSourcePool):
Likewise.
* src/qemu/qemu_driver.c (qemuDomainSnapshotDiskGetSourceString)
(qemuDomainSnapshotCreateInactiveExternal)
(qemuDomainSnapshotPrepareDiskExternalOverlayActive)
(qemuDomainSnapshotPrepareDiskExternal)
(qemuDomainSnapshotPrepare)
(qemuDomainSnapshotCreateSingleDiskActive): Likewise.
* src/storage/storage_driver.c
(virStorageFileInitFromSnapshotDef): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1072714
Use the "gluster" command line tool to retrieve information about remote
volumes on a gluster server to allow storage pool source lookup.
Unfortunately gluster doesn't provide a management library that we
could use directly; instead, the RPC calls are hardcoded in the
command line tool.
Extract the NFS-related stuff into a separate function and tidy up the
rest of the code so we can reuse it to add gluster backend detection.
Additionally, avoid reporting errors from "showmount" and return an
empty source list instead. This will help when adding other detection
backends.
According to our documentation the "key" value has the following
meaning: "Providing an identifier for the volume which identifies a
single volume." The currently used keys for gluster volumes consist of
the gluster volume name and file path. This can't be considered unique
as a different storage server can serve a volume with the same name.
Unfortunately I wasn't able to figure out a way to retrieve the gluster
volume UUID which would avoid the possibility of having two distinct
keys identifying a single volume.
Use the full URI as the key for the volume to avoid the more critical
ambiguity problem and document the possible change to UUID.
The libgfapi function glfs_fini doesn't tolerate NULL pointers. Add a
check on the error paths as it's possible to crash libvirtd if the
gluster volume can't be initialized.
It's finally time to start tracking disk backing chains in
<domain> XML. The first step is to start refactoring code
so that we have an object more convenient for representing
each host source resource in the context of a single guest
<disk>. Ultimately, I plan to move the new type into src/util
where it can be reused by virStorageFile, but to make the
transition easier to review, this patch just creates the
new type then fixes everything until it compiles again.
* src/conf/domain_conf.h (_virDomainDiskDef): Split...
(_virDomainDiskSourceDef): ...to new struct.
(virDomainDiskAuthClear): Use new type.
* src/conf/domain_conf.c (virDomainDiskDefFree): Split...
(virDomainDiskSourceDefClear): ...to new function.
(virDomainDiskGetType, virDomainDiskSetType)
(virDomainDiskGetSource, virDomainDiskSetSource)
(virDomainDiskGetDriver, virDomainDiskSetDriver)
(virDomainDiskGetFormat, virDomainDiskSetFormat)
(virDomainDiskAuthClear, virDomainDiskGetActualType)
(virDomainDiskDefParseXML, virDomainDiskSourceDefFormat)
(virDomainDiskDefFormat, virDomainDiskDefForeachPath)
(virDomainDiskDefGetSecurityLabelDef)
(virDomainDiskSourceIsBlockType): Adjust all users.
* src/lxc/lxc_controller.c (virLXCControllerSetupDisk):
Likewise.
* src/lxc/lxc_driver.c (lxcDomainAttachDeviceMknodHelper):
Likewise.
* src/qemu/qemu_command.c (qemuAddRBDHost, qemuParseRBDString)
(qemuParseDriveURIString, qemuParseGlusterString)
(qemuParseISCSIString, qemuParseNBDString)
(qemuDomainDiskGetSourceString, qemuBuildDriveStr)
(qemuBuildCommandLine, qemuParseCommandLineDisk)
(qemuParseCommandLine): Likewise.
* src/qemu/qemu_conf.c (qemuCheckSharedDevice)
(qemuAddISCSIPoolSourceHost, qemuTranslateDiskSourcePool):
Likewise.
* src/qemu/qemu_driver.c (qemuDomainUpdateDeviceConfig)
(qemuDomainPrepareDiskChainElement)
(qemuDomainSnapshotCreateInactiveExternal)
(qemuDomainSnapshotPrepareDiskExternalBackingInactive)
(qemuDomainSnapshotPrepareDiskInternal)
(qemuDomainSnapshotPrepare)
(qemuDomainSnapshotCreateSingleDiskActive)
(qemuDomainSnapshotUndoSingleDiskActive)
(qemuDomainBlockPivot, qemuDomainBlockJobImpl)
(qemuDomainBlockCopy, qemuDomainBlockCommit): Likewise.
* src/qemu/qemu_migration.c (qemuMigrationIsSafe): Likewise.
* src/qemu/qemu_process.c (qemuProcessGetVolumeQcowPassphrase)
(qemuProcessInitPasswords): Likewise.
* src/security/security_selinux.c
(virSecuritySELinuxSetSecurityFileLabel): Likewise.
* src/storage/storage_driver.c (virStorageFileInitFromDiskDef):
Likewise.
* tests/securityselinuxlabeltest.c (testSELinuxLoadDef):
Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
If we cannot stat/open a file on pool refresh, returning -1 aborts
the refresh and the pool is undefined.
Only treat missing files as fatal if VolOpenCheckMode is called
with the VIR_STORAGE_VOL_OPEN_ERROR flag. If this flag is missing
(when it's called from virStorageBackendProbeTarget in
virStorageBackendFileSystemRefresh), only emit a warning and return
-2 to let the caller skip over the file.
https://bugzilla.redhat.com/show_bug.cgi?id=977706
Without this, using /dev/mapper as a directory pool
fails in virStorageBackendUpdateVolTargetInfoFD:
cannot seek to end of file '/dev/mapper/control': Illegal seek
Skip over character devices by default.
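A minimal sketch of the kind of check involved (not libvirt's actual helper), using fstat() so the decision is made before any attempt to seek:

    #include <sys/stat.h>

    /* Returns 1 if the already-opened fd refers to something the refresh
     * should skip, such as a character device like /dev/mapper/control. */
    static int
    example_should_skip(int fd)
    {
        struct stat sb;

        if (fstat(fd, &sb) < 0)
            return 1;    /* cannot even stat it: skip */
        if (S_ISCHR(sb.st_mode) || S_ISFIFO(sb.st_mode) || S_ISSOCK(sb.st_mode))
            return 1;    /* seeking to the end would fail or block */
        return 0;
    }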
https://bugzilla.redhat.com/show_bug.cgi?id=710866
virStorageBackendISCSISession only needs the path of the source
device and virStorageBackendISCSIRescanLUNs doesn't need the pool
at all.
This will allow the functions to be moved to src/util.
Any source file which calls the logging APIs now needs
to have a VIR_LOG_INIT("source.name") declaration at
the start of the file. This provides a static variable
of the virLogSource type.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Coverity found an issue in lxc_driver and uml_driver: we don't
check the return value of the register functions.
I've also updated all other places and unified the way we check the
return value.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
From commit id 'd53bbfd1':
Found one core dump and one possible memory leak. The core was seen during a local
virt-test/tp_libvirt run for the vol_create_from test. The memory leak
was seen by inspection during a review of all VIR_APPEND_ELEMENT changes.
In storage_backend_disk/virStorageBackendDiskMakeDataVol(), the 'vol'
needs to be kept around since it's used later, so use the _COPY macro.
This caused a segv in libvirtd:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffe87c3700 (LWP 6919)]
virStorageBackendDiskMakeDataVol (vol=0x0, groups=0x7fffc8000d70, pool=0x7fffc8002460) at storage/storage_backend_disk.c:66
66 if (vol->target.path == NULL) {
In storage_backend_rbd/virStorageBackendRBDRefreshPool() there's a failure
path where the 'vol' needs to go through virStorageVolDefFree() since it
wouldn't be appended.
In storageVolLookupByPath the provided path is "sanitized" at first.
This removes extra slashes and the like. When the lookup of the volume
fails, the original path is used, which makes it hard to trace errors in
some cases.
Improve the error message to print the sanitized path along with the
user provided path if they are not equal.
When looking up a volume by path on a non-local filesystem don't use the
"cleaned" path that might be mangled in such a way that it will differ
from a path provided by a storage backend.
Skip the cleanup step for gluster, sheepdog and RBD.
Pools that are not backed by files in the filesystem cause problems with
some APIs. Error out when attempting to upload a volume in such a pool
as currently we expect a local file representation for it.
Auditing all callers of virCommandRun and virCommandWait that
passed a non-NULL pointer for exit status turned up some
interesting observations. Many callers were merely passing
a pointer to avoid the overall command dying, but without
caring what the exit status was - but these callers would
be better off treating a child death by signal as an abnormal
exit. Other callers were actually acting on the status, but
not all of them remembered to filter by WIFEXITED and convert
with WEXITSTATUS; depending on the platform, this can result
in a status being reported as 256 times too big. And among
those that correctly parse the output, it gets rather verbose.
Finally, there were the callers that explicitly checked that
the status was 0, and gave their own message, but with fewer
details than what virCommand gives for free.
So the best idea is to move the complexity out of callers and
into virCommand - by default, we return the actual exit status
already cleaned through WEXITSTATUS and treat signals as a
failed command; but the few callers that care can ask for raw
status and act on it themselves.
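For reference, the portable status handling that callers kept getting wrong, and that the new default now performs on their behalf, is roughly this sketch:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int
    example_run(char *const argv[])
    {
        pid_t pid = fork();
        int raw;

        if (pid < 0)
            return -1;
        if (pid == 0) {
            execvp(argv[0], argv);
            _exit(127);
        }
        if (waitpid(pid, &raw, 0) < 0)
            return -1;

        if (WIFEXITED(raw))
            return WEXITSTATUS(raw);   /* 0..255, already shifted down */
        if (WIFSIGNALED(raw))
            fprintf(stderr, "child killed by signal %d\n", WTERMSIG(raw));
        return -1;                     /* treat death by signal as failure */
    }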
* src/util/vircommand.h (virCommandRawStatus): New prototype.
* src/libvirt_private.syms (util/command.h): Export it.
* docs/internals/command.html.in: Document it.
* src/util/vircommand.c (virCommandRawStatus): New function.
(virCommandWait): Adjust semantics.
* tests/commandtest.c (test1): Test it.
* daemon/remote.c (remoteDispatchAuthPolkit): Adjust callers.
* src/access/viraccessdriverpolkit.c (virAccessDriverPolkitCheck):
Likewise.
* src/fdstream.c (virFDStreamCloseInt): Likewise.
* src/lxc/lxc_process.c (virLXCProcessStart): Likewise.
* src/qemu/qemu_command.c (qemuCreateInBridgePortWithHelper):
Likewise.
* src/xen/xen_driver.c (xenUnifiedXendProbe): Simplify.
* tests/reconnect.c (mymain): Likewise.
* tests/statstest.c (mymain): Likewise.
* src/bhyve/bhyve_process.c (virBhyveProcessStart)
(virBhyveProcessStop): Don't overwrite virCommand error.
* src/libvirt.c (virConnectAuthGainPolkit): Likewise.
* src/openvz/openvz_driver.c (openvzDomainGetBarrierLimit)
(openvzDomainSetBarrierLimit): Likewise.
* src/util/virebtables.c (virEbTablesOnceInit): Likewise.
* src/util/viriptables.c (virIpTablesOnceInit): Likewise.
* src/util/virnetdevveth.c (virNetDevVethCreate): Fix debug
message.
* src/qemu/qemu_capabilities.c (virQEMUCapsInitQMP): Add comment.
* src/storage/storage_backend_iscsi.c
(virStorageBackendISCSINodeUpdate): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
These timeout values make librados/librbd return -ETIMEDOUT when an
operation is blocking due to a failing/unreachable Ceph cluster.
By having the operations time out libvirt will not block.
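The options are set on the librados handle before connecting; a rough sketch (the exact option names and values here are an assumption, not necessarily what this commit uses):

    #include <rados/librados.h>

    /* Set client-side timeouts so blocking calls against a dead cluster
     * come back with -ETIMEDOUT instead of hanging libvirtd forever. */
    static int
    example_set_ceph_timeouts(rados_t cluster)
    {
        if (rados_conf_set(cluster, "client_mount_timeout", "30") < 0 ||
            rados_conf_set(cluster, "rados_mon_op_timeout", "30") < 0 ||
            rados_conf_set(cluster, "rados_osd_op_timeout", "30") < 0)
            return -1;
        return 0;
    }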
The internal pools were an idea in one of the first iterations of the
gluster series, which we decided not to use. Somehow the patch still
got pushed. Remove it as the internal flag isn't needed.
This reverts commit 362da8209d.
In a44b7b87bc I introduced a function
that initializes a storage file wrapper object on gluster-based volumes.
The initialization function leaks the private data pointer in case of
failure. This patch fixes it.
Reported by John Ferlan.
In commit e32268184b I accidentally added
a typedef for virStorageFileBackend twice when I moved it between files
across patch iterations. The double declaration breaks the build on older
compilers in RHEL5 and FreeBSD.
Remove the spurious definition.
Add APIs that will allow using the storage driver to assist in
operations on files even for remote filesystems without a native
representation as files on the host.
virGetStorageVol can return NULL on out-of-memory. If it does, cleanly
abort the volume clone operation.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
This reverts commit 67ccf91bf2.
We only generate the volume key after we've built it, but the storage
driver expects it to be filled after createVol finishes.
Squash the volume building back together with creation to fulfill this
expectation.
This new RBD format supports snapshotting and cloning. By having
libvirt create images in format 2, end-users of the created images
can benefit from the new RBD format.
Older versions of libvirt can work with this new RBD format as long
as librbd supports format 2. RBD format 2 is supported by librbd since
version 0.56 (Ceph Bobtail).
Signed-off-by: Wido den Hollander <wido@widodh.nl>
When restarting a sheepdog pool, all volumes are missing.
This patch automatically adds all volumes from the added pool.
It includes Daniel P. Berrange's latest syntax corrections.
Volumes are added in a separate function, 'inspired' by parallels_storage's
parallelsAddDiskVolume.
The "checkPool" is a bit different for pool with "fc_host"
type source adapter, since the vHBA it's based on might be
not created yet (it's created by "startPool", which is
involked after "checkPool" in storageDriverAutostart). So it
should not fail, otherwise the "autostart" of the pool will
fail either.
The problem is easy to reproduce:
* Enable "autostart" for the pool
* Restart libvirtd service
* Check the pool's state
For a pool which relies on remote resources, such as an "iscsi" type
pool, how long it takes to export the corresponding devices
to the host's sysfs really varies: it could depend on the network
connection, and it could also depend on the host's udev procedures. So
it's likely that the volumes cannot be detected during the pool
starting process. Polling sysfs doesn't work, since we don't
know how much time is best for the polling, and even worse, the
volumes could still be undetected or only partly detected even after
the polling. So we end up documenting this fact
in the virsh manual.
And as a small improvement, let's explicitly say in the debug log that
no LUNs were found in that case.
The public virConnectRef and virConnectClose APIs are just thin
wrappers around virObjectRef/virObjectUnref, with added object
validation and an error reset. Within our backend drivers, use
of the object validation is just an inefficiency since we always
pass valid objects. More important to think about is what
happens with the error reset; our uses of virConnectRef happened
to be safe (since we hadn't encountered any earlier errors), but
in several cases the use of virConnectClose could lose a real
error.
Ideally, we should also avoid calling virConnectOpen() from
within backend drivers - but that is a known situation that
needs much more design work.
* src/qemu/qemu_process.c (qemuProcessReconnectHelper)
(qemuProcessReconnect): Avoid nested public API call.
* src/qemu/qemu_driver.c (qemuAutostartDomains)
(qemuStateInitialize, qemuStateStop): Likewise.
* src/qemu/qemu_migration.c (doPeer2PeerMigrate): Likewise.
* src/storage/storage_driver.c (storageDriverAutostart):
Likewise.
* src/uml/uml_driver.c (umlAutostartConfigs): Likewise.
* src/lxc/lxc_process.c (virLXCProcessAutostartAll): Likewise.
(virLXCProcessReboot): Likewise, and avoid leaking conn on error.
Signed-off-by: Eric Blake <eblake@redhat.com>
To allow using the storage driver APIs to do operations on generic domain
disks, we will need to introduce internal storage pools that will give us
a base to support this stuff even on files that weren't originally
defined as a part of the pool.
This patch introduces the 'internal' flag for a storage pool that will
prevent it from being listed along with the user defined storage pools.
Separate the steps to create libvirt's volume metadata from the actual
volume building process. This is already done for regular file based
pools to allow job support for storage APIs.
The commit cad3cf9a95 introduced a crash
due to the wrong order of parameters being passed to the function. When
deleting an element, the function decreased the iterator instead of the
count, and if listing volumes after that (or undefining the pool), NULL
was being dereferenced.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Commit 6cd60b6 was flat out broken - it tried to print into the
wrong variable. My testing was obviously too cursory (did the
name get a slash added?); valgrind would have caught the error.
Thankfully it didn't hit any release.
Reported by Peter Krempa.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Fix bogus code.
Signed-off-by: Eric Blake <eblake@redhat.com>
Move the code around so that the forward declaration isn't needed. Also
fix code style of the opening brace of the function by moving it to a
separate line.
Currently, 'vol-resize --allocate' allocates new space at the
vol->capacity offset. But the vol->capacity is not necessarily the same
as vol->allocation. For instance:
[root@localhost ~]# virsh vol-list --pool tmp-pool --details
Name      Path                     Type   Capacity   Allocation
----------------------------------------------------------------
tmp-vol   /root/tmp-pool/tmp-vol   file   1.00 GiB   1.00 GiB
[root@localhost ~]# virsh vol-resize tmp-vol --pool tmp-pool 2G
[root@localhost ~]# virsh vol-list --pool tmp-pool --details
Name      Path                     Type   Capacity   Allocation
----------------------------------------------------------------
tmp-vol   /root/tmp-pool/tmp-vol   file   2.00 GiB   1.00 GiB
So, if we want to allocate more bytes, so the file is say 3G big, the
real allocated size is 2G actually:
[root@localhost ~]# virsh vol-resize tmp-vol --pool tmp-pool 3G --allocate
[root@localhost ~]# virsh vol-list --pool tmp-pool --details
Name      Path                     Type   Capacity   Allocation
----------------------------------------------------------------
tmp-vol   /root/tmp-pool/tmp-vol   file   3.00 GiB   2.00 GiB
This commit uses the correct vol->allocation instead of incorrect
vol->capacity, so the output of the commands above looks like this:
[root@localhost ~]# virsh vol-resize tmp-vol --pool tmp-pool 3G --allocate
[root@localhost ~]# virsh vol-list --pool tmp-pool --details
Name      Path                     Type   Capacity   Allocation
----------------------------------------------------------------
tmp-vol   /root/tmp-pool/tmp-vol   file   3.00 GiB   3.00 GiB
Moreover, if the '--allocate' flag was used, we must update the
vol->allocation member in the storageVolResize API too, not just
vol->capacity.
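Conceptually, the fix amounts to extending the file from its current allocation rather than from its capacity; a rough sketch, with posix_fallocate() standing in for libvirt's safezero helper:

    #include <fcntl.h>

    /* Grow the backing file so that 'new_capacity' bytes are really
     * allocated, starting from what is allocated now, not from the old
     * capacity value. */
    static int
    example_allocate_to(int fd, unsigned long long allocation,
                        unsigned long long new_capacity)
    {
        if (new_capacity <= allocation)
            return 0;    /* nothing left to allocate */
        return posix_fallocate(fd, (off_t)allocation,
                               (off_t)(new_capacity - allocation));
    }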
Reported-by: Wang Sen <wangsen@linux.vnet.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This variable shadows the stat(2) function, which only became visible in
this scope as of commit 9cac8639. Rename the variable so it doesn't
conflict.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
When doing 'virsh vol-dumpxml' on a gluster pool's volume, the
resulting URI incorrectly omitted a slash between hostname and
path: gluster://192.168.122.206rhsvol1/fedora-19.img
This is fallout from me rebasing earlier versions of my patch
that ended up as commit efee1af; I had originally played with
always requiring the gluster volume to have a leading slash,
but it was easier to use the gluster API if the gluster volume
name was guaranteed to have no slash. While I got the URI of
the pool correct, I forgot to fix the URI of a libvirt volume.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Use correct starting point
since uri construction requires leading slash.
Signed-off-by: Eric Blake <eblake@redhat.com>
Kill the use of atoi() and introduce a syntax check to forbid it and its
friends (atol, atoll, atof, atoq).
Also fix a typo in the variable name holding the cylinder count of a disk
pool (apparently unused).
examples/domsuspend/suspend.c will need a larger-scale refactor as the
whole example file is broken, thus it will be exempted from the syntax
check for now.
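The replacement for atoi() is strtol()-style parsing with full error checking (libvirt has its own virStrToLong_* helpers for this); the underlying idea is roughly:

    #include <errno.h>
    #include <limits.h>
    #include <stdlib.h>

    /* Parse a decimal int, rejecting overflow and trailing garbage;
     * this is exactly the error reporting that atoi() silently lacks. */
    static int
    example_parse_int(const char *str, int *result)
    {
        char *end;
        long val;

        errno = 0;
        val = strtol(str, &end, 10);
        if (errno != 0 || end == str || *end != '\0' ||
            val < INT_MIN || val > INT_MAX)
            return -1;
        *result = (int)val;
        return 0;
    }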
storageRegister() didn't check the return value from
virRegisterStorageDriver() like other callers did, so Coverity
flagged it. Just check the return value and handle it.
When attempting to backport gluster pools to an older version
where there is no VIR_STRDUP, I got a crash from calling
strdup(,NULL). Rather than relying on the current else branch
safely doing nothing when there is no fd, it is easier to just
skip it. While at it, there's no need to explicitly set
perms.label to NULL after a VIR_FREE().
* src/storage/storage_backend.c
(virStorageBackendUpdateVolTargetInfoFD): Minor optimization.
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit 348b4e2 introduced a potential problem (thankfully not
in any release): we are attempting to use virFileReadHeaderFD()
on a file that was opened with O_NONBLOCK. While this
shouldn't be a problem in practice (because O_NONBLOCK
typically doesn't affect regular or block files, and fifos and
sockets cannot be storage volumes), it's better to play it safe
to avoid races from opening an unexpected file type while also
avoiding problems with having to handle EAGAIN while read()ing.
Based on a report by Dan Berrange.
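The shape of the fix is to open with O_NONBLOCK to dodge the race, verify the file type, and then clear the flag again before reading; a hedged sketch, not the exact libvirt code:

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Open without blocking (so a fifo cannot wedge us), check that we got
     * a regular or block file, then drop O_NONBLOCK before any read()s. */
    static int
    example_open_volume(const char *path)
    {
        int fd = open(path, O_RDONLY | O_NONBLOCK | O_NOCTTY);
        struct stat sb;
        int flags;

        if (fd < 0)
            return -1;
        if (fstat(fd, &sb) < 0 ||
            (!S_ISREG(sb.st_mode) && !S_ISBLK(sb.st_mode)))
            goto error;
        if ((flags = fcntl(fd, F_GETFL)) < 0 ||
            fcntl(fd, F_SETFL, flags & ~O_NONBLOCK) < 0)
            goto error;
        return fd;

     error:
        close(fd);
        return -1;
    }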
* src/storage/storage_backend.c
(virStorageBackendVolOpenCheckMode): Fix up fd after avoiding race.
Signed-off-by: Eric Blake <eblake@redhat.com>
You'd think I'd learn to actually COMMIT my working tree
between testing that a last-minute fix compiles and pushing.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Typo fix.
Signed-off-by: Eric Blake <eblake@redhat.com>
Putting together pieces from previous patches, it is now possible
for 'virsh vol-dumpxml --pool gluster volname' to report metadata
about a qcow2 file stored on gluster. The backing file is still
treated as raw; to fix that, more patches are needed to make the
storage backing chain analysis recursive rather than halting at
a network protocol name, but that work will not need any further
calls into libgfapi so much as just reusing this code, and that
should be the only code outside of the storage driver that needs
any help from libgfapi. Any additional use of libgfapi within
libvirt should only be needed for implementing storage pool APIs
such as volume creation or resizing, where backing chain analysis
should be unaffected.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterReadHeader): New helper function.
(virStorageBackendGlusterRefreshVol): Probe non-raw files.
Signed-off-by: Eric Blake <eblake@redhat.com>
With this patch, dangling and looping symlinks are silently
ignored, while links to files and directories are treated the
same as the underlying file or directory. This is the same
behavior as both 'directory' and 'netfs' pools.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Treat symlinks similar to
directory and netfs pools.
Signed-off-by: Eric Blake <eblake@redhat.com>
We already had code for handling allocation different than
capacity for sparse files; we just had to wire it up to be
used when inspecting gluster images.
* src/storage/storage_backend.c
(virStorageBackendUpdateVolTargetInfoFD): Handle no fd.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Handle sparse files.
Signed-off-by: Eric Blake <eblake@redhat.com>
Take advantage of the previous patch's addition of 'netdir' as
a distinct volume type, to expose rather than silently skip
directories embedded in a gluster pool. Also serves as an XML
validation for the previous patch.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshVol): Don't skip directories.
* tests/storagevolxml2xmltest.c (mymain): Add test.
* tests/storagevolxml2xmlin/vol-gluster-dir.xml: New file.
* tests/storagevolxml2xmlout/vol-gluster-dir.xml: Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
In the 'directory' and 'netfs' storage pools, a user can see
both 'file' and 'dir' storage volume types, to know when they
can descend into a subdirectory. But in a network-based storage
pool, such as the upcoming 'gluster' pool, we use 'network'
instead of 'file', and did not have any counterpart for a
directory until this patch. Adding a new volume type
'network-dir' is better than reusing 'dir', because it makes
it clear that the only way to access 'network' volumes within
that container is through the network mounting (leaving 'dir'
for something accessible in the local file system).
* include/libvirt/libvirt.h.in (virStorageVolType): Expand enum.
* docs/formatstorage.html.in: Document it.
* docs/schemas/storagevol.rng (vol): Allow new value.
* src/conf/storage_conf.c (virStorageVol): Use new value.
* src/qemu/qemu_command.c (qemuBuildVolumeString): Fix client.
* src/qemu/qemu_conf.c (qemuTranslateDiskSourcePool): Likewise.
* tools/virsh-volume.c (vshVolumeTypeToString): Likewise.
* src/storage/storage_backend_fs.c
(virStorageBackendFileSystemVolDelete): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Actually put gfapi to use, by allowing the creation of a gluster
pool. Right now, all volumes are treated as raw and directories
are skipped; further patches will allow peering into files to
allow for qcow2 files and backing chains, and reporting proper
volume allocation. This implementation was tested against Fedora
19's glusterfs 3.4.1; it might be made simpler by requiring a
higher minimum, and/or require more hacks to work with a lower
minimum.
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshPool): Initial implementation.
(virStorageBackendGlusterOpen, virStorageBackendGlusterClose)
(virStorageBackendGlusterRefreshVol): New helper functions.
Signed-off-by: Eric Blake <eblake@redhat.com>
We support gluster volumes in domain XML, so we also ought to
support them as a storage pool. Besides, a future patch will
want to take advantage of libgfapi to handle the case of a
gluster device holding qcow2 rather than raw storage, and for
that to work, we need a storage backend that can read gluster
storage volume contents. This sets up the framework.
Note that the new pool is named 'gluster' to match a
<disk type='network'><source protocol='gluster'> image source
already supported in a <domain>; it does NOT match the
<pool type='netfs'><source><target type='glusterfs'>,
since that uses a FUSE mount to a local file name rather than
a network name.
This and subsequent patches have been tested against glusterfs
3.4.1 (available on Fedora 19); there are likely bugs in older
versions that may prevent decent use of gfapi, so this patch
enforces the minimum version tested. A future patch may lower
the minimum. On the other hand, I hit at least two bugs in
3.4.1 that will be fixed in 3.5/3.4.2, where it might be worth
raising the minimum: glfs_readdir is nicer to use than
glfs_readdir_r [1], and glfs_fini should only return failure on
an actual failure [2].
[1] http://lists.gnu.org/archive/html/gluster-devel/2013-10/msg00085.html
[2] http://lists.gnu.org/archive/html/gluster-devel/2013-10/msg00086.html
* configure.ac (WITH_STORAGE_GLUSTER): New conditional.
* m4/virt-gluster.m4: new file.
* libvirt.spec.in (BuildRequires): Support gluster in spec file.
* src/conf/storage_conf.h (VIR_STORAGE_POOL_GLUSTER): New pool
type.
* src/conf/storage_conf.c (poolTypeInfo): Treat similar to
sheepdog and rbd.
(virStoragePoolDefFormat): Don't output target for gluster.
* src/storage/storage_backend_gluster.h: New file.
* src/storage/storage_backend_gluster.c: Likewise.
* po/POTFILES.in: Add new file.
* src/storage/storage_backend.c (backends): Register new type.
* src/Makefile.am (STORAGE_DRIVER_GLUSTER_SOURCES): Build new files.
* src/storage/storage_backend.h (_virStorageBackend): Document
assumption.
Signed-off-by: Eric Blake <eblake@redhat.com>
$ touch /var/lib/libvirt/images/'a<b>c'
$ virsh pool-refresh default
$ virsh vol-dumpxml 'a<b>c' default | head -n2
<volume>
<name>a<b>c</name>
Oops. That's not valid XML. And when we fix the XML
generation, it fails RelaxNG validation.
I'm also tired of seeing <key>(null)</key> in the example
output for volume xml; while we used NULLSTR() to avoid
a NULL deref rather than relying on glibc's printf
extension behavior, it's even better if we avoid the issue
in the first place. But this requires being careful that
we don't invalidate any storage backends that were relying
on key being unassigned during virStorageVolCreateXML[From].
I would have split this into two patches (one for escaping,
one for avoiding <key>(null)</key>), but since they both
end up touching a lot of the same test files, I ended up
merging it into one.
Note that this patch allows pretty much any volume name
that can appear in a directory (excluding . and .. because
those are special), but does nothing to change the current
(unenforced) RelaxNG claim that pool names will consist
only of letters, numbers, _, -, and +. Tightening the C
code to match RelaxNG patterns and/or relaxing the grammar
to match the C code for pool names is a task for another
day (but remember, we DID recently tighten C code for
domain names to exclude a leading '.').
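The formatting side of the fix routes the user-controlled strings through escaping; the effect is standard XML entity escaping, sketched generically here rather than with libvirt's actual buffer API:

    #include <stdio.h>

    /* Print 's' with the five XML special characters escaped, so a volume
     * named a<b>c becomes a&lt;b&gt;c inside <name>...</name>. */
    static void
    example_xml_escape(FILE *out, const char *s)
    {
        for (; *s; s++) {
            switch (*s) {
            case '<':  fputs("&lt;", out);   break;
            case '>':  fputs("&gt;", out);   break;
            case '&':  fputs("&amp;", out);  break;
            case '"':  fputs("&quot;", out); break;
            case '\'': fputs("&apos;", out); break;
            default:   fputc(*s, out);
            }
        }
    }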
* src/conf/storage_conf.c (virStoragePoolSourceFormat)
(virStoragePoolDefFormat, virStorageVolTargetDefFormat)
(virStorageVolDefFormat): Escape user-controlled strings.
(virStorageVolDefParseXML): Parse key, for use in unit tests.
* src/storage/storage_driver.c (storageVolCreateXML)
(storageVolCreateXMLFrom): Ensure parsed key doesn't confuse
volume creation.
* docs/schemas/basictypes.rng (volName): Relax definition.
* tests/storagepoolxml2xmltest.c (mymain): Test it.
* tests/storagevolxml2xmltest.c (mymain): Likewise.
* tests/storagepoolxml2xmlin/pool-dir-naming.xml: New file.
* tests/storagepoolxml2xmlout/pool-dir-naming.xml: Likewise.
* tests/storagevolxml2xmlin/vol-file-naming.xml: Likewise.
* tests/storagevolxml2xmlout/vol-file-naming.xml: Likewise.
* tests/storagevolxml2xmlout/vol-*.xml: Fix fallout.
Signed-off-by: Eric Blake <eblake@redhat.com>
It makes no sense to go forward to get the parent host number of an
HBA, and to treat the HBA as a vHBA by trying to delete it.
Signed-off-by: Osier Yang <jyang@redhat.com>
Most of our code base uses space after comma but not before;
fix the remaining uses before adding a syntax check.
* src/network/bridge_driver.c: Consistently use commas.
* src/node_device/node_device_hal.c: Likewise.
* src/node_device/node_device_udev.c: Likewise.
* src/storage/storage_backend_rbd.c: Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
vol-clone reports an out of memory error with disk type on ppc64.
Currently, wbytes is defined as size_t (8 bytes), but the
argument's value in ioctl(fd, args...) in the kernel is int (4 bytes).
This makes wbytes 2^32 times larger, causing an out of memory error.
This patch changes size_t to int to synchronize with the kernel.
[1] https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/block/ioctl.c?id=5e01dc7b#n363
[2] https://lkml.org/lkml/2013/11/1/620
Signed-off-by: Li Zhang <zhlcindy@linux.vnet.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
This gets rid of another stat() per volume, as well as cutting
bytes read in half, when populating the volumes of a directory
pool during a pool refresh. Not to mention that it provides an
interface that can let gluster pools also probe file types.
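The idea is to read one header-sized buffer per file and probe the format from that buffer instead of handing the fd around again; a simplified sketch, with a single qcow2 magic check standing in for the full probe table:

    #include <string.h>
    #include <unistd.h>

    enum { EXAMPLE_MAX_HEADER = 64 * 1024 };  /* stand-in for VIR_STORAGE_MAX_HEADER */

    /* Read the header once; the same buffer can then feed both format
     * probing and backing-store extraction without another read(). */
    static int
    example_is_qcow2(int fd)
    {
        unsigned char buf[EXAMPLE_MAX_HEADER];
        ssize_t len = pread(fd, buf, sizeof(buf), 0);

        if (len < 4)
            return 0;
        return memcmp(buf, "QFI\xfb", 4) == 0;   /* qcow2 magic bytes */
    }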
* src/util/virstoragefile.h (virStorageFileProbeFormatFromFD):
Delete.
(virStorageFileProbeFormatFromBuf): New prototype.
(VIR_STORAGE_MAX_HEADER): New constant, based on...
* src/util/virstoragefile.c (STORAGE_MAX_HEAD): ...old name.
(vmdk4GetBackingStore, virStorageFileGetMetadataInternal)
(virStorageFileProbeFormat): Adjust clients.
(virStorageFileProbeFormatFromFD): Delete.
(virStorageFileProbeFormatFromBuf): Export.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Adjust client.
* src/libvirt_private.syms (virstoragefile.h): Adjust exports.
Signed-off-by: Eric Blake <eblake@redhat.com>
We are calling fstat() at least twice per storage volume in
a directory storage pool; this is rather wasteful. Refactoring
this is also a step towards making code reusable for gluster,
where gluster can provide struct stat but cannot use fstat().
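The refactor boils down to doing the fstat() once and handing the result to the helpers; a sketch with hypothetical names:

    #include <sys/stat.h>

    struct example_vol_info {
        unsigned long long capacity;
        unsigned long long allocation;
    };

    /* The caller stats the fd once and passes the result along, instead of
     * every helper calling fstat() again on the same fd. */
    static void
    example_update_info(struct example_vol_info *info, const struct stat *sb)
    {
        info->capacity = (unsigned long long)sb->st_size;
        info->allocation = (unsigned long long)sb->st_blocks * 512ULL;
    }

    static int
    example_refresh(int fd, struct example_vol_info *info)
    {
        struct stat sb;

        if (fstat(fd, &sb) < 0)
            return -1;
        example_update_info(info, &sb);   /* no second fstat() needed */
        return 0;
    }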
* src/storage/storage_backend.h
(virStorageBackendVolOpenCheckMode)
(virStorageBackendUpdateVolTargetInfoFD): Update signature.
* src/storage/storage_backend.c
(virStorageBackendVolOpenCheckMode): Pass stat results back.
(virStorageBackendUpdateVolTargetInfoFD): Use existing stats.
(virStorageBackendVolOpen, virStorageBackendUpdateVolTargetInfo):
Update callers.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget):
Likewise.
* src/storage/storage_backend_scsi.c
(virStorageBackendSCSIUpdateVolTargetInfo): Likewise.
* src/storage/storage_backend_mpath.c
(virStorageBackendMpathUpdateVolTargetInfo): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Using size_t counts will let us use VIR_APPEND_ELEMENT and friends.
* src/conf/storage_conf.h (_virStoragePoolObjList)
(_virStorageVolDefList): Track list sizes with size_t.
* src/storage/storage_backend_rbd.c
(virStorageBackendRBDRefreshPool): Fix type fallout.
Signed-off-by: Eric Blake <eblake@redhat.com>
The rbd code had a confusing typedef ending in Ptr that was not
actually a pointer, which made the rest of the code harder to
read. This fixes things to actually pass by pointer rather than
by copy.
* src/storage/storage_backend_rbd.c (virStorageBackendStatePtr):
Fix typedef.
(virStorageBackendRBDOpenRADOSConn)
(virStorageBackendRBDCloseRADOSConn)
(volStorageBackendRBDRefreshVolInfo)
(virStorageBackendRBDRefreshPool, virStorageBackendRBDDeleteVol)
(virStorageBackendRBDCreateVol, virStorageBackendRBDRefreshVol)
(virStorageBackendRBDResizeVol): Fix fallout.
Signed-off-by: Eric Blake <eblake@redhat.com>
Most of the usage of getuid()/getgid() is in cases where we are
considering what privileges we have. As such the code should be
using the effective IDs, not real IDs.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This should resolve:
https://bugzilla.redhat.com/show_bug.cgi?id=924672
For BZ 924672 the problem stems from the fact that thin pool logical
volume devices show up in /sbin/lvs output just like normal logical
volumes do. Libvirt incorrectly assumes they are just normal logical
volumes and that they will have a corresponding /dev/vgname/lvname
device that has been created by udev and tries to use this device.
To illustrate, here is an example of the /dev/vgname/ directory and
the lvs output for a normal lv, thin lv, and thin pool:
LV       VG        Attr       LSize   Pool   Origin  Data%  Move  Log  Copy%  Convert
lv       vgguests  -wi-a----   1.00g
pool     vgguests  twi-a-tz-  11.00g                  0.00
thinlv   vgguests  Vwi-a-tz-   1.00g  pool            0.00
total 0
lrwxrwxrwx. 1 root root 7 Oct 8 19:35 lv -> ../dm-7
lrwxrwxrwx. 1 root root 7 Oct 8 19:37 thinlv -> ../dm-6
This patch modifies virStorageBackendLogicalMakeVol() to ignore thin pool
devices.
Commit id '532fef36' added a call to fallocate() and some error
handling based on whether or not the function existed. This new
call resulted in libvirt-cim/cimtest failures when attempting to
create a volume with "0" (zero) allocation value. The failure is
logged as:
Oct 9 07:51:33 localhost libvirtd[8030]: cannot allocate 0 bytes in
file '/var/lib/libvirt/images/cimtest-vol.img': Invalid argument
This can also be seen with virsh vol-create-as:
error: Failed to create vol test
error: cannot allocate 0 bytes in file '/home/vm-images/test': Invalid
argument
It turns out fallocate() will return EINVAL when the incoming 'len'
(or allocation) value is 0 (or less).
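The fix is therefore to skip the call entirely for a zero-length request; a minimal sketch (the wrapper is hypothetical, not libvirt's actual helper):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>

    /* fallocate() rejects len == 0 with EINVAL, so treat a zero-byte
     * allocation request as a successful no-op. */
    static int
    example_preallocate(int fd, off_t offset, off_t len)
    {
        if (len <= 0)
            return 0;
        if (fallocate(fd, 0, offset, len) < 0)
            return -errno;
        return 0;
    }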
We currently have other error codes in singular form, e.g.
VIR_ERR_NETWORK_EXIST. Clean up the previous patch to match that form.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
I created a storage volume (e.g. test) from a storage pool (e.g. vg10) using
the following command: "virsh vol-create-as --pool vg10 --name test --capacity 300M."
When I re-executed the above command, the output was the following:
"error: Failed to create vol test
error: Storage volume not found: storage vol 'test' already exists"
I think the output "Storage volume not found" is not appropriate, because in fact the storage
vol 'test' has been found at this time. I also think virErrorNumber should include
VIR_ERR_STORAGE_EXIST, which can also be used elsewhere. So I made this patch. The result
is as follows:
"error: Failed to create vol test
error: storage volume 'test' exists already"
On RHEL 5, compilation fails with:
storage/storage_backend.c: In function 'createRawFile':
storage/storage_backend.c:339: warning: implicit declaration of function 'fallocate'
storage/storage_backend.c:339: warning: nested extern declaration of 'fallocate' [-Wnested-externs]
But:
$ grep HAVE_FALLOCATE config.h
/* #undef HAVE_FALLOCATE */
Huh? It turns out that in kernels that old, fallocate() is not
implemented (config.h is correct), but <linux/fs.h> defines
HAVE_FALLOCATE as an empty witness macro for a completely
different purpose. Since storage_backend.c is including
<linux/fs.h> on RHEL 5, we are hosed by the kernel definition.
Newer kernels no longer pollute the namespace, and it's fairly
easy to convert to an expression that works with both the old
kernel witness and the new-style config.h (undefined or 1).
Problem introduced in commit 532fef3.
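In other words, the guard has to test HAVE_FALLOCATE's value rather than its mere existence. One expression that behaves correctly for all three cases (undefined, defined empty by the kernel header, defined to 1 by config.h) is sketched below; the commit's exact spelling may differ:

    /* Fooled on RHEL 5, where <linux/fs.h> defines HAVE_FALLOCATE as an
     * empty witness macro even though config.h left it undefined: */
    #ifdef HAVE_FALLOCATE
    /* ... wrongly compiled in on the old kernel ... */
    #endif

    /* Value test: undefined expands to 0, the kernel's empty macro
     * yields +0, and config.h's 1 yields 1. */
    #if HAVE_FALLOCATE + 0
    /* ... only compiled when configure really found fallocate() ... */
    #endif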
* src/storage/storage_backend.c (createRawFile): Avoid namespace
pollution from kernel, by checking HAVE_FALLOCATE for a value.
Signed-off-by: Eric Blake <eblake@redhat.com>
Fixed the safezero call for allocating the rest of the file after cloning
an existing volume; it used to always use a zero offset, causing it to
only allocate the beginning of the file.
Also modified file creation to try to use fallocate(2) to pre-allocate
disk space before copying any data, to make sure it fails early on if the disk
is full, and to make sure we can skip zero blocks when copying file contents.
If fallocate isn't available we will zero out the rest of the file after
cloning and only use sparse cloning if client requested a lower allocation
than the input volume's capacity.
Signed-off-by: Oskari Saarenmaa <os@ohmu.fi>
The VIR_FREE() macro will cast away any const-ness. This masked a
number of places where we passed a 'const char *' string to
VIR_FREE. Fortunately in all of these cases, the variable was not
in fact const data, but a heap allocated string. Fix all the
variable declarations to reflect this.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
qemu-img is going to switch the default for QCOW2
to QCOW2v3 (compat=1.1).
Extend the probing for qemu-img command line options to check
if -o compat is supported. If the volume definition specifies
the qcow2 format but no compat level and -o compat is supported,
specify -o compat=0.10 to create a QCOW2v2 image.
https://bugzilla.redhat.com/show_bug.cgi?id=997977
Introduced by commit e0139e3044. virStorageVolDefFree freed the
pointers that are still used by the added volume object; this changes
it back to VIR_FREE.
Each of the modules handled reporting error messages from the secret fetching
slightly differently with respect to the error. Provide a similar message
for each error case and provide as much data as possible.
One has to refresh the pool to get the correct pool info after
adding/removing/resizing a volume; this updates the pool metadata
(allocation, available) after those operations are done.
Add a privileged field to storageDriverState.
Use the privileged value in order to generate a connection which could
be passed to the various storage backend drivers.
In particular, the iSCSI driver will need a connection in order to perform
pool authentication using the 'chap' secrets, and the RBD driver utilizes
the connection during pool refresh for pools using 'ceph' secrets.
For now that connection will be to the qemu driver until a mechanism
is devised to get a connection to just the secret driver without qemu.
Update virStorageBackendRBDOpenRADOSConn() to use the internal API to the
secret driver in order to get the secret value instead of the external
virSecretGetValue() path. Without the flag VIR_SECRET_GET_VALUE_INTERNAL_CALL
there is no way to get the value of a private secret.
This also requires ensuring there is a connection, which wasn't true
for the refreshPool() path calls from storageDriverAutostart() prior to
adding support for the connection to a qemu driver. It seems calls to
virSecretLookupByUUIDString() and virSecretLookupByUsage() from the
refreshPool() path would have failed with no way to find the secret - that is,
theoretically speaking, since the 'conn' was NULL the failure would have been
"failed to find the secret".
Although the XML for CHAP authentication with plain "password"
was introduced long ago, the functionality was never implemented. This
patch replaces the login/password mechanism by following the
'ceph' (or RBD) model of using a 'username' with a 'secret' which
has the authentication information.
This patch performs the authentication during startPool() processing
of pools with an authType of VIR_STORAGE_POOL_AUTH_CHAP specified
for iSCSI pools.
There are two types of CHAP configurations supported for iSCSI
authentication:
* Initiator Authentication
Forward, one-way; The initiator is authenticated by the target.
* Target Authentication
Reverse, Bi-directional, mutual, two-way; The target is authenticated
by the initiator; This method also requires Initiator Authentication
This only supports the "Initiator Authentication". (I don't have any
enterprise iSCSI env for testing, only have a iSCSI target setup with
tgtd, which doesn't support "Target Authentication").
"Discovery authentication" is not supported by tgt yet too. So this only
setup the session authentication by executing 3 iscsiadm commands, E.g:
% iscsiadm -m node --target "iqn.2013-05.test:iscsi.foo" --name \
"node.session.auth.authmethod" -v "CHAP" --op update
% iscsiadm -m node --target "iqn.2013-05.test:iscsi.foo" --name \
"node.session.auth.username" -v "Jim" --op update
% iscsiadm -m node --target "iqn.2013-05.test:iscsi.foo" --name \
"node.session.auth.password" -v "Jimsecret" --op update
Not all RBD (Ceph) storage pools have cephx authentication turned on,
so "secret" might not be initialized.
It could also be that the secret couldn't be located.
Only call virSecretFree() if "secret" is initialized earlier.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
When using logical pools, we had to trust the target->path provided.
This parameter, however, can be completely omitted, and we can safely use
'/dev/<source.name>' and populate it into target.path.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=952973
The switch statement in 'virStorageBackendCreateQemuImgOpts' used the
for loop end condition 'VIR_STORAGE_FILE_FEATURE_LAST' as a possible value,
but since that cannot happen Coverity spits out a DEADCODE message. Adding
the Coverity tag just removes the Coverity message.
Don't reuse the return value of virStorageBackendFileSystemIsMounted.
If it's 0, we'd return it even if the mount command failed.
Also, don't report another error if it's -1, since one has already
been reported.
Introduced by 258e06c.
https://bugzilla.redhat.com/show_bug.cgi?id=981251
Convert the type of loop iterators named 'i', 'j', 'k',
'ii', 'jj', 'kk' to be 'size_t' instead of 'int' or
'unsigned int', also sanitizing 'ii', 'jj', 'kk' to use
the normal 'i', 'j', 'k' naming.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>