/*
* storage_backend_gluster.c: storage backend for Gluster handling
*
* Copyright (C) 2013-2014 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library. If not, see
* <http://www.gnu.org/licenses/>.
*
*/
#include <config.h>
#include <glusterfs/api/glfs.h>
#include "storage_backend_gluster.h"
#include "storage_conf.h"
#include "viralloc.h"
#include "virerror.h"
#include "virlog.h"
#include "virstoragefile.h"
#include "virstring.h"
#include "viruri.h"
#define VIR_FROM_THIS VIR_FROM_STORAGE
VIR_LOG_INIT("storage.storage_backend_gluster");
struct _virStorageBackendGlusterState {
glfs_t *vol;
/* Accept the same URIs as qemu's block/gluster.c:
* gluster[+transport]://[server[:port]]/vol/[dir/]image[?socket=...] */
virURI *uri;
char *volname; /* vol from URI, no '/' */
char *dir; /* dir from URI, or "/"; always starts and ends in '/' */
};
typedef struct _virStorageBackendGlusterState virStorageBackendGlusterState;
typedef virStorageBackendGlusterState *virStorageBackendGlusterStatePtr;
static void
virStorageBackendGlusterClose(virStorageBackendGlusterStatePtr state)
{
if (!state)
return;
    /* Yuck - glusterfs-api-3.4.1 appears to always return -1 for
     * glfs_fini, with errno containing random data, so there's no way
     * to tell if it succeeded. 3.4.2 is supposed to fix this. */
if (state->vol && glfs_fini(state->vol) < 0)
VIR_DEBUG("shutdown of gluster volume %s failed with errno %d",
state->volname, errno);
virURIFree(state->uri);
VIR_FREE(state->volname);
VIR_FREE(state->dir);
VIR_FREE(state);
}
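
/* Connect to the gluster volume backing a pool and glfs_chdir() into
 * the pool's source directory.  As a rough sketch (the hostname and
 * volume name below are hypothetical), a pool defined as:
 *
 *   <pool type='gluster'>
 *     <name>mypool</name>
 *     <source>
 *       <host name='gluster.example.com'/>
 *       <dir path='/subdir'/>
 *       <name>testvol</name>
 *     </source>
 *   </pool>
 *
 * is tracked here as volname = "testvol" and dir = "/subdir/", with
 * uri = gluster://gluster.example.com/testvol/subdir/.  Returns the
 * open state on success, NULL on error. */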
static virStorageBackendGlusterStatePtr
virStorageBackendGlusterOpen(virStoragePoolObjPtr pool)
{
virStorageBackendGlusterStatePtr ret = NULL;
const char *name = pool->def->source.name;
const char *dir = pool->def->source.dir;
bool trailing_slash = true;
/* Volume name must not contain '/'; optional path allows use of a
* subdirectory within the volume name. */
if (strchr(name, '/')) {
virReportError(VIR_ERR_XML_ERROR,
_("gluster pool name '%s' must not contain /"),
name);
return NULL;
}
if (dir) {
if (*dir != '/') {
virReportError(VIR_ERR_XML_ERROR,
_("gluster pool path '%s' must start with /"),
dir);
return NULL;
}
        /* strchr(dir, '\0') points at dir's terminating NUL, so [-1]
         * examines the last character of the path. */
        if (strchr(dir, '\0')[-1] != '/')
            trailing_slash = false;
}
if (VIR_ALLOC(ret) < 0)
return NULL;
if (VIR_STRDUP(ret->volname, name) < 0)
goto error;
if (virAsprintf(&ret->dir, "%s%s", dir ? dir : "/",
trailing_slash ? "" : "/") < 0)
goto error;
/* FIXME: Currently hard-coded to tcp transport; XML needs to be
* extended to allow alternate transport */
if (VIR_ALLOC(ret->uri) < 0)
goto error;
if (VIR_STRDUP(ret->uri->scheme, "gluster") < 0)
goto error;
if (VIR_STRDUP(ret->uri->server, pool->def->source.hosts[0].name) < 0)
goto error;
if (virAsprintf(&ret->uri->path, "/%s%s", ret->volname, ret->dir) < 0)
goto error;
ret->uri->port = pool->def->source.hosts[0].port;
/* Actually connect to glfs */
if (!(ret->vol = glfs_new(ret->volname))) {
virReportOOMError();
goto error;
}
if (glfs_set_volfile_server(ret->vol, "tcp",
ret->uri->server, ret->uri->port) < 0 ||
glfs_init(ret->vol) < 0) {
char *uri = virURIFormat(ret->uri);
virReportSystemError(errno, _("failed to connect to %s"),
NULLSTR(uri));
VIR_FREE(uri);
goto error;
}
if (glfs_chdir(ret->vol, ret->dir) < 0) {
virReportSystemError(errno,
_("failed to change to directory '%s' in '%s'"),
ret->dir, ret->volname);
goto error;
}
return ret;
error:
virStorageBackendGlusterClose(ret);
return NULL;
}
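
/* Read up to maxlen bytes from the start of fd into a freshly
 * allocated *buf, retrying on EINTR and tolerating short reads.
 * Returns the number of bytes read (possibly less than maxlen on
 * EOF), or -1 with the error reported and *buf freed. */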
static ssize_t
virStorageBackendGlusterReadHeader(glfs_fd_t *fd,
const char *name,
ssize_t maxlen,
char **buf)
{
char *s;
size_t nread = 0;
if (VIR_ALLOC_N(*buf, maxlen) < 0)
return -1;
s = *buf;
while (maxlen) {
ssize_t r = glfs_read(fd, s, maxlen, 0);
if (r < 0 && errno == EINTR)
continue;
if (r < 0) {
VIR_FREE(*buf);
virReportSystemError(errno, _("unable to read '%s'"), name);
return r;
}
if (r == 0)
return nread;
s += r;
maxlen -= r;
nread += r;
}
return nread;
}
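
/* Fill in the metadata shared by all volumes in the pool: mark the
 * volume as a raw network volume, optionally (re)set its name, and
 * derive vol->target.path (and vol->key, which reuses the path) from
 * the pool URI, in the form gluster://server[:port]/volname/dir/image.
 * Returns 0 on success, -1 on error. */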
static int
virStorageBackendGlusterSetMetadata(virStorageBackendGlusterStatePtr state,
virStorageVolDefPtr vol,
const char *name)
{
int ret = -1;
char *path = NULL;
char *tmp;
VIR_FREE(vol->key);
VIR_FREE(vol->target.path);
vol->type = VIR_STORAGE_VOL_NETWORK;
vol->target.format = VIR_STORAGE_FILE_RAW;
if (name) {
VIR_FREE(vol->name);
if (VIR_STRDUP(vol->name, name) < 0)
goto cleanup;
}
if (virAsprintf(&path, "%s%s%s", state->volname, state->dir,
vol->name) < 0)
goto cleanup;
tmp = state->uri->path;
if (virAsprintf(&state->uri->path, "/%s", path) < 0) {
state->uri->path = tmp;
goto cleanup;
}
if (!(vol->target.path = virURIFormat(state->uri))) {
VIR_FREE(state->uri->path);
state->uri->path = tmp;
goto cleanup;
}
VIR_FREE(state->uri->path);
state->uri->path = tmp;
/* the path is unique enough to serve as a volume key */
if (VIR_STRDUP(vol->key, vol->target.path) < 0)
goto cleanup;
ret = 0;
cleanup:
VIR_FREE(path);
return ret;
}
/* Populate *volptr for the given name and stat information, or leave
* it NULL if the entry should be skipped (such as "."). Return 0 on
* success, -1 on failure. */
static int
virStorageBackendGlusterRefreshVol(virStorageBackendGlusterStatePtr state,
const char *name,
struct stat *st,
virStorageVolDefPtr *volptr)
{
int ret = -1;
virStorageVolDefPtr vol = NULL;
glfs_fd_t *fd = NULL;
virStorageSourcePtr meta = NULL;
char *header = NULL;
ssize_t len = VIR_STORAGE_MAX_HEADER;
int backingFormat;
*volptr = NULL;
/* Silently skip '.' and '..'. */
if (STREQ(name, ".") || STREQ(name, ".."))
return 0;
/* Follow symlinks; silently skip broken links and loops. */
if (S_ISLNK(st->st_mode) && glfs_stat(state->vol, name, st) < 0) {
if (errno == ENOENT || errno == ELOOP) {
VIR_WARN("ignoring dangling symlink '%s'", name);
ret = 0;
} else {
virReportSystemError(errno, _("cannot stat '%s'"), name);
}
return ret;
}
if (VIR_ALLOC(vol) < 0)
goto cleanup;
if (virStorageBackendUpdateVolTargetInfoFD(&vol->target, -1, st) < 0)
goto cleanup;
if (virStorageBackendGlusterSetMetadata(state, vol, name) < 0)
goto cleanup;
if (S_ISDIR(st->st_mode)) {
vol->type = VIR_STORAGE_VOL_NETDIR;
vol->target.format = VIR_STORAGE_FILE_DIR;
*volptr = vol;
vol = NULL;
ret = 0;
goto cleanup;
}
/* No need to worry about O_NONBLOCK - gluster doesn't allow creation
* of fifos, so there's nothing it would protect us from. */
if (!(fd = glfs_open(state->vol, name, O_RDONLY | O_NOCTTY))) {
/* A dangling symlink now implies a TOCTTOU race; report it. */
virReportSystemError(errno, _("cannot open volume '%s'"), name);
goto cleanup;
}
if ((len = virStorageBackendGlusterReadHeader(fd, name, len, &header)) < 0)
goto cleanup;
if (!(meta = virStorageFileGetMetadataFromBuf(name, header, len,
VIR_STORAGE_FILE_AUTO,
&backingFormat)))
goto cleanup;
if (meta->backingStoreRaw) {
if (VIR_ALLOC(vol->target.backingStore) < 0)
goto cleanup;
vol->target.backingStore->path = meta->backingStoreRaw;
if (backingFormat < 0)
vol->target.backingStore->format = VIR_STORAGE_FILE_RAW;
else
vol->target.backingStore->format = backingFormat;
meta->backingStoreRaw = NULL;
}
vol->target.format = meta->format;
if (meta->capacity)
vol->target.capacity = meta->capacity;
if (meta->encryption) {
vol->target.encryption = meta->encryption;
meta->encryption = NULL;
if (vol->target.format == VIR_STORAGE_FILE_QCOW ||
vol->target.format == VIR_STORAGE_FILE_QCOW2)
vol->target.encryption->format = VIR_STORAGE_ENCRYPTION_FORMAT_QCOW;
}
vol->target.features = meta->features;
meta->features = NULL;
vol->target.compat = meta->compat;
meta->compat = NULL;
*volptr = vol;
vol = NULL;
ret = 0;
cleanup:
virStorageSourceFree(meta);
virStorageVolDefFree(vol);
if (fd)
glfs_close(fd);
VIR_FREE(header);
return ret;
}
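
/* Refresh the pool: walk every entry of the pool's directory,
 * appending a volume definition for each one found, then compute the
 * pool's capacity, available, and allocation figures from a statvfs
 * of that directory. */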
static int
virStorageBackendGlusterRefreshPool(virConnectPtr conn ATTRIBUTE_UNUSED,
virStoragePoolObjPtr pool)
{
int ret = -1;
virStorageBackendGlusterStatePtr state = NULL;
struct {
struct dirent ent;
/* See comment below about readdir_r needing padding */
char padding[MAX(1, 256 - (int) (sizeof(struct dirent)
- offsetof(struct dirent, d_name)))];
} de;
struct dirent *ent;
glfs_fd_t *dir = NULL;
struct stat st;
struct statvfs sb;
if (!(state = virStorageBackendGlusterOpen(pool)))
goto cleanup;
/* Why oh why did glfs 3.4 decide to expose only readdir_r rather
* than readdir? POSIX admits that readdir_r is inherently a
* flawed design, because systems are not required to define
* NAME_MAX: http://austingroupbugs.net/view.php?id=696
* http://womble.decadent.org.uk/readdir_r-advisory.html
*
* Fortunately, gluster appears to limit its underlying bricks to
* only use file systems such as XFS that have a NAME_MAX of 255;
* so we are currently guaranteed that if we provide 256 bytes of
* tail padding, then we should have enough space to avoid buffer
* overflow no matter whether the OS used d_name[], d_name[1], or
* d_name[256] in its 'struct dirent'.
* http://lists.gnu.org/archive/html/gluster-devel/2013-10/msg00083.html
*/
if (!(dir = glfs_opendir(state->vol, state->dir))) {
virReportSystemError(errno, _("cannot open path '%s' in '%s'"),
state->dir, state->volname);
goto cleanup;
}
while (!(errno = glfs_readdirplus_r(dir, &st, &de.ent, &ent)) && ent) {
virStorageVolDefPtr vol;
int okay = virStorageBackendGlusterRefreshVol(state,
ent->d_name, &st,
&vol);
if (okay < 0)
goto cleanup;
if (vol && VIR_APPEND_ELEMENT(pool->volumes.objs, pool->volumes.count,
vol) < 0)
goto cleanup;
}
if (errno) {
virReportSystemError(errno, _("failed to read directory '%s' in '%s'"),
state->dir, state->volname);
goto cleanup;
}
if (glfs_statvfs(state->vol, state->dir, &sb) < 0) {
virReportSystemError(errno, _("cannot statvfs path '%s' in '%s'"),
state->dir, state->volname);
goto cleanup;
}
pool->def->capacity = ((unsigned long long)sb.f_frsize *
(unsigned long long)sb.f_blocks);
pool->def->available = ((unsigned long long)sb.f_bfree *
(unsigned long long)sb.f_frsize);
pool->def->allocation = pool->def->capacity - pool->def->available;
ret = 0;
cleanup:
if (dir)
glfs_closedir(dir);
virStorageBackendGlusterClose(state);
if (ret < 0)
virStoragePoolObjClearVols(pool);
return ret;
}
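
/* Delete a volume: glfs_unlink() for network files, glfs_rmdir() for
 * network directories; ENOENT is silently ignored, and any other
 * volume type is reported as unsupported. */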
static int
virStorageBackendGlusterVolDelete(virConnectPtr conn ATTRIBUTE_UNUSED,
virStoragePoolObjPtr pool,
virStorageVolDefPtr vol,
unsigned int flags)
{
virStorageBackendGlusterStatePtr state = NULL;
int ret = -1;
virCheckFlags(0, -1);
switch ((virStorageVolType) vol->type) {
case VIR_STORAGE_VOL_FILE:
case VIR_STORAGE_VOL_DIR:
case VIR_STORAGE_VOL_BLOCK:
case VIR_STORAGE_VOL_PLOOP:
case VIR_STORAGE_VOL_LAST:
virReportError(VIR_ERR_NO_SUPPORT,
_("removing of '%s' volumes is not supported "
"by the gluster backend: %s"),
virStorageVolTypeToString(vol->type),
vol->target.path);
goto cleanup;
break;
case VIR_STORAGE_VOL_NETWORK:
if (!(state = virStorageBackendGlusterOpen(pool)))
goto cleanup;
if (glfs_unlink(state->vol, vol->name) < 0) {
if (errno != ENOENT) {
virReportSystemError(errno,
_("cannot remove gluster volume file '%s'"),
vol->target.path);
goto cleanup;
}
}
break;
case VIR_STORAGE_VOL_NETDIR:
if (!(state = virStorageBackendGlusterOpen(pool)))
goto cleanup;
if (glfs_rmdir(state->vol, vol->target.path) < 0) {
if (errno != ENOENT) {
virReportSystemError(errno,
_("cannot remove gluster volume dir '%s'"),
vol->target.path);
goto cleanup;
}
}
break;
}
ret = 0;
cleanup:
virStorageBackendGlusterClose(state);
return ret;
}
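
/* Discover the storage pools (gluster volumes) exported by a host.
 * srcSpec must name exactly one host; an illustrative sketch (the
 * hostname is hypothetical):
 *
 *   <source>
 *     <host name='gluster.example.com'/>
 *   </source>
 *
 * Returns the formatted XML of the discovered sources, or NULL on
 * error. */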
static char *
virStorageBackendGlusterFindPoolSources(virConnectPtr conn ATTRIBUTE_UNUSED,
const char *srcSpec,
unsigned int flags)
{
virStoragePoolSourceList list = { .type = VIR_STORAGE_POOL_GLUSTER,
.nsources = 0,
.sources = NULL
};
virStoragePoolSourcePtr source = NULL;
char *ret = NULL;
size_t i;
virCheckFlags(0, NULL);
if (!srcSpec) {
virReportError(VIR_ERR_INVALID_ARG, "%s",
_("hostname must be specified for gluster sources"));
return NULL;
}
if (!(source = virStoragePoolDefParseSourceString(srcSpec,
VIR_STORAGE_POOL_GLUSTER)))
return NULL;
if (source->nhost != 1) {
virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
_("Expected exactly 1 host for the storage pool"));
goto cleanup;
}
if (virStorageBackendFindGlusterPoolSources(source->hosts[0].name,
0, /* currently ignored */
&list) < 0)
goto cleanup;
if (!(ret = virStoragePoolSourceListFormat(&list)))
goto cleanup;
cleanup:
for (i = 0; i < list.nsources; i++)
virStoragePoolSourceClear(&list.sources[i]);
VIR_FREE(list.sources);
virStoragePoolSourceFree(source);
return ret;
}
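
/* Pool backend callbacks; this backend is registered in the
 * backends[] table of src/storage/storage_backend.c. */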
virStorageBackend virStorageBackendGluster = {
.type = VIR_STORAGE_POOL_GLUSTER,
.refreshPool = virStorageBackendGlusterRefreshPool,
.findPoolSources = virStorageBackendGlusterFindPoolSources,
.deleteVol = virStorageBackendGlusterVolDelete,
};
typedef struct _virStorageFileBackendGlusterPriv virStorageFileBackendGlusterPriv;
typedef virStorageFileBackendGlusterPriv *virStorageFileBackendGlusterPrivPtr;
struct _virStorageFileBackendGlusterPriv {
glfs_t *vol;
char *canonpath;
};
static void
virStorageFileBackendGlusterDeinit(virStorageSourcePtr src)
{
virStorageFileBackendGlusterPrivPtr priv = src->drv->priv;
VIR_DEBUG("deinitializing gluster storage file %p (gluster://%s:%s/%s%s)",
src, src->hosts->name, src->hosts->port ? src->hosts->port : "0",
src->volume, src->path);
if (priv->vol)
glfs_fini(priv->vol);
VIR_FREE(priv->canonpath);
VIR_FREE(priv);
src->drv->priv = NULL;
}
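
/* Set up a gluster connection for a single storage file: validate
 * that exactly one host and a volume name were given, connect with
 * glfs_set_volfile_server() + glfs_init(), and stash the resulting
 * glfs_t handle in src->drv->priv for the callbacks below.  For the
 * UNIX transport the socket path stands in for the hostname. */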
static int
virStorageFileBackendGlusterInit(virStorageSourcePtr src)
{
virStorageFileBackendGlusterPrivPtr priv = NULL;
virStorageNetHostDefPtr host = &(src->hosts[0]);
const char *hostname;
int port = 0;
if (src->nhosts != 1) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
_("Expected exactly 1 host for the gluster volume"));
return -1;
}
hostname = host->name;
VIR_DEBUG("initializing gluster storage file %p (gluster://%s:%s/%s%s)[%u:%u]",
src, hostname, host->port ? host->port : "0",
NULLSTR(src->volume), src->path,
(unsigned int)src->drv->uid, (unsigned int)src->drv->gid);
if (!src->volume) {
virReportError(VIR_ERR_INTERNAL_ERROR,
_("missing gluster volume name for path '%s'"),
src->path);
return -1;
}
if (VIR_ALLOC(priv) < 0)
return -1;
if (host->port &&
virStrToLong_i(host->port, NULL, 10, &port) < 0) {
virReportError(VIR_ERR_INTERNAL_ERROR,
_("failed to parse port number '%s'"),
host->port);
goto error;
}
if (host->transport == VIR_STORAGE_NET_HOST_TRANS_UNIX)
hostname = host->socket;
if (!(priv->vol = glfs_new(src->volume))) {
virReportOOMError();
goto error;
}
if (glfs_set_volfile_server(priv->vol,
virStorageNetHostTransportTypeToString(host->transport),
hostname, port) < 0) {
virReportSystemError(errno,
_("failed to set gluster volfile server '%s'"),
hostname);
goto error;
}
if (glfs_init(priv->vol) < 0) {
virReportSystemError(errno,
_("failed to initialize gluster connection to "
"server: '%s'"), hostname);
goto error;
}
src->drv->priv = priv;
return 0;
error:
if (priv->vol)
glfs_fini(priv->vol);
VIR_FREE(priv);
return -1;
}
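
/* Create (or truncate) the file backing src on the gluster volume;
 * the new file is owner-readable, plus owner-writable unless the
 * source is marked read-only. */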
static int
virStorageFileBackendGlusterCreate(virStorageSourcePtr src)
{
virStorageFileBackendGlusterPrivPtr priv = src->drv->priv;
glfs_fd_t *fd = NULL;
mode_t mode = S_IRUSR;
if (!src->readonly)
mode |= S_IWUSR;
if (!(fd = glfs_creat(priv->vol, src->path,
O_CREAT | O_TRUNC | O_WRONLY, mode)))
return -1;
ignore_value(glfs_close(fd));
return 0;
}
static int
virStorageFileBackendGlusterUnlink(virStorageSourcePtr src)
{
virStorageFileBackendGlusterPrivPtr priv = src->drv->priv;
return glfs_unlink(priv->vol, src->path);
}
static int
virStorageFileBackendGlusterStat(virStorageSourcePtr src,
struct stat *st)
{
virStorageFileBackendGlusterPrivPtr priv = src->drv->priv;
return glfs_stat(priv->vol, src->path, st);
}
static ssize_t
virStorageFileBackendGlusterReadHeader(virStorageSourcePtr src,
ssize_t max_len,
char **buf)
{
virStorageFileBackendGlusterPrivPtr priv = src->drv->priv;
glfs_fd_t *fd = NULL;
ssize_t ret = -1;
*buf = NULL;
if (!(fd = glfs_open(priv->vol, src->path, O_RDONLY))) {
virReportSystemError(errno, _("Failed to open file '%s'"),
src->path);
return -1;
}
ret = virStorageBackendGlusterReadHeader(fd, src->path, max_len, buf);
if (fd)
glfs_close(fd);
return ret;
}
static int
virStorageFileBackendGlusterAccess(virStorageSourcePtr src,
int mode)
{
virStorageFileBackendGlusterPrivPtr priv = src->drv->priv;
return glfs_access(priv->vol, src->path, mode);
}
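
/* Callback for virStorageFileCanonicalizePath(): return 1 if path is
 * not a symlink, or 0 with *linkpath set to the link target, growing
 * the buffer in 256-byte steps until glfs_readlink() fits; -1 on
 * error. */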
static int
virStorageFileBackendGlusterReadlinkCallback(const char *path,
char **linkpath,
void *data)
{
virStorageFileBackendGlusterPrivPtr priv = data;
char *buf = NULL;
size_t bufsiz = 0;
ssize_t ret;
struct stat st;
*linkpath = NULL;
if (glfs_stat(priv->vol, path, &st) < 0) {
virReportSystemError(errno,
_("failed to stat gluster path '%s'"),
path);
return -1;
}
if (!S_ISLNK(st.st_mode))
return 1;
realloc:
if (VIR_EXPAND_N(buf, bufsiz, 256) < 0)
goto error;
if ((ret = glfs_readlink(priv->vol, path, buf, bufsiz)) < 0) {
virReportSystemError(errno,
_("failed to read link of gluster file '%s'"),
path);
goto error;
}
if (ret == bufsiz)
goto realloc;
buf[ret] = '\0';
*linkpath = buf;
return 0;
error:
VIR_FREE(buf);
return -1;
}
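
/* Return a unique identifier for the file: its canonical
 * gluster://host:port/volume/path URI, with symlinks resolved via
 * the readlink callback above and the result cached in
 * priv->canonpath. */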
static const char *
virStorageFileBackendGlusterGetUniqueIdentifier(virStorageSourcePtr src)
{
virStorageFileBackendGlusterPrivPtr priv = src->drv->priv;
char *filePath = NULL;
if (priv->canonpath)
return priv->canonpath;
if (!(filePath = virStorageFileCanonicalizePath(src->path,
virStorageFileBackendGlusterReadlinkCallback,
priv)))
return NULL;
ignore_value(virAsprintf(&priv->canonpath, "gluster://%s:%s/%s/%s",
src->hosts->name,
src->hosts->port,
src->volume,
filePath));
VIR_FREE(filePath);
return priv->canonpath;
}
static int
virStorageFileBackendGlusterChown(virStorageSourcePtr src,
uid_t uid,
gid_t gid)
{
virStorageFileBackendGlusterPrivPtr priv = src->drv->priv;
return glfs_chown(priv->vol, src->path, uid, gid);
}
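
/* File backend callbacks for individual files on gluster volumes,
 * selected via virStorageFileBackendForType() in
 * src/storage/storage_backend.c. */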
virStorageFileBackend virStorageFileBackendGluster = {
.type = VIR_STORAGE_TYPE_NETWORK,
.protocol = VIR_STORAGE_NET_PROTOCOL_GLUSTER,
.backendInit = virStorageFileBackendGlusterInit,
.backendDeinit = virStorageFileBackendGlusterDeinit,
.storageFileCreate = virStorageFileBackendGlusterCreate,
.storageFileUnlink = virStorageFileBackendGlusterUnlink,
.storageFileStat = virStorageFileBackendGlusterStat,
.storageFileReadHeader = virStorageFileBackendGlusterReadHeader,
.storageFileAccess = virStorageFileBackendGlusterAccess,
.storageFileChown = virStorageFileBackendGlusterChown,
.storageFileGetUniqueIdentifier = virStorageFileBackendGlusterGetUniqueIdentifier,
};