Add APIs that allow driver backends to be registered dynamically so
that the list of available drivers does not need to be known at
compile time. This will allow us to modularize the storage driver at
runtime.
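As a sketch, the registration entry point could look roughly like this
(the name virStorageBackendRegister and the table layout are
illustrative assumptions, not necessarily the exact libvirt code):

#include <stddef.h>

typedef struct _virStorageBackend virStorageBackend; /* driver vtable */

#define VIR_STORAGE_BACKENDS_MAX 20

static const virStorageBackend *virStorageBackends[VIR_STORAGE_BACKENDS_MAX];
static size_t virStorageBackendsCount;

/* Called by each backend (or loadable module) at startup instead of
 * the backend being listed in a compile-time array. */
int
virStorageBackendRegister(const virStorageBackend *backend)
{
    if (virStorageBackendsCount >= VIR_STORAGE_BACKENDS_MAX)
        return -1;
    virStorageBackends[virStorageBackendsCount++] = backend;
    return 0;
}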
Over time, the file became a dumping ground for all kinds of utility
functions. Move them to a separate file so that the remaining file can
serve as a clean interface for the storage backends.
We have a couple of functions that operate over NULL-terminated
lists of strings. However, our naming sucks:
virStringJoin
virStringFreeList
virStringFreeListCount
virStringArrayHasString
virStringGetFirstWithPrefix
We can do better:
virStringListJoin
virStringListFree
virStringListFreeCount
virStringListHasString
virStringListGetFirstWithPrefix
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
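For reference, the renamed helpers keep their existing semantics; the
prototypes below are a sketch of the assumed signatures, not verbatim
virstring.h:

#include <stddef.h>
#include <stdbool.h>

char *virStringListJoin(const char **strings, const char *delim);
void virStringListFree(char **strings);
void virStringListFreeCount(char **strings, size_t count);
bool virStringListHasString(const char **strings, const char *needle);
char *virStringListGetFirstWithPrefix(char **strings, const char *prefix);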
By default, `zfs create -V ...` reserves space for the entire volsize,
plus some extra (which attempts to account for overhead).
If `zfs create -s -V ...` is used instead, zvols are (fully) sparse.
A middle ground (partial allocation) can be achieved with
`zfs create -s -o refreservation=... -V ...`. Both libvirt and ZFS
support this approach, so the ZFS storage backend should support it.
Signed-off-by: Richard Laager <rlaager@wiktel.com>
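In the backend this maps to command construction roughly like the
following (a sketch using libvirt's virCommand helpers; the variable
names and surrounding function are assumptions):

/* Sketch: build "zfs create [-s [-o refreservation=...]] -V <size>
 * <pool>/<vol>". Sizes are assumed to be in bytes, 1 KiB aligned. */
virCommandPtr cmd = virCommandNewArgList("zfs", "create", NULL);

if (vol->target.capacity != vol->target.allocation) {
    virCommandAddArg(cmd, "-s");                  /* sparse zvol */
    if (vol->target.allocation > 0) {             /* partial allocation */
        virCommandAddArg(cmd, "-o");
        virCommandAddArgFormat(cmd, "refreservation=%lluK",
                               vol->target.allocation / 1024);
    }
}
virCommandAddArg(cmd, "-V");
virCommandAddArgFormat(cmd, "%lluK", vol->target.capacity / 1024);
virCommandAddArgFormat(cmd, "%s/%s", def->source.name, vol->name);

if (virCommandRun(cmd, NULL) < 0)
    goto cleanup;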
I tried compiling libvirt with an older gcc and, probably because I
used different configure options, I got some shadowed-declaration
warnings.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
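The warnings were of this general shape (illustrative example, not the
actual code):

#include <strings.h>   /* on older systems this declares index() */

static void
example(void)
{
    int index = 0;     /* older gcc: "declaration of 'index' shadows
                        * a global declaration" */
    (void)index;       /* the fix is simply renaming such locals */
}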
This reverts commit bb5f2dc91f.
The "if (vol->target.format != VIR_STORAGE_FILE_RAW)" check in the
createVol backend. This check is bogus because virStorageVolDefParseXML()
in conf/storage_conf.c sets target.format only if volOptions in
virStoragePoolTypeInfo has formatFromString set, and that's not the
case the zfs backend.
So the check always fails and breaks volume creation.
This generates a false positive in Coverity, but it turns out there's
no need to check ret == -1, since if VIR_APPEND_ELEMENT is successful
the local vol pointer is cleared anyway.
Signed-off-by: John Ferlan <jferlan@redhat.com>
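For context, VIR_APPEND_ELEMENT clears the appended pointer on
success, so the pattern is simply (sketch with assumed variable
names):

if (VIR_APPEND_ELEMENT(pool->volumes.objs, pool->volumes.count, vol) < 0)
    goto cleanup;   /* failure: we still own vol and must free it */
/* success: vol is NULL here, so an unconditional free in the cleanup
 * path is safe and no separate ret == -1 bookkeeping is needed */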
Found by my Coverity checker - the virCheckFlags call could return -1
without calling virCommandFree(destroy_cmd), leaking the command.
Signed-off-by: John Ferlan <jferlan@redhat.com>
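The fixed shape looks roughly like this (the function and command
names are simplified assumptions):

static int
zfsDeletePool(unsigned int flags)
{
    virCommandPtr destroy_cmd = NULL;
    int ret = -1;

    virCheckFlags(0, -1);   /* may "return -1": check before allocating */

    destroy_cmd = virCommandNewArgList("zpool", "destroy", NULL);

    if (virCommandRun(destroy_cmd, NULL) < 0)
        goto cleanup;

    ret = 0;
 cleanup:
    virCommandFree(destroy_cmd);   /* freed on every exit path */
    return ret;
}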
There are slight differences between the various ZFS implementations.
Specifically, ZFS on FreeBSD requires the 'volmode' option to be set
to 'dev' to expose volumes as raw disk devices (which is what we
need) rather than as GEOM providers, for example.
With ZFS on Linux, however, this option is not available, and volumes
are exposed the way we need by default.
To make our implementation more flexible, only pass 'volmode' when
it's supported. Support is checked by parsing the usage information
of the 'zfs get' command.
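A sketch of such a probe (the helper name is hypothetical; virCommand's
error-buffer capture is used to grab the usage text):

/* Hypothetical helper: decide whether this ZFS build understands
 * 'volmode' by scraping the usage text "zfs get" prints on stderr. */
static bool
zfsSupportsVolmode(void)
{
    bool supported = false;
    char *error = NULL;
    int status;
    virCommandPtr cmd = virCommandNewArgList("zfs", "get", NULL);

    virCommandSetErrorBuffer(cmd, &error);

    /* "zfs get" without arguments exits non-zero but prints its
     * usage, listing every property the implementation knows about. */
    if (virCommandRun(cmd, &status) < 0)
        goto cleanup;

    supported = (error != NULL && strstr(error, "volmode") != NULL);

 cleanup:
    VIR_FREE(error);
    virCommandFree(cmd);
    return supported;
}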
In order to be able to use 'checkPool' inside functions which do not
have any connection reference, the 'conn' parameter needs to be
dropped from checkPool's signature, since it's not used by any
storage backend anyway.
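That is, the callback signature changes roughly as follows (sketch):

/* before */
int (*checkPool)(virConnectPtr conn, virStoragePoolObjPtr pool, bool *isActive);
/* after */
int (*checkPool)(virStoragePoolObjPtr pool, bool *isActive);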
- Provide an implementation of the buildPool and deletePool operations
for the ZFS storage backend.
- Add the VIR_STORAGE_POOL_SOURCE_DEVICE flag to the ZFS pool's
poolOptions, as we can now specify devices to build the pool from
(see the example below)
- storagepool.rng: add an optional 'sourceinfodev' to 'sourcezfs' and
an optional 'target' to the 'poolzfs' entity
- Add a couple of tests to storagepoolxml2xmltest
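With these changes, a buildable pool definition could look like this
(the device path is an illustrative example):

<pool type='zfs'>
  <name>myzfspool</name>
  <source>
    <name>actualpoolname</name>
    <device path='/dev/ada1'/>
  </source>
</pool>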
Currently, after calling the commands to create a new volume,
virStorageBackendZFSCreateVol calls virStorageBackendZFSFindVols,
which calls virStorageBackendZFSParseVol.
virStorageBackendZFSParseVol checks if a volume already exists by
trying to look it up with virStorageVolDefFindByName.
For a just-created volume this returns NULL, so the volume is
reported as new and appended to pool->volumes. This causes the volume
to be listed twice, as storageVolCreateXML appends the new volume to
the list as well.
Fix that by passing the new volume definition to
virStorageBackendZFSParseVol so it can determine whether it needs to
add the volume to the list.
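A sketch of the resulting logic inside virStorageBackendZFSParseVol
(the parameter name and surrounding details are assumptions):

/* 'vol' is the just-created definition passed down from CreateVol,
 * or NULL when doing a full pool refresh. */
virStorageVolDefPtr volume = NULL;
bool is_new_vol = false;

if (vol == NULL)
    volume = virStorageVolDefFindByName(pool, name);
else
    volume = vol;

if (volume == NULL) {
    if (VIR_ALLOC(volume) < 0)
        return -1;
    is_new_vol = true;
    /* fill in name, key and target path here */
}

/* only a genuinely new volume gets appended; the definition coming
 * from CreateVol is appended by storageVolCreateXML itself */
if (is_new_vol &&
    VIR_APPEND_ELEMENT(pool->volumes.objs, pool->volumes.count, volume) < 0)
    goto cleanup;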
Implement a ZFS storage backend driver. It is currently supported
only on FreeBSD because of ZFS limitations on Linux.
Features supported:
- pool-start, pool-stop
- pool-info
- vol-list
- vol-create / vol-delete
A pool definition looks like this:
<pool type='zfs'>
  <name>myzfspool</name>
  <source>
    <name>actualpoolname</name>
  </source>
</pool>
The 'actualpoolname' value is the name of the pool on the system,
as shown by the 'zpool list' command. A target makes no sense
here because volume paths are always /dev/zvol/$poolname/$volname.
The user has to create the pool on their own; this driver doesn't
currently support pool creation.
A volume could be used with QEMU by adding an entry like this:
<disk type='volume' device='disk'>
  <driver name='qemu' type='raw'/>
  <source pool='myzfspool' volume='vol5'/>
  <target dev='hdc' bus='ide'/>
</disk>