This page describes the storage management capabilities in libvirt.
The storage management APIs are based around two core concepts: a volume, which is a single storage allocation that can be assigned to a guest or used for creating further pool allocations, and a pool, which provides the means to take a chunk of storage and carve it up into volumes. These two concepts are mapped through to two libvirt objects, virStorageVolPtr and virStoragePoolPtr, each with a collection of APIs for their management.
Although all storage pool backends share the same public APIs and XML format, they have varying levels of capabilities. Some may allow creation of volumes, others may only allow use of pre-existing volumes. Some may have constraints on volume size, or placement.
The top level tag for a storage pool document is 'pool'. It has a single attribute type, whose value is one of dir, fs, netfs, disk, iscsi or logical. This corresponds to the storage backend drivers listed further along in this document.
A single source element is contained within the pool document, and may contain the following child elements, depending on the pool type:

device: Contains an attribute path which is the fully qualified path to the block device node.
dir: Contains an attribute path which is the fully qualified path to the backing directory.
host: Used in combination with the dir or device element. Contains an attribute name which is the hostname or IP address of the server. May optionally contain a port attribute for the protocol specific port number.
format: Contains an attribute type whose value is backend specific. This is typically used to indicate filesystem type, or network filesystem type, or partition table type, or LVM metadata type. All drivers are required to have a default value for this, so it is optional.

A single target element is also contained within the pool document, and may contain the following child elements:

path: The location at which the pool will be mapped into the local filesystem namespace. For device based pools this is the directory in which device nodes exist. /dev/ may seem like the logical choice, however, device nodes there are not guaranteed stable across reboots, since they are allocated on demand. It is preferable to use a stable location such as one of the /dev/disk/by-{path,id,uuid,label} locations.
permissions: Provides the default permissions used when creating volumes in the pool. The mode element contains the octal permission set. The owner element contains the numeric user ID. The group element contains the numeric group ID. The label element contains the MAC (eg SELinux) label string.
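For illustration, a pool document combining several of these source and target elements might look like the following sketch. The hostname, port, paths, numeric IDs and SELinux label are purely illustrative values, not defaults.

<pool type="netfs">
  <name>virtimages</name>
  <source>
    <!-- port attribute is optional; 2049 is merely an example value -->
    <host name="nfs.example.com" port="2049"/>
    <dir path="/exports/virtimages"/>
    <format type="nfs"/>
  </source>
  <target>
    <path>/var/lib/virt/images</path>
    <permissions>
      <mode>0700</mode>
      <owner>0</owner>
      <group>0</group>
      <!-- illustrative SELinux label -->
      <label>system_u:object_r:virt_image_t:s0</label>
    </permissions>
  </target>
</pool>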
If a storage pool exposes information about its underlying placement / allocation scheme, the device element within the source element may contain information about its available extents. Some pools have a constraint that a volume must be allocated entirely within a single free extent (eg disk partition pools). Thus the extent information allows an application to determine the maximum possible size for a new volume.
For storage pools supporting extent information, within each device element there will be zero or more freeExtent elements. Each of these elements contains two attributes, start and end, which provide the boundaries of the extent on the device, measured in bytes.
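For example, a device in a disk based pool with two unallocated regions might be reported along these lines; the byte offsets are purely illustrative:

<source>
  <device path="/dev/sda">
    <!-- start and end are byte offsets on the device; values are illustrative -->
    <freeExtent start="32256" end="10737418240"/>
    <freeExtent start="21474836480" end="32212254720"/>
  </device>
</source>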
A storage volume will be either a file or a device node. Within the volume's target element, the path element gives the location of the volume. For device nodes, /dev/ may seem like the logical choice, however, device nodes there are not guaranteed stable across reboots, since they are allocated on demand. It is preferable to use a stable location such as one of the /dev/disk/by-{path,id,uuid,label} locations.

The format element contains a single attribute type. Consult the pool-specific docs for the list of valid values. The permissions element specifies the ownership of the volume. The mode element contains the octal permission set. The owner element contains the numeric user ID. The group element contains the numeric group ID. The label element contains the MAC (eg SELinux) label string.
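Putting these elements together, a volume document for a file based pool might look like the following sketch. The name, sizes (in bytes), path and label are illustrative values; the name, allocation and capacity elements are the usual volume metadata.

<volume>
  <name>guest1.qcow2</name>
  <!-- allocation of 0 requests a sparse volume; capacity is illustrative -->
  <allocation>0</allocation>
  <capacity>10737418240</capacity>
  <target>
    <path>/var/lib/virt/images/guest1.qcow2</path>
    <format type="qcow2"/>
    <permissions>
      <mode>0600</mode>
      <owner>0</owner>
      <group>0</group>
      <!-- illustrative SELinux label -->
      <label>system_u:object_r:virt_image_t:s0</label>
    </permissions>
  </target>
</volume>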
This section illustrates the capabilities and XML format of each of the different backend storage pool drivers.
A pool with a type of dir provides the means to manage files within a directory. The files can be fully allocated raw files, sparsely allocated raw files, or one of the special disk formats such as qcow, qcow2, vmdk, cow, etc., as supported by the qemu-img program. If the directory does not exist at the time the pool is defined, the build operation can be used to create it.
<pool type="dir"> <name>virtimages</name> <target> <path>/var/lib/virt/images</path> </target> </pool>
The directory pool does not use the pool format type element.
One of the following options:

raw: a plain file
bochs: Bochs disk image format
cloop: compressed loopback disk image format
cow: User Mode Linux disk image format
dmg: Mac disk image format
iso: CDROM disk image format
qcow: QEMU v1 disk image format
qcow2: QEMU v2 disk image format
vmdk: VMWare disk image format
vpc: VirtualPC disk image format
When listing existing volumes all these formats are supported natively. When creating new volumes, only a subset may be available. The raw type is guaranteed to always be available. The qcow2 type can be created if either the qemu-img or qcow-create tool is present. The others are dependent on support from the qemu-img tool.
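As a sketch, a sparsely allocated raw volume in the directory pool above could be requested with a document like this; the name and sizes (in bytes) are illustrative:

<volume>
  <name>sparse.img</name>
  <!-- zero allocation with a larger capacity requests a sparse file -->
  <allocation>0</allocation>
  <capacity>10737418240</capacity>
  <target>
    <format type="raw"/>
  </target>
</volume>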
This is a variant of the directory pool. Instead of creating a directory on an existing mounted filesystem though, it expects a source block device to be named. This block device will be mounted and files managed in the directory of its mount point. It will default to allowing the kernel to automatically discover the filesystem type, though it can be specified manually if required.
<pool type="fs"> <name>virtimages</name> <source> <device path="/dev/VolGroup00/VirtImages"/> </source> <target> <path>/var/lib/virt/images</path> </target> </pool>
The filesystem pool supports the following formats:
auto - automatically determine format
ext2
ext3
ext4
ufs
iso9660
udf
gfs
gfs2
vfat
hfs+
xfs
The valid volume types are the same as for the directory
pool type.
This is a variant of the filesystem pool. Instead of requiring a local block device as the source, it requires the name of a host and path of an exported directory. It will mount this network filesystem and manage files within the directory of its mount point. It will default to using NFS as the protocol.
<pool type="netfs"> <name>virtimages</name> <source> <host name="nfs.example.com"/> <dir path="/var/lib/virt/images"/> </source> <target> <path>/var/lib/virt/images</path> </target> </pool>
The network filesystem pool supports the following formats:
auto - automatically determine format
nfs
The valid volume types are the same as for the directory
pool type.
This provides a pool based on an LVM volume group. For a pre-defined LVM volume group, simply providing the group name is sufficient, while to build a new group requires providing a list of source devices to serve as physical volumes. Volumes will be allocated by carving out chunks of storage from the volume group.
<pool type="logical"> <name>HostVG</name> <source> <device path="/dev/sda1"/> <device path="/dev/sdb1"/> <device path="/dev/sdc1"/> </source> <target> <path>/dev/HostVG</path> </target> </pool>
The logical volume pool does not use the pool format type element.
The logical volume pool does not use the volume format type element.
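As a sketch, a new logical volume in the HostVG pool above could be requested with a minimal volume document such as this; the name and capacity (in bytes) are illustrative:

<volume>
  <!-- the name becomes the logical volume name within the group -->
  <name>guestdata</name>
  <capacity>21474836480</capacity>
</volume>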
This provides a pool based on a physical disk. Volumes are created by adding partitions to the disk. Disk pools have constraints on the size and placement of volumes. The 'free extents' information will detail the regions which are available for creating new volumes. A volume cannot span across two different free extents.
<pool type="disk"> <name>sda</name> <source> <device path='/dev/sda'/> </source> <target> <path>/dev</path> </target> </pool>
The disk volume pool accepts the following pool format types, representing the common partition table types:
dos
dvh
gpt
mac
bsd
pc98
sun
The dos or gpt formats are recommended for best portability - the latter is needed for disks larger than 2TB.
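For example, the pool above could state the partition table type explicitly by adding a format element to the source; this sketch assumes a gpt label on the disk:

<pool type="disk">
  <name>sda</name>
  <source>
    <device path='/dev/sda'/>
    <!-- partition table type for this pool -->
    <format type='gpt'/>
  </source>
  <target>
    <path>/dev</path>
  </target>
</pool>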
The disk volume pool accepts the following volume format types, representing the common partition entry types:
none
linux
fat16
fat32
linux-swap
linux-lvm
linux-raid
extended
This provides a pool based on an iSCSI target. Volumes must be pre-allocated on the iSCSI server, and cannot be created via the libvirt APIs. Since /dev/XXX names may change each time libvirt logs into the iSCSI target, it is recommended to configure the pool to use /dev/disk/by-path or /dev/disk/by-id for the target path. These provide persistent stable naming for LUNs.
<pool type="iscsi"> <name>virtimages</name> <source> <host name="iscsi.example.com"/> <device path="demo-target"/> </source> <target> <path>/dev/disk/by-path</path> </target> </pool>
The iSCSI volume pool does not use the pool format type element.
The iSCSI volume pool does not use the volume format type element.