Sometimes, when a new domain is about to be created, it comes in handy to know the capabilities of the hypervisor so that the correct combination of devices and drivers can be used. For example, when a management application is choosing the passthrough mode for a host device, the available options depend not only on the host but also on the hypervisor in question: if the hypervisor is QEMU, a fairly recent version is needed to support VFIO, while legacy KVM passthrough works fine with older QEMUs.
The main difference between virConnectGetCapabilities and the domain capabilities API is that the former reports host capabilities (e.g. NUMA topology, security models in effect, etc.), while the latter reports the capabilities of the hypervisor itself. In other words, while the Driver Capabilities describe the host, the Domain Capabilities describe the hypervisor-specific features a management application can query and make decisions about, such as the supported combinations of devices and drivers. Knowing which host- and hypervisor-specific options are available or supported allows the management application to choose an appropriate mode for a passthrough host device, as well as which adapter to use.
Some XML elements may be entirely omitted from the domaincapabilities XML, depending on what the libvirt driver has filled in. Applications should only act on what is explicitly reported in the domaincapabilities XML. For example, if <disk supported='yes'/> is present, you can safely assume the driver supports <disk> devices. If <disk supported='no'/> is present, you can safely assume the driver does NOT support <disk> devices. If the <disk> block is omitted entirely, the driver is not indicating one way or the other whether it supports <disk> devices, and applications should not interpret the missing block to mean anything in particular.
A new query interface was added to the virConnect API to retrieve the XML listing of the set of domain capabilities (Since 1.2.7):
virConnectGetDomainCapabilities
The root element of the domain capabilities XML document is named domainCapabilities. It contains at least four direct child elements:
<domainCapabilities>
  <path>/usr/bin/qemu-system-x86_64</path>
  <domain>kvm</domain>
  <machine>pc-i440fx-2.1</machine>
  <arch>x86_64</arch>
  ...
</domainCapabilities>
path
    The full path to the emulator binary.
domain
    The domain type, e.g. kvm or qemu.
machine
    The machine type.
arch
    The CPU architecture of the guest.
Before any device capabilities, there might be info on domain-wide capabilities, e.g. virtual CPUs:
<domainCapabilities>
  ...
  <vcpu max='255'/>
  ...
</domainCapabilities>
vcpu
    The maximum number of supported virtual CPUs, exposed in the max attribute.
Sometimes users might want to tweak some BIOS knobs or use UEFI. For cases like that, the os element exposes what values can be passed to its children.
<domainCapabilities>
  ...
  <os supported='yes'>
    <enum name='firmware'>
      <value>bios</value>
      <value>efi</value>
    </enum>
    <loader supported='yes'>
      <value>/usr/share/OVMF/OVMF_CODE.fd</value>
      <enum name='type'>
        <value>rom</value>
        <value>pflash</value>
      </enum>
      <enum name='readonly'>
        <value>yes</value>
        <value>no</value>
      </enum>
      <enum name='secure'>
        <value>yes</value>
        <value>no</value>
      </enum>
    </loader>
  </os>
  ...
</domainCapabilities>
The firmware enum corresponds to the firmware attribute of the os element in the domain XML. The presence of this enum means libvirt is capable of the so-called firmware auto-selection feature, and the listed firmware values represent the accepted input in the domain XML. Note that the firmware enum reports only those values for which a firmware "descriptor file" exists on the host. A firmware descriptor file is a small JSON document that describes details about a given BIOS or UEFI binary on the host, e.g. the firmware binary path, its architecture, supported machine types, NVRAM template, etc. This ensures that the reported values won't cause a failure on guest boot.
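For example, assuming efi is among the values listed above, a guest can request firmware auto-selection with a fragment as small as this illustrative sketch:

<os firmware='efi'>
  <type>hvm</type>
</os>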
For the loader element, the following can occur:

value
    List of known firmware binary paths that may be used as the content of the <loader/> element.
type
    Whether the boot loader is a typical BIOS image (rom) or a UEFI firmware (pflash). Each value sub-element under the type enum represents a possible value for the type attribute of the <loader/> element in the domain XML. E.g. the presence of pflash under the type enum means that a domain XML can use UEFI firmware via: <loader type="pflash" ...>/path/to/the/firmware/binary/</loader>.
readonly
    Options for the readonly attribute of the <loader/> element in the domain XML.
secure
    Options for the secure attribute of the <loader/> element in the domain XML. Note that the value yes is listed only if libvirt detects a firmware descriptor file that has a path to an OVMF binary that supports Secure Boot, and lists its architecture and supported machine type.
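For example, given the capabilities above, the listed OVMF binary could be selected explicitly with a fragment along these lines (a sketch only; whether readonly or secure may be set depends on the values actually reported):

<os>
  <type>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
</os>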
The cpu
element exposes options usable for configuring
guest CPUs.
<domainCapabilities>
  ...
  <cpu>
    <mode name='host-passthrough' supported='yes'>
      <enum name='hostPassthroughMigratable'>
        <value>on</value>
        <value>off</value>
      </enum>
    </mode>
    <mode name='maximum' supported='yes'>
      <enum name='maximumMigratable'>
        <value>on</value>
        <value>off</value>
      </enum>
    </mode>
    <mode name='host-model' supported='yes'>
      <model fallback='allow'>Broadwell</model>
      <vendor>Intel</vendor>
      <feature policy='disable' name='aes'/>
      <feature policy='require' name='vmx'/>
    </mode>
    <mode name='custom' supported='yes'>
      <model usable='no' deprecated='no'>Broadwell</model>
      <model usable='yes' deprecated='no'>Broadwell-noTSX</model>
      <model usable='no' deprecated='yes'>Haswell</model>
      ...
    </mode>
  </cpu>
  ...
</domainCapabilities>
Each CPU mode understood by libvirt is described with a
mode
element which tells whether the particular mode
is supported and provides (when applicable) more details about it:
host-passthrough
    The hostPassthroughMigratable enum shows possible values of the migratable attribute for the <cpu> element with mode='host-passthrough' in the domain XML.
host-model
    If host-model is supported by the hypervisor, the mode element describes the guest CPU which will be used when starting a domain with a host-model CPU. The hypervisor specifics (such as unsupported CPU models or features, machine type, etc.) may be accounted for in this guest CPU specification, and thus the CPU can differ from the one shown in the host capabilities XML. This is indicated by the fallback attribute of the model sub-element: allow means not all specifics were accounted for and thus the CPU a guest will see may be different; forbid indicates that the CPU a guest will see should match this CPU definition.
custom
    The mode element contains a list of supported CPU models, each described by a dedicated model element. The usable attribute specifies whether the model can be used directly on the host: when usable='no' the corresponding model cannot be used without disabling some features that a CPU of that model is expected to have. A special value unknown indicates that libvirt does not have enough information to provide the usability data. The deprecated attribute reflects the hypervisor's policy on usage of this model (since 7.1.0).
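For instance, assuming the modes above are reported as supported, either of the following illustrative fragments could be used in the domain XML (Broadwell-noTSX is one of the models reported as usable='yes' in the example output):

<!-- pass the host CPU through, keeping the guest migratable -->
<cpu mode='host-passthrough' migratable='on'/>

<!-- or pick a specific model reported under the custom mode -->
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>Broadwell-noTSX</model>
</cpu>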
The iothread element indicates whether or not I/O threads are supported.
<domainCapabilities>
  ...
  <iothread supported='yes'/>
  ...
</domainCapabilities>
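When reported as supported, I/O threads can be allocated in the domain XML and assigned to individual disks, e.g. with a minimal sketch like:

<iothreads>2</iothreads>
...
<disk type='file' device='disk'>
  <!-- bind this disk to the first of the two I/O threads -->
  <driver name='qemu' type='qcow2' iothread='1'/>
  ...
</disk>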
Another set of XML elements describes the supported devices and their capabilities. All devices occur as children of the main devices element.
<domainCapabilities>
  ...
  <devices>
    <disk supported='yes'>
      <enum name='diskDevice'>
        <value>disk</value>
        <value>cdrom</value>
        <value>floppy</value>
        <value>lun</value>
      </enum>
      ...
    </disk>
    <hostdev supported='no'/>
  </devices>
</domainCapabilities>
Reported capabilities are expressed as an enumerated list of available options for each element or attribute. For example, the <disk/> element has an attribute device which can support the values disk, cdrom, floppy, or lun.
Disk capabilities are exposed under the disk
element. For
instance:
<domainCapabilities>
  ...
  <devices>
    <disk supported='yes'>
      <enum name='diskDevice'>
        <value>disk</value>
        <value>cdrom</value>
        <value>floppy</value>
        <value>lun</value>
      </enum>
      <enum name='bus'>
        <value>ide</value>
        <value>fdc</value>
        <value>scsi</value>
        <value>virtio</value>
        <value>xen</value>
        <value>usb</value>
        <value>sata</value>
        <value>sd</value>
      </enum>
    </disk>
    ...
  </devices>
</domainCapabilities>
diskDevice
    Options for the device attribute of the <disk/> element.
bus
    Options for the bus attribute of the <target/> element for a <disk/>.
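For example, picking a diskDevice value and a bus value from the lists above yields a disk definition like this sketch (the source path is purely illustrative):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <!-- the source path below is just an illustrative placeholder -->
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>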
Graphics device capabilities are exposed under the graphics element. For instance:
<domainCapabilities>
  ...
  <devices>
    <graphics supported='yes'>
      <enum name='type'>
        <value>sdl</value>
        <value>vnc</value>
        <value>spice</value>
      </enum>
    </graphics>
    ...
  </devices>
</domainCapabilities>
type
    Options for the type attribute of the <graphics/> element.
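For example, if vnc is among the listed types, the guest can be given a VNC display with a fragment like:

<graphics type='vnc' autoport='yes'/>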
Video device capabilities are exposed under the video element. For instance:
<domainCapabilities>
  ...
  <devices>
    <video supported='yes'>
      <enum name='modelType'>
        <value>vga</value>
        <value>cirrus</value>
        <value>vmvga</value>
        <value>qxl</value>
        <value>virtio</value>
      </enum>
    </video>
    ...
  </devices>
</domainCapabilities>
modelType
    Options for the type attribute of the <video><model> element.
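For example, a model reported under modelType (here qxl) can be requested with a fragment like:

<video>
  <model type='qxl'/>
</video>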
Some host devices can be passed through to a guest (e.g. USB, PCI and SCSI), but only if the following is enabled:
<domainCapabilities>
  ...
  <devices>
    <hostdev supported='yes'>
      <enum name='mode'>
        <value>subsystem</value>
        <value>capabilities</value>
      </enum>
      <enum name='startupPolicy'>
        <value>default</value>
        <value>mandatory</value>
        <value>requisite</value>
        <value>optional</value>
      </enum>
      <enum name='subsysType'>
        <value>usb</value>
        <value>pci</value>
        <value>scsi</value>
      </enum>
      <enum name='capsType'>
        <value>storage</value>
        <value>misc</value>
        <value>net</value>
      </enum>
      <enum name='pciBackend'>
        <value>default</value>
        <value>kvm</value>
        <value>vfio</value>
        <value>xen</value>
      </enum>
    </hostdev>
  </devices>
</domainCapabilities>
mode
    Options for the mode attribute of the <hostdev/> element.
startupPolicy
    Options for the startupPolicy attribute of the <hostdev/> element.
subsysType
    Options for the type attribute of the <hostdev/> element in case of mode="subsystem".
capsType
    Options for the type attribute of the <hostdev/> element in case of mode="capabilities".
pciBackend
    Options for the name attribute of the <driver/> element.
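For example, with vfio listed under pciBackend, a PCI host device can be assigned along these lines (the PCI address is purely illustrative):

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <!-- illustrative PCI address; use the real device address on the host -->
    <address domain='0x0000' bus='0x06' slot='0x02' function='0x0'/>
  </source>
</hostdev>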
RNG device capabilities are exposed under the rng element. For instance:
<domainCapabilities>
  ...
  <devices>
    <rng supported='yes'>
      <enum name='model'>
        <value>virtio</value>
        <value>virtio-transitional</value>
        <value>virtio-non-transitional</value>
      </enum>
      <enum name='backendModel'>
        <value>random</value>
        <value>egd</value>
        <value>builtin</value>
      </enum>
    </rng>
    ...
  </devices>
</domainCapabilities>
model
    Options for the model attribute of the <rng> element.
backendModel
    Options for the model attribute of the <rng><backend> element.
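For instance, combining a model value with a backendModel value gives a fragment like this sketch:

<rng model='virtio'>
  <!-- /dev/urandom is a common host source, shown for illustration -->
  <backend model='random'>/dev/urandom</backend>
</rng>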
One more set of XML elements describes the supported features and their capabilities. All features occur as children of the main features element.
<domainCapabilities>
  ...
  <features>
    <gic supported='yes'>
      <enum name='version'>
        <value>2</value>
        <value>3</value>
      </enum>
    </gic>
    <vmcoreinfo supported='yes'/>
    <genid supported='yes'/>
    <backingStoreInput supported='yes'/>
    <backup supported='yes'/>
    <sev>
      <cbitpos>47</cbitpos>
      <reduced-phys-bits>1</reduced-phys-bits>
    </sev>
  </features>
</domainCapabilities>
Reported capabilities are expressed as an enumerated list of possible values for each of the elements or attributes. For example, the gic element has an attribute version which can support the values 2 or 3.
For information about the purpose of each feature, see the relevant section in the domain XML documentation.
GIC capabilities are exposed under the gic
element.
version
    Options for the version attribute of the gic element.
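For example, a version listed above is requested in the domain XML with a fragment like:

<features>
  <gic version='3'/>
</features>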
Reports whether the vmcoreinfo feature can be enabled.
Reports whether the genid feature can be used by the domain.
Reports whether the hypervisor will obey the <backingStore> elements configured for a <disk> when booting the guest, hotplugging the disk to a running guest, or similar. (Since 5.10)
Reports whether the hypervisor supports the backup, checkpoint, and related features (virDomainBackupBegin, virDomainCheckpointCreateXML, etc.). The presence of the backup element, even if supported='no', implies that the VIR_DOMAIN_UNDEFINE_CHECKPOINTS_METADATA flag for virDomainUndefine is supported.
AMD Secure Encrypted Virtualization (SEV) capabilities are exposed under
the sev
element.
SEV is an extension to the AMD-V architecture which supports running virtual machines (VMs) under the control of a hypervisor. When supported, the guest owner can create a VM whose memory contents will be transparently encrypted with a key unique to that VM.
For more details on the SEV feature, please follow resources in the AMD developer's document store. In order to use SEV with libvirt, have a look at SEV in the domain XML documentation.
cbitpos
    The position of the C-bit, i.e. the physical address bit that is used to mark a memory page as encrypted.
reducedPhysBits
    The number of physical address bits that are lost when SEV is enabled.
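When SEV is used, these values are typically carried over into the guest's <launchSecurity> element; a rough, illustrative sketch (the policy value is just an example):

<launchSecurity type='sev'>
  <!-- example policy value; pick one matching your security requirements -->
  <policy>0x0003</policy>
  <cbitpos>47</cbitpos>
  <reducedPhysBits>1</reducedPhysBits>
</launchSecurity>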