The <teaming> element in <interface> allows pairing two interfaces
together as a simple "failover bond" network device in a guest. One of
the devices is the "transient" interface - it will be preferred for
all network traffic when it is present, but may be removed when
necessary, in particular during migration, when traffic will instead
go through the other interface of the pair - the "persistent"
interface. As it happens, in the QEMU implementation of this teaming
pair (called "virtio failover" in QEMU) the transient interface is
always a host network device assigned to the guest using VFIO (aka
"hostdev"); the persistent interface is always an emulated virtio NIC.
When support was initially added for <teaming>, it was written to
require that the transient/hostdev device be defined using <interface
type='hostdev'>; this was done because the virtio failover
implementation in QEMU and the virtio guest driver demands that the
two interfaces in the pair have matching MAC addresses, and the only
way libvirt can guarantee the MAC address of a hostdev network device
is to use <interface type='hostdev'>, whose main purpose is to
configure the device's MAC address before handing the device to
QEMU. (Note that <interface type='hostdev'> in turn requires that the
network device be an SRIOV VF (Virtual Function), as that is the only
type of network device whose MAC address we can set in a way that will
survive the device's driver init in the guest).
It has recently come up that some users are unable to use <teaming>
because they are running in a container environment where libvirt
doesn't have the necessary privileges or resources to set the VF's MAC
address (because setting the VF MAC is done via the same device's PF
(Physical Function), and the PF is not exposed to libvirt's container).
At the same time, these users *are* able to set the VF's MAC address
themselves in advance of starting up libvirt in the container. So they
could theoretically use the <teaming> feature if libvirt just skipped
the "setting the MAC address" part.
Fortunately, that is *exactly* the difference between <interface
type='hostdev'> (which must be a "hostdev VF") and <hostdev> (a "plain
hostdev" - it could be *any* PCI device; libvirt doesn't know what type
of PCI device it is, and doesn't care).
But what is still needed is for libvirt to provide a small bit of
information on the QEMU commandline argument for the hostdev, telling
QEMU that this device will be part of a team ("failover pair"), and
the id of the other device in the pair.
To make both of those goals simultaneously possible, this patch adds
support for the <teaming> element to plain <hostdev> - libvirt doesn't
try to set any MAC addresses, and QEMU gets the extra commandline
argument it needs.
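For illustration, the pair might now be defined along these lines
(a sketch: the MAC, PCI address, and alias values are made up, the
<teaming> syntax mirrors the existing <interface> support, and the
user is responsible for pre-setting the VF's MAC to match):
  <interface type='network'>
    <mac address='52:54:00:11:22:33'/>
    <source network='default'/>
    <model type='virtio'/>
    <alias name='ua-backup0'/>
    <teaming type='persistent'/>
  </interface>
  <hostdev mode='subsystem' type='pci' managed='no'>
    <source>
      <address domain='0x0000' bus='0x02' slot='0x10' function='0x0'/>
    </source>
    <teaming type='transient' persistent='ua-backup0'/>
  </hostdev>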
(Actually, this patch adds only the parsing/formatting of the
<teaming> element in <hostdev>. The next patch will actually wire that
into the qemu driver.)
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Starting a VM with swtpm device fails with qemu-system-aarch64.
E.g. with TPM device config
  <tpm model='tpm-tis'>
    <backend type='emulator' version='2.0'/>
  </tpm>
QEMU reports the following error
error: internal error: process exited while connecting to monitor:
2021-02-07T05:15:35.378927Z qemu-system-aarch64: -device
tpm-tis,tpmdev=tpm-tpm0,id=tpm0: 'tpm-tis' is not a valid device model name
Indeed the TPM device name is 'tpm-tis-device' [1][2] for aarch64,
versus the shorter 'tpm-tis' for x86. The devices are the same from
a functional POV, i.e. they both emulate a TPM device conforming to
the TIS specification. Account for the unfortunate name difference
when building the TPM device option in qemuBuildTPMDevStr(). Also
include a test case for 'tpm-tis-device'.
[1] https://qemu.readthedocs.io/en/latest/specs/tpm.html
[2] c294ac327c
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
In commit 88957116c9 I've adapted
libvirt to QEMU's deprecation of -mem-path and -mem-prealloc and
switched to memory-backend-* even for system memory. My claim was
that that's what QEMU does under the hood anyway. And indeed it
was: see QEMU commit 900c0ba373aada4c13d47d95330aa72ec4067ba5 and
look at function create_default_memdev().
However, commit d96c4d5f193e0e45beec80a6277728b32875bddb was then
merged into QEMU. While it was fixing a bug, it also changed
create_default_memdev() to turn off the use of the canonical path
(by setting the "x-use-canonical-path-for-ramblock-id" attribute
to false). This
wasn't documented until QEMU commit
8db0b20415c129cf5e577a593a4a0372d90b7cc9. The path affects
migration - the same path has to be used on the source and on the
destination. Therefore, if there is an old guest started with
'-m X' it has a "pc.ram" block which doesn't use the canonical
path, and thus when migrating to a newer QEMU which uses
memory-backend-* we have to turn off the canonical path
explicitly. Otherwise, QEMU would expect the "/objects/pc.ram"
path, which doesn't match the source.
Ideally, we would set it only for older machine types (4.0 and
older), because newer machine types already turn the canonical
path off themselves. However, we treat machine types as opaque
strings and therefore don't want to parse or inspect their
versions. On the other hand, since newer machine types already
behave the way this commit does, once the old machine types are
deprecated and removed we can drop our hack and forget it ever
happened.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1912201
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
The "max" model can be treated the same way as "host" model in general.
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The test doesn't depend on a specific machine type, but it
currently uses one which is becoming deprecated and would thus break
the _LATEST version of the test once we update the qemu data.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
All of these options are actually supported by vhostuser disks, so
we should allow them to be used.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Implements QEMU support for vhost-user-blk together with live
hotplug/unplug.
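For reference, a vhost-user-blk disk is configured along these
lines (a sketch; the socket path is illustrative):
  <disk type='vhostuser' device='disk'>
    <driver name='qemu' type='raw'/>
    <source type='unix' path='/var/run/vhost-user-blk.sock'/>
    <target dev='vda' bus='virtio'/>
  </disk>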
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Pass the clock=rt parameter to QEMU to ensure that the virtual
machine's clock is not synchronized with the host time.
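A minimal sketch of the corresponding domain XML, assuming the
option is exposed via the rtc timer's track attribute:
  <clock offset='utc'>
    <timer name='rtc' track='realtime'/>
  </clock>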
Signed-off-by: gongwei <gongwei@smartx.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Wire up the QEMU command line for this option.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Add the virtio-related options iommu, ats and packed as <driver>
element attributes on vsock devices. Ex:
  <vsock model='virtio'>
    <cid auto='no' address='3'/>
    <driver iommu='on'/>
  </vsock>
Signed-off-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
As explained in QEMU commit 4c257911dcc7c4189768e9651755c849ce9db4e8
intel-pt features should never be included in the CPU models as it was
not supported by KVM back then and even once it started to be supported,
users have to enable it by passing pt_mode=1 parameter to kvm_intel
module. The Icelake-* CPU models with intel-pt included were added to
QEMU 3.1.0 and removed right in the following 4.0.0 release (and even in
3.1.1 maintenance release).
In libvirt 6.10.0 I introduced 'removed' attribute for features included
in our CPU model definitions which we can use to drop intel-pt from
Icelake-* CPU models. Back then I explained we can safely do so only for
features which could never be enabled, which is not the case for intel-pt.
Theoretically, it could be possible to create an environment in which
QEMU would enable intel-pt without asking for it explicitly: it would
need to use a new enough kernel (not available at the time of QEMU
3.1.0) and pt_mode KVM parameter in combination with QEMU 3.1.0 running
a domain with q35 machine type and all that on a CPU which didn't really
exist at that time.
Migrating such a domain to a host with a newer SW stack including libvirt
with this patch applied would result in incompatible guest ABI (the
virtual CPU would lose intel-pt). However, QEMU changed its CPU models
unconditionally and thus migration would not work even without this
patch. That said, it is safe to follow QEMU and remove the feature from
Icelake-* CPU models in our cpu_map.
https://bugzilla.redhat.com/show_bug.cgi?id=1853972
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
Now we have everything prepared for generating the command line.
The device alias prefix was chosen to be 'virtiopmem'.
Since the virtio-pmem-pci device goes onto the PCI bus, generating
the device alias had to change slightly: qemuAssignDeviceMemoryAlias()
might have used the DIMM slot number to generate the alias, which
obviously won't work for a PCI device, so the "old" way (which uses
qemuDomainDeviceAliasIndex()) must be used instead.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1735375
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
virtio-pmem is a virtio variant of NVDIMM and, just like NVDIMM,
virtio-pmem also allows accessing host pages while bypassing the
guest page cache. The difference is that if a regular file is
used to back a guest's NVDIMM (model='nvdimm') the persistence of
guest writes might not be guaranteed, while with virtio-pmem it
is.
To express this new model at domain XML level, I've chosen the
following:
  <memory model='virtio-pmem' access='shared'>
    <source>
      <path>/tmp/virtio_pmem</path>
    </source>
    <target>
      <size unit='KiB'>524288</size>
    </target>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
  </memory>
Another difference between NVDIMM and virtio-pmem is that while
the former supports NUMA node locality the latter doesn't. And
also, the latter goes onto PCI bus and not into a DIMM module.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Commit 154df5840d added support for <metadata_cache> as a property of
a <disk>. Since the same parser is used to parse the XML used with
virDomainBlockCopy, the copy job starts with the appropriate cache
configured, but the <mirror> doesn't show this configuration, nor is
it preserved if libvirtd is restarted during the mirror.
Add parsing, formatting and tests for <metadata_cache> for a <mirror>.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
qemu's qcow2 driver allows control of its metadata cache via the
'cache-size' property. Wire it up to the recently introduced
elements.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In certain specific cases it might be beneficial to be able to control
the metadata caching of storage image format drivers of a hypervisor.
Introduce XML machinery to set the maximum size of the metadata cache
which will be used by qemu's qcow2 driver.
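For illustration, a qcow2 disk's <driver> element would then look
something like this (a sketch; the size value is arbitrary):
  <driver name='qemu' type='qcow2'>
    <metadata_cache>
      <max_size unit='bytes'>1310720</max_size>
    </metadata_cache>
  </driver>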
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Currently, the swtpm TPM state file is removed when a transient domain
is powered off or undefined. When we store TPM state on shared storage
such as NFS and use a transient domain, the TPM state should be kept
as is. Add a per-TPM emulator option `persistent_state` for keeping
TPM state. This option only works for the emulator backend type and
looks as follows:
  <tpm model='tpm-tis'>
    <backend type='emulator' persistent_state='yes'/>
  </tpm>
Signed-off-by: Eiichi Tsukata <eiichi.tsukata@nutanix.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Currently, we configure QEMU to prealloc memory almost by
default. Well, by default for NVDIMMs, hugepages and if user
asked us to (via memoryBacking <allocation mode="immediate"/>).
However, when a guest's NVDIMM is backed by a real-life NVDIMM this
approach is not the best. In this case users should put <pmem/>
into the <memory/> device's <source/>, like this:
  <memory model='nvdimm' access='shared'>
    <source>
      <path>/dev/pmem0</path>
      <pmem/>
    </source>
  </memory>
Instructing QEMU to do prealloc in this case means that each
page of the NVDIMM is "touched" (the first byte is read and
written back - see QEMU commit v2.9.0-rc1~26^2), which needlessly
wears the device out.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1894053
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Let us introduce the xml and reply files for QEMU 5.2.0 on s390x.
Signed-off-by: Shalini Chellathurai Saroja <shalini@linux.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
In v6.8.0-27-g88957116c9 and friends I've switched the way the
default RAM is specified for QEMU (from plain -m to
memory-backend-*). This means that even if a guest doesn't have
any NUMA nodes configured we can use memory-backend-* attributes
to translate user config requests. For instance, we can allow
memory to be shared (<access mode='shared'/> under
<memoryBacking/>). But what my original commits were missing is
allowing such a configuration in our validator.
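That is, a config like this should now pass validation even for
guests without configured NUMA nodes:
  <memoryBacking>
    <access mode='shared'/>
  </memoryBacking>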
Fixes: 88957116c9
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1839034#c12
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
qemu-5.2 is out! Let's update the capabilities for the final version.
Note that the 'enable-fips' feature vanishing in this update is
expected, as the removal was tied to a version check (see commit
7b1ed1cd73).
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
virDomainControllerDefParseXML() does a lot of checks on
virDomainPCIControllerOpts parameters that can be moved to
virDomainControllerDefValidate(), sharing the logic with other use
cases that do not rely on XML parsing.
The 'pseries-default-phb-numa-node' parse error was changed to reflect
the error that is thrown by qemuValidateDomainDeviceDefController()
via deviceValidateCallback, which is executed before
virDomainControllerDefValidate().
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This check isn't exclusive to XML parsing. Let's move it to
virDomainDefVideoValidate() in domain_validate.c.
We don't have a failure test for this scenario, so a new test called
'video-multiple-primaries' was added to test this failure case.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Test slices on top of nvme-backed disks.
Note that the changes in seemingly irrelevant parts of the output are
due to renaming the nodenames.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
qemuDomainAlignMemorySizes() has an operation order problem. We are
calculating 'initialmem' without aligning the memory modules first.
Since we're aligning the dimms afterwards this can create inconsistencies
in the end result. x86 has alignment of 1-2MiB and it's not severely
impacted by it, but pSeries works with 256MiB alignment and the difference
is noticeable.
This is the case of the existing 'memory-hotplug-ppc64-nonuma' test.
The test consists of a 2GiB (aligned value) guest with 2 ~520MiB dimms,
both unaligned. 'initialmem' is calculated by taking total_mem and
subtracting the dimms' size (via virDomainDefGetMemoryInitial()), which
will give us 2GiB - 520MiB - 520MiB, ending up with a little more than
1GiB of 'initialmem'. Note that this value is now unaligned, and
will be aligned up via VIR_ROUND_UP(), so we end up with an 'initialmem'
of 1GiB + 256MiB. Given that the dimms are aligned later on, the end
result for QEMU is that the guest will have a 'mem' size of 1310720k,
plus the two 512MiB dimms, exceeding the desired 2GiB memory and
currentMemory specified in the XML by 256MiB.
Existing guests can't be fixed without breaking ABI, but we have
code already in place to align pSeries NVDIMM modules for new guests.
Let's extend it to align all pSeries mem modules.
A new test, 'memory-hotplug-ppc64-nonuma-abi-update', a copy of the
existing 'memory-hotplug-ppc64-nonuma', was added to demonstrate the
result for new pSeries guests. For the same unaligned XML mentioned
above, after applying this patch:
- starting QEMU mem size without PARSE_ABI_UPDATE:
-m size=1310720k,slots=16,maxmem=4194304k \ (no changes)
- starting QEMU mem size with PARSE_ABI_UPDATE:
-m size=1048576k,slots=16,maxmem=4194304k \ (size fixed)
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
A previous patch removed the pSeries NVDIMM align that wasn't
being done properly. This patch reintroduces it in the right
fashion, making it reliant on VIR_DOMAIN_DEF_PARSE_ABI_UPDATE.
This makes it comply with the intended design defined by
commit c7d7ba85a6.
Since the PARSE_ABI_UPDATE is more restrictive than checking for
!migrate && !snapshot, like is being currently done with
qemuDomainAlignMemorySizes(), this means that we'll align the
pSeries NVDIMMs in two places - in post parse time for new
guests, and in qemuDomainAlignMemorySizes() for all guests
that aren't migrating or in a snapshot.
Another difference is that the logic is now in the QEMU driver
instead of domain_conf.c. This was necessary because all
considerations made about the PARSE_ABI_UPDATE flag were done
under QEMU. Given that no other driver supports ppc64 there is no
impact in this change.
A new test was added to exercise what we're doing. It consists
of a copy of the existing 'memory-hotplug-nvdimm-ppc64' xml2xml
test, called with the PARSE_ABI_UPDATE flag. As intended, we're
not changing QEMU command line or any XML without the flag,
while the pseries NVDIMM memory is being aligned when the
flag is used.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
We can leave out things like USB controller, memballoon device,
kernel and initrd since they're not the focus of the tests.
Propagating some information from the output files back to the
input files makes it easier to compare them, as it reduces the
resulting diff, and in the case of the qemuxml2xml test for
memory-hotplug-ppc64-nonuma it allows us to convert the output
file into a symlink, since in the specific case the XML doesn't
change at all.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The ppc64 tests
memory-hotplug-ppc64-nonuma
memory-hotplug-nvdimm-ppc64
are not passed the same information for qemuxml2argv and
qemuxml2xml tests; the former, in particular, doesn't show up
at all in qemuxml2xml. Address this inconsistency.
Note that one of the new output files had been introduced with
5540acb9a2 despite not being actually used as of that commit.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The feature is never enabled by default on KVM and QEMU dropped it from
the models long ago.
https://bugzilla.redhat.com/show_bug.cgi?id=1798004
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
The ESP SCSI controllers (NCR53C90, DC390, AM53C974) have the same
requirement as the LSI Logic controller for each disk to be set via
the scsi-id=NNN property, not the lun=NNN property.
Switching the code to use an enum will force authors to pay attention
to this difference when adding future SCSI controllers.
Reviewed-by: Laine Stump <laine@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Update the KVM feature tests for QEMU's kvm-poll-control performance
hint.
Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The aim is to eliminate virDomainCapsDeviceDefValidate(). And in
order to do so, the domain video model has to be validated in
qemuValidateDomainDeviceDefVideo().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The unsignedInt XML schema type allows for values up to 2^32 - 1,
i.e., using a TSC frequency of 4294967296 Hz or greater would fail
schema validation.
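For example, with the relaxed schema a configuration like this
(frequency value arbitrary, in Hz) validates:
  <clock offset='utc'>
    <timer name='tsc' frequency='4500000000'/>
  </clock>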
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Add logic to validate and then pass through 'fmode' and 'dmode' to the
QEMU call.
Signed-off-by: Brian Turek <brian.turek@gmail.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Expose QEMU's 9pfs 'fmode' and 'dmode' options via attributes on the
'filesystem' node in the domain XML. These options control the creation
mode of files and directories, respectively, when using
accessmode=mapped.
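For illustration, a mapped filesystem using the new attributes might
look like this (a sketch; paths and mode values are arbitrary):
  <filesystem type='mount' accessmode='mapped' fmode='644' dmode='755'>
    <source dir='/export/host'/>
    <target dir='shared'/>
  </filesystem>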
Signed-off-by: Brian Turek <brian.turek@gmail.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Simulate that the device is a cdrom when the path equals /dev/cdrom
to provide testing for the 'host_cdrom' backend.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Rename 'FLAG_FIPS' to 'FLAG_FIPS_HOST' to signify that we are simulating
a host supporting FIPS mode, and use the flag to assert the 'enableFips'
argument of 'qemuProcessCreatePretendCmdBuild' rather than passing it
via QEMU_CAPS_ENABLE_FIPS.
This prepares the testsuite for testing of -enable-fips deprecation in
qemu-5.2.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Enable <interface type='vdpa'> for qemu domains. This provides basic
support; hotplug and migration are not yet supported.
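A minimal example (the vhost-vDPA character device path is
illustrative):
  <interface type='vdpa'>
    <source dev='/dev/vhost-vdpa-0'/>
  </interface>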
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>