There are currently some limitations in the emulated GICv3
that make it unsuitable as a default. Use GICv2 instead.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1450433
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Currently we consider all UNIX paths with a specific prefix as generated
by libvirt, but that's a wrong assumption. Let's make the detection
better by actually checking whether the whole path matches one of the
paths that we generate, or have generated in the past.
The UNIX path hasn't been stored in the config XML since libvirt-1.3.1.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1446980
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Add kernel_irqchip=split/on to the QEMU command line
and a capability that looks for it in query-command-line-options
output. For the 'split' option, use a version check
since it cannot be reasonably probed.
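For illustration, the resulting command line is expected to carry
something along these lines (machine type shown only as an example):

  -machine q35,accel=kvm,kernel_irqchip=split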
https://bugzilla.redhat.com/show_bug.cgi?id=1427005
Add a new <ioapic> element with a driver attribute.
Possible values are qemu and kvm. With 'qemu', the I/O
APIC can be put in userspace even for KVM domains.
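For example, the element is expected to live among the domain features,
roughly like this:

  <features>
    <ioapic driver='qemu'/>
  </features>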
https://bugzilla.redhat.com/show_bug.cgi?id=1427005
This is a USB3 controller and it's a better choice than piix3-uhci.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Acked-by: Andrea Bolognani <abologna@redhat.com>
This patch maps the /domain/cpu/cache element onto -cpu parameters:
- <cache mode='passthrough'/> is translated to host-cache-info=on
- <cache level='3' mode='emulate'/> is transformed into l3-cache=on
- <cache mode='disable'/> is turned into host-cache-info=off,l3-cache=off
Any other <cache> element is forbidden.
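For instance, a CPU definition such as the following (model name purely
illustrative)

  <cpu mode='custom' match='exact'>
    <model>Skylake-Client</model>
    <cache level='3' mode='emulate'/>
  </cpu>

would result in something like -cpu Skylake-Client,l3-cache=on.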
The tricky part is detecting whether QEMU supports the CPU properties.
The 'host-cache-info' property was introduced in v2.4.0-1389-ge265e3e480;
earlier QEMU releases enabled host-cache-info by default and had no way
to disable it. If the property is present, it defaults to 'off' for any
QEMU until at least 2.9.0.
The 'l3-cache' property was introduced later by v2.7.0-200-g14c985cffa.
Earlier versions worked as if l3-cache=off was passed. For any QEMU
until at least 2.9.0 l3-cache is 'off' by default.
QEMU 2.9.0 was the first release which supports probing both properties
by running device-list-properties with typename=host-x86_64-cpu. Older
QEMU releases did not support the device-list-properties command for
CPU devices. Thus we can't really rely on probing them and we just use
the query-cpu-model-expansion QMP command as a witness instead.
Because the cache property probing is only reliable for QEMU >= 2.9.0,
by which point both properties have already been supported for quite a
few releases, we let QEMU report an error if a specific cache mode is
explicitly requested but not supported. The other mode (or both, if a
user requested the CPU cache to be disabled) is explicitly turned off
for QEMU >= 2.9.0 to avoid any surprises in case the QEMU defaults
change. Any older QEMU already turns them off, so not doing so
explicitly does no harm.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We are currently parsing only rx/frames/max because that's the only
value that makes sense for us. The tun device just added support for
this one, and the others are only supported by hardware devices, which
we don't need to worry about: the only way we'd pass those to the
domain is using <hostdev/> or <interface type='hostdev'/>, and in
those cases the guest can modify the settings itself.
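For illustration, the setting is expected to look roughly like this
inside an <interface/> definition (value illustrative):

  <coalesce>
    <rx>
      <frames max='64'/>
    </rx>
  </coalesce>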
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Our test data used a lot of different qemu binary paths and some
of them were based on downstream systems.
Note that there is one file where I had to add "accel=kvm" because
the qemuargv2xml code parses "/usr/bin/kvm" as virt type="kvm".
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
The virt type for QEMU can be modified by the -machine attribute
"accel", so there is no need to have different QEMU binary paths.
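In other words, the same binary can serve both cases, e.g. (paths and
machine type illustrative):

  /usr/bin/qemu-system-x86_64 -machine pc,accel=tcg ...
  /usr/bin/qemu-system-x86_64 -machine pc,accel=kvm ...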
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Like for all other devices, add the 'id' option for mdevs as well. The
patch also adjusts the test accordingly.
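The generated command line fragment might then look roughly like this
(UUID and alias purely illustrative):

  -device vfio-pci,id=hostdev0,sysfsdev=/sys/bus/mdev/devices/4b20d080-1b54-4048-85b3-a6a62d165c01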
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1438431
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Depending on the architecture, requirements for ACPI and UEFI can
be different; more specifically, while on x86 UEFI requires ACPI,
on aarch64 it's the other way around.
Enforce these requirements when validating the domain, and make
the error message more accurate by mentioning that they're not
necessarily applicable to all architectures.
Several aarch64 test cases had to be tweaked because they would
have failed the validation step otherwise.
The capabilities used in test cases should match those used
during normal operation for the tests to make any sense.
This results in the generated command line for a few test
cases (most notably non-x86 test cases that were wrongly
assuming they could use -no-acpi) changing.
This reverts commit c2e60ad0e5.
Turns out this check is excessively strict: there are ways
other than <memtune><hard_limit> to raise the memory locking
limit for QEMU processes, one prominent example being
tweaking /etc/security/limits.conf.
Partially-resolves: https://bugzilla.redhat.com/1431793
QEMU allows for TSC frequency to be explicitly set to enable migration
with invtsc (migration fails if the destination QEMU cannot set the
exact same frequency used when starting the domain on the source host).
Libvirt already supports setting the TSC frequency in the XML using
  <clock>
    <timer name='tsc' frequency='1234567890'/>
  </clock>
which will be transformed into the QEMU command line option
  -cpu Model,tsc-frequency=1234567890
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We want pcie-root-ports to be used when available in QEMU,
but at the same time we need to ensure that hosts running
older QEMU releases keep working and that the user can
override the default at any time.
Add a comment for the original pcie-root-port test cases
to make it clear how these new test cases are different.
For NVDIMM devices it is optionally possible to specify the size
of internal storage for namespaces. Namespaces are a feature that
allows users to partition the NVDIMM for different uses.
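A sketch of how such a configuration might end up looking, assuming the
namespace storage size is expressed via a <label/> sub-element of
<target/> (element placement and sizes illustrative):

  <memory model='nvdimm'>
    <source>
      <path>/tmp/nvdimm</path>
    </source>
    <target>
      <size unit='KiB'>523264</size>
      <label>
        <size unit='KiB'>128</size>
      </label>
    </target>
  </memory>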
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Now that NVDIMM has found its way into libvirt, users might want
to fine tune some settings for each module separately. One such
setting is 'share=on|off' for the memory-backend-file object.
This setting - just like its name suggests - enables sharing the
nvdimm module with other applications. Under the hood it controls
whether qemu mmap()s the file as MAP_PRIVATE or MAP_SHARED.
We already have such a config knob in the domain XML, but it's just an
attribute of the numa <cell/> element. That does not allow fine enough
tuning on a per-memory-device basis, so we need to have the attribute
for each device too.
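The per-device knob might then look along these lines (a sketch, with
the rest of the <memory/> element omitted):

  <memory model='nvdimm' access='shared'>
    ...
  </memory>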
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So, the majority of the code is ready as-is, with one slight change:
differentiate between dimm and nvdimm in places like device alias
generation, command line generation and so on.
Speaking of the command line, we also need to append 'nvdimm=on'
to the '-machine' argument so that the nvdimm feature is
advertised in the ACPI tables properly.
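A rough sketch of the relevant part of the generated command line
(object names and sizes illustrative):

  -machine pc,nvdimm=on \
  -object memory-backend-file,id=memnvdimm0,mem-path=/tmp/nvdimm,size=511M \
  -device nvdimm,id=nvdimm0,memdev=memnvdimm0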
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
NVDIMM is a new type of memory introduced in QEMU 2.6. The idea
is that we have a Non-Volatile memory module that keeps the data
persistent across domain reboots.
At the domain XML level, we already have some representation of
'dimm' modules. Long story short, NVDIMM will utilize the
existing <memory/> element that lives under <devices/> by adding
a new value 'nvdimm' to the existing @model attribute and by
introducing a new <path/> element for <source/> while reusing
other fields. The resulting XML would appear as:
  <memory model='nvdimm'>
    <source>
      <path>/tmp/nvdimm</path>
    </source>
    <target>
      <size unit='KiB'>523264</size>
      <node>0</node>
    </target>
    <address type='dimm' slot='0'/>
  </memory>
So far, this is just an XML parser/formatter extension. The QEMU
driver implementation comes in the next commit.
For more info on NVDIMM visit the following web page:
http://pmem.io/
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
While reviewing a patch from Andrea that modified this test case, I
realized that although it was "properly failing" (it's a negative
test), it was failing for the wrong reason (the MULTIFUNCTION cap
wasn't set in the test case, so it was saying that multifunction=on
wasn't supported by the QEMU binary; instead it should have been
complaining that it had run out of PCI slots of the appropriate type
and couldn't automatically add any more).
This improper failure had started when I added the patch to
automatically aggregate pcie-root-ports onto multiple functions of
each pcie-root slot, but I hadn't noticed it because the test still
failed.
This patch corrects the test case to 1) set the MULTIFUNCTION flag in
the caps, and 2) attempt to add 241 pcie-root-ports to a domain. Since
there are 30 slots available on a pcie-root (slot 0 is reserved, and
slot 31 is used by the integrated SATA controller), each slot provides
8 functions, and a pcie-root-port can only be placed on a function of
a slot on pcie-root, the maximum number of pcie-root-ports in any
domain is 30 x 8 = 240.
virQEMUCapsHasPCIMultiBus() performs a version check on
the QEMU binary to figure out whether multiple buses are
supported, so to get the correct aliases assigned when
dealing with pSeries guests we need to spoof the version
accordingly in the test suite.
Up until a while ago, libvirt would automatically add a legacy
PCI controllers combo (dmi-to-pci-bridge + pci-bridge) to any
PCIe machine type (x86_64/q35 and aarch64/virt).
As a result, a number of input and output files in the test
suite ended up containing the legacy PCI controllers, even
though they are not needed or in any way relevant to the
feature being tested.
Get rid of most of the occurrences. Most of the time, this
just means removing the controllers from the input file and
regenerating the output files; in a few instances, some
minor tweaking is performed on the input file, most notably
removing the memory balloon: as memory balloon support was not
within the scope of the test being changed, there is no loss of
test coverage from doing so.
Several occurrences of the legacy PCI controllers remain in
the test suite, both because removing their usage would have
required even more tweaking, and because we still want to
have coverage of this perfectly valid combination.
Add a new 'rendernode' attribute to the spice <gl> element and pass
it to QEMU if QEMU supports it (queued for 2.9).
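The XML might then look along these lines (device path illustrative):

  <graphics type='spice'>
    <gl enable='yes' rendernode='/dev/dri/renderD128'/>
  </graphics>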
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Due to a logic error, the autofilling of the USB port when a bus is
specified:
  <address type='usb' bus='0'/>
does not work for non-hub devices on domain startup.
Fix the logic in qemuDomainAssignUSBPortsIterator to also
assign ports for USB addresses that do not yet have one.
https://bugzilla.redhat.com/show_bug.cgi?id=1374128
This patch adds support for file memory backing on NUMA topology.
The access mode specified in memoryBacking can be overridden by
specifying the memAccess attribute in a numa cell.
This part introduces the new XML elements for file-based memory
backing support and their parsing. (It allows vhost-user to be used
without hugepages.)
New XML elements:
  <memoryBacking>
    <source type="file|anonymous"/>
    <access mode="shared|private"/>
    <allocation mode="immediate|ondemand"/>
  </memoryBacking>
In order for memory locking to work, the hard limit on memory
locking (and usage) has to be set appropriately by the user.
The documentation mentions the requirement already: with this
patch, it's going to be enforced by runtime checks as well, by
forbidding a non-compliant guest from being defined, edited or
started.
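For reference, the hard limit can be raised in the domain XML along
these lines (value purely illustrative):

  <memtune>
    <hard_limit unit='KiB'>2097152</hard_limit>
  </memtune>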
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1316774
Commit 815d98a started auto-adding one hub if there are more USB devices
than available USB ports.
This was a strange choice, since there might be even more devices.
Before USB address allocation was implemented in libvirt, QEMU
automatically added a new USB hub if the old one was full.
Adjust the logic to try adding as many hubs as will be needed
to plug in all the specified devices.
https://bugzilla.redhat.com/show_bug.cgi?id=1410188
So far we allow setting the MTU for libvirt networks. However, not
all domain interfaces have to be plugged into a libvirt network, and
even if they are, they might want to have a different MTU (e.g. for
testing purposes).
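A per-interface setting might then look roughly like this (value
illustrative, rest of the <interface/> element omitted):

  <interface type='network'>
    ...
    <mtu size='1500'/>
  </interface>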
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Set the VIR_PCI_CONNECT_AGGREGATE_SLOT flag for pcie-root-ports so
that they will be assigned to all the functions on a slot.
Some qemu test case outputs had to be adjusted due to the
pcie-root-ports now being put on multiple functions.
If there are multiple devices assigned to the different functions of a
single PCI slot, they will not work properly if the device at function
0 doesn't have its "multi" attribute turned on, so it makes sense for
libvirt to turn it on during PCI address assignment. Setting multi
then ensures that the new setting is stored in the config (so it will
be used next time the domain is started), preventing any potential
problems in the case that a future change in the configuration
eliminates the devices on all non-0 functions (multi will still be set
for function 0 even though it is the only function in use on the slot,
which has no useful purpose, but also doesn't cause any problems).
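In the persistent config, the explicit setting would then show up on
the address of the function-0 device, roughly like this (values
illustrative):

  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>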
(NB: If we were to instead just decide on the setting for
multifunction at runtime, a later removal of the non-0 functions of a
slot would result in a silent change in the guest ABI for the
remaining device on function 0 (although it may seem like an
inconsequential guest ABI change, it *is* a guest ABI change to turn
off the multi bit).)