There are currently some limitations in the emulated GICv3
that make it unsuitable as a default. Use GICv2 instead.
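For reference, the GIC version can still be requested explicitly via
the existing <gic> feature element; a minimal sketch of the domain XML:

  <features>
    <gic version='2'/>
  </features>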
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1450433
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Currently we consider all UNIX paths with a specific prefix as generated
by libvirt, but that's a wrong assumption. Let's make the detection
better by actually checking whether the whole path matches one of the
paths that we generate now or have generated in the past.
The UNIX path hasn't been stored in the config XML since libvirt 1.3.1.
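As an illustration, a libvirt-generated channel source typically looks
like this (the exact path below is illustrative only):

  <channel type='unix'>
    <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/guest/org.qemu.guest_agent.0'/>
    <target type='virtio' name='org.qemu.guest_agent.0'/>
  </channel>

Only such exact matches should be treated as libvirt-generated, rather
than anything merely sharing the prefix.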
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1446980
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Add kernel_irqchip=split/on to the QEMU command line
and a capability that looks for it in query-command-line-options
output. For the 'split' option, use a version check
since it cannot be reasonably probed.
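A minimal sketch of the resulting command line fragment (machine type
and accelerator chosen arbitrarily):

  -machine pc,accel=kvm,kernel_irqchip=split

With kernel_irqchip=on the whole irqchip is emulated in the kernel,
while 'split' keeps the I/O APIC in userspace.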
https://bugzilla.redhat.com/show_bug.cgi?id=1427005
Add a new <ioapic> element with a driver attribute. Possible
values are 'qemu' and 'kvm'. With 'qemu', the I/O APIC can be
put in userspace even for KVM domains.
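A minimal sketch of the new element (placed under <features> in the
domain XML):

  <features>
    <ioapic driver='qemu'/>
  </features>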
https://bugzilla.redhat.com/show_bug.cgi?id=1427005
This is a USB3 controller and it's a better choice than piix3-uhci.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Acked-by: Andrea Bolognani <abologna@redhat.com>
This patch maps the /domain/cpu/cache element onto -cpu parameters
(an example follows the list):
- <cache mode='passthrough'/> is translated to host-cache-info=on
- <cache level='3' mode='emulate'/> is transformed into l3-cache=on
- <cache mode='disable'/> is turned into host-cache-info=off,l3-cache=off
Any other <cache> element is forbidden.
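A minimal sketch of the mapping (CPU model and mode chosen arbitrarily):

  <cpu mode='custom' match='exact'>
    <model>Haswell</model>
    <cache level='3' mode='emulate'/>
  </cpu>

results in

  -cpu Haswell,l3-cache=on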
The tricky part is detecting whether QEMU supports the CPU properties.
The 'host-cache-info' property was introduced in v2.4.0-1389-ge265e3e480;
earlier QEMU releases enabled host-cache-info by default and had no way
to disable it. If the property is present, it defaults to 'off' for any
QEMU until at least 2.9.0.
The 'l3-cache' property was introduced later, in v2.7.0-200-g14c985cffa.
Earlier versions behaved as if l3-cache=off was passed. For any QEMU
until at least 2.9.0, l3-cache is 'off' by default.
QEMU 2.9.0 was the first release that supports probing both properties
by running device-list-properties with typename=host-x86_64-cpu; older
QEMU releases did not support the device-list-properties command for CPU
devices. Thus we can't really rely on probing the properties themselves,
and we use the query-cpu-model-expansion QMP command as a witness instead.
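A minimal sketch of the probing exchange over QMP (reply abbreviated
and illustrative):

  -> { "execute": "device-list-properties",
       "arguments": { "typename": "host-x86_64-cpu" } }
  <- { "return": [ { "name": "l3-cache", "type": "bool" },
                   { "name": "host-cache-info", "type": "bool" }, ... ] }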
Because cache property probing is only reliable for QEMU >= 2.9.0,
by which point both properties have already been supported for quite
a few releases, we let QEMU report an error if a specific cache mode
is explicitly requested but not supported. The other mode (or both,
if the user requested the CPU cache to be disabled) is explicitly
turned off for QEMU >= 2.9.0 to avoid any surprises in case the QEMU
defaults change. Any older QEMU already turns them off, so not doing
so explicitly does no harm.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We are currently parsing only rx/frames/max because that's the only
value that makes sense for us. The tun device just added support for
this one, and the others are only supported by hardware devices, which
we don't need to worry about: the only way we'd pass those to the
domain is using <hostdev/> or <interface type='hostdev'/>, and in
those cases the guest can modify the settings itself.
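A minimal sketch of the parsed XML (value illustrative; other interface
sub-elements omitted):

  <interface type='network'>
    <coalesce>
      <rx>
        <frames max='64'/>
      </rx>
    </coalesce>
  </interface>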
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Our test data used a lot of different qemu binary paths and some
of them were based on downstream systems.
Note that there is one file where I had to add "accel=kvm" because
the qemuargv2xml code parses "/usr/bin/kvm" as virt type="kvm".
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
The virt type for QEMU can be modified via the -machine attribute
"accel", so there is no need to have different QEMU binary paths.
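A minimal sketch (binary path and machine type arbitrary):

  /usr/bin/qemu-system-x86_64 -machine pc,accel=kvm

selects KVM, while accel=tcg selects plain emulation with the same
binary.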
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Like all other devices, add the 'id' option for mdevs as well. The
patch also adjusts the tests accordingly.
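A minimal sketch of the resulting device option (UUID and alias
illustrative):

  -device vfio-pci,id=hostdev0,sysfsdev=/sys/bus/mdev/devices/8d312951-ecb5-45bd-9d21-9dbb2d7f93cb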
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1438431
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Depending on the architecture, requirements for ACPI and UEFI can
be different; more specifically, while on x86 UEFI requires ACPI,
on aarch64 it's the other way around.
Enforce these requirements when validating the domain, and make
the error message more accurate by mentioning that they're not
necessarily applicable to all architectures.
Several aarch64 test cases had to be tweaked because they would
have failed the validation step otherwise.
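A minimal sketch of an aarch64 guest that satisfies the new validation,
since on that architecture ACPI requires UEFI (loader path illustrative):

  <os>
    <loader readonly='yes' type='pflash'>/usr/share/AAVMF/AAVMF_CODE.fd</loader>
  </os>
  <features>
    <acpi/>
  </features>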
The capabilities used in test cases should match those used
during normal operation for the tests to make any sense.
This changes the generated command line for a few test cases, most
notably non-x86 test cases that were wrongly assuming they could
use -no-acpi.
This reverts commit c2e60ad0e5.
Turns out this check is excessively strict: there are ways
other than <memtune><hard_limit> to raise the memory locking
limit for QEMU processes, one prominent example being
tweaking /etc/security/limits.conf.
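For example, a line along these lines in /etc/security/limits.conf
(user name and value illustrative) raises the limit for QEMU processes
without touching the domain XML:

  qemu  hard  memlock  unlimited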
Partially-resolves: https://bugzilla.redhat.com/1431793
QEMU allows for TSC frequency to be explicitly set to enable migration
with invtsc (migration fails if the destination QEMU cannot set the
exact same frequency used when starting the domain on the source host).
Libvirt already supports setting the TSC frequency in the XML using
  <clock>
    <timer name='tsc' frequency='1234567890'/>
  </clock>

which will be transformed into

  -cpu Model,tsc-frequency=1234567890

on the QEMU command line.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We want pcie-root-ports to be used when available in QEMU,
but at the same time we need to ensure that hosts running
older QEMU releases keep working and that the user can
override the default at any time.
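A minimal sketch of an explicit override in the domain XML (index
chosen arbitrarily):

  <controller type='pci' index='1' model='pcie-root-port'/>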
Add a comment for the original pcie-root-port test cases
to make it clear how these new test cases are different.
For NVDIMM devices it is optionally possible to specify the size of
the internal storage area used for namespaces. Namespaces are a feature
that allows users to partition the NVDIMM for different uses.
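A minimal sketch of the corresponding XML (assuming a <label> element
under <target>; size illustrative):

  <memory model='nvdimm'>
    <target>
      <label>
        <size unit='KiB'>128</size>
      </label>
    </target>
  </memory>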
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Now that NVDIMM has found its way into libvirt, users might want
to fine tune some settings for each module separately. One such
setting is 'share=on|off' for the memory-backend-file object.
This setting - just as its name suggests - enables sharing the
NVDIMM module with other applications. Under the hood it controls
whether QEMU mmap()s the file as MAP_PRIVATE or MAP_SHARED.
Yet again, we have such a config knob in the domain XML, but it's just
an attribute of the NUMA <cell/> element. This does not give fine
enough tuning on a per-memdevice basis, so we need to have the
attribute for each device too.
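A minimal sketch of the per-device attribute (assuming the same
'access' attribute name used by the NUMA <cell/> element):

  <memory model='nvdimm' access='shared'>
    ...
  </memory>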
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So, the majority of the code is just ready as-is. Well, with one
slight change: differentiate between dimm and nvdimm in places
like device alias generation, command line generation, and so on.
Speaking of the command line, we also need to append 'nvdimm=on'
to the '-machine' argument so that the nvdimm feature is properly
advertised in the ACPI tables.
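A minimal sketch of the resulting command line fragments (IDs, path
and size illustrative):

  -machine pc,nvdimm=on \
  -object memory-backend-file,id=memnvdimm0,mem-path=/tmp/nvdimm,size=512M \
  -device nvdimm,memdev=memnvdimm0,id=nvdimm0,slot=0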
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
NVDIMM is a new type of memory introduced in QEMU 2.6. The idea
is that we have a Non-Volatile memory module that keeps the data
persistent across domain reboots.
At the domain XML level, we already have some representation of
'dimm' modules. Long story short, NVDIMM will utilize the
existing <memory/> element that lives under <devices/> by adding
a new value 'nvdimm' to the existing @model attribute and
introducing a new <path/> element under <source/> while reusing
other fields. The resulting XML would appear as:
  <memory model='nvdimm'>
    <source>
      <path>/tmp/nvdimm</path>
    </source>
    <target>
      <size unit='KiB'>523264</size>
      <node>0</node>
    </target>
    <address type='dimm' slot='0'/>
  </memory>
So far, this is just an XML parser/formatter extension. The QEMU
driver implementation is in the next commit.
For more info on NVDIMM visit the following web page:
http://pmem.io/
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
While reviewing a patch from Andrea that modified this test case, I
realized that although it was "properly failing" (it's a negative
test), it was failing for the wrong reason: the MULTIFUNCTION cap
wasn't set in the test case, so it was complaining that
multifunction=on wasn't supported by the QEMU binary, when it should
instead have been complaining that it had run out of PCI slots of the
appropriate type and couldn't automatically add any more.
This improper failure had started when I added the patch to
automatically aggregate pcie-root-ports onto multiple functions of
each pcie-root slot, but I hadn't noticed it because the test still
failed.
This patch corrects the test case to 1) set the MULTIFUNCTION flag in
the caps, and 2) attempt to add 241 pcie-root-ports to a domain. Since
there are 30 slots available on a pcie-root (slot 0 is reserved, and
slot 31 is used by the integrated SATA controller), each slot has 8
functions, and a pcie-root-port can only be placed on a function of a
slot on pcie-root, the maximum number of pcie-root-ports in any domain
is 30 x 8 = 240.
virQEMUCapsHasPCIMultiBus() performs a version check on
the QEMU binary to figure out whether multiple buses are
supported, so to get the correct aliases assigned when
dealing with pSeries guests we need to spoof the version
accordingly in the test suite.