The test takes the x86_64-cpuid-Something-guest.xml CPU (the CPU
libvirt would use for host-model on a CPU described by
x86_64-cpuid-Something.xml without talking to QEMU about what it
supports on the host) and updates it according to CPUID data from QEMU:
x86_64-cpuid-Something-enabled.xml (reported as "feature-words"
property of the CPU device)
and
x86_64-cpuid-Something-disabled.xml (reported as "filtered-features"
property of the CPU device).
The result is compared to
x86_64-cpuid-Something-json.xml (the CPU libvirt would use as
host-model based on the reply from query-cpu-model-expansion).
The comparison is a bit tricky because the *-json.xml CPU contains fewer
disabled features. Only the features which are included in the base CPU
model, but listed as disabled in *.json will be disabled in *-json.xml.
The CPU computed by virCPUUpdateLive from the test data will list all
features present in the host's CPUID data and not enabled in *.json as
disabled. The cpuTestUpdateLiveCompare function checks that the computed
and expected sets of enabled features match.
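For illustration, a *-json.xml file contains a regular CPU definition
along these lines (the model and feature names here are made up):

  <cpu mode='custom' match='exact'>
    <model fallback='forbid'>Haswell-noTSX</model>
    <feature policy='require' name='vme'/>
    <feature policy='disable' name='erms'/>
  </cpu>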
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
All CPU features which libvirt knows about but QEMU does not (currently
"cmt" is the only one) are implicitly disabled by QEMU and should be
present in x86_64-cpuid-*-disabled.xml.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Commit v3.1.0-26-gd60012b4e started filtering the hle and rtm features
from broken Intel Haswell CPUs. QEMU implemented similar functionality
and thus no longer reports the rtm and hle features as enabled for the
Core i5-4670T CPU.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The new command can be used to generate test data for virCPUUpdateLive.
When "cpu-cpuid.py diff x86-cpuid-Something.json" is run, it reads raw
CPUID data stored in x86-cpuid-Something.xml and CPUID data from QEMU
stored in x86-cpuid-Something.json to produce two more CPUID files:
x86-cpuid-Something-enabled.xml and x86-cpuid-Something-disabled.xml.
- x86-cpuid-Something-enabled.xml will contain CPUID bits present in
x86-cpuid-Something.json (i.e., enabled by QEMU for the "host" CPU)
- x86-cpuid-Something-disabled.xml will contain all CPUID bits from
x86-cpuid-Something.xml which are not present in
x86-cpuid-Something.json (i.e., CPUID bits which the host CPU
supports, but QEMU does not enable for the "host" CPU)
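Both generated files use the same raw CPUID leaf format as the input
XML; a hypothetical sketch (the register values are illustrative only):

  <cpudata arch='x86'>
    <cpuid eax_in='0x00000001' ecx_in='0x00000000'
           eax='0x00000000' ebx='0x00000000'
           ecx='0x80202001' edx='0x078bfbfd'/>
  </cpudata>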
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The new script is going to be more general, and the original
functionality can be requested via "cpu-cpuid.py convert".
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The public API flags are handled by the cpuBaselineXML wrapper. The
internal cpuBaseline API only needs to know whether it is supposed to
drop non-migratable features.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
cpuBaseline is responsible for computing a baseline CPU while feature
expansion is done by virCPUExpandFeatures. The cpuBaselineXML wrapper
(used by hypervisor drivers to implement virConnectBaselineCPU API)
calls cpuBaseline followed by virCPUExpandFeatures if requested by
VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES flag.
The features in the three changed test files had to be sorted using
"sort -k 3" because virCPUExpandFeatures returns a sorted list of
features.
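For example, while a plain baseline may report just a model, a baseline
computed with VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES would look
roughly like this (the model and features are illustrative):

  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Westmere</model>
    <feature policy='require' name='aes'/>
    <feature policy='require' name='ssse3'/>
  </cpu>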
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
A mediated device will be identified by the UUID of the user-pre-created
mediated device, with 'model' now being a mandatory <hostdev> attribute
representing the mediated device API. We also need to make sure that if
the user explicitly provides a guest address for an mdev device, the
address type matches the device API supported by that specific mediated
device, erroring out with an invalid XML message otherwise.
The resulting device XML:
  <devices>
    <hostdev mode='subsystem' type='mdev' model='vfio-pci'>
      <source>
        <address uuid='c2177883-f1bb-47f0-914d-32a22e3a8804'/>
      </source>
    </hostdev>
  </devices>
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Instead of generating all of the capabilities, let's test more of our
code by probing sysfs data. This test needs quite a bit of mocking for
now, but it paves the way for future enhancements (hugepages probing,
for example).
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
All mocked functions are related to numactl/virNuma and rely only on
virsysfs, so the paths they touch can be nicely controlled. And because
it is such a nicely self-contained NUMA mock, it is named numamock
(instead of being named after the test that will use it first). We need
to mock the top-level APIs because some of them may call libnuma
directly, e.g. virNumaIsAvailable() and virNumaGetMaxNode().
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
A bit more test data, this time with complete info copied, mainly cache
information, so we can easily add tests for it.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
There is no "node driver" as there was before, drivers have to do
their own ACL checking anyway, so they all specify their functions and
nodeinfo is basically just extending conf/capablities. Hence moving
the code to src/conf/ is the right way to go.
Also that way we can de-duplicate some code that is in virsysfs and/or
virhostcpu that got duplicated during the virhostcpu.c split. And
Some cleanup is done throughout the changes, like adding the vir*
prefix etc.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
There is no reason for it not to be in the utils; all global symbols in
that file already have the vir* prefix, and there is no reason for it
to be part of DRIVER_SOURCES, which is just a leftover from older days
(the pre-driver-modules era, I believe).
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
While at it, drop support for kernels from the RHEL-5 era (missing
cpu/present file). Also add some useful functions and export them.
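For reference, the cpu/present file holds a simple range list; on an
8-CPU host:

  # cat /sys/devices/system/cpu/present
  0-7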
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The functionality these tests partially relied on (scanning the cpu
directory for cpu[0-9]+ subdirectories) is going to be removed, so we
need additional files that are present on all non-medieval systems.
Removing all these tests would be an option, but we would lose the
ability to test the topologies, even though we only extract the number
of sockets/cores/threads from these directory trees.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
oVirt uses relative names with directories in them. Test such a
configuration. Also test a snapshot done with _REUSE_EXTERNAL and a
relative backing file pre-specified in the qcow2 metadata.
Since we have to match the images by filename, a common backing image
will break the detection process. Add a test case checking that the
code correctly does not continue the detection process in that case.
The event is fired when a given block backend node (identified by the
node name) experiences a write beyond the bound set via the
block-set-write-threshold QMP command. This wires up the monitor code
to extract the data, allowing us to receive the events, and adds the
corresponding capability.
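The event delivered by QEMU looks roughly like this (the node name and
values are illustrative):

  {"event": "BLOCK_WRITE_THRESHOLD",
   "data": {"node-name": "drive-virtio-disk0",
            "amount-exceeded": 512,
            "write-threshold": 536870912}}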
Along with video and VNC support, bhyve has introduced USB tablet
support as an input device. This tablet is exposed to a guest
as a device on an XHCI controller.
At present, the tablet is the only supported device on the XHCI
controller in bhyve, so to keep things simple only a single XHCI
controller with a single tablet device is allowed.
In detail, this commit:
- Introduces a new capability bit for XHCI support in bhyve
- Adds an XHCI controller and tablet support with a 1:1 mapping
between them
- Adds a couple of unit tests
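A guest using the tablet would then carry something along these lines
in its domain XML (a sketch):

  <controller type='usb' model='nec-xhci'/>
  <input type='tablet' bus='usb'/>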
* Extract filling bhyve capabilities from virBhyveDomainCapsBuild()
into a new function virBhyveDomainCapsFill() to make testing
easier by not having to mock firmware directory listing and
hypervisor capabilities probing
* Also, the mere presence of the firmware files is not sufficient to
  enable os.loader.supported; the hypervisor should support UEFI boot
  too
* Add tests to domaincapstest for the main possible capability flows:
  - when UEFI bootrom is supported
  - when video (fbuf) is supported
  - when neither of the above is supported
Add the fields to support setting tls-creds and tls-hostname during a
migration (either source or target). Modify the query migration
function to check for the presence of the fields and set them for
future consumers to determine which of 3 conditions is met (NULL,
present and set to "", or present and set to something). These
correspond to qemu commit id '4af245dc3' which added support to default
the value to "" and allow setting (or resetting) to "" in order to
disable. This reset option allows libvirt to properly use the tls-creds
and tls-hostname parameters.
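On the monitor this maps to migrate-set-parameters; a sketch of
enabling TLS while resetting tls-hostname (the TLS object id is
illustrative):

  {"execute": "migrate-set-parameters",
   "arguments": {"tls-creds": "tls0", "tls-hostname": ""}}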
Modify code paths that either allocate or use stack space in order
to call qemuMigrationParamsClear or qemuMigrationParamsFree for cleanup.
Signed-off-by: John Ferlan <jferlan@redhat.com>
It was pointed out here:
https://bugzilla.redhat.com/show_bug.cgi?id=1331796#c4
that we shouldn't be adding a "no-resolv" to the dnsmasq.conf file for
a network if there isn't any <forwarder> element that specifies an IP
address but no qualifying domain. If there is such an element, it will
handle all DNS requests that weren't otherwise handled by one of the
forwarder entries with a matching domain attribute. If not, then DNS
requests that don't match the domain of any <forwarder> would not be
resolved if we added no-resolv.
So, only add "no-resolv" when there is at least one <forwarder>
element that specifies an IP address but no qualifying domain.
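For example, a <dns> configuration like the following (the addresses
are illustrative) gets "no-resolv", because the second forwarder
catches everything the domain-specific one does not:

  <dns>
    <forwarder domain='example.com' addr='192.168.122.1'/>
    <forwarder addr='8.8.8.8'/>
  </dns>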
qemuMonitorGetGuestCPU can now optionally create CPU data from
filtered-features in addition to feature-words.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We want pcie-root-ports to be used when available in QEMU,
but at the same time we need to ensure that hosts running
older QEMU releases keep working and that the user can
override the default at any time.
Add a comment for the original pcie-root-port test cases
to make it clear how these new test cases are different.
QEMU 2.9 introduces the pcie-root-port device, which is
a generic version of the existing ioh3420 device.
Make the new device available to libvirt users.
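A user can then request the generic device explicitly, e.g.:

  <controller type='pci' model='pcie-root-port'/>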
There were a couple of reports on the list (e.g. [1]) that guests with
huge amounts of RAM are unable to start because libvirt kills qemu
during the initialization phase. The problem is that if a guest is
configured to use hugepages, the kernel has to zero them all out before
handing them over to the qemu process. For instance, 402GiB worth of
1GiB pages took around 105 seconds (~3.8GiB/s). Since we do not want to
make the timeout for connecting to the monitor configurable, we have to
teach libvirt to account for this fact. This commit implements the "1s
per each 1GiB of RAM" approach as suggested here [2].
1: https://www.redhat.com/archives/libvir-list/2017-March/msg00373.html
2: https://www.redhat.com/archives/libvir-list/2017-March/msg00405.html
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
For NVDIMM devices it is optionally possible to specify the size
of internal storage for namespaces. Namespaces are a feature that
allows users to partition the NVDIMM for different uses.
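A sketch of the XML, assuming the namespace storage is expressed as a
<label> sub-element of <target> (the sizes are illustrative):

  <memory model='nvdimm'>
    <source>
      <path>/tmp/nvdimm</path>
    </source>
    <target>
      <size unit='KiB'>523264</size>
      <label>
        <size unit='KiB'>128</size>
      </label>
      <node>0</node>
    </target>
  </memory>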
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Now that NVDIMM has found its way into libvirt, users might want to
fine tune some settings for each module separately. One such setting is
'share=on|off' for the memory-backend-file object. This setting - just
as its name suggests - enables sharing the nvdimm module with other
applications. Under the hood it controls whether qemu mmap()s the file
as MAP_PRIVATE or MAP_SHARED.
We already have such a config knob in the domain XML, but it's just an
attribute of the NUMA <cell/>. That does not allow fine enough tuning
on a per-memdevice basis, so we need to have the attribute for each
device too.
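A sketch of the per-device XML, assuming the attribute is called
'access' like its NUMA <cell/> counterpart:

  <memory model='nvdimm' access='shared'>
    <source>
      <path>/tmp/nvdimm</path>
    </source>
  </memory>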
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The majority of the code is ready as-is, with one slight change:
differentiate between dimm and nvdimm in places like device alias
generation, command line generation and so on.
Speaking of the command line, we also need to append 'nvdimm=on'
to the '-machine' argument so that the nvdimm feature is
advertised in the ACPI tables properly.
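The resulting command line would then contain something along these
lines (the ids and sizes are illustrative):

  -machine pc,nvdimm=on \
  -object memory-backend-file,id=memnvdimm0,mem-path=/tmp/nvdimm,size=536870912 \
  -device nvdimm,memdev=memnvdimm0,id=nvdimm0,slot=0,node=0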
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
NVDIMM is a new type of memory introduced in QEMU 2.6. The idea is that
we have a Non-Volatile memory module that keeps the data persistent
across domain reboots.
At the domain XML level, we already have some representation of 'dimm'
modules. Long story short, NVDIMM will utilize the existing <memory/>
element that lives under <devices/> by adding a new value 'nvdimm' to
the existing @model attribute and introducing a new <path/> element for
<source/> while reusing other elements. The resulting XML would appear
as:
  <memory model='nvdimm'>
    <source>
      <path>/tmp/nvdimm</path>
    </source>
    <target>
      <size unit='KiB'>523264</size>
      <node>0</node>
    </target>
    <address type='dimm' slot='0'/>
  </memory>
So far, this is just an XML parser/formatter extension. The QEMU driver
implementation comes in the next commit.
For more info on NVDIMM visit the following web page:
http://pmem.io/
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>