The 'device' field reported by 'query-block' is empty when -blockdev is
used. Add an argument which will allow matching the disk by its qdev
id so that we can use this code with -blockdev.
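For illustration, a query-block reply for a -blockdev backed disk looks
roughly like this (a sketch only; the qdev path is illustrative and the
remaining fields are abbreviated):

{ "execute": "query-block" }
{ "return": [ { "device": "",
                "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
                "locked": false,
                "inserted": { ... } } ] }

The empty "device" makes the legacy drive alias unusable for matching,
while "qdev" still identifies the disk.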
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The 'device' argument matches only the legacy drive alias. For blockdev
we need to set the throttling for a QOM id and thus we'll need to use
the 'id' field.
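For reference, the corresponding QMP call using the qdev/QOM id instead
of the legacy drive alias would look roughly like this (the device id
and the throttling values are illustrative only):

{ "execute": "block_set_io_throttle",
  "arguments": { "id": "virtio-disk0",
                 "bps": 1000000, "bps_rd": 0, "bps_wr": 0,
                 "iops": 0, "iops_rd": 0, "iops_wr": 0 } }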
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Use qemuDomainAttachDeviceDiskLive to change the media in
qemuDomainChangeDiskLive as the former function already does all the
necessary steps to prepare the new medium.
This also allows us to make qemuDomainChangeEjectableMedia static.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We don't use it for anything useful so it does not make much sense to
extract it.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Print the differences in case when the expected data does not match.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Since we're not saving the platform-specific data into a cache, we're
not going to populate the structure, which in turn will cause a crash
upon calling virNodeGetSEVInfo because of a NULL pointer dereference.
Ultimately, we should start caching this data along with host-specific
capabilities like NUMA and SELinux stuff into a separate cache, but for
the time being, this is a semi-proper fix for a potential crash.
Backtrace (requires libvirtd restart to load qemu caps from cache):
#0 qemuGetSEVInfoToParams
#1 qemuNodeGetSEVInfo
#2 virNodeGetSEVInfo
#3 remoteDispatchNodeGetSevInfo
#4 remoteDispatchNodeGetSevInfoHelper
#5 virNetServerProgramDispatchCall
#6 virNetServerProgramDispatch
#7 virNetServerProcessMsg
#8 virNetServerHandleJob
#9 virThreadPoolWorker
#10 virThreadHelper
https://bugzilla.redhat.com/show_bug.cgi?id=1612009
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Acked-by: Peter Krempa <pkrempa@redhat.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
So the procedure to detect SEV support works like this:
1) we detect that sev-guest is among the QOM types and set the cap flag
2) we probe the monitor for SEV support (see the example exchange below)
- this is tricky, because QEMU built with SEV support will always
report the sev-guest object and the query-sev-capabilities command;
that, however, doesn't mean SEV is actually supported
3) depending on what the monitor returned, we either keep or clear the
capability flag for SEV
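As an illustration of step 2, the monitor exchange looks roughly like
this (the SEV platform values and the error text are examples only):

{ "execute": "query-sev-capabilities" }

on a host where SEV is actually usable:
{ "return": { "pdh": "...", "cert-chain": "...",
              "cbitpos": 47, "reduced-phys-bits": 1 } }

on a host where it isn't, even though QEMU was built with SEV support:
{ "error": { "class": "GenericError",
             "desc": "SEV feature is not available" } }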
Commit a349c6c21c added an explicit check for "GenericError" in the
monitor reply to prevent libvirtd from spamming the logs about the
missing 'query-sev-capabilities' command. At the same time, though, it
returned success in this case, which means that we didn't clear the capability
flag afterwards and happily formatted SEV into qemuCaps. Therefore,
adjust all the relevant callers to handle -1 on errors, 0 on SEV being
unsupported and 1 on SEV being supported.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Acked-by: Peter Krempa <pkrempa@redhat.com>
In order to test SEV we need real QEMU capabilities. Ideally, this would
be tested with the -latest capabilities; however, our capabilities data
is currently tied to Intel HW, and even the 2.12.0 data containing SEV
was edited by hand, so we can only use that one for now, as splitting
the capabilities according to the vendor is a refactor for another day.
The need for real capabilities comes from the extended SEV platform data
(PDH, cbitpos, etc.) that we'll need to cache/parse.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Acked-by: Peter Krempa <pkrempa@redhat.com>
QEMU 3.0 supports Hyper-V-style PV TLB flush; Windows guests can benefit
from this feature as KVM knows which vCPUs are not currently scheduled (and
thus don't require any immediate action).
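In the domain XML this maps to the Hyper-V feature flags; a minimal
sketch, assuming the usual <hyperv> feature syntax:

<features>
  <hyperv>
    <tlbflush state='on'/>
  </hyperv>
</features>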
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
QEMU 3.0 supports so-called 'Reenlightenment' notifications and this (in
conjunction with 'hv-frequencies') can be used to make Hyper-V on KVM pass
a stable TSC page clocksource to L2 guests.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
QEMU 2.12 gained the 'hv-frequencies' CPU flag to enable Hyper-V frequency
MSRs. These MSRs are required (but not sufficient) to make Hyper-V on
KVM pass a stable TSC page clocksource to L2 guests.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
As advertised in the previous commit, we need the list of
accessed files to also contain the action that caused $path to
appear on the list. Not only does this enable us to fine tune our
whitelist rules, it also helps us see why $path is reported.
For instance:
/run/user/1000/libvirt/libvirt-sock: connect: qemuxml2argvtest: QEMU XML-2-ARGV net-vhostuser-multiq
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The check-file-access.pl script is used to match the access list
generated by virtestmock against the whitelist rules stored in
file_access_whitelist.txt. So far the rules are in the form:
$path: $progname: $testname
This is not sufficient because the rule does not take into
account the 'action' that caused $path to appear in the list of
accessed files. After this commit the rule can also take the new form:
$path: $action: $progname: $testname
where $action is one of ("open", "fopen", "access", "stat",
"lstat", "connect"). This way the whitelist can be fine tuned to
allow, say, access() but not connect().
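For instance, a hypothetical rule in the new format (the path and the
test name are made up for illustration) allowing only open() for a
single test would be:

/some/safe/path: open: qemuxml2argvtest: QEMU XML-2-ARGV some-test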
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
So far we are setting up only fake secret and storage drivers.
Therefore, if the code wants to call a public NWFilter API (like
qemuBuildInterfaceCommandLine() and qemuBuildNetCommandLine() are
doing), the virGetConnectNWFilter() function will try to actually
spawn a session daemon because there's no connection object set to
handle the NWFilter driver.
Even though I haven't experienced the same problem with the rest
of the drivers (interface, network and node dev), the reasoning
above can be applied to them as well.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
This proves libvirt can now handle high socket_id and
core_id values correctly and ensures we won't introduce
regressions in this area.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
The latter are no longer used by libvirt, and the former
never were; moreover, both have a corresponding *_list
file which we can manipulate very conveniently using our
bitmap APIs, so dropping them makes sure that in the future
developers will look there rather than trying to
parse the kernel's binary bitmaps.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Some of the data dumps didn't include them; luckily,
we're not actually missing any information since we
can recreate them by looking at the corresponding
thread_siblings files.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Add a new XML section to report the host's memory bandwidth allocation
capability. The format follows the example below:
<host>
.....
<memory_bandwidth>
<node id='0' cpus='0-19'>
<control granularity='10' min='10' maxAllocs='8'/>
</node>
</memory_bandwidth>
</host>
granularity ---- granularity of memory bandwidth, in percent.
min ---- minimum memory bandwidth allowed, in percent.
maxAllocs ---- maximum number of memory bandwidth allocation groups supported.
Signed-off-by: Bing Niu <bing.niu@intel.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Introduce a new section, memorytune, to support memory bandwidth allocation.
This is consistent with the existing cachetune. For example:
<cputune>
......
<memorytune vcpus='0'>
<node id='0' bandwidth='30'/>
</memorytune>
</cputune>
vcpus --- the vCPUs subject to this memory bandwidth setting.
id --- the node on which the memory bandwidth is to be set.
bandwidth --- the memory bandwidth percentage to set.
Signed-off-by: Bing Niu <bing.niu@intel.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
If a domain has hugepages configured and we're building a
memory-backend-file for an nvdimm device that the domain has, we will
put the hugepages path onto the command line. It should have been the
nvdimm path configured in the XML instead.
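An affected configuration looks roughly like this (paths and sizes are
illustrative only):

<memoryBacking>
  <hugepages/>
</memoryBacking>
...
<devices>
  <memory model='nvdimm'>
    <source>
      <path>/dev/pmem0</path>
    </source>
    <target>
      <size unit='KiB'>524288</size>
      <node>0</node>
    </target>
  </memory>
</devices>

Here the memory-backend-file built for the nvdimm must point at
/dev/pmem0, not at the hugepages mount point.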
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
This reverts commit 9cf38263d0.
Jansson cannot parse QEMU's quirky JSON.
Revert back to yajl.
https://bugzilla.redhat.com/show_bug.cgi?id=1614569
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
This reverts commit 4dd6054000.
Jansson cannot parse QEMU's quirky JSON.
Revert back to yajl.
https://bugzilla.redhat.com/show_bug.cgi?id=1614569
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
This reverts commit c31146685f.
Jansson cannot parse QEMU's quirky JSON.
Revert back to yajl.
https://bugzilla.redhat.com/show_bug.cgi?id=1614569
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
This reverts commit 397447f805.
Jansson cannot parse QEMU's quirky JSON.
Revert back to yajl.
https://bugzilla.redhat.com/show_bug.cgi?id=1614569
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Previously we were ignoring the "nodeset" attribute for hugepage <page>
elements if there was no guest NUMA topology configured in the domain XML.
Commit <fa6bdf6afa878b8d7c5ed71664ee72be8967cdc5> partially fixed
that issue, but it introduced a regression of sorts.
When there is no guest NUMA topology configured and the "nodeset"
attribute is set to "0", the configuration used to be accepted and
worked properly even though it was not completely valid XML.
This patch introduces a workaround: nodeset="0" is ignored, but only
when there is no guest NUMA topology, so that such a configuration does
not hit the validation error.
After this commit the following XML configuration is valid:
<memoryBacking>
<hugepages>
<page size='2048' unit='KiB' nodeset='0'/>
</hugepages>
</memoryBacking>
but this configuration remains invalid:
<memoryBacking>
<hugepages>
<page size='2048' unit='KiB' nodeset='0'/>
<page size='1048576' unit='KiB'/>
</hugepages>
</memoryBacking>
The issue with the second configuration is that it originally worked;
however, changing the order of the <page> elements resulted in a
different page size being used for the guest. The code is written
in a way that it expects only one page to be configured and always uses
only the first page when there is no guest NUMA topology configured.
See the qemuBuildMemPathStr() function for details.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1591235
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
We can safely validate the hugepage nodeset attribute at define time.
This validation is not done for already existing domains when the daemon
is restarted.
All the changes to the tests are necessary because we move the error
from domain start time to XML parse time.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
This use-case was broken by commit
<fa6bdf6afa878b8d7c5ed71664ee72be8967cdc5>.
We allowed this configuration and it was working as expected, therefore
we can consider this a regression. We should have never allowed such a
configuration, so now the best solution is, in the case of a non-NUMA
guest, to silently ignore the 'nodeset' attribute if it's set to '0'.
That will be fixed by the following patches.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
This test case currently works, but it uncovers an existing issue in our
code: the generated QEMU command line uses the default 1G hugepage
instead of the 2M hugepage specified for the exact node.
The issue in our code is that for non-NUMA guests we take into account
only the first hugepage. This will be fixed by rejecting such a
configuration as invalid, since it doesn't make any sense to set both a
default and a node-specific hugepage for a non-NUMA guest.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Remove unnecessary XML elements as well.
<numatune> for a NUMA guest is tested by the numatune-memnode test.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
From the args output you can see that the 'discard' feature is not
honored if you don't use hugepages; that is a bug which the following
patches will fix.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
There are a couple of files that are the same in both the
qemuxml2argvdata and qemuxml2xmloutdata directories. Link them
instead of keeping full copies.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Starting from pc-q35-2.4 the floppy controller is not enabled by
default. Fix the version check so that it does not treat 2.11 as if it
were 2.1.
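The pitfall is the usual one with string-based version checks; a minimal
self-contained sketch (not the actual libvirt code) of parsing and
comparing the machine-type version numerically:

#include <stdbool.h>
#include <stdio.h>

/* Parse the version out of a "pc-q35-X.Y" machine type and compare it
 * numerically, so that "2.11" counts as newer than "2.4" instead of
 * being mistaken for a string starting with "2.1". */
static bool
q35VersionAtLeast(const char *machine, unsigned int major, unsigned int minor)
{
    unsigned int maj, min;

    if (sscanf(machine, "pc-q35-%u.%u", &maj, &min) != 2)
        return false;

    return maj > major || (maj == major && min >= minor);
}

int main(void)
{
    /* pc-q35-2.1 predates 2.4: the FDC is still enabled by default. */
    printf("pc-q35-2.1  >= 2.4: %d\n", q35VersionAtLeast("pc-q35-2.1", 2, 4));
    /* pc-q35-2.11 is newer than 2.4: the FDC must be added explicitly. */
    printf("pc-q35-2.11 >= 2.4: %d\n", q35VersionAtLeast("pc-q35-2.11", 2, 4));
    return 0;
}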
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Fix a regression introduced in <42fd5a58adb>. With the q35 machine type,
which requires the explicitly specified FDC, we'd format two isa-fdc
controllers on the command line because the code was moved to a place
where it's called per-disk.
Move the call back after formatting all disks and iterate over the disks
again to find the floppy controllers.
This also moves the '-global' directive which sets up the default
ISA-FDC to the end, after all the disks, but since we are only modifying
the properties it is safe to do so.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The floppy drive command line is different on the q35 machine. Make sure
to test that both drives are supported, and also multiple machine
versions, as we generate the command line differently.
Note that both output files show a wrong command line, which will be
fixed subsequently.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>