Also, validate that the requested feature is supported by QEMU.
Signed-off-by: Andrew Melnychenko <andrew@daynix.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Added "rss" and "rss_hash_report" configuration that should be
used with qemu virtio RSS. Both options are triswitches. Used as
"driver" options and affects only NIC with model type "virtio".
In other patches - options should turn on virtio-net RSS and hash
properties.
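A rough sketch of how the options are expected to appear in the interface
definition (the enclosing elements are placeholders; the new attributes sit
on the <driver> element of a virtio NIC):
  <interface type='network'>
    <model type='virtio'/>
    <driver rss='on' rss_hash_report='off'/>
  </interface>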
Signed-off-by: Andrew Melnychenko <andrew@daynix.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Starting with qemu-3.1 we always have the '-overcommit' argument and use
it instead of '-realtime'. Remove the capability check and fix all
fake-caps tests.
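For reference, a minimal sketch of what this switch means on the QEMU
command line (suboption names as documented by QEMU; libvirt's full set of
emitted options is not shown):
  old: -realtime mlock=off
  new: -overcommit mem-lock=off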
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
All qemu versions now support FD passing either directly or via FDset.
Assume that we always have this capability so that we can simplify
chardev handling in many cases.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Upcoming patches will raise the minimum required qemu version to 3.1.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Upcoming patches will raise the minimum required qemu version to 3.1.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Upcoming patches will raise the minimum required qemu version to 3.1.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
virtio-iommu needs to be an integrated device, and our address
assignment code will make sure that is the case. If the user has
provided an explicit address, however, we should make sure any
addresses pointing to a different bus are rejected.
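For example, an explicit address along the lines of the sketch below
(values are placeholders) should now be rejected, because it puts the
device on a bus other than the root bus it must be integrated into:
  <iommu model='virtio'>
    <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </iommu>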
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
virtio-iommu is a PCI device and attempts to use a different
address type should be rejected.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
virtio-iommu doesn't work without ACPI, so we need to make sure
the latter is enabled.
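In domain XML terms this means a configuration roughly like the following
sketch (the usual <acpi/> feature flag together with the virtio IOMMU
device element assumed here):
  <features>
    <acpi/>
  </features>
  <devices>
    <iommu model='virtio'/>
  </devices>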
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The QEMU binary is built from the v7.0.0-rc2 tag.
This causes the argument to -device to be generated in JSON
format, the same as what commit 1a691fe1c8 did for x86_64.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
The QEMU binary is built from the v7.0.0-rc2 tag.
Some of the additional capabilities that show up are a
consequence of more features being enabled in this build than
in the one used to generate the replies initially.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
While commit a5e659f0 removed the restriction against multiple queues
for the vdpa net device, there were some missing pieces. Configuring a
device statically and then starting the domain worked as expected, but
hotplugging a device didn't have the expected multiqueue support
enabled. Add the missing bits.
Consider the following device xml:
<interface type="vdpa">
<mac address="00:11:22:33:44:03" />
<source dev="/dev/vhost-vdpa-0" />
<model type="virtio" />
<driver queues='2' />
</interface>
Without this patch, hotplugging the above XML description resulted in
the following:
{"execute":"netdev_add","arguments":{"type":"vhost-vdpa","vhostdev":"/dev/fdset/0","id":"hostnet1"},"id":"libvirt-392"}
{"execute":"device_add","arguments":{"driver":"virtio-net-pci","netdev":"hostnet1","id":"net1","mac":"00:11:22:33:44:03","bus":"pci.5","addr":"0x0"},"id":"libvirt-393"}
With the patch, hotplugging results in the following:
{"execute":"netdev_add","arguments":{"type":"vhost-vdpa","vhostdev":"/dev/fdset/0","queues":2,"id":"hostnet1"},"id":"libvirt-392"}
{"execute":"device_add","arguments":{"driver":"virtio-net-pci","mq":true,"vectors":6,"netdev":"hostnet1","id":"net1","mac":"00:11:22:33:44:03","bus":"pci.5","addr":"0x0"},"id":"libvirt-393"}
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2024406
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Apply the user-provided changes to the device definition, as requested
via the <qemu:deviceOverride> element from the custom qemu XML namespace.
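As a hedged illustration of the namespace element (placed inside a <domain>
that declares xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'; the
alias, property name, type and value below are hypothetical placeholders):
  <qemu:deviceOverride>
    <qemu:device alias='ua-myDisk'>
      <qemu:frontend>
        <qemu:property name='bootindex' type='unsigned' value='1'/>
      </qemu:frontend>
    </qemu:device>
  </qemu:deviceOverride>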
Closes: https://gitlab.com/libvirt/libvirt/-/issues/287
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
This reverts commit 06c960e477.
Turns out, this feature is not needed and QEMU will fix TSC
without any intervention from outside.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
QEMU 7.0.0 adds a new property, tsc-clear-on-reset, to the x86 CPU, corresponding
to Libvirt's <tsc on_reboot="clear"/> element. Plumb it through the validation,
command line handling and tests.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Let's generate the prealloc-threads property on the command line if the
domain configuration requests it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
Since its v5.0.0 release QEMU allows specifying the number of
threads used to allocate memory. It defaults to 1, which may be
too low for humongous guests with gigantic pages.
While on the QEMU command line level it is possible to use a
different number of threads for each memory-backend-* object, in
practical terms that's not useful. Therefore, use <memoryBacking/>
to set a guest-wide value and let all memory devices silently
'inherit' it. In other words, don't introduce a per-device knob,
because that would only complicate things for little or no benefit.
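A minimal sketch of the intended configuration (assuming the new knob is
exposed as a 'threads' attribute on the <allocation> sub-element; the value
is just an example):
  <memoryBacking>
    <allocation threads='8'/>
  </memoryBacking>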
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
In cases when the hostname of the NBD server doesn't match the hostname
in the TLS certificate, the new attribute 'tlsHostname' can be used to
override it.
Add the XML infrastructure and tests.
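A rough sketch of the resulting disk XML (placement of the new attribute
assumed on the NBD <source> element; host name, address and export name are
placeholders):
  <disk type='network' device='disk'>
    <source protocol='nbd' name='export' tls='yes' tlsHostname='nbd.example.com'>
      <host name='203.0.113.1' port='10809'/>
    </source>
    <target dev='vdb' bus='virtio'/>
  </disk>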
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
By using the auto-generated NVRAM path in test data files, we fail to notice
bugs where a user-specified path gets accidentally overwritten by a
post-parse callback or by VM startup. For example, this caused us to miss
the bug fixed by:
commit 24adb6c7a6
Author: Michal Prívozník <mprivozn@redhat.com>
Date: Wed Feb 23 08:50:44 2022 +0100
qemu: Don't regenerate NVRAM path if parsed from domain XML
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When building the default memory backend (which has id='pc.ram')
and no guest NUMA is configured,
qemuBuildMemCommandLineMemoryDefaultBackend() is called. However,
its return value is ignored, which means that on an invalid
configuration (e.g. when a non-existent hugepage size was
requested) an error is reported in the logs but QEMU is started
anyway. And while QEMU does error out, its error message doesn't
give much of a clue about what's going on:
qemu-system-x86_64: Memory backend 'pc.ram' not found
While at it, introduce a test case. While I could choose a nice-looking
value (e.g. 4MiB), that's exactly what I wanted to avoid,
because while such a value might not be possible on x86_64 it may
be possible on other arches (e.g. ppc is notoriously known for
supporting a wide range of hugepage sizes). Let's stick with the
obviously wrong value of 5MiB.
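For context, the test configuration boils down to something like this
(a sketch; the deliberately unsupported 5MiB page size is the whole point):
  <memoryBacking>
    <hugepages>
      <page size='5' unit='MiB'/>
    </hugepages>
  </memoryBacking>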
Reported-by: Charles Polisher <chas@chasmo.org>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This demonstrates that
  <os>
    <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
    <nvram template="/usr/share/OVMF/OVMF_VARS.fd"/>
  </os>
gets expanded to give a per-VM NVRAM path.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The following is expected to raise an error:
  <os>
    <loader readonly='yes' type='pflash'/>
  </os>
because no path to the pflash loader is given and there is
no default built-in.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Upcoming patches will remove support for qemu-2.12. Since tests of
'sev' use hacked data we need to use our capability dump of qemu-6.0 as
it has the required fields.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Originally, when I started working on '-blockdev' support, I added
version-locked variants of all the relevant disk tests pinned to qemu-2.12,
but blockdev was finally enabled only with qemu-4.2.
This patch bumps the rest of the test cases, with no functional changes
related to disks.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The 'device_id' property was added in qemu-4.0. Since an upcoming patch
will modernize all disk test cases, we specifically want to preserve a test
of 'device_id' not being used with qemu-3.1 and earlier.
Change the 'disk-cache' and 'disk-shared' cases to have a qemu-3.1 and a
qemu-4.1 version for testing the pre-'device_id' and pre-blockdev scenarios.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Starting with the qemu-3.0 release we use the 'werror' and 'rerror'
properties with the frontend (device) rather than the storage backend
(with a minor caveat for s390, where we use them earlier since it doesn't
support USB disks and other disk types supported them earlier).
Add specific test cases for after the change, but before '-blockdev' was
enabled.
This is done separately from the changes in the next commit, which simply
moves all other disk tests to the last pre-blockdev qemu, as we have a
semantic change happening after 2.12.
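To make the distinction concrete, a sketch of the two command line forms
(property names as used by QEMU; all other options omitted):
  before: -drive file=...,werror=stop,rerror=stop
  after:  -device virtio-blk-pci,...,werror=stop,rerror=stop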
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Rewrite the parts which already pass FDs via fdset or directly to use
the new infrastructure.
Apart from simpler code this also adds the appropriate names to the FDs
in the fdsets, which will allow us to properly remove the fdsets on
hot-unplug of chardevs; we didn't do that until now, which resulted in
leaking the FDs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Prefix the file descriptor name with the alias of the network device so
that it's similar to other upcoming uses.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Code paths which don't wish to use FD passing are supposed to not call
the function which sets up the chardev for FD passing.
This is ensured by calling it only in the host prepare step.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When the <loader> had an explicit readonly='no' attribute we
accidentally still marked the pflash as readonly due to a
bad conversion from virTristateBool to bool. This was missed
because the test cases run with no capabilities set and thus
validate the -drive approach for pflash configuration,
not the -blockdev approach.
This affected the following config:
  <os>
    <loader readonly='no' type='pflash'>/var/lib/libvirt/qemu/nvram/test-bios.fd</loader>
  </os>
For the sake of completeness, we also add a test XML config
with no readonly attribute at all, to demonstrate that the
default for pflash is intended to be r/w.
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The XML-to-XML test validates that we don't accidentally copy the
isa-debug <serial> into a <console>.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
This is a perfectly valid configuration that we need to keep
working, so add test coverage for it.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Currently, the memory device (def->mems) part of the command line is
generated before any controllers. In the majority of cases this doesn't
matter, because none of the memory devices live on a bus that's
created by an exposed controller (e.g. there's no DIMM
controller, at least not an exposed one). The exceptions are virtio-mem and
virtio-pmem, which do have a PCI address. And if it so happens
that the device goes onto a bus other than the default pci.0, starting such
a guest fails, because the controller that creates the desired bus
hasn't been processed yet; QEMU processes arguments in order.
For instance, if virtio-mem has an address with bus='0x01' QEMU
refuses to start with the following message:
Bus 'pci.1' not found
Similarly for virtio-pmem. I've successfully tested migration, and
changing the order does not affect the migration stream.
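As an illustration, a virtio-mem device configured along these lines
(sizes and slot are placeholders) previously failed to start with the
error above:
  <memory model='virtio-mem'>
    <target>
      <size unit='GiB'>4</size>
      <block unit='MiB'>2</block>
      <requested unit='GiB'>1</requested>
    </target>
    <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </memory>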
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2047271
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
There are some scenarios in which we want to preallocate guest
memory (e.g. when requested in the domain XML, when using hugepages,
etc.). With 'regular' <memory/> models (like 'dimm', 'nvdimm' or
'virtio-pmem') or regular guest memory it is the corresponding
memory-backend-* object that ends up with the .prealloc attribute
set. And that's desired, because none of those devices can
change their size on the fly. However, with the virtio-mem model things
are a bit different. While one can set the .prealloc attribute on the
corresponding memory-backend-* object, it doesn't make much sense,
because virtio-mem can inflate/deflate on the fly, i.e. change
how big a portion of the memory-backend-* object is exposed to
the guest. For instance, only half of a, say, 4GiB module may
be exposed to the guest. Therefore, it doesn't make much sense to
preallocate the whole 4GiB and keep it allocated. But we still want
the part exposed to the guest preallocated (when the conditions
described at the beginning are met).
Having said that, with new enough QEMU the virtio-mem-pci device
gained a new attribute ".prealloc" which instructs the device to
talk to the memory backend object and allocate only the requested
portion of memory.
Now that our algorithm for setting .prealloc was isolated in a
single function, that function can be called when constructing the
command line for the virtio-mem-pci device.
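The net effect on the generated command line is roughly the following
sketch (other device properties omitted):
  -device virtio-mem-pci,...,prealloc=on
instead of setting prealloc=on on the corresponding memory-backend-* object.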
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We need to use a hardcoded list of capabilities because we don't
yet have proper replies files obtained from QEMU running on actual
macOS machines.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Tested-by: Brad Laue <brad@brad-x.com>
Tested-by: Christophe Fergeau <cfergeau@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
When trying to attach a vhost-user-blk device to a virtual machine using
qemu < 4.2, libvirt would mistakenly add a scsi=off parameter, which is
not supported by qemu.
Fixes: https://gitlab.com/libvirt/libvirt/-/issues/265
Signed-off-by: shenjiatong <yshxxsjt715@gmail.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
With qemu versions prior to qemu-5.0 we'll format 'scsi=off' for
virtio-blk disks, but also for vhost-user-blk. This is a bug, as it's not
supported there.
Add a test case to show that the wrong configuration is generated, by
running the 'disk-vhostuser' test case on capabilities from qemu-4.2.
For this to be possible it's required to enable shared memory via NUMA
configuration, as old QEMUs don't allow configuration of the default
memory backend. This is achieved by adding a copy of the
'disk-vhostuser' XML with NUMA enabled.
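For reference, the wrong output looks roughly like this (a sketch; the
chardev id and the remaining properties are placeholders):
  -device vhost-user-blk-pci,chardev=chr-vu-virtio-disk0,scsi=off,...
where the 'scsi' property is not accepted by vhost-user-blk-pci.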
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
After previous cleanups, the virDomainNetDefParseXML() function
uses a mixture of virXMLProp*() and the old virXMLPropString() +
virXXXTypeFromString() patterns. Rework it so that virXMLProp*()
is used.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Changes in all 'ppc64-latest.args' files were needed due to the
JSONification of command line devices.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Since there's no capability to check now, we can simply move the
formatting of 'max_outputs' earlier.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Both the QXL video device and 'virtio' video device support
'max_outputs' in all qemu versions libvirt supports. This means we no
longer have to check the QEMU_CAPS_QXL_MAX_OUTPUTS and
QEMU_CAPS_VIRTIO_GPU_MAX_OUTPUTS capabilities.
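For context, 'max_outputs' is what libvirt derives from the 'heads'
attribute of the video model, e.g. a sketch like:
  <video>
    <model type='qxl' heads='2'/>
  </video>
ends up as max_outputs=2 on the QXL device.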
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>