Although nearly all host devices that are assigned to guests using
VFIO ("<hostdev>" devices in libvirt) are physically PCI Express
devices, until now libvirt's PCI address assignment has always
placed them at addresses on legacy PCI controllers in the guest, even
if the guest's machine type has a PCIe root bus (e.g. q35 and
aarch64/virt).
This patch tries to assign them to an address on a PCIe controller
instead, when appropriate. First we do some preliminary checks that
might allow setting the flags without doing any extra work, and if
those conditions aren't met (and if libvirt is running privileged so
that it has proper permissions), we perform the (relatively) time
consuming task of reading the device's PCI config to see if it is an
Express device. If this is successful, the connect flags are set based
on the result, but if we aren't able to read the PCI config (most
likely due to the device not being present on the system at the time
of the check) we assume it is (or will be) an Express device, since
that is almost always the case anyway.
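In condensed form, the logic described above looks roughly like this
(a sketch only; "addr" and "flags" are illustrative locals, and the
preliminary shortcuts and error handling are omitted):

    virPCIDevicePtr pci = virPCIDeviceNew(addr->domain, addr->bus,
                                          addr->slot, addr->function);
    if (!pci || virPCIDeviceIsPCIExpress(pci)) {
        /* unreadable config (device absent) => assume Express */
        flags = VIR_PCI_CONNECT_TYPE_PCIE_DEVICE;
    } else {
        flags = VIR_PCI_CONNECT_TYPE_PCI_DEVICE;
    }
    virPCIDeviceFree(pci);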
If libvirtd is running unprivileged, it can open a device's PCI config
data in sysfs, but can only read the first 64 bytes. But as part of
determining whether a device is Express or legacy PCI,
qemuDomainDeviceCalculatePCIConnectFlags() will be updated in a future
patch to call virPCIDeviceIsPCIExpress(), which tries to read beyond
the first 64 bytes of the PCI config data and fails with an error log
if the read is unsuccessful.
In order to avoid creating a parallel "quiet" version of
virPCIDeviceIsPCIExpress(), this patch passes a virQEMUDriverPtr down
through all the call chains that initialize the
qemuDomainFillDevicePCIConnectFlagsIterData, and saves the driver
pointer with the rest of the iterdata so that it can be used by
qemuDomainDeviceCalculatePCIConnectFlags(). This pointer isn't used
yet, but will be used in an upcoming patch (that detects Express vs
legacy PCI for VFIO assigned devices) to examine driver->privileged.
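The resulting iterdata carries the pointer alongside its existing
fields, roughly (field names other than "driver" are illustrative):

    typedef struct {
        virQEMUDriverPtr driver;   /* new: needed to check
                                    * driver->privileged later */
        virDomainDefPtr def;       /* illustrative existing field */
        virQEMUCapsPtr qemuCaps;   /* illustrative existing field */
    } qemuDomainFillDevicePCIConnectFlagsIterData;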
The path to the config file for a PCI device is conveniently stored
in a virPCIDevice object, but that object's contents aren't directly
visible outside of virpci.c, so we need an accessor function for
anyone who needs to look at it.
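The accessor is trivial, along these lines (assuming the internal
field is named "path"):

    const char *
    virPCIDeviceGetConfigPath(virPCIDevicePtr dev)
    {
        return dev->path;
    }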
This new function just calls fstat() (if provided with a valid fd) or
stat() (if fd is -1) and returns st_size (or -1 if there is an
error). We may decide we want this function to be more complex, and
handle things like block devices - this is a placeholder (that works)
for any more complicated function.
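In essence it boils down to something like this (a sketch matching
the description above; the exact name and argument order in the tree
may differ):

    off_t
    virFileLength(int fd, const char *path)
    {
        struct stat s;

        if (fd >= 0) {
            if (fstat(fd, &s) < 0)
                return -1;
        } else if (stat(path, &s) < 0) {
            return -1;
        }

        /* only regular files have a meaningful st_size for us */
        if (!S_ISREG(s.st_mode))
            return -1;

        return s.st_size;
    }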
Reset the minor number to 0 for the first build of a new year.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
We can't change feature names for compatibility reasons even if they
contain typos or other software uses different names for the same
features. By adding alternative spellings in our CPU map we at least
allow anyone to grep for them and find the correct libvirt name.
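For example (hypothetical entry; the exact comment wording in
cpu_map.xml may differ):

    <!-- sse4.1 is also spelled sse4-1 (QEMU) and sse4_1 (Linux) -->
    <feature name='sse4.1'>
      ...
    </feature>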
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When virt-aa-helper parses XML content it can fail on security labels.
It fails because the seclabels are parsed as active domain content,
which requires fields that are not yet filled in at that point.
Testcase with virt-aa-helper on a minimal xml:
$ cat << EOF > /tmp/test.xml
<domain type='kvm'>
<name>test-seclabel</name>
<uuid>12345678-9abc-def1-2345-6789abcdef00</uuid>
<memory unit='KiB'>1</memory>
<os><type arch='x86_64'>hvm</type></os>
<seclabel type='dynamic' model='apparmor' relabel='yes'/>
<seclabel type='dynamic' model='dac' relabel='yes'/>
</domain>
EOF
$ /usr/lib/libvirt/virt-aa-helper -d -r -p 0 \
-u libvirt-12345678-9abc-def1-2345-6789abcdef00 < /tmp/test.xml
Current Result:
virt-aa-helper: error: could not parse XML
virt-aa-helper: error: could not get VM definition
Expected Result is a valid apparmor profile
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Signed-off-by: Guido Günther <agx@sigxcpu.org>
Only the latest APIs are fully documented and the documentation of the
older variants (which are just limited versions of the new APIs anyway)
points to the newest APIs.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When trying to install libvirtd from sources I've noticed the
following failure:
/usr/bin/install: cannot stat 'virt-guest-shutdown.target': No such file or directory
Makefile:2792: recipe for target 'install-init-systemd' failed
make[3]: *** [install-init-systemd] Error 1
make[3]: *** Waiting for unfinished jobs....
The problem is that while other files around that location in the
Makefile are first generated into the builddir and only then
installed, the virt-guest-shutdown.target file is not generated at
all and should be installed from the srcdir.
This was introduced in 01079727.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Restarting libvirtd on the source host at the end of migration when a
domain is already running on the destination would cause image labels to
be reset, effectively killing the domain. Commit e8d0166e1d fixed a similar
issue on the destination host, but kept the source always resetting the
labels, which was mostly correct except for the specific case handled by
this patch.
https://bugzilla.redhat.com/show_bug.cgi?id=1343858
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Post-copy migration needs bi-directional communication between the
source and the destination QEMU processes, which is not supported by
tunnelled migration.
https://bugzilla.redhat.com/show_bug.cgi?id=1371358
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We had a lot of repeated rados_conf_set calls and result checks.
Use the new helper virStorageBackendRBDRADOSConfSet for them.
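The helper concentrates the call and the error reporting in one
place, roughly:

    static int
    virStorageBackendRBDRADOSConfSet(rados_t cluster,
                                     const char *option,
                                     const char *value)
    {
        VIR_DEBUG("Setting RADOS option '%s' to '%s'", option, value);
        if (rados_conf_set(cluster, option, value) < 0) {
            virReportError(VIR_ERR_INTERNAL_ERROR,
                           _("failed to set RADOS option: %s"), option);
            return -1;
        }
        return 0;
    }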
Signed-off-by: Chen Hanxiao <chenhanxiao@gmail.com>
Thanks to the complex capability caching code, virQEMUCapsProbeQMP was
never called when we were starting a new qemu VM. On the other hand,
when we are reconnecting to the qemu process we reload the capability
list from the status XML file. This means that the flag preventing the
function being called was not set and thus we partially reprobed some of
the capabilities.
The recent addition of CPU hotplug clears the
QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS capability if the machine does not
support it. The partial re-probe on reconnect results in attempting
to call the unsupported command and then killing the VM.
Remove the partial reprobe and depend on the stored capabilities. If
it becomes necessary to reprobe the capabilities in the future, we
should do a full reprobe rather than this partial one.
QEMU 2.8.0 adds support for unavailable-features in the
query-cpu-definitions reply. The unavailable-features array lists CPU
features which prevent a corresponding CPU model from being usable on
the current host. The model can only be used when all the listed
features are disabled. An empty array means the CPU model can be used
without modifications.
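A shortened example reply (the feature names are illustrative):

    { "return": [
        { "name": "Skylake-Client", "unavailable-features": [] },
        { "name": "phenom",
          "unavailable-features": [ "3dnow", "3dnowext" ] }
    ] }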
We can use unavailable-features for providing CPU model usability info
in domain capabilities XML:
<domainCapabilities>
...
<cpu>
<mode name='host-passthrough' supported='yes'/>
<mode name='host-model' supported='yes'>
<model fallback='allow'>Skylake-Client</model>
...
</mode>
<mode name='custom' supported='yes'>
<model usable='yes'>qemu64</model>
<model usable='yes'>qemu32</model>
<model usable='no'>phenom</model>
<model usable='yes'>pentium3</model>
<model usable='yes'>pentium2</model>
<model usable='yes'>pentium</model>
<model usable='yes'>n270</model>
<model usable='yes'>kvm64</model>
<model usable='yes'>kvm32</model>
<model usable='yes'>coreduo</model>
<model usable='yes'>core2duo</model>
<model usable='no'>athlon</model>
<model usable='yes'>Westmere</model>
<model usable='yes'>Skylake-Client</model>
...
</mode>
</cpu>
...
</domainCapabilities>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
"host" CPU model is supported by a special host-passthrough CPU mode and
users are not allowed to specify this model directly with custom
mode. Thus we should not advertise the "host" CPU model in domain
capabilities.
This worked well on architectures for which libvirt provides a list of
supported CPU models in cpu_map.xml (since "host" is not in the list).
But we need to explicitly filter "host" model out for all other
architectures.
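The filter itself can be as simple as (a sketch; "model" is an
illustrative loop variable and the actual placement may differ):

    /* skip the special "host" model on architectures without
     * a cpu_map-based list of supported models */
    if (STREQ(model->name, "host"))
        continue;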
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
CPU models (and especially some additional details which we will start
probing for later) differ depending on the accelerator. Thus we need to
call query-cpu-definitions in both KVM and TCG mode to get all data we
want.
Tests in tests/domaincapstest.c are temporarily switched to TCG to avoid
having to squash even more stuff into this single patch. They will all
be switched back later in separate commits.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This patch moves the CPU models formatting code from
virQEMUCapsFormatCache into a separate function.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The function just returned cached capabilities without checking whether
they are still valid. We should check that and refresh the capabilities
to make sure we don't return stale data. In other words, we should do
what all other lookup functions do.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The function is made a little bit more readable, and the code which
refreshes cached capabilities when they are no longer valid is moved
into a separate function (virQEMUCapsCacheValidate) so that it can be
reused in other places.
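Conceptually the extracted helper has a shape like this (hypothetical
signature; the real argument list may differ):

    static int
    virQEMUCapsCacheValidate(virQEMUCapsCachePtr cache,
                             const char *binary,
                             virCapsPtr caps,
                             virQEMUCapsPtr *qemuCaps);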
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If a user asked for KVM domain capabilities when KVM is not
available, we would happily return data we got when probing through
TCG and pretend they were relevant for KVM. Let's just report that
KVM is not supported to avoid confusion.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When domain capabilities were introduced we did not have enough data to
decide whether KVM works on the host or not and thus working legacy/VFIO
device assignment was used as a witness. Now that we know whether KVM
was enabled when probing QEMU capabilities (and thus we know it's
working), we can use this knowledge to provide a better default value
for virttype.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Since some capabilities may depend on the accelerator used when
probing QEMU, the cache becomes invalid when KVM becomes available or
when it is no longer available.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
CPU-related capabilities may differ depending on the accelerator used when
probing. Let's use KVM if available when probing QEMU and fall back to
TCG. The created capabilities already contain all we need to distinguish
whether KVM or TCG was used:
- KVM was used when probing capabilities:
QEMU_CAPS_KVM is set
QEMU_CAPS_ENABLE_KVM is not set
- TCG was used and QEMU supports KVM, but it failed (e.g., missing
kernel module or wrong /dev/kvm permissions)
QEMU_CAPS_KVM is not set
QEMU_CAPS_ENABLE_KVM is set
- KVM was not used and QEMU does not support it
QEMU_CAPS_KVM is not set
QEMU_CAPS_ENABLE_KVM is not set
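Consumers can therefore tell the three cases apart with two flag
checks, e.g.:

    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) {
        /* probing was done with KVM enabled */
    } else if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_ENABLE_KVM)) {
        /* QEMU supports KVM, but probing fell back to TCG */
    } else {
        /* QEMU has no KVM support at all */
    }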
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When starting QEMU more than once during a single probing process,
the qemucapsprobe utility would save the QMP greeting several times,
which doesn't play well with our test monitor.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Let's set QEMU_CAPS_KVM and QEMU_CAPS_ENABLE_KVM early so that the rest
of the probing code can use these capabilities to handle KVM/TCG replies
differently.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Using -machine instead of -M for QMP probing is safe because any QEMU
binary which is capable of QMP probing supports -machine.
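The probing command line thus becomes something like (abbreviated;
the socket path is illustrative):

    $ qemu-system-x86_64 -S -no-user-config -nodefaults -nographic \
        -machine none -qmp unix:/path/to/monitor.sock,server,nowait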
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The code that runs a new QEMU process to be used for probing
capabilities is separated into four reusable functions so that any code
that wants to probe a QEMU process may just follow a few simple steps:
cmd = virQEMUCapsInitQMPCommandNew(...);
virQEMUCapsInitQMPCommandRun(cmd);
/* talk to the running QEMU process using its QMP monitor */
if (reprobeIsRequired) {
virQEMUCapsInitQMPCommandAbort(cmd, ...);
virQEMUCapsInitQMPCommandRun(cmd);
/* talk to the running QEMU process again */
}
virQEMUCapsInitQMPCommandFree(cmd);
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>