In commit c43718ef67 I added a disclaimer that the new stats which
are fetched from qemu and passed directly to the user are not
guaranteed by libvirt. I didn't notice that per-vcpu
hypervisor-specific stats had also snuck into the
VIR_DOMAIN_STATS_VCPU group along with other pre-existing stats we
do guarantee.
Extend the disclaimer to cover VIR_DOMAIN_STATS_VCPU too.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Mention 'virt-qemu-sev-validate', SGX EPC, vTPM migration, cpu flag
additions and other notable changes in this release.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Guests are allowed to change their MAC addresses. When that
happens, we may need to tweak the part of the host side
configuration that depends on it. In this particular case: QoS.
Some parts of QoS are in fact set on the corresponding bridge,
where an overall view of the traffic can be seen. Here, TC filters
are used to place incoming packets into qdiscs. These filters
match on source MAC address. Therefore, when a guest changes its
MAC address, the corresponding TC filter needs to be updated too.
This is done by simply removing the old one and instantiating a
new one with the new MAC address.
Now, u32 filters (which we use) internally use a hash table for
matching. And when deleting the old filter, we used to remove the
whole hash table (ID = 800::) and let the new filter instantiate a
new one. This used to work, until kernel release 4.20 (specifically
commit v4.20-rc1~27^2~131^2~11 and its friends), where this
practice was turned into an error.
But that's okay - we can delete the specific filter we are after
and not touch the hash table at all.
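For illustration, instead of removing the whole hash table, which
newer kernels reject:
  tc filter del dev virbr0 parent ffff: protocol all prio 2 handle 800:: u32
we can delete just the one filter matching the old MAC address (the
device name and handle values here are illustrative):
  tc filter del dev virbr0 parent ffff: protocol all prio 2 handle 800::800 u32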
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
When setting up QoS for a domain <interface/>, or when reporting
its statistics, we may need to swap TX/RX values. This is all
explained in the comment to virDomainNetTypeSharesHostView().
However, this function claims that VIR_DOMAIN_NET_TYPE_ETHERNET
also shares the 'host view', meaning the TX/RX values must be
swapped. But that's not true.
An easy reproducer is to start a domain with two <interface/>-s:
one of type network, the other of type ethernet, and configure the
same <bandwidth/> for both. The reversed setting can then be
observed (e.g. via tc).
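For instance, one can compare the values the host actually set up
for each interface (the interface names are illustrative):
  tc class show dev vnet0   # type network: matches <bandwidth/>
  tc class show dev vnet1   # type ethernet: in/out reversed before this fix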
Reported-by: Oleg Vasilev <oleg.vasilev@virtuozzo.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
We already report whether the iSCSI backend was enabled at compile
time, but we don't do the same for the iSCSI-direct backend.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
When displaying the long version (virsh -V), the 'Virtuozzo
Storage' substring lacks a leading space and thus produces awful
output.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Our RPC calls can be divided into two groups: regular and high
priority. The latter can then be processed by so called high
priority worker threads. This is our way of defeating a
'deadlock' and allowing some RPCs to be processed even when all
(regular) worker threads are stuck. For instance: if all regular
worker threads get stuck talking to QEMU on the monitor,
virDomainDestroy() can be processed by a high priority worker
thread and thus unstick those threads.
Now, this is all fine, except if users want to use virsh
non-interactively:
virsh destroy $dom
This does a bit more - it needs to open a connection. And that
consists of multiple RPC calls: AUTH_LIST,
CONNECT_SUPPORTS_FEATURE, CONNECT_OPEN, and finally
CONNECT_REGISTER_CLOSE_CALLBACK. All of them are marked as high
priority except the last one. Therefore, virsh just sits there
with a partially open connection.
There's one requirement for high priority calls though: they
cannot get stuck. Hopefully, the reason is obvious by now. And
looking at the server side implementation, the
CONNECT_REGISTER_CLOSE_CALLBACK processing can't ever get stuck.
The only driver that implements the callback for the public API is
Parallels (vz). And that can't really block.
And for virConnectUnregisterCloseCallback() it's the same story.
Therefore, both can be marked as high priority.
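For reference, the priority of an RPC is driven by annotations in
src/remote/remote_protocol.x, so the change boils down to a marker
of this form (a sketch; the other annotation fields are elided):
  /**
   * @priority: high
   */
  REMOTE_PROC_CONNECT_REGISTER_CLOSE_CALLBACK = ...,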
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2143840
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The setting is needed for the Windows driver to work properly and
doesn't have negative effects on other usage.
Signed-off-by: Lukas Ke <nicelukas@hotmail.com>
Use the same style in the 'struct option' as:
struct option opt[] = {
{ a, b },
{ a, b },
...
{ a, b },
};
Signed-off-by: Jiang Jiacheng <jiangjiacheng@huawei.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
For the handling of USB we already allow plenty of read access,
but so far /sys/bus/usb/devices only needed read access to the
directory to enumerate the symlinks in there that point to the
actual entries via relative links to ../../../devices/.
But in more recent systemd with updated libraries a program might
do getattr calls on those symlinks. And while symlinks in AppArmor
usually do not matter, as it is the effective target of an access
that has to be allowed, here the getattr calls are on the links
themselves.
With USB hostdev usage that causes a set of denials like:
apparmor="DENIED" operation="getattr" class="file"
name="/sys/bus/usb/devices/usb1" comm="qemu-system-x86"
requested_mask="r" denied_mask="r" ...
It is safe to read the links, therefore add a rule allowing that
to the block of rules that covers the USB related access.
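A sketch of the kind of rule this amounts to (the exact path
pattern in the profile may differ):
  # allow getattr on the usb device symlinks themselves
  /sys/bus/usb/devices/* r,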
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
When there is no vIOMMU, vfio devices don't need to lock the
entire guest memory per device, but they still need to lock the
entire guest memory once, shared between all vfio devices. This
memory accounting is not shared with vDPA devices, so it should be
added to the memlock limit separately.
Commit 8d5704e2 added support for multiple vfio/vdpa devices but
calculated the limits incorrectly when there were both vdpa and
vfio devices and no vIOMMU. In this case, the memory lock limit
was not increased separately for the vfio devices.
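Roughly, the intended accounting is (a sketch only; libvirt also
adds a fixed amount of headroom per locked mapping):
  with vIOMMU:    memlock = (nvfio + nvdpa) * (guestMem + headroom)
  without vIOMMU: memlock = (nvdpa + (nvfio ? 1 : 0)) * (guestMem + headroom)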
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2143838
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
When post-copy migration is in the Finish phase we have already
done everything needed and we're just waiting for all the memory
to transfer to the destination. The domain is already running
there at this point. Once all data is transferred (QEMU sends a
MIGRATION completed event) we're done. So in this specific
post-copy case the source does not need to care about the result
of the Finish call as long as QEMU says the migration completed.
The Finish call to the destination daemon may fail for reasons
that do not affect QEMU, e.g., the libvirt daemon was restarted
there or the libvirt connection broke.
Currently we just mark the post-copy migration as failed on the
source and keep the domain paused there. But when the libvirt
daemon is restarted at this point, it will detect the migration
finished successfully and kill the domain as migrated. It makes
sense to do this even without having to restart the daemon.
Closes: https://gitlab.com/libvirt/libvirt/-/issues/338
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
We need the restored job even in case the migration already
finished, even though we will stop it just a few lines below, as
the functions we call in between require an existing migration
job.
This fixes a crash on reconnect when post-copy migration finished while
the daemon was not running.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
On 32-bit arches, it's possible to request not only
-D_FILE_OFFSET_BITS=64 (which is always done with meson) but also
-D_TIME_BITS=64. With glibc, both of these affect which variant of
stat() or lstat() is called. With 64-bit time it's
__stat64_time64() or __lstat64_time64(), respectively.
Fortunately, no other variant (__xstat(), __xstat64()) has a
_time64 alternative, and thus none needs similar treatment.
Similarly, musl is not affected by this.
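The mocks intercept these calls via symbol interposition, along
these lines (a minimal sketch; virMockRealPath() is a hypothetical
stand-in for the mock's actual path-handling logic):
  #include <dlfcn.h>

  int
  __stat64_time64(const char *path, void *sb)
  {
      static int (*real)(const char *, void *);

      if (!real)
          real = (int (*)(const char *, void *))
              dlsym(RTLD_NEXT, "__stat64_time64");
      /* virMockRealPath() is hypothetical */
      return real(virMockRealPath(path), sb);
  }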
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/404
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Nothing in virnettlscontexttest nor virnettlssessiontest calls
any of the random number generator functions overridden by
virrandommock. GnuTLS handles RNG within itself.
Therefore, there's no need to preload the mock.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
These format strings were left unchanged when converting 'unsigned
long' to 'unsigned long long', which caused compile warnings.
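For example (illustrative):
  unsigned long long val = 0;
  printf("%lu\n", val);   /* warning: format '%lu' expects 'unsigned long' */
  printf("%llu\n", val);  /* correct for 'unsigned long long' */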
Signed-off-by: Jiang Jiacheng <jiangjiacheng@huawei.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Just adds a tool to the applications list. This tool helps manage
multiple VMs at once using the Python binding.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Cédric Bosdonnat <cbosdonnat@suse.com>
If the running firewalld doesn't support getPolicies() then we
fall back to the "libvirt" zone. Logging an error is excessive
since we gracefully fall back.
Avoids these logs:
error : virGDBusCallMethod:242 : error from service: \
GDBus.Error:org.freedesktop.DBus.Error.UnknownMethod
Fixes: ab56f84976 ("util: add virFirewallDGetPolicies()")
Signed-off-by: Eric Garver <eric@garver.life>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Register virDomainMemoryDefFree() to do the cleanup.
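With GLib's auto-cleanup machinery (which libvirt uses) this boils
down to a declaration of the form:
  G_DEFINE_AUTOPTR_CLEANUP_FUNC(virDomainMemoryDef, virDomainMemoryDefFree);
after which g_autoptr(virDomainMemoryDef) variables are freed
automatically when they go out of scope.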
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
This is a new allocator for the virDomainMemoryDef struct which
also sets some default values: @model and @targetNode.
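A rough sketch of such an allocator (the exact signature and
defaults are assumed):
  virDomainMemoryDef *
  virDomainMemoryDefNew(virDomainMemoryModel model)
  {
      virDomainMemoryDef *def = g_new0(virDomainMemoryDef, 1);

      def->model = model;
      def->targetNode = -1; /* i.e. no target NUMA node selected */
      return def;
  }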
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
The idea here is that virVMXConfigScanResultsCollector() sets
networks_max_index to the highest ethernet index seen. Well, the
struct member is a signed int, we parse the just seen index into
an uint and then typecast it to compare the two. This is not
necessary, because the maximum number of NICs a vSphere domain can
have is (<drumroll/>): ten [1]. This fits into a signed int easily
anywhere.
1: https://configmax.esp.vmware.com/guest?vmwareproduct=vSphere&release=vSphere%208.0&categories=1-0
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
Now that we have STRCASESKIP() there's no need to open-code it.
Convert virVMXConfigScanResultsCollector() so that it uses this
new macro.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
There is so far one case where the STRCASEPREFIX(a, b) && a +
strlen(b) combo is used (in virVMXConfigScanResultsCollector()),
but there will be more. Do what we usually do: introduce a macro.
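A minimal sketch of the macro, mirroring the existing STRSKIP()
(the actual definition may differ):
  #define STRCASESKIP(a, b) \
      (STRCASEPREFIX(a, b) ? (a) + strlen(b) : NULL)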
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
We document use of our STR*() macros, but somehow missed
STRCASEPREFIX() and STRSKIP().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
We require a space after a comma and even document this in our
coding style document. However, our own rule is broken in the
very same document when listing string comparison macros.
Separate macro arguments properly.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
Avocado 99.0 causes the TCK test suite to fail with the nwfilter
tests (which use another, Bash-based framework underneath). Until
the culprit is identified and fixed in Avocado, let's lock the
version to 98.0 which worked with the test suite just fine.
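For illustration, the pin amounts to a requirement line of the
form:
  avocado-framework==98.0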
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Despite efforts to make the virt-qemu-sev-validate tool friendly, it is
a certainty that almost everyone who tries it will hit false negative
results, getting a failure despite the VM being trustworthy.
Diagnosing these problems is no easy matter, especially for those not
familiar with SEV/SEV-ES in general. This extra docs text attempts to
set out a checklist of items to look at to identify what went wrong.
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
In general we expect to be able to construct a SEV-ES VMSA
blob from knowledge about the AMD architectural CPU register
defaults, KVM setup and QEMU setup. If any of this unexpectedly
changes, figuring out what's wrong could be horrible. This
systemtap script demonstrates how to capture the real VMSA
that is used for a SEV-ES guest as it is booted. The captured data
can be fed into the 'sevctl vmsa show' command in order to
produce formatted info with named registers, allowing a
'diff' to be performed.
This script will need updating for any kernel version other than
6.0, to set the correct line numbers.
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Expand the SEV guest kbase guide with information about how to configure
a SEV/SEV-ES guest when attestation is required, and mention the use of
virt-qemu-sev-validate as a way to confirm it.
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
It is possible to build OVMF for SEV with an embedded Grub that can
fetch LUKS disk secrets. This adds support for injecting secrets in
the required format.
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When validating a SEV-ES guest, we need to know the CPU count and VMSA
state. We can get the CPU count directly from libvirt's guest info. The
VMSA state can be constructed automatically if we query the CPU SKU from
host capabilities XML. Neither of these is secure, however, so this
behaviour is restricted.
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The VMSA files contain the expected CPU register state for the VM.
Their content varies based on a few pieces of the stack:
- AMD CPU architectural initial state
- KVM hypervisor VM CPU initialization
- QEMU userspace VM CPU initialization
- AMD CPU SKU (family/model/stepping)
The first three pieces of information we can obtain through code
inspection. The last piece of information we can take on the command
line. This allows a user to validate a SEV-ES guest merely by providing
the CPU SKU information, using --cpu-family, --cpu-model,
--cpu-stepping. This avoids the need to obtain or construct VMSA files
directly.
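An illustrative invocation (the SKU values are examples only;
other required options are omitted):
  virt-qemu-sev-validate \
      --cpu-family 23 \
      --cpu-model 49 \
      --cpu-stepping 0 \
      ...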
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
With the SEV-ES policy the VMSA state of each vCPU must be
included in the measured data. The VMSA state can be generated
using the 'sevctl' tool, by telling it a QEMU VMSA is required,
and passing the hypervisor's CPU SKU (family, model, stepping).
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When connected to libvirt we can validate that the guest
configuration has the kernel hashes property enabled; otherwise,
the kernel GUID table we include in the expected measurement is
unlikely to match the actual measurement.
When running locally we can also automatically detect the kernel/initrd
paths, along with the cmdline string from the XML.
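For reference, the guest configuration bit being validated is of
this form in the domain XML:
  <launchSecurity type='sev' kernelHashes='yes'>
    ...
  </launchSecurity>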
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When doing direct kernel boot we need to include the kernel, initrd and
cmdline in the measurement.
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Accept information about a connection to libvirt and a guest on the
command line. Talk to libvirt to obtain the running guest state and
automatically detect as much configuration as possible.
It will refuse to use a libvirt connection that is thought to be local
to the current machine, as running this tool on the hypervisor itself is
not considered secure. This can be overridden using the --insecure flag.
When querying the guest, it will also analyse the XML configuration in
an attempt to detect any options that are liable to be mistakes. For
example the NVRAM being measured should not have a persistent varstore.
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The virt-qemu-sev-validate program will compare a reported SEV/SEV-ES
domain launch measurement, to a computed launch measurement. This
determines whether the domain has been tampered with during launch.
This initial implementation requires all inputs to be provided
explicitly, and as such can run completely offline, without any
connection to libvirt.
The tool is placed in the libvirt-client-qemu sub-RPM since it is
specific to the QEMU driver.
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
This function is fine to use in other languages.
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When generating the main guest memory, memory-backend-* might be
used. This means we may need to generate thread-context objects
too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
When generating memory for memory devices, memory-backend-* might
be used. This means we may need to generate thread-context objects
too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
When generating memory for guest NUMA nodes, memory-backend-*
might be used. This means we may need to generate thread-context
objects too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
While technically thread-context objects can be reused, we only
use them (well, will use them) to pin memory allocation threads.
Therefore, once we connect to the QEMU monitor, all memory (with
prealloc=yes) has been allocated and thus these objects are no
longer needed and can be removed. For on-demand allocation the TC
object is left behind.
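For illustration, the removal amounts to a QMP command of this
form (the id matches the example used elsewhere in this series):
  { "execute": "object-del", "arguments": { "id": "tc-ram-node0" } }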
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
The aim of a thread-context object is to set affinity on threads
that allocate memory for a memory-backend-* object. For instance:
-object '{"qom-type":"thread-context","id":"tc-ram-node0","node-affinity":[3]}' \
-object '{"qom-type":"memory-backend-memfd","id":"ram-node0","hugetlb":true,\
"hugetlbsize":2097152,"share":true,"prealloc":true,"prealloc-threads":8,\
"size":15032385536,"host-nodes":[3],"policy":"preferred",\
"prealloc-context":"tc-ram-node0"}' \
allocates 14GiB worth of memory, backed by 2MiB hugepages from
host NUMA node 3, using 8 threads. If it weren't for the
thread-context object these threads wouldn't have any affinity and
thus could theoretically be scheduled to run on CPUs of a
different NUMA node (which is what I saw occasionally).
Therefore, whenever we are pinning memory (IOW setting the
host-nodes attribute), we can generate a thread-context object
with the same affinity.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
In its commit v7.1.0-1429-g7208429223 QEMU gained a new
thread-context object, which allows running specialized tasks with
affinity set to a given subset of host CPUs/NUMA nodes. Even
though only the memory allocation task accepts this new object,
that is exactly what we aim to use it for in libvirt. Therefore,
introduce a new capability to track whether QEMU supports this
object.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
On aarch64 the 'id' file is not present for CPU cache information
in sysfs. This causes the local stateful hypervisor drivers to
fail to initialize capabilities:
virStateInitialize:657 : Initialisation of cloud-hypervisor state driver failed: no error
The 'no error' is because the 'virFileReadValueNNN' methods return
ret==-2, with no error raised, when the requested file does not
exist. None of the callers were checking for this scenario when
populating capabilities. The most graceful way to handle this is
to skip the cache bank in question. This fixes the failure to
launch libvirt drivers on certain aarch64 hardware.
Fixes: https://gitlab.com/libvirt/libvirt/-/issues/389
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>