To ensure that we can hot-unplug the disk including the associated fdset
we need to store the fdset ID in the status XML.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
Rollback of FD sets passed to qemu is also needed after a possible restart
of libvirtd, when we need to serialize the data into the status XML. For this
purpose we need to access the fdset ID once it has been passed to qemu and
potentially re-create a 'qemuFDPass' struct in the passed state.
Introduce 'qemuFDPassNewPassed' and 'qemuFDPassIsPassed'.
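For reference, a minimal sketch of the intended shapes; these prototypes are
illustrative only and the authoritative declarations live in the internal
qemu_fd helpers:

  /* re-create a qemuFDPass that is already in the 'passed' state from the
   * fdset ID recorded in the status XML (illustrative prototype) */
  qemuFDPass *qemuFDPassNewPassed(unsigned int fdSetID);

  /* report whether the FDs were already passed to qemu and, if so,
   * optionally return the fdset ID (illustrative prototype) */
  bool qemuFDPassIsPassed(qemuFDPass *fdpass, unsigned int *id);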
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
Until now the code didn't expect that we'd want to rollback/detach an FD
passed on the commandline, but with disk backend FD passing this can
happen.
Properly mark the 'qemuFDPass' object as passed to qemu even when it was
done on the commandline.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
Copy the pointer to qemuFDPass into struct qemuBlockStorageSourceAttachData
so that it can be used from qemuBuildBlockStorageSourceAttachDataCommandline
rather than looping again in qemuBuildDiskSourceCommandLineFDs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
Be consistent with the naming scheme of the other child element buffer variables.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
The iTCO watchdog has been part of the q35 machine type since its inception;
we just did not add it implicitly.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2137346
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
In order for the iTCO watchdog to be operational we must disable the
noreboot pin strap in qemu. This is the default starting with the 8.0
machine types, but it is desirable for older ones as well, and we can
safely do that since the setting is not guest-visible.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
This is already possible with qemu, and it is actually already happening with
q35 machines that have a watchdog specified, since q35 already includes a
watchdog that we do not include in the XML. In order to express such a
configuration, multiple watchdogs need to be supported.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The g_hash_table_unref() function does not accept NULL; passing
NULL triggers a GLib warning. Unref the hash table only when it
is not NULL.
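A minimal sketch of the guard (assuming a GHashTable pointer named 'table'
that may legitimately be NULL; glib.h assumed):

  if (table)
      g_hash_table_unref(table);

  /* equivalently, GLib's helper performs the NULL check itself: */
  g_clear_pointer(&table, g_hash_table_unref);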
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Support the virtio-crypto device, along with the following cryptodev types:
- builtin
- lkcf
Finally, we can launch a QEMU VM with one or more crypto devices via
libvirt.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Changes in this commit:
- docs: formatdomaincaps.rst
- conf: crypto related domain caps
- qemu: crypto related
- tests: crypto related test
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Introduce crypto device like:
<crypto model='virtio' type='qemu'>
  <backend model='builtin' queues='1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
</crypto>
<crypto model='virtio' type='qemu'>
  <backend model='lkcf'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
</crypto>
Currently, the crypto model supports virtio only and the type supports qemu
only (vhost-user is planned). For the qemu type, the backend supports the
models builtin/lkcf, and the queues attribute is optional.
Changes in this commit:
- docs: formatdomain.rst
- schemas: domaincommon.rng
- conf: crypto related domain conf
- qemu: crypto related
- tests: crypto related test
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The field is no longer used so we can remove it and the code filling it.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
All callers pass 'false' so we no longer need it.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
Commit b7798a07f9 (in fall of 2016) changed the way we generate aliases
for 'dimm' memory devices as the alias itself is part of the migration
stream section naming and thus must be treated as ABI.
The code added a compatibility layer for VMs with memory hotplug started
with the old scheme to prevent generating wrong aliases. The
compatibility layer later broke, though, when 'nvdimm' and 'pmem' devices
were introduced, as it wrongly detected them as an old configuration.
Now, rather than attempting to fix the legacy compat layer to treat other
devices properly, we're better off simply removing it, as it's
extremely unlikely that somebody has a VM started in 2016 running with
today's libvirt and attempts to hotplug more memory.
This fixes a corner case where a user hot-adds a 'dimm' into a VM with a
'dimm' and an 'nvdimm' after a restart of libvirtd and then attempts to
migrate the VM.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2158701
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
In the error case, unref event->vm instead of vm. This makes it
easier for the reader to understand, as it is the event struct
that's holding the reference.
Signed-off-by: Shaleen Bathla <shaleen.bathla@oracle.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
We have the virDomainGetCPUStats() API which offers querying
statistics on host CPU usage by a given guest. It works in two
modes: getting overall stats (@start_cpu == -1, @ncpus == 1) or
getting per-host-CPU usage.
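A minimal client-side sketch of the two modes, assuming 'dom' is a connected
virDomainPtr; error handling is omitted and the sizing calls follow the
documented typed-parameter contract:

  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  /* overall stats: start_cpu == -1, ncpus == 1 */
  int nparams = virDomainGetCPUStats(dom, NULL, 0, -1, 1, 0);
  virTypedParameterPtr params = calloc(nparams, sizeof(*params));
  virDomainGetCPUStats(dom, params, nparams, -1, 1, 0);

  /* per-host-CPU stats: start_cpu >= 0, ncpus == number of CPUs queried */
  int percpu_nparams = virDomainGetCPUStats(dom, NULL, 0, 0, 1, 0);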
For the QEMU driver it is implemented by looking into values
stored in the corresponding cpuacct CGroup controller. Well, this
works for system instances, where libvirt has permissions to
create CGroups and place the QEMU process into them. But it does not
fly for session connections, where no CGroups are set up.
Fortunately, we can do something similar to v8.8.0-rc1~95 and use
virProcessGetStatInfo() to fill in the overall stats. Unfortunately,
I haven't found any source of per-host-CPU usage, so we just
continue throwing an error in that case.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
Firstly, virProcessGetStatInfo() does not really fail. But
even if it did, it sets a correct errno only sometimes (and even
that happens in a helper it calls - virProcessGetStat() - and
even there only in very few error paths).
Therefore, using virReportSystemError() to report errors is very
misleading. Use plain virReportError() instead. Luckily, there
are only two places where the former was used:
chDomainHelperGetVcpus() and qemuDomainHelperGetVcpus() (not a
big surprise since the CH driver is heavily inspired by the QEMU driver).
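A minimal before/after sketch using libvirt's internal reporting macros; the
message text is illustrative, not the actual string used in the drivers:

  /* before: misleading, since errno rarely carries useful information here */
  virReportSystemError(errno, "%s",
                       _("cannot get process CPU stats"));

  /* after: a plain error report without the errno decoration */
  virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                 _("cannot get process CPU stats"));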
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
In a recent commit, v9.0.0-rc1~192, I tried to forbid the case
where a TAP device already exists but is at the same time
managed by Libvirt (<interface type='ethernet'> <target
dev='tap0' managed='yes'/> </interface>). NB, if the @managed
attribute is missing then the device is assumed to be managed by Libvirt.
Anyway, I mistakenly put the setting of the
VIR_NETDEV_TAP_CREATE_ALLOW_EXISTING flag into the managed='yes'
branch instead of the managed='no' branch in
qemuInterfaceEthernetConnect().
Move the setting of the flag into the correct branch.
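A minimal sketch of the corrected branch; field and variable names are
illustrative rather than the exact ones used in qemuInterfaceEthernetConnect():

  if (net->managed_tap == VIR_TRISTATE_BOOL_NO) {
      /* the TAP device is pre-created and managed outside of libvirt,
       * so an already existing device is acceptable */
      tap_create_flags |= VIR_NETDEV_TAP_CREATE_ALLOW_EXISTING;
  }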
Fixes: a2ae3d299c
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
The systemd service files of the qemu and libxl driver currently have a
'Requires' dependency on virtlockd, which is too strong since virtlockd
is not enabled by default in either driver. Change the dependency to a
'Wants' to avoid a package dependency between the driver subpackages and
the new libvirt-daemon-lock subpackage.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
This commented-out option was pointed out by jtomko during review, but
I missed taking it out when addressing his comments.
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
This attribute was added to support setting the --interface option for
passt, but in a post-push/pre-9.0-release review, danpb pointed out
that it would be better to use the existing <source dev='xxx'/>
attribute to set --interface rather than creating a new attribute (in
the wrong place). So we remove backend/upstream, and change the passt
commandline creation to grab the name for --interface from source/dev.
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Currently, the ThreadContext object is generated whenever we see
the .host-nodes attribute for a memory-backend-* object. The idea was
that when the backend is pinned to a specific set of host NUMA
nodes, the allocation could be happening on CPUs from those
nodes too. But this may not always be possible.
Users might configure their guests in such a way that vCPUs and
the corresponding guest NUMA nodes are on different host NUMA nodes
than the emulator thread. In this case, ThreadContext won't work,
because ThreadContext objects live in the context of the emulator
thread (vCPU threads are moved around by us later, after the emulator
thread has finished its setup and spawned the vCPU threads - see
qemuProcessSetupVcpus()). Therefore, memory allocation is done by
the emulator thread, which is pinned to a subset of host NUMA nodes
but tries to create a ThreadContext object with a disjoint subset
of host NUMA nodes, which fails.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2154750
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Jumping to the error label and reading the pidfile does not make sense
until we have reached qemuSecurityCommandRun, which creates the pidfile.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
The pidfile is guaranteed to be non-NULL (thanks to glib allocation
functions) and it's dereferenced two lines above anyway.
Reported by coverity:
/src/qemu/qemu_passt.c: 278 in qemuPasstStart()
272 return 0;
273
274 error:
275 ignore_value(virPidFileReadPathIfLocked(pidfile, &pid));
276 if (pid != -1)
277 virProcessKillPainfully(pid, true);
>>> CID 404360: Null pointer dereferences (REVERSE_INULL)
>>> Null-checking "pidfile" suggests that it may be null, but it
>>> has already been dereferenced on all paths leading to the check.
278 if (pidfile)
279 unlink(pidfile);
280
281 return -1;
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
To ensure the same behaviour whether or not the remote driver is used, we
must not steal the FDs and the array holding them that are passed to
qemuDomainFDAssociate, but rather duplicate them. At the same time the
remote driver must close and free them to prevent a leak.
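A minimal sketch of the duplication, assuming the usual 'nfds'/'fds'
parameters (glib.h and unistd.h assumed; variable names illustrative):

  /* work on duplicates so the caller's fds[] and its array stay intact,
   * matching the behaviour seen through the remote driver */
  int *copy = g_new0(int, nfds);
  for (size_t i = 0; i < nfds; i++)
      copy[i] = dup(fds[i]);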
Pointed out by Coverity as FD leak on error path:
*** CID 404348: Resource leaks (RESOURCE_LEAK)
/src/remote/remote_daemon_dispatch.c: 7484 in remoteDispatchDomainFdAssociate()
7478 rv = 0;
7479
7480 cleanup:
7481 if (rv < 0)
7482 virNetMessageSaveError(rerr);
7483 virObjectUnref(dom);
>>> CID 404348: Resource leaks (RESOURCE_LEAK)
>>> Variable "fds" going out of scope leaks the storage it points to.
7484 return rv;
Fixes: abd9025c2f
Fixes: f762f87534
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This consists of (1) adding the necessary args to the qemu commandline
netdev option, and (2) starting a passt process prior to starting
qemu, and making sure that it is terminated when it's no longer
needed. Under normal circumstances, passt will terminate itself as
soon as qemu closes its socket, but in case of some error where qemu
is never started, or fails to start up completely, we need to terminate
passt manually.
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
passt support requires "-netdev stream", which was added in QEMU 7.2.0.
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This fits better with the element containing the value (<driver>), and
allows us to use virDomainNetBackend* for things in the <backend>
element.
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Assert support for VIR_DOMAIN_DEF_FEATURE_DISK_FD in the qemu driver
now that all code paths are adapted.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Probing stats and block copy to an FD-passed image are not yet supported.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
We assume that FD-passed images already exist, so all existence checks
are skipped.
For the case where an FD-passed image is provided without a terminated
backing chain (thus forcing us to detect it) we attempt to read the header
from the FD.
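A minimal sketch of the header peek, assuming a duplicated FD so the original
offset is left untouched (unistd.h assumed; buffer size and names illustrative):

  /* read the first bytes of the image to run format detection on them */
  char header[512];
  ssize_t got = pread(fd, header, sizeof(header), 0);
  if (got < 0)
      return -1;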
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Prepare the internal data for passing FDs instead of having qemu open
the file internally.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When starting up a VM with FD-passed images we need to look up the
corresponding named FD set and associate it with the virStorageSource
based on the name.
The association is brought into virStorageSource as the security labelling
code will need to access the FD to perform SELinux labelling.
Similarly, when startup is complete, in certain cases we no longer need to
keep the copy of the FDs and thus can close them.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The new helper qemuDomainStartupCleanup is used to perform cleanup after
a startup of a VM (successful or not). The initial implementation just
calls qemuDomainSecretDestroy, which can be un-exported.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Implement passing and storage of FDs for the qemu driver. The FD tuples
are GObject instances stored in a per-domain hash table and are
automatically removed once the connection is closed.
In the future we can also consider not tying the lifetime of
the passed FDs to the connection.
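A hedged client-side sketch of how FDs end up in that hash table, assuming
the public virDomainFDAssociate() entry point this series builds on; the
group name, path, and flags value are illustrative:

  #include <fcntl.h>
  #include <libvirt/libvirt.h>

  int fds[1];
  fds[0] = open("/path/to/disk.qcow2", O_RDWR);
  /* associate the FDs with the domain under a name the XML can refer to */
  virDomainFDAssociate(dom, "disk0-fds", 1, fds, 0);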
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The patch moving the code didn't faithfully represent the typecasting
of the 'bandwidth' variable needed to properly convert from the legacy
'unsigned long' argument which resulted in a build failure on 32 bit
systems:
../src/qemu/qemu_block.c: In function ‘qemuBlockCommit’:
../src/qemu/qemu_block.c:3249:23: error: comparison is always false due to limited range of data type [-Werror=type-limits]
3249 | if (bandwidth > LLONG_MAX >> 20) {
| ^
Fix it by moving the check back into qemuDomainBlockCommit, as it's needed
only because of the legacy argument type in the old API, and use
'unsigned long long' for qemuBlockCommit.
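A minimal sketch of the conversion as it can be done in the old API entry
point, where 'bandwidth' is the legacy 'unsigned long' in MiB/s (variable
names illustrative):

  unsigned long long speed = bandwidth;   /* widen before shifting */
  if (speed > LLONG_MAX >> 20)
      return -1;                          /* would overflow in bytes/s */
  speed <<= 20;                           /* MiB/s -> bytes/s */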
Fixes: f5a77198bf
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Some compilers aren't happy when an automatically freed variable is used
just to free something (thus it's only assigned in the code):
When compiling qemuSnapshotDelete after recent commits they complain:
../src/qemu/qemu_snapshot.c:3153:61: error: variable 'delData' set but not used [-Werror,-Wunused-but-set-variable]
g_autoslist(qemuSnapshotDeleteExternalData) delData = NULL;
^
To work around the issue we can restructure the code, which also has the
following semantic implications:
- since qemuSnapshotDeleteExternalPrepare does validation, we error out
sooner, before attempting to start the VM
- we read the temporary variable in at least one code path
Fixes: 4a4d89a925
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
If the daemon crashes or is restarted while the snapshot delete is in
progress we have to handle it gracefully to not leave any block jobs
active.
For now we will simply abort the snapshot delete operation so the user can
start it again. We need to refuse deleting external snapshots if there
is already another active job, as we would otherwise have to figure out
which jobs we can abort.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When the daemon is restarted and libvirt tries to recover domain jobs we
need to know whether the snapshot job was a snapshot delete in order to
safely abort running QEMU block jobs.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When deleting external snapshots the operation may fail at any point,
which could lead to a situation where some disks finished the block commit
operation but for other disks it failed, and the libvirt job ends.
In order to make sure that the qcow2 images are in a consistent state,
introduce a new element "<snapshotDeleteInProgress/>" that marks the
disk in the snapshot metadata as invalid until the snapshot delete is
completed successfully.
This will prevent deleting a snapshot with such an invalid disk and, in the
future, reverting to a snapshot with such an invalid disk.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
With external snapshots we need to modify the metadata a bit more than
what is required for internal snapshots. Mainly, the storage source
location changes with every external snapshot.
This means that if we delete a non-leaf snapshot we need to update all
child snapshots and modify the disk sources for all affected disks.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When deleting a snapshot we start a block-commit job over all disks
that are part of the snapshot.
This operation may fail as it writes data changes to the backing qcow2
image, so we need to wait for all the disks to finish the operation and
wait for the correct signal from QEMU. When deleting the active snapshot we
will get the `ready` signal; for inactive snapshots we need to disable
autofinalize in order to get the `pending` signal.
At this point, if the commit for any disk fails for some reason and we abort,
the VM is still in a consistent state and the user can fix the reason why the
deletion failed.
After that we do `pivot` (for the active snapshot) or `finalize` (otherwise)
to finish the block job. It may still fail, but there is nothing else we can
do about it.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>