Commit Graph

12227 Commits

Author SHA1 Message Date
Andrea Bolognani
981879d026 qemu_firmware: enrolled-keys requires secure-boot
No sane firmware build will fail this check, but just to be on
the safe side let's check anyway.
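
In domain XML terms, a guest relying on this pairing requests both
features together; a minimal sketch using the firmware feature
syntax shown in the next commit:

  <os firmware='efi'>
    <firmware>
      <feature enabled='yes' name='secure-boot'/>
      <feature enabled='yes' name='enrolled-keys'/>
    </firmware>
  </os>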

Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-01 15:10:40 +02:00
Andrea Bolognani
262672dbbf qemu_firmware: Enable loader.secure when requires-smm
Currently, a firmware configuration such as

  <os firmware='efi'>
    <firmware>
      <feature enabled='yes' name='enrolled-keys'/>
    </firmware>
  </os>

will correctly pick a firmware that implements the Secure Boot
feature and initialize the NVRAM file so that it contains the
keys necessary to enforce the signing requirements. However, the
lack of a

  <loader secure='yes'/>

element makes it possible for pflash writes to happen outside
of SMM mode. This means that the authenticated UEFI variables
where the keys are stored could potentially be overwritten by
malicious code running in the guest, thus making it possible to
circumvent Secure Boot.

To prevent that from happening, automatically turn on the
loader.secure feature whenever a firmware that implements Secure
Boot is chosen by the firmware autoselection logic. This is
identical to the way we already automatically enable SMM in such
a scenario.

Note that, while this is technically a guest-visible change, it
will not affect migration of existing VMs and will not prevent
legitimate guest code from running.
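
In effect, the autoselected configuration now behaves as if it also
contained the following (a sketch; <smm/> is the feature we already
enable automatically, as noted above):

  <features>
    <smm state='on'/>
  </features>
  <os firmware='efi'>
    <loader secure='yes'/>
    <firmware>
      <feature enabled='yes' name='enrolled-keys'/>
    </firmware>
  </os>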

Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-01 15:10:39 +02:00
Jiri Denemark
766abdc291 qemu_migration: Apply max-postcopy-bandwidth on post-copy resume
When resuming post-copy migration, users may want to limit the
bandwidth used by the migration and use a value that is different
from the one specified when the migration was originally started.

Resolves: https://gitlab.com/libvirt/libvirt/-/issues/333

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-01 11:28:34 +02:00
Jiri Denemark
8c335b5530 qemu_migration: Pass migParams to qemuMigrationSrcResume
So that we can apply selected migration parameters even when resuming
post-copy migration.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-01 11:28:34 +02:00
Jiri Denemark
184749691f qemu_migration_params: Replace qemuMigrationParamTypes array
We will need to annotate individual parameters a bit more than just
noting their type. Let's introduce qemuMigrationParamInfo, replacing
the simple qemuMigrationParamTypes array with an array of structs.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-01 11:28:34 +02:00
Jiri Denemark
0eae541257 qemu: Pass migration flags to qemuMigrationParamsApply
The flags will later be used to determine which parameters should
actually be applied.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-01 11:28:34 +02:00
Jiri Denemark
f9dcc01a0f qemu_migration: Avoid mem.hard_limit > 0 check
My original commit v8.4.0-288-gf01fc4d119 accidentally forgot to fix
both instances of the same problem. While it fixed the destination side
of migration, the source one remained broken.

However, that commit was also wrong in saying the issue could have
caused unlimited memory locking to be allowed for QEMU when RDMA
migration was used. It could not, because the code would refuse to even
think about starting RDMA migration if hard_limit was not set. But
avoiding the "mem.hard_limit > 0" check is useful anyway.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-01 11:28:34 +02:00
Michal Privoznik
f3f877cfa6 qemu_hotplug: Don't skip cleanup in qemuDomainAttachNetDevice()
There's a qemuBuildInterfaceConnect() call inside of
qemuDomainAttachNetDevice(), introduced in v8.4.0-rc1~183 (though
the first real problem was introduced in v8.4.0-rc1~170). If
qemuBuildInterfaceConnect() fails, qemuDomainAttachNetDevice()
returns immediately instead of jumping to the cleanup label. This
is crucial, because at this point the domain definition contains a
'borrowed' net definition, which is then freed since an error was
met. The domain definition is then left with a dangling pointer,
which leads to all sorts of different crashes.

Fixes: 29d022b1eb
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2102009
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
2022-07-01 10:45:26 +02:00
Jiri Denemark
d375993ab3 qemu_migration: Implement VIR_MIGRATE_ZEROCOPY flag
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/306

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-06-23 16:45:39 +02:00
Jiri Denemark
f01fc4d119 qemu_migration: Don't set unlimited memlock limit for RDMA
Our documentation says RDMA migration requires hard_limit to be set
so that we know how big the memory locking limit should be for the
domain during migration. But since commit v1.2.13-71-gcf521fc8ba
(which changed the default hard_limit value from 0 to
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED) we were actually setting the
memlock limit to unlimited if hard_limit was not set.
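
For reference, the hard_limit mentioned above is the one configured
via <memtune> in the domain XML, e.g. (value purely illustrative):

  <memtune>
    <hard_limit unit='KiB'>4194304</hard_limit>
  </memtune>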

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-06-23 16:45:39 +02:00
Jiri Denemark
d4d3bb8130 qemu_migration: Restore original memory locking limit
For RDMA migration we update memory locking limit, but never set it back
once migration finishes (on the destination host) or aborts (on the
source host).

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-06-23 16:45:39 +02:00
Jiri Denemark
22ee8cbf09 qemu_migration: Use qemuDomainSetMaxMemLock
This helper will not try to set the limit if it is already big
enough, which may be useful when the libvirt daemon is running in a
containerized environment and is not allowed to change the memory
locking limit.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-06-23 16:45:39 +02:00
Jiri Denemark
dff51c7f57 qemu: Add qemuDomainSetMaxMemLock helper
qemuDomainAdjustMaxMemLock combined computing the desired limit with
applying it. This patch separates the code to apply a memory locking
limit to a new qemuDomainSetMaxMemLock helper for better reusability.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-06-23 16:45:39 +02:00
Ján Tomko
7b5dd948b8 qemu: remove cleanup label from qemuMigrationSrcGraphicsRelocate
Remove the label and use 'rc' instead of 'ret'.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
2022-06-22 12:28:29 +02:00
Ján Tomko
8d9bd178e2 Use g_auto for virURI almost everywhere
Convert all the cases where we can unconditionally free
the virURI at the end of scope.

In libxlDomainMigrationDstPrepare, uri is only filled
if uri_in was present, so moving the virURIFree out of
the condition is safe.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
2022-06-22 12:28:29 +02:00
Cole Robinson
5f0765f90f qemu: validate: use domcaps for tpm validation
Replace tpm->type and tpm->model qemuCaps validation with the
similar logic in domcaps.

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
2022-06-21 08:23:18 -04:00
Cole Robinson
b233bf89dc qemu: command: Use correct tpm device for all non-x86
The qemu `tpm-tis` device is an ISA device, so it is only really
applicable to x86 archs. For all non-x86 archs we should use
`tpm-tis-device`.

This fixes tpm-tis usage on armv7l and riscv.
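
For context, the domain XML this affects is the plain TIS model,
e.g. (a minimal sketch with an emulated backend):

  <tpm model='tpm-tis'>
    <backend type='emulator' version='2.0'/>
  </tpm>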

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
2022-06-21 08:23:18 -04:00
Cole Robinson
5aec476e2e qemu: validate: Drop tpm-tis arch validation
Checking against qemu capabilities should be enough here

Resolves: https://gitlab.com/libvirt/libvirt/-/issues/329

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
2022-06-21 08:23:18 -04:00
Andrea Bolognani
03771f5f04 qemu: Fix alignment in qemuFirmwareMappingFlashFormat()
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
2022-06-16 15:27:16 +02:00
Andrea Bolognani
8c75efd4ef qemu: Simplify handling of virTristateBool values
We explicitly check whether the value is YES or NO, which makes
it unnecessary to make sure it's not ABSENT beforehand.

Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-06-16 15:27:16 +02:00
Ján Tomko
2753eba20c qemu: virtiofs: format --thread-pool-size
https://bugzilla.redhat.com/show_bug.cgi?id=2079582
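
Assuming the new knob is exposed as a <thread_pool> subelement of
the virtiofsd <binary> configuration, the relevant filesystem
definition might look like this (a sketch; size and paths are
illustrative):

  <filesystem type='mount' accessmode='passthrough'>
    <driver type='virtiofs'/>
    <binary path='/usr/libexec/virtiofsd'>
      <thread_pool size='16'/>
    </binary>
    <source dir='/path/on/host'/>
    <target dir='shared'/>
  </filesystem>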

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-06-16 14:58:25 +02:00
Peng Liang
bc16c1bcf6 qemu: Remove unused includes
Signed-off-by: Peng Liang <tcx4c70@gmail.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-06-16 06:43:57 +02:00
Rohit Kumar
468a0a6027 conf: Add support to parse/format <source> for NVRAM
This patch introduces the logic to format and parse remote NVRAM.

Update the NVRAM element schema and docs to support network-backed
NVRAM. NVRAM backed over the network gives the flexibility to start
the VM on any host without having to worry about where to get the
latest nvram image.

<nvram type='network'>
  <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/0'>
    <host name='example.com' port='6000'/>
  </source>
</nvram>

or

<nvram type='file'>
  <source file='/var/lib/libvirt/nvram/guest_VARS.fd'/>
</nvram>

In the qemu driver we will support the new definition only with
QEMU versions that support -blockdev.

Signed-off-by: Prerna Saxena <prerna.saxena@nutanix.com>
Signed-off-by: Florian Schmidt <flosch@nutanix.com>
Signed-off-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
2022-06-14 15:53:11 +02:00
Peter Krempa
9d8abe0480 qemuFirmwareFillDomain: Don't fill in firmware for network backed nvram
Prepare for network-backed nvram by refusing to reset the nvram on
boot and by not checking whether it exists. We will not support
filling it from a template.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
2022-06-14 15:53:11 +02:00
Rohit Kumar
bca731d0f5 qemu: validate: Reject virStorageSource features we don't want to support with nvram
Signed-off-by: Prerna Saxena <prerna.saxena@nutanix.com>
Signed-off-by: Florian Schmidt <flosch@nutanix.com>
Signed-off-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
2022-06-14 15:53:11 +02:00
Peter Krempa
c3c586baa1 qemuDomainInitializePflashStorageSource: Properly and fully initialize nvram source
Set up all fields for use with -blockdev.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
2022-06-14 15:53:11 +02:00
Peter Krempa
9945c24259 qemuProcessReconnect: Don't re-instantiate pflash storage source
We only really use it when starting up the VM, so this step is
totally pointless when reconnecting.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
2022-06-14 15:53:11 +02:00
Peter Krempa
baf224f1f9 qemu: Properly setup the NVRAM virStorageSource
Use the designated helpers for virStorageSource instead of using the
file-based ones with a check.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
2022-06-14 15:53:11 +02:00
Peter Krempa
5709b31f35 qemu: Use 'def->os.loader->nvram' directly instead of 'priv->pflash1'
Since we now have a full virStorageSource for storing the nvram
path, we don't need the extra dance of transferring the data into
the 'pflash1' variable, which was an intermediary solution to use
-blockdev.

For now we keep it functionally identical to the previous
implementation.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
2022-06-14 14:39:55 +02:00
Rohit Kumar
911c3cb2f0 conf: Convert def->os.loader->nvram to a virStorageSource
Currently, libvirt allows only local file paths to specify the
location of the 'nvram' image. Changing it to a virStorageSource
type will allow supporting remote storage for nvram.

Signed-off-by: Prerna Saxena <prerna.saxena@nutanix.com>
Signed-off-by: Florian Schmidt <flosch@nutanix.com>
Signed-off-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
2022-06-14 14:39:55 +02:00
Peter Krempa
c3cf2a2b60 qemuBuildPflashBlockdevCommandLine: Take virDomainObj instead of private data
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
2022-06-14 14:39:55 +02:00
Peter Krempa
f23b0ac13e qemuDomainPrepareStorageSourceBlockdev: Add a variant for custom nodename
Extract the internals of qemuDomainPrepareStorageSourceBlockdev into
qemuDomainPrepareStorageSourceBlockdevNodename so that we can reuse it
when instantiating the virStorageSource for pflash backing.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
2022-06-14 14:39:55 +02:00
Mark Mielke
31b5ad06e3 Fix incorrect uses of g_clear_pointer() introduced in 8.1.0
This is a partial revert of 87a43a907f

The change to use g_clear_pointer() in more places was accidentally
applied to cases involving vir_g_source_unref().

In some cases, the ordering of g_source_destroy() and
vir_g_source_unref() was reversed, which resulted in the source
being marked as destroyed after it was already unreferenced. This
use-after-free case might work in many cases, but with versions of
glib older than 2.64.0 it may defer unref to run within the main
thread to avoid a race condition, which creates a large distance
between the g_source_unref() and g_source_destroy().

In some cases, the call to vir_g_source_unref() was replaced with a
second call to g_source_destroy(), leading to a memory leak or worse.

In our experience, the symptoms were that use of libvirt-python became
slower over time, with OpenStack nova-compute initially taking around
one second to periodically query the host PCI devices, and within an
hour it was taking over a minute to complete the same operation, until
it was eventually running this query back-to-back, resulting in the
nova-compute process consuming 100% of one CPU thread, losing its
RabbitMQ connection frequently, and showing up as down to the control
plane.

Signed-off-by: Mark Mielke <mark.mielke@gmail.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2022-06-13 20:42:47 +02:00
Michal Privoznik
67e4fed61c qemuBuildInterfaceConnect: Initialize @tapfd array
When creating a TAP interface we can end up with multiple FDs,
each representing one queue. However, these FDs must be
relabelled as they are then passed to QEMU. In case of
qemuBuildInterfaceConnect() we allocate the array for the FDs and
then let the function corresponding to the <interface/> type fill
the array with FDs. When any of the functions meets an error, it's
also responsible for closing previously opened FDs. However, the
functions take a shortcut: they iterate through each member of the
array and close it (if it's non-negative). This assumes that the
array is initialized to negative values, which used to be the case
before the rewrite in v8.4.0-rc1~170 but is no longer true after
it. Subsequently, "random" FDs are closed (okay, not that random
since the array is allocated via g_new0(), but hey - FD 0 is still
a valid FD and might be valuable, actually).

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2075383#c18
Fixes: 7a38d3946b
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
2022-06-13 16:06:54 +02:00
Michal Privoznik
425d3b12a4 qemu: Generate command line for <defaultiothread/> pool size
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2059511
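
For illustration, the pool size formatted here presumably comes from
a <defaultiothread/> element along these lines (attribute names
assumed from this series; values illustrative):

  <defaultiothread thread_pool_min='8' thread_pool_max='16'/>
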
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
2022-06-10 14:01:08 +02:00
Michal Privoznik
94b71589f1 qemu_validate: Check if QEMU's capable of setting <defaultiothread/> pool size
Since the main-loop and iothread classes are derived from the
same class (EventLoopBaseClass) we don't need a new capability and
can use QEMU_CAPS_IOTHREAD_THREAD_POOL_MAX directly to check
whether QEMU is capable of setting the defaultiothread pool size.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
2022-06-10 14:01:06 +02:00
Michal Privoznik
f078db9dab qemu: Wire up new virDomainSetIOThreadParams parameters
The QEMU driver needs to be taught how to set the
VIR_DOMAIN_IOTHREAD_THREAD_POOL_MIN and
VIR_DOMAIN_IOTHREAD_THREAD_POOL_MAX parameters, introduced in the
previous commit, on a given IOThread. Fortunately, this is fairly
trivial to do, and since these two parameters are exposed in the
domain XML too, the update of the inactive XML can be wired up as
well.
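
The per-IOThread XML counterpart presumably looks something like
this (a sketch; attribute names assumed to mirror the parameters
above):

  <iothreadids>
    <iothread id='1' thread_pool_min='4' thread_pool_max='32'/>
  </iothreadids>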

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
2022-06-10 14:00:44 +02:00
Michal Privoznik
86c10f81e5 qemu: Generate command line for IOThread pool size
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
2022-06-10 14:00:13 +02:00
Michal Privoznik
2bfb8159bb qemu_validate: Check if QEMU's capable of setting iothread pool size
Now that we have a capability that reflects whether QEMU is
capable of setting the iothread pool size, let's introduce a
validator check to make sure users are not trying to use this
feature with a QEMU that doesn't support it.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
2022-06-10 14:00:11 +02:00
Michal Privoznik
38a67a9a9e qemu: Introduce QEMU_CAPS_IOTHREAD_THREAD_POOL_MAX
This capability reflects whether QEMU allows setting the
thread-pool-min and thread-pool-max attributes on the iothread
object. Since both attributes were introduced in the same commit
(v7.0.0-878-g71ad4713cc) and can't exist independently of each
other, we can stick with one capability covering both of them.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
2022-06-10 14:00:08 +02:00
Jiri Denemark
4582267782 qemu: Improve error messages using qemuMigrationJobName
They were constructed from two separate strings using "%s: %s", which
is ugly and does not work well with translations.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
2022-06-08 11:00:43 +02:00
Jiri Denemark
87257c76b9 qemu: Fix VSERPORT_CHANGE event in post-copy migration
When a domain has a guest agent channel enabled and the agent is running
in the guest, we will get a VSERPORT_CHANGE event on the destination
host as soon as we start vCPUs there. This is not an issue for
normal migration,
but post-copy migration will remain running after we started vCPUs on
the destination. If it runs for more than 30s, the VSERPORT_CHANGE event
handler will fail to get a job and log the following error message:

    Timed out during operation: cannot acquire state change lock (held
    by monitor=remoteDispatchDomainMigrateFinish3Params)

and of course we will think the guest agent is not connected and
thus all APIs talking to it will fail, until the agent or libvirt
daemon is restarted.

Luckily we only need to update the channel state (to mark it as
connected) and connect to the agent, neither of which conflicts
with migration. Thus we can safely enable processing this event
during migration.
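
For reference, the guest agent channel in question is the usual
virtio-serial channel, e.g.:

  <channel type='unix'>
    <target type='virtio' name='org.qemu.guest_agent.0'/>
  </channel>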

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
2022-06-07 17:40:21 +02:00
Jiri Denemark
b01426a238 Introduce VIR_JOB_MIGRATION_SAFE job type
This is a special job for operations that need to modify domain state
during an active migration. The modification must not affect any state
that could conflict with the migration code. This is useful mainly for
event handlers that need to be processed during migration and which
could otherwise time out on acquiring a normal MODIFY job.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
2022-06-07 17:40:21 +02:00
Jiri Denemark
01d65a1520 qemu: Implement VIR_DOMAIN_ABORT_JOB_POSTCOPY flag
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
2022-06-07 17:40:21 +02:00
Jiri Denemark
fb50e56569 qemu: Implement virDomainAbortJobFlags
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
2022-06-07 17:40:21 +02:00
Jiri Denemark
cf3842ef08 qemu: Enable support for VIR_MIGRATE_POSTCOPY_RESUME
Since all parts of post-copy recovery have been implemented now, it's
time to enable the corresponding flag.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
2022-06-07 17:40:20 +02:00
Jiri Denemark
9189301fe5 qemu: Implement VIR_MIGRATE_POSTCOPY_RESUME for peer-to-peer migration
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
2022-06-07 17:40:20 +02:00
Jiri Denemark
56348173fa qemu: Call qemuDomainCleanupAdd from qemuMigrationJobContinue
Every single call to qemuMigrationJobContinue needs to register a
cleanup callback in case the migrating domain dies between phases or
when migration is paused due to a failure in postcopy mode.

Let's integrate registering the callback in qemuMigrationJobContinue to
make sure the current thread does not release a migration job without
setting a cleanup callback.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
2022-06-07 17:40:20 +02:00
Jiri Denemark
21469f6076 qemu: Register qemuProcessCleanupMigrationJob after Begin phase
The callback will properly clean up a non-p2p migration job in case the
migrating domain dies between Begin and Perform while the client which
controls the migration is not cooperating (normally the API for the next
migration phase would handle this).

The same situation can happen even after Prepare and Perform phases, but
they both already register a suitable callback, so no fix is needed
there.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
2022-06-07 17:40:20 +02:00
Jiri Denemark
776311df23 qemu: Create completed jobData in qemuMigrationSrcComplete
Normally the structure is created once the source reports completed
migration, but with post-copy migration we can get here even after
the libvirt daemon was restarted. It doesn't make sense to preserve the
structure in our status XML as we're going to rewrite almost all of it
while refreshing the stats anyway. So we just create the structure here
if it doesn't exist to make sure we can properly report statistics of a
completed migration.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
2022-06-07 17:40:20 +02:00