978 Commits

Stefan Berger
384138d790 qemu: tpm: Introduce qemuTPMHasSharedStorage()
Introduce a new qemuTPMHasSharedStorage() function, which returns
whether the swtpm state directory is on a shared filesystem (e.g.
NFS).
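
A minimal sketch of what such a helper could look like, assuming
libvirt's existing virFileIsSharedFS() helper (the virDomainTPMDef
fields shown are illustrative, not necessarily the actual ones):

    bool
    qemuTPMHasSharedStorage(virDomainDef *def)
    {
        size_t i;

        for (i = 0; i < def->ntpms; i++) {
            virDomainTPMDef *tpm = def->tpms[i];

            /* only emulator-backed TPMs keep swtpm state on disk */
            if (tpm->type == VIR_DOMAIN_TPM_TYPE_EMULATOR)
                return virFileIsSharedFS(tpm->data.emulator.storagepath) == 1;
        }

        return false;
    }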

Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-11-09 12:26:24 +01:00
Peter Krempa
423d93967a virDomainJobObj: Use 'unsigned int' instead of 'unsigned long' for 'apiFlags' field
The callers store only an 'unsigned int' in the field. Convert it to
the proper type, including the parser and formatter.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-11-02 09:20:58 +01:00
Jiri Denemark
1a570f9712 qemu: Do not crash when canceling migration on reconnect
When libvirtd is restarted during an active outgoing migration (or
snapshot, save, or dump which are internally implemented as migration)
it wants to cancel the migration. But due to a mistake in commit
v8.7.0-57-g2d7b22b561, the qemuMigrationSrcCancel function is called
with wait == true, which leads to an instant crash by dereferencing
the NULL pointer stored in priv->job.current.

When canceling migration to file (snapshot, save, dump), we don't need
to wait until it is really canceled as no migration capabilities or
parameters need to be restored.

On the other hand, we need to wait when canceling an outgoing
migration, and since we don't have virDomainJobData at this point, we
have to temporarily restore the migration job to make sure we can
process MIGRATION events from QEMU.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-10-24 15:28:47 +02:00
Jiri Denemark
4dd86f334b qemu_migration: Properly wait for migration to be canceled
In my commit v8.7.0-57-g2d7b22b561 I attempted to make
qemuMigrationSrcCancel synchronous, but failed. When we are canceling
migration after some kind of error detected in
qemuMigrationSrcWaitForCompletion, jobData->status will be set to
VIR_DOMAIN_JOB_STATUS_FAILED regardless of the QEMU state. So instead
of relying on the translated jobData->status in
qemuMigrationSrcIsCanceled, we need to check the migration status we
get from QEMU's MIGRATION event.
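
A sketch of the idea, assuming the qemuMonitorMigrationStatus value
delivered by the MIGRATION event is available to the helper (names
are illustrative):

    /* Trust the status QEMU reported via its MIGRATION event rather
     * than jobData->status, which is FAILED after any detected error. */
    switch (status) {
    case QEMU_MONITOR_MIGRATION_STATUS_COMPLETED:
    case QEMU_MONITOR_MIGRATION_STATUS_ERROR:
    case QEMU_MONITOR_MIGRATION_STATUS_CANCELLED:
        return true;    /* migration is no longer running */
    default:
        return false;   /* e.g. "cancelling": keep waiting */
    }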

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-10-24 15:28:47 +02:00
Stefan Berger
92f7aafced qemu: tpm: Remove TPM state after successful migration
This patch 'fixes' the behavior of the persistent_state TPM domain XML
attribute, which is meant to preserve the TPM's state but should not
keep that state around on all the hosts a VM has ever been migrated
to. It removes the TPM state directory structure from the source host
upon successful migration when non-shared storage is used. Similarly,
it removes it from the destination host upon migration failure when
non-shared storage is used.

Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-10-04 16:34:28 +02:00
Stefan Berger
60a06693cc qemu: Add UNDEFINE_TPM and UNDEFINE_KEEP_TPM flags
Add UNDEFINE_TPM and UNDEFINE_KEEP_TPM flags to the
qemuDomainUndefineFlags() API and --tpm and --keep-tpm to 'virsh
undefine'. Pass the virDomainUndefineFlagsValues via
qemuDomainRemoveInactive() from qemuDomainUndefineFlags() all the way
down to qemuTPMEmulatorCleanupHost() and delete TPM storage there,
with the UNDEFINE_TPM flag taking priority over the persistent_state
attribute from the domain XML. Pass 0 to qemuDomainRemoveInactive() at
all other API call sites for now.
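
For example (domain name illustrative; flag semantics as described
above):

    virsh undefine --tpm vm1       # remove TPM state even with persistent_state on
    virsh undefine --keep-tpm vm1  # keep TPM state even with persistent_state off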

Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-10-04 16:34:28 +02:00
Peter Krempa
bc753aa6f7 qemu: migration: Remove QEMU_MONITOR_MIGRATE_BACKGROUND
'qemuMonitorJSONMigrate' is called from:
 - qemuMonitorMigrateToHost
 - qemuMonitorMigrateToSocket
   Both of the above functions are called only from
   qemuMigrationSrcStart.

 - qemuMonitorMigrateToFd
   - called from:
     - qemuMigrationSrcToFile
       Both instances here pass QEMU_MONITOR_MIGRATE_BACKGROUND
       directly.
     - qemuMigrationSrcStart

qemuMigrationSrcStart is then called from qemuMigrationSrcRun and
qemuMigrationSrcResume, both of which always add QEMU_MONITOR_MIGRATE_BACKGROUND
to the flags.

Thus every caller always passes the flag, so it can be removed
altogether.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-09-09 16:10:47 +02:00
Peter Krempa
62b3f97aee qemu: migration: Don't attempt to fall back to old-style storage migration
QEMU has supported the NBD server required for new-style migration for
a long time already, and when coupled with -blockdev the old-style
migration doesn't even work, so remove support for it.

This patch modifies the code to check that the destination returned
data for the NBD migration, returning an error if it did not, and
deletes the fallback code paths, which would not work.
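
A sketch of the new error path, assuming the migration cookie carries
the NBD data in an 'nbd' field (names and message illustrative):

    /* The destination must have returned NBD data in the migration
     * cookie; error out instead of falling back to old-style migration. */
    if (!mig->nbd) {
        virReportError(VIR_ERR_OPERATION_FAILED, "%s",
                       _("migration of non-shared storage requested but NBD "
                         "is not set up on the destination"));
        return -1;
    }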

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-09-09 16:10:47 +02:00
Peter Krempa
94ff4f2f91 qemu: migration: Always assume support for QEMU_CAPS_NBD_SERVER
The NBD server (detected via the 'nbd-server-start' QMP command) was
added to QEMU in v1.3 and can't be compiled out.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-09-09 16:10:47 +02:00
Peter Krempa
83ffeae75a qemu: migration: Fix setup of non-shared storage migration in qemuMigrationSrcBeginPhase
In commit 6111b2352242e9, which removed pre-blockdev code paths, I
improperly refactored the setup of non-shared storage migration.

Specifically, the code checking that there are disks and setting up
the NBD data in the migration cookie was originally outside the loop
checking the user-provided list of specific disks to migrate, but it
became part of that block because it was not un-indented when a
higher-level block was removed.

As a result, if non-shared storage migration is requested but the user
doesn't provide the list of disks to migrate (thus implying that every
appropriate disk should be migrated), the code doesn't actually set up
the migration and later falls back to the old-style migration, which
no longer works with blockdev.

Move the check that there's anything to migrate out of the
'nmigrate_disks' block.

Fixes: 6111b2352242e93c6d2c29f9549d596ed1056ce5
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2125111
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/373
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-09-09 16:10:47 +02:00
Kristina Hanicova
4435c026b7 qemu & conf: move BeginAsyncJob & EndAsyncJob into src/conf
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2022-09-07 12:15:06 +02:00
Kristina Hanicova
9085ccbfb4 qemu: use virDomainObjEndJob()
This patch moves qemuDomainObjEndJob() into src/conf/virdomainjob as
the universal virDomainObjEndJob().

Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2022-09-07 12:14:07 +02:00
Kristina Hanicova
0d22febfc6 qemu: use virDomainObjBeginJob()
This patch moves qemuDomainObjBeginJob() into src/conf/virdomainjob as
the universal virDomainObjBeginJob().
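
Typical usage of the generalized helpers, sketched (job type and error
handling abbreviated):

    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
        return -1;

    /* ... work on the domain object ... */

    virDomainObjEndJob(vm);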

Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2022-09-07 12:13:30 +02:00
Kristina Hanicova
0150f7a8c1 virdomainjob: make drivers use job object in the domain object
This patch uses the job object directly in the domain object and
removes the job object from the private data of all drivers that use
it, as well as other relevant code (initializing and freeing the
structure).

Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-09-07 12:13:13 +02:00
Jiri Denemark
2d7b22b561 qemu: Make qemuMigrationSrcCancel optionally synchronous
We have always considered the "migrate_cancel" QMP command to return
after successfully cancelling the migration. But this is no longer
true (to be honest, I'm not sure it ever was) as it just changes the
migration state to "cancelling". In most cases the migration is
canceled pretty quickly and we don't really notice anything, but
sometimes it takes so long that we even get to clearing migration
capabilities before the migration is actually canceled, which fails
because capabilities can only be changed while no migration is
running. To avoid this issue, we can wait for the migration to really
be canceled after sending migrate_cancel. The only place where we
don't need synchronous behavior is when we're cancelling migration on
the user's request while it is actively watched by another thread.
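
A sketch of the resulting synchronous behavior (monitor enter/exit and
helper signatures abbreviated; the wait helper is the one referenced
in the commits above):

    if (qemuMonitorMigrateCancel(priv->mon) < 0)
        return -1;

    if (wait) {
        /* wait until QEMU really reports the migration as canceled */
        while (!qemuMigrationSrcIsCanceled(vm)) {
            if (qemuDomainObjWait(vm) < 0)
                return -1;
        }
    }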

https://bugzilla.redhat.com/show_bug.cgi?id=2114866

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
2022-09-06 18:28:10 +02:00
Jiri Denemark
4e55fe21b5 qemu: Create wrapper for qemuMonitorMigrateCancel
We will need a little bit more code around qemuMonitorMigrateCancel to
make sure it works as expected. The new qemuMigrationSrcCancel helper
will avoid repeating the code in several places.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
2022-09-06 18:28:10 +02:00
Jiri Denemark
0ff8c175f7 qemu: Rename qemuMigrationSrcCancel
Let's call this qemuMigrationSrcCancelUnattended as the function is
supposed to be used when no other thread is watching the migration.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
2022-09-06 18:28:10 +02:00
Peter Krempa
d5857ea611 qemu: block: Remove pre-blockdev code paths
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-08-11 15:07:54 +02:00
Peter Krempa
e06c1fa7ee qemu: migration: Assume support for QEMU_CAPS_BLOCKDEV_DEL
The migration code was using a few blockdev bits before blockdev was
fully integrated, to allow TLS with NBD.

Since we now always use blockdev, we can remove the check.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-08-11 15:07:05 +02:00
Peter Krempa
9f15b8fb18 qemuMigrationSrcNBDStorageCopyBlockdev: Remove some arguments
We no longer need the arguments which were conditionally filled based
on the presence of the QEMU_CAPS_BLOCKDEV feature.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-08-11 15:06:55 +02:00
Peter Krempa
6111b23522 qemu: migration: Remove pre-blockdev code paths
Assume that QEMU_CAPS_BLOCKDEV is present and remove all code executed
when it's not.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-08-11 15:06:36 +02:00
Peter Krempa
b76c58081c qemuMigrationSrcWaitForSpice: Remove return value
The only caller doesn't check the return value, and doesn't have one
of its own either. Remove the return value and adjust the return
statements.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-08-11 14:34:54 +02:00
Peter Krempa
4d1a1fdffd qemuDomainObjWait: Report error when VM is being destroyed
Since we started handling the monitor EOF event inside a job, any code
using virDomainObjWait no longer properly aborts when the VM crashes
during the wait.

This is because virDomainObjWait uses virDomainObjIsActive, which
checks 'vm->def->id' to see if the VM is still active. Unfortunately
the domain id is cleared in qemuProcessStop, which runs only inside
the job.

To fix this, we can use the 'beingDestroyed' flag stored in the VM
private data, which is set to true around the time the condition is
signalled.
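
A sketch of the resulting wrapper, based on the description above
(error message illustrative):

    int
    qemuDomainObjWait(virDomainObj *vm)
    {
        qemuDomainObjPrivate *priv = vm->privateData;

        if (virDomainObjWait(vm) < 0)
            return -1;

        /* vm->def->id is cleared only in qemuProcessStop (inside the
         * job), so also fail when the VM started being destroyed */
        if (priv->beingDestroyed) {
            virReportError(VIR_ERR_OPERATION_FAILED, "%s",
                           _("domain is not running"));
            return -1;
        }

        return 0;
    }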

Reported-by: Pavel Hrdina <phrdina@redhat.com>
Fixes: 8c9ff9960b29d4703a99efdd1cadcf6f48799cc0
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-08-11 14:34:25 +02:00
Peter Krempa
b2f1daa36d qemu: Replace virDomainObjWait with qemuDomainObjWait
The qemu code will need to check other qemu-private conditions when
reporting success for waiting. Thus we must replace all uses of
virDomainObjWait with a qemu-specific helper. For now the helper
forwards directly to virDomainObjWait.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-08-11 13:15:02 +02:00
Kristina Hanicova
42543a083a qemu: refactor functions with removed driver if possible
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2022-08-10 16:50:07 +02:00
Kristina Hanicova
203e74ff42 qemu: remove unused driver and all its propagations
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2022-08-10 16:50:07 +02:00
Laine Stump
640d185f01 qemu: don't call qemuMigrationSrcIsAllowedHostdev() from qemuMigrationDstPrepareFresh()
This call to qemuMigrationSrcIsAllowedHostdev() (which unconditionally
fails the migration if there is any PCI or mdev hostdev device in the
domain) during destination-side migration prep was noticed once the
call to that same function was removed from source-side migration prep
(commit 25883cd5).

According to jdenemar, for the V2 migration protocol, prep of the
destination is the first step, so this *was* the proper place to do
the check. But for V3 migration it is in a way redundant, since we
will have already done the check on the source side (updated by
25883cd5 to query QEMU rather than fail unconditionally).

Of course it's possible that the source could support migration of a
particular VFIO device while the destination doesn't. But the current
check on the destination side is worthless even in that case, since it
just *always* fails rather than querying QEMU; and QEMU can't be
queried at the point where the destination check happens, since it
isn't running yet.

Anyway, QEMU should complain when it's started if it's going to fail,
so removing this check should just move the failure a bit later. The
best solution to this problem is therefore to simply remove the
hardcoded check/fail from qemuMigrationDstPrepareFresh() and rely on
QEMU to fail if it needs to.

Fixes: 25883cd5f0b188f2417f294b7d219a77b219f7c2
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
2022-07-28 08:16:29 -04:00
Jiri Denemark
bb9badb916 qemu: Restore original memory locking limit on reconnect
Commit v8.4.0-287-gd4d3bb8130 tried to make sure the original
pre-migration memory locking limit is restored at the end of
migration, but it missed the case when the libvirt daemon is restarted
during a migration, which then needs to be aborted on reconnect.

As if this was not enough, I also forgot to actually save the status
XML after setting the field in priv (both in the commit mentioned
above and in v8.4.0-291-gd375993ab3).

https://bugzilla.redhat.com/show_bug.cgi?id=2107424

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-28 13:04:45 +02:00
Jiri Denemark
c723894135 qemu_migration: Store original migration params in status XML
We keep the original values of migration parameters so that we can
restore them at the end of migration, making sure a later migration
does not use some random values. However, this does not really work
when the libvirt daemon is restarted on the source host, because we
failed to explicitly save the status XML after getting the migration
parameters from QEMU. It might happen to work if the status XML is
written later for some other reason, such as a domain state change,
but that's not how it should work.
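
A sketch of the missing step, assuming the qemuDomainSaveStatus()
convenience wrapper:

    /* After the original migration parameters have been stored in the
     * job's private data, persist the status XML explicitly so they
     * survive a daemon restart. */
    qemuDomainSaveStatus(vm);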

https://bugzilla.redhat.com/show_bug.cgi?id=2107892

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-26 10:09:00 +02:00
Peter Krempa
ef0fef79e7 qemuMigrationSrcIOFunc: Avoid unnecessary string construction
Use full strings for better greppability.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-25 16:47:02 +02:00
Peter Krempa
ec272cd94e qemu: migration: Overwrite 'dname' only when NULL
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-25 16:22:36 +02:00
Peter Krempa
b9fab14b81 qemuMigrationDstPersist: Avoid multi-line ternary operator in function call
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-25 16:22:36 +02:00
Peter Krempa
a74fceb7d5 qemuMigrationDstFinishFresh: Avoid multi-line ternary operator in function call
Rewrite the code using a temporary variable.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-25 16:22:36 +02:00
Martin Kletzander
69e0e33873 qemu_migration: Acquire correct job in qemuMigrationSrcIsAllowed
Commit 62627524607f added acquiring a job, but the async job is not
always VIR_ASYNC_JOB_MIGRATION_OUT, so the code fails when doing a
save or anything else. Correct this by passing the async job from the
caller as another parameter.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
2022-07-22 12:47:32 +02:00
Laine Stump
25883cd5f0 qemu: skip hardcoded hostdev migration check if QEMU can do it for us
libvirt currently blocks migration for any vfio-assigned device unless
it is a network device associated with a virtio-net failover device
(i.e. the hostdev object has a teaming->type ==
VIR_DOMAIN_NET_TEAMING_TYPE_TRANSIENT).

In the future there will be other vfio devices that can be migrated,
so we don't want to rely on this hardcoded block. QEMU 6.0+ will in
any case inform us of any devices that will block migration (as part
of qemuDomainGetMigrationBlockers()), so we only need to do the
hardcoded check in the case of old QEMU that can't provide that
information.

Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
2022-07-21 11:12:46 -04:00
Laine Stump
2dd5587f1d qemu: don't try to query QEMU about migration blockers during offline migration
The new code that queries QEMU about migration blockers was put at the
top of qemuMigrationSrcIsAllowed(), but that function can also be
called in the case of offline migration (i.e. when the domain is
inactive / QEMU isn't running). This check should have been put inside
the "if (!(flags & VIR_MIGRATE_OFFLINE))" conditional, so let's move
it there.
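
Roughly (function name from the commits above, signature abbreviated):

    if (!(flags & VIR_MIGRATE_OFFLINE)) {
        /* QEMU can only be queried while it is running, i.e. not for
         * offline migration */
        if (qemuDomainGetMigrationBlockers(driver, vm, &blockers) < 0)
            return false;
    }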

Fixes: 156e99f686690855be4e45d9b8b3194191a8bc31
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
2022-07-21 11:12:44 -04:00
Jiri Denemark
6262752460 qemu_migration: Use EnterMonitorAsync in qemuDomainGetMigrationBlockers
The code is run with an async job and thus needs to make sure a nested
job is acquired before entering the monitor.

While touching the code in qemuMigrationSrcIsAllowed I also fixed the
grammar which was accidentally broken by v8.5.0-140-g2103807e33.
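
The pattern, sketched (signatures abbreviated; the monitor getter is
the one added for querying migration blockers):

    /* Acquire a nested job before touching the monitor from a thread
     * that only holds an async job. */
    if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
        return -1;

    rc = qemuMonitorGetMigrationBlockers(priv->mon, &blockers);
    qemuDomainObjExitMonitor(vm);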

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-21 17:02:13 +02:00
Eugenio Pérez
2103807e33 qemu: remove hardcoded migration fail for vDPA devices if we can ask QEMU
vDPA devices will be migratable soon, so we shouldn't unconditionally
block migration of any domain with a vDPA device. Instead, we should
rely on QEMU to make the decision when that info is available from the
query-migrate QMP command (QEMU versions too old to have that info in
the results of query-migrate don't support migration of vDPA devices,
so in that case we will continue to unconditionally block migration).

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
2022-07-21 00:58:06 -04:00
Eugenio Pérez
156e99f686 qemu: query QEMU for migration blockers before our own hardcoded checks
Since QEMU 6.0, if QEMU knows that a migration would fail,
'query-migrate' will return an array of error strings describing the
migration blockers.  This can be used to check whether there are any
devices/conditions blocking migration.

This patch adds a call to this query at the top of
qemuMigrationSrcIsAllowed().

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
2022-07-21 00:58:06 -04:00
Kristina Hanicova
badb7972fd qemu & hypervisor: move job object into hypervisor
This patch moves qemuDomainJobObj into hypervisor/ as the generalized
virDomainJobObj, along with generalized private job callbacks as
virDomainObjPrivateJobCallbacks.

Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-20 14:43:14 +02:00
Peter Krempa
6810cc45f7 qemu: Always assume support for QEMU_CAPS_MIGRATION_PARAM_BANDWIDTH
The 'max-bandwidth' field was added as an argument of
'migrate-set-parameters' in qemu-2.8, thus all QEMU versions supported
by libvirt already use the new code path.

This patch assumes its presence and removes the legacy code paths.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-15 15:57:09 +02:00
Tim Wiederhake
58e6bb8be8 Fix spelling
Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
2022-07-04 10:07:47 +02:00
Jiri Denemark
766abdc291 qemu_migration: Apply max-postcopy-bandwidth on post-copy resume
When resuming post-copy migration, users may want to limit the
bandwidth used by the migration, possibly to a value different from
the one specified when the migration was originally started.
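
For example (a hypothetical invocation; the bandwidth is in MiB/s):

    virsh migrate --live --postcopy --postcopy-resume \
          --postcopy-bandwidth 128 vm1 qemu+ssh://dst.example.com/system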

Resolves: https://gitlab.com/libvirt/libvirt/-/issues/333

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-01 11:28:34 +02:00
Jiri Denemark
8c335b5530 qemu_migration: Pass migParams to qemuMigrationSrcResume
So that we can apply selected migration parameters even when resuming
post-copy migration.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-01 11:28:34 +02:00
Jiri Denemark
0eae541257 qemu: Pass migration flags to qemuMigrationParamsApply
The flags will later be used to determine which parameters should
actually be applied.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-01 11:28:34 +02:00
Jiri Denemark
f9dcc01a0f qemu_migration: Avoid mem.hard_limit > 0 check
My original commit v8.4.0-288-gf01fc4d119 forgot to fix both instances
of the same problem: while it fixed the destination side of migration,
the source side remained broken.

However, that commit was also wrong in saying the issue could have
caused unlimited memory locking to be allowed for QEMU when RDMA
migration was used. It could not, because the code would refuse to even
think about starting RDMA migration if hard_limit was not set. But
avoiding the "mem.hard_limit > 0" check is useful anyway.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-07-01 11:28:34 +02:00
Jiri Denemark
d375993ab3 qemu_migration: Implement VIR_MIGRATE_ZEROCOPY flag
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/306

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-06-23 16:45:39 +02:00
Jiri Denemark
f01fc4d119 qemu_migration: Don't set unlimited memlock limit for RDMA
Our documentation says RDMA migration requires hard_limit to be set so
that we know how big a memory locking limit should be set for the
domain during migration. But since commit v1.2.13-71-gcf521fc8ba
(which changed the default hard_limit value from 0 to
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED) we were actually setting the
memlock limit to unlimited if hard_limit was not set.
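
A sketch of the corrected logic, assuming the virMemoryLimitIsSet()
and virProcessSetMaxMemLock() helpers (hard_limit is in KiB, hence the
shift to bytes; details illustrative):

    /* Raise the memlock limit for RDMA only when the user actually
     * configured <hard_limit>; never treat "unlimited" as a value. */
    if (virMemoryLimitIsSet(vm->def->mem.hard_limit) &&
        virProcessSetMaxMemLock(vm->pid,
                                vm->def->mem.hard_limit << 10) < 0)
        return -1;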

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-06-23 16:45:39 +02:00
Jiri Denemark
d4d3bb8130 qemu_migration: Restore original memory locking limit
For RDMA migration we update the memory locking limit, but we never
set it back once migration finishes (on the destination host) or
aborts (on the source host).

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-06-23 16:45:39 +02:00
Jiri Denemark
22ee8cbf09 qemu_migration: Use qemuDomainSetMaxMemLock
This helper will not try to set the limit if it is already big enough,
which may be useful when the libvirt daemon is running in a
containerized environment and is not allowed to change the memory
locking limit.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2022-06-23 16:45:39 +02:00