A new qemuTPMHasSharedStorage() function is introduced that returns
whether the swtpm state directory is on a shared filesystem (e.g. NFS).
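
A rough, self-contained sketch of the kind of check involved (not the
libvirt implementation, which builds on the virFileIsSharedFS() helper
and the domain's TPM definition); here statfs() and NFS_SUPER_MAGIC
stand in for the full set of shared filesystems libvirt recognizes:

    #include <stdbool.h>
    #include <sys/vfs.h>
    #include <linux/magic.h>

    /* Illustrative only: report whether 'path' sits on NFS. The real
     * helper recognizes more shared filesystems (GFS2, OCFS2, CephFS,
     * ...) and treats detection errors conservatively. */
    static bool
    stateDirIsOnSharedFS(const char *path)
    {
        struct statfs sb;

        if (statfs(path, &sb) < 0)
            return false;       /* assume local on error */

        return sb.f_type == NFS_SUPER_MAGIC;
    }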
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The callers store only an 'unsigned int' in the field. Convert it to the
proper type, including the parser/formatter.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When libvirtd is restarted during an active outgoing migration (or a
snapshot, save, or dump, which are internally implemented as migration),
it wants to cancel the migration. But due to a mistake in commit
v8.7.0-57-g2d7b22b561 the qemuMigrationSrcCancel function is called with
wait == true, which leads to an instant crash by dereferencing the NULL
pointer stored in priv->job.current.
When canceling migration to a file (snapshot, save, dump), we don't need
to wait until it is really canceled, as no migration capabilities or
parameters need to be restored.
On the other hand, we need to wait when canceling an outgoing migration,
and since we don't have virDomainJobData at this point, we have to
temporarily restore the migration job to make sure we can process
MIGRATION events from QEMU.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In my commit v8.7.0-57-g2d7b22b561 I attempted to make
qemuMigrationSrcCancel synchronous, but failed. When we are canceling
migration after some kind of error which is detected in
qemuMigrationSrcWaitForCompletion, jobData->status will be set to
VIR_DOMAIN_JOB_STATUS_FAILED regardless of the QEMU state. So instead of
relying on the translated jobData->status in qemuMigrationSrcIsCanceled,
we need to check the migration status we get from the QEMU MIGRATION
event.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This patch 'fixes' the behavior of the persistent_state TPM domain XML
attribute, which is meant to preserve the TPM state but should not keep
the state around on all the hosts a VM has been migrated to. It removes
the TPM state directory structure from the source host upon successful
migration when non-shared storage is used. Similarly, it removes it from
the destination host upon migration failure when non-shared storage is
used.
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Add UNDEFINE_TPM and UNDEFINE_KEEP_TPM flags to the
qemuDomainUndefineFlags() API and --tpm and --keep-tpm options to
'virsh undefine'. Pass the virDomainUndefineFlagsValues via
qemuDomainRemoveInactive() from qemuDomainUndefineFlags() all the way
down to qemuTPMEmulatorCleanupHost() and delete the TPM storage there,
with the UNDEFINE_TPM flag taking priority over the persistent_state
attribute from the domain XML. Pass 0 at all other call sites of
qemuDomainRemoveInactive() for now.
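
Assuming the matching public API constants end up as
VIR_DOMAIN_UNDEFINE_TPM and VIR_DOMAIN_UNDEFINE_KEEP_TPM, client usage
might look like this minimal sketch:

    #include <libvirt/libvirt.h>

    /* Undefine the domain and also delete its TPM state files;
     * VIR_DOMAIN_UNDEFINE_KEEP_TPM would instead preserve them
     * regardless of the persistent_state attribute. */
    static int
    undefineAndRemoveTPM(virDomainPtr dom)
    {
        return virDomainUndefineFlags(dom, VIR_DOMAIN_UNDEFINE_TPM);
    }

From the shell, the equivalent is 'virsh undefine --tpm <domain>'.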
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
'qemuMonitorJSONMigrate' is called from:

- qemuMonitorMigrateToHost
- qemuMonitorMigrateToSocket

  Both of the above functions are called only from
  qemuMigrationSrcStart.

- qemuMonitorMigrateToFd, called from:
  - qemuMigrationSrcToFile

    Both instances here pass QEMU_MONITOR_MIGRATE_BACKGROUND
    directly.

  - qemuMigrationSrcStart

qemuMigrationSrcStart is then called from qemuMigrationSrcRun and
qemuMigrationSrcResume, both of which always add
QEMU_MONITOR_MIGRATE_BACKGROUND to the flags.

Thus every caller always passes the flag, so we can remove the flag
altogether.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
QEMU has supported the NBD server required for new-style migration for a
long time already, and when coupled with -blockdev the old-style
migration doesn't even work, so remove support for it.
This patch modifies the code to check that the destination returned data
for the NBD migration, returns an error if it did not, and deletes the
fallback code paths which would not work.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The NBD server (detected via the 'nbd-server-start' QMP command) was
added to qemu in v1.3 and can't be compiled out.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In commit 6111b2352242e9, which removed pre-blockdev code paths, I
improperly refactored the setup of non-shared storage migration.
Specifically, the code checking that there are disks and setting up the
NBD data in the migration cookie was originally outside of the loop
checking the user-provided list of specific disks to migrate, but it
became part of that block because it was not un-indented when a
higher-level block was removed.
As a result, if non-shared storage migration is requested but the user
doesn't provide the list of disks to migrate (thus implying that every
appropriate disk should be migrated), the code doesn't actually set up
the migration and later falls back to the old-style migration, which no
longer works with blockdev.
Move the check that there's anything to migrate out of the
'nmigrate_disks' block.
Fixes: 6111b2352242e93c6d2c29f9549d596ed1056ce5
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2125111
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/373
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This patch moves qemuDomainObjEndJob() into
src/conf/virdomainjob as universal virDomainObjEndJob().
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
This patch moves qemuDomainObjBeginJob() into
src/conf/virdomainjob as universal virDomainObjBeginJob().
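
A caller-side sketch of the generalized begin/end pair (signatures
assumed from the former qemu variants; the real declarations live in
src/conf/virdomainjob.h):

    /* Hypothetical caller: acquire a modify job on the domain object,
     * do the work, and always release the job again. */
    static int
    exampleModifyDomain(virDomainObj *vm)
    {
        if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
            return -1;

        /* ... mutate the domain while owning the job ... */

        virDomainObjEndJob(vm);
        return 0;
    }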
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
This patch uses the job object directly in the domain object and
removes the job object from the private data of all drivers that use
it, as well as other relevant code (initializing and freeing the
structure).
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We have always considered the "migrate_cancel" QMP command to return
after successfully cancelling the migration. But this is no longer true
(to be honest, I'm not sure it ever was), as it just changes the
migration state to "cancelling". In most cases the migration is canceled
pretty quickly and we don't really notice anything, but sometimes it
takes so long that we even get to clearing migration capabilities before
the migration is actually canceled, which fails because capabilities can
only be changed when no migration is running. To avoid this issue, we
can wait for the migration to really be canceled after sending
migrate_cancel. The only place where we don't need synchronous behavior
is when we're cancelling migration at the user's request while it is
actively watched by another thread.
https://bugzilla.redhat.com/show_bug.cgi?id=2114866
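
The synchronous flow, as an illustrative sketch with hypothetical helper
names (the real code lives around qemuMigrationSrcCancel and the
MIGRATION event handler):

    /* Hypothetical helpers: sendMigrateCancel() issues the
     * "migrate_cancel" QMP command, migrationIsCanceled() inspects the
     * last migration status reported by QEMU's MIGRATION event, and
     * domainWait() sleeps on the domain condition. */
    static int
    cancelMigrationSync(virDomainObj *vm)
    {
        if (sendMigrateCancel(vm) < 0)
            return -1;

        /* QEMU only flips the state to "cancelling" here, so wait until
         * the migration is really gone before touching capabilities. */
        while (!migrationIsCanceled(vm)) {
            if (domainWait(vm) < 0)
                return -1;
        }
        return 0;
    }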
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
We will need a little bit more code around qemuMonitorMigrateCancel to
make sure it works as expected. The new qemuMigrationSrcCancel helper
will avoid repeating the code in several places.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Let's call this qemuMigrationSrcCancelUnattended as the function is
supposed to be used when no other thread is watching the migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
The migration code was using a few blockdev bits to allow TLS with NBD
even before blockdev was fully integrated.
Since we now always use blockdev, we can remove the check.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We no longer need the arguments which were conditionally filled based on
the presence of the QEMU_CAPS_BLOCKDEV feature.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Assume that QEMU_CAPS_BLOCKDEV is present and remove all code executed
when it's not.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The only caller doesn't check the return value and in fact has no return
value of its own either. Remove the return value and adjust the return
statements.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Since we started handling the monitor EOF event inside a job, any code
which uses virDomainObjWait would no longer properly abort in case the
VM crashed during the wait.
This is because virDomainObjWait uses virDomainObjIsActive, which checks
'vm->def->id' to see if the VM is still active. Unfortunately, the
domain id is cleared in qemuProcessStop, which is run only inside the
job.
To fix this we can use the 'beingDestroyed' flag stored in the VM
private data, which is set to true around the time when the condition is
signalled.
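
The resulting helper could look roughly like this (a sketch of the idea,
not the verbatim patch):

    int
    qemuDomainObjWait(virDomainObj *vm)
    {
        qemuDomainObjPrivate *priv = vm->privateData;

        if (virDomainObjWait(vm) < 0)
            return -1;

        /* vm->def->id may still be set at this point, so check the
         * qemu-private flag that is raised before the condition is
         * signalled when the domain is being destroyed. */
        if (priv->beingDestroyed) {
            virReportError(VIR_ERR_OPERATION_FAILED, "%s",
                           _("domain is not running"));
            return -1;
        }

        return 0;
    }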
Reported-by: Pavel Hrdina <phrdina@redhat.com>
Fixes: 8c9ff9960b29d4703a99efdd1cadcf6f48799cc0
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The qemu code will need to check other qemu-private conditions when
reporting success for waiting. Thus we must replace all uses of it with a
qemu-specific helper. For now the helper forwards directly to
virDomainObjWait.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This call to qemuMigrationSrcIsAllowedHostdev() (which does a hardcoded
fail of the migration if there is any PCI or mdev hostdev device in the
domain) during the destination side of migration prep was noticed once
the call to that same function was removed from the source side of
migration prep (commit 25883cd5).
According to jdenemar, for the V2 migration protocol, prep of the
destination is the first step, so this *was* the proper place to do the
check, but for V3 migration it is in a way redundant, since we will have
already done the check on the source side (updated by 25883cd5 to query
QEMU rather than do a hardcoded fail).
Of course it's possible that the source supports migration of a
particular VFIO device while the destination doesn't. But the current
check on the destination side is worthless even in that case, since it
just *always* fails rather than querying QEMU; and QEMU can't be queried
at the point where the destination check happens, since it isn't yet
running.
Anyway, QEMU should complain when it's started if it's going to fail, so
removing this check should just move the failure to happen a bit later.
So the best solution to this problem is to simply remove the hardcoded
check/fail from qemuMigrationDstPrepareFresh() and rely on QEMU to fail
if it needs to.
Fixes: 25883cd5f0b188f2417f294b7d219a77b219f7c2
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Commit v8.4.0-287-gd4d3bb8130 tried to make sure the original
pre-migration memory locking limit is restored at the end of migration,
but it missed the case when the libvirt daemon is restarted during a
migration which needs to be aborted on reconnect.
And as if this was not enough, I forgot to actually save the status XML
after setting the field in priv (both in the commit mentioned above and
in v8.4.0-291-gd375993ab3).
https://bugzilla.redhat.com/show_bug.cgi?id=2107424
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
We keep the original values of migration parameters so that we can
restore them at the end of migration to make sure a later migration does
not use some random values. However, this does not really work when the
libvirt daemon is restarted on the source host, because we failed to
explicitly save the status XML after getting the migration parameters
from QEMU.
Actually, it might work if the status XML is written later for some
other reason, such as a domain state change, but that's not how it
should work.
https://bugzilla.redhat.com/show_bug.cgi?id=2107892
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Commit 62627524607f added the acquiring of a job, but the job is not
always VIR_ASYNC_JOB_MIGRATION_OUT, so the code fails when doing a save
or anything else.
Correct the async job by passing it from the caller as another
parameter.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
libvirt currently blocks migration for any vfio-assigned device unless
it is a network device that is associated with a virtio-net failover
device (i.e. if the hostdev object has teaming->type ==
VIR_DOMAIN_NET_TEAMING_TYPE_TRANSIENT).
In the future there will be other vfio devices that can be migrated,
so we don't want to rely on this hardcoded block. QEMU 6.0+ will
anyway inform us of any devices that will block migration (as a part
of qemuDomainGetMigrationBlockers()), so we only need to do the
hardcoded check in the case of old QEMU that can't provide that
information.
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
The new code that queries QEMU about migration blockers was put at the
top of qemuMigrationSrcIsAllowed(), but that function can also be
called in the case of offline migration (i.e. when the domain is
inactive / QEMU isn't running). This check should have been put inside
the "if (!(flags & VIR_MIGRATE_OFFLINE))" conditional, so let's move
it there.
Fixes: 156e99f686690855be4e45d9b8b3194191a8bc31
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
The code is run with an async job and thus needs to make sure a nested
job is acquired before entering the monitor.
While touching the code in qemuMigrationSrcIsAllowed I also fixed the
grammar, which was accidentally broken by v8.5.0-140-g2103807e33.
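
The pattern, sketched with hypothetical helper names (the point is the
async-job-aware monitor entry, which acquires a nested job first):

    /* Hypothetical names: enterMonitorAsync() acquires a nested job
     * under the active async job before taking the monitor lock,
     * queryBlockers() runs 'query-migrate' and collects any migration
     * blockers QEMU reports, exitMonitor() releases both again. */
    static bool
    exampleMigrationIsAllowed(virDomainObj *vm, int asyncJob)
    {
        char **blockers = NULL;
        int rc;

        if (enterMonitorAsync(vm, asyncJob) < 0)
            return false;
        rc = queryBlockers(vm, &blockers);
        exitMonitor(vm);

        return rc == 0 && blockers == NULL;
    }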
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
vDPA devices will be migratable soon, so we shouldn't unconditionally
block migration of any domain with a vDPA device. Instead, we should
rely on QEMU to make the decision when that info is available from the
query-migrate QMP command (QEMU versions too old to have that info in
the results of query-migrate don't support migration of vDPA devices,
so in that case we will continue to unconditionally block migration).
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
Since QEMU 6.0, if QEMU knows that a migration would fail,
'query-migrate' will return an array of error strings describing the
migration blockers. This can be used to check whether there are any
devices/conditions blocking migration.
This patch adds a call to this query at the top of
qemuMigrationSrcIsAllowed().
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
This patch moves qemuDomainJobObj into hypervisor/ as the generalized
virDomainJobObj, along with the generalized private job callbacks as
virDomainObjPrivateJobCallbacks.
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The 'max-bandwidth' field was added as an argument of
'migrate-set-parameters' in qemu-2.8, thus all QEMU versions supported
by libvirt already use the new code path.
This patch assumes its presence and removes the legacy code paths.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
When resuming post-copy migration users may want to limit the bandwidth
used by the migration and use a value that is different from the one
specified when the migration was originally started.
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/333
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
So that we can apply selected migration parameters even when resuming
post-copy migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The flags will later be used to determine which parameters should
actually be applied.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
My original commit v8.4.0-288-gf01fc4d119 accidentally forgot to fix
both instances of the same problem. While it fixed the destination side
of migration, the source one remained broken.
However, that commit was also wrong in saying the issue could have
caused unlimited memory locking to be allowed for QEMU when RDMA
migration was used. It could not, because the code would refuse to even
think about starting RDMA migration if hard_limit was not set. But
avoiding the "mem.hard_limit > 0" check is useful anyway.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Our documentation says RDMA migration requires hard_limit to be set so
that we know how big the memory locking limit should be for the domain
during migration. But since commit v1.2.13-71-gcf521fc8ba (which changed
the default hard_limit value from 0 to
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED) we were actually setting the memlock
limit to unlimited if hard_limit was not set.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
For RDMA migration we update the memory locking limit, but never set it back
once migration finishes (on the destination host) or aborts (on the
source host).
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This helper will not try to set the limit if it is already big enough,
which may be useful when the libvirt daemon is running in a
containerized environment and is not allowed to change the memory
locking limit.
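
A self-contained illustration of the idea using the daemon's own
RLIMIT_MEMLOCK (the real helper operates on the QEMU process and also
remembers the original value so it can be restored later):

    #include <sys/resource.h>

    /* Only raise the limit when the current one is too small; if it is
     * already sufficient (or unlimited), do nothing so that a daemon
     * without permission to change limits does not fail needlessly. */
    static int
    ensureMemLockAtLeast(unsigned long long bytes)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_MEMLOCK, &rl) < 0)
            return -1;

        if (rl.rlim_cur == RLIM_INFINITY || rl.rlim_cur >= bytes)
            return 0;           /* already big enough, nothing to do */

        rl.rlim_cur = bytes;
        if (rl.rlim_max != RLIM_INFINITY && rl.rlim_max < bytes)
            rl.rlim_max = bytes;    /* raising the hard limit needs CAP_SYS_RESOURCE */

        return setrlimit(RLIMIT_MEMLOCK, &rl);
    }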
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>