The original virDomainAbortJob did not support flags.
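For illustration, a minimal caller of the flags-aware variant (assuming the
new API is the public virDomainAbortJobFlags) might look like this:

    #include <libvirt/libvirt.h>

    /* Abort whatever job is currently running on the domain; passing 0 as
     * flags keeps the original virDomainAbortJob behaviour. */
    static int
    abort_domain_job(virDomainPtr dom)
    {
        return virDomainAbortJobFlags(dom, 0);
    }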
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Since all parts of post-copy recovery have been implemented now, it's
time to enable the corresponding flag.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Every single call to qemuMigrationJobContinue needs to register a
cleanup callback in case the migrating domain dies between phases or
when migration is paused due to a failure in postcopy mode.
Let's integrate registering the callback in qemuMigrationJobContinue to
make sure the current thread does not release a migration job without
setting a cleanup callback.
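A rough sketch of the resulting shape (signatures simplified, not the exact
libvirt sources):

    /* Releasing the async job and registering the cleanup callback become
     * a single operation, so no caller can forget the callback. */
    void
    qemuMigrationJobContinue(virDomainObj *vm,
                             qemuDomainCleanupCallback cleanup)
    {
        qemuDomainCleanupAdd(vm, cleanup);
        qemuDomainObjReleaseAsyncJob(vm);
    }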
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The callback will properly clean up a non-p2p migration job in case the
migrating domain dies between Begin and Perform while the client which
controls the migration is not cooperating (normally the API for the next
migration phase would handle this).
The same situation can happen even after Prepare and Perform phases, but
they both already register a suitable callback, so no fix is needed
there.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Normally the structure is created once the source reports completed
migration, but with post-copy migration we can get here even after
the libvirt daemon was restarted. It doesn't make sense to preserve the
structure in our status XML as we're going to rewrite almost all of it
while refreshing the stats anyway. So we just create the structure here
if it doesn't exist to make sure we can properly report statistics of a
completed migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Everything was already done in the normal Finish phase and vCPUs are
running. We just need to wait for all remaining data to be transferred.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The QEMU process is already running; all we need to do is call the
migrate-recover QMP command. Except for some checks and cookie handling,
of course.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The non-postcopy case talks to the QEMU monitor and thus needs to create a
nested job. Since qemuMigrationAnyConnectionClosed is called in case
there's no thread processing a migration API, we need to make the
current thread a temporary owner of the migration job to avoid "This
thread doesn't seem to be the async job owner: 0". This is done by
starting a migration phase.
While no monitor interaction happens in the postcopy case and just setting
the phase (to indicate a broken postcopy migration) would be enough,
being consistent and setting the owner does not hurt anything.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
To prepare the code for handling incoming migration too.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The function is now called qemuMigrationAnyConnectionClosed to make it
clear it is supposed to handle a broken connection during migration. It
will soon be used on both sides of migration so the "Src" part was changed
to "Any" to avoid renaming the function twice in a row.
The original *Cleanup name could easily be confused with cleanup
callbacks called when a domain is destroyed.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
This command tells QEMU to start listening for an incoming post-copy
recovery connection, just like migrate-incoming is used to start a fresh
migration on the destination host.
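For reference, the command takes the URI QEMU should listen on. A hedged
sketch of issuing it through the public QEMU monitor passthrough API
(libvirt itself sends it via its internal monitor code; the URI below is
just an example):

    #include <stdlib.h>
    #include <libvirt/libvirt.h>
    #include <libvirt/libvirt-qemu.h>

    /* Illustration only: send migrate-recover to a domain's monitor. */
    static int
    start_recovery_listener(virDomainPtr dom)
    {
        const char *cmd =
            "{\"execute\": \"migrate-recover\","
            " \"arguments\": {\"uri\": \"tcp:0.0.0.0:49153\"}}";
        char *reply = NULL;
        int rc = virDomainQemuMonitorCommand(dom, cmd, &reply, 0);

        free(reply);
        return rc;
    }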
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Offline migration jumps over a big part of qemuMigrationDstPrepareFresh.
Let's move that part into a new qemuMigrationDstPrepareActive function
to make the code easier to follow.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Moves most of the code from qemuMigrationDstPrepareAny to a new
qemuMigrationDstPrepareFresh so that qemuMigrationDstPrepareAny can be
shared with post-copy resume.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
It just calls the migrate QMP command with resume=true without having to
worry about migration capabilities or parameters, storage migration,
etc. since everything has already been done in the normal Perform phase.
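The command sent to the source QEMU then looks roughly like this (the URI is
an example; the real one comes from the destination's Prepare reply):

    /* Illustrative shape of the resume command sent to the source QEMU. */
    static const char migrate_resume_cmd[] =
        "{\"execute\": \"migrate\","
        " \"arguments\": {\"uri\": \"tcp:192.0.2.10:49153\","
        "                 \"resume\": true}}";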
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
qemuMigrationSrcRun does a lot of things before and after telling QEMU to
start the migration. Let's make the core reusable by moving it to a new
qemuMigrationSrcStart function.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
To make the code flow a bit more sensible.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Mostly we just need to check whether the domain is in a failed post-copy
migration that can be resumed.
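From a client's point of view, the state that makes a resume possible can be
approximated like this (a sketch, not the internal check libvirt performs):

    #include <libvirt/libvirt.h>

    /* Returns 1 if the domain looks like a candidate for post-copy
     * recovery, 0 if not, -1 on error. */
    static int
    can_resume_postcopy(virDomainPtr dom)
    {
        int state;
        int reason;

        if (virDomainGetState(dom, &state, &reason, 0) < 0)
            return -1;

        return state == VIR_DOMAIN_PAUSED &&
               reason == VIR_DOMAIN_PAUSED_POSTCOPY_FAILED;
    }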
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
For historical reasons we automatically enabled the VIR_MIGRATE_PAUSED flag
when a migration was started for a paused domain. However, when resuming
failed post-copy migration the domain on the source host will always be
paused (as it is already running on the destination host). We must avoid
enabling VIR_MIGRATE_PAUSED in this case.
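A sketch of the adjusted condition (simplified, not the exact libvirt code):

    #include <libvirt/libvirt.h>

    /* Sketch: only auto-enable VIR_MIGRATE_PAUSED for a domain the user
     * paused, never when merely resuming a failed post-copy migration. */
    static unsigned int
    adjust_migration_flags(unsigned int flags, int domain_state)
    {
        if (domain_state == VIR_DOMAIN_PAUSED &&
            !(flags & VIR_MIGRATE_POSTCOPY_RESUME))
            flags |= VIR_MIGRATE_PAUSED;

        return flags;
    }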
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
This flag can be used to restart post-copy migration once it failed
because of a broken connection.
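A minimal client-side example of requesting the recovery (the flag
combination and destination URI are illustrative):

    #include <libvirt/libvirt.h>

    /* Restart a post-copy migration that failed on a broken connection. */
    static int
    resume_postcopy_migration(virDomainPtr dom, const char *dconnuri)
    {
        unsigned int flags = VIR_MIGRATE_LIVE |
                             VIR_MIGRATE_PEER2PEER |
                             VIR_MIGRATE_POSTCOPY |
                             VIR_MIGRATE_POSTCOPY_RESUME;

        return virDomainMigrateToURI3(dom, dconnuri, NULL, 0, flags);
    }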
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
This phase marks the migration protocol as broken while in the post-copy
phase.
Libvirt is no longer actively watching the migration in this phase as
the migration API that started the migration failed.
This may either happen when post-copy migration really fails (QEMU
enters postcopy-paused migration state) or when the migration still
progresses between both QEMU processes, but libvirt lost control of it
because the connection between libvirt daemons (in p2p migration) or a
daemon and client (non-p2p migration) was closed. For example, when one
of the daemons was restarted.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Both qemuMigrationJobSetPhase and qemuMigrationJobStartPhase were doing
the same thing, which made little sense. Let's follow the difference
between qemuDomainObjSetJobPhase and qemuDomainObjStartJobPhase and
change job owner only in qemuMigrationJobStartPhase.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
We will want to update the migration phase without affecting job ownership.
Either in the thread that already owns the job or from an event handler
which only changes the phase (of a job no-one owns) without assuming it.
Let's move the ownership change to a new qemuDomainObjStartJobPhase
helper and let qemuDomainObjSetJobPhase set the phase without touching
ownership.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The check can reveal a serious bug in our migration code and we should
not silently ignore it.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Into a new qemuMigrationCheckPhase helper, which can be reused in other
places.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When recovering from a failed post-copy migration, we need to go through
all migration phases again, but don't need to repeat all the steps in
each phase. Let's create a new set of migration phases dedicated to
post-copy recovery so that we can easily distinguish between normal and
recovery code.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Turn the final part of the Begin phase, which formats a domain XML for
migration, into a reusable helper.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When the libvirt daemon is restarted during an active post-copy migration,
we do not always mark the migration as broken. In this phase libvirt is
not really needed for migration to finish successfully. In fact the
migration could have even finished while libvirt was not running or it
may still be happily running.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
We want to use query-migrate QMP command to check the current migration
state when reconnecting to active domains, but the reply we get to this
command may not contain any statistics at all if called on the
destination host.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
So far migration could only be completed while a migration API was
running and waiting for the migration to finish. In case such an API could
not be called (the connection that initiated the migration is broken)
the migration would just be aborted or left in a "don't know what to do"
state. But this will change soon and we will be able to successfully
complete such migration once we get the corresponding event from QEMU.
This is specific to post-copy migration when vCPUs are already running
on the destination and we're only waiting for all memory pages to be
transferred. Such post-copy migration (which no-one is actively
watching) is called unattended migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When reconnecting to an active domain we need to use a different job
structure than the one referenced from the VM object.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Normally migrationPort is released in the Finish phase, but we need to
make sure it is properly released also in case qemuMigrationDstFinish is
not called at all. Currently the only callback which is called in this
situation is qemuMigrationDstPrepareCleanup, which already releases
migrationPort. This patch adds similar handling to additional callbacks
which will be used in the future.
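A sketch of what such a callback needs to do (type and field names are
simplified assumptions, not the exact libvirt code):

    /* Sketch only: release the migration port if Finish never ran. */
    static void
    example_release_migration_port(qemuDomainObjPrivate *priv)
    {
        virPortAllocatorRelease(priv->migrationPort);
        priv->migrationPort = 0;
    }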
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
By separating it into a dedicated qemuMigrationSrcComplete function
which can be later called in other places.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The function which started a migration phase should also finish it by
calling qemuMigrationJobFinish/qemuMigrationJobContinue so that the code
is easier to follow.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Refactors qemuMigrationDstFinish by moving some parts to a dedicated
function for easier introduction of postcopy resume code without
duplicating common parts of the Finish phase. The goal is to have the
following call graph:
- qemuMigrationDstFinish
  - qemuMigrationDstFinishOffline
  - qemuMigrationDstFinishActive
    - qemuMigrationDstFinishFresh
    - qemuMigrationDstFinishResume
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
To keep all cookie handling (parsing and formatting) in the same
function.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Refactors qemuMigrationDstFinish by moving some parts to a dedicated
function for easier introduction of postcopy resume code without
duplicating common parts of the Finish phase. The goal is to have the
following call graph:
- qemuMigrationDstFinish
  - qemuMigrationDstFinishOffline
  - qemuMigrationDstFinishActive
    - qemuMigrationDstFinishFresh
    - qemuMigrationDstFinishResume
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Refactors qemuMigrationDstFinish by moving some parts to a dedicated
function for easier introduction of postcopy resume code without
duplicating common parts of the Finish phase. The goal is to have the
following call graph:
- qemuMigrationDstFinish
  - qemuMigrationDstFinishOffline
  - qemuMigrationDstFinishActive
    - qemuMigrationDstFinishFresh
    - qemuMigrationDstFinishResume
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
We want to prevent our error path that can potentially kill the domain
on the destination host from overwriting an error reported earlier, but
we were only doing so in one specific path when starting vCPUs fails.
Let's do it in all paths.
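The underlying pattern is roughly libvirt's usual error-preservation dance
(a simplified sketch using the internal virErrorPreserveLast/virErrorRestore
helpers):

    /* Sketch: keep the error reported before cleanup and restore it after,
     * so destructive error paths cannot replace the original failure. */
    static void
    cleanup_preserving_error(void)
    {
        virErrorPtr orig_err;

        virErrorPreserveLast(&orig_err);
        /* ... cleanup that may kill the domain and raise its own errors ... */
        virErrorRestore(&orig_err);
    }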
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>