Note that we can only do this for intel-iommu and virtio-iommu,
which are configured using -device; smmuv3 is configured using
a machine type property, so there's no room on the command line
for an alias in that case.
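For illustration, the difference on the QEMU command line looks
roughly like this (options abbreviated, not taken from the patch):
  -device intel-iommu,intremap=on    (configured via -device, alias possible)
  -machine virt,iommu=smmuv3         (machine type property, no alias)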
https://bugzilla.redhat.com/show_bug.cgi?id=2108483
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
When hotplugging a chardev, Libvirt opens the corresponding
file/binds to a socket/does whatever is necessary to obtain an FD
that is later passed to QEMU. However, due to the wrong placement
of the function that does all of this
(qemuProcessPrepareHostBackendChardevHotplug()) it may happen
that a seclabel is set on a file, only for the file to be
unlink()-ed and created again (the former is done by
qemuSecuritySetChardevLabel(), the latter by the aforementioned
function). The unlink()-ing is done for UNIX sockets with
mode='bind' and happens inside qemuOpenChrChardevUNIXSocket().
Fortunately, these two steps can simply be swapped.
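A minimal sketch of the corrected ordering (control flow heavily
simplified, argument lists assumed):
  /* open the file / bind the socket first, obtaining the FDs */
  if (qemuProcessPrepareHostBackendChardevHotplug(vm, dev) < 0)
      goto cleanup;
  /* only then label the (now final) file; nothing unlink()s it anymore */
  if (qemuSecuritySetChardevLabel(driver, vm, chardev) < 0)
      goto cleanup;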
Fixes: ad81aa8ad0
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Kristina Hanicova <khanicov@redhat.com>
When hotplugging a chardev, Libvirt opens the corresponding
file/binds to a socket/does whatever is necessary to obtain an FD
that is later passed to QEMU. However, if something fails after
the FDs were transferred to QEMU but before the chardev is
actually added via the monitor, these FDs are never closed in
QEMU. This is rather suboptimal.
Fixes: 15bdced9b3
Fixes: ad81aa8ad0
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Kristina Hanicova <khanicov@redhat.com>
Commit 6262752460 added the acquiring of a job, but the async job is not
always VIR_ASYNC_JOB_MIGRATION_OUT, so the code fails when doing a save or
anything else. Correct the async job by passing it from the caller as
another parameter.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
libvirt currently will block migration for any vfio-assigned device
unless it is a network device that is associated with a virtio-net
failover device (i.e. if the hostdev object has a teaming->type ==
VIR_DOMAIN_NET_TEAMING_TYPE_TRANSIENT).
In the future there will be other vfio devices that can be migrated,
so we don't want to rely on this hardcoded block. QEMU 6.0+ will
inform us of any devices that block migration anyway (as a part
of qemuDomainGetMigrationBlockers()), so we only need to do the
hardcoded check in the case of old QEMU that can't provide that
information.
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
The new code that queries QEMU about migration blockers was put at the
top of qemuMigrationSrcIsAllowed(), but that function can also be
called in the case of offline migration (i.e. when the domain is
inactive / QEMU isn't running). This check should have been put inside
the "if (!(flags & VIR_MIGRATE_OFFLINE))" conditional, so let's move
it there.
Fixes: 156e99f686
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
The code is run with an async job and thus needs to make sure a nested
job is acquired before entering the monitor.
While touching the code in qemuMigrationSrcIsAllowed I also fixed the
grammar which was accidentally broken by v8.5.0-140-g2103807e33.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
This patch moves qemuDomainObjInitJob() into the hypervisor
directory as virDomainObjInitJob() so that it can be used by other
drivers as well.
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
We have qemuCgroupAllowDevicePath() which sets up the devices
controller for just one path, and if we have more paths we have
to call it in a loop. So far, we have just one such place, but
soon we'll have another one (for SGX memory). Separate the loop
into its own function so that it can be reused.
And while at it, move setting up the default set of devices so
that it happens first, right after all devices are disallowed.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
Inside of qemuSetupDevicesCgroup() there's the @deviceACL
variable, which points to a string list of devices that are
allowed in the devices controller by default. This list can either
come from qemu.conf (cfg->cgroupDeviceACL) or from the builtin
@defaultDeviceACL. However, a multiline ternary operator is used
when setting the variable, which is against our coding style.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
vDPA devices will be migratable soon, so we shouldn't unconditionally
block migration of any domain with a vDPA device. Instead, we should
rely on QEMU to make the decision when that info is available from the
query-migrate QMP command (QEMU versions too old to have that info in
the results of query-migrate don't support migration of vDPA devices,
so in that case we will continue to unconditionally block migration).
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
Since QEMU 6.0, if QEMU knows that a migration would fail,
'query-migrate' will return an array of error strings describing the
migration blockers. This can be used to check whether there are any
devices/conditions blocking migration.
This patch adds a call to this query at the top of
qemuMigrationSrcIsAllowed().
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
Since QEMU 6.0, if migration is blocked for some reason,
'query-migrate' will return an array of error strings describing the
migration blockers. This can be used to check whether there are any
devices, or other conditions, that would cause migration to fail.
This patch adds a function that sends this query via a QMP command and
returns the resulting array of reasons. qemuMigrationSrcIsAllowed()
will be able to use the new function to ask QEMU for migration
blockers, instead of the hardcoded guesses that libvirt currently has.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
Since QEMU 6.0, if migration is blocked for some reason, 'query-migrate'
will return an array of error strings describing the migration blockers.
This can be used to check whether there are any devices blocking
migration, etc.
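For reference, a reply carrying blockers may look roughly like this
(illustrative exchange; the reason text is made up):
  -> { "execute": "query-migrate" }
  <- { "return": { "blocked-reasons":
         [ "VFIO device doesn't support migration" ] } }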
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
QEMU supports hotplug of a cdrom device with USB or SCSI bus. Just
unblock these devices in qemuDomainAttachDeviceDiskLiveInternal() and
qemuDomainDetachPrepDisk().
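For example, a SCSI cdrom like the following can now be hotplugged
(illustrative XML only):
  <disk type='file' device='cdrom'>
    <source file='/var/lib/libvirt/images/cd.iso'/>
    <target dev='sda' bus='scsi'/>
    <readonly/>
  </disk>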
Fixes: https://gitlab.com/libvirt/libvirt/-/issues/261
Signed-off-by: minglei.liu <minglei.liu@smartx.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
This patch alters members of virDomainObjPrivateJobCallbacks to
make the code more consistent.
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
This patch moves qemuDomainJobObj into hypervisor/ as generalized
virDomainJobObj along with generalized private job callbacks as
virDomainObjPrivateJobCallbacks.
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
This patch removes variable 'async', which is used only once, and
replaces it with direct comparison with an enum member.
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
When formatting qemuCaps XML, the <cpudata/> element is
misaligned. This is because it contains multiple lines and
virBufferAsprintf() does not expect that. Switch to
virBufferAddStr() which does.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
The G_GNUC_NO_INLINE macro will eventually be marked as
deprecated [1] and we are recommended to use G_NO_INLINE instead.
Do the switch now, rather than waiting for compile-time warnings
to occur.
1: 15cd0f0461
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
These wrapper functions were used to adapt the virObjectUnref() function
signature for different callbacks. But in commit 0d184072, the
virObjectUnref() function was changed to return void instead of a
bool, so these adapters are no longer necessary.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
The getters/setters for individual properties of the migration
speed/downtime/cache size are unused now that we have switched to
setting them purely via migration parameters. Remove the unused
helpers.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The 'xbzrle-cache-size' parameter was added in qemu-2.11 thus all
supported qemu versions now use the new code path.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The 'downtime-limit' field of 'migrate-set-parameters' was introduced in
qemu-2.8, thus all qemu versions supported by libvirt use the new code.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The 'max-bandwidth' field was added as an argument of
'migrate-set-parameters' in qemu-2.8, thus all qemu versions supported
by libvirt already use the new code path.
This patch assumes its presence and removes the legacy code paths.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
QEMU offers two attributes for handling reset requests of a USB
host device: guest-reset and guest-resets-all. When combined they
act as follows:
1) guest-reset=false
   The guest is not allowed to reset the physical USB device.
2) guest-reset=true,guest-resets-all=false
   The guest is allowed to reset the device when it is not yet
   initialized (aka no USB bus address assigned). Usually this results
   in one guest reset being allowed. This is the default behavior.
3) guest-reset=true,guest-resets-all=true
   The guest is allowed to reset the device as it pleases.
Now, there's a clear 1:1 mapping with our representation of
guestReset, so generating the command line is trivial.
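For illustration, the mapping looks roughly like this (XML attribute
values assumed from the accompanying schema change):
  <source guestReset='off'>           -> guest-reset=false
  <source guestReset='uninitialized'> -> guest-reset=true,guest-resets-all=false
  <source guestReset='on'>            -> guest-reset=true,guest-resets-all=true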
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that we have a capability, validate that the QEMU we are
talking to has everything we need for guestReset.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We will need to set two attributes on the usb-host device:
guest-reset and guest-resets-all. The former was introduced in
QEMU v4.0.0-rc0~56^2 and the other in v4.2.0-rc1~9^2. Hence,
track only the latter, as it's only starting from that commit
that QEMU has both attributes.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
It was always possible to modify the inactive XML, because
VIR_DOMAIN_AFFECT_CURRENT (= 0) is accepted implicitly. But now
that the logic for changing both config and live XMLs is more
robust we can accept the VIR_DOMAIN_AFFECT_CONFIG flag too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
There are three APIs that allow changing IOThreads:
  virDomainAddIOThread()
  virDomainDelIOThread()
  virDomainSetIOThreadParams()
In the case of the QEMU driver these are handled by
qemuDomainChgIOThread() which attempts to be versatile enough to
work on both inactive and live domain definitions at the same
time. However, it's a bit clumsy - when a change to the live
definition succeeds but fails in the inactive definition then
there's no rollback. And somewhat rightfully so - changes to the
live definition are in general harder to roll back. Therefore, do
what we do elsewhere (qemuDomainAttachDeviceLiveAndConfig(),
qemuDomainDetachDeviceAliasLiveAndConfig(), ...):
  1) do the change to the inactive XML first,
  2) in fact, do the change to a copy of the inactive XML,
  3) swap the inactive XML and its copy only after everything
     succeeded.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When the virDomainSetIOThreadParams() API - or rather its QEMU
implementation, qemuDomainSetIOThreadParams() - is called, the
typed params are parsed by qemuDomainIOThreadParseParams() into
the qemuMonitorIOThreadInfo struct. In the struct we have an
<int, bool> pair for every IOThread attribute we can tune through
the monitor. The struct is then passed to
qemuMonitorJSONSetIOThread() which looks at each bool and, if set,
sets the corresponding attribute to the given value. Each
attribute is thus changed in a separate call. While this works
for attributes independent of each other ("poll-max-ns",
"poll-grow", "poll-shrink"), it does not always work for the
other attributes ("thread-pool-min" and "thread-pool-max").
The limitation here is that the lower boundary (minimum) has to
be lower than (or equal to) the upper boundary (maximum) at all
times. This means that in some cases we might need to set the
attributes in reversed order to meet the constraint.
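A minimal sketch of the ordering logic (helper name and variables
hypothetical, logic simplified):
  /* growing: raise the upper bound first so that
   * thread-pool-min <= thread-pool-max holds at every step */
  if (newMax >= curMin) {
      setIOThreadParam(mon, id, "thread-pool-max", newMax);
      setIOThreadParam(mon, id, "thread-pool-min", newMin);
  } else {
      /* shrinking below the current minimum: lower the bottom first */
      setIOThreadParam(mon, id, "thread-pool-min", newMin);
      setIOThreadParam(mon, id, "thread-pool-max", newMax);
  }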
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/339
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
The domain post parse functions currently live in domain_conf.c
which thus keeps growing ever larger. Mimic what we've done for
the validation code and move the post parse code into a separate
file: domain_postparse.c.
I've started by moving every function with PostParse in its name
into the new file and then went hunting for compile errors caused
by missing helper functions, only to move them as well.
In the end, I've moved the virDomainDefPostParse symbol in
libvirt_private.syms into a new section. And while
virDomainDeviceDefPostParseOne() is made 'public' in
domain_postparse.h too, I'm not exporting it because it has no
caller outside src/conf/ and it's unlikely it ever will.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
There are two places where the @model member of the
_virDomainIOMMUDef struct is cast to virDomainIOMMUModel,
which is completely unnecessary because the struct already
declares the member with that type.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
No sane firmware build will fail this check, but just to be on
the safe side let's check anyway.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Currently, a firmware configuration such as
  <os firmware='efi'>
    <firmware>
      <feature enabled='yes' name='enrolled-keys'/>
    </firmware>
  </os>
will correctly pick a firmware that implements the Secure Boot
feature and initialize the NVRAM file so that it contains the
keys necessary to enforce the signing requirements. However, the
lack of a
<loader secure='yes'/>
element makes it possible for pflash writes to happen outside
of SMM mode. This means that the authenticated UEFI variables
where the keys are stored could potentially be overwritten by
malicious code running in the guest, thus making it possible to
circumvent Secure Boot.
To prevent that from happening, automatically turn on the
loader.secure feature whenever a firmware that implements Secure
Boot is chosen by the firmware autoselection logic. This is
identical to the way we already automatically enable SMM in such
a scenario.
Note that, while this is technically a guest-visible change, it
will not affect migration of existing VMs and will not prevent
legitimate guest code from running.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
When resuming post-copy migration users may want to limit the bandwidth
used by the migration and use a value that is different from the one
specified when the migration was originally started.
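For example (illustrative invocation; the resume flag as exposed by
virsh):
  virsh migrate --live --p2p --postcopy --postcopy-resume \
        --bandwidth 100 guest qemu+ssh://dst.example.com/system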
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/333
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
So that we can apply selected migration parameters even when resuming
post-copy migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
We will need to annotate individual parameters a bit more than just
noting their type. Let's introduce qemuMigrationParamInfo replacing
simple qemuMigrationParamTypes with an array of structs.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The flags will later be used to determine which parameters should
actually be applied.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
My original commit v8.4.0-288-gf01fc4d119 accidentally forgot to fix
both instances of the same problem. While it fixed the destination side
of migration, the source one remained broken.
However, that commit was also wrong in saying the issue could have
caused unlimited memory locking to be allowed for QEMU when RDMA
migration was used. It could not, because the code would refuse to even
think about starting RDMA migration if hard_limit was not set. But
avoiding the "mem.hard_limit > 0" check is useful anyway.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The call was introduced in v8.4.0-rc1~183, but the first real
problem was introduced in v8.4.0-rc1~170: there's a
qemuBuildInterfaceConnect() call inside of
qemuDomainAttachNetDevice(). If the former fails, then the
function is immediately returned from instead of jumping onto the
cleanup label. This is crucial, because at this point the domain
definition contains a 'borrowed' net definition, which is then
freed since an error was met. The domain definition is then left
with a dangling pointer which leads to all sorts of different
crashes.
Fixes: 29d022b1eb
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2102009
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Our documentation says RDMA migration requires hard_limit to be set so
that we know how big memory locking limit should be set for the domain
during migration. But since commit v1.2.13-71-gcf521fc8ba (which changed
the default hard_limit value from 0 to
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED) we were actually setting memlock
limit to unlimited if hard_limit was not set.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
For RDMA migration we update memory locking limit, but never set it back
once migration finishes (on the destination host) or aborts (on the
source host).
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This helper will not try to set the limit if it is already big enough,
which may be useful when libvirt daemon is running in a containerized
environment and is not allowed to change memory locking limit.
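A rough sketch of the idea (simplified; the real helper adjusts the
limit of the QEMU process rather than the daemon's own):
  struct rlimit rlim;
  /* if the current limit already covers what we need, leave it
   * alone - raising it may be denied in a container */
  if (getrlimit(RLIMIT_MEMLOCK, &rlim) == 0 && rlim.rlim_cur >= bytes)
      return 0;
  rlim.rlim_cur = rlim.rlim_max = bytes;
  return setrlimit(RLIMIT_MEMLOCK, &rlim) == 0 ? 0 : -1;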
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
qemuDomainAdjustMaxMemLock combined computing the desired limit with
applying it. This patch separates the code to apply a memory locking
limit to a new qemuDomainSetMaxMemLock helper for better reusability.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Convert all the cases where we can unconditionally free
the virURI at the end of scope.
In libxlDomainMigrationDstPrepare, uri is only filled
if uri_in was present, so moving the virURIFree out of
the condition is safe.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Replace tpm->type and tpm->model qemuCaps validation with the
similar logic in domcaps.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
The qemu `tpm-tis` device is an ISA device, so it's only really
applicable to x86 archs. For all non-x86 archs we should use
`tpm-tis-device` instead.
This fixes tpm-tis usage on armv7l and riscv.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Checking against qemu capabilities should be enough here.
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/329
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
We explicitly check whether the value is YES or NO, which makes
it unnecessary to make sure it's not ABSENT beforehand.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This patch introduces the logic to format and parse remote NVRAM.
Update the NVRAM element schema and docs to support network-backed
NVRAM. NVRAM backed over the network gives the flexibility to start
the VM on any host without having to worry about where to get the
latest nvram image.
  <nvram type='network'>
    <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/0'>
      <host name='example.com' port='6000'/>
    </source>
  </nvram>
or
  <nvram type='file'>
    <source file='/var/lib/libvirt/nvram/guest_VARS.fd'/>
  </nvram>
In the qemu driver we will support the new definition only with
QEMUs supporting -blockdev.
Signed-off-by: Prerna Saxena <prerna.saxena@nutanix.com>
Signed-off-by: Florian Schmidt <flosch@nutanix.com>
Signed-off-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Prepare for network-backed nvram by refusing to reset the nvram on
boot and by not checking whether it exists. We will not support
filling it from a template.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Setup all fields for use with -blockdev.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
We only really use it when starting up the VM, so doing this step
when reconnecting is totally pointless.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Use the designated helpers for virStorageSource instead of using
the file-based ones with a check.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Since we now have a full virStorageSource for storing the nvram path we
don't need the extra dance of transferring the data into the 'pflash1'
variable which was an intermediary solution to use -blockdev.
For now we keep it functionally identical to the previous impl.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Currently, libvirt allows only local filepaths to specify the location
of the 'nvram' image. Changing it to virStorageSource type will allow
supporting remote storage for nvram.
Signed-off-by: Prerna Saxena <prerna.saxena@nutanix.com>
Signed-off-by: Florian Schmidt <flosch@nutanix.com>
Signed-off-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Extract the internals of qemuDomainPrepareStorageSourceBlockdev into
qemuDomainPrepareStorageSourceBlockdevNodename so that we can reuse it
when instantiating the virStorageSource for pflash backing.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
This is a partial revert of 87a43a907f.
The change to use g_clear_pointer() in more places was accidentally
applied to cases involving vir_g_source_unref().
In some cases, the ordering of g_source_destroy() and
vir_g_source_unref() was reversed, which resulted in the source
being marked as destroyed after it was already unreferenced. This
use-after-free might work in many cases, but glib versions older
than 2.64.0 may defer the unref to run within the main thread to
avoid a race condition, which creates a large distance between the
g_source_unref() and the g_source_destroy().
In some cases, the call to vir_g_source_unref() was replaced with a
second call to g_source_destroy(), leading to a memory leak or worse.
In our experience, the symptoms were that use of libvirt-python became
slower over time, with OpenStack nova-compute initially taking around
one second to periodically query the host PCI devices, and within an
hour taking over a minute to complete the same operation, until it
was eventually running this query back-to-back, resulting in the
nova-compute process consuming 100% of one CPU thread, losing its
RabbitMQ connection frequently, and showing up as down to the control
plane.
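A minimal sketch of the correct teardown order (variable names
hypothetical):
  /* destroy the source while we still hold a reference ... */
  g_source_destroy(src);
  /* ... and only then drop that reference */
  vir_g_source_unref(src, ctx);
  src = NULL;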
Signed-off-by: Mark Mielke <mark.mielke@gmail.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
When creating a TAP interface we can end up with multiple FDs,
each representing one queue. However, these FDs must be
relabelled as they are then passed to QEMU. In the case of
qemuBuildInterfaceConnect() we allocate the array for the FDs and
then let the function corresponding to the <interface/> type fill
the array with FDs. When any of the functions meets an error,
it's also responsible for closing previously opened FDs. However,
the functions take a shortcut: iterate through each member of the
array and close it (if it's non-negative). This assumes that the
array is initialized to negative values, which used to be the case
before the rewrite in v8.4.0-rc1~170, but after it that's no longer
the case. Subsequently, "random" FDs are closed (okay, not that
random since the array is allocated via g_new0(), but hey - FD 0
is still a valid FD and might be valuable, actually).
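A sketch of the fix (array and size names hypothetical):
  /* pre-fill the array with -1 so that error paths can safely
   * close only the slots that were actually filled */
  int *tapfd = g_new0(int, tapfdSize);
  size_t i;
  for (i = 0; i < tapfdSize; i++)
      tapfd[i] = -1;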
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2075383#c18
Fixes: 7a38d3946b
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Since the main-loop and iothread classes are derived from the
same class (EventLoopBaseClass) we don't need a new capability and
can use QEMU_CAPS_IOTHREAD_THREAD_POOL_MAX directly to check
whether QEMU's capable of setting the defaultiothread pool size.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
The QEMU driver needs to be taught how to set the
VIR_DOMAIN_IOTHREAD_THREAD_POOL_MIN and
VIR_DOMAIN_IOTHREAD_THREAD_POOL_MAX parameters introduced in the
previous commit on a given IOThread. Fortunately, this is fairly
trivial to do, and since these two parameters are exposed in the
domain XML too, the update of the inactive XML can be wired up as
well.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Now that we have a capability that reflects whether QEMU is
capable of setting iothread pool size, let's introduce a
validator check to make sure users are not trying to use this
feature with QEMU that doesn't support it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
This capability reflects whether QEMU allows setting
thread-pool-min and thread-pool-max attributes on iothread
object. Since both attributes were introduced in the same commit
(v7.0.0-878-g71ad4713cc) and can't exist independently of each
other we can stick with one capability covering both of them.
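The XML knobs gated by this capability look like this (illustrative
snippet):
  <iothreadids>
    <iothread id='1' thread_pool_min='8' thread_pool_max='16'/>
  </iothreadids>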
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
They were constructed from two separate strings using "%s: %s", which
is ugly and does not work well with translations.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
When a domain has a guest agent channel enabled and the agent is running
in the guest, we will get a VSERPORT_CHANGE event on the destination host
as soon as we start vCPUs there. This is not an issue for normal
migration, but post-copy migration will remain running after we started
vCPUs on the destination. If it runs for more than 30s, the
VSERPORT_CHANGE event handler will fail to get a job and log the
following error message:
  Timed out during operation: cannot acquire state change lock (held
  by monitor=remoteDispatchDomainMigrateFinish3Params)
and of course we will think the guest agent is not connected and thus
all APIs talking to it will fail until the agent or libvirt daemon is
restarted.
Luckily we only need to update the channel state (to mark it as
connected) and connect to the agent, neither of which conflicts with
migration. Thus we can safely enable processing this event during
migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
This is a special job for operations that need to modify domain state
during an active migration. The modification must not affect any state
that could conflict with the migration code. This is useful mainly for
event handlers that need to be processed during migration and which
could otherwise time out on acquiring a normal MODIFY job.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Since all parts of post-copy recovery have been implemented now, it's
time to enable the corresponding flag.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Every single call to qemuMigrationJobContinue needs to register a
cleanup callback in case the migrating domain dies between phases or
when migration is paused due to a failure in postcopy mode.
Let's integrate registering the callback in qemuMigrationJobContinue to
make sure the current thread does not release a migration job without
setting a cleanup callback.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The callback will properly cleanup non-p2p migration job in case the
migrating domain dies between Begin and Perform while the client which
controls the migration is not cooperating (normally the API for the next
migration phase would handle this).
The same situation can happen even after Prepare and Perform phases, but
they both already register a suitable callback, so no fix is needed
there.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Normally the structure is created once the source reports completed
migration, but with post-copy migration we can get here even after
libvirt daemon was restarted. It doesn't make sense to preserve the
structure in our status XML as we're going to rewrite almost all of it
while refreshing the stats anyway. So we just create the structure here
if it doesn't exist to make sure we can properly report statistics of a
completed migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Everything was already done in the normal Finish phase and vCPUs are
running. We just need to wait for all remaining data to be transferred.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The QEMU process is already running, all we need to do is to call
migrate-recover QMP command. Except for some checks and cookie handling,
of course.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The non-postcopy case talks to the QEMU monitor and thus needs to
create a nested job. Since qemuMigrationAnyConnectionClosed is called
in case there's no thread processing a migration API, we need to make
the current thread a temporary owner of the migration job to avoid
"This thread doesn't seem to be the async job owner: 0". This is done
by starting a migration phase.
While no monitor interaction happens in the postcopy case and just
setting the phase (to indicate a broken postcopy migration) would be
enough, being consistent and setting the owner does not hurt anything.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
To prepare the code for handling incoming migration too.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The function is now called qemuMigrationAnyConnectionClosed to make it
clear it is supposed to handle broken connection during migration. It
will soon be used on both sides of migration so the "Src" part was changed
to "Any" to avoid renaming the function twice in a row.
The original *Cleanup name could easily be confused with cleanup
callbacks called when a domain is destroyed.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
This command tells QEMU to start listening for an incoming post-copy
recovery connection. Just like migrate-incoming is used for starting
fresh migration on the destination host.
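On the QMP level this amounts to something like (URI illustrative):
  -> { "execute": "migrate-recover",
       "arguments": { "uri": "tcp:0.0.0.0:49152" } }
  <- { "return": {} }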
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Offline migration jumps over a big part of qemuMigrationDstPrepareFresh.
Let's move that part into a new qemuMigrationDstPrepareActive function
to make the code easier to follow.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Moves most of the code from qemuMigrationDstPrepareAny to a new
qemuMigrationDstPrepareFresh so that qemuMigrationDstPrepareAny can be
shared with post-copy resume.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
It just calls migrate QMP command with resume=true without having to
worry about migration capabilities or parameters, storage migration,
etc. since everything has already been done in the normal Perform phase.
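In QMP terms this is roughly (URI illustrative):
  -> { "execute": "migrate",
       "arguments": { "uri": "tcp:192.168.122.1:49152",
                      "resume": true } }
  <- { "return": {} }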
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
qemuMigrationSrcRun does a lot of things before and after telling QEMU
to start the migration. Let's make the core reusable by moving it to a
new qemuMigrationSrcStart function.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
To make the code flow a bit more sensible.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Mostly we just need to check whether the domain is in a failed post-copy
migration that can be resumed.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
This phase marks a migration protocol as broken in a post-copy phase.
Libvirt is no longer actively watching the migration in this phase as
the migration API that started the migration failed.
This may either happen when post-copy migration really fails (QEMU
enters postcopy-paused migration state) or when the migration still
progresses between both QEMU processes, but libvirt lost control of it
because the connection between libvirt daemons (in p2p migration) or a
daemon and client (non-p2p migration) was closed. For example, when one
of the daemons was restarted.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Both qemuMigrationJobSetPhase and qemuMigrationJobStartPhase were doing
the same thing, which made little sense. Let's follow the difference
between qemuDomainObjSetJobPhase and qemuDomainObjStartJobPhase and
change job owner only in qemuMigrationJobStartPhase.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
We will want to update migration phase without affecting job ownership.
Either in the thread that already owns the job or from an event handler
which only changes the phase (of a job no-one owns) without assuming it.
Let's move the ownership change to a new qemuDomainObjStartJobPhase
helper and let qemuDomainObjSetJobPhase set the phase without touching
ownership.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The check can reveal a serious bug in our migration code and we should
not silently ignore it.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Into a new qemuMigrationCheckPhase helper, which can be reused in other
places.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When recovering from a failed post-copy migration, we need to go through
all migration phases again, but don't need to repeat all the steps in
each phase. Let's create a new set of migration phases dedicated to
post-copy recovery so that we can easily distinguish between normal and
recovery code.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Turn the final part of the Begin phase, formatting a domain XML for
migration, into a reusable helper.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When libvirt daemon is restarted during an active post-copy migration,
we do not always mark the migration as broken. In this phase libvirt is
not really needed for migration to finish successfully. In fact the
migration could have even finished while libvirt was not running or it
may still be happily running.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
We want to use query-migrate QMP command to check the current migration
state when reconnecting to active domains, but the reply we get to this
command may not contain any statistics at all if called on the
destination host.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
So far migration could only be completed while a migration API was
running and waiting for the migration to finish. In case such API could
not be called (the connection that initiated the migration is broken)
the migration would just be aborted or left in a "don't know what to do"
state. But this will change soon and we will be able to successfully
complete such migration once we get the corresponding event from QEMU.
This is specific to post-copy migration when vCPUs are already running
on the destination and we're only waiting for all memory pages to be
transferred. Such post-copy migration (which no-one is actively
watching) is called unattended migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When reconnecting to an active domain we need to use a different job
structure than the one referenced from the VM object.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Normally migrationPort is released in the Finish phase, but we need to
make sure it is properly released also in case qemuMigrationDstFinish
is not called at all. Currently the only callback called in this
situation is qemuMigrationDstPrepareCleanup, which already releases
migrationPort. This patch adds similar handling to additional callbacks
which will be used in the future.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
By separating it into a dedicated qemuMigrationSrcComplete function
which can be later called in other places.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The function which started a migration phase should also finish it by
calling qemuMigrationJobFinish/qemuMigrationJobContinue so that the code
is easier to follow.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Refactors qemuMigrationDstFinish by moving some parts to a dedicated
function for easier introduction of postcopy resume code without
duplicating common parts of the Finish phase. The goal is to have the
following call graph:
- qemuMigrationDstFinish
- qemuMigrationDstFinishOffline
- qemuMigrationDstFinishActive
- qemuMigrationDstFinishFresh
- qemuMigrationDstFinishResume
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
To keep all cookie handling (parsing and formatting) in the same
function.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Refactors qemuMigrationDstFinish by moving some parts to a dedicated
function for easier introduction of postcopy resume code without
duplicating common parts of the Finish phase. The goal is to have the
following call graph:
- qemuMigrationDstFinish
- qemuMigrationDstFinishOffline
- qemuMigrationDstFinishActive
- qemuMigrationDstFinishFresh
- qemuMigrationDstFinishResume
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Refactors qemuMigrationDstFinish by moving some parts to a dedicated
function for easier introduction of postcopy resume code without
duplicating common parts of the Finish phase. The goal is to have the
following call graph:
- qemuMigrationDstFinish
- qemuMigrationDstFinishOffline
- qemuMigrationDstFinishActive
- qemuMigrationDstFinishFresh
- qemuMigrationDstFinishResume
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
We want to prevent our error path that can potentially kill the domain
on the destination host from overwriting an error reported earlier, but
we were only doing so in one specific path when starting vCPUs fails.
Let's do it in all paths.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The comment about QEMU < 0.10.6 has been irrelevant for years.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
By separating it into a dedicated qemuMigrationDstComplete function
which can be later called in other places.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The final part of Finish phase will be refactored into a dedicated
function and we don't want to generate the cookie there.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Let's call it "error" so that it's clear the label is only used in
failure path.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Most of the code in the "endjob" label is executed only on failure.
Let's duplicate the rest so that the label can be used only in the
error path, making the success path easier to follow and refactor.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Code executed only when dom != NULL can be moved before "endjob" label,
to the only place where dom is set.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
We don't need the object until we get to the "endjob" label.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When connection breaks during post-copy migration, QEMU enters
'postcopy-paused' state. We need to handle this state and make the
situation visible to upper layers.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Even though a migration is paused, we still want to see the amount of
data transferred so far and that the migration is indeed not progressing
any further.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Migration is a job which takes some time and if it succeeds, there's
nothing to call another migration on. If a migration fails, it might
make sense to rerun it with different arguments, but this would only be
done once the first migration fails rather than while it is still
running.
If this was not enough, the migration job now stays active even if
post-copy migration fails and anyone possibly retrying the migration
would be waiting for the job timeout just to get a suboptimal error
message.
So let's special case getting a migration job when another one is
already active.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Jobs that are supposed to remain active even when libvirt daemon
restarts were reported as started at the time the daemon was restarted.
This is not very helpful, we should restore the original timestamp.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Since we keep the migration job active when post-copy migration fails,
we need to restore it when reconnecting to running domains.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When migration fails after it already switched to post-copy phase on the
source, but early enough that we haven't called Finish on the
destination yet, we know the vCPUs were not started on the destination
and the source host still has a complete state of the domain. Thus we
can just ignore the fact post-copy phase started and normally abort the
migration and resume vCPUs on the source.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When post-copy migration fails, we can't just abort the migration and
resume the domain on the source host as it is already running on the
destination host and no host has a complete state of the domain memory.
Instead of the current approach of just marking the domain on both ends
as paused/running with a post-copy failed sub state, we will keep the
migration job active (even though the migration API will return failure)
so that the state is more visible and we can better control what APIs
can be called on the domains and even allow for resuming the migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The code for setting up a previously active backup job in
qemuProcessRecoverJob is generalized into a dedicated function so that
it can be later reused in other places.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
It is used for saving a job out of the domain object, just like
virErrorPreserveLast is used for errors. Let's make the naming
consistent, as Restore would suggest different semantics.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The function can be used as a callback for qemuDomainCleanupAdd to
automatically clean up a migration job when a domain is destroyed.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The function never returns anything but zero.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The events would normally be triggered only if we're changing domain
state. But most of the time the domain is already in the right state and
we're just changing its substate from {PAUSED,RUNNING}_POSTCOPY to
*_POSTCOPY_FAILED. Let's emit lifecycle events explicitly when post-copy
migration fails to make the failure visible without polling.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>