Our streams are not the best transport for migration data, and we now
support TLS for security. It's unlikely that there will be enough
motivation to add a new migration protocol for tunnelling NBD too.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Due to failures to unlink on a previous rename/undefine we can already have
leftover files (autostart link etc.) for the domain being defined. Remove them.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
qemuMigrationSrcCleanup currently uses qemuDomainObjDiscardAsyncJob. But
discard does not decrement the jobs_queued counter, so it leaks. Discard also
does not notify other threads that the job condition is available. It does
reset the nested job, but a nested job is not possible under these conditions.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Don't hide our use of GHashTable behind our typedef. This will also
promote using glib's hash functions directly.
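For illustration, a minimal sketch of what using GLib's hash API directly
looks like (the key/value strings here are made up for the example):

  GHashTable *table = g_hash_table_new_full(g_str_hash, g_str_equal,
                                            g_free, g_free);

  g_hash_table_insert(table, g_strdup("vda"), g_strdup("some value"));
  if (g_hash_table_lookup(table, "vda"))
      VIR_DEBUG("found an entry for 'vda'");
  g_hash_table_unref(table);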
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Matt Coleman <matt@datto.com>
In one of the recent patches the way we start the NBD server for
incoming migration was reworked (v6.8.0-rc1~298). A new boolean
was introduced that tracks whether the NBD server was started so
that we don't start it twice nor record the port in the port
allocator twice. Well, this idea is good, but in the
implementation the boolean is never set, so we are reserving the
port twice and would be starting the NBD server twice too if it
weren't for the port reservation failure.
Fixes: e74d627bb3
Reported-by: Vjaceslavs Klimovs <vklimovs@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Enable <interface type='vdpa'> for qemu domains. This provides basic
support; hotplug and migration are not yet supported.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
Centralize the logic deciding which arguments to use when exporting a
block backend via NBD to a single place so that it can be centrally
fixed in upcoming commits to support the new export method via
'block-export-add'.
Additionally this allows simplification of the caller from migration as
the logic deciding which arguments to use is extracted too.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
If storage migration is requested, and the destination storage does
not exist on the remote host, qemu's migration support will call
into the libvirt storage driver to precreate the destination storage.
The storage driver virConnectPtr is opened too early though, adding
an unnecessary dependency on the storage driver for several cases
that don't require it. This currently requires kubevirt to install
the storage driver even though they aren't actually using it.
Push the virGetConnectStorage calls to right before the places where
they are actually needed.
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Use a more descriptive name and move the verb to the end so that the
functions conform with the naming policy.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We need an empty cookie, so use qemuMigrationCookieNew instead of
qemuMigrationEatCookie with NULL/0 arguments.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Use a more descriptive name and move the verb to the end so that the
functions conform with the naming policy.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Block migration when transient disk option is enabled to simplify the
handling of the overlay files.
Signed-off-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Ján Tomko <jtomko@redhat.com>
Add an abort() on the class/object allocation failures so that
virStorageSourceNew() always returns a virStorageSource and remove
checks from all callers.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In the past we had to declare @cfg and then explicitly unref it.
But now, with glib we can use g_autoptr() which will do the unref
automatically and thus is more bulletproof.
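A minimal sketch of the before/after pattern (the surrounding function is
omitted):

  Before:
      virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
      ...
      virObjectUnref(cfg);

  After:
      g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
      /* cfg is unreferenced automatically when it goes out of scope */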
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
This allows:
a) migration without access to network
b) complete control of the migration stream
c) easy migration between containerised libvirt daemons on the same host
Resolves: https://bugzilla.redhat.com/1638889
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Add a new typed parameter for migration and use it as the UNIX socket path
for the NBD part of the migration. Also add virsh support.
Partially resolves: https://bugzilla.redhat.com/1638889
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Clean up the semantics by using one extra self-describing variable.
This also fixes the port allocation when the port is specified.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Instead of saving some data from a union up front and changing an overlaid
struct before using said data, let's just set the new values after they are
decided. This will increase the readability of future commit(s).
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
GCC 10 complains that "well_formed_uri" may be used uninitialized.
Even though it is a false positive, we can easily avoid it.
This avoids:
../src/qemu/qemu_migration.c: In function ‘qemuMigrationDstPrepareDirect’:
../src/qemu/qemu_migration.c:2920:16: error: ‘well_formed_uri’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
2920 | if (well_formed_uri) {
| ^
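One possible way to silence it is to give the variable an explicit initial
value; which value is correct depends on the surrounding logic, so this is
only a sketch:

  bool well_formed_uri = false;  /* explicit init avoids the false positive */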
Signed-off-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
Reviewed-by: Marc Hartmayer <mhartmay@linux.ibm.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
The role (master or peer) controls how the domain behaves on migration.
For more details about migration with ivshmem, see
https://git.qemu.org/?p=qemu.git;a=blob_plain;f=docs/system/ivshmem.rst;hb=HEAD
It's an optional attribute in libvirt; QEMU will choose a default
role for the ivshmem device if it is not specified by the user.
With the device property 'role', the value can be 'master' or 'peer':
- 'master' ('master=on' in QEMU): the guest will copy
  the shared memory on migration to the destination host.
- 'peer' ('master=off' in QEMU): migration is disabled.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Signed-off-by: Yang Hang <yanghang44@huawei.com>
Signed-off-by: Wang Xin <wangxinxin.wang@huawei.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
To remove the dependency of `qemuDomainJob` on job-specific
parameters, a `privateData` pointer is introduced.
To handle it, a structure of callback functions is
also introduced.
Signed-off-by: Prathamesh Chavan <pc44800@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Use g_autoptr() and remove both 'error' and 'cleanup' labels.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Use g_autoptr() and remove the 'cleanup' label.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Use g_autoptr() and remove the unneeded 'cleanup' label.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Use g_autoptr() and remove the 'cleanup' label.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Use g_autofree and remove the 'cleanup' label.
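Roughly, the pattern being applied in this series (names are illustrative):

  Before:
      char *tmp = NULL;
      ...
   cleanup:
      VIR_FREE(tmp);
      return ret;

  After:
      g_autofree char *tmp = NULL;
      ...
      return ret;   /* tmp is freed automatically */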
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The same functionality can be achieved using the migrate-set-parameters QMP
command with the max-bandwidth parameter.
https://bugzilla.redhat.com/show_bug.cgi?id=1829545
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Upon migration with disks, libvirt determines if each disk exists
on the destination and tries to pre-create missing ones. Well,
NVMe disks can't be pre-created, but they can be checked for
presence.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1823639
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In order to add a string to qemuDomainJobInfo we must ensure that it's
freed and copied properly. Add helpers to copy and free the structure
and adjust the code to use them properly for the new semantics.
Additionally, the allocation is changed to g_new0 as it includes the
type and thus makes it very easy to grep for all the allocations of a given
type.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
An upcoming patch will remove it.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Mores <pmores@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Helper processes may have their state migrated within the QEMU data
stream thanks to the QEMU "dbus-vmstate".
libvirt maintains the list of helpers to be migrated. The
"dbus-vmstate" is added when required, and given the list of helper
IDs that must be migrated, on both the save and load sides.
See also:
https://git.qemu.org/?p=qemu.git;a=blob;f=docs/interop/dbus-vmstate.rst
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
This code was based on a per-helper instance and peer-to-peer
connections. The code that landed in qemu master for v5.0 relies
on a single instance and a D-Bus bus.
Instead of trying to adapt the existing dbus-vmstate code, let's
remove it and resubmit. That should make reviewing easier.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Historically threads are given a name based on the C function,
and this name is just used inside libvirt. With OS level thread
naming this name is now visible to debuggers, but also has to
fit in 15 characters on Linux, so function names are too long
in some cases.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
This is not yet supported.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
Include virutil.h in all files that use it,
instead of relying on it being pulled in somehow.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Firstly, the check for disk I/O error can be moved into the 'if
(!offline)' section a few lines below.
Secondly, checks for vmstate and slirp should be moved under the
same section because they reflect live state of a domain. For
offline migration no QEMU is involved and thus these restrictions
are not valid.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This addresses portability to Windows and standardizes
error reporting. It fixes a number of places which
failed to set O_CLOEXEC or failed to report errors.
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Aside from stray error (actually warning) messages due to an
unrecognized response from qemu, this isn't even necessary - the
migration proceeds successfully to completion anyway.
(I'm not sure where to see this status reported in the API though - do
we need to add an extra state, or recognition of a new event somewhere?)
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Normally a PCI hostdev can't be migrated, so
qemuMigrationSrcIsAllowedHostdev() won't permit it. In the case of a
hostdev network interface that has <teaming type='transient'/> set,
QEMU will automatically unplug the device prior to migration, and
re-plug a corresponding device on the destination. This patch modifies
qemuMigrationSrcIsAllowedHostdev() to allow domains with those devices
to be migrated.
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
There are a large number of different header files that
are related to the sockets APIs. The virsocket.h header
includes all of the relevant headers for Windows and UNIX
in one convenient place. If virsocketaddr.h is already
included, then there's no need for virsocket.h
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When using blockdev configurations the 'device' argument of
'blockdev-mirror' must correspond to the topmost node in the block node
graph. Libvirt didn't do this properly in the case when the 'copy_on_read'
option was enabled on the disk.
Use qemuDomainDiskGetTopNodename to fix it for the blockdev-mirror calls
in qemuDomainBlockCopy and the non-shared-storage migration.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
To simplify the implementation, some restrictions are added. For
instance, an NVMe disk can't go to any bus but virtio, has to
be of type 'disk', and can't have startupPolicy set.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
There are going to be more disk types that are considered unsafe
with respect to migration. Therefore, move the error reporting
call outside of the if() body and rework the if-else combo into a switch().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
We will want to use the async job infrastructure along with all the APIs
and events for the backup job, so add the backup job as a new async job
type.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The NUMA cells are stored directly in the virCapsHostPtr
struct. This moves them into their own struct allowing
them to be stored independently of the rest of the host
capabilities. The change is used as an excuse to switch
the representation to use a GPtrArray too.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Our normal practice is for the object type to be the name prefix, and
the object instance be the first parameter passed in.
Rename these to virDomainObjSave and virDomainDefSave moving their
primary parameter to be the first one. Ensure that the xml options
are passed into both functions in prep for future work.
Finally enforce checking of the return type and mark all parameters
as non-NULL.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
With blockdev we need to refer to the nodename of the disk source image
as the source argument for the blockdev-mirror operation while still
keeping the old job name. With blockdev we must also persist the job in
qemu.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Separate out allocation of the virStorageSource corresponding to the
target NBD export of the migration.
As part of the splitout we allocate the export name explicitly as that
one must not change, regardless of whether blockdev is used or not, to
preserve compatibility.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Now that the cleanup section does not exist remove the label.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
qemuMigrationSrcNBDCopyCancelOne uses the block job data structure but
generated its own job name rather than taking it from the block job
data.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
With -blockdev we must use the nodename as the export but we must keep
the name of the export as it was before to ensure compatibility.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Declare the variable inside the loop with automatic clearing.
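Roughly (variable names are illustrative, 'i' is assumed to be a size_t):

  for (i = 0; i < ndisks; i++) {
      g_autofree char *diskAlias = g_strdup_printf("disk-%zu", i);

      /* used only within this iteration; freed automatically each time */
  }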
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
For managed save we can choose between various compression
methods. I randomly tested the 'xz' program on an 8 GB guest
and was surprised to have to wait > 50 minutes for it to
finish compressing, with 'xz' burning 100% cpu for the
entire time. Despite the impressive compression, this is
completely useless in the real world as it is far too long
to wait to save the VM.
The 'xz' binary defaults to '-6' optimization level which
aims for high compression, with moderate memory usage,
at the expense of speed.
This change switches it to use the '-3' optimization level,
which is documented as the one that optimizes speed at the
expense of compression. Even with this, it will still
outperform all the other options in terms of compression
level. It is a little less than 4x faster than '-6', which
means 'xz' starts to be a viable choice for people who
really want the best compression.
The test results on a 1 GB, fairly freshly booted VM are
as follows
format |  save | restore |   size
=======+=======+=========+=======
raw    |   05s |      1s | 428 MB
lzop   |   05s |      3s | 160 MB
gzip   |   29s |      5s | 118 MB
bz2    |   54s |     22s | 114 MB
xz     | 4m37s |     13s |  86 MB
xz -3  | 1m20s |     12s |  95 MB
Based on this we can say:
* For moderate compression with no noticeable loss in speed
=> use lzop
* For high compression with moderate loss in speed
=> use gzip
* For best compression with significant loss in speed
=> use xz
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
A previous commit introduced a simpler free callback for
hash data with only 1 arg, the value to free:
commit 49288fac96
Author: Peter Krempa <pkrempa@redhat.com>
Date: Wed Oct 9 15:26:37 2019 +0200
util: hash: Add possibility to use simpler data free function in virHash
It missed two functions in the hash table code which need
to call the alternate data free function, virHashRemoveEntry
and virHashRemoveSet.
After the previous patch though, there is no code that
makes functional use of the 2nd key arg in the data
free function. There is merely one log message that can
be dropped.
We can thus purge the current virHashDataFree callback
entirely, and rename virHashDataFreeSimple to replace
it.
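Roughly, the surviving callback shape (the example function name is made up):

  /* old: void (*virHashDataFree)(void *payload, const void *key); */
  /* new: only the value is passed */
  static void
  exampleDataFree(void *payload)
  {
      virObjectUnref(payload);
  }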
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
In many cases we used virDomainDiskByName solely to look up a disk by
target. We have a new helper now, so we can replace it.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Replace all occurrences of
if (VIR_STRDUP(a, b) < 0)
/* effectively dead code */
with:
a = g_strdup(b);
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Replace all the occurrences of
ignore_value(VIR_STRDUP(a, b));
with
a = g_strdup(b);
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Provide some consistency in the error message variable name and usage
when saving error messages across other possible errors or a possible
reset of the last error.
Instead of virSaveLastError paired up with virSetError and virFreeError,
we should use the newer virErrorPreserveLast and virRestoreError.
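A representative conversion (sketch only):

  Before:
      virErrorPtr orig_err = virSaveLastError();
      ...
      if (orig_err) {
          virSetError(orig_err);
          virFreeError(orig_err);
      }

  After:
      virErrorPtr orig_err;
      virErrorPreserveLast(&orig_err);
      ...
      virRestoreError(orig_err);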
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Now that all the types using VIR_AUTOUNREF have a cleanup func defined
to virObjectUnref, use g_autoptr instead of VIR_AUTOUNREF.
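A representative one-line change (the type is just an example):

  Replace:
      VIR_AUTOUNREF(virDomainObjPtr) vm = NULL;
  with:
      g_autoptr(virDomainObj) vm = NULL;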
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since commit 44e7f02915
util: rewrite auto cleanup macros to use glib's equivalent
VIR_AUTOPTR aliases to g_autoptr. Replace all of its uses with the GLib
macro version.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since commit 44e7f02915
util: rewrite auto cleanup macros to use glib's equivalent
VIR_AUTOFREE is just an alias for g_autofree. Use the GLib macros
directly instead of our custom aliases.
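A representative one-line change:

  Replace:
      VIR_AUTOFREE(char *) tmp = NULL;
  with:
      g_autofree char *tmp = NULL;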
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Also define the macro for building with GLib older than 2.60
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Use G_GNUC_UNUSED from GLib instead of ATTRIBUTE_UNUSED.
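For example (the callback name and signature are illustrative):

  static void
  exampleEventHandler(virConnectPtr conn G_GNUC_UNUSED,
                      void *opaque G_GNUC_UNUSED)
  {
      /* parameters kept for the callback signature, intentionally unused */
  }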
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Semantically we can't guarantee that we'll be able to destroy the VM on
the remote host, thus we can't allow remote migration. All other forms
of migration (e.g. saving to file) are okay though as they don't clash
with semantics of the flag.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
When blockdev is used we should always use the blockdev mode for
non-shared storage migration.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Pass the backing store as an argument rather than extracting it locally and
fix the callers.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When QEMU supports flushing caches at the end of migration, we can
safely allow migration even if disk/driver/@cache is neither 'none' nor
'directsync'.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Acked-By: Peter Krempa <pkrempa@redhat.com>
The original message was logically incorrect: cache != none or cache !=
directsync is always true. But even replacing "or" with "and" doesn't
make it more readable for humans.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Acked-By: Peter Krempa <pkrempa@redhat.com>
Since qemuDomainDefPostParse callback requires qemuCaps, we need to make
sure it gets the capabilities stored in the domain's private data if the
domain is running. Passing NULL may cause QEMU capabilities probing to
be triggered in case QEMU binary changed in the meantime. When this
happens while a running domain object is locked, QMP event delivered to
the domain before QEMU capabilities probing finishes will deadlock the
event loop.
Several general functions from domain_conf.c were lazily passing NULL as
the parseOpaque pointer instead of letting their callers pass the right
data. This patch fixes all paths leading to virDomainDefCopy to do the
right thing.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since qemuDomainDefPostParse callback requires qemuCaps, we need to make
sure it gets the capabilities stored in the domain's private data if the
domain is running. Passing NULL may cause QEMU capabilities probing to
be triggered in case the QEMU binary changed in the meantime. When this
happens while a running domain object is locked, QMP event delivered to
the domain before QEMU capabilities probing finishes will deadlock the
event loop.
This patch fixes all paths leading to qemuMigrationAnyPrepareDef.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since qemuDomainDefPostParse callback requires qemuCaps, we need to make
sure it gets the capabilities stored in the domain's private data if the
domain is running. Passing NULL may cause QEMU capabilities probing to
be triggered in case the QEMU binary changed in the meantime. When this
happens while a running domain object is locked, QMP event delivered to
the domain before QEMU capabilities probing finishes will deadlock the
event loop.
This patch fixes all paths leading to qemuDomainDefFormatBufInternal.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since qemuDomainDefPostParse callback requires qemuCaps, we need to make
sure it gets the capabilities stored in the domain's private data if the
domain is running. Passing NULL may cause QEMU capabilities probing to
be triggered in case the QEMU binary changed in the meantime. When this
happens while a running domain object is locked, QMP event delivered to
the domain before QEMU capabilities probing finishes will deadlock the
event loop.
This patch fixes all paths leading to qemuDomainDefCopy.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Now that block job data is stored in the status XML portion we need to
make sure that everything which changes the state also saves the status
XML. The job registering function is used while parsing the status XML
so in that case we need to skip the XML saving.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Add the job structure to the table when instantiating a new job and
remove it when it terminates/fails.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
To allow using -blockdev with blockjobs QEMU needs to reopen files in
read-write mode when modifying the backing chain. To achieve this we
need to use 'auto-read-only' for the backing files rather than the
normal 'read-only' property. That way qemu knows that the files need to
be reopened.
Note that the format drivers (e.g. qcow2) are still opened with the
read-only property enabled when being a member of the backing chain
since they are supposed to be immutable unless a block job is started.
QEMU v4.0 (since commit 23dece19da4) also allows dynamic behaviour for
auto-read-only which allows us to use sVirt as we only grant write
permissions to files when doing a blockjob.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Allow using the delayed dismiss of the job so that we can reap the state
even if libvirtd was not running when qemu emitted the job completion
event.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that we no longer support the HMP monitor, remove some dead code.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Acked-by: Peter Krempa <pkrempa@redhat.com>
The hash table returned by qemuMonitorGetAllBlockJobInfo is organized by
the frontend name (which skips the 'drive-' prefix). While our code
properly matches the jobs to the disk, qemu needs the full job name
including the 'drive-' prefix to be able to identify jobs.
Fix this by adding an argument to qemuMonitorGetAllBlockJobInfo which
does not modify the job name before filling the hash.
This fixes a regression where users would not be able to cancel/pivot
block jobs after restarting libvirtd while a blockjob is running.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The upcoming virDomainBackup() API needs to take advantage of the
ability to expose a bitmap as part of nbd-server-add for a pull-mode
backup (this is the recently-added QEMU_CAPS_NBD_BITMAP capability).
Signed-off-by: Eric Blake <eblake@redhat.com>
Acked-by: Peter Krempa <pkrempa@redhat.com>
Migration always uses a TCP socket for NBD servers, because we don't
support same-host migration. But upcoming pull-mode incremental backup
needs to also support a Unix socket, for retrieving the backup from
the same host. Support this by plumbing virStorageNetHostDef through
the monitor calls, since that is a nice reusable struct that can track
both TCP and Unix sockets.
Update qemumonitorjsontest to verify both forms of the QMP command.
Signed-off-by: Eric Blake <eblake@redhat.com>
Acked-by: Peter Krempa <pkrempa@redhat.com>
Split out the 'shallow' and 'reuse' flags as booleans rather than passing
in flags and constructing them in irrelevant APIs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Split out the 'shallow' flag as a boolean argument rather than passing
in flags and constructing them in irrelevant APIs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The NBD migration code uses drive/blockdev-mirror internally. In those
APIs we pass around flags for the monitor commands which are based on
the flags for the virDomainBlockRebase API. Since there's only one flag
which changes, pass it around explicitly rather than obscuring it in a
bitfield.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Standardize on putting the _LAST enum value on the second line
of VIR_ENUM_IMPL invocations. Later patches that add string labels
to VIR_ENUM_IMPL will push most of these to the second line anyways,
so this saves some noise.
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Since the STOP event handler can use the pausedReason as sent to
qemuProcessStopCPUs, we no longer need to send duplicate suspended
lifecycle events because we know what caused the stop along with extra
details. This processing allows us to also remove the duplicated state
change from qemuProcessStopCPUs.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
qemuMigrationSrcPerform callers expect it to call virDomainObjEndAPI
in any case, but on error paths the virDomainObjEndAPI call is missed.
To fix this, let's make the qemuMigrationSrcPerform callers responsible
for the virDomainObjEndAPI call.
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
qemu added the 'drive-mirror' command in v1.3.0 (d9b902db3fb71fdc)
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The VIR_MIGRATE_PARALLEL flag is implemented using QEMU's multifd
migration capability and the corresponding multifd-channels migration
parameter.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
snapshot_conf.h was mixing three separate types: the snapshot
definition, the snapshot object, and the snapshot object list.
Separate out the snapshot object list code into its own file, and
update includes for affected clients.
This is just code motion, but done in preparation of sharing a lot of
the object list code with checkpoints.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that virStorageSource is a subclass of virObject we can use
virObjectUnref and remove virStorageSourceFree which was a thin wrapper.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Since virStorageSource is now a subclass of virObject, we can use
VIR_AUTOUNREF instead.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Add virStorageSourceNew and refactor places allocating that structure to
use the helper.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Let's make use of the auto __cleanup capabilities, cleaning up any
now-unnecessary goto paths.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
If a domain has a disk that is type='network' we require a specific
cache mode to allow migration with it (either 'directsync' or
'none'). This doesn't make much sense, since network disks are
supposed to be safe to migrate by default.
At the same time, we should be checking for the actual source
type, not the apparent type set in the domain XML.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
We have this very handy macro called VIR_STEAL_PTR() which steals
one pointer into the other and sets the other to NULL. The
following coccinelle patch was used to create this commit:
@ rule1 @
identifier a, b;
@@
- b = a;
...
- a = NULL;
+ VIR_STEAL_PTR(b, a);
Some places were cleaned up afterwards to make syntax-check happy
(e.g. some curly braces were removed where the body became a
one-liner).
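For illustration, the effect of the macro (values made up):

  char *src = NULL;
  char *dst = NULL;

  if (VIR_STRDUP(src, "value") < 0)
      return -1;

  VIR_STEAL_PTR(dst, src);
  /* dst now owns the string, src == NULL */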
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Currently the job name corresponds to the disk the job belongs to. For
jobs which will not correspond to disks we'll need to track the name
separately.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that the data is per-job, we don't really need to bother with
finishing the synchronous job handling if the job is already terminated.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Instead of passing in the disk information, pass in the job and name the
function accordingly.
A few callers needed to be modified to have the job pointer handy.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The job error can be safely accessed in the job structure, so we don't
need to propagate it through qemuBlockJobUpdateDisk.
Drop the propagation and refactor any caller that passed a non-NULL error.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The same message is reported in 3 distinct places. Move it out into a
single function.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Add a field tracking the current state of the job so that it can be
queried later. Until now the job state (e.g. that the job is _READY for
finalizing) was tracked only for mirror jobs. Add tracking of state for
all jobs.
Similarly to 'qemuBlockJobType' this maps the existing states of the
blockjob from virConnectDomainEventBlockJobStatus to
'qemuBlockJobState' so that we can track some internal states as well.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Modify qemuBlockJobSyncBeginDisk to operate on qemuBlockJobDataPtr and
rename it accordingly.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We can properly track the job type when starting the job so that we
don't have to infer it later.
This patch also adds an enum of block job types specific to qemu
(qemuBlockjobType) which mirrors the public block job types
(virDomainBlockJobType) but allows for other types to be added later
which will not be public.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
If the job wasn't started, we don't need to end the synchronous job. Add
a note and drop the unnecessary calls.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Rather than directly modifying fields in the qemuBlockJobDataPtr
structure, add a set of APIs which allow performing the transitions.
This will help later when adding more complexity to the job handling.
APIs introduced in this patch are:
qemuBlockJobDiskNew - prepare for starting a new blockjob on a disk
qemuBlockJobDiskGetJob - get the block job data structure for a disk
For individual job state manipulation the following APIs are added:
qemuBlockJobStarted - Sets the job as started with qemu. Until then
                      the job can be cancelled without asking qemu.
qemuBlockJobStartupFinalize - finalize job startup. If the job was
                              started in qemu already, just releases
                              the reference to the job object. Otherwise
                              clears everything as if the job was never
                              started.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Extract the disk mirroring startup code from the loop into a separate
function to allow cleaner cleanup paths.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When cancelling job after a reconnect we can now use the disk block job
state rather than having to re-detect it in the migration code.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Internally we do a 'block-copy' to accommodate non-shared storage
migration, but the code did not record that the block job was active on
the disk when starting the copy job. Since we handle block job finishes
regardless of having the job registered it's not a problem, but it soon
will become one.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
All the public APIs of the qemu_blockjob module operate on a 'disk'.
Since I'll be adding APIs which operate on a job later let's rename the
existing ones.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
If migration is cancelled or confirm phase fails the domain
should be kept on the source even if VIR_MIGRATE_UNDEFINE_SOURCE
was requested.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
There are some checks done when parsing a migration cookie. For
instance, one of the checks ensures that the domain is not being
migrated onto the same host. If that is the case, then we are in
big trouble because the @vm is the same domain object used by the
source and it has some jobs set and everything, so recovering
from a failed cookie parse would be needlessly hard.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
The function currently takes virDomainObjPtr because it uses
both the domain definition and the domain private data.
Unfortunately, this means that in prepare phase we can't parse
migration cookie before putting incoming domain def onto domain
objects list (addressed in the very next commit). Change the
arguments so that virDomainDef and private data are passed
instead of virDomainObjPtr.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
There are several functions called in the cleanup path. Some of
them do report error (e.g. qemuDomainRemoveInactiveJob()) which
may result in overwriting an error reported earlier with some
less useful message.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
The only place where VIR_DOMAIN_EVENT_RESUMED should be generated is the
RESUME event handler to make sure we don't generate duplicate events or
state changes. In the worst case the duplication can revert or cover
changes done by other event handlers.
For example, after QEMU sent RESUME, BLOCK_IO_ERROR, and STOP events
we could happily mark the domain as running and report
VIR_DOMAIN_EVENT_RESUMED to registered clients.
https://bugzilla.redhat.com/show_bug.cgi?id=1612943
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
When a domain is killed on the source host while it is being migrated
and libvirtd is waiting for the migration to finish (waiting for the
domain condition in qemuMigrationSrcWaitForCompletion), the run-time
state including priv->job.current may already be freed once
virDomainObjWait returns with -1. Thus the priv->job.current pointer
cached in jobInfo is no longer valid and setting jobInfo->status may
crash the daemon.
https://bugzilla.redhat.com/show_bug.cgi?id=1593137
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Once we called qemuDomainObjEnterRemote to talk to the destination
daemon during a peer to peer migration, the vm lock is released and we
only hold an async job. If the source domain dies at this point the
monitor EOF callback is allowed to do its job and (among other things)
clear all private data irrelevant for a stopped domain. Thus when we call
qemuDomainObjExitRemote, the domain may already be gone and we should
avoid touching runtime private data (such as the current job info).
In other words, after acquiring the lock in qemuDomainObjExitRemote, we
need to check that the domain is still alive. Unless we're doing offline
migration.
https://bugzilla.redhat.com/show_bug.cgi?id=1589730
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The variable is used to store the offline migration capability of the
destination daemon. Let's call it 'dstOffline' so that we can later use
'offline' to indicate whether we were asked to do offline migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
And replace all calls with virObjectEventStateQueue such that:
qemuDomainEventQueue(driver, event);
becomes:
virObjectEventStateQueue(driver->domainEventState, event);
And remove NULL checking from all callers.
Signed-off-by: Anya Harter <aharter@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Replace instances where we previously called virGetLastError just to
get the code or to check whether an error exists with
virGetLastErrorCode, avoiding a validity pre-check.
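A representative conversion (sketch only):

  Before:
      virErrorPtr err = virGetLastError();
      if (err && err->code != VIR_ERR_OK)
          ...

  After:
      if (virGetLastErrorCode() != VIR_ERR_OK)
          ...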
Signed-off-by: Ramy Elkest <ramyelkest@gmail.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
The alias of the secret for decrypting the TLS passphrase is useless
besides for TLS setup. Stop passing it around.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This has been broken since commit v4.0.0-165-g93412bb827 which added
jobInfo->statsType enum to distinguish various statistics types. During
migration the type will always be QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION,
however the destination code consuming the statistics data from
migration cookie failed to properly set the type. So even though
everything was filled in, the type remained *_NONE and any attempt to
fetch the statistics data of a completed migration on the destination
host failed.
https://bugzilla.redhat.com/show_bug.cgi?id=1584071
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Implement the secure way to transport non-shared storage data across
migrations. The new approach uses blockdev-add to create the NBD client
so that the TLS secret object can be specified.
https://bugzilla.redhat.com/show_bug.cgi?id=1300772
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Separate the code relevant for this approach so that we can later add a
second implementation without making the function messy.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Drop the mention of 'drive mirror' from the function names and mention
NBD. This will help when adding the 'blockdev mirror' migration code
which will allow using TLS.
Additionally fix some of the function comments to make more sense
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The initiation of a synchronous block job in the NBD storage migration
code was placed after entering the monitor, thus after the lock on the VM
object was released. Thankfully nothing bad could happen in this
situation since the migration job prevents any disk detaches or other
modifications of the domain object.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The pointer to the qemu driver is already included in the domain object's
private data, so it does not need to be passed as yet another parameter
when the domain object is already passed.
Also removes parameter 'driver' from functions which had it just because of
qemuBlockJobUpdate.
Signed-off-by: Roland Schulz <schullzroll@gmail.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
When adding a new object to the domain object list, there should
have been 2 virObjectRef calls made, one for each list into which
the object was placed, to match the 2 virObjectUnref calls that
would occur during Remove as part of virHashRemoveEntry when
virObjectFreeHashData is called when the element is removed from
the hash table as set up in virDomainObjListNew.
Some drivers (libxl, lxc, qemu, and vz) handled this inconsistency
by calling virObjectRef upon successful return from virDomainObjListAdd
in order to use virDomainObjEndAPI when done with the returned @vm.
While others (bhyve, openvz, test, and vmware) handled this via only
calling virObjectUnlock upon successful return from virDomainObjListAdd.
This patch will "unify" the approach to use virDomainObjEndAPI
for any @vm successfully returned from virDomainObjListAdd.
Because list removal is so tightly coupled with list addition,
this patch fixes the list removal algorithm to return the object
as entered - "locked and reffed". This way, the callers can then
decide how to uniformly handle add/remove success and failure.
This removes the onus on the caller to "specially handle" the
@vm during removal processing.
The Add/Remove logic allows for some logic simplification such
as in libxl where we can Remove the @vm directly rather than
needing to set a @remove_dom boolean and removing after the
libxlDomainObjEndJob completes as the @vm is locked/reffed.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Use the TLS env for migration when starting the NBD server if TLS is
enabled for migration.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
To allow encryption of the non-shared storage migration NBD connection
we will need to instantiate the NBD server with the TLS env.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
When a VM is destroyed while being migrated (waiting in
qemuMigrationSrcWaitForCompletion) the private object cleanup code frees
the 'current' job info. Since the migration code attempts to setup
various aspects of the current job even on failure, this results in a
crash.
Job data is cleared in qemuDomainObjPrivateDataClear since commit
888aa4b6b9
Fix this by skipping all of the code which requires the qemu process to
be alive if the VM is not active any more.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Since libvirt is currently not able to setup the NBD migration stream
secured by TLS we should not allow such migration since data would be
transferred unencrypted.
This will break compatibility of TLS migration if non-shared storage is
requested but the security implications are more severe.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Trying to delete the non-existent TLS objects results in ugly error
messages in the log, which could easily confuse users. Let's avoid this
confusion by not trying to delete the objects if we were not asked to
enable TLS migration and thus we didn't create the objects anyway.
This patch restores the behavior to the state before "qemu: Reset all
migration parameters".
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We store the flags passed to the API which started the migration. Let's
use them instead of a separate bool to check if post-copy migration was
requested.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When an async job is running, we sometimes need to know how it was
started to distinguish between several types of the job, e.g., post-copy
vs. normal migration. So far we added a specific bool item to
qemuDomainJobObj for such cases, which doesn't scale very well and
storing such bools in status XML would be painful so we didn't do it.
A better approach is to store the flags passed to the API which started
the async job, which can be easily stored in status XML.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When an always-on migration capability is supposed to be enabled on both
sides of migration, each side can only enable the feature if it is
enabled by the other side.
Thus the source host sends a list of supported migration capabilities in
the migration cookie generated in the Begin phase. The destination host
consumes the list in the Prepare phase and decides what capabilities can
be enabled when starting a QEMU process for incoming migration. Once
done the destination sends the list of supported capabilities back to
the source where it is used during the Perform phase to determine what
capabilities can be automatically enabled.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Some migration capabilities may be enabled automatically, but only if
both sides of migration support them. Thus we need to be able to transfer
the list of supported migration capabilities in the migration cookie.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Since every parameter or capability set in qemuMigrationCompression
structure is now reflected in qemuMigrationParams structure, we can
replace qemuMigrationAnyCompressionDump with a new API which will work
on qemuMigrationParams.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
There's no need to call this API explicitly in the migration code. We
can pass the compression parameters to qemuMigrationParamsFromFlags and
it can internally call qemuMigrationParamsSetCompression to apply them
to the qemuMigrationParams structure.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Propagate the calls up the stack to the point where
qemuMigrationParamsFromFlags is called. The end goal achieved in the
following few patches is to merge compression parameters into the
general migration parameters code.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Most migration capabilities are directly connected with
virDomainMigrateFlags so qemuMigrationParamsFromFlags can automatically
enable them.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Some migration capabilities are always enabled if QEMU supports them. We
can just drop the explicit code for them and let
qemuMigrationParamsCheck automatically set such capabilities.
QEMU_MONITOR_MIGRATION_CAPS_EVENTS would normally be one of the always
on features, but it is the only feature we want to enable even for other
jobs which internally use migration (such as save and snapshot). Hence
this capability is set very early after libvirtd connects to QEMU
monitor.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
It's just a tiny wrapper around qemuMigrationParamsSetCapability and
setting priv->job.postcopyEnabled is not something qemuMigrationParams
code should be doing anyway so let the callers do it.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Every migration entry point in qemu_driver is supposed to call
qemuMigrationParamsFromFlags to transform flags and parameters into
qemuMigrationParams structure and pass the result to qemuMigration*
APIs.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Instead of checking each capability at the time we want to set it in
qemuMigrationParamsSetCapability we can check all of them at once in
qemuMigrationParamsCheck.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We reached the point when qemuMigrationParamsApply is the only API which
sends migration parameters and capabilities to QEMU. Thus all but the
TLS parameters can be set before we ask QEMU for the current values of
all parameters in qemuMigrationParamsCheck.
Supported migration capabilities are queried as soon as libvirt connects
to QEMU monitor so we can check them anytime.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We reached the point when qemuMigrationParamsApply is the only API which
sends migration parameters and capabilities to QEMU. Thus all but the
TLS parameters can be set before we ask QEMU for the current values of
all parameters in qemuMigrationParamsCheck.
Supported migration capabilities are queried as soon as libvirt connects
to QEMU monitor so we can check them anytime.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Prefer xbzrle-cache-size migration parameter over the special
migrate-set-cache-size QMP command.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Migration capabilities are closely related to migration parameters and
it makes sense to keep them in a single data structure. Similarly to
migration parameters, the capabilities are all sent to QEMU at once in
qemuMigrationParamsApply; all other APIs operate on the
qemuMigrationParams structure.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The new name is qemuMigrationParamsApply and it will soon become the
only API which will send all requested migration parameters and
capabilities to QEMU. All other qemuMigrationParams* APIs will just
operate on the qemuMigrationParams structure.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
There's no real reason for qemuMigrationParamsEnableTLS to require the
callers to pass a valid virQEMUDriverConfigPtr, it can just call
virQEMUDriverGetConfig.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The function checks whether QEMU supports TLS migration and stores the
original value of tls-creds parameter to priv->migTLSAlias. This is no
longer needed because we already have the original value stored in
priv->migParams.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The code can be merged directly in qemuMigrationParamsAddTLSObjects.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Restore the original values of all migration parameters we store in
qemuDomainJobObj instead of explicitly resetting only a limited set of
them.
The result is not strictly equivalent to the previous code with respect
to resetting the TLS state, because the previous code would only reset it
if we had changed it before, while the new code will always reset it if
QEMU supports TLS migration. This is not a problem for the parameters
themselves, but it can cause spurious errors about missing TLS objects
being logged at the end of a non-TLS migration. This issue will be fixed
~50 patches later.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Any job which touches migration parameters will first store their
original values (i.e., QEMU defaults) to qemuDomainJobObj to make it
easier to reset them back once the job finishes.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When connection to the client which controls a non-p2p migration gets
closed between Perform and Confirm phase, we don't know whether the
domain was successfully migrated or not. Thus, we have to leave the
domain paused and just cleanup the migration job and reset migration
parameters.
Previously we didn't reset the parameters and future save or snapshot
operations would see wrong environment (and could fail because of it) in
case the domain stayed running on the source host.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Currently migration parameters are stored in a structure which mimics
the QEMU migration parameters handled by query-migrate-parameters and
migrate-set-parameters. The new structure will become a libvirt's
abstraction on top of QEMU migration parameters, capabilities, and
related stuff.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
It will get a bit more complicated soon and storing it on the stack with
{0} initializer will no longer work. We need a proper constructor.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The function is connected with the code which handles migration
parameters and capabilities, let's move it to qemu_migration_params.c.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In the end, this will allow us to have most of the logic around
migration parameters and capabilities done in one place.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The function is now called qemuMigrationParamsFromFlags to better
reflect what it is doing: taking migration flags and params and
producing a struct with QEMU migration parameters.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Since virCloseCallbacksRun was ignoring the value anyway, let's
just change it to be a void function.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
In qemuMigrationSrcRun, we already checked for non-NULL mig
and then dereferenced it. It's only possible for mig to be
NULL in the error section.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
This partially reverts 82592551cb.
When migrating a domain, qemuMigrationDstPrepareAny() is called
which eventually calls qemuProcessLaunch(conn = NULL, flags =
VIR_QEMU_PROCESS_START_AUTODESTROY); But the very first thing
that qemuProcessLaunch does is check that @conn is not NULL whenever
the AUTODESTROY flag is set. Well, here @conn is NULL.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1494454
If a domain disk is stored on a local filesystem (e.g. ext4) but is
not being migrated, it is very likely that the domain will not be able
to run on the destination, regardless of share/cache mode.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The range check in virPortAllocatorSetUsed is not useful anymore
now that we manage ports for the entire unsigned short range.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Ensure all enum cases are listed in switch statements, or cast away
enum type in places where we don't wish to cover all cases.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
While reading the migration code it is very difficult to understand
whether a particular function is being called on the src side, on the
dst side, or on either. Putting "Src" or "Dst" in the method names will
make this much more obvious. "Any" is used in a few helpers which can be
called from both sides.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
These APIs are not required anywhere outside the migration code so need
not be exported to the rest of the QEMU driver.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The qemuMigrationPrecreateStorage method needs a connection
to access the storage driver. Instead of passing it around,
open it at time of use.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When setting up graphics, we sometimes need to resolve networks,
requiring the caller to pass in a virConnectPtr, except sometimes they
pass in NULL. Use virGetConnectNetwork() to acquire the connection to
the network driver when it is needed.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
During domain startup there are many places where we need to acquire
secrets. Currently the code passes around a virConnectPtr, except in the
places where we pass in NULL. So there are a few codepaths where starting
guests that use secrets will fail.
the secret driver when needed.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
There is a long standing hack to pass a virConnectPtr into the
qemuMonitorStartCPUs method, so that when the text monitor prompts
for a disk password, we can lookup virSecretPtr objects. This causes
us to have to pass a virConnectPtr around through countless methods
up the call chain....except some places don't have any virConnectPtr
available so have always just passed NULL. We can finally fix this
disastrous design by using virGetConnectSecret() to open a connection
to the secret driver at time of use.
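The resulting pattern can be modelled roughly as below; the helpers are
placeholders standing in for virGetConnectSecret() and friends, whose
exact internal signatures may differ:

  /* Placeholders modelling the "open at time of use" pattern only. */
  typedef struct secretConn secretConn;

  secretConn *openSecretConn(void);            /* open the secret driver lazily */
  void closeSecretConn(secretConn *conn);
  int lookupDiskSecret(secretConn *conn, const char *alias);

  static int prepareDiskSecret(const char *alias)
  {
      secretConn *conn = openSecretConn();     /* acquired right where needed */
      int ret;

      if (!conn)
          return -1;
      ret = lookupDiskSecret(conn, alias);
      closeSecretConn(conn);                   /* nothing threaded through callers */
      return ret;
  }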
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Loadable drivers must never depend on each other. Over time some such
usage mistakenly crept in for the storage and network drivers, but now
that this is eliminated the syntax-check rules can enforce this
separation once more.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The storagePoolLookupByTargetPath() method in the storage driver is used
by the QEMU driver during block migration. If there's a valid use case
for this in the QEMU driver, then external apps likely have similar
needs. Exposing it in the public API removes the direct dependency from
the QEMU driver to the storage driver.
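For reference, an external application could use the new public API
roughly like this (the target path is only an example; error handling
is trimmed):

  #include <stdio.h>
  #include <libvirt/libvirt.h>

  int main(void)
  {
      virConnectPtr conn = virConnectOpen("qemu:///system");
      virStoragePoolPtr pool;

      if (!conn)
          return 1;

      /* resolve which storage pool owns a given target path */
      pool = virStoragePoolLookupByTargetPath(conn, "/var/lib/libvirt/images");
      if (pool) {
          printf("pool: %s\n", virStoragePoolGetName(pool));
          virStoragePoolFree(pool);
      }

      virConnectClose(conn);
      return 0;
  }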
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Convert the stats field in _qemuDomainJobInfo to be a union. This
will allow for the collection of various different types of stats
in the same field.
When starting the async job that will end up being used for stats,
set the @statsType value appropriately. The @mirrorStats are
special and are used with stats.mig in order to generate the
returned job stats for a migration.
Using the NONE type should avoid the possibility that some random
async job would try to return stats for migration even though
a migration is not in progress.
For now a migration job and a save job will use the same statsType.
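Schematically, the resulting layout looks like this simplified stand-in;
the field names are illustrative, not the exact _qemuDomainJobInfo
definition:

  /* Simplified stand-ins, not the exact _qemuDomainJobInfo definition. */
  typedef enum {
      STATS_TYPE_NONE,        /* no stats collected for this async job */
      STATS_TYPE_MIGRATION,   /* migration and save share this for now */
  } statsType;

  struct migStats { unsigned long long dataTotal; };
  struct mirrorStats { unsigned long long transferred; };

  struct jobInfo {
      statsType type;             /* set when the async job is started */
      union {
          struct migStats mig;    /* valid when type == STATS_TYPE_MIGRATION */
      } stats;
      struct mirrorStats mirror;  /* kept separately and merged into the
                                     migration stats when reporting */
  };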
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
In my first approach in 4b480d1076 I overlooked the comment in
qemuMigrationRunIncoming stating that during actual migration the
function does not wait until the migration is complete but rather
offloads that to the Finish phase of migration.
This means that during actual migration qemuProcessRefreshState was
called prior to qemu actually transferring the full state and thus the
queries did not get the correct information. The approach worked only
for restore, where we wait for the migration to finish during qemu
startup.
Fix the issue by calling qemuProcessRefreshState both from
qemuProcessStart if there's no incoming migration and from
qemuMigrationFinish so that the code actually works as expected.
When migrating a shutoff domain (i.e., offline migration), we have no
statistics to report and thus jobInfo will be NULL in
qemuMigrationFinish.
Broken by me in v3.10.0-183-ge8784e7868.
https://bugzilla.redhat.com/show_bug.cgi?id=1536351
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Libvirt 3.7.0 and earlier reported a migration job as completed
immediately after QEMU finished sending migration data, at which point
the migration was not really complete yet. Commit v3.7.0-29-g3f2d6d829e
fixed this, but caused a regression in reporting statistics for
completed jobs which started reporting the job as still running. This
happened because the completed job statistics including the job status
are copied from the running job before we finally mark it as completed.
Let's make sure QEMU_DOMAIN_JOB_STATUS_COMPLETED is always set in the
completed job info even when the job has not finished yet.
https://bugzilla.redhat.com/show_bug.cgi?id=1523036
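A small client-side check of the resulting behaviour, using only the
public API to read statistics of the most recently completed job; the
domain name "demo" is an example, not part of the patch:

  #include <stdio.h>
  #include <libvirt/libvirt.h>

  int main(void)
  {
      virConnectPtr conn = virConnectOpen("qemu:///system");
      virDomainPtr dom = conn ? virDomainLookupByName(conn, "demo") : NULL;
      virTypedParameterPtr params = NULL;
      int nparams = 0;
      int type = VIR_DOMAIN_JOB_NONE;

      if (dom &&
          virDomainGetJobStats(dom, &type, &params, &nparams,
                               VIR_DOMAIN_JOB_STATS_COMPLETED) == 0) {
          /* a finished migration should now report VIR_DOMAIN_JOB_COMPLETED */
          printf("completed job type: %d, %d statistics fields\n", type, nparams);
          virTypedParamsFree(params, nparams);
      }

      if (dom)
          virDomainFree(dom);
      if (conn)
          virConnectClose(conn);
      return 0;
  }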
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Right-aligning backslashes when defining macros or using complex
commands in Makefiles looks cute, but as soon as any change is
required to the code you end up with either distractingly broken
alignment or unnecessarily big diffs where most of the changes
are just pushing all backslashes a few characters to one side.
Generated using
$ git grep -El '[[:blank:]][[:blank:]]\\$' | \
grep -E '*\.([chx]|am|mk)$$' | \
while read f; do \
sed -Ei 's/[[:blank:]]*[[:blank:]]\\$/ \\/g' "$f"; \
done
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
The parameters used a "migrate" prefix, which is pretty redundant:
the qemuMonitorMigrationParams structure is our internal representation
of QEMU migration parameters and it is supposed to use names which match
the QEMU names.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
QEMU identified a race condition between the device state serialization
and the end of storage migration. Both QEMU and libvirt need to be
updated to fix this.
Our migration workflow is modified so that after starting the migration
we wait for QEMU to enter the "pre-switchover", "postcopy-active", or
"completed" state. Once there, we cancel all block jobs as usual. But if
QEMU is in "pre-switchover", we need to resume the migration afterwards
and wait again for the real end (either "postcopy-active" or
"completed" state).
Old QEMU will just enter either "postcopy-active" or "completed"
directly, which is still correctly handled even by new libvirt. The
"pre-switchover" state will only be entered if QEMU supports it and the
pause-before-switchover capability was enabled. Thus all combinations of
libvirt and QEMU will work, but only new QEMU with new libvirt will
avoid the race condition.
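The adjusted control flow can be modelled roughly as follows; the state
names mirror QEMU's migration states, while the helper functions are
placeholders rather than libvirt's real APIs:

  typedef enum {
      MIG_ACTIVE,
      MIG_PRE_SWITCHOVER,   /* QEMU paused before device state serialization */
      MIG_POSTCOPY_ACTIVE,
      MIG_COMPLETED,
      MIG_FAILED,
  } migState;

  migState waitForState(void);     /* placeholder: wait for a decisive state */
  void cancelBlockJobs(void);      /* placeholder: finish storage mirroring */
  void migrateContinue(void);      /* placeholder: let QEMU proceed past the pause */

  static int runMigration(void)
  {
      migState s = waitForState(); /* pre-switchover, postcopy-active or completed */

      if (s == MIG_FAILED)
          return -1;

      cancelBlockJobs();           /* safe: device state is not serialized yet */

      if (s == MIG_PRE_SWITCHOVER) {
          migrateContinue();       /* resume and wait for the real end */
          s = waitForState();      /* now postcopy-active or completed */
      }

      return s == MIG_FAILED ? -1 : 0;
  }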
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This new capability enables a pause before device state serialization so
that we can finish all block jobs without racing with the end of the
migration. The pause is indicated by "pre-switchover" state. Once we're
done QEMU enters "device" migration state.
This patch just defines the new capability and QEMU migration states and
their mapping to our job states.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Instead of enumerating all states which need to be turned into
QEMU_DOMAIN_JOB_STATUS_FAILED (and failing to add all of them), it's
better to mention just the one which needs to be left alone.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Almost every failure in qemuMigrationRun while we are talking to QEMU
monitor results in a jump to exit_monitor label. The only exception is
removed by this patch.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The "ret" variable is used for storing the return value of a function
and should not be used as a temporary variable.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Merge cancel and cancelPostCopy sections with the generic error section,
where we can easily decide whether canceling the ongoing migration is
required.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Let cleanup only do things common to both failure and success paths and
move error handling code inside the new "error" section.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Some code which was supposed to be executed only when migration
succeeded was buried inside the cleanup code.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
When adding a new job state it's useful to let the compiler complain
about places where we need to think about what to do with the new
state.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
All calls to qemuMonitorGetMigrationCapability in QEMU driver are
replaced with qemuMigrationCapsGet.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
We need to send allowReboot in the migration cookie to ensure the same
behavior of the virDomainSetLifecycleAction() API on the destination.
Consider this scenario:
1. On the source the domain is started with:
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
2. User calls an API to set "destroy" for <on_reboot>:
<on_poweroff>destroy</on_poweroff>
<on_reboot>destroy</on_reboot>
<on_crash>destroy</on_crash>
3. The guest is migrated to a different host
4a. Without the allowReboot in the migration cookie the QEMU
process on destination would be started with -no-reboot
which would prevent using the virDomainSetLifecycleAction() API
for the rest of the guest lifetime.
4b. With the allowReboot in the migration cookie the QEMU process
on destination is started without -no-reboot like it was started
on the source host and the virDomainSetLifecycleAction() API
continues to work.
The following patch adds a QEMU implementation of the
virDomainSetLifecycleAction() API and that implementation disallows
using the API if all actions are set to "destroy" because we add
"-no-reboot" on the QEMU command line. Changing the lifecycle action
is in this case pointless because the QEMU process is always terminated.
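Step 2 of the scenario corresponds to a call of the public API along
these lines; the domain name "demo" is only an example, and the QEMU
implementation of the API itself arrives in the following patch:

  #include <libvirt/libvirt.h>

  int main(void)
  {
      virConnectPtr conn = virConnectOpen("qemu:///system");
      virDomainPtr dom = conn ? virDomainLookupByName(conn, "demo") : NULL;
      int ret = -1;

      if (dom) {
          /* switch <on_reboot> from "restart" to "destroy" at runtime */
          ret = virDomainSetLifecycleAction(dom,
                                            VIR_DOMAIN_LIFECYCLE_REBOOT,
                                            VIR_DOMAIN_LIFECYCLE_ACTION_DESTROY,
                                            VIR_DOMAIN_AFFECT_LIVE);
          virDomainFree(dom);
      }
      if (conn)
          virConnectClose(conn);
      return ret == 0 ? 0 : 1;
  }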
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
When migration fails, QEMU may provide a description of the error in
the reply to query-migrate QMP command. We can fetch this error and use
it instead of the generic "unexpectedly failed" message.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Pass flags to the function rather than just whether we have incoming
migration. This also enforces correct startup policy for USB devices
when reverting from a snapshot.
qemuMigrationPrepareAny passed the flags explicitly to several of the
functions starting the qemu process for incoming migration.
Extract them to a variable so that they can be easily used for other
calls or changed in the future.
Seeing a log message saying 'flags=93' is ambiguous & confusing unless
you happen to know that libvirt always prints flags as hex. Change our
debug messages so that they always add a '0x' prefix when printing flags,
and '0' prefix when printing mode. A few other misc places gain a '0x'
prefix in error messages too.
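To illustrate the ambiguity, here is the same value printed both ways in
plain C (using printf instead of libvirt's debug macros):

  #include <stdio.h>

  int main(void)
  {
      unsigned int flags = 0x93;

      printf("flags=%x\n", flags);    /* "flags=93" - looks like decimal 93 */
      printf("flags=0x%x\n", flags);  /* "flags=0x93" - unambiguously hex */
      return 0;
  }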
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
In case of real migration (not migrating to a file on save, dump, etc.)
the migration info is not complete at the time qemu finishes migration
in normal (non-postcopy) mode. We still need to update disk stats,
downtime info, etc. Thus let's not expose this job status as
completed.
To achieve this, let's set the status to 'qemu completed' after
qemu reports the migration is finished. It is not visible as a completed
job to clients. The cookie code on the confirm phase will finally turn
the job into completed. As we don't need to do anything more when
migrating to a file, the status is set to 'completed' as before
in that case.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When getting job info, fetch mirror stats from qemu if the mirror has
not reached the ready phase yet. Otherwise the mirror stats are already
saved in the current job.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Instead of checking stat.status, let's set the status to migrating
as soon as the migrate command is sent (waiting for completion
is a good place too).
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Setting the status to none has little value - getting the job status
will not return even the elapsed time.
After this patch getting job stats stays correct in the sense that
it will not fetch migration stats, because it consults
stats.status before doing the fetch.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuMigrationFetchJobStatus is rather inconvenient. Some of its
callers don't need the status to be updated, some don't need to update
the elapsed time right away. So let's update the status or elapsed time
in the callers instead.
This patch drops updating the job status when a client fetches job
stats. This way we will not report the status as 'completed' while
it has not yet been updated by the migration routine.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This way we get stats in only one place. The former code basically just
waits for the complete/postcopy status and doesn't need to mess with
stats. The patch also drops raising an error on stats update failure,
which does not make much sense anyway.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Let's introduce a QEMU_DOMAIN_JOB_STATUS_POSTCOPY state for job.current->status
instead of checking job.current->stats.status. The latter can change
when fetching migration statistics. Moving the state-tracking role away
from that variable and leaving it only as storage for fetched statistics
seems more manageable.
This patch removes all state-checking usage of stats except for
qemuDomainGetJobStatsInternal. That place will be handled separately.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This patch simply switches the code from using VIR_DOMAIN_JOB_* to the
newly introduced QEMU_DOMAIN_JOB_STATUS_*. Later this gives us the
freedom to introduce states for the postcopy and mirroring phases.
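For orientation, the internal state set could look roughly like the
sketch below; the exact member list is an assumption based on the states
mentioned in this series, not the final definition:

  /* Assumed illustration only -- not the exact libvirt enum. */
  typedef enum {
      QEMU_DOMAIN_JOB_STATUS_NONE = 0,
      QEMU_DOMAIN_JOB_STATUS_ACTIVE,
      QEMU_DOMAIN_JOB_STATUS_MIGRATING,
      QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED, /* qemu is done, libvirt is not */
      QEMU_DOMAIN_JOB_STATUS_POSTCOPY,       /* added later in this series */
      QEMU_DOMAIN_JOB_STATUS_COMPLETED,
      QEMU_DOMAIN_JOB_STATUS_FAILED,
      QEMU_DOMAIN_JOB_STATUS_CANCELED,
  } qemuDomainJobStatus;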
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
At some places we either already have a synchronous job or we have just
released it. Also, some APIs might want to use this code without
having to release their job. Anyway, the job acquiring code is
moved out to qemuDomainRemoveInactiveJob so that
qemuDomainRemoveInactive does just what it promises.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
At present shared disks can be migrated with either readonly or cache=none. But
cache=directsync should be safe for migration too, because both cache=directsync and
cache=none don't use the host page cache, and cache=directsync additionally writes
through the qemu block layer cache.
Signed-off-by: Peng Hao <peng.hao2@zte.com.cn>
Reviewed-by: Wang Yechao <wang.yechao255@zte.com.cn>
While qemuProcessIncomingDefNew takes an fd argument and stores it in
qemuProcessIncomingDef structure, the caller is still responsible for
closing the file descriptor.
Introduced by commit v1.2.21-140-ge7c6f4575.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Most places which want to check ABI stability for an active domain need
to call this API rather than the original
qemuDomainDefCheckABIStability. The only exception is in snapshots where
we need to decide what to do depending on the saved image data.
https://bugzilla.redhat.com/show_bug.cgi?id=1460952
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>