When the connection to the destination host drops during a p2p migration,
we know we will have to cancel the migration; it doesn't make sense to
waste resources by trying to finish it. We already do so after sending
the "migrate" command to QEMU, and we should do it while waiting for
drive mirrors to become ready too.
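A minimal sketch of the extra check in the wait loop (virConnectIsAlive
is the public API used for this; the readiness helper name is
illustrative, not the actual code):

    /* abort early instead of waiting for mirrors that can never finish */
    while (!driveMirrorsReady(vm)) {                /* illustrative helper */
        if (dconn && virConnectIsAlive(dconn) <= 0) {
            virReportError(VIR_ERR_OPERATION_FAILED, "%s",
                           _("Lost connection to destination host"));
            return -1;                              /* cancels the mirrors */
        }
        /* ... poll block job status, sleep ... */
    }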
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Checking the status of all parts of a migration and aborting it when
something fails is a complex task, which makes the waiting loop hard to
read. This patch moves all the checks into a separate function, similarly
to what was done for the drive mirror loops.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Instead of passing the current job name to several functions which
already know what the current job is, we can generate the name where we
actually need to use it.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Once we start waiting for migration events instead of polling
query-migrate, priv->job.current will not be regularly updated anymore
because we will get the current status directly from the events. Thus
virDomainGetJob{Info,Stats} will have to query QEMU, but they can't just
blindly update the priv->job.current structure. This patch introduces
qemuMigrationFetchJobStatus, which just fills in a caller-supplied
structure, and makes qemuMigrationUpdateJobStatus a tiny wrapper around
it.
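A condensed sketch of the split, with simplified signatures (the real
functions also handle async job checks and timestamps):

    static int
    qemuMigrationFetchJobStatus(virQEMUDriverPtr driver,
                                virDomainObjPtr vm,
                                qemuDomainJobInfoPtr jobInfo)
    {
        qemuDomainObjPrivatePtr priv = vm->privateData;
        int ret;

        /* fill only the caller-supplied structure; shared state in
         * priv->job.current is never touched here */
        qemuDomainObjEnterMonitor(driver, vm);
        ret = qemuMonitorGetMigrationStatus(priv->mon, &jobInfo->status);
        qemuDomainObjExitMonitor(driver, vm);
        return ret;
    }

    /* the old updater becomes a thin wrapper around the fetcher */
    static int
    qemuMigrationUpdateJobStatus(virQEMUDriverPtr driver,
                                 virDomainObjPtr vm)
    {
        qemuDomainObjPrivatePtr priv = vm->privateData;
        return qemuMigrationFetchJobStatus(driver, vm, priv->job.current);
    }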
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Move common parts of qemuDomainGetJobInfo and qemuDomainGetJobStats into
a separate API (qemuDomainGetJobStatsInternal).
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The QEMU_CAPS_SEAMLESS_MIGRATION capability says QEMU supports the
SPICE_MIGRATE_COMPLETED event. Thus we can just drop all the code which
polls query-spice and replace it with waiting for the event.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When libvirtd is restarted during migration, we properly cancel the
ongoing migration (unless it managed to almost finish before the
restart). But if we were also migrating storage using NBD, we would
completely forget about the running disk mirrors.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
"query-block-jobs" QMP command returns all running block jobs at once,
while qemuMonitorBlockJobInfo would only report one. This is not very
nice in case we need to check several block jobs. This patch refactors
the monitor code to always parse all block jobs and store them in a
hash.
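A fragment showing the intended call pattern (one query, many lookups);
the hash is keyed by the device name, and diskAlias/checkJob are
illustrative stand-ins:

    virHashTablePtr jobs;
    qemuMonitorBlockJobInfoPtr job;

    if (!(jobs = qemuMonitorGetAllBlockJobInfo(priv->mon)))
        goto cleanup;

    /* one monitor round trip, then one cheap lookup per disk */
    if ((job = virHashLookup(jobs, diskAlias)))   /* diskAlias: illustrative */
        checkJob(job);                            /* illustrative */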
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
So that they can format private data (e.g., disk private data) stored
elsewhere in the domain object.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This patch reverts commit 76c61cdca2.
VIR_DOMAIN_DISK_MIRROR_STATE_ABORT says we asked for a block job to be
aborted rather than that it was aborted. Let's just use
VIR_DOMAIN_DISK_MIRROR_STATE_NONE consistently whenever a block job
finishes, since no caller depends on VIR_DOMAIN_DISK_MIRROR_STATE_ABORT
(anymore) to check whether a block job failed or was cancelled.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Abort migration as soon as we detect that some of the disk mirrors
failed. There's no sense in trying to finish memory migration first.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Instead of cancelling disk mirrors sequentially, let's just call
block-job-cancel for all migrating disks and then wait until all
disappear.
In case we cancel disk mirrors at the end of a successful migration, we
also need to check that all block jobs completed successfully. Otherwise
we have to abort the migration.
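A sketch of the new shape (hypothetical helper names; virDomainObjWait
is woken by the BLOCK_JOB events): all cancels are issued up front and a
single loop waits for the jobs to disappear.

    static int
    cancelAllDriveMirrors(virDomainObjPtr vm, bool check)
    {
        int ret = 0;
        size_t i;

        /* fire all cancellations first, don't wait in between */
        for (i = 0; i < vm->def->ndisks; i++)
            if (diskMirrorActive(vm, i) &&          /* hypothetical */
                blockJobCancel(vm, i) < 0)          /* hypothetical */
                ret = -1;

        /* one wait loop for all of them */
        while (activeMirrorCount(vm) > 0)           /* hypothetical */
            if (virDomainObjWait(vm) < 0)
                return -1;

        /* at the end of a successful migration a failed job means
         * the whole migration has to be aborted */
        if (check && anyMirrorFailed(vm))           /* hypothetical */
            ret = -1;
        return ret;
    }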
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
By switching block jobs to use domain conditions, we can drop some
pretty complicated code in NBD storage migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Because we are polling, we may detect some errors after we have asked
QEMU for the migration status, even though they occurred before the
query. If this happens and QEMU reports the migration completed
successfully, we would happily report that the migration succeeded even
though we should have cancelled it because of the other error.
In practice it is not a big issue now, but it will become a much bigger
issue once the check for storage migration status moves inside the loop
in qemuMigrationWaitForCompletion.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The wrapper is useful for calling qemuBlockJobEventProcess with the
event details stored in the disk's privateData, which is the most likely
usage of qemuBlockJobEventProcess.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Complex jobs, such as migration, need to monitor several events at once,
which is impossible when each event uses its own condition variable.
This patch adds a single condition variable to each domain object. This
variable can be used instead of the other event-specific conditions.
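A sketch of the intended usage (virDomainObjBroadcast and
virDomainObjWait are the new helpers; the readiness check is
illustrative): every event handler signals the one per-domain condition,
so a single loop can wait for any combination of events.

    /* event handler side: update state, wake all waiters */
    static void
    onBlockJobReady(virDomainObjPtr vm, virDomainDiskDefPtr disk)
    {
        disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_READY;
        virDomainObjBroadcast(vm);
    }

    /* waiting side: one loop, several events */
    static int
    waitForMirrors(virDomainObjPtr vm)
    {
        while (!allMirrorsReady(vm))        /* illustrative check */
            if (virDomainObjWait(vm) < 0)   /* sleeps on vm->cond */
                return -1;
        return 0;
    }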
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Well, if a server is being destructed, all underlying services and
their sockets should disappear with it. But due to a bug in our
implementation this is not the case. Yes, we are closing the sockets,
but that's not enough. We must also:
1) Unregister them from the event loop
2) Unref the service for each socket
The last step is needed, because each socket callback holds a
reference to the service object. Since in the first step we are
unregistering the callbacks, they no longer need the reference.
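Roughly (helper names are illustrative, not the actual libvirt API),
the dispose path has to do this for every service:

    static void
    serverDispose(Server *srv)                      /* simplified type */
    {
        size_t i;

        for (i = 0; i < srv->nservices; i++) {
            /* 1) unregister the socket callbacks from the event loop */
            serviceRemoveIOCallbacks(srv->services[i]);  /* illustrative */
            /* 2) drop the reference each callback was holding */
            serviceUnref(srv->services[i]);              /* illustrative */
        }
    }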
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Although highly unlikely, nobody says that virEventAddHandle()
can't return 0 as a handle to a socket callback. It can't happen
with our default implementation, since all watches have a value of
1 or greater, but users can register their own callback functions
(which can, for instance, reuse unused watch IDs). If that is the
case, weird things may happen.
Also, there's a little bug I'm fixing too: upon
virNetSocketRemoveIOCallback() the variable holding the callback ID
was not reset, therefore calling AddIOCallback() once again would
fail. Not that we are doing that right now, but we might.
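A sketch of both fixes (field and function names are illustrative;
virEventRemoveHandle is the real event-loop API): -1, not 0, now means
"no callback registered", and the ID is reset on removal.

    void
    socketRemoveIOCallback(Socket *sock)  /* simplified type */
    {
        if (sock->watch < 0)              /* was: == 0, but 0 is a valid ID */
            return;
        virEventRemoveHandle(sock->watch);
        sock->watch = -1;                 /* a later AddIOCallback() works again */
    }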
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When going through the code I noticed that virNetSocketAddIOCallback()
increases the reference counter of @socket. However, its counterpart,
RemoveIOCallback(), does not. It took me a while to realize this
disproportion. The AddIOCallback registers our own callback which
eventually calls the desired callback and then unrefs @sock. Yeah, a
bit complicated, but it works. So, let's note this hard-learned fact in
a comment in RemoveIOCallback().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1224233
Currently it's not possible to distinguish a fatal error (a memory
allocation failure or a failure to open/read the directory) from a
perhaps less fatal one: not finding the "block" device in the directory
(which may happen for a disk entry without a block device).
In the case of the latter, we shouldn't make the caller
(virStorageBackendSCSIFindLUs) stop searching; rather we should allow
it to try reading the next directory entry.
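A sketch of the new convention (simplified; the real helper lives in the
SCSI storage backend): only hard errors return -1, while a missing
"block" entry leaves the result NULL and lets the caller continue.

    #include <dirent.h>

    static int
    getBlockDevice(const char *lu_dir, char **block_device)
    {
        DIR *dir;

        *block_device = NULL;
        if (!(dir = opendir(lu_dir)))
            return -1;            /* fatal: report and stop scanning */

        /* ... look for a "block" entry; when none exists we simply
         * leave *block_device NULL instead of failing ... */

        closedir(dir);
        return 0;                 /* not fatal: caller tries the next LU */
    }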
Signed-off-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1203032
Implement a `migrate_disks' parameter for the QEMU driver. This multi-
value parameter can be used to explicitly specify which block devices
are to be migrated using the NBD server. Tunnelled migration using NBD
remains to be done.
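Client-side usage could look like this (error handling trimmed; "vda"
and "vdb" stand for the target names of the disks to copy):

    #include <libvirt/libvirt.h>

    static int
    migrateWithDisks(virDomainPtr dom, const char *dconnuri)
    {
        const char *disks[] = { "vda", "vdb", NULL };
        virTypedParameterPtr params = NULL;
        int nparams = 0, maxparams = 0;
        int rc;

        if (virTypedParamsAddStringList(&params, &nparams, &maxparams,
                                        VIR_MIGRATE_PARAM_MIGRATE_DISKS,
                                        disks) < 0)
            return -1;

        rc = virDomainMigrateToURI3(dom, dconnuri, params, nparams,
                                    VIR_MIGRATE_LIVE |
                                    VIR_MIGRATE_NON_SHARED_DISK);
        virTypedParamsFree(params, nparams);
        return rc;
    }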
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The `virTypedParamsAddStringList' function provides an interface to add
a NULL-terminated array of string values as a multi-value parameter to
@params.
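In effect, the helper is shorthand for one virTypedParamsAddString()
call per array element, all under the same name; a fragment
(params/nparams/maxparams as in the usual pattern):

    const char *values[] = { "vda", "vdb", NULL };

    if (virTypedParamsAddStringList(&params, &nparams, &maxparams,
                                    "migrate_disks", values) < 0)
        goto error;
    /* params now holds migrate_disks=vda and migrate_disks=vdb */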
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Add a multi-key API:
* virTypedParamsFilter, which filters all the parameters with the
  specified name.
* virTypedParamsGetStringList, which returns a list of all the values
  for the specified name and string type.
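Reading the values back might look like this (the returned array is
allocated and must be freed by the caller; the strings it points to are
borrowed from @params):

    #include <stdio.h>
    #include <stdlib.h>

    static void
    printMigrateDisks(virTypedParameterPtr params, int nparams)
    {
        const char **disks = NULL;
        int i;
        int ndisks = virTypedParamsGetStringList(params, nparams,
                                                 "migrate_disks", &disks);

        for (i = 0; i < ndisks; i++)
            printf("disk: %s\n", disks[i]);
        free(disks);
    }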
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Allow multi-value parameters to be built using the virTypedParamsAdd*
functions by removing the check for duplicates.
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The `virTypedParamsValidate' function can now be instructed to allow
multiple entries for some of the keys. To allow this, OR the type with
the `VIR_TYPED_PARAM_MULTIPLE' flag.
Add unit tests for this new behaviour.
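A validation call after this change might look like the fragment below
(the vararg list is name/type pairs terminated by NULL):

    if (virTypedParamsValidate(params, nparams,
                               VIR_MIGRATE_PARAM_URI,
                               VIR_TYPED_PARAM_STRING,
                               VIR_MIGRATE_PARAM_MIGRATE_DISKS,
                               VIR_TYPED_PARAM_STRING |
                               VIR_TYPED_PARAM_MULTIPLE,
                               NULL) < 0)
        goto error;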
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When playing with disk migration lately, I've noticed this warning in
the domain logs:
    WARNING: Image format was not specified for 'nbd://masina:49153/drive-virtio-disk0' and probing guessed raw.
    Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
    Specify the 'raw' format explicitly to remove the restrictions.
So I started digging into the qemu source code to see what triggered
the warning. I'd expect qemu to know the formats of the guest's disks,
since we specify them on the command line. This led me to
qmp_drive_mirror(), where the following can be found:
    if (!has_format) {
        format = mode == NEW_IMAGE_MODE_EXISTING ? NULL : bs->drv->format_name;
    }
So, the format is automatically initialized from the disk if and only if
mode != "existing". Unfortunately, in migration we are tied to using
this mode (NBD doesn't support creating new images). Therefore the only
way to avoid this warning is to pass the format explicitly. The
discussion on the mailing list [1] resulted in code that always forces
the NBD export to use the "raw" format.
[1] https://www.redhat.com/archives/libvir-list/2015-June/msg00153.html
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
This function returns a string (domain XML). Since commit d3ce7363,
when it was first introduced, it has been indented incorrectly:
    static char
    *qemuMigrationBeginPhase(..)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If virDomainObjGetDefs as used in qemuDomainPinIOThread failed, the
code would jump to the 'cleanup' label after having acquired the job,
thus the VM would be locked forever.
Introduced in commit cac6d639.
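The control flow after the fix, sketched (bodies trimmed): once the job
is acquired, every error path has to unwind through an endjob label.

    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
        goto cleanup;

    if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
        goto endjob;                  /* was: goto cleanup, leaking the job */

    /* ... perform the actual pinning ... */

    endjob:
        qemuDomainObjEndJob(driver, vm);

    cleanup:
        virDomainObjEndAPI(&vm);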
The privileged flag will not change, while the configuration might.
Make the 'privileged' flag a member of the driver again and mark
it immutable. Should that ever change, add an accessor that will group
reads of the state.
virDomainObjGetOneDef will help to retrieve the correct definition
pointer from @vm in cases where VIR_DOMAIN_AFFECT_LIVE and
VIR_DOMAIN_AFFECT_CONFIG are mutually exclusive. The function simply
returns the correct pointer. Similarly to virDomainObjGetDefs, this
will greatly simplify the code.
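Typical usage, assuming the exclusive-flags check has already run:

    virDomainDefPtr def;

    if (!(def = virDomainObjGetOneDef(vm, flags)))
        goto cleanup;

    if (flags & VIR_DOMAIN_AFFECT_LIVE) {
        /* def is vm->def, the live definition */
    } else {
        /* def is the persistent definition */
    }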
If @flags contains only VIR_DOMAIN_AFFECT_CONFIG and @vm is active, the
function would return the active config rather than the persistent one
it should return. This happened because virDomainObjGetDefs was checking
the updated flags, which may not contain VIR_DOMAIN_AFFECT_LIVE if it
was not requested, even if @vm is active.
Additionally, the function did not take the flags into account when
setting the pointers, which are later used to determine whether the code
needs to update the given configuration.
The mistake was caught by the virt-test suite.
When hotplugging a memory device, there was no check for a conflict
between the address space to be used by the newly added memory device
and that of any existing device, which qemu disallows.
This patch adds a check to ensure the new device's address doesn't
conflict with any existing device.
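The core of the check is a plain interval-overlap test; a self-contained
sketch (the real code walks all existing memory devices):

    #include <stdbool.h>

    /* two half-open ranges [addr, addr + size) overlap iff each
     * one starts before the other one ends */
    static bool
    memDeviceRangesOverlap(unsigned long long addr1, unsigned long long size1,
                           unsigned long long addr2, unsigned long long size2)
    {
        return addr1 < addr2 + size2 && addr2 < addr1 + size1;
    }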
Signed-off-by: Luyao Huang <lhuang@redhat.com>
VIR_APPEND_ELEMENT would clear @srv to NULL after successfully
inserting it, thus the reference count could not be increased
afterwards. Switch to VIR_APPEND_ELEMENT_COPY. This fixes a crash after
terminating the daemon.
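A sketch of the fix (names illustrative; error handling trimmed): _COPY
leaves @srv intact, so the reference can still be taken after the
insertion.

    if (VIR_APPEND_ELEMENT_COPY(dmn->servers, dmn->nservers, srv) < 0)
        goto error;
    virObjectRef(srv);   /* with VIR_APPEND_ELEMENT, srv would already be NULL */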