If QEMU fails during incoming migration, the domain disappears, including
a possibly useful error message read from the QEMU log file. Let's
remember the error in virQEMUDriver so that Finish can report more than
just "no such domain".
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Since we already support the MIGRATION event, we just need to make sure
the domain condition is signalled whenever a p2p connection drops or the
domain is paused due to an I/O error, and we can avoid waking up every
50 ms to check whether something happened.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We don't need to call query-migrate every 50 ms when we get the current
migration state via the MIGRATION event.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When migration fails in qemuMigrationPrepareAny, we unconditionally call
qemuDomainRemoveInactive, which should only be called for transient
domains. The check for !vm->persistent was accidentally removed by
commit 540c339.
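A minimal sketch of the restored check on the failure path, assuming the
cleanup code looks roughly like this:

/* Removing an inactive domain from the domain list is only valid
 * for transient domains; the shape of the restored check is an
 * assumption based on the description above. */
if (!vm->persistent)
    qemuDomainRemoveInactive(driver, vm);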
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When the connection to the destination host drops during a p2p
migration, we know we will have to cancel the migration; it doesn't make
sense to waste resources by trying to finish it. We already do so after
sending the "migrate" command to QEMU, and we should do it while waiting
for drive mirrors to become ready too.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Checking the status of all parts of a migration and aborting it when
something fails is a complex task, which makes the waiting loop hard to
read. This patch moves all the checks into a separate function, similarly
to what was done for the drive mirror loops.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Instead of passing the current job name to several functions which
already know what the current job is, we can generate the name where we
actually need to use it.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Once we start waiting for migration events instead of polling
query-migrate, priv->job.current will no longer be regularly updated
because we will get the current status directly from the events. Thus
virDomainGetJob{Info,Stats} will have to query QEMU, but they can't just
blindly update the priv->job.current structure. This patch introduces
qemuMigrationFetchJobStatus, which just fills in a caller-supplied
structure, and makes qemuMigrationUpdateJobStatus a tiny wrapper around
it.
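A sketch of the resulting split; the exact signatures are assumptions
based on this description, not copied from the tree:

/* qemuMigrationFetchJobStatus fills in a caller-supplied jobInfo;
 * the old entry point merely targets priv->job.current. */
static int
qemuMigrationUpdateJobStatus(virQEMUDriverPtr driver,
                             virDomainObjPtr vm,
                             qemuDomainAsyncJob asyncJob)
{
    qemuDomainObjPrivatePtr priv = vm->privateData;
    return qemuMigrationFetchJobStatus(driver, vm, asyncJob,
                                       priv->job.current);
}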
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The QEMU_CAPS_SEAMLESS_MIGRATION capability says QEMU supports the
SPICE_MIGRATE_COMPLETED event. Thus we can just drop all the code which
polls query-spice and replace it with waiting for the event.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When libvirtd is restarted during migration, we properly cancel the
ongoing migration (unless it managed to almost finish before the
restart). But if we were also migrating storage using NBD, we would
completely forget about the running disk mirrors.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Abort the migration as soon as we detect that any of the disk mirrors
has failed. There's no sense in trying to finish the memory migration
first.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Instead of cancelling disk mirrors sequentially, let's just call
block-job-cancel for all migrating disks and then wait until they all
disappear.
In case we cancel disk mirrors at the end of a successful migration, we
also need to check that all block jobs completed successfully. Otherwise
we have to abort the migration.
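The shape of the change, as a hedged sketch; the helper names and fields
are taken from this message or assumed, not verified against the tree:

/* First pass: ask QEMU to cancel every mirror job, remembering
 * failures but continuing with the remaining disks. */
for (i = 0; i < vm->def->ndisks; i++) {
    virDomainDiskDefPtr disk = vm->def->disks[i];
    if (!disk->mirror)
        continue;
    if (qemuMonitorBlockJobCancel(priv->mon, disk->info.alias) < 0)
        failed = true;
}
/* Second pass: wait until every mirror job disappears; when this
 * runs at the end of a successful migration, also verify each job
 * completed cleanly and abort the migration otherwise. */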
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
By switching block jobs to use domain conditions, we can drop some
pretty complicated code in NBD storage migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Because we are polling, we may detect some errors after we ask QEMU for
the migration status even though they occurred before. If this happens
and QEMU reports the migration completed successfully, we would happily
report that the migration succeeded even though we should have cancelled
it because of the other error.
In practice it is not a big issue now, but it will become a much bigger
issue once the check for storage migration status is moved inside the
loop in qemuMigrationWaitForCompletion.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1203032
Implement a `migrate_disks' parameter for the QEMU driver. This multi-
value parameter can be used to explicitly specify which block devices
are to be migrated using the NBD server. Tunnelled migration using NBD
remains to be done.
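A hedged client-side sketch of supplying the multi-value parameter;
VIR_MIGRATE_PARAM_MIGRATE_DISKS is assumed to be the constant backing
`migrate_disks', and the disk targets are illustrative:

#include <libvirt/libvirt.h>

virTypedParameterPtr params = NULL;
int nparams = 0, maxparams = 0;

/* Ask for only vda and vdb to be copied over NBD. */
virTypedParamsAddString(&params, &nparams, &maxparams,
                        VIR_MIGRATE_PARAM_MIGRATE_DISKS, "vda");
virTypedParamsAddString(&params, &nparams, &maxparams,
                        VIR_MIGRATE_PARAM_MIGRATE_DISKS, "vdb");
/* params is then passed to virDomainMigrate3(). */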
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
While playing with disk migration lately, I noticed this warning in the
domain logs:
WARNING: Image format was not specified for 'nbd://masina:49153/drive-virtio-disk0' and probing guessed raw.
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
Specify the 'raw' format explicitly to remove the restrictions.
So I started digging into the qemu source code to see what triggered
the warning. I'd expect qemu to know the formats of the guest's disks
since we tell it on the command line. This led me to qmp_drive_mirror(),
where the following can be found:
if (!has_format) {
    format = mode == NEW_IMAGE_MODE_EXISTING ? NULL : bs->drv->format_name;
}
So, the format is automatically initialized from the disk iff mode !=
"existing". Unfortunately, in migration we are tied to this mode (NBD
doesn't support creating new images). Therefore the only way to avoid
the warning is to pass the format explicitly. The discussion on the
mailing list [1] resulted in code that always forces the NBD export to
use the "raw" format.
[1] https://www.redhat.com/archives/libvir-list/2015-June/msg00153.html
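The resulting call might look roughly like this; the monitor wrapper's
exact signature is an assumption:

/* Pass "raw" explicitly so qemu does not probe the NBD export;
 * nbd_dest is the nbd://host:port/device URI built for the disk. */
if (qemuMonitorDriveMirror(priv->mon, diskAlias, nbd_dest, "raw",
                           speed, 0, 0, mirror_flags) < 0)
    goto cleanup;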
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
This function returns a string (the domain XML). Since commit d3ce7363,
when it was first introduced, it has been indented incorrectly:
static char
*qemuMigrationBeginPhase(..)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
By default, getaddrinfo() will return addresses for both
IPv4 and IPv6 if both protocols are enabled, and so the
RPC code will listen/connect to both protocols too. There
may be cases where it is desirable to restrict this to
just one of the two protocols, so add an 'int family'
parameter to all the TCP related APIs.
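For illustration, the family restriction maps directly onto
getaddrinfo()'s standard hints; the surrounding variable names are ours:

#include <sys/socket.h>
#include <netdb.h>
#include <string.h>

struct addrinfo hints;
memset(&hints, 0, sizeof(hints));
hints.ai_family = family;        /* AF_UNSPEC, AF_INET or AF_INET6 */
hints.ai_socktype = SOCK_STREAM;
/* getaddrinfo(node, service, &hints, &res) now only returns
 * addresses of the requested family. */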
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Most virDomainDiskIndexByName callers do not care about the index; what
they really want is a disk def pointer.
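A sketch of the obvious convenience wrapper; the name and signature are
assumptions based on this description:

virDomainDiskDefPtr
virDomainDiskByName(virDomainDefPtr def,
                    const char *name,
                    bool allow_ambiguous)
{
    int idx = virDomainDiskIndexByName(def, name, allow_ambiguous);

    return idx < 0 ? NULL : def->disks[idx];
}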
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When cancelling drive mirrors, always try to do so for all disks, even
if it fails for some of them. Report the first error we saw.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Avoid redoing the same filtering over and over every time we need to
walk through all the disks which are being migrated.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
As of eeb008dbfcf31 the variable is not used anymore. Drop it.
Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When migrating a domain while changing its name and using the
VIR_MIGRATE_PERSIST_DEST flag, libvirt would fail to properly change the
name in the persistent definition. The inconsistency results in weird
behavior when dumping the domain XML, destroying the domain, restarting
libvirtd, and likely in several other situations.
Since the new name is already stored in vm->def->name, we just need to
make sure the persistent definition uses this new name too.
https://bugzilla.redhat.com/show_bug.cgi?id=1076354
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
In qemuMigrationDriveMirror we can start all disk mirrors in parallel.
We wait until they are all ready, or until one of them aborts.
In qemuMigrationCancelDriveMirror, we wait until all mirrors are
properly stopped. This is necessary to ensure that the destination VM is
fully in sync with the (paused) source VM.
If a drive mirror cannot be cancelled, then the destination is not in a
consistent state. In this case it is not safe to continue with the
migration.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
virDomainGetJobStats is able to report statistics of a completed
migration; however, to get usable downtime and total time statistics,
both hosts have to keep synchronized time. To provide at least some
estimate of the times even when NTP daemons are not running on both
hosts, we can just ignore the time needed to transfer the migration
cookie to the destination host. The result will still be inaccurate, but
a bit more predictable. The total/down time will just be at least what
we report.
https://bugzilla.redhat.com/show_bug.cgi?id=1213434
This is basically turning qemuDomObjEndAPI into a more general
function. Other drivers which get a reference to domain objects may
benefit from this function too.
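The generalized helper plausibly reduces to something like this sketch
(the generic name is an assumption based on this description):

/* Drop the lock and reference taken when looking up @vm and clear
 * the caller's pointer so the object cannot be used afterwards. */
void
virDomainObjEndAPI(virDomainObjPtr *vm)
{
    if (!*vm)
        return;

    virObjectUnlock(*vm);
    virObjectUnref(*vm);
    *vm = NULL;
}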
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This needs to be specified in way too many places for a simple
validation check. The ostype/arch/virttype validation checks later in
DomainDefParseXML should catch most of the cases that this was covering.
1. 'last_good_net' indicates the index of the last successfully
configured net, so def->nets[last_good_net] should also be cleaned up if
an error occurs.
2. If an error occurs in 'virNetDevMacVLanVPortProfileRegisterCallback'
(the second 'goto err_exit' in the loop), we should also do the
'virNetDevVPortProfileDisassociate' cleanup for the
'virNetDevVPortProfileAssociate' (the first code block in the loop). So
we should consider the net successfully configured once the first code
block in the loop finishes; see the sketch below.
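A self-contained illustration of the cleanup-index pattern being fixed;
all names here are placeholders, not libvirt APIs:

#include <stdio.h>

static int associate(int i)     { return i == 3 ? -1 : 0; }  /* fails on net 3 */
static int register_cb(int i)   { (void)i; return 0; }
static void disassociate(int i) { printf("disassociate net %d\n", i); }

static int setup_nets(int nnets)
{
    int last_good_net = -1;
    int i;

    for (i = 0; i < nnets; i++) {
        if (associate(i) < 0)
            goto err_exit;
        last_good_net = i;      /* associate succeeded: undo it on any later error */
        if (register_cb(i) < 0)
            goto err_exit;      /* net i itself still needs disassociate */
    }
    return 0;

 err_exit:
    for (i = 0; i <= last_good_net; i++)    /* inclusive of last_good_net */
        disassociate(i);
    return -1;
}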
Signed-off-by: Huanle Han <hanxueluo@gmail.com>
When pre-creating storage for domains, we need to find the corresponding
disk in the XML on the destination (the domain XML may differ there,
e.g. a disk may be accessible under a different path). For better
debugging, I'm printing all the info I received about a disk. But there
was a typo when printing the disk capacity: "%lluu" instead of "%llu".
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The problem with the previous implementation is that even when
qemuMigrationUpdateJobStatus() detects a migration job has completed, it
will still sleep for 50 ms (which is unnecessary and only adds to the VM
pause time).
Signed-off-by: Xing Lin <xinglin@cs.utah.edu>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
qemuMigrationCookieAddNBD is usually called from within an async
MIGRATION_OUT or MIGRATION_IN job, so it needs to start a nested job.
(The one exception is during the Begin phase when change protection
isn't enabled, but qemuDomainObjEnterMonitorAsync will behave the same
as qemuDomainObjEnterMonitor in this case.)
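In code, the fix amounts to the following pattern, sketched with the
function names used in this message:

/* Entering the monitor from inside an async job must request a
 * nested job rather than entering directly. */
if (qemuDomainObjEnterMonitorAsync(driver, vm, priv->job.asyncJob) < 0)
    return -1;
/* ... monitor commands such as query-block run here ... */
qemuDomainObjExitMonitor(driver, vm);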
This bug was encountered with a libvirt client that repeatedly queries
the disk mirroring block job info during a migration. If one of these
queries occurs just as the Perform migration cookie is baked, libvirt
crashes.
Relevant logs are as follows:
6701: warning : qemuDomainObjEnterMonitorInternal:1544 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous
[1] 6701: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block","id":"libvirt-629"}
[2] 6699: info : qemuMonitorIOWrite:503 : QEMU_MONITOR_IO_WRITE: mon=0x7fefdc004700 buf={"execute":"query-block","id":"libvirt-629"}
[3] 6704: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block-jobs","id":"libvirt-630"}
[4] 6699: info : qemuMonitorJSONIOProcessLine:203 : QEMU_MONITOR_RECV_REPLY: mon=0x7fefdc004700 reply={"return": [...], "id": "libvirt-629"}
6699: error : qemuMonitorJSONIOProcessLine:211 : internal error: Unexpected JSON reply '{"return": [...], "id": "libvirt-629"}'
At [1] qemuMonitorBlockStatsUpdateCapacity sends its request, then waits
on mon->notify. At [2] the request is written out to the monitor socket.
At [3] qemuMonitorBlockJobInfo sends its request, and also waits on
mon->notify. The reply from the first request is received at [4].
However, qemuMonitorJSONIOProcessLine is not expecting this reply since
the second request hadn't completed sending. The reply is dropped and an
error is returned.
qemuMonitorIO signals mon->notify twice during its error handling,
waking up both of the threads waiting on it. One of them clears mon->msg
as it exits qemuMonitorSend; the other crashes:
qemuMonitorSend (mon=0x7fefdc004700, msg=<value optimized out>) at qemu/qemu_monitor.c:975
975 while (!mon->msg->finished) {
(gdb) print mon->msg
$1 = (qemuMonitorMessagePtr) 0x0
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
If a VM migration is aborted, a disk mirror may be failed by QEMU before
libvirt has a chance to cancel it. The disk->mirrorState remains at
_ABORT in this case, and this breaks subsequent mirror jobs on that
disk.
We should instead check the mirrorState directly and transition to _NONE
if it is already aborted. Do the check *after* aborting the block job in
QEMU to avoid a race.
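A sketch of the described transition; the enum constants follow
libvirt's usual naming and should be treated as assumptions here:

/* After telling QEMU to abort the job: if QEMU had already failed
 * the mirror, no further event will reset the state, so do it now. */
if (disk->mirrorState == VIR_DOMAIN_DISK_MIRROR_STATE_ABORT)
    disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_NONE;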
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
If virCloseCallbacksSet fails, qemuMigrationBegin must return NULL to
indicate an error occurred.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
virDomainHasDiskMirror() currently detects only jobs that add the mirror
elements. Since some operations, like migration, are interlocked by
existing block jobs on the given domain, the check needs to cover
regular block jobs too.
This patch renames virDomainHasDiskMirror to virDomainHasDiskBlockjob
and adds an argument that allows callers to request that it return true
only for block copy jobs, as those interlock making the domain
persistent. The other two uses trigger on any block job type.
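A hedged sketch of the renamed helper; the field names are assumptions
based on this description:

bool
virDomainHasDiskBlockjob(virDomainObjPtr vm,
                         bool copy_only)
{
    size_t i;

    for (i = 0; i < vm->def->ndisks; i++) {
        virDomainDiskDefPtr disk = vm->def->disks[i];

        /* Block copy jobs always count; other job types only when
         * the caller did not restrict the check to copy jobs. */
        if (disk->mirror && disk->mirrorJob == VIR_DOMAIN_BLOCK_JOB_TYPE_COPY)
            return true;
        if (!copy_only && disk->blockjob)
            return true;
    }
    return false;
}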
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Make sure that libvirt has all the vital information needed to reliably
represent the configuration of the guest's memory devices in case of a
migration.
This patch forbids migration in case the required slot number and module
base address are not present (i.e. they failed to be loaded from qemu
via the monitor).
Commit cf54c60699833b3791a5d0eb3eb5a1948c267f6b introduced the ability
to create missing storage volumes during migration. For network disks,
however, we may not necessarily be able to detect whether they already
exist -- there is no straightforward way to map the disk to a storage
volume, and even if there were, it's possible that no configured storage
pool actually contains the disk.
It is better to assume the network disk exists in this case, rather than
aborting the migration completely. If the volume really is missing, QEMU
will generate an appropriate error later in the migration.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
In qemu 2.3, the migration status will include 'cancelling' in the
window between when an asynchronous cancel has been requested and
when the migration is actually halted. Previously, qemu hid this
state and reported 'active'. Libvirt manages the sequence okay
even when the string is unrecognized (that is, it will report an
unknown state:
Migration: [ 69 %]^Cerror: internal error: unexpected migration status in cancelling.
but the migration is still cancelled), but recognizing the string
makes for a smoother user experience.
* src/qemu/qemu_monitor.h
(QEMU_MONITOR_MIGRATION_STATUS_CANCELLING): Add enum.
* src/qemu/qemu_monitor.c (qemuMonitorMigrationStatus): Map it.
* src/qemu/qemu_migration.c (qemuMigrationUpdateJobStatus): Adjust
clients.
* src/qemu/qemu_monitor_json.c
(qemuMonitorJSONGetMigrationStatusReply): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
virnetdevopenvswitch.h declares a few functions that can be called to
add ports to and remove them from OVS bridges, and to retrieve the
migration data for a port. It does not contain any data definitions that
are used by domain_conf.h, yet domain_conf.h was #including it for some
reason; instead, only the files that actually use these functions should
be directly #including virnetdevopenvswitch.h. This adds a few lines to
the project, but saves all the files that don't need it from the extra
computing, and makes the dependencies more clear cut.
There was a mess in the way we store the unlimited value for memory
limits and in how we handled values provided by the user. Internally
there were two possible ways to store the unlimited value: as 0 or as
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED. Because we chose to store memory
limits as unsigned long long, we cannot use -1 to represent unlimited.
It's much easier for us to say that everything greater than or equal to
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED means unlimited and leave 0 as a valid
value, despite that it makes no sense to set a limit of 0.
Remove the unnecessary function virCompareLimitUlong. The test update
prevents 0 from being misused as unlimited in the future.
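Under this convention the checks collapse into a trivial helper; a
sketch, with the name as an assumption:

/* Everything >= VIR_DOMAIN_MEMORY_PARAM_UNLIMITED means unlimited;
 * 0 stays a valid (if useless) limit. */
static bool
virMemoryLimitIsSet(unsigned long long value)
{
    return value < VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
}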
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1146539
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1179678
When migrating with storage, libvirt iterates over domain disks and
instructs qemu to migrate the ones we are interested in (shared, RO and
source-less disks are skipped). The disks are migrated in series. No
new disk transfer is started until the previous one has been quiesced.
This is checked on the qemu monitor via the 'query-block-jobs' command.
If the disk has been quiesced, it has practically gone from copying its
content to the mirroring state, where all disk writes are mirrored to
the other side of the migration too. Having said that, there's one
inherent error in the design. The monitor command we use reports only
active jobs. So if the job fails for whatever reason, we will not see it
anymore in the command output. And this can happen fairly easily: just
try to migrate a domain with storage. If the storage migration fails
(e.g. due to ENOSPC on the destination) we resume the guest on the
destination and let it run on a partly copied disk.
The proper fix is what even the comment in the code says: listen for
qemu events instead of polling. If the storage migration changes state,
an event is emitted and we can act accordingly: either consider the disk
copied and continue the process, or consider the disk mangled and abort
the migration.
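A hedged sketch of acting on the events; the status constants are
libvirt's public block job values, while the handler shape is an
assumption:

/* React to the block job status delivered by the event instead of
 * polling query-block-jobs. */
switch (status) {
case VIR_DOMAIN_BLOCK_JOB_READY:
    disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_READY;
    break;
case VIR_DOMAIN_BLOCK_JOB_FAILED:
case VIR_DOMAIN_BLOCK_JOB_CANCELED:
    disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_ABORT;
    break;
}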
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1191355
When we attempt to migrate a vm with a migrateuri that has no scheme:
# virsh migrate test4 --live qemu+ssh://lhuang/system --migrateuri 127.0.0.1
the target libvirtd will crash because uri->scheme is NULL in
qemuMigrationPrepareDirect on this line:
if (STRNEQ(uri->scheme, "tcp") &&
Add a check for a NULL scheme before this line. Also fix a similar bug
in doNativeMigrate, which could only happen when the destination
libvirtd returned an incorrect URI.
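The added check might look like this; the exact error wording is ours:

if (!uri->scheme) {
    virReportError(VIR_ERR_INVALID_ARG,
                   _("missing scheme in migration URI: %s"),
                   uri_in);
    goto cleanup;
}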
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>