694 Commits

Author SHA1 Message Date
Jiri Denemark
e68f395fcb qemu: Remember incoming migration errors
If QEMU fails during incoming migration, the domain disappears, taking
with it a possibly useful error message read from the QEMU log file.
Let's remember the error in virQEMUDriver so that Finish can report more
than just "no such domain".

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-07-10 11:47:13 +02:00
Jiri Denemark
3409f5bc4e qemu: Wait for migration events on domain condition
Since we already support the MIGRATION event, we just need to make sure
the domain condition is signalled whenever a p2p connection drops or the
domain is paused due to an I/O error; then we can avoid waking up every
50 ms to check whether something happened.
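
As a generic sketch, with a plain pthread condition variable standing in
for the domain condition (none of these names are libvirt's API):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t domCond = PTHREAD_COND_INITIALIZER;
    static bool migrationDone, connectionBroken, pausedOnIOError;

    /* event handlers signal the condition instead of leaving a flag
     * for a 50 ms polling loop to discover */
    static void onMigrationEvent(bool done)
    {
        pthread_mutex_lock(&lock);
        migrationDone = done;
        pthread_cond_broadcast(&domCond);
        pthread_mutex_unlock(&lock);
    }

    static void waitForMigration(void)
    {
        pthread_mutex_lock(&lock);
        while (!migrationDone && !connectionBroken && !pausedOnIOError)
            pthread_cond_wait(&domCond, &lock);   /* sleeps until signalled */
        pthread_mutex_unlock(&lock);
    }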

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-07-09 21:57:30 +02:00
Jiri Denemark
6d2edb6a42 qemu: Update migration state according to MIGRATION event
We don't need to call query-migrate every 50 ms when we get the current
migration state via the MIGRATION event.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-07-09 21:53:35 +02:00
Luyao Huang
09444724bc qemu: Avoid removing persistent config if migration fails
When migration fails in qemuMigrationPrepareAny, we unconditionally call
qemuDomainRemoveInactive, which should only be called for transient
domains. The check for !vm->persistent was accidentally removed by
commit 540c339.

Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-25 10:18:39 +02:00
Jiri Denemark
d823fa6f64 qemu: cancel drive mirrors when p2p connection breaks
When the connection to the destination host drops during a p2p migration,
we know we will have to cancel the migration; it doesn't make sense to
waste resources trying to finish it. We already do so after sending the
"migrate" command to QEMU, and we should do the same while waiting for
drive mirrors to become ready.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:19:49 +02:00
Jiri Denemark
d29c45587b qemu: Refactor qemuMigrationWaitForCompletion
Checking the status of all parts of a migration and aborting it when
something fails is complex and makes the waiting loop hard to read.
This patch moves all the checks into a separate function, similarly to
what was done for the drive mirror loops.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:15:12 +02:00
Jiri Denemark
92b5bcccaa qemu: Don't pass redundant job name around
Instead of passing the current job name to several functions which
already know what the current job is, we can generate the name where we
actually need to use it.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:15:12 +02:00
Jiri Denemark
c1a7f199e8 qemu: Refactor qemuMigrationUpdateJobStatus
Once we start waiting for migration events instead of polling
query-migrate, priv->job.current will not be regularly updated anymore
because we will get the current status directly from the events. Thus
virDomainGetJob{Info,Stats} will have to query QEMU, but they can't just
blindly update the priv->job.current structure. This patch introduces
qemuMigrationFetchJobStatus which just fills in a caller supplied
structure and makes qemuMigrationUpdateJobStatus a tiny wrapper around
it.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:15:12 +02:00
Jiri Denemark
2ad46e5b0e qemu: Do not poll for spice migration status
The QEMU_CAPS_SEAMLESS_MIGRATION capability says QEMU supports the
SPICE_MIGRATE_COMPLETED event. Thus we can just drop all the code that
polls query-spice and replace it with waiting for the event.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:15:11 +02:00
Jiri Denemark
d814c70b3b qemu: Use domain condition for asyncAbort
To avoid polling for asyncAbort flag changes.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:15:11 +02:00
Jiri Denemark
e8f263e0d0 qemu: Cancel disk mirrors after libvirtd restart
When libvirtd is restarted during migration, we properly cancel the
ongoing migration (unless it managed to almost finish before the
restart). But if we were also migrating storage using NBD, we would
completely forget about the running disk mirrors.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:15:11 +02:00
Jiri Denemark
40cd0290dc qemu: Make qemuMigrationCancelDriveMirror usable without async job
We don't have an async job when reconnecting to existing domains after
libvirtd restart.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:15:10 +02:00
Jiri Denemark
a9ba39a1a7 qemu: Abort migration early if disk mirror failed
Abort migration as soon as we detect that some of the disk mirrors
failed. There's no sense in trying to finish memory migration first.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:15:10 +02:00
Jiri Denemark
cebb110f73 qemu: Cancel storage migration in parallel
Instead of cancelling disk mirrors sequentially, let's just call
block-job-cancel for all migrating disks and then wait until all
disappear.

When we cancel disk mirrors at the end of a successful migration, we
also need to check that all block jobs completed successfully. Otherwise
we have to abort the migration.
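
Roughly, the cancellation then takes this shape. This is a hedged sketch
with hypothetical helpers (cancelBlockJob, mirrorGone, mirrorFailed,
waitOnDomainCondition), not the actual qemu_migration.c code:

    #include <stdbool.h>
    #include <stddef.h>

    struct disk { bool migrating; };

    /* hypothetical helpers standing in for monitor calls and event state */
    int  cancelBlockJob(struct disk *d);        /* issues block-job-cancel */
    bool mirrorGone(const struct disk *d);      /* job no longer present   */
    bool mirrorFailed(const struct disk *d);    /* job ended with an error */
    void waitOnDomainCondition(void);

    static int cancelAllDriveMirrors(struct disk *disks, size_t ndisks,
                                     bool checkSuccess)
    {
        size_t i;
        int ret = 0;

        /* first pass: fire off block-job-cancel for every migrating disk */
        for (i = 0; i < ndisks; i++)
            if (disks[i].migrating && cancelBlockJob(&disks[i]) < 0)
                ret = -1;

        /* second pass: wait until every cancelled job has disappeared */
        for (i = 0; i < ndisks; i++) {
            while (disks[i].migrating && !mirrorGone(&disks[i]))
                waitOnDomainCondition();
            /* at the end of a successful migration the jobs must have
             * completed cleanly, otherwise the migration must be aborted */
            if (checkSuccess && mirrorFailed(&disks[i]))
                ret = -1;
        }
        return ret;
    }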

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:15:10 +02:00
Jiri Denemark
4172b96a3e qemu: Use domain condition for synchronous block jobs
By switching block jobs to use domain conditions, we can drop some
pretty complicated code in NBD storage migration.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:15:10 +02:00
Jiri Denemark
39564891f8 qemu: Properly report failed migration
Because we are polling, we may detect some errors only after asking QEMU
for the migration status, even though they occurred earlier. If this
happens and QEMU reports the migration completed successfully, we would
happily report that the migration succeeded even though we should have
cancelled it because of the other error.

In practice this is not a big issue now, but it will become a much bigger
issue once the check for storage migration status is moved inside the
loop in qemuMigrationWaitForCompletion.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:13:16 +02:00
Pavel Boldin
93a19e283e qemu: migration: selective block device migration
https://bugzilla.redhat.com/show_bug.cgi?id=1203032

Implement a `migrate_disks' parameter for the QEMU driver. This multi-
value parameter can be used to explicitly specify which block devices
are to be migrated using the NBD server. Tunnelled migration using NBD
remains to be done.
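
For illustration, selecting disks through the public API could look like
the sketch below; it assumes the multi-value VIR_MIGRATE_PARAM_MIGRATE_DISKS
parameter is simply added once per disk target:

    #include <libvirt/libvirt.h>

    /* sketch: migrate only vda and vdb over the NBD server */
    static int migrateSelectedDisks(virDomainPtr dom, const char *dconnuri)
    {
        virTypedParameterPtr params = NULL;
        int nparams = 0, maxparams = 0;
        int ret = -1;

        if (virTypedParamsAddString(&params, &nparams, &maxparams,
                                    VIR_MIGRATE_PARAM_MIGRATE_DISKS, "vda") < 0 ||
            virTypedParamsAddString(&params, &nparams, &maxparams,
                                    VIR_MIGRATE_PARAM_MIGRATE_DISKS, "vdb") < 0)
            goto cleanup;

        ret = virDomainMigrateToURI3(dom, dconnuri, params, nparams,
                                     VIR_MIGRATE_LIVE |
                                     VIR_MIGRATE_NON_SHARED_DISK);

     cleanup:
        virTypedParamsFree(params, nparams);
        return ret;
    }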

Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-06-18 16:46:09 +02:00
Michal Privoznik
cb7297c150 qemuMigrationDriveMirror: Force raw format for NBD
When playing with disk migration lately, I've noticed this warning in
domain logs:

WARNING: Image format was not specified for 'nbd://masina:49153/drive-virtio-disk0' and probing guessed raw.
         Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
         Specify the 'raw' format explicitly to remove the restrictions.

So I started digging into the qemu source code to see what triggered
the warning. I'd expect qemu to know the formats of the guest's disks
since we tell it on the command line. This led me to qmp_drive_mirror(),
where the following can be found:

    if (!has_format) {
        format = mode == NEW_IMAGE_MODE_EXISTING ? NULL : bs->drv->format_name;
    }

So, the format is automatically initialized from the disk iff mode !=
"existing". Unfortunately, for migration we are tied to this mode
(NBD doesn't support creating new images). Therefore the only way to
avoid this warning is to pass the format explicitly. The discussion on
the mailing list [1] resulted in code that always forces the "raw"
format for the NBD export.

[1] https://www.redhat.com/archives/libvir-list/2015-June/msg00153.html
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
2015-06-18 16:46:09 +02:00
Michal Privoznik
9c5efd1afd qemuMigrationBeginPhase: Fix function header indentation
This function returns a string (domain XML). Since d3ce7363, when it
was first introduced, it has been indented incorrectly:

static char
*qemuMigrationBeginPhase(..)
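
i.e. the expected libvirt style keeps the asterisk with the return type:

static char *
qemuMigrationBeginPhase(..)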

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-06-18 16:46:09 +02:00
Daniel P. Berrange
d587704cc7 rpc: allow selection of TCP address family
By default, getaddrinfo() will return addresses for both
IPv4 and IPv6 if both protocols are enabled, and so the
RPC code will listen/connect to both protocols too. There
may be cases where it is desirable to restrict this to
just one of the two protocols, so add an 'int family'
parameter to all the TCP related APIs.
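
A generic sketch of what such a family parameter controls when resolving a
listen address with standard getaddrinfo() (this is not the libvirt RPC
code itself):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <string.h>

    /* family is AF_UNSPEC (both), AF_INET (IPv4 only) or AF_INET6 (IPv6 only) */
    static int resolveService(const char *node, const char *service,
                              int family, struct addrinfo **res)
    {
        struct addrinfo hints;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = family;        /* restrict to one protocol if asked */
        hints.ai_socktype = SOCK_STREAM;
        hints.ai_flags = AI_PASSIVE;     /* suitable for bind() on a listener */

        return getaddrinfo(node, service, &hints, res);
    }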

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
2015-06-11 12:11:18 +01:00
Ján Tomko
12b949dfb2 maint: remove incorrect apostrophes from 'its'
2015-06-04 10:01:42 +02:00
Jiri Denemark
82cffb58a1 Use virDomainDiskByName where appropriate
Most virDomainDiskIndexByName callers do not care about the index; what
they really want is a disk def pointer.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-05-21 14:35:02 +02:00
Jiri Denemark
a692277873 qemu: Don't give up on first error in qemuMigrationCancelDriveMirror
When cancelling drive mirrors, always try to do so for all disks even
if it fails for some of them. Report the first error we saw.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-05-15 08:05:31 +02:00
Jiri Denemark
5139924b8d qemu: Keep track of what disks are being migrated
Instead of redoing the same filtering over and over every time we need
to walk through all the disks that are being migrated.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-05-15 08:05:31 +02:00
Jiri Denemark
46a7a49535 Move QEMU-only fields from virDomainDiskDef into privateData
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-05-15 08:05:31 +02:00
Jiri Denemark
078717e151 Rename virDomainHasBlockjob as qemuDomainHasBlockjob
And move it to qemu_domain.[ch] because this API is QEMU-only.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-05-15 08:05:26 +02:00
zhang bo
7eb5b4bf6f qemuMigrationPrepareAny: Drop useless variable @now
As of eeb008dbfcf31 the variable is not used anymore. Drop it.

Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-05-13 16:50:20 +02:00
Jiri Denemark
fc3601a308 qemu: Properly rename persistent def after migration
When migrating a domain while changing its name and using the
VIR_MIGRATE_PERSIST_DEST flag, libvirt would fail to properly change the
name in the persistent definition. The inconsistency results in weird
behavior when dumping the domain XML, destroying the domain, restarting
libvirtd, and likely in several other situations.

Since the new name is already stored in vm->def->name, we just need to
make sure the persistent definition uses this new name too.

https://bugzilla.redhat.com/show_bug.cgi?id=1076354

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-05-04 22:59:51 +02:00
Jiri Denemark
b45ec56f58 qemu: Forbid unsupported parameters for tunnelled migration
Neither a migration URI nor a listen address makes any sense for
tunnelled migration.

https://bugzilla.redhat.com/show_bug.cgi?id=1066375
https://bugzilla.redhat.com/show_bug.cgi?id=1073233

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-05-04 15:06:33 +02:00
Michael Chapman
99725f946c qemu: migration: use sync block job helpers
In qemuMigrationDriveMirror we can start all disk mirrors in parallel.
We wait until they are all ready, or one of them aborts.

In qemuMigrationCancelDriveMirror, we wait until all mirrors are
properly stopped. This is necessary to ensure that the destination VM is
fully in sync with the (paused) source VM.

If a drive mirror cannot be cancelled, then the destination is not in a
consistent state. In this case it is not safe to continue with the
migration.

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
2015-04-29 13:11:42 +02:00
Jiri Denemark
aa9f139599 migration: Usable time statistics without requiring NTP
virDomainGetJobStats is able to report statistics of a completed
migration; however, to get usable downtime and total time statistics,
both hosts have to keep synchronized time. To provide at least some
estimate of the times even when NTP daemons are not running on both
hosts, we can just ignore the time needed to transfer the migration
cookie to the destination host. The result will still be inaccurate,
but a bit more predictable: the total/down time will be at least what
we report.

https://bugzilla.redhat.com/show_bug.cgi?id=1213434
2015-04-24 15:02:00 +02:00
Michal Privoznik
79d14a9930 Introduce virDomainObjEndAPI
This basically turns qemuDomObjEndAPI into a more general function.
Other drivers that get a reference to domain objects may benefit from
this function too.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-04-24 13:22:45 +02:00
Peter Krempa
bd57977391 qemu: migration: Refactor hostdev validation in migration check
The hostdev check can error out right away.
2015-04-22 14:05:50 +02:00
Cole Robinson
835cf84b7e domain: conf: Drop expectedVirtTypes
This needs to be specified in way too many places for a simple validation
check. The ostype/arch/virttype validation checks later in
DomainDefParseXML should catch most of the cases that this was covering.
2015-04-20 16:43:43 -04:00
Huanle Han
c61ded8a7d qemu: fix index error when clean up vport profile
1. 'last_good_net' indicates the index of the last successfully configured
net, so def->nets[last_good_net] should also be cleaned up if an error
occurs.

2. If an error occurs in 'virNetDevMacVLanVPortProfileRegisterCallback'
(the second 'goto err_exit' in the loop), we should also do the
'virNetDevVPortProfileDisassociate' cleanup for the
'virNetDevVPortProfileAssociate' (the first code block in the loop). So we
should consider the net successfully configured once the first code block
in the loop finishes, as the sketch below illustrates.

Signed-off-by: Huanle Han <hanxueluo@gmail.com>
2015-04-14 14:49:15 +02:00
Peter Krempa
cfc0a3d4ce qemu: blockjob: Separate qemuDomainBlockJobAbort from qemuDomainBlockJobImpl
Sacrifice a few lines of code in favor of the code being more readable.
2015-04-14 10:00:56 +02:00
Michal Privoznik
65a88572ad qemuMigrationPrecreateStorage: Fix debug message
When pre-creating storage for domains, we need to find the corresponding
disk in the XML on the destination (the domain XML may differ there,
e.g. a disk may be accessible under a different path). For better
debugging, I'm printing all the info I received about a disk. But there
was a typo when printing the disk capacity: "%lluu" instead of "%llu".

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-04-13 11:40:57 +02:00
Xing Lin
522e81cbb5 qemu_migration.c: sleep first before checking for migration status.
The problem with the previous implementation is that even when
qemuMigrationUpdateJobStatus() detects a migration job has completed, it
still sleeps for 50 ms, which is unnecessary and only adds to the VM
pause time.
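
In simplified loop form (not the exact libvirt loop), the change amounts to:

    #include <unistd.h>
    #include <stdbool.h>

    void updateStatus(void);   /* stand-in for qemuMigrationUpdateJobStatus() */
    bool jobCompleted(void);

    /* before: the status check runs, then the loop sleeps 50 ms even when
     * the check has just seen the job complete, extending the VM pause */
    static void waitOld(void)
    {
        do {
            updateStatus();
            usleep(50 * 1000);
        } while (!jobCompleted());
    }

    /* after: sleep first, then check, so completion is acted on immediately */
    static void waitNew(void)
    {
        do {
            usleep(50 * 1000);
            updateStatus();
        } while (!jobCompleted());
    }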

Signed-off-by: Xing Lin <xinglin@cs.utah.edu>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-04-13 09:52:28 +02:00
Michael Chapman
72df8314f0 qemu_migrate: use nested job when adding NBD to cookie
qemuMigrationCookieAddNBD is usually called from within an async
MIGRATION_OUT or MIGRATION_IN job, so it needs to start a nested job.

(The one exception is during the Begin phase when change protection
isn't enabled, but qemuDomainObjEnterMonitorAsync will behave the same
as qemuDomainObjEnterMonitor in this case.)

This bug was encountered with a libvirt client that repeatedly queries
the disk mirroring block job info during a migration. If one of these
queries occurs just as the Perform migration cookie is baked, libvirt
crashes.

Relevant logs are as follows:

    6701: warning : qemuDomainObjEnterMonitorInternal:1544 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous
[1] 6701: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block","id":"libvirt-629"}
[2] 6699: info : qemuMonitorIOWrite:503 : QEMU_MONITOR_IO_WRITE: mon=0x7fefdc004700 buf={"execute":"query-block","id":"libvirt-629"}
[3] 6704: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block-jobs","id":"libvirt-630"}
[4] 6699: info : qemuMonitorJSONIOProcessLine:203 : QEMU_MONITOR_RECV_REPLY: mon=0x7fefdc004700 reply={"return": [...], "id": "libvirt-629"}
    6699: error : qemuMonitorJSONIOProcessLine:211 : internal error: Unexpected JSON reply '{"return": [...], "id": "libvirt-629"}'

At [1] qemuMonitorBlockStatsUpdateCapacity sends its request, then waits
on mon->notify. At [2] the request is written out to the monitor socket.
At [3] qemuMonitorBlockJobInfo sends its request, and also waits on
mon->notify. The reply from the first request is received at [4].
However, qemuMonitorJSONIOProcessLine is not expecting this reply since
the second request hadn't completed sending. The reply is dropped and an
error is returned.

qemuMonitorIO signals mon->notify twice during its error handling,
waking up both of the threads waiting on it. One of them clears mon->msg
as it exits qemuMonitorSend; the other crashes:

  qemuMonitorSend (mon=0x7fefdc004700, msg=<value optimized out>) at qemu/qemu_monitor.c:975
  975         while (!mon->msg->finished) {
  (gdb) print mon->msg
  $1 = (qemuMonitorMessagePtr) 0x0

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
2015-04-08 10:30:17 +02:00
Michael Chapman
e5d729ba42 qemu: fix race between disk mirror fail and cancel
If a VM migration is aborted, a disk mirror may be failed by QEMU before
libvirt has a chance to cancel it. The disk->mirrorState remains at
_ABORT in this case, and this breaks subsequent mirrorings of that disk.

We should instead check the mirrorState directly and transition to _NONE
if it is already aborted. Do the check *after* aborting the block job in
QEMU to avoid a race.

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
2015-04-08 09:45:47 +02:00
Michael Chapman
77ddd0bba2 qemu: fix error propagation in qemuMigrationBegin
If virCloseCallbacksSet fails, qemuMigrationBegin must return NULL to
indicate an error occurred.

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
2015-04-08 09:45:47 +02:00
Shanzhi Yu
ffe3d3e886 conf: Rename virDomainHasDiskMirror and detect block jobs properly
virDomainHasDiskMirror() currently detects only jobs that add the mirror
elements. Since some operations, like migration, are interlocked by
existing block jobs on the given domain, the check needs to be extended
to cover regular block jobs too.

This patch renames virDomainHasDiskMirror to virDomainHasDiskBlockjob
and adds an argument that allows callers to select whether it returns
true only for block copy jobs, as those interlock making the domain
persistent.

The other two uses trigger on any block job type.

Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
2015-04-02 10:37:47 +02:00
Peter Krempa
c5710066e8 qemu: migration: Forbid migration with memory modules lacking info
Make sure that libvirt has all the vital information needed to reliably
represent the configuration of the guest's memory devices in case of a
migration.

This patch forbids migration if the required slot number and module
base address are not present (i.e. they failed to be loaded from qemu
via the monitor).
2015-03-23 14:25:15 +01:00
Michael Chapman
a1b1805155 qemu: skip precreation of network disks
Commit cf54c60699833b3791a5d0eb3eb5a1948c267f6b introduced the ability
to create missing storage volumes during migration. For network disks,
however, we may not necessarily be able to detect whether they already
exist -- there is no straightforward way to map the disk to a storage
volume, and even if there were, it's possible that no configured storage
pool actually contains the disk.

It is better to assume the network disk exists in this case, rather than
aborting the migration completely. If the volume really is missing, QEMU
will generate an appropriate error later in the migration.

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
2015-03-23 10:25:20 +01:00
Eric Blake
e2660cb8a6 qemu: track 'cancelling' migration state
In qemu 2.3, the migration status will include 'cancelling' in the
window between when an asynchronous cancel has been requested and
when the migration is actually halted.  Previously, qemu hid this
state and reported 'active'.  Libvirt manages the sequence okay
even when the string is unrecognized (that is, it will report an
unknown state:

Migration: [ 69 %]^Cerror: internal error: unexpected migration status in cancelling.

but the migration is still cancelled), but recognizing the string
makes for a smoother user experience.

* src/qemu/qemu_monitor.h
(QEMU_MONITOR_MIGRATION_STATUS_CANCELLING): Add enum.
* src/qemu/qemu_monitor.c (qemuMonitorMigrationStatus): Map it.
* src/qemu/qemu_migration.c (qemuMigrationUpdateJobStatus): Adjust
clients.
* src/qemu/qemu_monitor_json.c
(qemuMonitorJSONGetMigrationStatusReply): Likewise.

Signed-off-by: Eric Blake <eblake@redhat.com>
2015-03-18 14:59:34 -06:00
Laine Stump
451547a422 util: clean up #includes of virnetdevopenvswitch.h
virnetdevopenvswitch.h declares a few functions that can be called to
add ports to and remove them from OVS bridges, and retrieve the
migration data for a port. It does not contain any data definitions
that are used by domain_conf.h. But for some reason domain_conf.h was
#including virnetdevopenvswitch.h; the files that actually need it
should be directly #including it instead. This adds a few lines to the
project, but saves all the files that don't need it from the extra
computing, and makes the dependencies more clear cut.
2015-03-18 14:43:47 -04:00
Pavel Hrdina
cf521fc8ba memtune: change the way how we store unlimited value
There was a mess in the way we store the unlimited value for memory
limits and in how we handled values provided by the user.  Internally
there were two possible ways to store the unlimited value: as 0 or as
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED.  Because we chose to store memory
limits as unsigned long long, we cannot use -1 to represent unlimited.
It's much easier for us to say that everything greater than
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED means unlimited and leave 0 as a valid
value, despite it making no sense to set a limit to 0.

Remove the unnecessary function virCompareLimitUlong.  The test update
is there to prevent 0 from being misused as unlimited in the future.
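
As a sketch, the resulting convention boils down to a single comparison;
this assumes the marker value itself also counts as unlimited, and only
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED is taken from libvirt's public headers:

    #include <libvirt/libvirt.h>
    #include <stdbool.h>

    /* values at or above the marker are treated as "no limit";
     * 0 stays a valid (if pointless) limit instead of meaning unlimited */
    static bool memLimitIsSet(unsigned long long value)
    {
        return value < VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
    }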

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1146539

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
2015-03-06 11:52:24 +01:00
Michal Privoznik
37cf163ab2 virQEMUCapsCacheLookupCopy: Pass machine type
It will come in handy in the near future when we filter some
capabilities based on it.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-02-20 13:27:59 +01:00
Michal Privoznik
80c5f10e86 qemuMigrationDriveMirror: Listen to events
https://bugzilla.redhat.com/show_bug.cgi?id=1179678

When migrating with storage, libvirt iterates over domain disks and
instructs qemu to migrate the ones we are interested in (shared, RO and
source-less disks are skipped). The disks are migrated in series. No
new disk is transferred until the previous one has been quiesced.
This is checked on the qemu monitor via the 'query-block-jobs' command.
If the disk has been quiesced, it has practically gone from copying its
content to the mirroring state, where all disk writes are mirrored to
the other side of the migration too. Having said that, there's one
inherent error in the design. The monitor command we use reports only
active jobs. So if the job fails for whatever reason, we will not see
it in the command output anymore. And this can happen fairly easily:
just try to migrate a domain with storage. If the storage migration
fails (e.g. due to ENOSPC on the destination), we resume the guest on
the destination and let it run on a partly copied disk.

The proper fix is what even the comment in the code says: listen for
qemu events instead of polling. If the storage migration changes state,
an event is emitted and we can act accordingly: either consider the
disk copied and continue the process, or consider the disk mangled and
abort the migration.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-02-19 14:12:38 +01:00
Luyao Huang
45853b5289 qemu: fix crash when migrateuri has no scheme
https://bugzilla.redhat.com/show_bug.cgi?id=1191355

When we attempt to migrate a vm with a migrateuri that has no scheme:

 # virsh migrate test4 --live qemu+ssh://lhuang/system --migrateuri 127.0.0.1

target libvirtd will crash because uri->scheme is NULL in
qemuMigrationPrepareDirect on this line:

     if (STRNEQ(uri->scheme, "tcp") &&

Add a NULL check before this line. Also fix a similar bug in
doNativeMigrate, which could only happen when the destination libvirtd
returned an incorrect URI.
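
The shape of the fix as a standalone sketch, using a plain struct instead
of libvirt's virURI; the accepted schemes listed here are an assumption:

    #include <stdio.h>
    #include <string.h>

    struct uri { const char *scheme; const char *server; };

    static int checkMigrateURI(const struct uri *uri, const char *uristr)
    {
        /* "127.0.0.1" parses into a URI whose scheme is NULL, so compare
         * only after making sure there is a scheme at all */
        if (uri->scheme == NULL) {
            fprintf(stderr, "missing scheme in migration URI: %s\n", uristr);
            return -1;
        }
        if (strcmp(uri->scheme, "tcp") != 0 &&
            strcmp(uri->scheme, "rdma") != 0) {
            fprintf(stderr, "unsupported scheme %s in migration URI %s\n",
                    uri->scheme, uristr);
            return -1;
        }
        return 0;
    }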

Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2015-02-11 13:20:30 +01:00