685 Commits

Author SHA1 Message Date
Jiri Denemark
d814c70b3b qemu: Use domain condition for asyncAbort
To avoid polling for asyncAbort flag changes.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:15:11 +02:00
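
For illustration, replacing a poll loop with a condition looks roughly like this (a generic pthreads sketch, not libvirt's actual code; all names are made up):

    #include <pthread.h>
    #include <stdbool.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static bool asyncAbort;

    /* Before: poll the flag, waking up periodically for no reason. */
    static void waitPolling(void)
    {
        while (!asyncAbort)
            usleep(50 * 1000);
    }

    /* After: sleep on a condition; the setter wakes us exactly once. */
    static void waitCondition(void)
    {
        pthread_mutex_lock(&lock);
        while (!asyncAbort)
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
    }

    static void setAbort(void)
    {
        pthread_mutex_lock(&lock);
        asyncAbort = true;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }
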
Jiri Denemark
e8f263e0d0 qemu: Cancel disk mirrors after libvirtd restart
When libvirtd is restarted during migration, we properly cancel the
ongoing migration (unless it managed to almost finish before the
restart). But if we were also migrating storage using NBD, we would
completely forget about the running disk mirrors.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:15:11 +02:00
Jiri Denemark
40cd0290dc qemu: Make qemuMigrationCancelDriveMirror usable without async job
We don't have an async job when reconnecting to existing domains after
libvirtd restart.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:15:10 +02:00
Jiri Denemark
a9ba39a1a7 qemu: Abort migration early if disk mirror failed
Abort migration as soon as we detect that some of the disk mirrors
failed. There's no sense in trying to finish memory migration first.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:15:10 +02:00
Jiri Denemark
cebb110f73 qemu: Cancel storage migration in parallel
Instead of cancelling disk mirrors sequentially, let's just call
block-job-cancel for all migrating disks and then wait until all
disappear.

In case we cancel disk mirrors at the end of a successful migration, we
also need to check that all block jobs completed successfully. Otherwise
we have to abort the migration.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:15:10 +02:00
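
Conceptually the change is from cancel-and-wait per disk to cancel-all-then-wait. A hedged sketch with hypothetical helpers standing in for the real monitor calls:

    int blockJobCancel(int disk);      /* issues block-job-cancel */
    bool blockJobGone(int disk);       /* true once the job disappeared */
    bool blockJobFailed(int disk);     /* true if the job ended in error */

    /* Cancel all mirrors first, then wait for all of them to disappear;
     * when ending a successful migration, any failed job aborts it. */
    static int cancelAllMirrors(int ndisks, bool checkErrors)
    {
        int ret = 0;

        for (int i = 0; i < ndisks; i++)
            if (blockJobCancel(i) < 0)
                ret = -1;

        for (int i = 0; i < ndisks; i++) {
            while (!blockJobGone(i))
                ;  /* the real code waits on a domain condition here */
            if (checkErrors && blockJobFailed(i))
                ret = -1;
        }
        return ret;
    }
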
Jiri Denemark
4172b96a3e qemu: Use domain condition for synchronous block jobs
By switching block jobs to use domain conditions, we can drop some
pretty complicated code in NBD storage migration.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:15:10 +02:00
Jiri Denemark
39564891f8 qemu: Properly report failed migration
Because we are polling, we may detect some errors only after we have
asked QEMU for the migration status, even though they occurred earlier.
If this happens and QEMU reports the migration completed successfully,
we would happily report that the migration succeeded even though we
should have cancelled it because of the other error.

In practice it is not a big issue now, but it will become a much bigger
issue once the check for storage migration status is moved inside the
loop in qemuMigrationWaitForCompletion.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-06-19 15:13:16 +02:00
Pavel Boldin
93a19e283e qemu: migration: selective block device migration
https://bugzilla.redhat.com/show_bug.cgi?id=1203032

Implement a `migrate_disks' parameter for the QEMU driver. This multi-
value parameter can be used to explicitly specify which block devices
are to be migrated using the NBD server. Tunnelled migration using NBD
remains to be done.

Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-06-18 16:46:09 +02:00
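
From the public API, the new parameter is a multi-value typed string parameter on migration; a minimal usage sketch (the disk targets and function name are examples, error handling trimmed):

    #include <libvirt/libvirt.h>

    /* Migrate only vda and vdb over NBD; other disks are left alone. */
    static int migrateSelectedDisks(virDomainPtr dom, virConnectPtr dconn)
    {
        virTypedParameterPtr params = NULL;
        int nparams = 0, maxparams = 0;

        virTypedParamsAddString(&params, &nparams, &maxparams,
                                VIR_MIGRATE_PARAM_MIGRATE_DISKS, "vda");
        virTypedParamsAddString(&params, &nparams, &maxparams,
                                VIR_MIGRATE_PARAM_MIGRATE_DISKS, "vdb");

        virDomainPtr ddom = virDomainMigrate3(dom, dconn, params, nparams,
                                              VIR_MIGRATE_LIVE |
                                              VIR_MIGRATE_NON_SHARED_DISK);
        virTypedParamsFree(params, nparams);
        if (!ddom)
            return -1;
        virDomainFree(ddom);
        return 0;
    }
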
Michal Privoznik
cb7297c150 qemuMigrationDriveMirror: Force raw format for NBD
When playing with disk migration lately, I've noticed this warning in
domain logs:

WARNING: Image format was not specified for 'nbd://masina:49153/drive-virtio-disk0' and probing guessed raw.
         Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
         Specify the 'raw' format explicitly to remove the restrictions.

So I started digging into qemu source code to see what has triggered
the warning. I'd expect qemu to know the formats of the guest's disks
since we tell it on the command line. This led me to qmp_drive_mirror() where
the following can be found:

    if (!has_format) {
        format = mode == NEW_IMAGE_MODE_EXISTING ? NULL : bs->drv->format_name;
    }

So, format is automatically initialized from the disk iff mode !=
"existing". Unfortunately, in migration we are tied to using this mode
(NBD doesn't support creating new images). Therefore the only way to
avoid the warning is to pass the format explicitly. The discussion on
the mailing list [1] resulted in code that always forces "raw" format
for the NBD export.

[1] https://www.redhat.com/archives/libvir-list/2015-June/msg00153.html
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
2015-06-18 16:46:09 +02:00
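
With the fix, the drive-mirror command libvirt issues for NBD migration carries an explicit format; the QMP message looks roughly like this (an illustrative shape, not captured output):

    {"execute": "drive-mirror",
     "arguments": {"device": "drive-virtio-disk0",
                   "target": "nbd://masina:49153/drive-virtio-disk0",
                   "format": "raw",
                   "mode": "existing",
                   "sync": "full"}}
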
Michal Privoznik
9c5efd1afd qemuMigrationBeginPhase: Fix function header indentation
This function returns a string (domain XML). Since d3ce7363, when it
was first introduced, it has been indented incorrectly:

static char
*qemuMigrationBeginPhase(..)

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-06-18 16:46:09 +02:00
Daniel P. Berrange
d587704cc7 rpc: allow selection of TCP address family
By default, getaddrinfo() will return addresses for both
IPv4 and IPv6 if both protocols are enabled, and so the
RPC code will listen/connect to both protocols too. There
may be cases where it is desirable to restrict this to
just one of the two protocols, so add an 'int family'
parameter to all the TCP related APIs.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
2015-06-11 12:11:18 +01:00
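
The effect of such a family parameter maps directly onto the getaddrinfo() hints; a minimal sketch (generic POSIX code, not the libvirt RPC internals):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <stddef.h>

    /* family is AF_INET, AF_INET6, or AF_UNSPEC for "both". */
    static struct addrinfo *resolve(const char *node, const char *service,
                                    int family)
    {
        struct addrinfo hints = { 0 };
        struct addrinfo *res = NULL;

        hints.ai_family = family;        /* restrict to one protocol if asked */
        hints.ai_socktype = SOCK_STREAM;
        hints.ai_flags = AI_PASSIVE;     /* suitable for listening sockets */

        if (getaddrinfo(node, service, &hints, &res) != 0)
            return NULL;
        return res;                      /* caller frees with freeaddrinfo() */
    }
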
Ján Tomko
12b949dfb2 maint: remove incorrect apostrophes from 'its' 2015-06-04 10:01:42 +02:00
Jiri Denemark
82cffb58a1 Use virDomainDiskByName where appropriate
Most virDomainDiskIndexByName callers do not care about the index; what
they really want is a disk def pointer.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-05-21 14:35:02 +02:00
Jiri Denemark
a692277873 qemu: Don't give up on first error in qemuMigrationCancelDriveMirror
When cancelling drive mirror, always try to do that for all disks even
if it fails for some of them. Report the first error we saw.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-05-15 08:05:31 +02:00
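
The resulting pattern is the usual "save the first error, keep going" loop; a hedged sketch using libvirt's public error helpers (the disk iteration and helper are hypothetical):

    #include <libvirt/virterror.h>

    int cancelDiskMirror(int disk);             /* hypothetical helper */

    static int cancelAll(int ndisks)
    {
        virErrorPtr err = NULL;
        int ret = 0;

        for (int i = 0; i < ndisks; i++) {
            if (cancelDiskMirror(i) < 0) {
                if (!err)
                    err = virSaveLastError();   /* remember the first failure */
                ret = -1;                       /* but keep cancelling the rest */
            }
        }

        if (err) {
            virSetError(err);                   /* re-report the first error */
            virFreeError(err);
        }
        return ret;
    }
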
Jiri Denemark
5139924b8d qemu: Keep track of what disks are being migrated
Instead of redoing the same filtering over and over every time we need
to walk through all disks which are being migrated.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-05-15 08:05:31 +02:00
Jiri Denemark
46a7a49535 Move QEMU-only fields from virDomainDiskDef into privateData
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-05-15 08:05:31 +02:00
Jiri Denemark
078717e151 Rename virDomainHasBlockjob as qemuDomainHasBlockjob
And move it to qemu_domain.[ch] because this API is QEMU-only.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-05-15 08:05:26 +02:00
zhang bo
7eb5b4bf6f qemuMigrationPrepareAny: Drop useless variable @now
As of eeb008dbfcf31 the variable is not used anymore. Drop it.

Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-05-13 16:50:20 +02:00
Jiri Denemark
fc3601a308 qemu: Properly rename persistent def after migration
When migrating a domain while changing its name and using
VIR_MIGRATE_PERSIST_DEST flag, libvirt would fail to properly change the
name in the persistent definition. The inconsistency results in weird
behavior when dumping domain XML, destroying the domain, restarting
libvirtd and likely in several other situations.

Since the new name is already stored in vm->def->name, we just need to
make sure the persistent definition uses this new name too.

https://bugzilla.redhat.com/show_bug.cgi?id=1076354

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-05-04 22:59:51 +02:00
Jiri Denemark
b45ec56f58 qemu: Forbid unsupported parameters for tunnelled migration
Neither the migrate URI nor the listen address makes any sense for
tunnelled migration.

https://bugzilla.redhat.com/show_bug.cgi?id=1066375
https://bugzilla.redhat.com/show_bug.cgi?id=1073233

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-05-04 15:06:33 +02:00
Michael Chapman
99725f946c qemu: migration: use sync block job helpers
In qemuMigrationDriveMirror we can start all disk mirrors in parallel.
We wait until they are all ready, or one of them aborts.

In qemuMigrationCancelDriveMirror, we wait until all mirrors are
properly stopped. This is necessary to ensure that the destination VM is
fully in sync with the (paused) source VM.

If a drive mirror cannot be cancelled, then the destination is not in a
consistent state. In this case it is not safe to continue with the
migration.

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
2015-04-29 13:11:42 +02:00
Jiri Denemark
aa9f139599 migration: Usable time statistics without requiring NTP
virDomainGetJobStats is able to report statistics of a completed
migration, but to get usable downtime and total time statistics both
hosts have to keep their time synchronized. To provide at least some
estimate of the times even when NTP daemons are not running on both
hosts, we can simply ignore the time needed to transfer the migration
cookie to the destination host. The result will still be inaccurate, but
a bit more predictable. The total/down time will just be at least what
we report.

https://bugzilla.redhat.com/show_bug.cgi?id=1213434
2015-04-24 15:02:00 +02:00
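
The statistics in question are read back after the migration via the public API; a minimal sketch (error handling trimmed):

    #include <libvirt/libvirt.h>
    #include <stdio.h>

    static void printCompletedStats(virDomainPtr dom)
    {
        virTypedParameterPtr params = NULL;
        int nparams = 0, type;

        if (virDomainGetJobStats(dom, &type, &params, &nparams,
                                 VIR_DOMAIN_JOB_STATS_COMPLETED) < 0)
            return;

        for (int i = 0; i < nparams; i++) {
            /* e.g. VIR_DOMAIN_JOB_DOWNTIME ("downtime") and
             * VIR_DOMAIN_JOB_TIME_ELAPSED ("time_elapsed"), in ms */
            if (params[i].type == VIR_TYPED_PARAM_ULLONG)
                printf("%s: %llu\n", params[i].field, params[i].value.ul);
        }
        virTypedParamsFree(params, nparams);
    }
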
Michal Privoznik
79d14a9930 Introduce virDomainObjEndAPI
This is basically turning qemuDomObjEndAPI into a more general
function. Other drivers which get a reference to domain objects may
benefit from this function too.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-04-24 13:22:45 +02:00
Peter Krempa
bd57977391 qemu: migration: Refactor hostdev validation in migration check
The hostdev check can error out right away.
2015-04-22 14:05:50 +02:00
Cole Robinson
835cf84b7e domain: conf: Drop expectedVirtTypes
This needs to be specified in way too many places for a simple
validation check. The ostype/arch/virttype validation checks later in
DomainDefParseXML should catch most of the cases that this was covering.
2015-04-20 16:43:43 -04:00
Huanle Han
c61ded8a7d qemu: fix index error when clean up vport profile
1. 'last_good_net' indicates the index of the last successfully
configured net, so def->nets[last_good_net] should also be cleaned up if
an error occurs.

2. If an error occurs in 'virNetDevMacVLanVPortProfileRegisterCallback'
(second 'goto err_exit' in the loop), we should also do the
'virNetDevVPortProfileDisassociate' cleanup for the
'virNetDevVPortProfileAssociate' (first code block in the loop). So we
should consider the net successfully configured once the first code
block in the loop finishes.

Signed-off-by: Huanle Han <hanxueluo@gmail.com>
2015-04-14 14:49:15 +02:00
Peter Krempa
cfc0a3d4ce qemu: blockjob: Separate qemuDomainBlockJobAbort from qemuDomainBlockJobImpl
Sacrifice a few lines of code in favor of the code being more readable.
2015-04-14 10:00:56 +02:00
Michal Privoznik
65a88572ad qemuMigrationPrecreateStorage: Fix debug message
When pre-creating storage for domains, we need to find corresponding
disk in the XML on the destination (domain XML may differ there, e.g.
disk is accessible under different path). For better debugging, I'm
printing all info I received on a disk. But there was a typo when
printing the disk capacity: "%lluu" instead of "%llu".

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-04-13 11:40:57 +02:00
Xing Lin
522e81cbb5 qemu_migration.c: sleep first before checking for migration status.
The problem with the previous implementation is that even when
qemuMigrationUpdateJobStatus() detects that a migration job has
completed, it still sleeps for 50 ms, which is unnecessary and only adds
to the VM pause time.

Signed-off-by: Xing Lin <xinglin@cs.utah.edu>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-04-13 09:52:28 +02:00
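
The reorder simply moves the sleep ahead of the check inside the loop; schematically (hypothetical helper names, not the real code):

    #include <unistd.h>

    int updateJobStatus(void);               /* hypothetical status poll */
    enum { JOB_MIGRATING, JOB_COMPLETED };

    /* Before: check, then sleep -- a job already detected as completed
     * still pays the extra 50 ms before the loop exits. */
    static void waitOld(void)
    {
        int status;
        do {
            status = updateJobStatus();
            usleep(50 * 1000);
        } while (status == JOB_MIGRATING);
    }

    /* After: sleep first, then check -- completion is acted upon
     * immediately after it is detected. */
    static void waitNew(void)
    {
        do {
            usleep(50 * 1000);
        } while (updateJobStatus() == JOB_MIGRATING);
    }
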
Michael Chapman
72df8314f0 qemu_migrate: use nested job when adding NBD to cookie
qemuMigrationCookieAddNBD is usually called from within an async
MIGRATION_OUT or MIGRATION_IN job, so it needs to start a nested job.

(The one exception is during the Begin phase when change protection
isn't enabled, but qemuDomainObjEnterMonitorAsync will behave the same
as qemuDomainObjEnterMonitor in this case.)

This bug was encountered with a libvirt client that repeatedly queries
the disk mirroring block job info during a migration. If one of these
queries occurs just as the Perform migration cookie is baked, libvirt
crashes.

Relevant logs are as follows:

    6701: warning : qemuDomainObjEnterMonitorInternal:1544 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous
[1] 6701: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block","id":"libvirt-629"}
[2] 6699: info : qemuMonitorIOWrite:503 : QEMU_MONITOR_IO_WRITE: mon=0x7fefdc004700 buf={"execute":"query-block","id":"libvirt-629"}
[3] 6704: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block-jobs","id":"libvirt-630"}
[4] 6699: info : qemuMonitorJSONIOProcessLine:203 : QEMU_MONITOR_RECV_REPLY: mon=0x7fefdc004700 reply={"return": [...], "id": "libvirt-629"}
    6699: error : qemuMonitorJSONIOProcessLine:211 : internal error: Unexpected JSON reply '{"return": [...], "id": "libvirt-629"}'

At [1] qemuMonitorBlockStatsUpdateCapacity sends its request, then waits
on mon->notify. At [2] the request is written out to the monitor socket.
At [3] qemuMonitorBlockJobInfo sends its request, and also waits on
mon->notify. The reply from the first request is received at [4].
However, qemuMonitorJSONIOProcessLine is not expecting this reply since
the second request hadn't completed sending. The reply is dropped and an
error is returned.

qemuMonitorIO signals mon->notify twice during its error handling,
waking up both of the threads waiting on it. One of them clears mon->msg
as it exits qemuMonitorSend; the other crashes:

  qemuMonitorSend (mon=0x7fefdc004700, msg=<value optimized out>) at qemu/qemu_monitor.c:975
  975         while (!mon->msg->finished) {
  (gdb) print mon->msg
  $1 = (qemuMonitorMessagePtr) 0x0

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
2015-04-08 10:30:17 +02:00
Michael Chapman
e5d729ba42 qemu: fix race between disk mirror fail and cancel
If a VM migration is aborted, a disk mirror may be failed by QEMU before
libvirt has a chance to cancel it. The disk->mirrorState remains at
_ABORT in this case, and this breaks subsequent mirrorings of that disk.

We should instead check the mirrorState directly and transition to _NONE
if it is already aborted. Do the check *after* aborting the block job in
QEMU to avoid a race.

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
2015-04-08 09:45:47 +02:00
Michael Chapman
77ddd0bba2 qemu: fix error propagation in qemuMigrationBegin
If virCloseCallbacksSet fails, qemuMigrationBegin must return NULL to
indicate an error occurred.

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
2015-04-08 09:45:47 +02:00
Shanzhi Yu
ffe3d3e886 conf: Rename virDomainHasDiskMirror and detect block jobs properly
virDomainHasDiskMirror() currently detects only jobs that add the
mirror elements. Since some operations like migration are interlocked by
existing block jobs on the given domain, the check needs to be extended
to cover regular jobs too.

This patch renames virDomainHasDiskMirror to virDomainHasDiskBlockjob
and adds an argument that allows selecting whether it returns true only
for block copy jobs, as those interlock making the domain persistent.

The other two uses trigger on any block job type.

Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
2015-04-02 10:37:47 +02:00
Peter Krempa
c5710066e8 qemu: migration: Forbid migration with memory modules lacking info
Make sure that libvirt has all the vital information needed to reliably
represent the configuration of the guest's memory devices in case of a
migration.

This patch forbids migration in case the required slot number and module
base address are not present (i.e. they failed to be loaded from qemu
via the monitor).
2015-03-23 14:25:15 +01:00
Michael Chapman
a1b1805155 qemu: skip precreation of network disks
Commit cf54c60699833b3791a5d0eb3eb5a1948c267f6b introduced the ability
to create missing storage volumes during migration. For network disks,
however, we may not necessarily be able to detect whether they already
exist -- there is no straightforward way to map the disk to a storage
volume, and even if there were, it's possible that no configured storage
pool actually contains the disk.

It is better to assume the network disk exists in this case, rather than
aborting the migration completely. If the volume really is missing, QEMU
will generate an appropriate error later in the migration.

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
2015-03-23 10:25:20 +01:00
Eric Blake
e2660cb8a6 qemu: track 'cancelling' migration state
In qemu 2.3, the migration status will include 'cancelling' in the
window between when an asynchronous cancel has been requested and
when the migration is actually halted.  Previously, qemu hid this
state and reported 'active'.  Libvirt manages the sequence okay
even when the string is unrecognized (that is, it will report an
unknown state:

Migration: [ 69 %]^Cerror: internal error: unexpected migration status in cancelling.

but the migration is still cancelled), but recognizing the string
makes for a smoother user experience.

* src/qemu/qemu_monitor.h
(QEMU_MONITOR_MIGRATION_STATUS_CANCELLING): Add enum.
* src/qemu/qemu_monitor.c (qemuMonitorMigrationStatus): Map it.
* src/qemu/qemu_migration.c (qemuMigrationUpdateJobStatus): Adjust
clients.
* src/qemu/qemu_monitor_json.c
(qemuMonitorJSONGetMigrationStatusReply): Likewise.

Signed-off-by: Eric Blake <eblake@redhat.com>
2015-03-18 14:59:34 -06:00
Laine Stump
451547a422 util: clean up #includes of virnetdevopenvswitch.h
virnetdevopenvswitch.h declares a few functions that can be called to
add ports to and remove them from OVS bridges, and retrieve the
migration data for a port. It does not contain any data definitions
that are used by domain_conf.h. But for some reason, domain_conf.h
virnetdevopenvswitch.h should be directly #including it. This adds a
few lines to the project, but saves all the files that don't need it
from the extra computing, and makes the dependencies more clear cut.
2015-03-18 14:43:47 -04:00
Pavel Hrdina
cf521fc8ba memtune: change the way how we store unlimited value
There was a mess in the way we store the unlimited value for memory
limits and in how we handled values provided by the user.  Internally
there were two possible ways to store the unlimited value: as 0 or as
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED.  Because we chose to store memory
limits as unsigned long long, we cannot use -1 to represent unlimited.
It's much easier for us to say that everything greater than
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED means unlimited and leave 0 as a
valid value, despite that it makes no sense to set a limit to 0.

Remove the unnecessary function virCompareLimitUlong.  The test update
is there to prevent 0 from being misused as unlimited in the future.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1146539

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
2015-03-06 11:52:24 +01:00
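
Under the chosen convention, deciding whether a limit is actually set reduces to a single comparison against the public macro; a sketch (the helper name is made up):

    #include <libvirt/libvirt.h>
    #include <stdbool.h>

    /* VIR_DOMAIN_MEMORY_PARAM_UNLIMITED and anything above it is treated
     * as "no limit"; 0 stays a valid (if useless) limit. */
    static bool memLimitIsSet(unsigned long long value)
    {
        return value < VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
    }
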
Michal Privoznik
37cf163ab2 virQEMUCapsCacheLookupCopy: Pass machine type
It will come in handy in the near future when we start filtering some
capabilities based on it.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-02-20 13:27:59 +01:00
Michal Privoznik
80c5f10e86 qemuMigrationDriveMirror: Listen to events
https://bugzilla.redhat.com/show_bug.cgi?id=1179678

When migrating with storage, libvirt iterates over domain disks and
instructs qemu to migrate the ones we are interested in (shared, RO and
source-less disks are skipped). The disks are migrated in series. No
new disk is transferred until the previous one has been quiesced.
This is checked on the qemu monitor via the 'query-block-jobs' command.
If the disk has been quiesced, it has practically gone from copying its
content to the mirroring state, where all disk writes are mirrored to
the other side of the migration too. Having said that, there's one
inherent flaw in the design. The monitor command we use reports only
active jobs. So if the job fails for whatever reason, we will no longer
see it in the command output. And this can happen fairly easily: just
try to migrate a domain with storage. If the storage migration fails
(e.g. due to ENOSPC on the destination) we resume the guest on the
destination and let it run on a partly copied disk.

The proper fix is what even the comment in the code says: listen for
qemu events instead of polling. If the storage migration changes state,
an event is emitted and we can act accordingly: either consider the disk
copied and continue the process, or consider the disk mangled and abort
the migration.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-02-19 14:12:38 +01:00
Luyao Huang
45853b5289 qemu: fix crash when migrateuri has no scheme
https://bugzilla.redhat.com/show_bug.cgi?id=1191355

When we attempt to migrate a vm with a migrateuri that has no scheme:

 # virsh migrate test4 --live qemu+ssh://lhuang/system --migrateuri 127.0.0.1

target libvirtd will crash because uri->scheme is NULL in
qemuMigrationPrepareDirect on this line:

     if (STRNEQ(uri->scheme, "tcp") &&

Add a value check before this line. Also fix a similar bug in
doNativeMigrate, which could only happen if the destination libvirtd
returned an incorrect URI.

Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2015-02-11 13:20:30 +01:00
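
The nature of the fix is a plain NULL check before the scheme is dereferenced; a simplified, generic sketch (the actual error reporting and accepted schemes in libvirt differ):

    #include <stdio.h>
    #include <string.h>

    /* uri->scheme may legitimately be NULL ("127.0.0.1" parses with no
     * scheme), so check it before strcmp() dereferences it. */
    static int checkScheme(const char *scheme)
    {
        if (scheme == NULL) {
            fprintf(stderr, "missing scheme in migration URI\n");
            return -1;
        }
        if (strcmp(scheme, "tcp") != 0) {
            fprintf(stderr, "unsupported scheme '%s' in migration URI\n",
                    scheme);
            return -1;
        }
        return 0;
    }
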
Luyao Huang
1b2c9ce752 qemu: Properly report error on uuid mismatch in the migration cookie
Add the missing jump to the error label when the uuid in the
migration cookie XML does not match the uuid of the migrated
domain.

Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2015-02-05 08:14:36 +01:00
Ján Tomko
5c703ca396 Always check return value of qemuDomainObjExitMonitor
Depending on the context, either error out if the domain
has disappeared in the meantime, or just ignore the value
to allow marking the function as ATTRIBUTE_RETURN_CHECK.
2015-01-19 10:12:32 +01:00
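
The two resulting calling patterns look roughly like this (a fragment-style sketch of the internal convention; ignore_value() is libvirt's macro for deliberately discarding a checked return value):

    /* Context where the domain must still be there: */
    if (qemuDomainObjExitMonitor(driver, vm) < 0)
        goto cleanup;    /* the domain went away while we held the monitor */

    /* Context where the result genuinely does not matter: */
    ignore_value(qemuDomainObjExitMonitor(driver, vm));
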
Luyao Huang
5035279198 qemu: free priv->origname when qemuMigrationPrepareAny fails
https://bugzilla.redhat.com/show_bug.cgi?id=1181182

When we hit an error in qemuMigrationPrepareAny and go to cleanup
with rc < 0, we forget to clear priv->origname. This makes the next
migration of this VM fail, because the stale origname left in priv
generates a wrong cookie on the next migration attempt.

This patch makes sure priv->origname is NULL when the migration fails
on the target host.

Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
2015-01-15 11:32:50 +01:00
Daniel P. Berrange
0ecd685109 Give virDomainDef parser & formatter their own flags
The virDomainDefParse* and virDomainDefFormat* methods both
accept the VIR_DOMAIN_XML_* flags defined in the public API,
along with a set of other VIR_DOMAIN_XML_INTERNAL_* flags
defined in domain_conf.c.

This is seriously confusing & error prone for a number of
reasons:

 - VIR_DOMAIN_XML_SECURE, VIR_DOMAIN_XML_MIGRATABLE and
   VIR_DOMAIN_XML_UPDATE_CPU are only relevant for the
   formatting operation
 - Some of the VIR_DOMAIN_XML_INTERNAL_* flags only apply
   to parse or to format, but not both.

This patch cleanly separates out the flags. There are two distinct
sets, VIR_DOMAIN_DEF_PARSE_* and VIR_DOMAIN_DEF_FORMAT_*, that are
used by the corresponding methods. The
VIR_DOMAIN_XML_* flags received via public API calls must
be converted to the VIR_DOMAIN_DEF_FORMAT_* flags where
needed.

The various calls to virDomainDefParse which hardcoded the
use of the VIR_DOMAIN_XML_INACTIVE flag change to use the
VIR_DOMAIN_DEF_PARSE_INACTIVE flag.
2015-01-13 16:26:12 +00:00
Martin Kletzander
31354b5b32 qemu: Fix coverity issues after refcount refactoring
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
2014-12-23 05:34:05 +01:00
Martin Kletzander
540c339a25 qemu: completely rework reference counting
There is one problem that causes various errors in the daemon.  When a
domain is waiting for a job, it is unlocked while waiting on the
condition.  However, if that domain is for example transient and being
removed in another API (e.g. cancelling incoming migration), it gets
unref'd.  If the first call, the one that was waiting, fails to get the
job, it unrefs the domain object, and because that was the last
reference, the whole domain object is cleared.  However, when finishing
the call, the domain must be unlocked, but there is no way for the API
to know whether it was cleaned up or not (unless there is some ugly
temporary variable, but let's scratch that).

The root cause is that our APIs don't ref the objects they are using and
all rely on the implicit reference that the object has while it is in
the domain list.  That reference can be removed while the API is waiting
for a job.  And because each API doesn't do its own ref'ing, it results
in the ugly checking of the return value of virObjectUnref() that we
have everywhere.

This patch changes qemuDomObjFromDomain() to ref the domain (using
virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI() which
should be the only function in which the return value of
virObjectUnref() is checked.  This makes all reference counting
deterministic and makes the code a bit clearer.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
2014-12-21 10:48:56 +01:00
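
The resulting calling convention in the QEMU driver is symmetric: every lookup takes a reference, and a single helper drops both the lock and the reference (a sketch; the API name and body are hypothetical):

    static int qemuDomainSomeAPI(virDomainPtr dom)
    {
        virDomainObjPtr vm;
        int ret = -1;

        if (!(vm = qemuDomObjFromDomain(dom)))    /* refs + locks vm */
            return -1;

        /* ... job handling and the real work go here ... */
        ret = 0;

        qemuDomObjEndAPI(&vm);   /* unlocks, unrefs, and NULLs the pointer */
        return ret;
    }
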
Eric Blake
7b11f5e554 getstats: prepare monitor collection for recursion
A future patch will allow recursion into backing chains when
collecting block stats.  This patch should not change behavior,
but merely moves out the common code that will be reused once
recursion is enabled, and adds the parameter that will turn on
recursion.

* src/qemu/qemu_monitor.h (qemuMonitorGetAllBlockStatsInfo)
(qemuMonitorBlockStatsUpdateCapacity): Add recursion parameter,
although it is ignored for now.
* src/qemu/qemu_monitor.c (qemuMonitorGetAllBlockStatsInfo)
(qemuMonitorBlockStatsUpdateCapacity): Likewise.
* src/qemu/qemu_monitor_json.h
(qemuMonitorJSONGetAllBlockStatsInfo)
(qemuMonitorJSONBlockStatsUpdateCapacity): Likewise.
* src/qemu/qemu_monitor_json.c
(qemuMonitorJSONGetAllBlockStatsInfo)
(qemuMonitorJSONBlockStatsUpdateCapacity): Add parameter, and
split...
(qemuMonitorJSONGetOneBlockStatsInfo)
(qemuMonitorJSONBlockStatsUpdateCapacityOne): ...into helpers.
(qemuMonitorJSONGetBlockStatsInfo): Update caller.
* src/qemu/qemu_driver.c (qemuDomainGetStatsBlock): Update caller.
* src/qemu/qemu_migration.c (qemuMigrationCookieAddNBD): Likewise.

Signed-off-by: Eric Blake <eblake@redhat.com>
2014-12-16 16:08:04 -07:00
Michal Privoznik
cf54c60699 qemu_migration: Precreate missing storage
Based on the previous commit, we can now precreate missing volumes.
While digging the functionality out of the storage driver would be
nicer, if you've seen the code you know it's nearly impossible. So I'm
going from the other end:

1) For a given disk target, the disk path is looked up.
2) For the disk path, a storage pool is looked up, a volume XML is
constructed and then passed to virStorageVolCreateXML(), which has all
the knowledge of how to create raw images, (encrypted) qcow(2) images,
etc.

One of the advantages of this approach is that we don't have to care
about image conversion - qemu does that for us. So for instance, users
can transform qcow2 into raw on migration (if the correct XML is passed
to the migration API).

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2014-12-02 18:02:13 +01:00
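
Step 2 corresponds to the public storage API; schematically (a hedged sketch with a hard-coded example volume XML, not the real precreation code):

    #include <libvirt/libvirt.h>

    /* Given the looked-up pool and the disk's size/format, let the
     * storage driver do the heavy lifting of allocating the image. */
    static virStorageVolPtr precreateVolume(virStoragePoolPtr pool)
    {
        const char *volXML =
            "<volume>"
            "  <name>migrated-disk.qcow2</name>"
            "  <capacity unit='bytes'>10737418240</capacity>"
            "  <target><format type='qcow2'/></target>"
            "</volume>";

        return virStorageVolCreateXML(pool, volXML, 0);
    }
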
Michal Privoznik
e1466dc7fa qemu_migration: Send disk sizes to the other side
Up 'til now, users need to precreate non-shared storage on migration
themselves. This is not a very friendly requirement and we should do
something about it. In this patch, the migration cookie is extended so
that the <nbd/> section does not only contain the NBD port, but also
info on the disks being migrated. This patch sends a list of pairs:

    <disk target; disk size>

to the destination. The actual storage allocation is left for the next
commit.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2014-12-02 17:51:57 +01:00
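
The extended <nbd/> element in the cookie then carries one entry per migrated disk, along these lines (an illustrative shape only; the cookie is internal and its exact format is not part of the public XML):

    <nbd port='49153'>
      <disk target='vda' capacity='10737418240'/>
      <disk target='vdb' capacity='2147483648'/>
    </nbd>
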