I was adding a JSON test, and was shocked to find out our parser
treated the input string "1" as invalid JSON. It turns out
that YAJL specifically documents that it buffers input, and that
if the last input read could be a prefix to a longer token, then
you have to explicitly tell the parser that the buffer has ended
before that token will be processed.
It doesn't help that yajl 2 renamed the function from what it was
in yajl 1.
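A minimal sketch of the resulting call sequence (the WITH_YAJL2
conditional and the handle setup are assumptions here; yajl 1's
yajl_status_insufficient_data handling is elided):

  #include <yajl/yajl_parse.h>

  static int
  parseAll(yajl_handle hand, const unsigned char *buf, size_t len)
  {
      if (yajl_parse(hand, buf, len) != yajl_status_ok)
          return -1;
  #ifdef WITH_YAJL2
      /* yajl 2: explicitly finish, so a trailing token like "1"
       * buffered as a possible prefix is finally processed */
      if (yajl_complete_parse(hand) != yajl_status_ok)
  #else
      /* the same function under its yajl 1 name */
      if (yajl_parse_complete(hand) != yajl_status_ok)
  #endif
          return -1;
      return 0;
  }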
* src/util/virjson.c (virJSONValueFromString): Complete parse, in
case buffer ends in possible token prefix.
* tests/jsontest.c (mymain): Expose the problem.
Signed-off-by: Eric Blake <eblake@redhat.com>
When a block job failed while being watched using the "--wait"
option of blockcopy, virsh would rather unhelpfully report:
$ virsh blockcopy vm hdc /tmp/raw.img --granularity 4096 --verbose --wait
Now in mirroring phase
Add a special case for when the block job vanishes while we are
waiting for it to finish, to improve the message:
$ virsh blockcopy vm hdc /tmp/raw.img --granularity 8192 --verbose --wait
error: Block Copy unexpectedly failed
There's a small formatting problem in the function. One line is
not correctly indented. Fix this.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This capability specifies that the "virt" machine type on ARM has a PCI controller. It is enabled when the QEMU version is at least 2.3.0.
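A hedged sketch of how such a version-gated capability is typically
set (QEMU_CAPS_OBJECT_GPEX as the capability name is an assumption):

  /* "virt" machines gained a PCI controller in QEMU 2.3.0 */
  if (qemuCaps->version >= 2003000)
      virQEMUCapsSet(qemuCaps, QEMU_CAPS_OBJECT_GPEX);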
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
If vm->def->cputune.nvcpupin is 0 in libxlDomainSetVcpuAffinities (as
seems to be the case on arm), then the VIR_FREE after the cleanup:
label would be operating on an uninitialised pointer in map.map.
Fix this by using libxl_bitmap_init and libxl_bitmap_dispose in the
appropriate places (like VIR_FREE, libxl_bitmap_dispose is also
idempotent, so there is no double free on exit from the loop).
libxl_bitmap_dispose is slightly preferable since it also sets
map.size back to 0, avoiding a potential source of confusion.
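A minimal sketch of the resulting pattern (the loop body is elided):

  libxl_bitmap map;
  size_t i;

  libxl_bitmap_init(&map);          /* map.map == NULL, map.size == 0 */
  for (i = 0; i < vm->def->cputune.nvcpupin; i++) {
      /* ... allocate, fill and apply the bitmap ... */
      libxl_bitmap_dispose(&map);   /* idempotent; resets map.size to 0 */
  }
  libxl_bitmap_dispose(&map);       /* safe even if the loop never ran */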
This fixes the crashes we've been seeing in the Xen automated tests on
ARM.
I had a glance at the handful of other users of libxl_bitmap and none
of them looked to have a similar issue.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
When a connection to the destination host during a p2p migration drops,
we know we will have to cancel the migration; it doesn't make sense to
waste resources by trying to finish the migration. We already do so
after sending the "migrate" command to QEMU and we should do it while
waiting for drive mirrors to become ready too.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Checking the status of all parts of a migration and aborting it when
something fails is a complex task which makes the waiting loop hard to
read.
This patch moves all the checks into a separate function similarly to
what was done for drive mirror loops.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Instead of passing the current job name to several functions which
already know what the current job is, we can generate the name where we
actually need to use it.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Once we start waiting for migration events instead of polling
query-migrate, priv->job.current will not be regularly updated anymore
because we will get the current status directly from the events. Thus
virDomainGetJob{Info,Stats} will have to query QEMU, but they can't just
blindly update the priv->job.current structure. This patch introduces
qemuMigrationFetchJobStatus which just fills in a caller supplied
structure and makes qemuMigrationUpdateJobStatus a tiny wrapper around
it.
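A hedged sketch of the split (the exact signatures are assumptions
based on the description above):

  /* fills only the caller-supplied structure, with no side effects */
  if (qemuMigrationFetchJobStatus(driver, vm, asyncJob, &jobInfo) < 0)
      return -1;

  /* the old entry point becomes a tiny wrapper around it */
  static int
  qemuMigrationUpdateJobStatus(virQEMUDriverPtr driver,
                               virDomainObjPtr vm,
                               qemuDomainAsyncJob asyncJob)
  {
      qemuDomainObjPrivatePtr priv = vm->privateData;

      return qemuMigrationFetchJobStatus(driver, vm, asyncJob,
                                         priv->job.current);
  }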
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Move common parts of qemuDomainGetJobInfo and qemuDomainGetJobStats into
a separate API (qemuDomainGetJobStatsInternal).
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The QEMU_CAPS_SEAMLESS_MIGRATION capability says QEMU supports the
SPICE_MIGRATE_COMPLETED event. Thus we can just drop all code which
polls query-spice and replace it with waiting for the event.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When libvirtd is restarted during migration, we properly cancel the
ongoing migration (unless it managed to almost finish before the
restart). But if we were also migrating storage using NBD, we would
completely forget about the running disk mirrors.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
"query-block-jobs" QMP command returns all running block jobs at once,
while qemuMonitorBlockJobInfo would only report one. This is not very
nice in case we need to check several block jobs. This patch refactors
the monitor code to always parse all block jobs and store them in a
hash.
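A hedged sketch of the resulting usage (the function and struct member
names are believed to be the ones this series introduces):

  virHashTablePtr jobs;
  qemuMonitorBlockJobInfoPtr info;

  /* one QMP round trip, then cheap per-disk lookups by device alias */
  if (!(jobs = qemuMonitorGetAllBlockJobInfo(priv->mon)))
      return -1;

  if ((info = virHashLookup(jobs, diskAlias)) &&
      info->end > 0)
      progress = 100 * info->cur / info->end;

  virHashFree(jobs);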
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
So that they can format private data (e.g., disk private data) stored
elsewhere in the domain object.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This patch reverts commit 76c61cdca2.
VIR_DOMAIN_DISK_MIRROR_STATE_ABORT says we asked for a block job to be
aborted rather than saying it was aborted. Let's just use
VIR_DOMAIN_DISK_MIRROR_STATE_NONE consistently whenever a block job
finishes since no caller depends on VIR_DOMAIN_DISK_MIRROR_STATE_ABORT
(anymore) to check whether a block job failed or was cancelled.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Abort migration as soon as we detect that some of the disk mirrors
failed. There's no sense in trying to finish memory migration first.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Instead of cancelling disk mirrors sequentially, let's just call
block-job-cancel for all migrating disks and then wait until all
disappear.
In case we cancel disk mirrors at the end of a successful migration, we
also need to check that all block jobs completed successfully. Otherwise
we have to abort the migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
By switching block jobs to use domain conditions, we can drop some
pretty complicated code in NBD storage migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Because we are polling, we may detect some errors only after we have
asked QEMU for the migration status even though they occurred earlier.
If this happens and
QEMU reports migration completed successfully, we would happily report
the migration succeeded even though we should have cancelled it because
of the other error.
In practice it is not a big issue now, but it will become a much bigger
issue once the check for storage migration status is moved inside the
loop in qemuMigrationWaitForCompletion.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The wrapper is useful for calling qemuBlockJobEventProcess with the
event details stored in the disk's privateData, which is the most likely
usage of qemuBlockJobEventProcess.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Complex jobs, such as migration, need to monitor several events at once,
which is impossible when each of the events uses its own condition
variable. This patch adds a single condition variable to each domain
object. This variable can be used instead of the other event specific
conditions.
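A hedged sketch of the pattern this enables (virDomainObjWait and
virDomainObjBroadcast are the expected helpers; jobFinished is a
hypothetical predicate):

  /* waiter: holds the vm lock; the wait releases it while sleeping */
  while (!jobFinished(vm)) {
      if (virDomainObjWait(vm) < 0)
          goto error;
  }

  /* any event handler that changes state wakes every waiter */
  virDomainObjBroadcast(vm);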
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Well, if a server is being destructed, all underlying services and
their sockets should disappear with it. But due to a bug in our
implementation this is not the case. Yes, we are closing the sockets,
but that's not enough. We must also:
1) Unregister them from the event loop
2) Unref the service for each socket
The last step is needed, because each socket callback holds a
reference to the service object. Since in the first step we are
unregistering the callbacks, they no longer need the reference.
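A hedged sketch of the two missing steps described above (the closing
of the sockets themselves is already done elsewhere):

  /* for each socket the service listens on */
  virNetSocketRemoveIOCallback(sock);  /* 1) detach from the event loop */
  virObjectUnref(svc);                 /* 2) drop the ref the callback held */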
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Although highly unlikely, nobody says that virEventAddHandle()
can't return 0 as a handle to a socket callback. It can't happen
with our default implementation since all watches will have value
1 or greater, but users can register their own callback functions
(which can re-use unused watch IDs for instance). If this is the
case, weird things may happen.
Also, there's a little bug I'm fixing too: upon
virNetSocketRemoveIOCallback(), the variable holding the callback ID
was not reset. Therefore calling AddIOCallback() once again would
fail. Not that we are doing it right now, but we might.
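A hedged sketch of both parts of the fix:

  /* at creation time: mark "no callback registered" explicitly */
  sock->watch = -1;

  /* in RemoveIOCallback: 0 is a perfectly valid watch ID */
  if (sock->watch >= 0) {
      virEventRemoveHandle(sock->watch);
      sock->watch = -1;  /* let a later AddIOCallback() succeed again */
  }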
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When going through the code I noticed that
virNetSocketAddIOCallback() increases the reference counter of
@socket. However, its counterpart RemoveIOCallback does not. It took
me a while to realize this disproportion. The AddIOCallback registers
our own callback which eventually calls the desired callback and then
unrefs the @sock. Yeah, a bit complicated but it works. So, let's note
this hard-learned fact in a comment in RemoveIOCallback().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When setting up the daemon networking, new services are created. These
services then have sockets to listen on. Once created, the service
objects are added to the corresponding server object. However, during
that process the server increases the reference counter of the service
object. So, at the end of the function, we should decrease it again.
This way each service object will have only one reference, but that's
okay since the server is the only object holding a reference.
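A hedged sketch of the fixed pattern (virNetServerAddService of this
vintage also takes an mDNS entry name):

  if (virNetServerAddService(srv, svc, NULL) < 0)
      goto cleanup;
  virObjectUnref(svc);   /* the server now holds the only reference */
  svc = NULL;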
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1224233
Currently it's not possible to distinguish a fatal error (a memory
allocation failure, or a failure to open/read the directory) from a
perhaps less fatal one: not finding the "block" device in the
directory (which may be a disk entry without a block device).
In the case of the latter, we shouldn't cause the caller
(virStorageBackendSCSIFindLUs) to stop searching; rather, we should
allow it to try reading the next directory entry.
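A hedged sketch of the return-value convention this implies (the
helper name is illustrative only, not the actual one):

  /* processEntry (hypothetical):
   *   -1  fatal (allocation, directory access) -> stop the scan
   *    0  "block" device found and processed
   *    1  no "block" device for this entry -> keep scanning
   */
  if ((retval = processEntry(pool, lun_dirent)) < 0)
      goto cleanup;   /* only genuinely fatal errors end the search */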
Signed-off-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Add a `virsh migrate' option `--migrate-disks' that allows the CLI user
to explicitly specify the block devices to migrate.
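For example (a hypothetical invocation; disk targets are comma
separated and storage migration must also be requested):

$ virsh migrate --live --copy-storage-all --migrate-disks vda,vdb \
      guest qemu+ssh://dst.example.com/system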
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1203032
Implement a `migrate_disks' parameter for the QEMU driver. This multi-
value parameter can be used to explicitly specify which block devices
are to be migrated using the NBD server. Tunnelled migration using NBD
remains to be implemented.
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The `virTypedParamsAddStringList' function provides an interface to add
a NULL-terminated array of string values as a multi-value to the params.
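For example (assuming the migration parameter from this series as the
consumer):

  const char *disks[] = { "vda", "vdb", NULL };

  if (virTypedParamsAddStringList(&params, &nparams, &maxparams,
                                  VIR_MIGRATE_PARAM_MIGRATE_DISKS,
                                  disks) < 0)
      goto cleanup;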
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Add multikey API:
* virTypedParamsFilter that filters all the parameters with the
specified name.
* virTypedParamsGetStringList that returns a list of all the values for
the specified name that are of string type.
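A hedged sketch of the intended usage:

  const char **values = NULL;
  int nvalues;

  if ((nvalues = virTypedParamsGetStringList(params, nparams,
                                             VIR_MIGRATE_PARAM_MIGRATE_DISKS,
                                             &values)) < 0)
      goto cleanup;
  /* use values[0..nvalues-1]; free only the array, not the strings */
  VIR_FREE(values);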
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Allow multi-value parameters to be built using virTypedParamsAdd*
functions by removing the check for duplicates.
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The `virTypedParamsValidate' function can now be instructed to allow
multiple entries for some of the keys. To do this, flag the type with
the `VIR_TYPED_PARAM_MULTIPLE' flag.
Add unit tests for this new behaviour.
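A hedged sketch of the new annotation (the parameter names follow the
migration series):

  if (virTypedParamsValidate(params, nparams,
                             VIR_MIGRATE_PARAM_URI,
                             VIR_TYPED_PARAM_STRING,
                             VIR_MIGRATE_PARAM_MIGRATE_DISKS,
                             VIR_TYPED_PARAM_STRING |
                             VIR_TYPED_PARAM_MULTIPLE,
                             NULL) < 0)
      goto cleanup;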
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When playing with disk migration lately, I've noticed this warning in
domain logs:
WARNING: Image format was not specified for 'nbd://masina:49153/drive-virtio-disk0' and probing guessed raw.
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
Specify the 'raw' format explicitly to remove the restrictions.
So I started digging into the qemu source code to see what triggered
the warning. I'd expect qemu to know the formats of the guest's disks
since we tell it them on the command line. This led me to
qmp_drive_mirror() where the following can be found:
    if (!has_format) {
        format = mode == NEW_IMAGE_MODE_EXISTING ? NULL : bs->drv->format_name;
    }
So, the format is automatically initialized from the disk iff mode !=
"existing". Unfortunately, in migration we are tied to using this mode
(NBD doesn't support creating new images). Therefore the only way to
avoid this warning is to pass the format explicitly. The discussion on
the mailing list [1] resulted in code that always forces the "raw"
format for the NBD export.
[1] https://www.redhat.com/archives/libvir-list/2015-June/msg00153.html
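A hedged sketch of the resulting call on the source side (this era's
qemuMonitorDriveMirror signature is assumed):

  /* never let QEMU probe the NBD destination: it is always raw */
  if (qemuMonitorDriveMirror(mon, diskAlias, nbd_dest, "raw",
                             speed, 0, 0, mirror_flags) < 0)
      goto cleanup;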
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
This function returns a string (domain XML). Since commit d3ce7363,
when it was first introduced, it has been indented incorrectly:
static char
*qemuMigrationBeginPhase(..)
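whereas the conventional libvirt style attaches the pointer to the
type line:

static char *
qemuMigrationBeginPhase(..)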
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If virDomainObjGetDefs used in qemuDomainPinIOThread failed, the code
would jump to the 'cleanup' label after acquiring the job, thus leaving
the VM locked forever.
Introduced in commit cac6d639.
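A hedged sketch of the fix (the label and job names follow the usual
QEMU driver pattern):

  if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
      goto cleanup;

  if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
      goto endjob;   /* not 'cleanup': the job we started must be ended */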
The privileged flag will not change during runtime, while the
configuration might. Make the 'privileged' flag a member of the driver
again and mark it immutable. Should that ever change, add an accessor
that will group reads of the state.