virDomainSaveConfig calls virDomainDefFormat which was setting the caps
to NULL, thus keeping the old behaviour (i.e. not looking at
netprefix). This patch adds a virCapsPtr argument to the function,
allowing the configuration to be saved while skipping interface names
that were registered with virCapabilitiesSetNetPrefix().
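For illustration only, a driver might register such a prefix roughly
like this (a sketch; the "vif" prefix and the error handling are
assumptions, not taken from the patch):

    /* interface names starting with the registered prefix are treated
     * as auto-generated and skipped when saving the configuration */
    if (virCapabilitiesSetNetPrefix(caps, "vif") < 0)
        goto error;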
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
This function may be called with @dconnuri == NULL, e.g. from
virDomainMigrateToURI3() if the flags lack the VIR_MIGRATE_PEER2PEER
flag. Moreover, all the later functions called from here wrap it in
NULLSTR(), so why not do the same here?
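For reference, NULLSTR() is just a guard macro; its shape is roughly:

    /* print a placeholder instead of passing NULL to a printf-style
     * format */
    #define NULLSTR(s) ((s) ? (s) : "<null>")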
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
memory_dirty_rate corresponds to dirty-pages-rate in QEMU and
memory_iteration is what QEMU reports in dirty-sync-count.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The structure actually contains migration statistics rather than just
the status as the name suggests. Renaming it to
qemuMonitorMigrationStats removes the confusion.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
My commit 674afcb09e moved computing the
default listen address from qemuMigrationPrepareAny to
qemuMigrationPrepareIncoming. However, I didn't notice listenAddress was
later passed to qemuMigrationStartNBDServer. Thus, it would be called
with the original value of listenAddress (NULL).
Let's add the updated listen address to qemuProcessIncomingDef and use
it when starting NBD servers.
Reported-by: Michael Chapman <mike@very.puzzling.org>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Valgrind complained:
==18990== 20 (16 direct, 4 indirect) bytes in 1 blocks are definitely lost in loss record 188 of 996
==18990== at 0x4A057BB: calloc (vg_replace_malloc.c:593)
==18990== by 0x5292E9B: virAllocN (viralloc.c:191)
==18990== by 0x2221E731: qemuMigrationCookieXMLParseStr (qemu_migration.c:1012)
==18990== by 0x2221F390: qemuMigrationEatCookie (qemu_migration.c:1413)
==18990== by 0x222228CE: qemuMigrationPrepareAny (qemu_migration.c:3463)
==18990== by 0x22224121: qemuMigrationPrepareDirect (qemu_migration.c:3865)
==18990== by 0x22251C25: qemuDomainMigratePrepare3Params (qemu_driver.c:12414)
==18990== by 0x5389EE0: virDomainMigratePrepare3Params (libvirt-domain.c:5107)
==18990== by 0x1278DB: remoteDispatchDomainMigratePrepare3ParamsHelper (remote.c:5425)
==18990== by 0x53FF287: virNetServerProgramDispatch (virnetserverprogram.c:437)
==18990== by 0x540523D: virNetServerProcessMsg (virnetserver.c:135)
==18990== by 0x54052C7: virNetServerHandleJob (virnetserver.c:156)
==18990==
==18990== 20 (16 direct, 4 indirect) bytes in 1 blocks are definitely lost in loss record 189 of 996
==18990== at 0x4A057BB: calloc (vg_replace_malloc.c:593)
==18990== by 0x5292E9B: virAllocN (viralloc.c:191)
==18990== by 0x2221E731: qemuMigrationCookieXMLParseStr (qemu_migration.c:1012)
==18990== by 0x2221F390: qemuMigrationEatCookie (qemu_migration.c:1413)
==18990== by 0x222249D2: qemuMigrationRun (qemu_migration.c:4395)
==18990== by 0x22226365: doNativeMigrate (qemu_migration.c:4693)
==18990== by 0x22228E45: qemuMigrationPerform (qemu_migration.c:5553)
==18990== by 0x2225144B: qemuDomainMigratePerform3Params (qemu_driver.c:12621)
==18990== by 0x539F5D8: virDomainMigratePerform3Params (libvirt-domain.c:5206)
==18990== by 0x127305: remoteDispatchDomainMigratePerform3ParamsHelper (remote.c:5557)
==18990== by 0x53FF287: virNetServerProgramDispatch (virnetserverprogram.c:437)
==18990== by 0x540523D: virNetServerProcessMsg (virnetserver.c:135)
If we're replacing the NBD data, it's simplest to free the old object
(including the disk list) and allocate a new one.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
Currently the QEMU monitor is given an FD to the logfile. This
won't work in the future with virtlogd, so it needs to use the
qemuDomainLogContextPtr instead, but it shouldn't directly
access that object either. So define a callback that the
monitor can use for reporting errors from the log file.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
There are two pretty similar functions qemuProcessReadLog and
qemuProcessReadChildErrors. Both read from the QEMU log file
and try to strip out libvirt messages. The latter then reports
an error, while the former lets the callers report an error.
Re-write qemuProcessReadLog so that it uses a single read
into a dynamically allocated buffer. Then introduce a new
qemuProcessReportLogError that calls qemuProcessReadLog
and reports an error.
Convert all callers to use qemuProcessReportLogError.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Using qemuProcess{Init,Launch,FinishStartup} allows us to run
pre-migration commands on destination before asking QEMU to wait for
incoming migration data.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
NBD storage migration will not work with offline migration anyway and we
already checked that the user did not ask for it. Thus it doesn't make
sense to keep the code after 'done' label where we jump in case of
offline migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Some failure paths in qemuMigrationPrepareAny forgot to kill the just
started QEMU process. This patch fixes this by combining the 'stop' and
'endjob' labels into a new label 'stopjob'. This name was chosen to avoid
confusion with the most common semantics of 'endjob'. Normally, 'endjob'
is always called at the end of an API to stop the job we entered at the
beginning. In qemuMigrationPrepareAny we only want to stop the job in
failure path; on success we need to carry the job over to the Finish
phase.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Traditionally, we pass the incoming migration URI on the QEMU command
line, which has some drawbacks. Depending on the URI, QEMU may
initialize its migration state immediately without giving us a chance
to set any additional migration parameters (this applies mainly to
fd: URIs). For
some URIs the monitor may be completely blocked from the beginning until
migration is finished, which means we may be stuck in qmp_capabilities
command without being able to send any QMP commands.
QEMU solved this by introducing a "defer" parameter for the -incoming command
line option. This will tell QEMU to prepare for an incoming migration
while the actual incoming URI is sent later using the migrate-incoming
QMP command. Before calling this command we can talk to the monitor
normally and even set migration parameters which will be honored by
the incoming migration.
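The resulting flow looks roughly like this (illustrative values, not
taken from the patch):

    qemu ... -incoming defer                # QEMU waits, monitor usable
    { "execute": "migrate_set_speed", ... } # optional tuning via QMP
    { "execute": "migrate-incoming",
      "arguments": { "uri": "tcp:[::]:49152" } }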
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Since we require QEMU 0.12.0, we can assume that QEMU supports
all of the fd, tcp, unix and exec migration protocols.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The code reported that a migration flag is unsupported but didn't jump
to the error label. Probably an oversight in commit f88af9dc that
introduced the flag checking.
Since the flag was not enabled when 'eating' the migration cookie,
libvirt reported a bogus error when memory hotplug was enabled:
unsupported migration cookie feature memory-hotplug
The error was ignored though due to a bug in the code so it slipped
through testing.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1278404
In commit f41be296, we moved the vm->persistent check into
qemuDomainRemoveInactive, but we didn't clear vm->persistent before
calling qemuDomainRemoveInactive in some places that used to call it
unconditionally to remove the inactive vm.
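At such call sites the shape of the fix is roughly (a sketch, not the
literal patch):

    /* the domain must not survive the failed operation; clear the flag
     * so qemuDomainRemoveInactive actually removes it */
    vm->persistent = 0;
    qemuDomainRemoveInactive(driver, vm);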
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Tunnelled migration can hang if the destination qemu exits despite all the
ABI checks. This happens whenever the destination qemu exits before the
complete transfer is noticed by source qemu. The savevm state checks at
runtime can fail at destination and cause qemu to error out.
The source qemu can't notice it as the EPIPE is not propagated to it.
The qemuMigrationIOFunc() notices the stream being broken from virStreamSend()
and it cleans up the stream alone. The qemuMigrationWaitForCompletion() would
never get to 100% transfer completion.
The qemuMigrationWaitForCompletion() never breaks out as well since
the ssh connection to destination is healthy, and the source qemu also thinks
the migration is ongoing as the FD to which it transfers is never
closed or broken. So, the migration will hang forever. Even Ctrl-C on the
virsh migrate wouldn't be honoured. Close the source side FD when there is
an error in the stream. That way, the source qemu updates itself and
qemuMigrationWaitForCompletion() notices the failure.
Close the FD for all kinds of errors to be sure. The error message is not
copied for EPIPE so that the destination error is copied instead later.
Note:
Reproducible with repeated migrations between Power hosts running in different
subcores-per-core modes.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
qemuMigrationIsAllowed would disallow offline migration if the VM
contained host devices or memory modules. Since we don't transfer any
state during offline migration, we can safely migrate VMs with such a
configuration.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1265049
Use the migration @flags for checking various migration aspects rather
than picking them out as booleans. Document the new semantics in the
function header.
Now that qemuMigrationIsAllowed is always called with @vm, we can drop
the @def argument and simplify the control flow.
Additionally the comment is invalid so drop it.
Extract the hostdev check from qemuMigrationIsAllowed into a separate
function since that is the only part that needs to be done in the v2
migration protocol prepare phase on the destination. All other checks
were added when the v3 protocol existed so they don't need to be
extracted.
This change will allow us to drop the @def argument for
qemuMigrationIsAllowed and further simplify the function.
Even though QEMU on the source host reports completed migration and thus
we move to the Finish phase, QEMU on the destination host may still be
processing migration data. Thus before we can start guest CPUs on the
destination, we have to wait for a completed migration event.
https://bugzilla.redhat.com/show_bug.cgi?id=1265902
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
With new QEMU which supports migration events,
qemuMigrationCheckJobStatus needs to explicitly query QEMU for migration
statistics once migration is completed to make sure the caller sees
up-to-date statistics with both old and new QEMU. However, some callers
are not interested in the statistics at all and once we start waiting
for a completed migration on the destination host too, checking the
statistics would even fail. Let's push the decision whether to update
the statistics or not to the caller.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The function already has two bool parameters and we will need to add a
new one. Let's switch to flags to make the callers more readable.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The destination host gets detailed statistics about the current
migration from the source host via the migration cookie and copies them to
the domain object so that they can be queried using
virDomainGetJobStats. However, we should only copy statistics to the
domain object when migration finished successfully.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Even if we are migrating a domain with VIR_MIGRATE_PAUSED flag set, we
should still update the total time of the migration. Updating downtime
doesn't hurt either, even though we don't actually start guest CPUs.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
So far we have the following pattern occurring over and over
again:
if (!vm->persistent)
qemuDomainRemoveInactive(driver, vm);
It's safe to put the check into the function and save some LoC.
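With the check folded in, the helper starts roughly like this (sketch):

    void qemuDomainRemoveInactive(virQEMUDriverPtr driver,
                                  virDomainObjPtr vm)
    {
        /* never remove a persistent domain from the list */
        if (vm->persistent)
            return;
        /* ... existing removal logic ... */
    }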
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Add a new parser flag to mark code paths that parse XML files which
will not be used with existing VM state, so that post-parse callbacks
can possibly do ABI-incompatible changes if needed.
When persistently migrating a domain to a destination host where the
same domain already exists (i.e., it is persistent and shutdown at the
destination), we would happily throw away the original persistent
definition without properly freeing it. And when updating the definition
fails for some reason, we don't properly revert to the original state,
leaving the domain broken.
In addition to fixing these issues, the patch also makes sure the domain
definition parsed from a migration cookie is either used or freed.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
For quite a long time we haven't needed to postpone queueing events
until the end of the function, since we no longer have the big driver
lock.
Let's make the code of qemuMigrationFinish simpler by queuing events at
the time we generate them.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Every single call to qemuDomainEventQueue() uses the following pattern:
if (event)
qemuDomainEventQueue(driver, event);
Let's move the check for valid event to qemuDomainEventQueue and
simplify all callers.
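After the change, the helper itself tolerates NULL, roughly (sketch):

    void qemuDomainEventQueue(virQEMUDriverPtr driver,
                              virObjectEventPtr event)
    {
        if (event)
            virObjectEventStateQueue(driver->domainEventState, event);
    }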
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Finish is the final state in v2 of our migration protocol. If something
fails, we have no option to abort the migration and resume the original
domain. Non-fatal errors (such as failure to start guest CPUs or make
the domain persistent) have to be treated as success. Keeping the domain
running while reporting the failure was just asking for trouble.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Whenever something fails during incoming migration in Finish phase
before we started guest CPUs, we need to kill the domain in addition to
reporting the failure.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When we save status XML at the point during migration where we have
already started the domain on destination, we can't really go back and
abort migration. Thus the only thing we can do is to log a warning and
report success.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Offline migration is quite special because we don't really need to do
anything but make the domain persistent. Let's do it separately from
normal migration to avoid cluttering the code with
!(flags & VIR_MIGRATE_OFFLINE).
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If the current live definition does not have memory hotplug enabled but
the persistent one does, libvirt would reject migration if the
destination does not support memory hotplug, even if the user didn't
want to persist the VM at the destination and thus the XML containing
the memory hotplug definition would not be used. To fix this corner
case, the code now checks for memory hotplug in the newDef only if
VIR_MIGRATE_PERSIST_DEST was used.
Commit id '2e7cea243' added a check for an error from Finish instead
of 'unexpected error'; however, if for some reason there wasn't an
error, then virGetLastError could return NULL, resulting in a NULL
pointer deref of err->domain.
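A NULL-safe way to consume the last error looks roughly like this
(sketch; the message wording is hypothetical):

    virErrorPtr err = virGetLastError();
    virReportError(VIR_ERR_OPERATION_FAILED,
                   _("migration Finish failed: %s"),
                   err && err->message ? err->message
                                       : _("unexpected error"));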
Libvirt doesn't reliably know the location of the backing chain when
pre-creating images for non-shared migration. This isn't a problem for
full copy, but incremental copy requires the information.
Forbid pre-creating the image in cases where incremental migration is
required. This limitation can perhaps be lifted once libvirt fully
supports loading of backing chain information from the XML.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1249587
When stopping a domain on the destination host after a failed migration,
we need to avoid resetting security labels since the domain is still
running on the source host. While we were correctly doing so in some
cases, there were still some paths which did this wrong.
https://bugzilla.redhat.com/show_bug.cgi?id=1242904
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If the destination libvirt doesn't support memory hotplug (all the
support was introduced by adding new elements), the destination would
attempt to start qemu with an invalid configuration. Worse, qemu might
hang in such a situation.
Fix this by sending a required migration feature called 'memory-hotplug'
to the destination. If the destination doesn't recognize it, it will fail
the migration.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1248350
If we are migrating to a UNIX socket, we accept() a connection
from qemu and use that FD to set up a tunnel. However, the FD is
not closed as often as it should be.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When QEMU exits on destination during migration, the source reports
either success (if the failure happened at the very end) or unhelpful
"unexpectedly failed" error message. However, the Finish API called on
the destination may report a real error so let's use it instead of the
generic one.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
virDomainMigrateFinish* APIs were unfortunately designed to return the
pointer to the domain on destination and NULL on error. This looks OK in
normal cases but the same API is also called when we know migration
failed and thus we expect Finish to return NULL even if it actually did
all it was supposed to do without any error. The call is defined to
return a nonnull domain pointer over RPC, which means returning NULL
will always result in an error being sent. If this was not in fact an
error, the API itself wouldn't set anything in the thread-local
virError, which makes the RPC layer come up with its own "Library
function returned error but did not set virError" error.
This is quite confusing and also hard to detect by the caller. This
patch adds a special error code which can be used to check that Finish
successfully aborted migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If QEMU fails during incoming migration, the domain disappears including
a possibly useful error message read from QEMU log file. Let's remember
the error in virQEMUDriver so that Finish can report more than just "no
such domain".
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Since we already support the MIGRATION event, we just need to make sure
the domain condition is signalled whenever a p2p connection drops or the
domain is paused due to IO error and we can avoid waking up every 50 ms
to check whether something happened.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We don't need to call query-migrate every 50ms when we get the current
migration state via MIGRATION event.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When migration fails in qemuMigrationPrepareAny, we unconditionally call
qemuDomainRemoveInactive, which should only be called for transient
domains. The check for !vm->persistent was accidentally removed by
commit 540c339.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When a connection to the destination host during a p2p migration drops,
we know we will have to cancel the migration; it doesn't make sense to
waste resources by trying to finish the migration. We already do so
after sending "migrate" command to QEMU and we should do it while
waiting for drive mirrors to become ready too.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Checking the status of all parts of a migration and aborting it when
something fails is a complex thing which makes the waiting loop hard to
read.
This patch moves all the checks into a separate function similarly to
what was done for drive mirror loops.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Instead of passing the current job name to several functions which
already know what the current job is, we can generate the name where we
actually
need to use it.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Once we start waiting for migration events instead of polling
query-migrate, priv->job.current will not be regularly updated anymore
because we will get the current status directly from the events. Thus
virDomainGetJob{Info,Stats} will have to query QEMU, but they can't just
blindly update priv->job.current structure. This patch introduces
qemuMigrationFetchJobStatus which just fills in a caller supplied
structure and makes qemuMigrationUpdateJobStatus a tiny wrapper around
it.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
QEMU_CAPS_SEAMLESS_MIGRATION capability says QEMU supports
SPICE_MIGRATE_COMPLETED event. Thus we can just drop all code which
polls query-spice and replace it with waiting for the event.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When libvirtd is restarted during migration, we properly cancel the
ongoing migration (unless it managed to almost finish before the
restart). But if we were also migrating storage using NBD, we would
completely forget about the running disk mirrors.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Abort migration as soon as we detect that some of the disk mirrors
failed. There's no sense in trying to finish memory migration first.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Instead of cancelling disk mirrors sequentially, let's just call
block-job-cancel for all migrating disks and then wait until all
disappear.
In case we cancel disk mirrors at the end of successful migration we
also need to check that all block jobs completed successfully. Otherwise we
have to abort the migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
By switching block jobs to use domain conditions, we can drop some
pretty complicated code in NBD storage migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Because we are polling we may detect some errors after we asked QEMU for
migration status even though they occurred before. If this happens and
QEMU reports migration completed successfully, we would happily report
the migration succeeded even though we should have cancelled it because
of the other error.
In practice it is not a big issue now but it will become a much bigger
issue once the check for storage migration status is moved inside the
loop in qemuMigrationWaitForCompletion.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1203032
Implement a `migrate_disks' parameter for the QEMU driver. This multi-
value parameter can be used to explicitly specify which block devices
are to be migrated using the NBD server. Tunnelled migration using NBD
remains to be done.
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When playing with disk migration lately, I've noticed this warning in
domain logs:
WARNING: Image format was not specified for 'nbd://masina:49153/drive-virtio-disk0' and probing guessed raw.
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
Specify the 'raw' format explicitly to remove the restrictions.
So I started digging into qemu source code to see what has triggered
the warning. I'd expect qemu to know formats of guest's disks since we
tell them on the command line. This led me to qmp_drive_mirror() where
the following can be found:
if (!has_format) {
format = mode == NEW_IMAGE_MODE_EXISTING ? NULL : bs->drv->format_name;
}
So, format is automatically initialized from the disk iff mode !=
"existing". Unfortunately, in migration we are tied to use this mode
(NBD doesn't support creating new images). Therefore the only way to
avoid this warning is to pass a format. The discussion on the mailing
list [1] resulted in code that always forces the "raw" format for the
NBD export.
[1] https://www.redhat.com/archives/libvir-list/2015-June/msg00153.html
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Pavel Boldin <pboldin@mirantis.com>
This function is returning a string (domain XML). Since d3ce7363
when it was first introduced, it was indented incorrectly:
static char
*qemuMigrationBeginPhase(..)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
By default, getaddrinfo() will return addresses for both
IPv4 and IPv6 if both protocols are enabled, and so the
RPC code will listen/connect to both protocols too. There
may be cases where it is desirable to restrict this to
just one of the two protocols, so add an 'int family'
parameter to all the TCP related APIs.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Most virDomainDiskIndexByName callers do not care about the index; what
they really want is a disk def pointer.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When cancelling drive mirror, always try to do that for all disks even
if it fails for some of them. Report the first error we saw.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Instead of redoing the same filtering over and over every time we need to
walk through all disks which are being migrated.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
As of eeb008dbfc the variable is not used anymore. Drop it.
Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When migrating a domain while changing its name and using
VIR_MIGRATE_PERSIST_DEST flag, libvirt would fail to properly change the
name in the persistent definition. The inconsistency results in weird
behavior when dumping domain XML, destroying the domain, restarting
libvirtd and likely in several other situations.
Since the new name is already stored in vm->def->name, we just need to
make sure the persistent definition uses this new name too.
https://bugzilla.redhat.com/show_bug.cgi?id=1076354
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
In qemuMigrationDriveMirror we can start all disk mirrors in parallel.
We wait until they are all ready, or one of them aborts.
In qemuMigrationCancelDriveMirror, we wait until all mirrors are
properly stopped. This is necessary to ensure that the destination VM is
fully in sync with the (paused) source VM.
If a drive mirror cannot be cancelled, then the destination is not in a
consistent state. In this case it is not safe to continue with the
migration.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
virDomainGetJobStats is able to report statistics of a completed
migration, however to get usable downtime and total time statistics both
hosts have to keep synchronized time. To provide at least some
estimation of the times even when NTP daemons are not running on both
hosts we can just ignore the time needed to transfer a migration cookie
to the destination host. The result will still be inaccurate but a bit
more predictable. The total/down time will just be at least what we
report.
https://bugzilla.redhat.com/show_bug.cgi?id=1213434
This is basically turning qemuDomObjEndAPI into a more general
function. Other drivers which get a reference to domain objects may
benefit from this function too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This needs to be specified in way too many places for a simple validation
check. The ostype/arch/virttype validation checks later in
DomainDefParseXML should catch most of the cases that this was covering.
1. 'last_good_net' indicates the index of the last successfully
configured net, so def->nets[last_good_net] should also be cleaned up if
an error occurs.
2. If an error occurs in 'virNetDevMacVLanVPortProfileRegisterCallback'
(the second 'goto err_exit' in the loop), we should also do the
'virNetDevVPortProfileDisassociate' cleanup for the
'virNetDevVPortProfileAssociate' (the first code block in the loop). So
we should consider the net successfully configured only after the first
code block in the loop finishes.
Signed-off-by: Huanle Han <hanxueluo@gmail.com>
When pre-creating storage for domains, we need to find the
corresponding disk in the XML on the destination (the domain XML may
differ there, e.g. the disk is accessible under a different path). For
better debugging, I'm
printing all info I received on a disk. But there was a typo when
printing the disk capacity: "%lluu" instead of "%llu".
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The problem with the previous implementation is that even when
qemuMigrationUpdateJobStatus() detects a migration job has completed, it
will still sleep for 50 ms, which is unnecessary and only adds to the VM
pause time.
Signed-off-by: Xing Lin <xinglin@cs.utah.edu>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
qemuMigrationCookieAddNBD is usually called from within an async
MIGRATION_OUT or MIGRATION_IN job, so it needs to start a nested job.
(The one exception is during the Begin phase when change protection
isn't enabled, but qemuDomainObjEnterMonitorAsync will behave the same
as qemuDomainObjEnterMonitor in this case.)
This bug was encountered with a libvirt client that repeatedly queries
the disk mirroring block job info during a migration. If one of these
queries occurs just as the Perform migration cookie is baked, libvirt
crashes.
Relevant logs are as follows:
6701: warning : qemuDomainObjEnterMonitorInternal:1544 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous
[1] 6701: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block","id":"libvirt-629"}
[2] 6699: info : qemuMonitorIOWrite:503 : QEMU_MONITOR_IO_WRITE: mon=0x7fefdc004700 buf={"execute":"query-block","id":"libvirt-629"}
[3] 6704: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block-jobs","id":"libvirt-630"}
[4] 6699: info : qemuMonitorJSONIOProcessLine:203 : QEMU_MONITOR_RECV_REPLY: mon=0x7fefdc004700 reply={"return": [...], "id": "libvirt-629"}
6699: error : qemuMonitorJSONIOProcessLine:211 : internal error: Unexpected JSON reply '{"return": [...], "id": "libvirt-629"}'
At [1] qemuMonitorBlockStatsUpdateCapacity sends its request, then waits
on mon->notify. At [2] the request is written out to the monitor socket.
At [3] qemuMonitorBlockJobInfo sends its request, and also waits on
mon->notify. The reply from the first request is received at [4].
However, qemuMonitorJSONIOProcessLine is not expecting this reply since
the second request hadn't completed sending. The reply is dropped and an
error is returned.
qemuMonitorIO signals mon->notify twice during its error handling,
waking up both of the threads waiting on it. One of them clears mon->msg
as it exits qemuMonitorSend; the other crashes:
qemuMonitorSend (mon=0x7fefdc004700, msg=<value optimized out>) at qemu/qemu_monitor.c:975
975 while (!mon->msg->finished) {
(gdb) print mon->msg
$1 = (qemuMonitorMessagePtr) 0x0
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
If a VM migration is aborted, a disk mirror may be failed by QEMU before
libvirt has a chance to cancel it. The disk->mirrorState remains at
_ABORT in this case, and this breaks subsequent mirrorings of that disk.
We should instead check the mirrorState directly and transition to _NONE
if it is already aborted. Do the check *after* aborting the block job in
QEMU to avoid a race.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
If virCloseCallbacksSet fails, qemuMigrationBegin must return NULL to
indicate an error occurred.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
virDomainHasDiskMirror() currently detects only jobs that add the mirror
elements. Since some operations like migration are interlocked by
existing block jobs on the given domain, the check needs to be
instrumented to check regular jobs too.
This patch renames virDomainHasDiskMirror to virDomainHasDiskBlockjob
and adds an argument that allows callers to select whether it returns
true only for block copy jobs, as those interlock making the domain
persistent. The other two uses trigger on any block job type.
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Make sure that libvirt has all vital information needed to reliably
represent configuration of guest's memory devices in case of a
migration.
This patch forbids migration in case the required slot number and module
base address are not present (they failed to be loaded from qemu via
the monitor).
Commit cf54c60699 introduced the ability
to create missing storage volumes during migration. For network disks,
however, we may not necessarily be able to detect whether they already
exist -- there is no straightforward way to map the disk to a storage
volume, and even if there were it's possible no configured storage pool
actually contains the disk.
It is better to assume the network disk exists in this case, rather than
aborting the migration completely. If the volume really is missing, QEMU
will generate an appropriate error later in the migration.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
In qemu 2.3, the migration status will include 'cancelling' in the
window between when an asynchronous cancel has been requested and
when the migration is actually halted. Previously, qemu hid this
state and reported 'active'. Libvirt manages the sequence okay
even when the string is unrecognized (that is, it will report an
unknown state:
Migration: [ 69 %]^Cerror: internal error: unexpected migration status in cancelling.
but the migration is still cancelled), but recognizing the string
makes for a smoother user experience.
* src/qemu/qemu_monitor.h
(QEMU_MONITOR_MIGRATION_STATUS_CANCELLING): Add enum.
* src/qemu/qemu_monitor.c (qemuMonitorMigrationStatus): Map it.
* src/qemu/qemu_migration.c (qemuMigrationUpdateJobStatus): Adjust
clients.
* src/qemu/qemu_monitor_json.c
(qemuMonitorJSONGetMigrationStatusReply): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
virnetdevopenvswitch.h declares a few functions that can be called to
add ports to and remove them from OVS bridges, and retrieve the
migration data for a port. It does not contain any data definitions
that are used by domain_conf.h, but for some reason domain_conf.h was
#including it. Only the files that actually use something from
virnetdevopenvswitch.h should be directly #including it. This adds a
few lines to the project, but saves all the files that don't need it
from the extra computing, and makes the dependencies more clear cut.
There was a mess in how we store the unlimited value for memory limits
and how we handle values provided by the user. Internally there were
two possible ways to store the unlimited value: as 0 or as
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED. Because we chose to store memory
limits as unsigned long long, we cannot use -1 to represent unlimited.
It's much easier for us to say that everything greater than
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED means unlimited and leave 0 as a valid
value, despite the fact that it makes no sense to set a limit of 0.
Remove the unnecessary function virCompareLimitUlong. The update of the
test is to prevent 0 from being misused as unlimited in the future.
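The convention can be captured in a tiny helper, roughly (sketch; the
helper name is hypothetical):

    /* VIR_DOMAIN_MEMORY_PARAM_UNLIMITED and anything above it means
     * "no limit"; 0 stays a valid (if pointless) limit */
    static bool
    exampleMemLimitIsSet(unsigned long long lim)
    {
        return lim < VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
    }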
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1146539
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1179678
When migrating with storage, libvirt iterates over domain disks and
instructs qemu to migrate the ones we are interested in (shared, RO and
source-less disks are skipped). The disks are migrated in series. No
new disk is transferred until the previous one has been quiesced.
This is checked on the qemu monitor via the 'query-block-jobs' command.
If the disk has been quiesced, it practically went from copying its
content to the mirroring state, where all disk writes are mirrored to
the other side of the migration too. Having said that, there's one
inherent error in the design. The monitor command we use reports only
active jobs. So if the job fails for whatever reason, we will not see
it anymore in the command output. And this can happen fairly simply:
just try to migrate a domain with storage. If the storage migration
fails (e.g. due to ENOSPC on the destination) we resume the guest on
the destination and let it run on a partly copied disk.
The proper fix is what even the comment in the code says: listen for
qemu events instead of polling. If the storage migration changes state,
an event is emitted and we can act accordingly: either consider the disk
copied and continue the process, or consider the disk mangled and abort
the migration.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1191355
When we attempt to migrate a vm with a migrateuri that has no scheme:
# virsh migrate test4 --live qemu+ssh://lhuang/system --migrateuri 127.0.0.1
target libvirtd will crash because uri->scheme is NULL in
qemuMigrationPrepareDirect on this line:
if (STRNEQ(uri->scheme, "tcp") &&
Add a NULL check before this line. Also fix a similar bug in
doNativeMigrate, which could only happen when the destination libvirtd
returned an incorrect URI.
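The check itself is a one-liner, roughly (sketch; the exact error
wording and label are assumptions):

    if (!uri->scheme) {
        virReportError(VIR_ERR_INVALID_ARG,
                       _("missing scheme in migration URI: %s"), uri_in);
        goto cleanup;
    }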
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Add the missing jump to the error label when the uuid in the
migration cookie XML does not match the uuid of the migrated
domain.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Depending on the context, either error out if the domain
has disappeared in the meantime, or just ignore the value
to allow marking the function as ATTRIBUTE_RETURN_CHECK.
https://bugzilla.redhat.com/show_bug.cgi?id=1181182
When we hit an error in qemuMigrationPrepareAny and go to cleanup
with rc < 0, we forget to clear priv->origname. This makes the next
migration of this vm fail, because the stale origname left in priv
generates a wrong cookie for the next migration.
This patch makes sure priv->origname is NULL when migration fails
on the target host.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The virDomainDefParse* and virDomainDefFormat* methods both
accept the VIR_DOMAIN_XML_* flags defined in the public API,
along with a set of other VIR_DOMAIN_XML_INTERNAL_* flags
defined in domain_conf.c.
This is seriously confusing & error prone for a number of
reasons:
- VIR_DOMAIN_XML_SECURE, VIR_DOMAIN_XML_MIGRATABLE and
VIR_DOMAIN_XML_UPDATE_CPU are only relevant for the
formatting operation
- Some of the VIR_DOMAIN_XML_INTERNAL_* flags only apply
to parse or to format, but not both.
This patch cleanly separates out the flags. There are two
distinct VIR_DOMAIN_DEF_PARSE_* and VIR_DOMAIN_DEF_FORMAT_*
flags that are used by the corresponding methods. The
VIR_DOMAIN_XML_* flags received via public API calls must
be converted to the VIR_DOMAIN_DEF_FORMAT_* flags where
needed.
The various calls to virDomainDefParse which hardcoded the
use of the VIR_DOMAIN_XML_INACTIVE flag change to use the
VIR_DOMAIN_DEF_PARSE_INACTIVE flag.
There is one problem that causes various errors in the daemon. When
a domain is waiting for a job, it is unlocked while waiting on the
condition. However, if that domain is for example transient and being
removed in another API (e.g. cancelling incoming migration), it gets
unref'd. If the first call, that was waiting, fails to get the job, it
unrefs the domain object, and because it was the last reference, it
causes clearing of the whole domain object. However, when finishing the
call, the domain must be unlocked, but there is no way for the API to
know whether it was cleaned or not (unless there is some ugly temporary
variable, but let's scratch that).
The root cause is that our APIs don't ref the objects they are using and
all use the implicit reference that the object has when it is in the
domain list. That reference can be removed when the API is waiting for
a job. And because the APIs don't do their own ref'ing, it results in
the ugly checking of the return value of virObjectUnref() that we have
everywhere.
This patch changes qemuDomObjFromDomain() to ref the domain (using
virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI() which
should be the only function in which the return value of
virObjectUnref() is checked. This makes all reference counting
deterministic and makes the code a bit clearer.
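The helper's shape is roughly (sketch):

    void qemuDomObjEndAPI(virDomainObjPtr vm)
    {
        /* unlock only if this unref did not dispose of the object;
         * the one place where virObjectUnref's result is checked */
        if (virObjectUnref(vm))
            virObjectUnlock(vm);
    }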
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
A future patch will allow recursion into backing chains when
collecting block stats. This patch should not change behavior,
but merely moves out the common code that will be reused once
recursion is enabled, and adds the parameter that will turn on
recursion.
* src/qemu/qemu_monitor.h (qemuMonitorGetAllBlockStatsInfo)
(qemuMonitorBlockStatsUpdateCapacity): Add recursion parameter,
although it is ignored for now.
* src/qemu/qemu_monitor.c (qemuMonitorGetAllBlockStatsInfo)
(qemuMonitorBlockStatsUpdateCapacity): Likewise.
* src/qemu/qemu_monitor_json.h
(qemuMonitorJSONGetAllBlockStatsInfo)
(qemuMonitorJSONBlockStatsUpdateCapacity): Likewise.
* src/qemu/qemu_monitor_json.c
(qemuMonitorJSONGetAllBlockStatsInfo)
(qemuMonitorJSONBlockStatsUpdateCapacity): Add parameter, and
split...
(qemuMonitorJSONGetOneBlockStatsInfo)
(qemuMonitorJSONBlockStatsUpdateCapacityOne): ...into helpers.
(qemuMonitorJSONGetBlockStatsInfo): Update caller.
* src/qemu/qemu_driver.c (qemuDomainGetStatsBlock): Update caller.
* src/qemu/qemu_migration.c (qemuMigrationCookieAddNBD): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Based on the previous commit, we can now precreate missing volumes.
While digging the functionality out of the storage driver would be
nicer, if you've seen the code you know it's nearly impossible. So I'm
going from the other end:
1) For a given disk target, the disk path is looked up.
2) For the disk path, the storage pool is looked up, a volume XML is
constructed and then passed to virStorageVolCreateXML() which has all
the knowledge how to create raw images, (encrypted) qcow(2) images,
etc.
One of the advantages of this approach is that we don't have to care about
image conversion - qemu does that for us. So for instance, users can
transform qcow2 into raw on migration (if the correct XML is passed to
the migration API).
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Up 'til now, users need to precreate non-shared storage on migration
themselves. This is not a very friendly requirement and we should do
something about it. In this patch, the migration cookie is extended,
so that the <nbd/> section does not contain only the NBD port, but also
info on the disks being migrated. This patch sends a list of pairs of:
<disk target; disk size>
to the destination. The actual storage allocation is left for next
commit.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If the job fails in qemuMigrationRun, we expect the jobinfo type to be
FAILED. But the jobinfo type won't be updated until entering
qemuMigrationWaitForCompletion. We should make sure it is updated in
all conditions. Moreover, we can't use qemuMigrationUpdateJobStatus
here because the job may fail in libvirt, in which case we can't query
the job status from QEMU.
Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
The migration job status is updated in qemuMigrationUpdateJobStatus,
which is called in qemuMigrationRun. But if migration is cancelled
before that update, such as in qemuMigrationDriveMirror, the jobinfo
type won't be updated to CANCELLED. After this patch, we can get
jobinfo type CANCELLED if migration is cancelled during drive
mirror. Moreover, we can't use qemuMigrationUpdateJobStatus
because from qemu's point of view it's just the drive mirror being
cancelled and the migration hasn't even started yet.
Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
virReportSystemError is reserved for reporting system errors; calling
it with VIR_ERR_* error codes produces error messages that do not make
any sense, such as
internal error: guest failed to start: Kernel doesn't support user
namespace: Link has been severed
We should prohibit wrong usage with a syntax-check rule.
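The distinction, roughly (sketch):

    /* wrong: a VIR_ERR_* code is not an errno, so strerror() turns it
     * into an unrelated message like "Link has been severed" */
    virReportSystemError(VIR_ERR_INTERNAL_ERROR, "%s",
                         _("Kernel doesn't support user namespace"));

    /* right: virReportSystemError takes an errno value ... */
    virReportSystemError(errno, _("cannot open %s"), path);

    /* ... and virReportError takes a VIR_ERR_* code */
    virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                   _("Kernel doesn't support user namespace"));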
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Oops, I forgot to squash one more instance of the same check in the
previous commit (v1.2.10-144-g52691f9).
https://bugzilla.redhat.com/show_bug.cgi?id=1147331
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Any attempt to start a tunnelled migration with libvirtd that supports
RDMA migration (specifically commit v1.2.8-226-ged22a47) crashes
libvirtd on the destination host.
The crash is inevitable because qemuMigrationPrepareAny is always called
with NULL protocol in case of tunnelled migration.
https://bugzilla.redhat.com/show_bug.cgi?id=1147331
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We used to set migration capabilities only when a user asked for them in
flags. This is fine when migration succeeds since the QEMU process is
killed in the end but in case migration fails or if it's cancelled, some
capabilities may remain turned on with no way to turn them off. To fix
that, migration capabilities have to be turned on if requested but
explicitly turned off in case they were not requested but QEMU supports
them.
https://bugzilla.redhat.com/show_bug.cgi?id=1163953
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Commit 6e5c79a1 tried to fix the deadlock between
nwfilter{Define,Undefine} and guest startup, but the same deadlock
exists for updating/attaching a network device to a domain.
The deadlock was introduced by the removal of the global QEMU driver
lock, because nwfilter was counting on this lock to ensure that all
driver locks are locked inside of nwfilter{Define,Undefine}.
This patch extends the usage of virNWFilterReadLockFilterUpdates to
prevent the deadlock for all possible paths in the QEMU driver. The LXC
and UML drivers still have their global locks.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1143780
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
==404== 232 bytes in 1 blocks are definitely lost in loss record 669 of 758
==404== at 0x4C2B934: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==404== by 0x52A2BF3: virAlloc (viralloc.c:144)
==404== by 0x1D49AD70: qemuMigrationCookieAddStatistics (qemu_migration.c:554)
==404== by 0x1D49AD70: qemuMigrationBakeCookie (qemu_migration.c:1228)
==404== by 0x1D4A43B8: qemuMigrationFinish (qemu_migration.c:5002)
==404== by 0x1D4C9339: qemuDomainMigrateFinish3Params (qemu_driver.c:11526)
Introduced by commit 5d6fb96
In qemuMigrationFinish, mig->nbd cannot be initialized by
qemuMigrationEatCookie without the QEMU_MIGRATION_COOKIE_NBD flag.
That causes qemuMigrationStopNBDServer to return early without
stopping the NBD server properly.
Signed-off-by: Weiwei Li <nuonuoli@tencent.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
commit 3e1e16aa8d (Use a port from the
migration range for NBD as well) changed nbd port allocation from
remotePorts to migrationPorts, but did not change the port releasing
process, which causes an error when migrating several times (above 64):
error: internal error: Unable to find an unused port in range
'migration' (49152-49215)
https://bugzilla.redhat.com/show_bug.cgi?id=1159245
Signed-off-by: Weiwei Li <nuonuoli@tencent.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
If migration_host is specified as an IPv6 address without brackets,
it is resolved to an incorrect address, such as:
tcp:2001:0DB8::1428:4444,
but the correct address should be:
tcp:[2001:0DB8::1428]:4444
so we should add brackets when parsing it.
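A minimal sketch of the bracketing (hypothetical variable names):

    /* a bare IPv6 address contains ':', so wrap it in brackets before
     * appending the port */
    if (virAsprintf(&uri,
                    strchr(host, ':') ? "tcp:[%s]:%d" : "tcp:%s:%d",
                    host, port) < 0)
        goto cleanup;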
Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
Commit fba6bc4 introduced the non-migratable invtsc feature,
breaking save/migration with host-model and host-passthrough.
On hosts with this feature present it was automatically included
in the CPU definition, regardless of QEMU support.
Commit de0aeaf stopped including it by default for host-model,
but failed to fix host-passthrough.
This commit ignores checking of CPU features with host-passthrough,
since we don't pass them to QEMU (only -cpu host is passed),
allowing domains using host-passthrough that were saved with
the broken version of libvirtd to be restored.
https://bugzilla.redhat.com/show_bug.cgi?id=1147584
RDMA Live migration requires registering memory with the hardware, and
thus QEMU offers a new 'capability' to pre-register / mlock() the guest
memory in advance for higher RDMA performance before the migration
begins. This capability is disabled by default, which means QEMU will
register the memory with the hardware on an on-demand basis.
This patch exposes this capability with the following example usage:
virsh migrate --live --rdma-pin-all --migrateuri rdma://hostname domain qemu+ssh://hostname/system
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This patch adds support for RDMA protocol in migration URIs.
USAGE: $ virsh migrate --live --migrateuri rdma://hostname domain qemu+ssh://hostname/system
Since libvirt runs QEMU in a pretty restricted environment, several
files need to be added to cgroup_device_acl (in qemu.conf) for QEMU to
be able to access the host's infiniband hardware. Full documentation of
the feature can be found on QEMU wiki:
http://wiki.qemu.org/Features/RDMALiveMigration
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Currently we only support TCP protocol for native QEMU migration but
this is going to be changed. Let's make the code more general and remove
hardcoded TCP protocol from several places.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
For compatibility with old libvirt we need to support both tcp:host and
tcp://host migration URIs. Let's make the code that parses them a bit
cleaner.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
RDMA migration uses the 'setup' state in QEMU to optionally lock
all memory before the migration starts. The total time spent in
this state is exposed as VIR_DOMAIN_JOB_SETUP_TIME.
Additionally, QEMU also exports migration throughput (mbps) for both
memory and disk, so let's add them too: VIR_DOMAIN_JOB_MEMORY_BPS,
VIR_DOMAIN_JOB_DISK_BPS.
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Upstream qemu 1.4 added some drive-mirror tunables not present
when it was first introduced in 1.3. Management apps may want
to set these in some cases (for example, without tuning
granularity down to sector size, a copy may end up occupying
more bytes than the original because an entire cluster is
copied even when only a sector within the cluster is dirty,
although tuning it down results in more CPU time to do the
copy). I haven't personally needed to use the parameters, but
since they exist, and since the new API supports virTypedParams,
we might as well expose them.
Since the tuning parameters aren't often used, and omitted from
the QMP command when unspecified, I think it is safe to rely on
qemu 1.3 to issue an error about them being unsupported, rather
than trying to create a new capability bit in libvirt.
Meanwhile, all versions of qemu from 1.4 to 2.1 have a bug where
a bad granularity (such as non-power-of-2) gives a poor message:
error: internal error: unable to execute QEMU command 'drive-mirror': Invalid parameter 'drive-virtio-disk0'
because of abuse of QERR_INVALID_PARAMETER (which is supposed to
name the parameter that was given a bad value, rather than the
value passed to some other parameter). I don't see that a
capability check will help, so we'll just live with it (and it
has since been improved in upstream qemu).
* src/qemu/qemu_monitor.h (qemuMonitorDriveMirror): Add
parameters.
* src/qemu/qemu_monitor.c (qemuMonitorDriveMirror): Likewise.
* src/qemu/qemu_monitor_json.h (qemuMonitorJSONDriveMirror):
Likewise.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONDriveMirror):
Likewise.
* src/qemu/qemu_driver.c (qemuDomainBlockCopyCommon): Likewise.
(qemuDomainBlockRebase, qemuDomainBlockCopy): Adjust callers.
* src/qemu/qemu_migration.c (qemuMigrationDriveMirror): Likewise.
* tests/qemumonitorjsontest.c (qemuMonitorJSONDriveMirror): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
If qemuMigrationEatCookie() fails to set mig, we jump to cleanup:,
which will call qemuMigrationCancelDriveMirror() without first checking
whether mig == NULL.
Signed-off-by: John Ferlan <jferlan@redhat.com>
After the previous commit, migration statistics on the source and
destination hosts are not equal because the destination updated time
statistics. Let's send the result back so that the same data can be
queried on both sides of the migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Total time of a migration and total downtime transferred from a source
to a destination host do not account for the transfer time to the
destination host or for the time elapsed before guest CPUs are
resumed. Thus, source libvirtd remembers when migration started and when
guest CPUs were paused. Both timestamps are transferred to destination
libvirtd which uses them to compute total migration time and total
downtime. Obviously, this requires the time to be synchronized between
the two hosts. The reported times are useless otherwise but they would
be equally useless if we didn't do this recomputation, so we don't lose
anything by doing it.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When migrating a transient domain or with VIR_MIGRATE_UNDEFINE_SOURCE
flag, the domain may disappear from source host. And so will migration
statistics associated with the domain. We need to transfer the
statistics at the end of a migration so that they can be queried at the
destination host.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
virDomainGetJobStats gains new VIR_DOMAIN_JOB_STATS_COMPLETED flag that
can be used to fetch statistics of a completed job rather than a
currently running job.
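From the public API, usage looks roughly like this (sketch):

    virTypedParameterPtr params = NULL;
    int nparams = 0;
    int type;

    /* fetch statistics of the most recently completed job instead of
     * the currently running one */
    if (virDomainGetJobStats(dom, &type, &params, &nparams,
                             VIR_DOMAIN_JOB_STATS_COMPLETED) < 0)
        goto error;
    virTypedParamsFree(params, nparams);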
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Job statistics data were tracked in several structures and variables.
Let's make a new qemuDomainJobInfo structure which can be used as a
single source of statistics data as a preparation for storing data about
completed a job.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When QEMU fails during incoming migration after we successfully started
it (i.e., during Perform or Finish phase), we report a rather unhelpful
message
Unable to read from monitor: Connection reset by peer
We already have a code that takes error messages from QEMU's error
output but we disable it once QEMU successfully starts. This patch
postpones this until the end of Finish phase during incoming migration
so that we can report a much better error message:
internal error: early end of file from monitor: possible problem:
Unknown savevm section or instance '0000:00:05.0/virtio-balloon' 0
load of migration failed
https://bugzilla.redhat.com/show_bug.cgi?id=1090093
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
While reviewing the new virDomainBlockCopy API, Peter Krempa
pointed out that our existing design of using MiB/s for block
job bandwidth is rather coarse, especially since qemu tracks
it in bytes/s; so virDomainBlockCopy only accepts bytes/s.
But once the new API is implemented for qemu, we will be in
the situation where it is possible to set a value that cannot
be accurately reflected back to the user, because the existing
virDomainGetBlockJobInfo defaults to the coarser units.
Fortunately, we have an escape hatch; and one that has already
served us well in the past: we can use the flags argument to
specify which scale to use (see virDomainBlockResize for prior
art). This patch fixes the query side of the API; made easier
by previous patches that split the query side out from the
modification code. Later patches will address the virsh
interface, as well retrofitting all other blockjob APIs to
also accept a flag for toggling bandwidth units.
* include/libvirt/libvirt.h.in (_virDomainBlockJobInfo)
(VIR_DOMAIN_BLOCK_COPY_BANDWIDTH): Document sizing issues.
(virDomainBlockJobInfoFlags): New enum.
* src/libvirt.c (virDomainGetBlockJobInfo): Document new flag.
* src/qemu/qemu_monitor.h (qemuMonitorBlockJobInfo): Add parameter.
* src/qemu/qemu_monitor.c (qemuMonitorBlockJobInfo): Likewise.
* src/qemu/qemu_monitor_json.h (qemuMonitorJSONBlockJobInfo):
Likewise.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONBlockJobInfo)
(qemuMonitorJSONGetBlockJobInfoOne): Likewise. Don't scale here.
* src/qemu/qemu_migration.c (qemuMigrationDriveMirror): Update
callers.
* src/qemu/qemu_driver.c (qemuDomainBlockPivot)
(qemuDomainBlockJobImpl): Likewise.
(qemuDomainGetBlockJobInfo): Likewise, and support new flag.
Signed-off-by: Eric Blake <eblake@redhat.com>
Another layer of overly-multiplexed code that deserves to be
split into obviously separate paths for query vs. modify.
This continues the cleanup started in commit cefe0ba.
In the process, make some tweaks to simplify the logic when
parsing the JSON reply. There should be no user-visible
semantic changes.
* src/qemu/qemu_monitor.h (qemuMonitorBlockJob): Drop parameter.
(qemuMonitorBlockJobInfo): New prototype.
(BLOCK_JOB_INFO): Drop enum.
* src/qemu/qemu_monitor_json.h (qemuMonitorJSONBlockJob)
(qemuMonitorJSONBlockJobInfo): Likewise.
* src/qemu/qemu_monitor.c (qemuMonitorBlockJob): Split...
(qemuMonitorBlockJobInfo): ...into second function.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONBlockJob): Move
block info portions...
(qemuMonitorJSONGetBlockJobInfo): ...here, and rename...
(qemuMonitorJSONBlockJobInfo): ...and export.
(qemuMonitorJSONGetBlockJobInfoOne): Alter return semantics.
* src/qemu/qemu_driver.c (qemuDomainBlockPivot)
(qemuDomainBlockJobImpl, qemuDomainGetBlockJobInfo): Adjust
callers.
* src/qemu/qemu_migration.c (qemuMigrationDriveMirror)
(qemuMigrationCancelDriveMirror): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
During a QEMU live migration several warning messages about job
handling could be written to syslog on the destination host:
"entering monitor without asking for a nested job is dangerous"
The messages are written because the job handling during migration
uses incorrect hard-coded asyncJob values in several places. This
patch passes the required asyncJob value around and prevents the
warnings, as well as any issues the warnings may be referring to.
https://bugzilla.redhat.com/show_bug.cgi?id=1130089
Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
In qemuMigrationToFile we enter the monitor multiple times and don't
check whether the VM is still alive after returning from the monitor.
Add the checks to skip pieces of code in case the VM crashes while
saving its state.
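A minimal sketch of the added pattern (monitor helper names
abbreviated; the real code uses the async variants):

    qemuDomainObjEnterMonitor(driver, vm);
    rc = qemuMonitorMigrateToFd(priv->mon, flags, fd);
    qemuDomainObjExitMonitor(driver, vm);

    /* the VM may have crashed while we were in the monitor */
    if (!virDomainObjIsActive(vm)) {
        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                       _("guest unexpectedly quit"));
        goto cleanup;
    }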
In the future we might need to track state of individual images. Move
the readonly and shared flags to the virStorageSource struct so that we
can keep them on a per-image basis.
To allow changing the name that is recorded at the top of the current
image chain used in a block pull/rebase operation, we need to specify
the backing name to qemu. This is done via the "backing-file" attribute
of the block-stream command.
Replace:

    if (virBufferError(&buf)) {
        virBufferFreeAndReset(&buf);
        virReportOOMError();
        ...
    }

with:

    if (virBufferCheckError(&buf) < 0)
        ...

This should not be a functional change (unless some callers misused
the virBuffer APIs, in which case a different error would be reported
than before).
As we have been doing with the enum structures elsewhere, the
"src/qemu/" directory has now been cleaned up: all the enums that were
defined in its header files were converted to typedefs. This patch
includes all the adjustments needed to resolve conflicts from this
kind of change. Enum-to-typedef conversions were made in
"src/qemu/qemu_{capabilities, domain, migration, hotplug}.h".
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
As part of the work on backing chains, I'm finding that it would
be easier to directly manipulate chains of pointers (adding a
snapshot merely adjusts pointers to form the correct list) rather
than copy data from one struct to another. This patch converts
domain disk source to be a pointer.
In this patch, the pointer is ALWAYS allocated (thanks in part to
the previous patch forwarding all disk def allocation through a
common point), and all other changes are just mechanical fallout of
the new type; there should be no functional change. It is possible
that we may want to leave the pointer NULL for a cdrom with no
medium in a later patch, but as that requires a closer audit of the
source to ensure we don't fault on a null dereference, I didn't do
it here.
* src/conf/domain_conf.h (_virDomainDiskDef): Change type of src.
* src/conf/domain_conf.c: Adjust all clients.
* src/security/security_selinux.c: Likewise.
* src/qemu/qemu_domain.c: Likewise.
* src/qemu/qemu_command.c: Likewise.
* src/qemu/qemu_conf.c: Likewise.
* src/qemu/qemu_process.c: Likewise.
* src/qemu/qemu_migration.c: Likewise.
* src/qemu/qemu_driver.c: Likewise.
* src/lxc/lxc_driver.c: Likewise.
* src/lxc/lxc_controller.c: Likewise.
* tests/securityselinuxlabeltest.c: Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
This is similar to the previous commit in that we need to explicitly
send migrate_cancel when libvirt detects an error other than those
reported by query-migrate. However, the possibility of hitting such an
error is pretty small.
When QEMU reports a failed or cancelled migration, we don't need to send
it the migrate_cancel QMP command. But in all other error paths, such as if
we detect broken connection to a destination daemon or something else
happens inside libvirt, we need to explicitly send migrate_cancel
command instead of relying on the migration to be implicitly cancelled
when destination QEMU is killed.
Because we were not doing so, one could end up with a paused domain
after failed migration.
https://bugzilla.redhat.com/show_bug.cgi?id=1098833
For now, we either set the migration URI via the command line
('--migrateuri') or construct the URI by looking up the destination
host's hostname, which is resolved automatically via DNS.
But when the destination host has two or more NICs, we may need to
send the migration data over a specific NIC that differs from the
automatically resolved one, for reasons such as performance or security.
In that case we must explicitly specify the migration URI on the
command line every time, which is troublesome when there are many such
hosts (and don't forget virt-manager).
This patch adds a configuration file option on the destination host to
store a default value, which can be set to a migration hostname or to
one of this host's addresses used for transferring data, so the user
doesn't have to specify it on the command line every time.
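The result is a single entry in qemu.conf on the destination host,
along these lines (the exact option name shown here is illustrative):

    # /etc/libvirt/qemu.conf on the destination host
    # Hostname or IP address used by migration sources when
    # transferring data to this host.
    migration_host = "10.0.0.2"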
Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When qemu driver is polling for migration to finish (in
qemuMigrationWaitForCompletion), it may happen that another job allowed
during migration is running and if it does not finish within 30 seconds,
migration would be cancelled because of that. However, we can just
ignore the timeout and let the waiting loop try again later.
If an event fired at the end of migration is ever implemented in QEMU,
we can just wait for the event instead of polling for migration status
and libvirt will behave consistently, i.e., migration won't be cancelled
in case another job started during migration takes a long time to finish.
For bug https://bugzilla.redhat.com/show_bug.cgi?id=1083238
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This commit gives virDomainMigrateToURI the ability to check for SASL
credentials when attempting to migrate a domain with the QEMU driver.
Signed-off-by: Sahid Orentino Ferdjaoui <sahid.ferdjaoui@cloudwatt.com>
When EIO hits qemu while it is replying to
qemuMigrationUpdateJobStatus(), qemu blocks, the migration of RAM can
complete in the meantime, and when qemu unblocks, it sends us
BLOCK_IO_ERROR plus a migration "status": "complete". Even though we
act upon the BLOCK_IO_ERROR by setting the proper state of the domain,
the call still waits for the proper reply to query_migrate on the
monitor, and after it gets it, it sees that the migration is completed
and finishes the migration. This is what the abort_on_error flag was
meant for (we can migrate despite these errors, but the flag must
inhibit such behaviour). Changing the order of the steps guarantees
the flag works properly.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1045833
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The code in virstoragefile.c is getting more complex as I
consolidate backing chain handling code. But for the setuid
virt-login-shell, we don't need to crawl backing chains. It's
easier to audit things for setuid security if there are fewer
files involved, so this patch moves the one function that
virFileOpenAs() was actually relying on to also live in virfile.c.
* src/util/virstoragefile.c (virStorageFileIsSharedFS)
(virStorageFileIsSharedFSType): Move...
* src/util/virfile.c (virFileIsSharedFS, virFileIsSharedFSType):
...to here, and rename.
(virFileOpenAs): Update caller.
* src/security/security_selinux.c
(virSecuritySELinuxSetFileconHelper)
(virSecuritySELinuxSetSecurityAllLabel)
(virSecuritySELinuxRestoreSecurityImageLabelInt): Likewise.
* src/security/security_dac.c
(virSecurityDACRestoreSecurityImageLabelInt): Likewise.
* src/qemu/qemu_driver.c (qemuOpenFileAs): Likewise.
* src/qemu/qemu_migration.c (qemuMigrationIsSafe): Likewise.
* src/util/virstoragefile.h: Adjust declarations.
* src/util/virfile.h: Likewise.
* src/libvirt_private.syms (virfile.h, virstoragefile.h): Move
symbols as appropriate.
Signed-off-by: Eric Blake <eblake@redhat.com>
It's finally time to start tracking disk backing chains in
<domain> XML. The first step is to start refactoring code
so that we have an object more convenient for representing
each host source resource in the context of a single guest
<disk>. Ultimately, I plan to move the new type into src/util
where it can be reused by virStorageFile, but to make the
transition easier to review, this patch just creates the
new type then fixes everything until it compiles again.
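Conceptually, the split looks like this (a simplified sketch, not the
exact libvirt definitions; the real structs carry many more fields):

    typedef struct _virDomainDiskSourceDef virDomainDiskSourceDef;
    struct _virDomainDiskSourceDef {
        int type;      /* virDomainDiskType: file, block, network, ... */
        char *path;    /* host view of the source resource */
        int format;    /* virStorageFileFormat */
    };

    struct _virDomainDiskDef {
        virDomainDiskSourceDef src;   /* host resource, now separable */
        char *dst;                    /* guest-visible target, e.g. "vda" */
        /* ... remaining guest-facing fields ... */
    };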
* src/conf/domain_conf.h (_virDomainDiskDef): Split...
(_virDomainDiskSourceDef): ...to new struct.
(virDomainDiskAuthClear): Use new type.
* src/conf/domain_conf.c (virDomainDiskDefFree): Split...
(virDomainDiskSourceDefClear): ...to new function.
(virDomainDiskGetType, virDomainDiskSetType)
(virDomainDiskGetSource, virDomainDiskSetSource)
(virDomainDiskGetDriver, virDomainDiskSetDriver)
(virDomainDiskGetFormat, virDomainDiskSetFormat)
(virDomainDiskAuthClear, virDomainDiskGetActualType)
(virDomainDiskDefParseXML, virDomainDiskSourceDefFormat)
(virDomainDiskDefFormat, virDomainDiskDefForeachPath)
(virDomainDiskDefGetSecurityLabelDef)
(virDomainDiskSourceIsBlockType): Adjust all users.
* src/lxc/lxc_controller.c (virLXCControllerSetupDisk):
Likewise.
* src/lxc/lxc_driver.c (lxcDomainAttachDeviceMknodHelper):
Likewise.
* src/qemu/qemu_command.c (qemuAddRBDHost, qemuParseRBDString)
(qemuParseDriveURIString, qemuParseGlusterString)
(qemuParseISCSIString, qemuParseNBDString)
(qemuDomainDiskGetSourceString, qemuBuildDriveStr)
(qemuBuildCommandLine, qemuParseCommandLineDisk)
(qemuParseCommandLine): Likewise.
* src/qemu/qemu_conf.c (qemuCheckSharedDevice)
(qemuAddISCSIPoolSourceHost, qemuTranslateDiskSourcePool):
Likewise.
* src/qemu/qemu_driver.c (qemuDomainUpdateDeviceConfig)
(qemuDomainPrepareDiskChainElement)
(qemuDomainSnapshotCreateInactiveExternal)
(qemuDomainSnapshotPrepareDiskExternalBackingInactive)
(qemuDomainSnapshotPrepareDiskInternal)
(qemuDomainSnapshotPrepare)
(qemuDomainSnapshotCreateSingleDiskActive)
(qemuDomainSnapshotUndoSingleDiskActive)
(qemuDomainBlockPivot, qemuDomainBlockJobImpl)
(qemuDomainBlockCopy, qemuDomainBlockCommit): Likewise.
* src/qemu/qemu_migration.c (qemuMigrationIsSafe): Likewise.
* src/qemu/qemu_process.c (qemuProcessGetVolumeQcowPassphrase)
(qemuProcessInitPasswords): Likewise.
* src/security/security_selinux.c
(virSecuritySELinuxSetSecurityFileLabel): Likewise.
* src/storage/storage_driver.c (virStorageFileInitFromDiskDef):
Likewise.
* tests/securityselinuxlabeltest.c (testSELinuxLoadDef):
Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Busy enterprise workloads hosted on large VMs tend to dirty
memory faster than the transfer rate achieved via live guest migration.
Despite some good recent improvements (and using dedicated 10Gig NICs
between hosts) the live migration may NOT converge.
Recently (in version 1.6) support was added to qemu to let a user
force convergence of their migration via a new migration capability:
"auto-converge". This feature allows qemu to auto-detect a lack of
convergence and trigger a throttle-down of the VCPUs.
This patch includes the libvirt support needed to trigger this
feature. (Testing is in progress.)
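On the monitor level this boils down to toggling the capability via
QMP, roughly:

    { "execute": "migrate-set-capabilities",
      "arguments": { "capabilities": [
          { "capability": "auto-converge", "state": true } ] } }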
Signed-off-by: Chegu Vinod <chegu_vinod@hp.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Any source file which calls the logging APIs now needs
to have a VIR_LOG_INIT("source.name") declaration at
the start of the file. This provides a static variable
of the virLogSource type.
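For example, a source file would now start with (the module name here
is just illustrative):

    #include <config.h>
    #include "virlog.h"

    VIR_LOG_INIT("qemu.qemu_migration");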
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
If there has to be some sort of separator, it is better to use a
comment with the filename, copyright, description, license information
and authors.
Found by:
git grep -nH '^$' | grep '\.[ch]:1:'
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Currently, there's just one place where we care whether a hook script
changed the domain XML: the migration hook for incoming migration. In
all other places where a hook script is executed, we don't read the
XML back from the script.
Regardless, the hook script can alter the domain XML, and hence we
should taint the domain if the script did so.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
A couple of codepaths shared the same code, which can be moved out
into a function, and in one of those places,
qemuMigrationConfirmPhase(), the domain was resumed even if it wasn't
running before the migration started.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1057407
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
While auditing the error reporting, I noticed that migration
had some issues. Some of the static helper functions tried
to call virDispatchError(), even though their caller will also
report the error. Also, if a migration is cancelled early
because a uri was not set, we did not guarantee that the finish
stage would not overwrite the first error message.
* src/qemu/qemu_migration.c (doPeer2PeerMigrate2)
(doPeer2PeerMigrate3): Preserve first error when cancelling.
* src/libvirt.c (virDomainMigrateVersion3Full): Likewise.
(virDomainMigrateVersion1, virDomainMigrateVersion2)
(virDomainMigrateDirect): Avoid redundant error dispatch.
(virDomainMigrateFinish2, virDomainMigrateFinish3)
(virDomainMigrateFinish3Params): Don't report error on cleanup
path.
Signed-off-by: Eric Blake <eblake@redhat.com>
The public virConnectRef and virConnectClose APIs are just thin
wrappers around virObjectRef/virObjectUnref, with added object
validation and an error reset. Within our backend drivers, use
of the object validation is just an inefficiency since we always
pass valid objects. More important to think about is what
happens with the error reset; our uses of virConnectRef happened
to be safe (since we hadn't encountered any earlier errors), but
in several cases the use of virConnectClose could lose a real
error.
Ideally, we should also avoid calling virConnectOpen() from
within backend drivers - but that is a known situation that
needs much more design work.
* src/qemu/qemu_process.c (qemuProcessReconnectHelper)
(qemuProcessReconnect): Avoid nested public API call.
* src/qemu/qemu_driver.c (qemuAutostartDomains)
(qemuStateInitialize, qemuStateStop): Likewise.
* src/qemu/qemu_migration.c (doPeer2PeerMigrate): Likewise.
* src/storage/storage_driver.c (storageDriverAutostart):
Likewise.
* src/uml/uml_driver.c (umlAutostartConfigs): Likewise.
* src/lxc/lxc_process.c (virLXCProcessAutostartAll): Likewise.
(virLXCProcessReboot): Likewise, and avoid leaking conn on error.
Signed-off-by: Eric Blake <eblake@redhat.com>
The @listenAddress and @cookiein arguments should be exchanged,
because their order in the caller and the callee does not match.
This results in the listen address being ignored for peer-to-peer
migration and the cookie being ignored for v2 migration.
Introduced by c4ac7ef (v1.1.4-rc1~141).
https://bugzilla.redhat.com/show_bug.cgi?id=1049338
Signed-off-by: Minoru Usui <usui@mxm.nes.nec.co.jp>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
This nested job is canceled by the first ExitMonitor call (even though
it was not created by the corresponding EnterMonitor call), and
again in qemuMigrationPrepareAny if qemuProcessStart failed.
This can lead to a crash if the vm object was disposed of before calling
qemuDomainRemoveInactive:
0 ..62bc in virClassIsDerivedFrom (klass=0xdeadbeef,
parent=0x7ffce4cdd270) at util/virobject.c:166
1 ..6666 in virObjectIsClass at util/virobject.c:362
2 ..66b4 in virObjectLock at util/virobject.c:314
3 ..477e in virDomainObjListRemove at conf/domain_conf.c:2359
4 ..7a64 in qemuDomainRemoveInactive at qemu/qemu_domain.c:2087
5 ..956c in qemuMigrationPrepareAny at qemu/qemu_migration.c:2469
This was added by commit e4e2822, exposed by 5a4c237 and c7ac251.
https://bugzilla.redhat.com/show_bug.cgi?id=1018267
Report the error in virPortAllocatorAcquire instead
of doing it in every caller.
The error contains the port range name instead of the intended
use for the port, e.g.:
Unable to find an unused port in range 'display' (65534-65535)
instead of:
Unable to find an unused port for SPICE
This also adds error reporting when the QEMU driver could not
find an unused port for VNC, VNC WebSockets or NBD migration.
QEMU 1.6.0 introduced a new migration status: setup.
Libvirt does not expect such a string in QMP and refuses to migrate
with the error "unexpected migration status in setup".
This patch fixes it.
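The fix amounts to teaching the monitor's migration status enum (and
its string mapping) about the new value, along these lines (a sketch
following libvirt's naming conventions, not the exact definition):

    typedef enum {
        QEMU_MONITOR_MIGRATION_STATUS_INACTIVE,
        QEMU_MONITOR_MIGRATION_STATUS_SETUP,     /* new in QEMU 1.6.0 */
        QEMU_MONITOR_MIGRATION_STATUS_ACTIVE,
        QEMU_MONITOR_MIGRATION_STATUS_COMPLETED,
        QEMU_MONITOR_MIGRATION_STATUS_ERROR,
        QEMU_MONITOR_MIGRATION_STATUS_CANCELLED,

        QEMU_MONITOR_MIGRATION_STATUS_LAST
    } qemuMonitorMigrationStatus;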
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Since 86d90b3a (yes, my patch; again) we have supported NBD storage
migration. However, on the error recovery path we got the steps
reversed. The correct order is: return the NBD port to the
virPortAllocator and only then unlock the vm or remove it from the
driver. Not vice versa.
==11192== Invalid write of size 4
==11192== at 0x11488559: qemuMigrationPrepareAny (qemu_migration.c:2459)
==11192== by 0x11488EA6: qemuMigrationPrepareDirect (qemu_migration.c:2652)
==11192== by 0x114D1509: qemuDomainMigratePrepare3Params (qemu_driver.c:10332)
==11192== by 0x519075D: virDomainMigratePrepare3Params (libvirt.c:7290)
==11192== by 0x1502DA: remoteDispatchDomainMigratePrepare3Params (remote.c:4798)
==11192== by 0x12DECA: remoteDispatchDomainMigratePrepare3ParamsHelper (remote_dispatch.h:5741)
==11192== by 0x5212127: virNetServerProgramDispatchCall (virnetserverprogram.c:435)
==11192== by 0x5211C86: virNetServerProgramDispatch (virnetserverprogram.c:305)
==11192== by 0x520A8FD: virNetServerProcessMsg (virnetserver.c:165)
==11192== by 0x520A9E1: virNetServerHandleJob (virnetserver.c:186)
==11192== by 0x50DA78F: virThreadPoolWorker (virthreadpool.c:144)
==11192== by 0x50DA11C: virThreadHelper (virthreadpthread.c:161)
==11192== Address 0x1368baa0 is 576 bytes inside a block of size 688 free'd
==11192== at 0x4A07F5C: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==11192== by 0x5079A2F: virFree (viralloc.c:580)
==11192== by 0x11456C34: qemuDomainObjPrivateFree (qemu_domain.c:267)
==11192== by 0x50F41B4: virDomainObjDispose (domain_conf.c:2034)
==11192== by 0x50C2991: virObjectUnref (virobject.c:262)
==11192== by 0x50F4CFC: virDomainObjListRemove (domain_conf.c:2361)
==11192== by 0x1145C125: qemuDomainRemoveInactive (qemu_domain.c:2087)
==11192== by 0x11488520: qemuMigrationPrepareAny (qemu_migration.c:2456)
==11192== by 0x11488EA6: qemuMigrationPrepareDirect (qemu_migration.c:2652)
==11192== by 0x114D1509: qemuDomainMigratePrepare3Params (qemu_driver.c:10332)
==11192== by 0x519075D: virDomainMigratePrepare3Params (libvirt.c:7290)
==11192== by 0x1502DA: remoteDispatchDomainMigratePrepare3Params (remote.c:4798)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So far we have been checking whether qemu supports 'nbd-server-start'.
This, however, makes no sense on the source, as nbd-server-* is used
on the destination; on the source, 'drive-mirror' is used instead.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If a migration is cancelled, the port reserved for it should be
released back to the migration port bitmap too.
Signed-off-by: Zeng Junliang <zengjunliang@huawei.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When adding an automatically allocated port to a well-formed migration
URI, keep it well-formed:
tcp://1.2.3.4/ -> tcp://1.2.3.4/:12345 # wrong
tcp://1.2.3.4/ -> tcp://1.2.3.4:12345/ # fixed
tcp://1.2.3.4 -> tcp://1.2.3.4:12345 # still works
tcp:1.2.3.4 -> tcp:1.2.3.4:12345 # still works (old syntax)
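A sketch of the idea using libvirt's URI helpers (illustrative, not
the actual patch; error handling trimmed):

    #include "viruri.h"

    /* Append an auto-allocated port to a migration URI while keeping
     * it well-formed: parse, set the port, re-format. */
    static char *
    appendMigrationPort(const char *uristr, unsigned short port)
    {
        virURIPtr uri = virURIParse(uristr);
        char *ret;

        if (!uri)
            return NULL;
        uri->port = port;   /* lands before any trailing path */
        ret = virURIFormat(uri);
        virURIFree(uri);
        return ret;
    }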
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
https://bugzilla.redhat.com/show_bug.cgi?id=1019053
When we migrate VMs concurrently, there's a chance that libvirtd on the
destination assigns the same port to different migrations, which will
lead to migration failure during the prepare phase on the destination.
So we use virPortAllocator here to solve the problem.
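The destination then reserves ports via the usual allocator pattern (a
sketch; the driver field name here is an assumption):

    unsigned short port = 0;

    if (virPortAllocatorAcquire(driver->migrationPorts, &port) < 0)
        goto cleanup;

    /* ... use the port for the incoming migration ... */

    virPortAllocatorRelease(driver->migrationPorts, port);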
Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
After my patches, some functions gained one more argument
(@listenAddress) which wasn't included in the debug printing of the
arguments they were called with. The functions in question are
qemuMigrationPrepareDirect and qemuMigrationPerform.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=994364
Whenever we check for ABI stability, we have new XML (e.g. provided by
the user, or obtained from a snapshot) which we compare to the old XML
to see whether the ABI would break. However, if the new XML was
produced via virDomainGetXMLDesc(..., VIR_DOMAIN_XML_MIGRATABLE), it
lacks some devices, e.g. the 'pci-root' controller. Hence, the ABI
stability check fails even though the ABI is in fact stable. Moreover,
we can't simply fix virDomainDefCheckABIStability because removing the
correct devices is a task for the driver. For instance, the qemu
driver wants to remove the USB controller too, while the LXC driver
doesn't. That's why we need a special qemu wrapper over
virDomainDefCheckABIStability which removes the correct devices from
the domain XML, produces the MIGRATABLE XML, and calls the ABI
stability check function.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Since the wait is done during migration (still inside
QEMU_ASYNC_JOB_MIGRATION_OUT), the code should enter the monitor as such
in order to prohibit all other jobs from interfering in the meantime.
This patch fixes bug #1009886, in which qemuDomainGetBlockInfo was
waiting on the monitor condition and, after GetSpiceMigrationStatus
mangled its internal data, the daemon crashed.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1009886
qemuMigrationEatCookie has flags to control if these should
be parsed, but it does not fill mig->flags. These cookies might
get leaked if these flags are not set by qemuMigrationBakeCookie.
42 (32 direct, 10 indirect) bytes in 1 blocks are definitely lost in
loss record 361 of 662
==123== by 0x1BA33FCA: qemuMigrationEatCookie (qemu_migration.c:678)
==123== by 0x1BA34A1E: qemuMigrationRun (qemu_migration.c:3108)
==123== by 0x1BA3622B: doNativeMigrate (qemu_migration.c:3343)
==123== by 0x1BA3B408: qemuMigrationPerform (qemu_migration.c:4138)
Osier Yang pointed out that ever since commit 31cb030, the
signature of qemuDomainObjEndJob was changed to return a bool.
While comparison against 0 or > 0 still gives the right results,
it looks fishy; we also had one place that was comparing < 0
which is effectively dead code.
* src/qemu/qemu_migration.c (qemuMigrationPrepareAny): Fix dead
code bug.
(qemuMigrationBegin): Use more canonical form of bool check.
* src/qemu/qemu_driver.c (qemuAutostartDomain)
(qemuDomainCreateXML, qemuDomainSuspend, qemuDomainResume)
(qemuDomainShutdownFlags, qemuDomainReboot, qemuDomainReset)
(qemuDomainDestroyFlags, qemuDomainSetMemoryFlags)
(qemuDomainSetMemoryStatsPeriod, qemuDomainInjectNMI)
(qemuDomainSendKey, qemuDomainGetInfo, qemuDomainScreenshot)
(qemuDomainSetVcpusFlags, qemuDomainGetVcpusFlags)
(qemuDomainRestoreFlags, qemuDomainGetXMLDesc)
(qemuDomainCreateWithFlags, qemuDomainAttachDeviceFlags)
(qemuDomainUpdateDeviceFlags, qemuDomainDetachDeviceFlags)
(qemuDomainBlockResize, qemuDomainBlockStats)
(qemuDomainBlockStatsFlags, qemuDomainMemoryStats)
(qemuDomainMemoryPeek, qemuDomainGetBlockInfo)
(qemuDomainAbortJob, qemuDomainMigrateSetMaxDowntime)
(qemuDomainMigrateGetCompressionCache)
(qemuDomainMigrateSetCompressionCache)
(qemuDomainMigrateSetMaxSpeed)
(qemuDomainSnapshotCreateActiveInternal)
(qemuDomainRevertToSnapshot, qemuDomainSnapshotDelete)
(qemuDomainQemuMonitorCommand, qemuDomainQemuAttach)
(qemuDomainBlockJobImpl, qemuDomainBlockCopy)
(qemuDomainBlockCommit, qemuDomainOpenGraphics)
(qemuDomainGetBlockIoTune, qemuDomainGetDiskErrors)
(qemuDomainPMSuspendForDuration, qemuDomainPMWakeup)
(qemuDomainQemuAgentCommand, qemuDomainFSTrim): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=999352
Since commit v1.0.5-56-g449e6b1 (Pull parsing of migration xml up into
QEMU driver APIs) any attempt to rename a domain during migration fails
with the following error message:
internal error Incoming cookie data had unexpected name DOM vs DOM2
This is because migration cookies always use the original domain name
and the mentioned commit failed to propagate the name back to
qemuMigrationPrepareAny.
https://bugzilla.redhat.com/show_bug.cgi?id=979477
Since 1.0.3 we have been using the new way to copy non-shared storage
during migration (the NBD way). However, whether the new or the old
way is used is not controllable by the user; the new way is
unconditionally turned on if both sides of the migration support it.
Moreover, the implementation is not complete: the combination with the
VIR_MIGRATE_TUNNELLED flag is missing (as we would need to open a new
port on the destination), in which case we just error out. This is a
deadly combination: not letting users choose their destiny, and
erroring out. We should not do that; instead, we should VIR_WARN and
turn NBD off.
Convert the remaining methods in vircgroup.c to report errors
instead of returning errno values.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Convert the type of loop iterators named 'i', 'j', 'k',
'ii', 'jj', 'kk' to be 'size_t' instead of 'int' or
'unsigned int', also sanitizing 'ii', 'jj', 'kk' to use
the normal 'i', 'j', 'k' naming.
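For example (an illustrative hunk, not taken from the patch):

    -    unsigned int ii;
    +    size_t i;
    ...
    -    for (ii = 0; ii < vm->def->ndisks; ii++)
    +    for (i = 0; i < vm->def->ndisks; i++)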
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Whenever virPortAllocatorRelease is called with port == 0, it complains
that the port is not in an allowed range, which is expected, as the
port was never allocated. Let's make virPortAllocatorRelease ignore
0 ports the same way free() ignores NULL pointers.
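In other words, a guard like this at the top of the function (sketch):

    if (port == 0)
        return 0;   /* like free(NULL): nothing was ever allocated */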
https://bugzilla.redhat.com/show_bug.cgi?id=981139
If a domain is paused before migration starts, we need to tell that to
the destination libvirtd to prevent it from resuming the domain at the
end of migration. This regression was introduced by commit 5379bb0.
Since commit 23e8b5d8, the code has been refactored in a way that
supports domains with multiple graphics elements, and commit 37b415200
allows starting such domains. However, neither of those commits takes
migration into account. Even though qemu doesn't support relocation
for anything other than SPICE, and for no more than one graphics
device, there is no reason to hardcode a single graphics device into
this part of the code as well.