Use query-cpus-fast instead of query-cpus if supported by QEMU.
Based on the QEMU_CAPS_QUERY_CPUS_FAST capability.
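For illustration only, the fast variant takes no arguments and avoids
interrupting the vCPUs to read guest register state; the reply fields
shown here are a sketch and vary with the QEMU version:

  { "execute": "query-cpus-fast" }
  { "return": [ { "cpu-index": 0, "thread-id": 12345,
                  "qom-path": "/machine/unattached/device[0]" } ] }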
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
This function is indeed getting -device properties and not
-object properties. The current name is misleading.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Since the monitor code no longer needs to see this enum, we move it
to the place where migration parameters are defined and drop the
"monitor" reference from the name.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We want to have all migration capabilities parsing and formatting at one
place, i.e., in qemu_migration_params.c. The parsing is already there in
qemuMigrationCapsCheck.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We want to have all migration parameters parsing and formatting at one
place, i.e., in qemu_migration_params.c.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We want to have all migration parameters parsing and formatting in one
place, i.e., in qemu_migration_params.c.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Originally QEMU provided query-migrate-cache-size and
migrate-set-cache-size QMP commands for querying/setting XBZRLE cache
size. In version 2.11 QEMU added support for XBZRLE cache size to the
general migration parameters commands.
This patch adds support for this parameter to libvirt to make sure it is
properly restored to its original value after a failed or aborted
migration.
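For illustration, the cache size (in bytes) can then be set together
with any other parameters in a single call; this sketch assumes QEMU
2.11 or newer:

  { "execute": "migrate-set-parameters",
    "arguments": { "xbzrle-cache-size": 67108864 } }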
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Our current monitor API forces the caller to call
migrate-set-capabilities QMP command for each capability separately,
which is quite suboptimal. Let's add a new API for setting all
capabilities at once.
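The QMP command already accepts a list of capabilities, so a single
call along these lines (capability names purely illustrative) is
sufficient:

  { "execute": "migrate-set-capabilities",
    "arguments": { "capabilities": [
      { "capability": "xbzrle", "state": true },
      { "capability": "auto-converge", "state": false } ] } }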
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The last use of qemuMonitorMigrateToCommand was removed years back in
commit 2e90c9daf9
Author: Daniel P. Berrange <berrange@redhat.com>
Date: Fri Nov 6 16:50:26 2015 +0000
qemu: assume support for all migration protocols except rdma
Prior to that commit, 'exec:' was used to replicate the 'unix:' protocol
by spawning 'nc'.
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Rather than trying to prevent stealing of the 'actions' virJSONValue
into the monitor command replace the code so that it does the same
thing, since 'actions' was actually not really used after calling the
monitor.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
The JSON array was processed into the hash table used by the query APIs
in the monitor code. Move this code to a new helper in qemu_qapi.c.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
There is a long standing hack to pass a virConnectPtr into the
qemuMonitorStartCPUs method, so that when the text monitor prompts
for a disk password, we can look up virSecretPtr objects. This causes
us to have to pass a virConnectPtr around through countless methods
up the call chain... except some places don't have any virConnectPtr
available, so they have always just passed NULL. We can finally fix this
disastrous design by using virGetConnectSecret() to open a connection
to the secret driver at time of use.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Add the query-dump APIs in order to allow the progress of
dump-guest-memory to be monitored. This will use the dump stats
extraction helper to fill a return buffer.
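For reference, a query-dump exchange looks roughly like this (values
illustrative):

  { "execute": "query-dump" }
  { "return": { "status": "active",
                "completed": 1024000, "total": 2048000 } }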
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
The event will be fired when the domain memory-only dump completes.
Fill in a return buffer to store/pass along the dump statistics that
will be eventually shared by a query-dump command. Also pass along
the status of the filling and any possible error received.
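A sketch of the event payload we expect to parse (values illustrative;
a failed dump is assumed to carry an additional error string):

  { "event": "DUMP_COMPLETED",
    "data": { "result": { "status": "completed",
                          "completed": 2048000, "total": 2048000 } },
    "timestamp": { "seconds": 1516211397, "microseconds": 144399 } }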
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1536461
This reverts commit aeda1b8c56.
The problem is that we need mon->lastError to be set because it's
used all over the place. Also, there's nothing wrong with
reporting an error if one occurred: if there's a thread
executing an API which is currently talking on the monitor, it
definitely wants the error reported.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
We read from QEMU until seeing a \r\n pair to indicate a completed reply
or event. To avoid memory denial-of-service though, we must have a size
limit on amount of data we buffer. 10 MB is large enough that it ought
to cope with normal QEMU replies, and small enough that we're not
consuming an unreasonable amount of memory.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The PROBE macro used in qemuMonitorIOProcess and the VIR_DEBUG message
in qemuMonitorJSONIOProcess create a lot of logging churn when debug
logging is enabled during monitor communication.
The messages logged from the PROBE macro are rather useless since they
are reporting the partial state of receiving the reply from qemu. The
actual full reply is still logged in qemuMonitorJSONIOProcessLine once
the full message is received.
This patch passes the event error up to the place where we can
use it. The error is passed only for the sync blockjob event mode
as we can't use the error in async mode. In async mode we
just pass the event details to the client through the event API,
but the current blockjob event API cannot carry an extra parameter.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Right-aligning backslashes when defining macros or using complex
commands in Makefiles looks cute, but as soon as any change is
required to the code you end up with either distractingly broken
alignment or unnecessarily big diffs where most of the changes
are just pushing all backslashes a few characters to one side.
Generated using
  $ git grep -El '[[:blank:]][[:blank:]]\\$' | \
      grep -E '*\.([chx]|am|mk)$$' | \
      while read f; do \
        sed -Ei 's/[[:blank:]]*[[:blank:]]\\$/ \\/g' "$f"; \
      done
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
We handle incremental storage migration in a different way. The support
for this new (as of QEMU 2.10) parameter is only needed for full
coverage of migration parameters used by QEMU.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
We already support several ways of setting migration bandwidth and this
is not adding another one. With this patch we are able to read and write
this parameter using query-migrate-parameters and migrate-set-parameters
in one call with all other parameters.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The parameters used "migrate" prefix which is pretty redundant and
qemuMonitorMigrationParams structure is our internal representation of
QEMU migration parameters and it is supposed to use names which match
QEMU names.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
We already support setting the maximum downtime with a dedicated
virDomainMigrateSetMaxDowntime API. This patch does not implement
another way of setting the downtime by adding a new public migration
parameter. It just makes sure any parameter we are able to get from a
QEMU monitor by query-migrate-parameters can be passed back to QEMU via
migrate-set-parameters.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The check can be easily replaced with a simple test in the JSON
implementation and we don't need to update it every time a new parameter
is added.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
This new capability enables a pause before device state serialization so
that we can finish all block jobs without racing with the end of the
migration. The pause is indicated by "pre-switchover" state. Once we're
done QEMU enters "device" migration state.
This patch just defines the new capability and QEMU migration states and
their mapping to our job states.
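Assuming the QEMU capability is called "pause-before-switchover",
enabling it would look like this; query-migrate then reports the
"pre-switchover" and "device" states mentioned above:

  { "execute": "migrate-set-capabilities",
    "arguments": { "capabilities": [
      { "capability": "pause-before-switchover", "state": true } ] } }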
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The only remaining user of qemuMonitorGetMigrationCapability is our test
suite. Let's replace qemuMonitorGetMigrationCapability with
qemuMonitorGetMigrationCapabilities there and drop the unused function.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
When migration fails, QEMU may provide a description of the error in
the reply to query-migrate QMP command. We can fetch this error and use
it instead of the generic "unexpectedly failed" message.
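A rough sketch of the reply we can now make use of, assuming the error
is reported in an "error-desc" member (the message is illustrative):

  { "execute": "query-migrate" }
  { "return": { "status": "failed",
                "error-desc": "Unable to write to socket: Broken pipe" } }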
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
query-cpu-definitions QMP command returns a list of unavailable features
which prevent CPU models from being usable on the current host. So far
we only checked whether the list was empty to mark CPU models as
(un)usable. This patch parses all unavailable features for each CPU
model and stores them in virDomainCapsCPUModel as a list of usability
blockers.
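An illustrative entry from the query-cpu-definitions reply; every
string in "unavailable-features" becomes a usability blocker:

  { "name": "Skylake-Server",
    "unavailable-features": [ "mpx", "clwb" ] }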
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1447169
Since a domain can have at most one watchdog, this simplifies things a
bit. However, since we must be able to set the watchdog action as
well, a new monitor command needs to be used.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Seeing a log message saying 'flags=93' is ambiguous & confusing unless
you happen to know that libvirt always prints flags as hex. Change our
debug messages so that they always add a '0x' prefix when printing flags,
and '0' prefix when printing mode. A few other misc places gain a '0x'
prefix in error messages too.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Allow getting the raw data from query-blockstats, so that we can use it
to detect the backing chain later on.
Reviewed-by: Eric Blake <eblake@redhat.com>
vcpu properties gathered from query-hotpluggable-cpus need to be passed
back to qemu. As qemu did not use the node-id property until now and
libvirt forgot to pass it back properly (it was parsed but not passed
around), we did not honor it.
This patch adds node-id to the structures where it was missing and
passes it around as necessary.
The test data was generated with a VM with following config:
<numa>
<cell id='0' cpus='0,2,4,6' memory='512000' unit='KiB'/>
<cell id='1' cpus='1,3,5,7' memory='512000' unit='KiB'/>
</numa>
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1452053
Add a comment for mon->watch to make clear what's the purpose of this
value.
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
QEMU will likely report the details of its shutting down, particularly
whether the shutdown was initiated by the guest or the host. We should
forward that information along, at least for shutdown events. Reset
has that as well; however, that is not a lifecycle event and would add
extra constants that might not be used. It can be added later on.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1384007
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
If a shutdown is expected because it was triggered via libvirt we can
also expect the monitor to close. In those cases do not report an
internal error like:
"internal error: End of file from qemu monitor"
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Implement qemuMonitorRegister() as there is already a
qemuMonitorUnregister() function. This way it may be easier to
understand the code paths.
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
There were multiple race conditions that could lead to segmentation
faults. The first precondition for this is that qemuProcessLaunch must
fail sometime shortly after starting the new QEMU process. The second
precondition for the segmentation faults is that the new QEMU process
dies - or, to be more precise, the QEMU monitor has to be closed
irregularly. If both happen during qemuProcessStart (starting a
domain) there are race windows between the thread with the event
loop (T1) and the thread that is starting the domain (T2).
First segmentation fault scenario:
If qemuProcessLaunch fails during qemuProcessStart the code branches
to the 'stop' path where 'qemuMonitorSetDomainLog(priv->mon, NULL,
NULL, NULL)' will set the log function of the monitor to NULL (done in
T2). In the meantime the event loop of T1 will wake up with an EOF
event for the QEMU monitor because the QEMU process has died. The
crash occurs if T1 has checked 'mon->logFunc != NULL' in qemuMonitorIO
just before the logFunc was set to NULL by T2. If this situation
occurs T1 will try to call mon->logFunc which leads to the
segmentation fault.
Solution:
Require the monitor lock for setting the log function.
Backtrace:
0 0x0000000000000000 in ?? ()
1 0x000003ffe9e45316 in qemuMonitorIO (watch=<optimized out>,
fd=<optimized out>, events=<optimized out>, opaque=0x3ffe08aa860) at
../../src/qemu/qemu_monitor.c:727
2 0x000003fffda2e1a4 in virEventPollDispatchHandles (nfds=<optimized
out>, fds=0x2aa000fd980) at ../../src/util/vireventpoll.c:508
3 0x000003fffda2e398 in virEventPollRunOnce () at
../../src/util/vireventpoll.c:657
4 0x000003fffda2ca10 in virEventRunDefaultImpl () at
../../src/util/virevent.c:314
5 0x000003fffdba9366 in virNetDaemonRun (dmn=0x2aa000cc550) at
../../src/rpc/virnetdaemon.c:818
6 0x000002aa00024668 in main (argc=<optimized out>, argv=<optimized
out>) at ../../daemon/libvirtd.c:1541
Second segmentation fault scenario:
If qemuProcessLaunch fails it will unref the log context and with
invoking qemuMonitorSetDomainLog(priv->mon, NULL, NULL, NULL)
qemuDomainLogContextFree() will be invoked. qemuDomainLogContextFree()
invokes virNetClientClose() to close the client and cleans everything
up (including unref of _virLogManager.client) when virNetClientClose()
returns. When T1 is now trying to report 'qemu unexpectedly closed the
monitor' libvirtd will crash because the client has already been
freed.
Solution:
As the critical section in qemuMonitorIO is protected with the monitor
lock we can use the same solution as proposed for the first
segmentation fault.
Backtrace:
0 virClassIsDerivedFrom (klass=0x3100979797979797,
parent=0x2aa000d92f0) at ../../src/util/virobject.c:169
1 0x000003fffda659e6 in virObjectIsClass (anyobj=<optimized out>,
klass=<optimized out>) at ../../src/util/virobject.c:365
2 0x000003fffda65a24 in virObjectLock (anyobj=0x3ffe08c1db0) at
../../src/util/virobject.c:317
3 0x000003fffdba4688 in
virNetClientIOEventLoop (client=client@entry=0x3ffe08c1db0,
thiscall=thiscall@entry=0x2aa000fbfa0) at
../../src/rpc/virnetclient.c:1668
4 0x000003fffdba4b4c in
virNetClientIO (client=client@entry=0x3ffe08c1db0,
thiscall=0x2aa000fbfa0) at ../../src/rpc/virnetclient.c:1944
5 0x000003fffdba4d42 in
virNetClientSendInternal (client=client@entry=0x3ffe08c1db0,
msg=msg@entry=0x2aa000cc710, expectReply=expectReply@entry=true,
nonBlock=nonBlock@entry=false) at ../../src/rpc/virnetclient.c:2116
6 0x000003fffdba6268 in
virNetClientSendWithReply (client=0x3ffe08c1db0, msg=0x2aa000cc710) at
../../src/rpc/virnetclient.c:2144
7 0x000003fffdba6e8e in virNetClientProgramCall (prog=0x3ffe08c1120,
client=<optimized out>, serial=<optimized out>, proc=<optimized out>,
noutfds=<optimized out>, outfds=0x0, ninfds=0x0, infds=0x0,
args_filter=0x3fffdb64440
<xdr_virLogManagerProtocolDomainReadLogFileArgs>, args=0x3ffffffe010,
ret_filter=0x3fffdb644c0
<xdr_virLogManagerProtocolDomainReadLogFileRet>, ret=0x3ffffffe008) at
../../src/rpc/virnetclientprogram.c:329
8 0x000003fffdb64042 in
virLogManagerDomainReadLogFile (mgr=<optimized out>, path=<optimized
out>, inode=<optimized out>, offset=<optimized out>, maxlen=<optimized
out>, flags=0) at ../../src/logging/log_manager.c:272
9 0x000003ffe9e0315c in qemuDomainLogContextRead (ctxt=0x3ffe08c2980,
msg=0x3ffffffe1c0) at ../../src/qemu/qemu_domain.c:4422
10 0x000003ffe9e280a8 in qemuProcessReadLog (logCtxt=<optimized out>,
msg=msg@entry=0x3ffffffe288) at ../../src/qemu/qemu_process.c:1800
11 0x000003ffe9e28206 in qemuProcessReportLogError (logCtxt=<optimized
out>, msgprefix=0x3ffe9ec276a "qemu unexpectedly closed the monitor")
at ../../src/qemu/qemu_process.c:1836
12 0x000003ffe9e28306 in
qemuProcessMonitorReportLogError (mon=mon@entry=0x3ffe085cf10,
msg=<optimized out>, opaque=<optimized out>) at
../../src/qemu/qemu_process.c:1856
13 0x000003ffe9e452b6 in qemuMonitorIO (watch=<optimized out>,
fd=<optimized out>, events=<optimized out>, opaque=0x3ffe085cf10) at
../../src/qemu/qemu_monitor.c:726
14 0x000003fffda2e1a4 in virEventPollDispatchHandles (nfds=<optimized
out>, fds=0x2aa000fd980) at ../../src/util/vireventpoll.c:508
15 0x000003fffda2e398 in virEventPollRunOnce () at
../../src/util/vireventpoll.c:657
16 0x000003fffda2ca10 in virEventRunDefaultImpl () at
../../src/util/virevent.c:314
17 0x000003fffdba9366 in virNetDaemonRun (dmn=0x2aa000cc550) at
../../src/rpc/virnetdaemon.c:818
18 0x000002aa00024668 in main (argc=<optimized out>, argv=<optimized
out>) at ../../daemon/libvirtd.c:1541
Other code paths where the same problem could occur are fixed as
well (qemuMigrationFinish, qemuProcessStart, and
qemuDomainSaveImageStartVM).
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reported-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
If calling query-cpu-model-expansion on the 'host'/'max' CPU model with
'migratable' property set to false succeeds, we know QEMU is able to
tell us which features would disable migration. Thus we can mark all
enabled features as migratable.
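A sketch of the probe, using the argument layout we assume for the
'host' model (exact syntax may differ between targets):

  { "execute": "query-cpu-model-expansion",
    "arguments": { "type": "full",
                   "model": { "name": "host",
                              "props": { "migratable": false } } } }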
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
QEMU is able to tell us whether a CPU feature would block migration or
not. This patch adds support for storing such features in
qemuMonitorCPUModelInfo.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The hyperv panic notifier reports additional data in the form of 5
registers that are reported in the crash event from qemu. Log them into
the VM log file and report them as a warning so that admins can see the
cause of the crash of their Windows VMs.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1426176
For certain kinds of panic notifiers (notably hyper-v) qemu is able to
report some data regarding the crash passed from the guest.
Make the data accessible to the callback in qemu so that it can be
processed further.
To allow matching the node names gathered via 'query-named-block-nodes'
we need to query and then use the top level nodes from 'query-block'.
Add the data to the structure returned by qemuMonitorGetBlockInfo.
Add monitor tooling for calling query-named-block-nodes. The monitor
code returns the data as the raw JSON array received from QEMU.
Unfortunately the logic to extract the node names for a complete backing
chain will be so complex that I won't be able to extract any meaningful
subset of the data in the monitor code.
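For reference, the command takes no arguments and the returned array is
kept as-is; node names and fields below are illustrative:

  { "execute": "query-named-block-nodes" }
  { "return": [ { "node-name": "node-a", "drv": "qcow2",
                  "file": "/var/lib/libvirt/images/a.qcow2" } ] }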