This patch adds checks for empty bitmaps right after calls to
virBitmapParse. These only cover spots where set APIs are called and
where the domain's XML is parsed.
Also, it partially reverts commit 983f5a, which added a check for the
invalid nodeset "0,^0" to the virBitmapParse function. That change broke
the logic, as an empty bitmap by itself should not cause an error.
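A minimal sketch of the added call-site check (illustrative only; the
exact call sites, error messages and the virBitmapParse signature of
that era may differ):

  virBitmapPtr bitmap = NULL;

  if (virBitmapParse(nodeset, 0, &bitmap, VIR_DOMAIN_CPUMASK_LEN) < 0)
      goto cleanup;

  /* reject empty bitmaps only where a set API or the domain XML
   * needs at least one bit set, not inside virBitmapParse itself */
  if (virBitmapIsAllClear(bitmap)) {
      virReportError(VIR_ERR_INVALID_ARG,
                     _("Invalid nodeset: '%s'"), nodeset);
      goto cleanup;
  }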
https://bugzilla.redhat.com/show_bug.cgi?id=1210545
On arm, we probe for virtio-*-pci devices, but use their
virtio-*-device variants.
Set the capabilities based on the -device variants as well, so that
they work with QEMU binaries that have the PCI devices compiled out.
When pre-creating storage for domains, we need to find the corresponding
disk in the XML on the destination (the domain XML may differ there, e.g.
a disk may be accessible under a different path). For better debugging,
I'm printing all the info I received on a disk. But there was a typo when
printing the disk capacity: "%lluu" instead of "%llu".
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The problem with the previous implementation is that
even when qemuMigrationUpdateJobStatus() detects a migration job
has completed, it still sleeps for 50 ms (which is unnecessary
and only adds to the VM pause time).
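The reshuffled wait loop then has roughly this shape (a simplified
sketch assuming the surrounding driver/vm/jobInfo variables; not the
verbatim libvirt code):

  while (true) {
      if (qemuMigrationUpdateJobStatus(driver, vm, job, asyncJob) < 0)
          break;

      if (jobInfo->type != VIR_DOMAIN_JOB_UNBOUNDED)
          break;            /* migration finished; skip the last sleep */

      /* sleep only while the migration is still in progress */
      virObjectUnlock(vm);
      usleep(50 * 1000);
      virObjectLock(vm);
  }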
Signed-off-by: Xing Lin <xinglin@cs.utah.edu>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Future IOThread setting patches would copy the code anyway, so extract
and generalize the addition of the pindef for the vcpu and the pinning
of the thread into their own APIs.
We support VNC for containers in order to have the same
interface as with VMs. At the moment it just renders the
Linux text console.
Of course we don't pass through any physical devices and
don't emulate virtual devices. Our VNC server renders text
from the terminal master and sends input events from the
VNC client to the terminal.
So add a special video type, VIR_DOMAIN_VIDEO_TYPE_PARALLELS,
for these pseudo-devices.
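In the domain XML such a pseudo-device would then look roughly like
this (illustrative snippet):

  <video>
    <model type='parallels'/>
  </video>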
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Future IOThread setting patches would copy the code anyway, so extract
and generalize the deletion of the cgroup and pindef for the vcpu into
its own API.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Future IOThread setting patches would copy the code anyway, so extract
and generalize the addition of the vcpu to a cgroup into its own API.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Support for drive-reopen was never present in the upstream code so we
don't need to pause the VM when doing the block pivot. Kill all the
code related to this semi-upstream artifact.
131,088 bytes in 16 blocks are definitely lost in loss record 2,174 of 2,176
at 0x4C29BFD: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
by 0x4C2BACB: realloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
by 0x52A026F: virReallocN (viralloc.c:245)
by 0x52BFCB5: saferead_lim (virfile.c:1268)
by 0x52C00EF: virFileReadLimFD (virfile.c:1328)
by 0x52C019A: virFileReadAll (virfile.c:1351)
by 0x52A5D4F: virCgroupGetValueStr (vircgroup.c:763)
by 0x1DDA0DA3: qemuRestoreCgroupState (qemu_cgroup.c:805)
by 0x1DDA0DA3: qemuConnectCgroup (qemu_cgroup.c:857)
by 0x1DDB7BA1: qemuProcessReconnect (qemu_process.c:3694)
by 0x52FD171: virThreadHelper (virthread.c:206)
by 0x82B8DF4: start_thread (pthread_create.c:308)
by 0x85C31AC: clone (clone.S:113)
Signed-off-by: Luyao Huang <lhuang@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1198645
Once upon a time, there was a little domain. And the domain was pinned
onto a NUMA node and hadn't fully allocated its memory:
  <memory unit='KiB'>2355200</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>
Oh little me, said the domain, what will I do with so little memory.
If only I had a few megabytes more. But the old admin noticed the
whimpering, barely audible to the untrained human ear. And good admin
he was, he gave the domain yet more memory. But the old NUMA topology
witch forbade allocating more memory on node zero. So he decided to
allocate it on a different node:
virsh # numatune little_domain --nodeset 0-1
virsh # setmem little_domain 2355200
The little domain was happy. For a while. Until a bad, sharp-toothed
creature came. Every process in the system was afraid of him.
The OOM Killer they called him. Oh no, he's after the little domain.
There's no escape.
Do you kids know why? Because when the little domain was born, her
father, Libvirt, called numa_set_membind(). So even if the admin
allowed her to allocate memory from other nodes in the cgroups, the
membind() forbade it.
So what's the lesson? Libvirt should rely on cgroups whenever
possible and use numa_set_membind() only as a last-ditch effort.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Currently we check qemuCaps before starting the block job. But qemuCaps
isn't available on a stopped domain, which means we get a misleading
error message in this case:
# virsh domstate example
shut off
# virsh blockjob example vda
error: unsupported configuration: block jobs not supported with this QEMU binary
Move the qemuCaps check into the block job so that we are guaranteed the
domain is running.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
qemuMigrationCookieAddNBD is usually called from within an async
MIGRATION_OUT or MIGRATION_IN job, so it needs to start a nested job.
(The one exception is during the Begin phase when change protection
isn't enabled, but qemuDomainObjEnterMonitorAsync will behave the same
as qemuDomainObjEnterMonitor in this case.)
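The shape of the fix is simply to enter the monitor through the
async-aware helper (a sketch assuming the usual driver/vm/priv
context; not the verbatim patch):

  /* ask for a nested job while inside an async MIGRATION_IN/OUT job;
   * with no async job set this behaves like qemuDomainObjEnterMonitor */
  if (qemuDomainObjEnterMonitorAsync(driver, vm, priv->job.asyncJob) < 0)
      return -1;

  /* ... issue the query-block commands ... */

  if (qemuDomainObjExitMonitor(driver, vm) < 0)
      return -1;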
This bug was encountered with a libvirt client that repeatedly queries
the disk mirroring block job info during a migration. If one of these
queries occurs just as the Perform migration cookie is baked, libvirt
crashes.
Relevant logs are as follows:
6701: warning : qemuDomainObjEnterMonitorInternal:1544 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous
[1] 6701: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block","id":"libvirt-629"}
[2] 6699: info : qemuMonitorIOWrite:503 : QEMU_MONITOR_IO_WRITE: mon=0x7fefdc004700 buf={"execute":"query-block","id":"libvirt-629"}
[3] 6704: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block-jobs","id":"libvirt-630"}
[4] 6699: info : qemuMonitorJSONIOProcessLine:203 : QEMU_MONITOR_RECV_REPLY: mon=0x7fefdc004700 reply={"return": [...], "id": "libvirt-629"}
6699: error : qemuMonitorJSONIOProcessLine:211 : internal error: Unexpected JSON reply '{"return": [...], "id": "libvirt-629"}'
At [1] qemuMonitorBlockStatsUpdateCapacity sends its request, then waits
on mon->notify. At [2] the request is written out to the monitor socket.
At [3] qemuMonitorBlockJobInfo sends its request, and also waits on
mon->notify. The reply from the first request is received at [4].
However, qemuMonitorJSONIOProcessLine is not expecting this reply since
the second request hadn't completed sending. The reply is dropped and an
error is returned.
qemuMonitorIO signals mon->notify twice during its error handling,
waking up both of the threads waiting on it. One of them clears mon->msg
as it exits qemuMonitorSend; the other crashes:
qemuMonitorSend (mon=0x7fefdc004700, msg=<value optimized out>) at qemu/qemu_monitor.c:975
975 while (!mon->msg->finished) {
(gdb) print mon->msg
$1 = (qemuMonitorMessagePtr) 0x0
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
If a VM migration is aborted, a disk mirror may be failed by QEMU before
libvirt has a chance to cancel it. The disk->mirrorState remains at
_ABORT in this case, and this breaks subsequent mirrorings of that disk.
We should instead check the mirrorState directly and transition to _NONE
if it is already aborted. Do the check *after* aborting the block job in
QEMU to avoid a race.
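In rough terms the change looks like this (sketch, not the verbatim
patch):

  /* ... abort the block job in QEMU first ... */

  /* only then look at the state: if QEMU already failed the mirror,
   * clear it so the disk can be mirrored again later */
  if (disk->mirrorState == VIR_DOMAIN_DISK_MIRROR_STATE_ABORT)
      disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_NONE;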
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
If virCloseCallbacksSet fails, qemuMigrationBegin must return NULL to
indicate an error occurred.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
The destination libvirt daemon in a migration may segfault if the client
disconnects immediately after the migration has begun:
# virsh -c qemu+tls://remote/system list --all
Id Name State
----------------------------------------------------
...
# timeout --signal KILL 1 \
virsh migrate example qemu+tls://remote/system \
--verbose --compressed --live --auto-converge \
--abort-on-error --unsafe --persistent \
--undefinesource --copy-storage-all --xml example.xml
Killed
# virsh -c qemu+tls://remote/system list --all
error: failed to connect to the hypervisor
error: unable to connect to server at 'remote:16514': Connection refused
The crash is in:
1531 void
1532 qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
1533 {
1534 qemuDomainObjPrivatePtr priv = obj->privateData;
1535 qemuDomainJob job = priv->job.active;
1536
1537 priv->jobs_queued--;
Backtrace:
#0 at qemuDomainObjEndJob at qemu/qemu_domain.c:1537
#1 in qemuDomainRemoveInactive at qemu/qemu_domain.c:2497
#2 in qemuProcessAutoDestroy at qemu/qemu_process.c:5646
#3 in virCloseCallbacksRun at util/virclosecallbacks.c:350
#4 in qemuConnectClose at qemu/qemu_driver.c:1154
...
qemuDomainRemoveInactive calls virDomainObjListRemove, which in this
case is holding the last remaining reference to the domain.
qemuDomainRemoveInactive then calls qemuDomainObjEndJob, but the domain
object has been freed and poisoned by then.
This patch bumps the domain's refcount until qemuDomainRemoveInactive
has completed. We also ensure qemuProcessAutoDestroy does not return the
domain to virCloseCallbacksRun to be unlocked in this case. There is
similar logic in bhyveProcessAutoDestroy and lxcProcessAutoDestroy
(which call virDomainObjListRemove directly).
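A heavily simplified sketch of the approach (illustrative; not the
exact patch):

  void
  qemuDomainRemoveInactive(virQEMUDriverPtr driver, virDomainObjPtr vm)
  {
      /* hold our own reference in case the list owned the last one */
      virObjectRef(vm);
      virDomainObjListRemove(driver->domains, vm);
      /* ... ending jobs etc. is still safe here ... */
      virObjectUnref(vm);
  }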
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
==19015== 968 (416 direct, 552 indirect) bytes in 1 blocks are definitely lost in loss record 999 of 1,049
==19015== at 0x4C2C070: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19015== by 0x52ADF14: virAllocVar (viralloc.c:560)
==19015== by 0x5302FD1: virObjectNew (virobject.c:193)
==19015== by 0x1DD9401E: virQEMUDriverConfigNew (qemu_conf.c:164)
==19015== by 0x1DDDF65D: qemuStateInitialize (qemu_driver.c:666)
==19015== by 0x53E0823: virStateInitialize (libvirt.c:777)
==19015== by 0x11E067: daemonRunStateInit (libvirtd.c:905)
==19015== by 0x53201AD: virThreadHelper (virthread.c:206)
==19015== by 0xA1EE1F2: start_thread (in /lib64/libpthread-2.19.so)
==19015== by 0xA4EFC8C: clone (in /lib64/libc-2.19.so)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
==19015== 1,064 (656 direct, 408 indirect) bytes in 2 blocks are definitely lost in loss record 1,002 of 1,049
==19015== at 0x4C2C070: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19015== by 0x52AD74B: virAlloc (viralloc.c:144)
==19015== by 0x52B47CA: virCgroupNew (vircgroup.c:1057)
==19015== by 0x52B53E5: virCgroupNewVcpu (vircgroup.c:1451)
==19015== by 0x1DD85A40: qemuSetupCgroupForVcpu (qemu_cgroup.c:1013)
==19015== by 0x1DDA66EA: qemuProcessStart (qemu_process.c:4844)
==19015== by 0x1DDF1807: qemuDomainObjStart (qemu_driver.c:7265)
==19015== by 0x1DDF1A66: qemuDomainCreateWithFlags (qemu_driver.c:7320)
==19015== by 0x1DDF1ACD: qemuDomainCreate (qemu_driver.c:7337)
==19015== by 0x53F87EA: virDomainCreate (libvirt-domain.c:6820)
==19015== by 0x12690A: remoteDispatchDomainCreate (remote_dispatch.h:3481)
==19015== by 0x126827: remoteDispatchDomainCreateHelper (remote_dispatch.h:3457)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Instead of always using controller 0 and incrementing the port number,
respect the maximum port numbers of controllers and use all of them.
Ports for virtio consoles are quietly reserved, but not formatted
(neither in the XML nor on the QEMU command line).
This also rejects duplicate virtio-serial addresses.
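For reference, an explicitly addressed virtio-serial device looks like
this in the domain XML (illustrative snippet; the channel name is made
up):

  <channel type='unix'>
    <target type='virtio' name='org.example.agent'/>
    <address type='virtio-serial' controller='1' bus='0' port='2'/>
  </channel>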
https://bugzilla.redhat.com/show_bug.cgi?id=890606
https://bugzilla.redhat.com/show_bug.cgi?id=1076708
Test changes:
* virtio-auto.args
Filling out the port when just the controller is specified.
Switched from using "maxport + 1" to the first free port on the
controller.
* virtio-autoassign.args
Filling out the address when no <address> is specified.
Started using all the controllers instead of 0, also discards
the bus value.
* xml -> xml output of virtio-auto
The port assignment is no longer done as a part of XML parsing,
so the unspecified values stay 0.
https://bugzilla.redhat.com/show_bug.cgi?id=1206479
As described in the virDomainBlockCopy() parameter description, the
VIR_DOMAIN_BLOCK_COPY_GRANULARITY parameter may require the value to
have some specific attributes (e.g. be a power of two or fall within a
certain range). And in qemu, a power of two is required. However, our
code does not check that and lets the qemu operation fail. Moreover,
the virsh man page is not as exact as it could be in this respect.
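The added check boils down to a power-of-two test (sketch; the
variable name and error message are assumed):

  /* a non-zero granularity has to be a power of 2 for qemu */
  if (granularity & (granularity - 1)) {
      virReportError(VIR_ERR_INVALID_ARG, "%s",
                     _("granularity must be power of 2"));
      goto cleanup;
  }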
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When we shut down/reboot a guest in agent mode, if the guest itself
blocks indefinitely, libvirt would block in qemuAgentShutdown() forever.
Thus, we set a timeout for shutdown/reboot; from our experience,
60 seconds should be fine.
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
virDomainHasDiskMirror() currently detects only jobs that add the mirror
elements. Since some operations, like migration, are interlocked by
existing block jobs on the given domain, the check needs to be
extended to cover regular block jobs too.
This patch renames virDomainHasDiskMirror to virDomainHasDiskBlockjob
and adds an argument that selects whether it returns true only for
block copy jobs, as those interlock with making the domain persistent.
The other two uses trigger on any block job type.
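The renamed helper then has roughly this prototype (assumed shape,
shown for illustration only):

  /* copy_only selects whether only block copy jobs count; the other
   * two callers pass false and trigger on any block job type */
  bool virDomainHasDiskBlockjob(virDomainObjPtr vm, bool copy_only);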
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
If any disk of a VM was involved in a (copy) block job, we refused to
do a snapshot. Since it is not only copy jobs that interlock with
snapshots, and the interlocking applies to individual disks only, we
can make the check per disk and interlock all block job types
supported by libvirt.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1203628
In the order of appearance:
* MAX_LISTEN - never used
added by 23ad665c (qemud) and addec57 (lock daemon)
* NEXT_FREE_CLASS_ID - never used, added by 07d1b6b
* virLockError - never used, added by eb8268a4
* OPENVZ_MAX_ARG, CMDBUF_LEN, CMDOP_LEN
unused since the removal of ADD_ARG_LIT in d8b31306
* QEMU_NB_PER_CPU_STAT_PARAM - unused since 897808e
* QEMU_CMD_PROMPT, QEMU_PASSWD_PROMPT - unused since 1dc10a7
* TEST_MODEL_WORDSIZE - unused since c25c18f7
* TEMPDIR - never used, added by 714bef5
* NSIG - workaround for old headers
added by commit 60ed1d2
unused since virExec was moved by commit 02e8691
* DO_TEST_PARSE - never used, added by 9afa006
* DIFF_MSEC, GETTIMEOFDAY - unused since eee6eb6
Two places would call qemuPrepareCpumap() with priv->autoNodeset to
convert it to a cpuset. Remove the function and use the prepared cpuset
automatically.
When the default cpuset or automatic numa placement is used, libvirt
would place the whole parent cgroup in the specified cpuset. This then
made it impossible to re-pin the vcpus to a different cpu.
This patch pins only the vcpu threads to the default cpuset and thus
allows re-pinning them later.
The following config would fail to start:
<domain type='kvm'>
  ...
  <vcpu placement='static' cpuset='0-1' current='2'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2-3'/>
  ...
This is a regression since a39f69d2b.
When the synchronous pivot option is selected, libvirt would not update
the backing chain until the job had exited. Some applications then
received invalid data as their job serialized first.
This patch removes polling to wait for the ABORT/PIVOT job completion
and replaces it with a condition. If a synchronous operation is
requested the update of the XML is executed in the job of the caller of
the synchronous request. Otherwise the monitor event callback uses a
separate worker to update the backing chain with a new job.
This is a regression since 1a92c71910
When the ABORT job is finished synchronously you get the following call
stack:
#0 qemuBlockJobEventProcess
#1 qemuDomainBlockJobImpl
#2 qemuDomainBlockJobAbort
#3 virDomainBlockJobAbort
While previously or while using the _ASYNC flag you'd get:
#0 qemuBlockJobEventProcess
#1 processBlockJobEvent
#2 qemuProcessEventHandler
#3 virThreadPoolWorker
Later on I'll be adding a condition that will allow synchronising a
SYNC block job abort. The approach will require this code to be called
from two different places, so it has to be extracted into a helper.
Commit 1a92c719 moved the code that handles block job events to a
different function that is executed in a separate thread. The caller of
processBlockJob handles locking and unlocking of @vm, so we should
not do it in the function itself.
The block copy API takes the speed in bytes/s rather than the MiB/s
used previously by virDomainBlockRebase. We correctly converted the
speed to bytes/s in the old API, but we still called the common helper
virDomainBlockCopyCommon with the unadjusted variable.
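The missing piece is just the MiB/s to bytes/s adjustment before
calling the common helper (sketch; variable names assumed):

  /* virDomainBlockRebase takes MiB/s, virDomainBlockCopyCommon
   * expects bytes/s */
  unsigned long long speed = bandwidth * 1024ULL * 1024ULL;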
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1207122
When getting info on NUMA parameters for a domain,
virCgroupGetCpusetMems() may be called. However, as of 43b67f2e
the call is guarded by a check whether the memory controller is
present. Even though it may not be instantly obvious, NUMA parameters
are stored under the cpuset controller. Therefore the check needs to
look like this:
  if (!virCgroupHasController(priv->cgroup,
                              VIR_CGROUP_CONTROLLER_CPUSET) ||
      virCgroupGetCpusetMems(priv->cgroup, &nodeset) < 0) {
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Blockcopy to a non-file destination is not supported according to the
code, but a 'goto endjob' is missing after the destination check.
This leads to calling drive-mirror with wrong parameters.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1206406
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>