virDomainGetJobStats is able to report statistics of a completed
migration; however, to get usable downtime and total time statistics,
both hosts have to keep their time synchronized. To provide at least
some estimate of the times even when NTP daemons are not running on
both hosts, we can simply ignore the time needed to transfer the
migration cookie to the destination host. The result will still be
inaccurate, but a bit more predictable: the actual total/down time
will be at least what we report.
https://bugzilla.redhat.com/show_bug.cgi?id=1213434
Every place that grabs a domain object to work on should reference it
to make sure it won't disappear in the meantime.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This is basically turning qemuDomObjEndAPI into a more general
function. Other drivers that get a reference to domain objects may
benefit from this function too.
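A minimal sketch of what such a generalized "end API" helper does; the
type and names below are illustrative stand-ins, not libvirt's actual
implementation:

#include <stdlib.h>

typedef struct {
    int refs;     /* reference count taken at lookup time */
    int locked;   /* object lock held while the API works on it */
} DomainObj;

/* Drop the lock and the reference acquired when the domain object was
 * looked up, then clear the caller's pointer so the (possibly freed)
 * object cannot be touched again.  (Illustrative sketch only.) */
static void
domainObjEndAPI(DomainObj **vm)
{
    if (!*vm)
        return;
    (*vm)->locked = 0;
    if (--(*vm)->refs == 0)
        free(*vm);
    *vm = NULL;
}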
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Just as b8e25c35d7 did, fall back to the ACPI method in
qemuDomainReboot() when the guest agent is unresponsive.
Signed-off-by: YueWenyuan <yuewenyuan@huawei.com>
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Because packets going through the egress from a bridge (where our
bandwidth limiting takes place) have no information about which
interface they came from, the QoS rules that we create instead
use the source MAC address of the packets to make their decisions
about which QDisc the packet should be in.
One flaw in this is that when a guest changed the MAC address it
used, packets from the guest would no longer be put into the
correct QDisc, but would instead be put in an "unprivileged"
class, resulting in the bandwidth "floor" (minimum guaranteed)
being no longer honored.
Now that libvirt has infrastructure to capture and respond to
RX_FILTER_CHANGE events from qemu (sent whenever a guest
interface modifies its MAC address, among other things), we can
notice when a guest MAC address changes, and update the QoS rules
accordingly, so that bandwidth floor is honored even after a
guest MAC address change.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
qemuDomainSetMemoryFlags() allowed setting the initial memory to a value
greater than the <maxMemory> field. Since memory hotplug requires NUMA
to be enabled and the qemuDomainSetMemoryFlags() API does not work on
NUMA guests, such a configuration would not actually work, so this only
fixes a corner case.
The fix is still worthwhile, though, as the bug allowed inducing an
invalid configuration and making the VM vanish on libvirt restart.
Additionally, this tweaks the error message to be more accurate.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
A further fix for:
https://bugzilla.redhat.com/show_bug.cgi?id=1113474
Since there is no possibility that any type of macvtap will work if
the parent physdev it's attached to is offline, we should bring the
physdev online at the same time as the macvtap. When taking the
macvtap offline, it's also necessary to take the physdev offline for
macvtap passthrough mode (because the physdev has the same MAC address
as the macvtap device, so could potentially cause problems with
misdirected packets during migration, as outlined in commits 829770
and 879c13). We can't set the physdev offline for the other macvtap
modes because 1) there may be other macvtap devices attached to the same
physdev (and/or the host itself may be using the device) in those modes,
whereas passthrough mode is exclusive to one macvtap at a time, and
2) there's no practical reason to do so anyway.
- Remove all qemu emulators
- Restart libvirtd
- Install qemu emulators
- Call 'virsh version' -> errors
The only thing that will force the qemu driver to refresh its cached
capabilities info is an explicit API call to GetCapabilities.
However, in the case when the initial caps lookup at driver connect
didn't find a single qemu emulator to poll, the driver is effectively
useless and really can't do anything until it has populated some qemu
capabilities info.
With the above steps, the user would have to either know about the
magic refresh capabilities call, or restart libvirtd to pick up the
changes.
Instead, this patch changes things so that every time a part of the
driver requests access to capabilities info, we check whether we have
previously seen any emulators. If not, force a refresh.
In the case of 'still no emulators found', this is still very quick, so
I can't think of a downside.
https://bugzilla.redhat.com/show_bug.cgi?id=1000116
This needs to be specified in way too many places for a simple
validation check. The ostype/arch/virttype validation checks later in
DomainDefParseXML should catch most of the cases that this was covering.
This revealed that GuestDefaultEmulator was a bit buggy, capable
of returning an emulator that didn't match the passed domain type. Fix
up the test suite input so it continues to pass.
https://bugzilla.redhat.com/show_bug.cgi?id=1209948
So we have this bug. The virConnectGetDomainCapabilities() API
performs a couple of checks before it produces any result. One of
the checks is whether the architecture requested by the user can be
run by the (also user-provided) binary. However, the check is pretty
dumb: it merely compares whether the binary's default architecture
matches the one provided by the user. But a qemu binary can run
multiple architectures. For instance, qemu-system-ppc64 can run
ppc, ppcle, ppc64, ppc64le and ppcemb. The default is ppc64, so
if the user requested something else, like ppc64le, the check
failed without any obvious reason.
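A minimal sketch of the more permissive check; the types and names are
illustrative, not libvirt's actual code:

#include <stdbool.h>
#include <stddef.h>

typedef enum { ARCH_PPC, ARCH_PPCLE, ARCH_PPC64, ARCH_PPC64LE, ARCH_PPCEMB } Arch;

typedef struct {
    Arch defaultArch;     /* e.g. ARCH_PPC64 for qemu-system-ppc64 */
    const Arch *arches;   /* every architecture the binary can emulate */
    size_t narches;
} EmulatorCaps;

/* Accept any architecture the binary supports, not only its default. */
static bool
emulatorCanRunArch(const EmulatorCaps *caps, Arch arch)
{
    size_t i;

    if (caps->defaultArch == arch)
        return true;
    for (i = 0; i < caps->narches; i++)
        if (caps->arches[i] == arch)
            return true;
    return false;
}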
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When a qemu domain is rebooted from the outside, at the libvirt
level it looks like a regular shutdown. To really restart the
domain, libvirt needs to issue a reset command on the monitor once
the SHUTDOWN event appears. So, in order to differentiate a bare
shutdown from a reboot, libvirt uses a variable within the domain
private data called fakeReboot. When the reboot API is called, the
variable is set, but when the shutdown API is called it must be
cleared out. That was not done for every possible case. So if the
user called virDomainReboot() while no ACPI daemon was running
inside the guest (so the guest didn't initiate the shutdown
sequence), and then virDomainShutdown(mode=agent) was called, a bad
thing happened: we remembered the fakeReboot and, instead of
shutting the domain down, we just rebooted it.
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Among all the monitor APIs, some were checking whether mon is NULL and
some were not. Since it is possible for mon to be NULL, for example when
a second call is attempted after the monitor has already been entered,
every single API has to check for the monitor.
This patch adds a macro that helps check the state of the monitor, and
either refactors existing checking code to use the macro or adds the
check where it was missing.
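A minimal sketch of such a guard macro; the name and the error
reporting below are placeholders, not libvirt's actual implementation:

#include <stdio.h>

#define CHECK_MONITOR(mon)                                        \
    do {                                                          \
        if (!(mon)) {                                             \
            fprintf(stderr, "domain is not running or the "       \
                            "monitor is not available\n");        \
            return -1;                                            \
        }                                                         \
    } while (0)

/* Every monitor API then starts with the same guard instead of an
 * ad-hoc, sometimes missing, NULL check.  (Illustrative sketch.) */
static int
monitorQuerySomething(void *mon)
{
    CHECK_MONITOR(mon);
    /* ... issue the actual monitor command here ... */
    return 0;
}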
Rather than erroring out, make a best-effort attempt to retrieve the
other data if disks are inaccessible or missing. The failure will still
be logged, though.
Since the bulk stats API is called on multiple domains, an error like
this makes the API unusable. This regression was introduced by commit
596a137134
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1209394
Commit f6563bc3 introduced an HMP implementation of the function (so
that a different, uglier function could be removed). Before the HMP code
is called, though, there is a leftover check that the monitor is JSON,
which inhibits the code from working.
Change the prototype to not have "int *index", since we'll soon be
disallowing 'index' as a name. Curiously, the original commit (a4504ac)
for the function used 'int idx' inside the function, so they didn't
match. Now they do.
It is there even with -nodefaults and -no-user-config, so account for
that so we can start sparc domains.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
1. 'last_good_net' indicates the index of the last successfully
configured net, so def->nets[last_good_net] should also be cleaned up if
an error occurs.
2. If an error occurs in 'virNetDevMacVLanVPortProfileRegisterCallback'
(the second 'goto err_exit' in the loop), we should also do the
'virNetDevVPortProfileDisassociate' cleanup for the
'virNetDevVPortProfileAssociate' (the first code block in the loop). So
we should consider the net successfully configured once the first code
block in the loop finishes.
Signed-off-by: Huanle Han <hanxueluo@gmail.com>
After setting memory parameters for a running domain, the change needs
to be saved to the live XML, otherwise it will disappear after a
libvirtd restart.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1211548
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Apparently for Xen-devel 'index' is a global and causes a build failure,
so just use the shortened 'idx' instead to avoid the conflict.
Signed-off-by: John Ferlan <jferlan@redhat.com>
QEMU does not abandon the mirror. The job carries on in the synchronised
phase and it might be either pivoted again or cancelled. The commit
hints that the described behavior was happening in a downstream version.
If the command returns false there are two possible options:
1) qemu did not reach the point where it would ask the block job to
pivot
2) pivoting failed in the actual qemu coroutine
If either of those happens, we return failure and reset the condition
that waits for the block job to complete. This makes the API fail, but
in the case where qemu would actually abandon the mirror, that fact is
notified via the event and handled asynchronously.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1202704
qemuDomainBlockJobImpl became an unmaintainable mess over the years of
adding new stuff to it. This patch starts splitting individual functions
out of it until it can be killed entirely.
In bulk this will add lines of code rather than delete them, but that is
traded for maintainability.
My intention is to split qemuMonitorJSONBlockJob() into simpler separate
functions for every block job type. Since the error handling code is the
same for all block jobs, this patch extracts the code into a separate
function that will later be reused in more places.
With the new helper qemuMonitorJSONErrorIsClass we can save a few
function calls as we can extract the error object once.
Split out the function that checks the actual error class string into a
separate helper, as it will be useful later, and refactor
qemuMonitorJSONHasError to return a bool and drop a few useless checks.
Basically, virJSONValueObjectHasKey is useless here since the next call
to virJSONValueObjectGet checks the return value again (and it can't
fail at that point). By removing the first check we save a function
call.
Previously we checked that the vcpu we are trying to set is within the
range of the number of threads presented by qemu. The problem is that if
the VM is offline the count is 0. Since the condition subtracted 1 from
the (unsigned) count, the number would wrap around and the check would
never trigger.
Change the condition to more sensible ones with specific error
messages.
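A small self-contained illustration of the wrap-around; the variable
names are illustrative, not the actual ones:

#include <stdio.h>

int main(void)
{
    unsigned int ncpupids = 0;  /* offline domain: no vcpu threads reported */
    unsigned int vcpu = 2;      /* vcpu the caller wants to modify */

    /* old-style check: ncpupids - 1 wraps around to UINT_MAX, so the
     * error branch below can never be taken */
    if (vcpu > ncpupids - 1)
        printf("never reached\n");

    /* more sensible checks with specific error messages */
    if (ncpupids == 0)
        printf("error: cannot determine vcpu threads, domain is not running\n");
    else if (vcpu >= ncpupids)
        printf("error: vcpu %u is out of range of live vcpu count %u\n",
               vcpu, ncpupids);
    return 0;
}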
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1208434
This patch adds checks for empty bitmaps right after the calls to
virBitmapParse. These only cover spots where 'set' APIs are called and
where the domain's XML is parsed.
Also, it partially reverts commit 983f5a, which added a check for the
invalid nodeset "0,^0" into the virBitmapParse function. That change
broke the logic, as an empty bitmap should not cause an error there.
https://bugzilla.redhat.com/show_bug.cgi?id=1210545
On arm, we probe for virtio-*-pci devices, but use their
virtio-*-device variants.
Set the capabilities based on the -device variants as well,
to make them work with qemu binaries that have the PCI devices compiled
out.
When pre-creating storage for domains, we need to find the corresponding
disk in the XML on the destination (the domain XML may differ there,
e.g. the disk may be accessible under a different path). For better
debugging, I'm printing all the info I received about a disk. But there
was a typo when printing the disk capacity: "%lluu" instead of "%llu".
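For the record, a trivial standalone illustration of the difference:

#include <stdio.h>

int main(void)
{
    unsigned long long capacity = 8589934592ULL;

    printf("capacity=%lluu\n", capacity);  /* typo: prints a stray trailing 'u' */
    printf("capacity=%llu\n", capacity);   /* fixed */
    return 0;
}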
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The problem with the previous implementation is that
even when qemuMigrationUpdateJobStatus() detects a migration job
has completed, it still sleeps for 50 ms, which is unnecessary
and only adds to the VM pause time.
Signed-off-by: Xing Lin <xinglin@cs.utah.edu>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Future IOThread setting patches would copy the code anyway, so create
and generalize the adding of pindef for the vcpu and the pinning of the
thread into their own APIs.
We support VNC for containers in order to have the same
interface as with VMs. At the moment it just renders
the Linux text console.
Of course we don't pass through any physical devices and
don't emulate virtual devices. Our VNC server
renders text from the terminal master and sends
input events from the VNC client to the terminal.
So add a special video type, VIR_DOMAIN_VIDEO_TYPE_PARALLELS,
for these pseudo-devices.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Future IOThread setting patches would copy the code anyway, so create
and generalize the deletion of the cgroup and pindef for the vcpu into
its own API.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Future IOThread setting patches would copy the code anyway, so create
and generalize adding the vcpu to a cgroup into its own API.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Support for drive-reopen was never present in the upstream code so we
don't need to pause the VM when doing the block pivot. Kill all the
code related to this semi-upstream artifact.
131,088 bytes in 16 blocks are definitely lost in loss record 2,174 of 2,176
at 0x4C29BFD: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
by 0x4C2BACB: realloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
by 0x52A026F: virReallocN (viralloc.c:245)
by 0x52BFCB5: saferead_lim (virfile.c:1268)
by 0x52C00EF: virFileReadLimFD (virfile.c:1328)
by 0x52C019A: virFileReadAll (virfile.c:1351)
by 0x52A5D4F: virCgroupGetValueStr (vircgroup.c:763)
by 0x1DDA0DA3: qemuRestoreCgroupState (qemu_cgroup.c:805)
by 0x1DDA0DA3: qemuConnectCgroup (qemu_cgroup.c:857)
by 0x1DDB7BA1: qemuProcessReconnect (qemu_process.c:3694)
by 0x52FD171: virThreadHelper (virthread.c:206)
by 0x82B8DF4: start_thread (pthread_create.c:308)
by 0x85C31AC: clone (clone.S:113)
Signed-off-by: Luyao Huang <lhuang@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1198645
Once upon a time, there was a little domain. And the domain was pinned
onto a NUMA node and hadn't fully allocated its memory:
<memory unit='KiB'>2355200</memory>
<currentMemory unit='KiB'>1048576</currentMemory>
<numatune>
<memory mode='strict' nodeset='0'/>
</numatune>
Oh little me, said the domain, what will I do with so little memory.
If I only had a few megabytes more. But the old admin noticed the
whimpering, barely audible to the untrained human ear. And a good admin
he was, so he gave the domain yet more memory. But the old NUMA topology
witch forbade allocating more memory on node zero. So he
decided to allocate it on a different node:
virsh # numatune little_domain --nodeset 0-1
virsh # setmem little_domain 2355200
The little domain was happy. For a while. Until bad, sharp teeth
shaped creature came. Every process in the system was afraid of him.
The OOM Killer they called him. Oh no, he's after the little domain.
There's no escape.
Do you kids know why? Because when the little domain was born, her
father, Libvirt, called numa_set_membind(). So even though the admin
allowed her to allocate memory from other nodes in the cgroups, the
membind() forbade it.
So what's the lesson? Libvirt should rely on cgroups whenever
possible and use numa_set_membind() only as a last-ditch effort.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Currently we check qemuCaps before starting the block job. But qemuCaps
isn't available on a stopped domain, which means we get a misleading
error message in this case:
# virsh domstate example
shut off
# virsh blockjob example vda
error: unsupported configuration: block jobs not supported with this QEMU binary
Move the qemuCaps check into the block job so that we are guaranteed the
domain is running.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
qemuMigrationCookieAddNBD is usually called from within an async
MIGRATION_OUT or MIGRATION_IN job, so it needs to start a nested job.
(The one exception is during the Begin phase when change protection
isn't enabled, but qemuDomainObjEnterMonitorAsync will behave the same
as qemuDomainObjEnterMonitor in this case.)
This bug was encountered with a libvirt client that repeatedly queries
the disk mirroring block job info during a migration. If one of these
queries occurs just as the Perform migration cookie is baked, libvirt
crashes.
Relevant logs are as follows:
6701: warning : qemuDomainObjEnterMonitorInternal:1544 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous
[1] 6701: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block","id":"libvirt-629"}
[2] 6699: info : qemuMonitorIOWrite:503 : QEMU_MONITOR_IO_WRITE: mon=0x7fefdc004700 buf={"execute":"query-block","id":"libvirt-629"}
[3] 6704: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block-jobs","id":"libvirt-630"}
[4] 6699: info : qemuMonitorJSONIOProcessLine:203 : QEMU_MONITOR_RECV_REPLY: mon=0x7fefdc004700 reply={"return": [...], "id": "libvirt-629"}
6699: error : qemuMonitorJSONIOProcessLine:211 : internal error: Unexpected JSON reply '{"return": [...], "id": "libvirt-629"}'
At [1] qemuMonitorBlockStatsUpdateCapacity sends its request, then waits
on mon->notify. At [2] the request is written out to the monitor socket.
At [3] qemuMonitorBlockJobInfo sends its request, and also waits on
mon->notify. The reply from the first request is received at [4].
However, qemuMonitorJSONIOProcessLine is not expecting this reply since
the second request hadn't completed sending. The reply is dropped and an
error is returned.
qemuMonitorIO signals mon->notify twice during its error handling,
waking up both of the threads waiting on it. One of them clears mon->msg
as it exits qemuMonitorSend; the other crashes:
qemuMonitorSend (mon=0x7fefdc004700, msg=<value optimized out>) at qemu/qemu_monitor.c:975
975 while (!mon->msg->finished) {
(gdb) print mon->msg
$1 = (qemuMonitorMessagePtr) 0x0
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
If a VM migration is aborted, a disk mirror may be failed by QEMU before
libvirt has a chance to cancel it. The disk->mirrorState remains at
_ABORT in this case, and this breaks subsequent mirrorings of that disk.
We should instead check the mirrorState directly and transition to _NONE
if it is already aborted. Do the check *after* aborting the block job in
QEMU to avoid a race.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
If virCloseCallbacksSet fails, qemuMigrationBegin must return NULL to
indicate an error occurred.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
The destination libvirt daemon in a migration may segfault if the client
disconnects immediately after the migration has begun:
# virsh -c qemu+tls://remote/system list --all
Id Name State
----------------------------------------------------
...
# timeout --signal KILL 1 \
virsh migrate example qemu+tls://remote/system \
--verbose --compressed --live --auto-converge \
--abort-on-error --unsafe --persistent \
--undefinesource --copy-storage-all --xml example.xml
Killed
# virsh -c qemu+tls://remote/system list --all
error: failed to connect to the hypervisor
error: unable to connect to server at 'remote:16514': Connection refused
The crash is in:
1531 void
1532 qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
1533 {
1534 qemuDomainObjPrivatePtr priv = obj->privateData;
1535 qemuDomainJob job = priv->job.active;
1536
1537 priv->jobs_queued--;
Backtrace:
#0 at qemuDomainObjEndJob at qemu/qemu_domain.c:1537
#1 in qemuDomainRemoveInactive at qemu/qemu_domain.c:2497
#2 in qemuProcessAutoDestroy at qemu/qemu_process.c:5646
#3 in virCloseCallbacksRun at util/virclosecallbacks.c:350
#4 in qemuConnectClose at qemu/qemu_driver.c:1154
...
qemuDomainRemoveInactive calls virDomainObjListRemove, which in this
case is holding the last remaining reference to the domain.
qemuDomainRemoveInactive then calls qemuDomainObjEndJob, but the domain
object has been freed and poisoned by then.
This patch bumps the domain's refcount until qemuDomainRemoveInactive
has completed. We also ensure qemuProcessAutoDestroy does not return the
domain to virCloseCallbacksRun to be unlocked in this case. There is
similar logic in bhyveProcessAutoDestroy and lxcProcessAutoDestroy
(which call virDomainObjListRemove directly).
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
==19015== 968 (416 direct, 552 indirect) bytes in 1 blocks are definitely lost in loss record 999 of 1,049
==19015== at 0x4C2C070: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19015== by 0x52ADF14: virAllocVar (viralloc.c:560)
==19015== by 0x5302FD1: virObjectNew (virobject.c:193)
==19015== by 0x1DD9401E: virQEMUDriverConfigNew (qemu_conf.c:164)
==19015== by 0x1DDDF65D: qemuStateInitialize (qemu_driver.c:666)
==19015== by 0x53E0823: virStateInitialize (libvirt.c:777)
==19015== by 0x11E067: daemonRunStateInit (libvirtd.c:905)
==19015== by 0x53201AD: virThreadHelper (virthread.c:206)
==19015== by 0xA1EE1F2: start_thread (in /lib64/libpthread-2.19.so)
==19015== by 0xA4EFC8C: clone (in /lib64/libc-2.19.so)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
==19015== 1,064 (656 direct, 408 indirect) bytes in 2 blocks are definitely lost in loss record 1,002 of 1,049
==19015== at 0x4C2C070: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19015== by 0x52AD74B: virAlloc (viralloc.c:144)
==19015== by 0x52B47CA: virCgroupNew (vircgroup.c:1057)
==19015== by 0x52B53E5: virCgroupNewVcpu (vircgroup.c:1451)
==19015== by 0x1DD85A40: qemuSetupCgroupForVcpu (qemu_cgroup.c:1013)
==19015== by 0x1DDA66EA: qemuProcessStart (qemu_process.c:4844)
==19015== by 0x1DDF1807: qemuDomainObjStart (qemu_driver.c:7265)
==19015== by 0x1DDF1A66: qemuDomainCreateWithFlags (qemu_driver.c:7320)
==19015== by 0x1DDF1ACD: qemuDomainCreate (qemu_driver.c:7337)
==19015== by 0x53F87EA: virDomainCreate (libvirt-domain.c:6820)
==19015== by 0x12690A: remoteDispatchDomainCreate (remote_dispatch.h:3481)
==19015== by 0x126827: remoteDispatchDomainCreateHelper (remote_dispatch.h:3457)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Instead of always using controller 0 and incrementing port number,
respect the maximum port numbers of controllers and use all of them.
Ports for virtio consoles are quietly reserved, but not formatted
(neither in XML nor on QEMU command line).
Also rejects duplicate virtio-serial addresses.
https://bugzilla.redhat.com/show_bug.cgi?id=890606
https://bugzilla.redhat.com/show_bug.cgi?id=1076708
Test changes:
* virtio-auto.args
Filling out the port when just the controller is specified:
switched from using 'maxport + 1' to the first free port on the
controller.
* virtio-autoassign.args
Filling out the address when no <address> is specified.
Started using all the controllers instead of only controller 0; the bus
value is now discarded.
* xml -> xml output of virtio-auto
The port assignment is no longer done as a part of XML parsing,
so the unspecified values stay 0.
https://bugzilla.redhat.com/show_bug.cgi?id=1206479
As described in the virDomainBlockCopy() parameter documentation, the
VIR_DOMAIN_BLOCK_COPY_GRANULARITY parameter may require the value to
have some specific attributes (e.g. be a power of two or fall within a
certain range), and in qemu a power of two is required. However, our
code does not check that and lets the qemu operation fail. Moreover, the
virsh man page is not as exact as it could be in this respect.
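A minimal sketch of the missing validation; illustrative only, not
libvirt's actual code:

#include <stdbool.h>

/* A power of two has exactly one bit set, so x & (x - 1) == 0. */
static bool
granularityIsPowerOfTwo(unsigned long long granularity)
{
    return granularity != 0 && (granularity & (granularity - 1)) == 0;
}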
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When we shut down or reboot a guest using agent mode, if the guest
itself blocks indefinitely, libvirt would block in qemuAgentShutdown()
forever.
Thus, set a timeout for shutdown/reboot; from our experience, 60 seconds
is fine.
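A minimal sketch of such a bounded wait using a generic POSIX
condition-variable pattern; the names and structure are illustrative,
and only the 60 second value comes from this commit:

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t agent_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  agent_cond = PTHREAD_COND_INITIALIZER;
static int agent_replied;

static int
waitForAgentReply(void)
{
    struct timespec deadline;
    int rc = 0;

    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += 60;    /* shutdown/reboot timeout from the commit */

    pthread_mutex_lock(&agent_lock);
    while (!agent_replied && rc != ETIMEDOUT)
        rc = pthread_cond_timedwait(&agent_cond, &agent_lock, &deadline);
    pthread_mutex_unlock(&agent_lock);

    if (!agent_replied) {
        fprintf(stderr, "guest agent did not reply within 60 seconds\n");
        return -1;
    }
    return 0;
}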
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
virDomainHasDiskMirror() currently detects only jobs that add the mirror
elements. Since some operations, like migration, are interlocked by
existing block jobs on the given domain, the check needs to be extended
to cover regular block jobs too.
This patch renames virDomainHasDiskMirror to virDomainHasDiskBlockjob
and adds an argument that selects whether it returns true only for
block copy jobs, as those interlock making the domain persistent.
The other two uses trigger on any block job type.
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
If any disk of a VM was involved in a (copy) block job, we refused to do
a snapshot. As copy jobs are not the only jobs that interlock snapshots,
and the interlocking applies to individual disks only, we can make the
check per disk and interlock all block job types supported by libvirt.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1203628
In the order of appearance:
* MAX_LISTEN - never used
added by 23ad665c (qemud) and addec57 (lock daemon)
* NEXT_FREE_CLASS_ID - never used, added by 07d1b6b
* virLockError - never used, added by eb8268a4
* OPENVZ_MAX_ARG, CMDBUF_LEN, CMDOP_LEN
unused since the removal of ADD_ARG_LIT in d8b31306
* QEMU_NB_PER_CPU_STAT_PARAM - unused since 897808e
* QEMU_CMD_PROMPT, QEMU_PASSWD_PROMPT - unused since 1dc10a7
* TEST_MODEL_WORDSIZE - unused since c25c18f7
* TEMPDIR - never used, added by 714bef5
* NSIG - workaround for old headers
added by commit 60ed1d2
unused since virExec was moved by commit 02e8691
* DO_TEST_PARSE - never used, added by 9afa006
* DIFF_MSEC, GETTIMEOFDAY - unused since eee6eb6
Two places would call qemuPrepareCpumap() with priv->autoNodeset to
convert it to a cpuset. Remove the function and use the already prepared
cpuset instead.
When the default cpuset or automatic numa placement is used, libvirt
would place the whole parent cgroup in the specified cpuset. This then
made it impossible to re-pin the vcpus to a different CPU.
This patch pins only the vcpu threads to the default cpuset and thus
allows re-pinning them later.
The following config would fail to start:
<domain type='kvm'>
...
<vcpu placement='static' cpuset='0-1' current='2'>4</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='2-3'/>
...
This is a regression since a39f69d2b.
When the synchronous pivot option is selected, libvirt would not update
the backing chain until the job was exited. Some applications then
received invalid data, as their job serialized first.
This patch removes the polling that waited for the ABORT/PIVOT job
completion and replaces it with a condition. If a synchronous operation
is requested, the update of the XML is executed in the job of the caller
of the synchronous request. Otherwise the monitor event callback uses a
separate worker to update the backing chain with a new job.
This is a regression since 1a92c71910
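A minimal sketch of the polling-to-condition change using a generic
pthread pattern; the structure and names are illustrative, not libvirt's
actual fields:

#include <pthread.h>
#include <stdbool.h>

struct blockjob_sync {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    bool completed;
};

/* API thread: instead of polling the job state in a sleep loop, block
 * until the event handler signals completion, then update the backing
 * chain in the caller's own job. */
static void
waitForBlockJobCompletion(struct blockjob_sync *job)
{
    pthread_mutex_lock(&job->lock);
    while (!job->completed)
        pthread_cond_wait(&job->cond, &job->lock);
    pthread_mutex_unlock(&job->lock);
}

/* Monitor event callback: mark the job finished and wake the waiter
 * (with no synchronous waiter, a separate worker does the update). */
static void
blockJobEventArrived(struct blockjob_sync *job)
{
    pthread_mutex_lock(&job->lock);
    job->completed = true;
    pthread_cond_signal(&job->cond);
    pthread_mutex_unlock(&job->lock);
}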
When the ABORT job is finished synchronously you get the following call
stack:
#0 qemuBlockJobEventProcess
#1 qemuDomainBlockJobImpl
#2 qemuDomainBlockJobAbort
#3 virDomainBlockJobAbort
While previously or while using the _ASYNC flag you'd get:
#0 qemuBlockJobEventProcess
#1 processBlockJobEvent
#2 qemuProcessEventHandler
#3 virThreadPoolWorker
Later on I'll be adding a condition that will allow synchronising a
SYNC block job abort. The approach will require this code to be called
from two different places, so it has to be extracted into a helper.
Commit 1a92c719 moved the code that handles block job events to a
different function that is executed in a separate thread. The caller of
processBlockJob handles locking and unlocking of @vm, so we should
not do it in the function itself.
The block copy API takes the speed in bytes/s rather than the MiB/s
used by the older virDomainBlockRebase approach. We correctly converted
the speed to bytes/s in the old API, but we still called the common
helper virDomainBlockCopyCommon with the unadjusted variable.
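A trivial standalone illustration of the conversion that has to reach
the common helper:

#include <stdio.h>

int main(void)
{
    unsigned long bandwidthMiB = 10;              /* old API unit: MiB/s */
    unsigned long long bandwidthBytes =
        (unsigned long long)bandwidthMiB << 20;   /* block copy unit: bytes/s */

    printf("%lu MiB/s = %llu bytes/s\n", bandwidthMiB, bandwidthBytes);
    return 0;
}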
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1207122
When getting info on NUMA parameters for a domain,
virCgroupGetCpusetMems() may be called. However, as of 43b67f2e
the call is guarded by a check whether the memory controller is present.
Even though it may not be instantly obvious, NUMA parameters are
stored under the cpuset controller. Therefore the check needs to look
like this:
if (!virCgroupHasController(priv->cgroup,
VIR_CGROUP_CONTROLLER_CPUSET) ||
virCgroupGetCpusetMems(priv->cgroup, &nodeset) < 0) {
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Blockcopy to a non-file destination is not supported according to the
code, but a 'goto endjob' is missing after the destination check.
This leads to calling drive-mirror with wrong parameters.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1206406
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Starting a qemu VM with a memory module that has the base address
specified results in the following error:
error: internal error: early end of file from monitor: possible problem:
2015-03-26T03:45:52.338891Z qemu-kvm: -device pc-dimm,node=0,memdev=memdimm0,
id=dimm0,slot=0,base=4294967296: Property '.base' not found
The correct property name for the base address is 'addr'.
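With the fix, the generated property uses 'addr', so the device string
from the error above becomes:
-device pc-dimm,node=0,memdev=memdimm0,id=dimm0,slot=0,addr=4294967296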
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Because of the microcode update to Haswell/Broadwell CPUs, existing
domains using these CPUs may fail to start even though they used to run
just fine. To help users solve this issue we try to suggest switching to
the -noTSX variant of the CPU model:
virsh # start cd
error: Failed to start domain cd
error: unsupported configuration: guest and host CPU are not
compatible: Host CPU does not provide required features: rtm, hle;
try using 'Haswell-noTSX' CPU model
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Use the virStorageSourceIsEmpty helper to determine whether the drive
source is empty rather than checking src->path. This fixes the start of
a VM with an empty network cdrom, which previously would not report any
error.