As explained in the previous patch, numad balances the affinity
dynamically, so reflecting the cpuset returned by numad the first
time doesn't make much sense and may just cause confusion.
numad returns a list of NUMA nodes rather than a list of CPUs,
so this patch converts that node list into a cpumap before
setting the affinity. Otherwise the domain processes would be
pinned only to CPU[$numa_cell_num], which would cause significant
performance losses.
Also, because numad balances the affinity dynamically, reflecting
the cpuset from numad back doesn't make much sense and may just
confuse users. The better approach is not to reflect it back into
the XML, and in that case it is better to ignore the cpuset when
parsing the XML.
The code that updated the cpuset is removed by this patch as well;
a follow-up patch will ignore a manually specified "cpuset" when
"placement" is "auto" and will update the documentation accordingly.
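A minimal sketch of the conversion described above, using purely
illustrative data structures (the real code uses libvirt's capabilities
and bitmap helpers rather than the node_cpus[] table assumed here):

/* Turn the NUMA node list returned by numad into a CPU mask,
 * instead of mistaking the node ids for CPU ids. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_CPUS 64

/* node_cpus[node] is the set of host CPUs in that NUMA cell;
 * hard-coded here, derived from the capabilities data in libvirt. */
static bool node_cpus[2][MAX_CPUS] = {
    [0] = { [0] = true, [1] = true, [2] = true, [3] = true },
    [1] = { [4] = true, [5] = true, [6] = true, [7] = true },
};

static void
nodeListToCpumap(const int *nodes, size_t nnodes, bool *cpumap)
{
    for (size_t i = 0; i < nnodes; i++)
        for (size_t cpu = 0; cpu < MAX_CPUS; cpu++)
            if (node_cpus[nodes[i]][cpu])
                cpumap[cpu] = true;   /* allow every CPU in the cell */
}

int main(void)
{
    int nodes[] = { 0, 1 };           /* what numad returned */
    bool cpumap[MAX_CPUS] = { false };

    nodeListToCpumap(nodes, 2, cpumap);
    for (size_t cpu = 0; cpu < MAX_CPUS; cpu++)
        if (cpumap[cpu])
            printf("CPU %zu\n", cpu); /* CPUs the domain may run on */
    return 0;
}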
The linux-2.6.32 kernel headers do not yet define IFLA_VF_MAX and
related constants, which breaks building a new libvirt on old systems
like Debian Squeeze.
(I also have to add --without-macvtap --disable-werror --without-virtualport to
./configure to get it to compile.)
Signed-off-by: Philipp Hahn <hahn@univention.de>
This is based on recent developments in the patch checker; the goal
is to keep a list of pending patches needing review on the project
web site. The page template in git just holds a pointer to the web
page.
Although it should be harmless to write:
disk = disk = def->disks[i]
some not-so-wise compilers may complain about it.
Besides, the duplicated assignment is useless here.
On newer xend (v3.x and later) there is no state and domid reported
for inactive domains. When initially creating connections, this is
handled in various places by assigning domain->id = -1.
But once an instance has been running, the id is set to the current
domain id, and it does not change when the instance is shut down.
So when querying the domain info, the hypervisor driver, which is
asked first, will indicate that it cannot find the information; then
the xend driver is asked and sets the status to NOSTATE because it
checks for the -1 domain id.
Checking domain/status for 0 seems to be more reliable here.
One note: I am not sure whether domain->id should also be set back
to -1 whenever any sub-driver thinks the instance is no longer
running.
BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=746007
BugLink: http://bugs.launchpad.net/bugs/929626
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
This patch adds a netlink callback when migrating a VEPA-enabled
virtual machine. It fixes a bug where a VM would not request a port
association when it was cleared by lldpad.
This patch requires the latest git version of lldpad to work.
Signed-off-by: D. Herrendoerfer <d.herrendoerfer@herrendoerfer.name>
If dynamic_ownership is off and we are creating a file on NFS,
we force a chown. This will fail, as chown/chmod are not supported
on NFS. However, with dynamic_ownership disabled we are not required
to do any chown at all.
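A sketch of the intended behaviour, with illustrative names (the real
check sits in the qemu driver's file creation path):

#include <sys/types.h>
#include <unistd.h>
#include <stdbool.h>

/* Only attempt to change ownership when dynamic ownership management
 * was actually requested; otherwise leave the new file alone. */
static int
maybeChownNewFile(const char *path, uid_t uid, gid_t gid,
                  bool dynamic_ownership)
{
    if (!dynamic_ownership)
        return 0;                      /* nothing to do */

    if (chown(path, uid, gid) < 0)     /* can fail on NFS */
        return -1;
    return 0;
}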
* daemon/libvirtd-config.c (daemonConfigFree): fix memory leaks.
How to reproduce?
% make && make -C tests check TESTS=libvirtdconftest
% cd tests && valgrind -v --leak-check=full ./libvirtdconftest
actual result:
==11008== 185 bytes in 5 blocks are definitely lost in loss record 3 of 5
==11008== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==11008== by 0x39CF07F6E1: strdup (strdup.c:43)
==11008== by 0x406626: daemonConfigLoadOptions (libvirtd-config.c:438)
==11008== by 0x406800: daemonConfigLoadData (libvirtd-config.c:492)
==11008== by 0x403CCF: testCorrupt (libvirtdconftest.c:110)
==11008== by 0x404FAD: virtTestRun (testutils.c:145)
==11008== by 0x403A34: mymain (libvirtdconftest.c:219)
==11008== by 0x404687: virtTestMain (testutils.c:700)
==11008== by 0x39CF01ECDC: (below main) (libc-start.c:226)
==11008==
==11008== LEAK SUMMARY:
==11008== definitely lost: 185 bytes in 5 blocks
Signed-off-by: Alex Jia <ajia@redhat.com>
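The kind of fix involved, as a sketch (the field names below are
illustrative; the real ones live in daemon/libvirtd-config.h): every
string that daemonConfigLoadOptions strdup()s needs a matching free in
daemonConfigFree().

#include <stdlib.h>

struct daemonConfigSketch {
    char *host_uuid;
    char *unix_sock_group;
    char *log_filters;
    char *log_outputs;
};

static void
daemonConfigFreeSketch(struct daemonConfigSketch *conf)
{
    if (!conf)
        return;
    free(conf->host_uuid);        /* previously leaked */
    free(conf->unix_sock_group);
    free(conf->log_filters);
    free(conf->log_outputs);
    free(conf);
}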
In my testing, I was able to provoke an odd block pull failure:
$ virsh blockpull dom vda --bandwidth 10000
error: Requested operation is not valid: No active operation on device: drive-virtio-disk0
merely by using gdb to artificially delay the block job set-speed
call until after the pull had already finished. But in reality that
should be a success, since the pull finished before we had a chance
to set the speed. Furthermore, using a double job lock is not only
annoying, but
a bug in itself - if you do parallel virDomainBlockRebase, and hit
the race window just right, the first call grabs the VM job to start
a fast block job, then the second call grabs the VM job to start
a long-running job with unspecified speed, then the first call finally
regrabs the VM job and sets the speed, which ends up running the
second job under the speed from the first call. By consolidating
things into a single job, we avoid opening that race, as well as reduce
the time between starting the job and changing the speed, for less
likelihood of the speed change happening after block job completion
in the first place.
* src/qemu/qemu_monitor.h (BLOCK_JOB_CMD): Add new mode.
* src/qemu/qemu_driver.c (qemuDomainBlockRebase): Move secondary
job call...
(qemuDomainBlockJobImpl): ...here, for fewer locks.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONBlockJob): Change
return value on new internal mode.
Without the VIR_DOMAIN_BLOCK_JOB_ABORT_ASYNC flag, libvirt will internally
poll using qemu's "query-block-jobs" API and will not return until the
operation has been completed. API users are advised that this operation
is unbounded and further interaction with the domain during this period
may block. Future patches may refactor things to allow other queries in
parallel with this polling. For older qemu, we synthesize the cancellation
event, since qemu won't generate it.
The choice of polling duration copies from the code in qemu_migration.c.
Signed-off-by: Adam Litke <agl@us.ibm.com>
Cc: Stefan Hajnoczi <stefanha@gmail.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Probably in the noise, but this will let us scale more efficiently
as we learn to recognize even more qemu events.
* src/qemu/qemu_monitor_json.c (eventHandlers): Sort.
(qemuMonitorEventCompare): New helper function.
(qemuMonitorJSONIOProcessEvent): Optimize event lookup.
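A sketch of the lookup optimization: keep the handler table sorted by
event name and find entries with bsearch() instead of a linear scan.
The struct layout and handler names here are illustrative, not the real
qemu_monitor_json.c definitions.

#include <stdlib.h>
#include <string.h>

typedef struct {
    const char *type;
    void (*handler)(void);
} eventHandler;

static void handleShutdown(void) { }
static void handleStop(void) { }

/* Must stay sorted by 'type' for bsearch() to work. */
static const eventHandler eventHandlers[] = {
    { "SHUTDOWN", handleShutdown },
    { "STOP", handleStop },
};

static int
eventCompare(const void *key, const void *elt)
{
    return strcmp((const char *)key, ((const eventHandler *)elt)->type);
}

static const eventHandler *
lookupEvent(const char *type)
{
    return bsearch(type, eventHandlers,
                   sizeof(eventHandlers) / sizeof(eventHandlers[0]),
                   sizeof(eventHandlers[0]), eventCompare);
}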
Block job cancellation can take a while. Now that upstream qemu 1.1
has asynchronous block cancellation, we want to expose that to the user.
Therefore, the following updates are made to the virDomainBlockJob API:
A new block job event type VIR_DOMAIN_BLOCK_JOB_CANCELED is managed by
libvirt. Regardless of the flags used with virDomainBlockJobAbort, this
event will be raised: 1. when using synchronous block_job_cancel (the
event will be synthesized by libvirt), and 2. whenever it is received
from qemu (via asynchronous block-job-cancel). Note that the event
may be detected by libvirt even before the virDomainBlockJobAbort
completes (always true when it is synthesized, but also possible if
cancellation was fast).
A new extension flag VIR_DOMAIN_BLOCK_JOB_ABORT_ASYNC is added to the
virDomainBlockJobAbort API. When enabled, this function will allow
(but not require) asynchronous operation (i.e., it returns as soon as
possible, which might be before the job has actually been canceled).
When the API is used in this mode, it is the responsibility of the
caller to wait for a VIR_DOMAIN_BLOCK_JOB_CANCELED event or poll via
the virDomainGetBlockJobInfo API to check the cancellation status.
This patch also exposes the new flag through virsh, and makes virsh
slightly easier to use (--async implies --abort, and lack of any options
implies --info), although it leaves the qemu implementation for later
patches.
Signed-off-by: Adam Litke <agl@us.ibm.com>
Cc: Stefan Hajnoczi <stefanha@gmail.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
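A sketch of how a client might use the new flag, assuming the usual
libvirt client headers and omitting error reporting; a real client
could instead register for the VIR_DOMAIN_BLOCK_JOB_CANCELED event
rather than polling.

#include <libvirt/libvirt.h>
#include <unistd.h>

/* Request an asynchronous cancel, then poll until the job disappears. */
static int
cancelBlockJob(virDomainPtr dom, const char *disk)
{
    virDomainBlockJobInfo info;

    if (virDomainBlockJobAbort(dom, disk,
                               VIR_DOMAIN_BLOCK_JOB_ABORT_ASYNC) < 0)
        return -1;

    /* 1 means a job is still active on the disk; 0 means it is gone. */
    while (virDomainGetBlockJobInfo(dom, disk, &info, 0) == 1)
        usleep(50 * 1000);

    return 0;
}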
RHEL 6.2 was released with an early version of block jobs, which only
worked on the qed file format, where the commands were spelled with
underscore (contrary to QMP style), and where 'block_job_cancel' was
synchronous and did not trigger an event.
The upcoming qemu 1.1 release has fixed these shortcomings [1][2]:
the commands now work on multiple file types, are spelled with dash,
and 'block-job-cancel' is asynchronous and emits an event upon conclusion.
[1]qemu commit 370521a1d6f5537ea7271c119f3fbb7b0fa57063
[2]https://lists.gnu.org/archive/html/qemu-devel/2012-04/msg01248.html
This patch recognizes the new spellings, and fixes virDomainBlockRebase
to give a graceful error when talking to a too-old qemu on a partial
rebase attempt. Fixes for the new semantics will come later. This
patch also removes a bogus ATTRIBUTE_NONNULL mistakenly added in
commit 10ec36e2.
* src/qemu/qemu_capabilities.h (QEMU_CAPS_BLOCKJOB_SYNC)
(QEMU_CAPS_BLOCKJOB_ASYNC): New bits.
* src/qemu/qemu_capabilities.c (qemuCaps): Name them.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONCheckCommands): Set
them.
(qemuMonitorJSONBlockJob): Manage both command names.
(qemuMonitorJSONDiskSnapshot): Minor formatting fix.
* src/qemu/qemu_monitor.h (qemuMonitorBlockJob): Alter signature.
* src/qemu/qemu_monitor_json.h (qemuMonitorJSONBlockJob): Likewise.
* src/qemu/qemu_monitor.c (qemuMonitorBlockJob): Pass through
capability bit.
* src/qemu/qemu_driver.c (qemuDomainBlockJobImpl): Update callers.
The new safe console handling introduced a possibility of deadlocking
the qemu driver when a new console connection forcibly disconnects a
previous console stream that belongs to an already closed connection.
The virStreamFree function subsequently calls the virReleaseConnect
function, which tries to lock the driver while discarding the
connection, but the driver was already locked in qemuDomainOpenConsole.
Backtrace of the deadlocked thread:
0 0x00007f66e5aa7f14 in __lll_lock_wait () from /lib64/libpthread.so.0
1 0x00007f66e5aa3411 in _L_lock_500 () from /lib64/libpthread.so.0
2 0x00007f66e5aa322a in pthread_mutex_lock () from /lib64/libpthread.so.0
3 0x0000000000462bbd in qemudClose ()
4 0x00007f66e6e178eb in virReleaseConnect () from /usr/lib64/libvirt.so.0
5 0x00007f66e6e19c8c in virUnrefStream () from /usr/lib64/libvirt.so.0
6 0x00007f66e6e3d1de in virStreamFree () from /usr/lib64/libvirt.so.0
7 0x00007f66e6e09a5d in virConsoleHashEntryFree () from /usr/lib64/libvirt.so.0
8 0x00007f66e6db7282 in virHashRemoveEntry () from /usr/lib64/libvirt.so.0
9 0x00007f66e6e09c4e in virConsoleOpen () from /usr/lib64/libvirt.so.0
10 0x00000000004526e9 in qemuDomainOpenConsole ()
11 0x00007f66e6e421f1 in virDomainOpenConsole () from /usr/lib64/libvirt.so.0
12 0x00000000004361e4 in remoteDispatchDomainOpenConsoleHelper ()
13 0x00007f66e6e80375 in virNetServerProgramDispatch () from /usr/lib64/libvirt.so.0
14 0x00007f66e6e7ae11 in virNetServerHandleJob () from /usr/lib64/libvirt.so.0
15 0x00007f66e6da897d in virThreadPoolWorker () from /usr/lib64/libvirt.so.0
16 0x00007f66e6da7ff6 in virThreadHelper () from /usr/lib64/libvirt.so.0
17 0x00007f66e5aa0c5c in start_thread () from /lib64/libpthread.so.0
18 0x00007f66e57e7fcd in clone () from /lib64/libc.so.6
* src/qemu/qemu_driver.c: qemuDomainOpenConsole()
-- unlock the qemu driver right after acquiring the domain
object
In case an API fails with "cannot acquire state change lock", searching
for the API that possibly forgot to end its job is not always easy.
Let's keep track of the job owner and print it out for easier
identification.
As reported by Daniel Berrangé, we have a huge performance regression
for virDomainGetInfo() due to the change which makes virDomainEndJob()
save the XML status file every time it is called. Prior to that
change, 2000 calls to virDomainGetInfo() took ~2.5 seconds. After that
change, 2000 calls to virDomainGetInfo() take 2 *minutes* 45 secs.
We made the change to be able to recover from libvirtd restart in the
middle of a job. However, only destroy and async jobs are taken care of.
Thus it makes more sense to only save domain state XML when these jobs
are started/stopped.
I noticed these compiler warnings when building for the s390 architecture.
* src/node_device/node_device_udev.c (udevDeviceMonitorStartup):
Mark unused variable.
* src/nodeinfo.c (linuxNodeInfoCPUPopulate): Avoid unused variable.
* src/qemu/qemu_command.c: Wire up -bios with <loader>
* tests/qemuxml2argvdata/qemuxml2argv-bios.args,
tests/qemuxml2argvdata/qemuxml2argv-bios.xml: Expand
existing BIOS test case to cover <loader>
This patch cleans up variables used to store boolean command flags
that are queried via vshCommandOptBool, switching them to the bool
data type instead of an integer.
Additionally this patch cleans up flag variables that are inferred from
existing flags.
The documentation for the flag doesn't clearly state that the flag
only enhances the output, and that the user still needs to specify
other flags to list the inactive domains whose output this flag
enhances.
The code below failed to compile on a 32-bit machine with this error:
typewrappers.c: In function 'libvirt_intUnwrap':
typewrappers.c:135:5: error: logical 'and' of mutually exclusive tests is always false [-Werror=logical-op]
cc1: all warnings being treated as errors
The patch fixes this error.
The daemon-conf test script continues to be very fragile to
changes in libvirt. It currently fails roughly one time in three
or four due to race conditions in startup/shutdown of the test
script. Replace it with a proper test case tailored to the code
being tested.
* tests/Makefile.am: Remove daemon-conf, add libvirtdconftest
* tests/daemon-conf: Delete obsolete test
* tests/libvirtdconftest.c: Test config file handling
Rename existing daemonConfigLoad API to daemonConfigLoadFile and
add an alternative daemonConfigLoadData
* daemon/libvirtd-config.c, daemon/libvirtd-config.h: Add
daemonConfigLoadData and rename daemonConfigLoad to
daemonConfigLoadFile
* daemon/libvirtd.c: Update for renamed API
To enable creation of unit tests, split the libvirtd config file
loading code out into separate files.
* daemon/libvirtd.c: Delete config loading code / structs
* daemon/libvirtd-config.c, daemon/libvirtd-config.h: Config
file loading APIs
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The leak was introduced in commit 0436d32. If we allocate an actions
array but fail early enough that the qemu monitor transaction call
never consumes it, we leak memory.
But our semantics of making the transaction command free the caller's
memory is awkward; avoiding the memory leak requires making every
intermediate function in the call chain check for error. It is much
easier to fix things so that the function that allocates also frees,
while the call chain leaves the caller's data intact. To do that,
I had to hack our JSON data structure to make it easy to protect a
portion of an arbitrary JSON tree from being freed.
* src/util/json.h (virJSONType): Name the enum.
(_virJSONValue): New field.
* src/util/json.c (virJSONValueFree): Use it to protect a portion
of an array.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONTransaction): Avoid
freeing caller's data.
* src/qemu/qemu_driver.c (qemuDomainSnapshotCreateDiskActive):
Free actions array on failure.
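A sketch of the idea; the exact member name in _virJSONValue is an
assumption here, but the mechanism is simply a flag that the free
function honours.

#include <stdbool.h>
#include <stdlib.h>

typedef struct _jsonValue jsonValue;
struct _jsonValue {
    bool protect;          /* new field: freeing a parent skips this node */
    size_t nchildren;
    jsonValue **children;
};

static void
jsonValueFree(jsonValue *value)
{
    if (!value || value->protect)
        return;                     /* the caller keeps ownership */
    for (size_t i = 0; i < value->nchildren; i++)
        jsonValueFree(value->children[i]);
    free(value->children);
    free(value);
}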
We can tell qemuDomainSnapshotFSThaw whether we want it to report
errors or not. However, if we don't, and an error has already been
set by a previous qemuReportError(), we must keep a copy of that
error, not just a pointer to it. Otherwise it gets overwritten if
FSThaw reports an error.
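A sketch of the distinction using libvirt's public error helpers:
virGetLastError() only returns a pointer into thread-local storage,
which the next reported error overwrites, while virSaveLastError()
hands back a copy that the caller owns (the thaw call itself is
elided here).

#include <libvirt/virterror.h>
#include <stdio.h>

static void
reportFirstError(void)
{
    virErrorPtr saved = virSaveLastError();  /* deep copy, caller owns it */

    /* ... call the "quiet" thaw here; it may overwrite the last error ... */

    if (saved && saved->code != VIR_ERR_OK && saved->message)
        fprintf(stderr, "original error: %s\n", saved->message);
    virFreeError(saved);
}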
When building on Fedora 17 (which uses gcc 4.7.0) with -O0 in CFLAGS,
three of the tests failed to compile.
cputest.c and qemuxml2argvtest.c had non-static structs defined
inside the macro that was being repeatedly invoked. Due to some so-far
unidentified change in gcc, the stack space used by variables defined
inside { } is not recovered/re-used when the block ends, so all these
structs have become additive (this is the same problem worked around
in commit cf57d345b). Fortunately, these two files could be fixed with
a single line addition of "static" to the struct definition in the
macro.
virnettlscontexttest.c was a bit different, though. The problem structs
in the do/while loop of macros had non-constant initializers, so it
took a bit more work and piecemeal initialization instead of member
initialization to get things to be happy.
In an ideal world, none of these changes should be necessary, but not
knowing how long it will be until the gcc regressions are fixed, and
since the code is just as correct after this patch as before, it makes
sense to fix libvirt's build for -O0 while also reporting the gcc
problem.
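An illustrative reduction of the fix (not the actual test macros): the
struct defined inside the repeatedly expanded macro gains a "static",
so each expansion no longer consumes its own slice of stack at -O0.

#include <stdio.h>

struct testInfo {
    const char *name;
    int flags;
};

#define DO_TEST(nm, fl)                                                 \
    do {                                                                \
        static struct testInfo info = { nm, fl }; /* was non-static */  \
        printf("running %s (%d)\n", info.name, info.flags);            \
    } while (0)

int main(void)
{
    DO_TEST("disk", 1);
    DO_TEST("network", 2);
    return 0;
}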
This patch resolves https://bugzilla.redhat.com/show_bug.cgi?id=810100
rpm builds for i686 were failing with a segfault in
networkxml2argvtest. Running under valgrind showed that a region of
memory was being referenced after it had been freed (as the result of
realloc - see the valgrind report in the BZ).
The problem (in replaceTokens() - added in commit 22ec60, meaning this
bug was in 0.9.10 and 0.9.11) was that the pointers token_start and
token_end were being computed based on the value of *buf, then *buf
was being realloc'ed (potentially moving it), then token_start and
token_end were used without recomputing them to account for movement
of *buf.
The solution is to change the code so that token_start and token_end
are offsets into *buf rather than pointers. This way there is only a
single pointer to the buffer, and nothing needs readjusting after a
realloc. (You may note that some uses of token_start/token_end didn't
need to be changed to add in "*buf +" - that's because there ended up
being a +*buf and -*buf which canceled each other out).
DV gets the credit for finding this bug and pointing out the valgrind
report.
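A reduced illustration of the fix (an illustrative function, not the
real replaceTokens() code): track token_start/token_end as offsets so
the realloc() that may move *buf cannot leave dangling pointers.

#include <stdlib.h>
#include <string.h>

static int
replaceRange(char **buf, size_t token_start, size_t token_end,
             const char *replacement)
{
    size_t oldlen = strlen(*buf);
    size_t toklen = token_end - token_start;
    size_t replen = strlen(replacement);
    char *tmp;

    if (replen < toklen)
        return -1;                 /* for brevity, handle growth only */

    tmp = realloc(*buf, oldlen - toklen + replen + 1);
    if (!tmp)
        return -1;
    *buf = tmp;                    /* the buffer may have moved ... */

    /* ... but the offsets are still valid, unlike saved pointers */
    memmove(*buf + token_start + replen, *buf + token_end,
            oldlen - token_end + 1);
    memcpy(*buf + token_start, replacement, replen);
    return 0;
}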
Detected by valgrind. The leaks were introduced in commit b22eaa7.
* src/conf/domain_conf.c (virDomainDiskDefParseXML): fix memory leaks.
How to reproduce?
% make && make -C tests check TESTS=qemuxml2argvtest
% cd tests && valgrind -v --leak-check=full ./qemuxml2argvtest
actual result:
==2143== 12 bytes in 2 blocks are definitely lost in loss record 74 of 179
==2143== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==2143== by 0x39D90A67DD: xmlStrndup (xmlstring.c:45)
==2143== by 0x4F5EC0: virDomainDiskDefParseXML (domain_conf.c:3438)
==2143== by 0x502F00: virDomainDefParseXML (domain_conf.c:8304)
==2143== by 0x505FE3: virDomainDefParseNode (domain_conf.c:9080)
==2143== by 0x5069AE: virDomainDefParse (domain_conf.c:9030)
==2143== by 0x41CBF4: testCompareXMLToArgvHelper (qemuxml2argvtest.c:105)
==2143== by 0x41E5DD: virtTestRun (testutils.c:145)
==2143== by 0x416FA3: mymain (qemuxml2argvtest.c:399)
==2143== by 0x41DCB7: virtTestMain (testutils.c:700)
==2143== by 0x39CF01ECDC: (below main) (libc-start.c:226)
Signed-off-by: Alex Jia <ajia@redhat.com>
* configure.ac: Set WITH_SYSCTL only on Linux hosts
* daemon/Makefile.am: Conditionalize install-sysctl using WITH_SYSCTL
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Cc: Jason Helfman <jhelfman@e-e.com>
Every now & then, with parallel builds, we get a failure to
validate hvsupport.html.in. I eventually noticed that this
is because we get 2 instances of the generator running at
once.
We already list hvsupport.html.in in BUILT_SOURCES but this
was not working. It turns out the flaw is that we were
adding deps to the 'all:' target instead of the 'all-am:'
target. BUILT_SOURCES is a dep of 'all', so any custom
targets written in Makefile.am must use 'all-am:' so that
they don't get run until BUILT_SOURCES are completely
generated
* docs/Makefile.am: s/all/all-am/
Some of the test suites use fprintf with format specifiers
that are not supported on Win32 and are not fixed by gnulib.
The mingw32 compiler also has trouble detecting ssize_t
correctly, complaining that 'ssize_t' does not match
'signed size_t' (which it expects for %zd). Force the
cast to size_t to avoid this problem
* tests/testutils.c, tests/testutils.h: Fix printf
annotation on virTestResult. Use virVasprintf
instead of vfprintf
* tests/virhashtest.c: Use VIR_WARN instead of fprintf(stderr).
Cast to size_t to avoid mingw32 compiler bug
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
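A small illustration of the workaround described above (an assumed
helper, not code from the test suite): cast the ssize_t value to
size_t and print it with %zu so mingw32 is happy.

#include <stdio.h>
#include <sys/types.h>

static void
reportLength(ssize_t got)
{
    /* the cast sidesteps mingw32's confusion about %zd vs ssize_t */
    fprintf(stderr, "read %zu bytes\n", (size_t)got);
}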