https://bugzilla.redhat.com/show_bug.cgi?id=1058839
Commit f9f56340 for CVE-2014-0028 almost had the right idea - we
need to check the ACL rules to filter which events to send. But
it overlooked one thing: the event dispatch queue is running in
the main loop thread, and therefore does not normally have a
current virIdentityPtr. But filter checks can be based on current
identity, so when libvirtd.conf contains access_drivers=["polkit"],
we ended up rejecting access for EVERY event due to failure to
look up the current identity, even if it should have been allowed.
Furthermore, even for events that are triggered by API calls, it
is important to remember that the point of events is that they can
be copied across multiple connections, which may have separate
identities and permissions. So even if an event is dispatched
from a context where we have an identity, we must switch to the
identity of the connection that will receive the event, rather
than base the filtering decision on the context that triggered
the event.
If there were an easy way to get from virConnectPtr to the
appropriate virIdentityPtr, then object_event.c could adjust the
identity prior to checking whether to dispatch an event. But
setting up that back-reference is a bit invasive. Instead, it
is easier to delay the filtering check until lower down the
stack, at the point where we have direct access to the RPC
client object that owns an identity. As such, this patch ends
up reverting a large portion of the framework of commit f9f56340.
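As a rough sketch of the approach (helper names below follow libvirt's
conventions but are written from memory, not quoted from the final patch),
the per-connection check in daemon/remote.c ends up looking something like:
    static bool
    remoteRelayDomainEventCheckACL(virNetServerClientPtr client,
                                   virConnectPtr conn, virDomainPtr dom)
    {
        virDomainDef def;
        virIdentityPtr identity = NULL;
        bool ret = false;

        /* Build just enough of a virDomainDef for the ACL driver to inspect */
        memset(&def, 0, sizeof(def));
        def.name = dom->name;
        memcpy(def.uuid, dom->uuid, VIR_UUID_BUFLEN);

        /* Switch to the identity of the client that will RECEIVE the event */
        if (!(identity = virNetServerClientGetIdentity(client)))
            goto cleanup;
        if (virIdentitySetCurrent(identity) < 0)
            goto cleanup;

        ret = virConnectDomainEventRegisterAnyCheckACL(conn, &def);

     cleanup:
        ignore_value(virIdentitySetCurrent(NULL));
        virObjectUnref(identity);
        return ret;
    }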
We also have to teach 'make check' to special-case the fact that
the event registration filtering is done at the point of dispatch,
rather than the point of registration. Note that even though we
don't actually use virConnectDomainEventRegisterCheckACL (because
the RegisterAny variant is sufficient), we still generate the
function for the purposes of documenting that the filtering
takes place.
Also note that I did not entirely delete the notion of a filter
from object_event.c; I still plan on using that for my upcoming
patch series for qemu monitor events in libvirt-qemu.so. In
other words, while this patch changes ACL filtering to live in
remote.c and therefore we have no current client of the filtering
in object_event.c, the notion of filtering in object_event.c is
still useful down the road.
* src/check-aclrules.pl: Exempt event registration from having to
pass checkACL filter down call stack.
* daemon/remote.c (remoteRelayDomainEventCheckACL)
(remoteRelayNetworkEventCheckACL): New functions.
(remoteRelay*Event*): Use new functions.
* src/conf/domain_event.h (virDomainEventStateRegister)
(virDomainEventStateRegisterID): Drop unused parameter.
* src/conf/network_event.h (virNetworkEventStateRegisterID):
Likewise.
* src/conf/domain_event.c (virDomainEventFilter): Delete unused
function.
* src/conf/network_event.c (virNetworkEventFilter): Likewise.
* src/libxl/libxl_driver.c: Adjust caller.
* src/lxc/lxc_driver.c: Likewise.
* src/network/bridge_driver.c: Likewise.
* src/qemu/qemu_driver.c: Likewise.
* src/remote/remote_driver.c: Likewise.
* src/test/test_driver.c: Likewise.
* src/uml/uml_driver.c: Likewise.
* src/vbox/vbox_tmpl.c: Likewise.
* src/xen/xen_driver.c: Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
(cherry picked from commit 11f20e43f1388d5f8f8c0bfac8c9cda6160a106b)
Conflicts:
daemon/remote.c - not backporting network events
src/conf/network_event.c - likewise
src/conf/network_event.h - likewise
src/network/bridge_driver.c - likewise
src/conf/domain_event.c - revert back to pre-CVE state
src/conf/domain_event.h - likewise
src/libxl/libxl_driver.c - likewise
src/lxc/lxc_driver.c - likewise
src/remote/remote_driver.c - likewise
src/test/test_driver.c - likewise
src/uml/uml_driver.c - likewise
src/xen/xen_driver.c - likewise
The NWFilter code has a deadlock race condition between
the virNWFilter{Define,Undefine} APIs and starting of guest
VMs due to mis-matched lock ordering.
In the virNWFilter{Define,Undefine} codepaths the lock ordering
is
1. nwfilter driver lock
2. virt driver lock
3. nwfilter update lock
4. domain object lock
In the VM guest startup paths the lock ordering is
1. virt driver lock
2. domain object lock
3. nwfilter update lock
As can be seen the domain object and nwfilter update locks are
not acquired in a consistent order.
The fix is to push the nwfilter update lock up to the top
level, resulting in a lock ordering for virNWFilter{Define,Undefine}
of
1. nwfilter driver lock
2. nwfilter update lock
3. virt driver lock
4. domain object lock
and VM start using
1. nwfilter update lock
2. virt driver lock
3. domain object lock
This has the effect of serializing VM startup once again, even if
no nwfilters are applied to the guest. There is also the possibility
of deadlock due to a call graph loop via virNWFilterInstantiate
and virNWFilterInstantiateFilterLate.
These two problems mean the lock must be turned into a read/write
lock instead of a plain mutex at the same time. The lock is used to
serialize changes to the "driver->nwfilters" hash, so the write lock
only needs to be held by the define/undefine methods. All other
methods can rely on a read lock which allows good concurrency.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit 6e5c79a1b5a8b3a23e7df7ffe58fb272aa17fbfb)
Add virRWLock backed up by a POSIX rwlock primitive
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit c065984b58000a44c90588198d222a314ac532fd)
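A minimal sketch of such a wrapper, assuming the usual libvirt naming
(the real implementation lives in src/util/virthread* and may differ in
error handling):
    #include <pthread.h>

    typedef struct virRWLock virRWLock;
    typedef virRWLock *virRWLockPtr;
    struct virRWLock {
        pthread_rwlock_t lock;
    };

    static inline int virRWLockInit(virRWLockPtr m)
    {
        return pthread_rwlock_init(&m->lock, NULL) == 0 ? 0 : -1;
    }
    static inline void virRWLockDestroy(virRWLockPtr m)
    {
        pthread_rwlock_destroy(&m->lock);
    }
    static inline void virRWLockRead(virRWLockPtr m)
    {
        pthread_rwlock_rdlock(&m->lock);   /* many concurrent readers */
    }
    static inline void virRWLockWrite(virRWLockPtr m)
    {
        pthread_rwlock_wrlock(&m->lock);   /* exclusive: define/undefine */
    }
    static inline void virRWLockUnlock(virRWLockPtr m)
    {
        pthread_rwlock_unlock(&m->lock);
    }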
Ever since ACL filtering was added in commit 7639736 (v1.1.1), a
user could still use event registration to obtain access to a
domain that they could not normally access via virDomainLookup*
or virConnectListAllDomains and friends. We already have the
framework in the RPC generator for creating the filter, and
previous cleanup patches got us to the point that we can now
wire the filter through the entire object event stack.
Furthermore, whether or not domain:getattr is honored, use of
global events is a form of obtaining a list of domains, which
is covered by connect:search_domains added in a93cd08 (v1.1.0).
Ideally, we'd have a way to enforce connect:search_domains when
doing global registrations while omitting that check on a
per-domain registration. But this patch just unconditionally
requires connect:search_domains, even when no list could be
obtained, based on the following observations:
1. Administrators are unlikely to grant domain:getattr for one
or all domains while still denying connect:search_domains - a
user that is able to manage domains will want to be able to
manage them efficiently, but efficient management includes being
able to list the domains they can access. The idea of denying
connect:search_domains while still granting access to individual
domains is therefore not adding any real security, but just
serves as a layer of obscurity to annoy the end user.
2. In the current implementation, domain events are filtered
on the client; the server has no idea if a domain filter was
requested, and must therefore assume that all domain event
requests are global. Even if we fix the RPC protocol to
allow for server-side filtering for newer client/server combos,
making the connect:search_domains ACL check conditional on
whether the domain argument was NULL won't benefit older clients.
Therefore, we choose to document that connect:search_domains
is a pre-requisite to any domain event management.
Network events need the same treatment, with the obvious
change of using connect:search_networks and network:getattr.
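For illustration, the per-callback filter threaded through the event code
ends up looking roughly like this sketch (field and helper names are
assumptions based on the file list below):
    static bool
    virDomainEventFilter(virConnectPtr conn, virObjectEventPtr event,
                         void *opaque ATTRIBUTE_UNUSED)
    {
        virDomainDef dom;

        /* Build just enough of a definition for the ACL driver to check
         * domain:getattr on the domain this event refers to */
        memset(&dom, 0, sizeof(dom));
        dom.name = event->meta.name;
        memcpy(dom.uuid, event->meta.uuid, VIR_UUID_BUFLEN);

        return virConnectDomainEventRegisterAnyCheckACL(conn, &dom);
    }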
* src/access/viraccessperm.h
(VIR_ACCESS_PERM_CONNECT_SEARCH_DOMAINS)
(VIR_ACCESS_PERM_CONNECT_SEARCH_NETWORKS): Document additional
effect of the permission.
* src/conf/domain_event.h (virDomainEventStateRegister)
(virDomainEventStateRegisterID): Add new parameter.
* src/conf/network_event.h (virNetworkEventStateRegisterID):
Likewise.
* src/conf/object_event_private.h (virObjectEventStateRegisterID):
Likewise.
* src/conf/object_event.c (_virObjectEventCallback): Track a filter.
(virObjectEventDispatchMatchCallback): Use filter.
(virObjectEventCallbackListAddID): Register filter.
* src/conf/domain_event.c (virDomainEventFilter): New function.
(virDomainEventStateRegister, virDomainEventStateRegisterID):
Adjust callers.
* src/conf/network_event.c (virNetworkEventFilter): New function.
(virNetworkEventStateRegisterID): Adjust caller.
* src/remote/remote_protocol.x
(REMOTE_PROC_CONNECT_DOMAIN_EVENT_REGISTER)
(REMOTE_PROC_CONNECT_DOMAIN_EVENT_REGISTER_ANY)
(REMOTE_PROC_CONNECT_NETWORK_EVENT_REGISTER_ANY): Generate a
filter, and require connect:search_domains instead of weaker
connect:read.
* src/test/test_driver.c (testConnectDomainEventRegister)
(testConnectDomainEventRegisterAny)
(testConnectNetworkEventRegisterAny): Update callers.
* src/remote/remote_driver.c (remoteConnectDomainEventRegister)
(remoteConnectDomainEventRegisterAny): Likewise.
* src/xen/xen_driver.c (xenUnifiedConnectDomainEventRegister)
(xenUnifiedConnectDomainEventRegisterAny): Likewise.
* src/vbox/vbox_tmpl.c (vboxDomainGetXMLDesc): Likewise.
* src/libxl/libxl_driver.c (libxlConnectDomainEventRegister)
(libxlConnectDomainEventRegisterAny): Likewise.
* src/qemu/qemu_driver.c (qemuConnectDomainEventRegister)
(qemuConnectDomainEventRegisterAny): Likewise.
* src/uml/uml_driver.c (umlConnectDomainEventRegister)
(umlConnectDomainEventRegisterAny): Likewise.
* src/network/bridge_driver.c
(networkConnectNetworkEventRegisterAny): Likewise.
* src/lxc/lxc_driver.c (lxcConnectDomainEventRegister)
(lxcConnectDomainEventRegisterAny): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
(cherry picked from commit f9f56340539d609cdc2e9d4ab812b9f146c3f100)
Conflicts:
src/conf/object_event.c - not backporting event refactoring
src/conf/object_event_private.h - likewise
src/conf/network_event.c - not backporting network events
src/conf/network_event.h - likewise
src/network/bridge_driver.c - likewise
src/access/viraccessperm.h - likewise
src/remote/remote_protocol.x - likewise
src/conf/domain_event.c - includes code that upstream has in object_event
src/conf/domain_event.h - context
src/libxl/libxl_driver.c - context
src/lxc/lxc_driver.c - context
src/remote/remote_driver.c - context, not backporting network events
src/test/test_driver.c - context, not backporting network events
src/uml/uml_driver.c - context
src/xen/xen_driver.c - context
While running objecteventtest, it was found that valgrind pointed out the
following memory leak:
==13464== 5 bytes in 1 blocks are definitely lost in loss record 7 of 134
==13464== at 0x4A0887C: malloc (vg_replace_malloc.c:270)
==13464== by 0x341F485E21: strdup (strdup.c:42)
==13464== by 0x4CAE28F: virStrdup (virstring.c:554)
==13464== by 0x4CF3CBE: virObjectEventCallbackListAddID (object_event.c:286)
==13464== by 0x4CF49CA: virObjectEventStateRegisterID (object_event.c:729)
==13464== by 0x4CF73FE: virDomainEventStateRegisterID (domain_event.c:1424)
==13464== by 0x4D7358F: testConnectDomainEventRegisterAny (test_driver.c:6032)
==13464== by 0x4D600C8: virConnectDomainEventRegisterAny (libvirt.c:19128)
==13464== by 0x402409: testDomainStartStopEvent (objecteventtest.c:232)
==13464== by 0x403451: virtTestRun (testutils.c:138)
==13464== by 0x402012: mymain (objecteventtest.c:395)
==13464== by 0x403AF2: virtTestMain (testutils.c:593)
==13464==
(cherry picked from commit 34d52b3471a18c72b7a02e27d65857505d858a8e)
Conflicts:
src/conf/object_event.c - 1.2.1 refactoring to object_event not
backported, so change applied directly in older domain_event.c instead
The @list->callbacks is an array that is inflated whenever a new callback
is added, e.g. via virDomainEventCallbackListAddID(). However, when we
are freeing the array, we free the items within it but forgot to
actually free it.
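A sketch of the kind of fix this implies (simplified; the real list-free
function also invokes each callback's free callback):
    static void
    virDomainEventCallbackListFree(virDomainEventCallbackListPtr list)
    {
        size_t i;

        if (!list)
            return;

        for (i = 0; i < list->count; i++)
            VIR_FREE(list->callbacks[i]);   /* items were already freed */
        VIR_FREE(list->callbacks);          /* ...but the array itself leaked */
        VIR_FREE(list);
    }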
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(cherry picked from commit ea13a759f5b32204e1a7bdf6ee228235acb42bcb)
https://bugzilla.redhat.com/show_bug.cgi?id=1047577
When writing commit 173c291, I missed the fact that virNetServerClientClose
unlocks the client object before actually clearing client->sock, and thus
it is possible to hit a window in which client->keepalive is NULL while
client->sock is not NULL. I was thinking client->sock == NULL was a
better check for a closed connection but apparently we have to go with
client->keepalive == NULL to actually fix the crash.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
(cherry picked from commit 066c8ef6c18bc1faf8b3e10787b39796a7a06cc0)
https://bugzilla.redhat.com/show_bug.cgi?id=1047577
When a client closes its connection to libvirtd early during
virConnectOpen, more specifically just after making
REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call to check if
VIR_DRV_FEATURE_PROGRAM_KEEPALIVE is supported without even waiting for
the result, libvirtd may crash due to a race in keep-alive
initialization. Once receiving the REMOTE_PROC_CONNECT_SUPPORTS_FEATURE
call, the daemon's event loop delegates it to a worker thread. In case
the event loop detects EOF on the connection and calls
virNetServerClientClose before the worker thread starts to handle
REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call, client->keepalive will be
disposed by the time virNetServerClientStartKeepAlive gets called from
remoteDispatchConnectSupportsFeature. Because the flow is common for
both authenticated and read-only connections, even unprivileged clients
may cause the daemon to crash.
To avoid the crash, virNetServerClientStartKeepAlive needs to check if
the connection is still open before starting keep-alive protocol.
Every libvirt release since 0.9.8 is affected by this bug.
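A sketch of the guard (written to match the follow-up above that keys off
client->keepalive rather than client->sock; the real code in
src/rpc/virnetserverclient.c differs in details):
    int virNetServerClientStartKeepAlive(virNetServerClientPtr client)
    {
        int ret = -1;

        virObjectLock(client);

        /* The event loop may already have closed the connection and
         * disposed of client->keepalive before this worker thread ran */
        if (!client->keepalive)
            goto cleanup;

        ret = virKeepAliveStart(client->keepalive, 0, 0);

     cleanup:
        virObjectUnlock(client);
        return ret;
    }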
(cherry picked from commit 173c2914734eb5c32df6d35a82bf503e12261bcf)
CVE-2013-6458
Every API that is going to begin a job should do that before fetching
data from vm->def.
(cherry picked from commit 3b56425938e2f97208d5918263efa0d6439e4ecd)
Every API that is going to begin a job should do that before fetching
data from vm->def.
(cherry picked from commit ff5f30b6bfa317f2a4c33f69289baf4e887eb048)
CVE-2013-6458
Every API that is going to begin a job should do that before fetching
data from vm->def.
(cherry picked from commit f93d2caa070f6197ab50d372d286018b0ba6bbd8)
CVE-2013-6458
Generally, every API that is going to begin a job should do that before
fetching data from vm->def. However, qemuDomainGetBlockInfo does not
know whether it will have to start a job or not before checking vm->def.
To avoid using a disk alias that might have been freed while we were
waiting for a job, we use its copy. In case the disk was removed in the
meantime, we will fail with "cannot find statistics for device '...'"
error message.
(cherry picked from commit b799259583bd65c0b2f5042e6c3ff19637ade881)
CVE-2013-6458
https://bugzilla.redhat.com/show_bug.cgi?id=1043069
When virDomainDetachDeviceFlags is called concurrently to
virDomainBlockStats: libvirtd may crash because qemuDomainBlockStats
finds a disk in vm->def before getting a job on a domain and uses the
disk pointer after getting the job. However, the domain is unlocked
while waiting on a job condition and thus data behind the disk pointer
may disappear. This happens when thread 1 runs
virDomainDetachDeviceFlags and enters monitor to actually remove the
disk. Then another thread starts running virDomainBlockStats, finds the
disk in vm->def, and while it's waiting on the job condition (owned by
the first thread), the first thread finishes the disk removal. When the
second thread gets the job, the memory pointed to by the disk pointer is
already gone.
That said, every API that is going to begin a job should do that before
fetching data from vm->def.
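The resulting pattern, sketched here for qemuDomainBlockStats (names follow
the QEMU driver's conventions; this is an illustration, not the literal patch):
    static int
    qemuDomainBlockStatsSketch(virQEMUDriverPtr driver,
                               virDomainObjPtr vm,
                               const char *path)
    {
        virDomainDiskDefPtr disk = NULL;
        int idx;
        int ret = -1;

        /* Begin the job FIRST; only then is it safe to dereference
         * vm->def, since no other thread can modify the definition
         * while we hold the job. */
        if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
            return -1;

        if (!virDomainObjIsActive(vm))
            goto endjob;

        if ((idx = virDomainDiskIndexByName(vm->def, path, false)) < 0)
            goto endjob;
        disk = vm->def->disks[idx];

        /* ... enter the monitor and collect stats for disk->info.alias ... */
        ret = 0;

     endjob:
        qemuDomainObjEndJob(driver, vm);
        return ret;
    }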
(cherry picked from commit db86da5ca2109e4006c286a09b6c75bfe10676ad)
In libxlDomainGetNumaParameters(), call libxl_bitmap_init() as soon as
possible. This avoids reaching 'cleanup:', where libxl_bitmap_dispose()
happens, without having initialized the nodemap, and hence crashing after
some invalid free()-s:
# ./daemon/libvirtd -v
*** Error in `/home/xen/libvirt.git/daemon/.libs/lt-libvirtd': munmap_chunk(): invalid pointer: 0x00007fdd42592666 ***
======= Backtrace: =========
/lib64/libc.so.6(+0x7bbe7)[0x7fdd3f767be7]
/lib64/libxenlight.so.4.3(libxl_bitmap_dispose+0xd)[0x7fdd2c88c045]
/home/xen/libvirt.git/daemon/.libs/../../src/.libs/libvirt_driver_libxl.so(+0x12d26)[0x7fdd2caccd26]
/home/xen/libvirt.git/src/.libs/libvirt.so.0(virDomainGetNumaParameters+0x15c)[0x7fdd4247898c]
/home/xen/libvirt.git/daemon/.libs/lt-libvirtd(+0x1d9a2)[0x7fdd42ecc9a2]
/home/xen/libvirt.git/src/.libs/libvirt.so.0(virNetServerProgramDispatch+0x3da)[0x7fdd424e9eaa]
/home/xen/libvirt.git/src/.libs/libvirt.so.0(+0x1a6f38)[0x7fdd424e3f38]
/home/xen/libvirt.git/src/.libs/libvirt.so.0(+0xa81e5)[0x7fdd423e51e5]
/home/xen/libvirt.git/src/.libs/libvirt.so.0(+0xa783e)[0x7fdd423e483e]
/lib64/libpthread.so.0(+0x7c53)[0x7fdd3febbc53]
/lib64/libc.so.6(clone+0x6d)[0x7fdd3f7e1dbd]
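A minimal sketch of the ordering fix described above (the helper calls are
illustrative, not the exact libvirt code):
    static int
    libxlNumaNodemapSketch(libxl_ctx *ctx)
    {
        libxl_bitmap nodemap;
        int ret = -1;

        /* Initialize before any code path that can jump to cleanup, so
         * libxl_bitmap_dispose() never operates on an uninitialized bitmap */
        libxl_bitmap_init(&nodemap);

        if (libxl_node_bitmap_alloc(ctx, &nodemap, 0) < 0)
            goto cleanup;

        /* ... use nodemap ... */
        ret = 0;

     cleanup:
        libxl_bitmap_dispose(&nodemap);
        return ret;
    }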
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
(cherry picked from commit f9ee91d35510ccbc6fc42cef8864b291b2d220f4)
The function doesn't check whether the request is made for an active or
inactive domain. Thus when the domain is not running it still tries
accessing non-existent cgroups (priv->cgroup, which is NULL).
I re-made the function so that it works the same way its QEMU
counterpart does.
Reproducer:
1) Define an LXC domain
2) Do 'virsh memtune <domain> --hard-limit 133T'
Backtrace:
Thread 6 (Thread 0x7fffec8c0700 (LWP 26826)):
#0 0x00007ffff70edcc4 in virCgroupPathOfController (group=0x0, controller=3,
key=0x7ffff75734bd "memory.limit_in_bytes", path=0x7fffec8bf718) at util/vircgroup.c:1764
#1 0x00007ffff70e9206 in virCgroupSetValueStr (group=0x0, controller=3,
key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffe409f360 "1073741824")
at util/vircgroup.c:669
#2 0x00007ffff70e98b4 in virCgroupSetValueU64 (group=0x0, controller=3,
key=0x7ffff75734bd "memory.limit_in_bytes", value=1073741824) at util/vircgroup.c:740
#3 0x00007ffff70ee518 in virCgroupSetMemory (group=0x0, kb=1048576) at util/vircgroup.c:1904
#4 0x00007ffff70ee675 in virCgroupSetMemoryHardLimit (group=0x0, kb=1048576)
at util/vircgroup.c:1944
#5 0x00005555557d54c8 in lxcDomainSetMemoryParameters (dom=0x7fffe40cc420,
params=0x7fffe409f100, nparams=1, flags=0) at lxc/lxc_driver.c:774
#6 0x00007ffff72c20f9 in virDomainSetMemoryParameters (domain=0x7fffe40cc420,
params=0x7fffe409f100, nparams=1, flags=0) at libvirt.c:4051
#7 0x000055555561365f in remoteDispatchDomainSetMemoryParameters (server=0x555555eb7e00,
client=0x555555ec4b10, msg=0x555555eb94e0, rerr=0x7fffec8bfb70, args=0x7fffe40b8510)
at remote_dispatch.h:7621
#8 0x00005555556133fd in remoteDispatchDomainSetMemoryParametersHelper (server=0x555555eb7e00,
client=0x555555ec4b10, msg=0x555555eb94e0, rerr=0x7fffec8bfb70, args=0x7fffe40b8510,
ret=0x7fffe40b84f0) at remote_dispatch.h:7591
#9 0x00007ffff73b293f in virNetServerProgramDispatchCall (prog=0x555555ec3ae0,
server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0)
at rpc/virnetserverprogram.c:435
#10 0x00007ffff73b207f in virNetServerProgramDispatch (prog=0x555555ec3ae0,
server=0x555555eb7e00, client=0x555555ec4b10, msg=0x555555eb94e0)
at rpc/virnetserverprogram.c:305
#11 0x00007ffff73a4d2c in virNetServerProcessMsg (srv=0x555555eb7e00, client=0x555555ec4b10,
prog=0x555555ec3ae0, msg=0x555555eb94e0) at rpc/virnetserver.c:165
#12 0x00007ffff73a4e8d in virNetServerHandleJob (jobOpaque=0x555555ec3e30, opaque=0x555555eb7e00)
at rpc/virnetserver.c:186
#13 0x00007ffff7187f3f in virThreadPoolWorker (opaque=0x555555eb7ac0) at util/virthreadpool.c:144
#14 0x00007ffff718733a in virThreadHelper (data=0x555555eb7890) at util/virthreadpthread.c:161
#15 0x00007ffff468ed89 in start_thread (arg=0x7fffec8c0700) at pthread_create.c:308
#16 0x00007ffff3da26bd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
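A sketch of the live/config split the rewritten function needs, mirroring
the QEMU driver (helper usage is assumed, not quoted from the patch):
    if (virDomainLiveConfigHelperMethod(caps, driver->xmlopt, vm,
                                        &flags, &persistentDef) < 0)
        goto cleanup;

    if (flags & VIR_DOMAIN_AFFECT_LIVE) {
        /* Only a running domain has priv->cgroup; never touch it otherwise */
        if (!virCgroupHasController(priv->cgroup,
                                    VIR_CGROUP_CONTROLLER_MEMORY)) {
            virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                           _("cgroup memory controller is not mounted"));
            goto cleanup;
        }
        if (virCgroupSetMemoryHardLimit(priv->cgroup, hard_limit) < 0)
            goto cleanup;
    } else {
        /* Inactive domain: update the persistent config instead */
        persistentDef->mem.hard_limit = hard_limit;
    }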
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
(cherry picked from commit 9faf3f2950aed1643ab7564afcb4c693c77f71b5)
The function doesn't check whether the request is made for an active or
inactive domain. Thus when the domain is not running it still tries
accessing non-existent cgroups (priv->cgroup, which is NULL).
I re-made the function so that it works the same way its QEMU
counterpart does.
Reproducer:
1) Define an LXC domain
2) Do 'virsh memtune <domain>'
Backtrace:
Thread 6 (Thread 0x7fffec8c0700 (LWP 13387)):
#0 0x00007ffff70edcc4 in virCgroupPathOfController (group=0x0, controller=3,
key=0x7ffff75734bd "memory.limit_in_bytes", path=0x7fffec8bf750) at util/vircgroup.c:1764
#1 0x00007ffff70e958c in virCgroupGetValueStr (group=0x0, controller=3,
key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffec8bf7c0) at util/vircgroup.c:705
#2 0x00007ffff70e9d29 in virCgroupGetValueU64 (group=0x0, controller=3,
key=0x7ffff75734bd "memory.limit_in_bytes", value=0x7fffec8bf810) at util/vircgroup.c:804
#3 0x00007ffff70ee706 in virCgroupGetMemoryHardLimit (group=0x0, kb=0x7fffec8bf8a8)
at util/vircgroup.c:1962
#4 0x00005555557d590f in lxcDomainGetMemoryParameters (dom=0x7fffd40024a0,
params=0x7fffd40027a0, nparams=0x7fffec8bfa24, flags=0) at lxc/lxc_driver.c:826
#5 0x00007ffff72c28d3 in virDomainGetMemoryParameters (domain=0x7fffd40024a0,
params=0x7fffd40027a0, nparams=0x7fffec8bfa24, flags=0) at libvirt.c:4137
#6 0x000055555563714d in remoteDispatchDomainGetMemoryParameters (server=0x555555eb7e00,
client=0x555555ebaef0, msg=0x555555ebb3e0, rerr=0x7fffec8bfb70, args=0x7fffd40024e0,
ret=0x7fffd4002420) at remote.c:1895
#7 0x00005555556052c4 in remoteDispatchDomainGetMemoryParametersHelper (server=0x555555eb7e00,
client=0x555555ebaef0, msg=0x555555ebb3e0, rerr=0x7fffec8bfb70, args=0x7fffd40024e0,
ret=0x7fffd4002420) at remote_dispatch.h:4050
#8 0x00007ffff73b293f in virNetServerProgramDispatchCall (prog=0x555555ec3ae0,
server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0)
at rpc/virnetserverprogram.c:435
#9 0x00007ffff73b207f in virNetServerProgramDispatch (prog=0x555555ec3ae0,
server=0x555555eb7e00, client=0x555555ebaef0, msg=0x555555ebb3e0)
at rpc/virnetserverprogram.c:305
#10 0x00007ffff73a4d2c in virNetServerProcessMsg (srv=0x555555eb7e00, client=0x555555ebaef0,
prog=0x555555ec3ae0, msg=0x555555ebb3e0) at rpc/virnetserver.c:165
#11 0x00007ffff73a4e8d in virNetServerHandleJob (jobOpaque=0x555555ebc7e0, opaque=0x555555eb7e00)
at rpc/virnetserver.c:186
#12 0x00007ffff7187f3f in virThreadPoolWorker (opaque=0x555555eb7ac0) at util/virthreadpool.c:144
#13 0x00007ffff718733a in virThreadHelper (data=0x555555eb7890) at util/virthreadpthread.c:161
#14 0x00007ffff468ed89 in start_thread (arg=0x7fffec8c0700) at pthread_create.c:308
#15 0x00007ffff3da26bd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
(cherry picked from commit f8c1cb90213508c4f32549023b0572ed774e48aa)
aa0f099 introduced strict error checking for getsockopt and it
revealed that getting the peer credentials of a socket on FreeBSD
didn't work. libvirtd hits the error:
error : virNetSocketGetUNIXIdentity:1198 : Failed to get valid
client socket identity groups
SOL_SOCKET (0xffff) was used as the getsockopt level for
LOCAL_PEERCRED, but that is wrong; 0 is correct, as it is on
Mac OS X.
So for LOCAL_PEERCRED our options are SOL_LOCAL (if defined) or
0 on both Mac OS X and FreeBSD. Accordingly, the patch
simplifies the code by removing the ifdef __APPLE__.
I tested the patch on FreeBSD 8.4, 9.2 and 10.0-BETA1.
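For reference, a sketch of the corrected call for a connected UNIX socket
(headers and struct names per the BSD/Mac OS X interfaces; not the exact
libvirt code):
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/un.h>      /* LOCAL_PEERCRED, SOL_LOCAL where available */
    #include <sys/ucred.h>   /* struct xucred */

    static int
    getPeerCredSketch(int fd, struct xucred *cr)
    {
        socklen_t len = sizeof(*cr);
    #ifdef SOL_LOCAL
        return getsockopt(fd, SOL_LOCAL, LOCAL_PEERCRED, cr, &len);
    #else
        /* Both FreeBSD and Mac OS X accept 0 here; SOL_SOCKET is wrong */
        return getsockopt(fd, 0, LOCAL_PEERCRED, cr, &len);
    #endif
    }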
Signed-off-by: Ryota Ozaki <ozaki.ryota@gmail.com>
With Mac OS X 10.9, xdrproc_t is no longer defined as:
typedef bool_t (*xdrproc_t)(XDR *, ...);
but instead as:
typedef bool_t (*xdrproc_t)(XDR *, void *, unsigned int);
For reference, Linux systems typically define it as:
typedef bool_t (*xdrproc_t)(XDR *, void *, ...);
The rationale explained in the header is that using varargs is
incorrect and has the potential to change the ABI slightly due to
compiler optimizations and undefined behavior. They decided to
specify the exact number of parameters and, for compatibility with
old code, to make the signature require 3 arguments. The third
argument is ignored in cases where it is not used, and it is
recommended to supply a 0.
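For illustration only (not the exact libvirt change), a call through an
xdrproc_t that is valid under all three definitions above always supplies
the third argument explicitly:
    #include <rpc/rpc.h>

    static bool_t
    xdrProcCallSketch(xdrproc_t proc, XDR *xdrs, void *data)
    {
        /* Passing an explicit 0 satisfies the 3-argument Mac OS X 10.9
         * prototype and is harmless with the vararg prototypes elsewhere */
        return (*proc)(xdrs, data, 0);
    }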
Rather than casting the virBitmap pointer to uint8_t* and then using
the structure contents as a byte array, use the virBitmap API to determine
the bitmap size and test each bit.
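A sketch of the conversion using the virBitmap API (virBitmapSize and
virBitmapGetBit are existing util helpers; the libxl side is illustrative):
    static int
    bitmapToLibxlSketch(virBitmapPtr bitmap, libxl_bitmap *map)
    {
        size_t i;

        for (i = 0; i < virBitmapSize(bitmap); i++) {
            bool bit = false;
            if (virBitmapGetBit(bitmap, i, &bit) < 0)
                return -1;
            if (bit)
                libxl_bitmap_set(map, i);
        }
        return 0;
    }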
Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
A recent Xen commit (7051d5c8) changed the API of
libxl_domain_create_restore:
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Thu Oct 10 12:23:10 2013 +0100
tools/migrate: Fix regression when migrating from older version of Xen
Use the LIBXL_HAVE_DOMAIN_CREATE_RESTORE_PARAMS macro from libxl.h
so that libvirt can compile with both old and new Xen.
The checkpointed_stream parameter is useful once the libxl driver
supports migration; for now, set it to zero.
Signed-off-by: Bamvor Jian Zhang <bjzhang@suse.com>
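A sketch of the resulting compatibility guard (argument lists follow the
Xen 4.3/4.4 prototypes and may not match the final libvirt code exactly):
    #ifdef LIBXL_HAVE_DOMAIN_CREATE_RESTORE_PARAMS
        libxl_domain_restore_params params;
        libxl_domain_restore_params_init(&params);
        params.checkpointed_stream = 0;   /* no checkpointing until migration lands */
        ret = libxl_domain_create_restore(priv->ctx, &d_config, &domid,
                                          restore_fd, &params,
                                          NULL, &aop_console_how);
        libxl_domain_restore_params_dispose(&params);
    #else
        ret = libxl_domain_create_restore(priv->ctx, &d_config, &domid,
                                          restore_fd,
                                          NULL, &aop_console_how);
    #endif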
When starting a transient VM the first thing done is to check
for duplicates. The check looks if there are any running VMs
with the matching name/uuid. It explicitly allows there to
be inactive VMs, so that a persistent VM can be temporarily
booted with a different config.
There is a race condition, however, where 2 or more clients
try to create the same transient VM. The first client will
cause a virDomainObjPtr to be added to the domain list, and
it is inactive at this stage. The second client may then
come along and see this inactive VM, and mistake it for a
persistent VM.
If the first VM fails to start its transient guest for any
reason, then it'll remove the virDomainObjPtr from the list.
The second client now has a virDomainObjPtr that it can try
to boot, which libvirt no longer has a record of. The result
can be a running QEMU process that is orphaned.
It was also, however, possible for the virDomainObjPtr to be
completely free'd which will cause libvirtd to crash in some
scenarios.
The fix is to only allow an existing inactive VM if it is
marked as persistent.
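A sketch of the tightened duplicate check (simplified from the
virDomainObjList add path; error codes and messages are illustrative):
    if (vm) {   /* an object with this name/uuid already exists */
        if (virDomainObjIsActive(vm)) {
            virReportError(VIR_ERR_OPERATION_INVALID,
                           _("domain '%s' is already running"), def->name);
            goto error;
        }
        if (!vm->persistent) {
            /* Inactive AND transient: another client is racing to start it */
            virReportError(VIR_ERR_OPERATION_INVALID,
                           _("domain '%s' is already being started"), def->name);
            goto error;
        }
    }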
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This patch fixes the segfault:
error : nodeStateInitialize:658 : DBus not available,
disabling HAL driver: internal error: Unable to get DBus
system bus connection: Failed to connect to socket
/var/run/dbus/system_bus_socket: No such file or directory
error : nodeStateInitialize:719 : ?:
Caught Segmentation violation dumping internal log buffer:
This segfault occurs at the below VIR_ERROR:
failure:
if (dbus_error_is_set(&err)) {
VIR_ERROR(_("%s: %s"), err.name, err.message);
When virDBusGetSystemBus fails, the code jumps to the above failure
path. However, the err variable is not initialized
before calling virDBusGetSystemBus. As a result, dbus_error_is_set
may evaluate the uninitialized err variable, whose name or
message fields may point to some unknown memory region, which
causes a segfault in VIR_ERROR.
The new code initializes the err variable before calling
virDBusGetSystemBus.
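A sketch of the initialization-order fix (simplified from the HAL node
driver):
    DBusError err;

    /* Initialize before anything that can jump to the failure path, so
     * dbus_error_is_set(&err) never inspects uninitialized memory */
    dbus_error_init(&err);

    if (!(sysbus = virDBusGetSystemBus()))
        goto failure;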
Signed-off-by: Ryota Ozaki <ozaki.ryota@gmail.com>
Include reference of the VM object pointer and name in debug
logs for QEMU start/stop functions. Also make sure we log the
PID that we started, since it isn't available elsewhere in the
logs.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
In debugging a recent oVirt/libvirt race condition, I was very
frustrated by lack of logging in the job enter/exit code. This
patch adds some key data which would have been useful in my
debugging attempts.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Using size_t counts will let us use VIR_APPEND_ELEMENT and friends.
* src/conf/storage_conf.h (_virStoragePoolObjList)
(_virStorageVolDefList): Track list sizes with size_t.
* src/storage/storage_backend_rbd.c
(virStorageBackendRBDRefreshPool): Fix type fallout.
Signed-off-by: Eric Blake <eblake@redhat.com>
To make it easier to forbid future attempts at a confusing typedef
name ending in Ptr that isn't actually a pointer, insist that we
follow our preferred style of 'typedef foo *fooPtr'.
* cfg.mk (sc_forbid_const_pointer_typedef): Enforce consistent
style, to prevent issue fixed in previous storage patch.
* src/conf/capabilities.h (virCapsPtr): Fix offender.
* src/security/security_stack.c (virSecurityStackItemPtr):
Likewise.
* tests/qemucapabilitiestest.c (testQemuDataPtr): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
There are two forms used throughout libvirt code comments, which
are both supported by this patch.
* plain links like e.g. http://www.libvirt.org/
* links enclosed in <> characters, e.g. <http://www.libvirt.org/>
Signed-off-by: Claudio Bley <cbley@av-test.de>
The following sequence
1. Define a persistent QEMU guest
2. Start the QEMU guest
3. Stop libvirtd
4. Kill the QEMU process
5. Start libvirtd
6. List persistent guests
At the last step, the previously running persistent guest
will be missing. This is because of a race condition in the
QEMU driver startup code. It does
1. Load all VM state files
2. Spawn thread to reconnect to each VM
3. Load all VM config files
Only at the end of step 3 does the 'virDomainObjPtr' get
marked as "persistent". There is therefore a window where
the thread reconnecting to the VM will remove the persistent
VM from the list.
The easy fix is to simply switch the order of steps 2 & 3.
In addition to this though, we must only attempt to reconnect
to a VM which had a non-zero PID loaded from its state file.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The 'error' cleanup block in qemuProcessReconnect() had a
'return' statement in the middle of it. This caused a leak
of virConnectPtr & virQEMUDriverConfigPtr instances. This
was identified because netcf recently started checking its
refcount in libvirtd shutdown:
netcfStateCleanup:109 : internal error: Attempt to close netcf state driver with open connections
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
virDomainObjListLoadAllConfigs sets dom->persistent after
having released its lock on the domain object. This exposes
a possible race condition.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The QEMU backend was disabled on Mac OS X without a reason in the code
and due to refactors it's difficult to understand when/why it was
disabled. With QEMU being supported on Mac OS X there is no reason to
disable QEMU on this platform.
To ensure proper processing of a uid/gid by virGetUserID() and
virGetGroupID(), add a "+" prior to the uid/gid to denote that it's
really a uid/gid for the label.
The rbd code had a confusing typedef ending in Ptr that was not
actually a pointer, which made the rest of the code harder to
read. This fixes things to actually pass by pointer rather than
by copy.
* src/storage/storage_backend_rbd.c (virStorageBackendStatePtr):
Fix typedef.
(virStorageBackendRBDOpenRADOSConn)
(virStorageBackendRBDCloseRADOSConn)
(volStorageBackendRBDRefreshVolInfo)
(virStorageBackendRBDRefreshPool, virStorageBackendRBDDeleteVol)
(virStorageBackendRBDCreateVol, virStorageBackendRBDRefreshVol)
(virStorageBackendRBDResizeVol): Fix fallout.
Signed-off-by: Eric Blake <eblake@redhat.com>
When adding an automatically allocated port to a well-formed migration
URI, keep it well-formed:
tcp://1.2.3.4/ -> tcp://1.2.3.4/:12345 # wrong
tcp://1.2.3.4/ -> tcp://1.2.3.4:12345/ # fixed
tcp://1.2.3.4 -> tcp://1.2.3.4:12345 # still works
tcp:1.2.3.4 -> tcp:1.2.3.4:12345 # still works (old syntax)
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
For inexplicable reasons, many of the 3rd party package deps
were left against the 'libvirt-daemon' RPM when the drivers
were split out. This makes a minimal install heavier than
it should be. Push them all down into libvirt-daemon-driver-XXX
so they're only pulled in when truly needed.
With this change applied, a minimal install of just the
libvirt-daemon-driver-lxc RPM is reduced by 41 MB on a
Fedora 19 host.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Expand the "secmodel" XML fragment of "host" with a sequence of
baselabels which describe the default security context used by
libvirt with a specific security model and virtualization type:
<secmodel>
<model>selinux</model>
<doi>0</doi>
<baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
<baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
</secmodel>
<secmodel>
<model>dac</model>
<doi>0</doi>
<baselabel type='kvm'>107:107</baselabel>
<baselabel type='qemu'>107:107</baselabel>
</secmodel>
"baselabel" is driver-specific information, e.g. in the DAC security
model, it indicates USER_ID:GROUP_ID.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Merge the functions 'virSecurityDACSetUser' and
'virSecurityDACSetGroup' into 'virSecurityDACSetUserAndGroup'.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>