The dtrace probe macros rely on the logging API. We can't make
the internal.h header include the virlog.h header, though, since
that'd be a circular include. Instead, simply split the dtrace
probes into their own header file, since there's no compelling
reason for them to be in the main internal.h header.
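A minimal sketch of the resulting split (the header name virprobe.h and
the exact guard are assumptions based on the description above):

    /* src/util/virprobe.h (name assumed): safe to include virlog.h
     * here, unlike in internal.h */
    #include "internal.h"
    #include "virlog.h"

    #if WITH_DTRACE_PROBES
    # include "libvirt_probes.h"
    #endif
    /* PROBE(...) macro definitions follow */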
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1047577
When writing commit 173c291, I missed the fact virNetServerClientClose
unlocks the client object before actually clearing client->sock and thus
it is possible to hit a window in which client->keepalive is NULL while
client->sock is not NULL. I was thinking client->sock == NULL was a
better check for a closed connection but apparently we have to go with
client->keepalive == NULL to actually fix the crash.
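A hedged sketch of the corrected guard (simplified; the actual locking
and error reporting in virnetserverclient.c may differ):

    int virNetServerClientStartKeepAlive(virNetServerClientPtr client)
    {
        int ret = -1;

        virObjectLock(client);
        /* client->keepalive == NULL is the reliable sign the
         * connection was closed; client->sock may still be non-NULL
         * in the window described above */
        if (!client->keepalive)
            goto cleanup;
        ret = virKeepAliveStart(client->keepalive, 0, 0);
     cleanup:
        virObjectUnlock(client);
        return ret;
    }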
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1047577
When a client closes its connection to libvirtd early during
virConnectOpen, more specifically just after making
REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call to check if
VIR_DRV_FEATURE_PROGRAM_KEEPALIVE is supported without even waiting for
the result, libvirtd may crash due to a race in keep-alive
initialization. Upon receiving the REMOTE_PROC_CONNECT_SUPPORTS_FEATURE
call, the daemon's event loop delegates it to a worker thread. If
the event loop detects EOF on the connection and calls
virNetServerClientClose before the worker thread starts to handle the
REMOTE_PROC_CONNECT_SUPPORTS_FEATURE call, client->keepalive will have
been disposed of by the time virNetServerClientStartKeepAlive is called from
remoteDispatchConnectSupportsFeature. Because the flow is common for
both authenticated and read-only connections, even unprivileged clients
may cause the daemon to crash.
To avoid the crash, virNetServerClientStartKeepAlive needs to check if
the connection is still open before starting the keep-alive protocol.
Every libvirt release since 0.9.8 is affected by this bug.
The x509dname is only set inside a WITH_GNUTLS conditional, so
when it is used and checked for NULL later on, Coverity detects this
is not possible. Add WITH_GNUTLS around the uses to remove the message.
The fix for CVE-2013-4311 had a pre-requisite enhancement
to the identity code
    commit db7a5688c0
    Author: Daniel P. Berrange <berrange@redhat.com>
    Date:   Thu Aug 22 16:00:01 2013 +0100

        Also store user & group ID values in virIdentity
This had a typo which caused the group ID to overwrite the
user ID string. This meant any checks using this would have
the wrong ID value. This only affected the ACL code, not the
initial polkit auth. It also leaked memory.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Future improvements to the polkit code will require access to
the numeric user ID, not merely user name.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Compilation on cygwin failed due to a bug in the sasl headers
present on that platform (libsasl2-devel 2.1.26):
In file included from rpc/virnetserverclient.c:27:0:
/usr/include/sasl/sasl.h:230:38: error: expected declaration specifiers or '...' before 'size_t'
Upstream is aware of their bug:
https://bugzilla.cyrusimap.org/show_bug.cgi?id=3759
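The workaround boils down to ensuring a header that declares size_t is
included first; a hedged sketch (the exact header chosen is an assumption):

    /* sasl.h uses size_t without declaring it, so make sure it is
     * already defined before the broken prototype is parsed */
    #include <stddef.h>
    #include <sasl/sasl.h>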
* src/rpc/virnetserverclient.c (includes): Ensure size_t is
defined before using sasl.h.
Signed-off-by: Eric Blake <eblake@redhat.com>
Since PIDs can be reused, polkit prefers to be given
a (PID, start time) pair. If given a PID on its own,
it will attempt to lookup the start time in /proc/pid/stat,
though this is subject to races.
It is safer if the client app resolves the PID start
time itself, because as long as the app has the client
socket open, the client PID won't be reused.
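A hedged sketch of how a client might resolve its own start time; the
helper name is hypothetical, but field 22 of /proc/PID/stat is the
process start time in clock ticks since boot:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>

    /* hypothetical helper: parse field 22 (starttime) of /proc/PID/stat */
    static unsigned long long
    getProcessStartTime(pid_t pid)
    {
        char path[64], buf[1024], *p;
        unsigned long long starttime = 0;
        FILE *fp;
        size_t i;

        snprintf(path, sizeof(path), "/proc/%lld/stat", (long long)pid);
        if (!(fp = fopen(path, "r")))
            return 0;
        if (fgets(buf, sizeof(buf), fp) &&
            (p = strrchr(buf, ')'))) {       /* comm may contain spaces */
            for (i = 0; i < 20 && p; i++)    /* advance to field 22 */
                p = strchr(p + 1, ' ');
            if (p)
                starttime = strtoull(p + 1, NULL, 10);
        }
        fclose(fp);
        return starttime;
    }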
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
There are various methods named "virXXXXSecurityContext",
which are specific to SELinux. Rename them all to
"virXXXXSELinuxContext". They will still raise errors at
runtime if SELinux is not compiled in
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The source code base needs to be adapted as well. Some files
include virutil.h just for the string related functions (here,
the include is substituted to match the new file), some include
virutil.h without any need (here, the include is removed), and
some require both.
Add APIs which allow creation of a virIdentity from the info
associated with a virNetServerClientPtr instance. This is done
based on the results of client authentication processes like
TLS, x509, SASL, SO_PEERCRED
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
A socket object has various pieces of security data associated
with it, such as the SELinux context, the SASL username and
the x509 distinguished name. Add new APIs to virNetServerClient
and related modules to access this data.
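A hedged usage sketch of one such accessor (virNetSocketGetSELinuxContext
per the virnetsocket module; the returned string is caller-freed):

    char *seccon = NULL;

    if (virNetSocketGetSELinuxContext(sock, &seccon) < 0)
        return -1;
    VIR_DEBUG("peer SELinux context is %s", NULLSTR(seccon));
    VIR_FREE(seccon);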
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When converting to virObject, the probes on the 'Free' functions
were removed on the basis that there is a probe on virObjectFree
that suffices. This puts a burden on people writing probe scripts
to identify which object is being disposed. This adds back probes
in the 'Dispose' functions and updates the rpc monitor systemtap
example to use them
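A sketch of the pattern (probe name per the style of src/probes.d):

    void virNetServerClientDispose(void *obj)
    {
        virNetServerClientPtr client = obj;

        /* the probe identifies *which* object is being disposed,
         * which the generic virObjectFree probe cannot */
        PROBE(RPC_SERVER_CLIENT_DISPOSE, "client=%p", client);

        /* ... the existing cleanup of sock, sasl, tls, etc. ... */
    }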
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently the server determines whether authentication of clients
is complete, by checking whether an identity is set. This patch
removes that lame hack and replaces it with an explicit method
for changing the client auth code
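A hedged sketch of the new call sites (constant name per
virnetserverservice.h; the semantics assumed here are that auth NONE
means no further authentication is required):

    /* on successful SASL/polkit authentication, mark the client as
     * fully authenticated instead of relying on an identity being set */
    virNetServerClientSetAuth(client, VIR_NET_SERVER_SERVICE_AUTH_NONE);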
* daemon/remote.c: Update for new APIs
* src/libvirt_private.syms, src/rpc/virnetserverclient.c,
src/rpc/virnetserverclient.h: Remove virNetServerClientGetIdentity
and virNetServerClientSetIdentity, adding a new method
virNetServerClientSetAuth.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This patch resolves CVE-2013-0170:
https://bugzilla.redhat.com/show_bug.cgi?id=893450
When reading and dispatching of a message failed, the message was freed
but wasn't removed from the message queue.
After that, when the connection was about to be closed, the pointer to
the message was still present in the queue and it was passed to
virNetMessageFree, which tried to call the message's callback function
through a pointer read from the already-freed object.
This patch removes the message from the queue before it's freed.
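A hedged sketch of the fix (helper name per src/rpc/virnetmessage.h):

    /* detach the message (the head of the rx queue) before freeing,
     * so no dangling pointer is left queued for the close path */
    virNetMessageQueueServe(&client->rx);
    virNetMessageFree(msg);
    client->wantClose = true;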
* rpc/virnetserverclient.c: virNetServerClientDispatchRead:
- avoid use after free of RPC messages
The code is not reachable as of commit id: bb85f229. Removed
virKeepAliveStop() and virObjectUnref() because 'ka' cannot be
anything but NULL at the cleanup label.
Currently all classes must inherit directly from virObject. This
patch allows for an arbitrarily deep hierarchy. There's not much
to this aside from chaining up the 'dispose' handlers from
each class & providing APIs to check types.
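A hedged sketch of the resulting constructor shape (argument order
assumed):

    /* registering a class now names its parent explicitly; dispose
     * handlers are chained up from subclass to base automatically */
    myClass = virClassNew(virClassForObject(),   /* parent class */
                          "myObject",
                          sizeof(myObject),
                          myObjectDispose);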
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The libvirt coding standard is to use 'function(...args...)'
instead of 'function (...args...)'. A non-trivial number of
places did not follow this rule and are fixed in this patch.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Add two new APIs virNetServerClientNewPostExecRestart and
virNetServerClientPreExecRestart which allow a virNetServerClientPtr
object to be created from a JSON object and saved to a
JSON object, for the purpose of re-exec'ing a process.
This includes serialization of the connected socket associated
with the client
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
https://www.gnu.org/licenses/gpl-howto.html recommends that
the 'If not, see <url>.' phrase be a separate sentence.
* tests/securityselinuxhelper.c: Remove doubled line.
* tests/securityselinuxtest.c: Likewise.
* globally: s/; If/. If/
In preparation for adding further constructors, refactor
the virNetServerClientNew method to move most of the code
into a common virNetServerClientNewInternal helper API.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently there is a hook function that is invoked when a
new client connection comes in, which allows an app to
setup private data. This setup will make it difficult to
serialize client state during process re-exec(). Change to
a model where the app registers a callback when creating
the virNetServerPtr instance, which is used to allocate
the client private data immediately during virNetServerClientPtr
construction.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently the virNetServerServicePtr is responsible for
creating the virNetServerClientPtr instance when accepting
a new connection. Change this so that the virNetServerServicePtr
merely gives virNetServerPtr a virNetSocketPtr instance. The
virNetServerPtr can then create the virNetServerClientPtr
as it desires
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Per the FSF, the address could change from time to time, and GNU
now recommends the following (http://www.gnu.org/licenses/gpl-howto.html):
    You should have received a copy of the GNU General Public License
    along with Foobar. If not, see <http://www.gnu.org/licenses/>.
This patch removes the explicit FSF address and uses the above instead
(of course, inserting 'Lesser' before 'General' where appropriate).
Except for a bunch of security driver files, all the others were changed
automatically; the copyright headers of the security files are not
complete, which is why they were done manually:
src/security/security_selinux.h
src/security/security_driver.h
src/security/security_selinux.c
src/security/security_apparmor.h
src/security/security_apparmor.c
src/security/security_driver.c
In the delayed close mode, we're just waiting for final data to
be written back to the client. While waiting, we should not
bother to read more data from the client.
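A hedged sketch of the event-mask logic this implies (field names per
virnetserverclient.c):

    /* while a delayed close is pending, only watch for writability so
     * the final reply can drain; stop watching for reads */
    int mode = 0;
    if (!client->delayedClose && client->rx)
        mode |= VIR_EVENT_HANDLE_READABLE;
    if (client->tx)
        mode |= VIR_EVENT_HANDLE_WRITABLE;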
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When a libvirt API is called from the main event loop (which seems to be
common in event-based glib apps), the client IO loop would properly
handle keepalive requests sent by a server but will not actually send
them because the main event loop is blocked with the API. This patch
gets rid of the response timer; the thread which processes keepalive
requests is now also responsible for queueing responses for delivery.
Currently, we allocate the buffer for RPC messages statically.
This is not such a pain when RPC limits are small. However, if we
ever want to increase those limits, we need to allocate the buffer
dynamically, based on the RPC message length (= the first 4 bytes).
That way we decrease our memory usage in most cases and still remain
flexible enough in corner cases.
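A hedged sketch of the scheme (macro and helper names per
virnetmessage.h):

    /* start with room for just the 4-byte length word ... */
    msg->bufferLength = VIR_NET_MESSAGE_LEN_MAX;
    if (VIR_ALLOC_N(msg->buffer, msg->bufferLength) < 0)
        return -1;

    /* ... and once it has arrived, grow to the advertised size */
    if (virNetMessageDecodeLength(msg) < 0)   /* sets msg->bufferLength */
        return -1;
    if (VIR_REALLOC_N(msg->buffer, msg->bufferLength) < 0)
        return -1;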
To avoid a namespace clash with forthcoming identity APIs,
rename the virNet*GetLocalIdentity() APIs to have the form
virNet*GetUNIXIdentity()
* daemon/remote.c, src/libvirt_private.syms: Update
for renamed APIs
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h,
src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: s/LocalIdentity/UNIXIdentity/
The code calling sendfd/recvfd was mistakenly assuming those
calls would never block. They can in fact return EAGAIN and
this is causing us to drop the client connection when blocking
occurs while sending/receiving FDs.
Fixing this is a little hairy on the incoming side, since at
the point where we see the EAGAIN, we already thought we had
finished receiving all data for the packet. So we play a little
trick to reset bufferOffset again and go back into polling for
more data.
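A hedged sketch of the receiving side (recvfd is the gnulib passfd
helper; the exact shape of the libvirt wrapper is an assumption):

    /* inside a wrapper like virNetSocketRecvFD(sock, &fd), signature
     * assumed; returns 1 on success, 0 if it would block, -1 on error */
    if ((*fd = recvfd(sock->fd, O_CLOEXEC)) < 0) {
        if (errno == EAGAIN)
            return 0;            /* caller polls and retries later */
        virReportSystemError(errno, "%s",
                             _("Failed to receive file descriptor"));
        return -1;
    }
    return 1;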
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Update
virNetSocketSendFD/RecvFD to return 0 on EAGAIN, or 1
on success
* src/rpc/virnetclient.c: Move decoding of header & fds
out of virNetClientCallDispatch and into virNetClientIOHandleInput.
Handle blocking when sending/receiving FDs
* src/rpc/virnetmessage.h: Add a 'donefds' field to track
how many FDs we've sent / received
* src/rpc/virnetserverclient.c: Handle blocking when
sending/receiving FDs
If the daemon is using SASL, it reads client data into a cache. This
cache is big (usually 65KB) and can thus contain 2 or more messages.
However, on a socket event we can dispatch only one message. So if we
read two messages at once, the second will never be dispatched, since
the socket event was already consumed to fill the cache.
Moreover, when dispatching the cache, we need to remember to take care
of the client's max requests limit.
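A hedged sketch of the resulting dispatch loop
(virNetSocketHasCachedData per virnetsocket.h; field names assumed):

    /* keep dispatching while decoded SASL data remains cached and the
     * client is still under its max_client_requests limit */
    while (virNetSocketHasCachedData(client->sock) &&
           client->nrequests < client->nrequests_max)
        virNetServerClientDispatchRead(client);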
The RPC server classes are extended to allow FDs to be received
from clients with calls. There is not currently any way for a
procedure to pass FDs back to the client with replies
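A hedged sketch of how a dispatcher consumes a passed FD
(virNetMessageDupFD per virnetmessage.h):

    /* grab a dup() of the first FD the client passed with this call */
    int fd = virNetMessageDupFD(msg, 0);
    if (fd < 0)
        goto cleanup;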
* daemon/remote.c, src/rpc/gendispatch.pl: Change virNetMessageHeaderPtr
param to virNetMessagePtr in dispatcher impls
* src/rpc/virnetserver.c, src/rpc/virnetserverclient.c,
src/rpc/virnetserverprogram.c, src/rpc/virnetserverprogram.h:
Extend to support FD passing
The libvirtd daemon had a few crude systemtap probes. Some of
these were broken during the RPC rewrite. The new modular RPC
code is structured in a way that allows much more effective
tracing. Instead of trying to hook up the original probes,
define a new set of probes for the RPC and event code.
The master probes file is now src/probes.d. This contains
probes for virNetServerClientPtr, virNetClientPtr, virSocketPtr
virNetTLSContextPtr and virNetTLSSessionPtr modules. Also add
probes for the poll event loop.
The src/dtrace2systemtap.pl script can convert the probes.d
file into a libvirt_probes.stp file to make use from systemtap
much simpler.
The src/rpc/gensystemtap.pl script can generate a set of
systemtap functions for translating RPC enum values into
printable strings. This works for all RPC header enums (program,
type, status, procedure) and also the authentication enum
The PROBE macro will automatically generate a VIR_DEBUG
statement, so any place with a PROBE can remove any existing
manual DEBUG statements.
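A simplified sketch of the macro's shape (the real one also casts
arguments for the generated dtrace probe points):

    # define PROBE(NAME, FMT, ...)                              \
        do {                                                    \
            VIR_DEBUG(#NAME ": " FMT, __VA_ARGS__);             \
            if (LIBVIRT_ ## NAME ## _ENABLED())                 \
                LIBVIRT_ ## NAME(__VA_ARGS__);                  \
        } while (0)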
* daemon/libvirtd.stp, daemon/probes.d: Remove obsolete probing
* daemon/libvirtd.h: Remove probe macros
* daemon/Makefile.am: Remove all probe buildings/install
* daemon/remote.c: Update authentication probes
* src/dtrace2systemtap.pl, src/rpc/gensystemtap.pl: Scripts
to generate STP files
* src/internal.h: Add probe macros
* src/probes.d: Master list of probes
* src/rpc/virnetclient.c, src/rpc/virnetserverclient.c,
src/rpc/virnetsocket.c, src/rpc/virnettlscontext.c,
src/util/event_poll.c: Insert probe points, removing any
DEBUG statements that duplicate the info
Commit 2c85644b0b attempted to
fix a problem with tracking RPC messages from streams by doing
- if (msg->header.type == VIR_NET_REPLY) {
+ if (msg->header.type == VIR_NET_REPLY ||
+ (msg->header.type == VIR_NET_STREAM &&
+ msg->header.status != VIR_NET_CONTINUE)) {
client->nrequests--;
In other words, any stream packet with status NET_OK or NET_ERROR
would cause nrequests to be decremented. This is great if the
packet is from a synchronous virStreamFinish or virStreamAbort
API call, but wildly wrong if it is from a server-initiated abort.
The latter resulted in 'nrequests' being decremented below zero.
This then causes all I/O for that client to be stopped.
Instead of trying to infer whether we need to decrement the
nrequests field, from the message type/status, introduce an
explicit 'bool tracked' field to mark whether the virNetMessagePtr
object is subject to tracking.
Also add a virNetMessageClear function to allow a message's
contents to be cleared out without adversely impacting the
'tracked' field as a naive memset() would do
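A hedged sketch of the helper:

    void virNetMessageClear(virNetMessagePtr msg)
    {
        bool tracked = msg->tracked;

        /* wipe the payload but preserve the tracking flag, which a
         * naive memset() of the whole struct would destroy */
        memset(msg, 0, sizeof(*msg));
        msg->tracked = tracked;
    }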
* src/rpc/virnetmessage.c, src/rpc/virnetmessage.h: Add
a 'bool tracked' field and virNetMessageClear() API
* daemon/remote.c, daemon/stream.c, src/rpc/virnetclientprogram.c,
src/rpc/virnetclientstream.c, src/rpc/virnetserverclient.c,
src/rpc/virnetserverprogram.c: Switch over to use
virNetMessageClear() and pass in the 'bool tracked' value
when creating messages.
Every active stream results in a reference being held on the
virNetServerClientPtr object. This meant that if a client quit
with any streams active, although all I/O was stopped, the
virNetServerClientPtr object would leak. This caused libvirtd
to leak any file handles associated with open streams when a
client quit.
To fix this, when we call virNetServerClientClose there is a
callback invoked which lets the daemon release the streams
and thus the extra references
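A hedged sketch of the registration (the hook-setter name is an
assumption based on the changelog below):

    /* when the client connection closes, let the daemon release the
     * per-stream references it still holds */
    virNetServerClientSetCloseHook(client, remoteClientCloseFunc);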
* daemon/remote.c: Add a hook to close all streams
* daemon/stream.c, daemon/stream.h: Add API for releasing
all streams
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h:
Allow registration of a hook to trigger when closing client
Steps to reproduce this problem (vm1 is not running):
for i in `seq 50`; do virsh managedsave vm1& done; killall virsh
Pre-patch, virNetServerClientClose could end up setting client->sock
to NULL prior to other cleanup functions trying to use client->sock.
This fixes things by checking for NULL in more places, and by deferring
the cleanup until after all queued messages have been served.
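A hedged sketch of one of the guarded accessors (lock helper names of
that era assumed):

    int virNetServerClientGetFD(virNetServerClientPtr client)
    {
        int fd = -1;

        virNetServerClientLock(client);
        if (client->sock)               /* may already be closed */
            fd = virNetSocketGetFD(client->sock);
        virNetServerClientUnlock(client);
        return fd;
    }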
* src/rpc/virnetserverclient.c (virNetServerClientRegisterEvent)
(virNetServerClientGetFD, virNetServerClientIsSecure)
(virNetServerClientLocalAddrString)
(virNetServerClientRemoteAddrString): Check for closed socket.
(virNetServerClientClose): Rearrange close sequence.
Analysis from Wen Congyang.
When an incoming RPC message is ready for processing,
virNetServerClientDispatchRead()
will invoke the 'dispatchFunc' callback. This is set to
virNetServerDispatchNewMessage
This function puts the message + client in a queue for processing by the thread
pool. The thread pool worker function is
virNetServerHandleJob
The first thing this does is acquire an extra reference on the 'client'.
Unfortunately, between the time the message+client are put on the thread pool
queue, and the time the worker runs, the client object may have had its last
reference removed.
We clearly need to add the reference to the client object before putting the
client on the processing queue
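A hedged sketch of the corrected ordering (the job-queueing call shape
and ref-counting call names of that era are assumed):

    /* take the extra reference before the job becomes visible to the
     * worker pool, not inside the worker itself */
    virNetServerClientRef(client);
    if (virThreadPoolSendJob(srv->workers, 0, job) < 0) {
        virNetServerClientFree(client);   /* drops the reference */
        VIR_FREE(job);
    }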
* src/rpc/virnetserverclient.c: Add a reference to the client when
invoking the dispatch function
* src/rpc/virnetserver.c: Don't acquire a reference to the client
when in the worker thread
When unregistering an I/O callback from a virNetSocket object,
there is still a chance that an event may come in on the callback.
In this case it is possible that the virNetSocket might have been
freed already. Make use of a virFreeCallback when registering
the I/O callbacks and hold a reference for the entire time the
callback is set.
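A hedged sketch of the pattern, using the public event API shape
(virEventAddHandle takes a virFreeCallback as its final argument;
handler and ref/free call names are assumed):

    /* hold a reference for as long as the watch may fire; the event
     * loop's free callback releases it after the last possible event */
    virNetSocketRef(sock);
    if (virEventAddHandle(sock->fd, events,
                          virNetSocketEventHandle, sock,
                          virNetSocketEventFree) < 0) {
        virNetSocketFree(sock);   /* registration failed, drop the ref */
        return -1;
    }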
* src/rpc/virnetsocket.c: Register a free function for the
file handle watch
* src/rpc/virnetsocket.h, src/rpc/virnetserverservice.c,
src/rpc/virnetserverclient.c, src/rpc/virnetclient.c: Add
a free function for the socket I/O watches
Continuation of commit 313ac7fd, and enforce things with a syntax
check.
Technically, virNetServerClientCalculateHandleMode is not printing
a mode_t, but rather a collection of VIR_EVENT_HANDLE_* bits;
however, these bits are < 8, so there is no difference in the
output, and that was the easiest way to silence the new syntax check.
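An illustrative one-liner of the convention being enforced (hex for
flags, octal for mode_t):

    VIR_DEBUG("fd=%d flags=0x%x mode=0%o", fd, flags, mode);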
* cfg.mk (sc_flags_debug): New syntax check.
(exclude_file_name_regexp--sc_flags_debug): Add exemptions.
* src/fdstream.c (virFDStreamOpenFileInternal): Print flags in
hex, mode_t in octal.
* src/libvirt-qemu.c (virDomainQemuMonitorCommand)
(virDomainQemuAttach): Likewise.
* src/locking/lock_driver_nop.c (virLockManagerNopInit): Likewise.
* src/locking/lock_driver_sanlock.c (virLockManagerSanlockInit):
Likewise.
* src/locking/lock_manager.c: Likewise.
* src/qemu/qemu_migration.c: Likewise.
* src/qemu/qemu_monitor.c: Likewise.
* src/rpc/virnetserverclient.c
(virNetServerClientCalculateHandleMode): Print mode with %o.
The dispatch for the CLOSE RPC call was invoking the method
virNetServerClientClose(). This caused the client connection
to be immediately terminated. This meant the reply to the
final RPC message was never sent. Prior to the RPC rewrite
we merely flagged the connection for closing, and actually
closed it when the next RPC call dispatch had completed.
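A hedged sketch of the new dispatch behaviour (method name per the
changelog below):

    /* flag the connection instead of tearing it down: the reply to
     * this final RPC is still queued and must be sent first */
    virNetServerClientDelayedClose(client);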
* daemon/remote.c: Flag connection for a delayed close
* daemon/stream.c: Update to use new API for closing
failed connection
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h:
Add support for a delayed connection close. Rename the
virNetServerClientMarkClose method to virNetServerClientImmediateClose
to clarify its semantics
When sending back the final OK or ERROR message on completion
of a stream, we were not decrementing the 'nrequests' tracker
on the client. With the default requests limit of '5', this
meant once a client had created 5 streams, they are unable to
process any further RPC calls. There was also a bug when
handling an error from decoding a message length header, which
meant a client connection would not immediately be closed.
* src/rpc/virnetserverclient.c: Fix release of request after
stream completion & mark client for close on error
Spotted by Coverity. If we don't update tmp each time through
the loop, then if the filter being removed was not the head of
the list, we accidentally lose all filters prior to the one we
wanted to remove.
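A hedged sketch of the corrected unlink logic, using a pointer-to-pointer
so earlier filters are never lost:

    virNetServerClientFilterPtr tmp;
    virNetServerClientFilterPtr *prev = &client->filters;

    while ((tmp = *prev)) {
        if (tmp->id == filterID) {
            *prev = tmp->next;   /* unlink without touching the rest */
            VIR_FREE(tmp);
            break;
        }
        prev = &tmp->next;       /* the advance the buggy loop forgot */
    }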
* src/rpc/virnetserverclient.c (virNetServerClientRemoveFilter):
Don't lose unrelated filters.
To save on memory reallocation, virNetMessage instances that
have been transmitted may be reused for a subsequent incoming
message. We forgot to clear out the old data of the message
fully, which caused later confusion upon read.
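A hedged sketch of the fix (constant per virnetmessage.h; at this point
the buffer was still a fixed-size array inside the struct):

    /* recycle the transmitted message for the next incoming one:
     * clear everything, then expect a fresh 4-byte length word */
    memset(msg, 0, sizeof(*msg));
    msg->bufferLength = VIR_NET_MESSAGE_LEN_MAX;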
* src/rpc/virnetserverclient.c: memset entire message before
reusing it
The virNetServerClient object had a hardcoded limit of 10 requests
per client. Extend the constructor to allow it to be passed in as a
configurable variable. Wire this up to the 'max_client_requests'
config parameter in libvirtd
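The knob being wired up, as it appears in libvirtd.conf:

    # cap on concurrent RPC calls from a single client connection
    max_client_requests = 5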
* daemon/libvirtd.c: Pass max_client_requests into services
* src/rpc/virnetserverservice.c, src/rpc/virnetserverservice.h: Pass
nrequests_client_max to clients
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h: Allow
configurable request limit
When a filter steals an RPC message, that message must
not be freed, except by the filter code itself
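A hedged sketch of the rule in the dispatch path (filter return
semantics assumed: positive means the message was stolen):

    rv = filter->func(client, msg, filter->opaque);
    if (rv < 0) {
        virNetMessageFree(msg);   /* error: close the connection */
    } else if (rv > 0) {
        msg = NULL;               /* stolen: the filter owns and frees it */
    }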
* src/rpc/virnetserverclient.c: Don't free stolen RPC
messages
To facilitate creation of new daemons providing XDR RPC services,
pull a lot of the libvirtd daemon code into a set of reusable
objects.
* virNetServer: A server contains one or more services which
accept incoming clients. It maintains the list of active
clients. It has a list of RPC programs which can be used
by clients. When clients produce a complete RPC message,
the server passes this onto the corresponding program for
handling, and queues any response back with the client.
* virNetServerClient: Encapsulates a single client connection.
All I/O for the client is handled, reading & writing RPC
messages.
* virNetServerProgram: Handles processing and dispatch of
RPC method calls for a single RPC (program,version).
Multiple programs can be registered with the server.
* virNetServerService: Encapsulates socket(s) listening for
new connections. Each service listens on a single host/port,
but may have multiple sockets if on a dual IPv4/6 host.
Each new daemon now merely has to define the list of RPC procedures
& their handlers. It does not need to deal with any network related
functionality at all.
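A hedged sketch of how the pieces compose into such a daemon (argument
lists abbreviated; see the respective headers for the real signatures):

    virNetServerPtr srv = virNetServerNew(/* worker & client limits */);
    virNetServerServicePtr svc = virNetServerServiceNewTCP(/* node, port */);
    virNetServerProgramPtr prog = virNetServerProgramNew(MY_PROGRAM,
                                                         MY_PROTOCOL_VERSION,
                                                         myProcs,
                                                         myNProcs);

    virNetServerAddService(srv, svc, NULL);   /* accept incoming clients */
    virNetServerAddProgram(srv, prog);        /* dispatch their RPC calls */
    virNetServerRun(srv);                     /* event loop until quit */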