This patch resolves CVE-2013-0170:
https://bugzilla.redhat.com/show_bug.cgi?id=893450
When reading and dispatching of a message failed, the message was freed
but was not removed from the message queue.
After that, when the connection was about to be closed, the pointer to
the message was still present in the queue and was passed to
virNetMessageFree, which tried to call the callback function through an
uninitialized pointer.
This patch removes the message from the queue before it's freed.
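A minimal sketch of the corrected ordering (simplified stand-in types,
not libvirt's actual message structures):

  #include <stdlib.h>

  typedef struct _Msg Msg;
  struct _Msg {
      Msg *next;
  };

  /* Pop the head of a singly linked message queue. */
  static Msg *msgQueuePop(Msg **queue)
  {
      Msg *head = *queue;
      if (head)
          *queue = head->next;
      return head;
  }

  /* On a read/dispatch error, unlink the message from the queue
   * *before* freeing it, so closing the connection later cannot
   * walk the queue into freed memory. */
  static void dropFailedMessage(Msg **rxQueue)
  {
      free(msgQueuePop(rxQueue));
  }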
* rpc/virnetserverclient.c: virNetServerClientDispatchRead:
- avoid use after free of RPC messages
The code is not reachable as of commit id: bb85f229. Removed
virKeepAliveStop() and virObjectUnref() because 'ka' cannot be
anything but NULL at the cleanup label.
Currently all classes must directly inherit from virObject.
This patch allows for an arbitrarily deep hierarchy. There's not much
to this aside from chaining up the 'dispose' handlers from
each class & providing APIs to check types.
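A rough model of the two mechanisms (chained dispose handlers plus a
derived-type check); the struct layout here is illustrative, not
virClass's real definition:

  typedef struct _Class Class;
  typedef void (*DisposeFunc)(void *obj);

  struct _Class {
      Class *parent;          /* NULL for the root class */
      DisposeFunc dispose;
  };

  /* Run each class's dispose handler, walking up the hierarchy. */
  static void objectDisposeChain(void *obj, Class *klass)
  {
      for (; klass; klass = klass->parent) {
          if (klass->dispose)
              klass->dispose(obj);
      }
  }

  /* Type check: is 'klass' the wanted class or derived from it? */
  static int classIsDerivedFrom(const Class *klass, const Class *wanted)
  {
      for (; klass; klass = klass->parent) {
          if (klass == wanted)
              return 1;
      }
      return 0;
  }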
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The libvirt coding standard is to use 'function(...args...)'
instead of 'function (...args...)'. A non-trivial number of
places did not follow this rule and are fixed in this patch.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Add two new APIs virNetServerClientNewPostExecRestart and
virNetServerClientPreExecRestart which allow a virNetServerClientPtr
object to be created from a JSON object and saved to a
JSON object, for the purpose of re-exec'ing a process.
This includes serialization of the connected socket associated
with the client.
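A sketch of the underlying idea rather than the real virJSONValue-based
API: enough state is recorded before exec (here just an invented pair of
fields, the connected fd and a readonly flag), and the fd survives the
exec because FD_CLOEXEC is cleared:

  #include <fcntl.h>
  #include <stdio.h>

  struct savedClient {
      int fd;           /* connected socket, inherited across exec() */
      int readonly;
  };

  static int clientPreExecRestart(int sockfd, int readonly,
                                  struct savedClient *out)
  {
      int flags = fcntl(sockfd, F_GETFD);
      if (flags < 0 || fcntl(sockfd, F_SETFD, flags & ~FD_CLOEXEC) < 0)
          return -1;    /* the fd must not be closed on exec */
      out->fd = sockfd;
      out->readonly = readonly;
      return 0;
  }

  static void clientNewPostExecRestart(const struct savedClient *saved)
  {
      /* After re-exec, rebuild the client object around the inherited fd. */
      printf("restoring client on fd %d (readonly=%d)\n",
             saved->fd, saved->readonly);
  }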
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
https://www.gnu.org/licenses/gpl-howto.html recommends that
the 'If not, see <url>.' phrase be a separate sentence.
* tests/securityselinuxhelper.c: Remove doubled line.
* tests/securityselinuxtest.c: Likewise.
* globally: s/; If/. If/
In preparation for adding further constructors, refactor
the virNetServerClientNew method to move most of the code
into a common virNetServerClientNewInternal helper API.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently there is a hook function that is invoked when a
new client connection comes in, which allows an app to
setup private data. This setup will make it difficult to
serialize client state during process re-exec(). Change to
a model where the app registers a callback when creating
the virNetServerPtr instance, which is used to allocate
the client private data immediately during virNetServerClientPtr
construction.
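A simplified sketch of the callback model (names and signatures are
illustrative, not the exact libvirt API):

  typedef void *(*ClientPrivNewFunc)(void *client, void *opaque);

  struct serverModel {
      ClientPrivNewFunc clientPrivNew;   /* registered at server creation */
      void *clientPrivOpaque;
  };

  struct clientModel {
      void *privateData;
  };

  static int clientConstruct(struct serverModel *srv, struct clientModel *cl)
  {
      /* Private data is allocated during construction rather than via a
       * later hook, so the client's full state exists from the start and
       * can be serialized for re-exec(). */
      if (srv->clientPrivNew) {
          cl->privateData = srv->clientPrivNew(cl, srv->clientPrivOpaque);
          if (!cl->privateData)
              return -1;
      }
      return 0;
  }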
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently the virNetServerServicePtr is responsible for
creating the virNetServerClientPtr instance when accepting
a new connection. Change this so that the virNetServerServicePtr
merely gives virNetServerPtr a virNetSocketPtr instance. The
virNetServerPtr can then create the virNetServerClientPtr
as it desires.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Since the FSF address can be changed from time to time, GNU now
recommends the following (http://www.gnu.org/licenses/gpl-howto.html):
You should have received a copy of the GNU General Public License
along with Foobar. If not, see <http://www.gnu.org/licenses/>.
This patch removes the explicit FSF address and uses the above instead
(of course, with 'Lesser' inserted before 'General').
Except for a bunch of security driver files, all others were changed
automatically; the copyright headers for the security files are not
complete, which is why they were done manually:
src/security/security_selinux.h
src/security/security_driver.h
src/security/security_selinux.c
src/security/security_apparmor.h
src/security/security_apparmor.c
src/security/security_driver.c
In the delayed close mode, we're just waiting for final data to
be written back to the client. While waiting, we should not
bother to read more data from the client.
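A small sketch of the event-mask logic; the constants stand in for
VIR_EVENT_HANDLE_READABLE/WRITABLE, and this is not the verbatim
virNetServerClientCalculateHandleMode:

  enum { HANDLE_READABLE = 1, HANDLE_WRITABLE = 2 };

  static int calculateHandleMode(int delayedClose, int haveDataToSend)
  {
      int mode = 0;
      if (!delayedClose)
          mode |= HANDLE_READABLE;   /* stop reading once close is pending */
      if (haveDataToSend)
          mode |= HANDLE_WRITABLE;   /* still flush the final reply */
      return mode;
  }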
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When a libvirt API is called from the main event loop (which seems to be
common in event-based glib apps), the client IO loop would properly
handle keepalive requests sent by a server but would not actually send
the responses because the main event loop is blocked by the API call.
This patch gets rid of the response timer; the thread which processes
keepalive requests is now also responsible for queueing responses for
delivery.
Currently, we allocate the buffer for RPC messages statically.
This is not much of a problem while the RPC limits are small. However,
if we ever want to increase those limits, we need to allocate the buffer
dynamically, based on the RPC message length (= the first 4 bytes). This
way we decrease our memory usage in most cases and still stay flexible
enough in corner cases.
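A sketch of length-prefixed allocation (plain malloc instead of
libvirt's allocation helpers): read the 4-byte length word first, then
size the buffer for that one message while still enforcing the
configured limit:

  #include <arpa/inet.h>
  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>

  static char *allocMessageBuffer(const unsigned char lenWord[4],
                                  size_t maxLen, size_t *msgLen)
  {
      uint32_t len;

      memcpy(&len, lenWord, sizeof(len));
      len = ntohl(len);          /* the XDR length prefix is big-endian */
      if (len > maxLen)
          return NULL;           /* the RPC limit still applies */
      *msgLen = len;
      return malloc(len);
  }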
To avoid a namespace clash with forthcoming identity APIs,
rename the virNet*GetLocalIdentity() APIs to have the form
virNet*GetUNIXIdentity()
* daemon/remote.c, src/libvirt_private.syms: Update
for renamed APIs
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h,
src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: s/LocalIdentity/UNIXIdentity/
The code calling sendfd/recvfd was mistakenly assuming those
calls would never block. They can in fact return EAGAIN and
this is causing us to drop the client connection when blocking
occurs while sending/receiving FDs.
Fixing this is a little hairy on the incoming side, since at
the point where we see the EAGAIN, we already thought we had
finished receiving all data for the packet. So we play a little
trick to reset bufferOffset again and go back into polling for
more data.
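A sketch of the new return convention using a plain SCM_RIGHTS
sendmsg() (the real code lives in virNetSocketSendFD and its helpers):
0 means "would block, poll and retry", 1 means the FD went out, -1 is a
real error:

  #include <errno.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  static int sendFDNonBlock(int sock, int fd)
  {
      char byte = 0;
      struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
      char control[CMSG_SPACE(sizeof(int))];
      struct msghdr msg;
      struct cmsghdr *cmsg;

      memset(control, 0, sizeof(control));
      memset(&msg, 0, sizeof(msg));
      msg.msg_iov = &iov;
      msg.msg_iovlen = 1;
      msg.msg_control = control;
      msg.msg_controllen = sizeof(control);

      cmsg = CMSG_FIRSTHDR(&msg);
      cmsg->cmsg_level = SOL_SOCKET;
      cmsg->cmsg_type = SCM_RIGHTS;
      cmsg->cmsg_len = CMSG_LEN(sizeof(int));
      memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

      if (sendmsg(sock, &msg, 0) < 0) {
          if (errno == EAGAIN || errno == EWOULDBLOCK)
              return 0;     /* blocked: keep the connection, retry later */
          return -1;        /* genuine error */
      }
      return 1;             /* FD sent */
  }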
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Update
virNetSocketSendFD/RecvFD to return 0 on EAGAIN, or 1
on success
* src/rpc/virnetclient.c: Move decoding of header & fds
out of virNetClientCallDispatch and into virNetClientIOHandleInput.
Handling blocking when sending/receiving FDs
* src/rpc/virnetmessage.h: Add a 'donefds' field to track
how many FDs we've sent / received
* src/rpc/virnetserverclient.c: Handling blocking when
sending/receiving FDs
If the daemon is using SASL, it reads client data into a cache. This
cache is big (usually 65KB) and can thus contain 2 or more messages.
However, on a socket event we can dispatch only one message. So if we
read two messages at once, the second is never dispatched, as the socket
event goes away once the cache has been filled.
Moreover, when dispatching the cache we need to remember to take care
of the client's max requests limit.
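A simplified sketch of draining the decoded cache in a loop; msgSize
stands in for the real per-message parsing, and the struct is an
invented stand-in for the SASL decode buffer:

  #include <stddef.h>

  struct saslCache {
      size_t offset;    /* decoded bytes already consumed */
      size_t length;    /* decoded bytes available */
  };

  static void drainCache(struct saslCache *cache, size_t msgSize,
                         size_t *nrequests, size_t nrequestsMax)
  {
      /* Dispatch every complete message sitting in the cache, instead of
       * only one per socket event, but stop at the request limit. */
      while (*nrequests < nrequestsMax &&
             cache->length - cache->offset >= msgSize) {
          cache->offset += msgSize;   /* stands in for decode + dispatch */
          (*nrequests)++;
      }
      if (cache->offset == cache->length)
          cache->offset = cache->length = 0;   /* cache fully consumed */
  }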
The RPC server classes are extended to allow FDs to be received
from clients with calls. There is currently no way for a
procedure to pass FDs back to the client with replies.
* daemon/remote.c, src/rpc/gendispatch.pl: Change virNetMessageHeaderPtr
param to virNetMessagePtr in dispatcher impls
* src/rpc/virnetserver.c, src/rpc/virnetserverclient.c,
src/rpc/virnetserverprogram.c, src/rpc/virnetserverprogram.h:
Extend to support FD passing
The libvirtd daemon had a few crude system tap probes. Some of
these were broken during the RPC rewrite. The new modular RPC
code is structured in a way that allows much more effective
tracing. Instead of trying to hook up the original probes,
define a new set of probes for the RPC and event code.
The master probes file is now src/probes.d. This contains
probes for the virNetServerClientPtr, virNetClientPtr, virSocketPtr,
virNetTLSContextPtr and virNetTLSSessionPtr modules. Also add
probes for the poll event loop.
The src/dtrace2systemtap.pl script can convert the probes.d
file into a libvirt_probes.stp file, making its use from systemtap
much simpler.
The src/rpc/gensystemtap.pl script can generate a set of
systemtap functions for translating RPC enum values into
printable strings. This works for all RPC header enums (program,
type, status, procedure) and also the authentication enum
The PROBE macro will automatically generate a VIR_DEBUG
statement, so any place with a PROBE can remove any existing
manual DEBUG statements.
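A rough illustration of the PROBE idea (not the exact macro in
src/internal.h): every probe site also produces a debug log line, and
when dtrace/systemtap support is compiled in, the generated LIBVIRT_*
probe function would be fired as well:

  #include <stdio.h>

  #define MY_DEBUG(fmt, ...) \
      fprintf(stderr, "%s:%d " fmt "\n", __func__, __LINE__, __VA_ARGS__)

  #define PROBE(NAME, fmt, ...)                                       \
      do {                                                            \
          MY_DEBUG("PROBE " #NAME " " fmt, __VA_ARGS__);              \
          /* with probes enabled: LIBVIRT_ ## NAME(__VA_ARGS__); */   \
      } while (0)

  /* Example probe point: */
  static void exampleProbeSite(void *client, int fd)
  {
      PROBE(RPC_SERVER_CLIENT_NEW, "client=%p fd=%d", client, fd);
  }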
* daemon/libvirtd.stp, daemon/probes.d: Remove obsolete probing
* daemon/libvirtd.h: Remove probe macros
* daemon/Makefile.am: Remove all probe buildings/install
* daemon/remote.c: Update authentication probes
* src/dtrace2systemtap.pl, src/rpc/gensystemtap.pl: Scripts
to generate STP files
* src/internal.h: Add probe macros
* src/probes.d: Master list of probes
* src/rpc/virnetclient.c, src/rpc/virnetserverclient.c,
src/rpc/virnetsocket.c, src/rpc/virnettlscontext.c,
src/util/event_poll.c: Insert probe points, removing any
DEBUG statements that duplicate the info
Commit 2c85644b0b attempted to
fix a problem with tracking RPC messages from streams by doing
- if (msg->header.type == VIR_NET_REPLY) {
+ if (msg->header.type == VIR_NET_REPLY ||
+ (msg->header.type == VIR_NET_STREAM &&
+ msg->header.status != VIR_NET_CONTINUE)) {
client->nrequests--;
In other words any stream packet with status NET_OK or NET_ERROR
would cause nrequests to be decremented. This is great if the
packet came from a synchronous virStreamFinish or virStreamAbort
API call, but wildly wrong if it came from a server-initiated abort.
The latter resulted in 'nrequests' being decremented below zero,
which then caused all I/O for that client to be stopped.
Instead of trying to infer whether we need to decrement the
nrequests field, from the message type/status, introduce an
explicit 'bool tracked' field to mark whether the virNetMessagePtr
object is subject to tracking.
Also add a virNetMessageClear function to allow a message's
contents to be cleared out without adversely impacting the
'tracked' field, as a naive memset() would do.
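A sketch of the two pieces (the struct is a cut-down stand-in for
virNetMessage): the clear helper preserves 'tracked' across the memset,
and only tracked messages decrement nrequests:

  #include <string.h>

  typedef struct {
      int tracked;             /* counts against the client request limit */
      size_t bufferLength;
      char buffer[64];         /* stand-in for the real header/payload */
  } Message;

  static void messageClear(Message *msg)
  {
      int tracked = msg->tracked;
      memset(msg, 0, sizeof(*msg));   /* wipe the contents... */
      msg->tracked = tracked;         /* ...but keep the tracking flag */
  }

  static void messageFinished(const Message *msg, size_t *nrequests)
  {
      if (msg->tracked && *nrequests > 0)
          (*nrequests)--;      /* no more guessing from type/status */
  }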
* src/rpc/virnetmessage.c, src/rpc/virnetmessage.h: Add
a 'bool tracked' field and virNetMessageClear() API
* daemon/remote.c, daemon/stream.c, src/rpc/virnetclientprogram.c,
src/rpc/virnetclientstream.c, src/rpc/virnetserverclient.c,
src/rpc/virnetserverprogram.c: Switch over to use
virNetMessageClear() and pass in the 'bool tracked' value
when creating messages.
Every active stream results in a reference being held on the
virNetServerClientPtr object. This meant that if a client quit
with any streams active, the virNetServerClientPtr object would leak
even though all I/O was stopped. This caused libvirtd to leak any file
handles associated with open streams when a client quit.
To fix this, virNetServerClientClose now invokes a callback
which lets the daemon release the streams, and thus the extra
references.
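A simplified sketch of the close-hook pattern (types and names are
illustrative, not the real virNetServerClient API):

  typedef void (*ClientCloseFunc)(void *client, void *opaque);

  struct closableClient {
      ClientCloseFunc closeHook;
      void *closeOpaque;
  };

  static void clientSetCloseHook(struct closableClient *cl,
                                 ClientCloseFunc hook, void *opaque)
  {
      cl->closeHook = hook;
      cl->closeOpaque = opaque;
  }

  static void clientClose(struct closableClient *cl)
  {
      /* Give the daemon a chance to shut down its streams, dropping the
       * per-stream references, before the client itself is torn down. */
      if (cl->closeHook)
          cl->closeHook(cl, cl->closeOpaque);
      /* ... unregister I/O, close the socket, drop the last reference ... */
  }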
* daemon/remote.c: Add a hook to close all streams
* daemon/stream.c, daemon/stream.h: Add API for releasing
all streams
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h:
Allow registration of a hook to trigger when closing client
Steps to reproduce this problem (vm1 is not running):
for i in `seq 50`; do virsh managedsave vm1& done; killall virsh
Pre-patch, virNetServerClientClose could end up setting client->sock
to NULL prior to other cleanup functions trying to use client->sock.
This fixes things by checking for NULL in more places, and by deferring
the cleanup until after all queued messages have been served.
* src/rpc/virnetserverclient.c (virNetServerClientRegisterEvent)
(virNetServerClientGetFD, virNetServerClientIsSecure)
(virNetServerClientLocalAddrString)
(virNetServerClientRemoteAddrString): Check for closed socket.
(virNetServerClientClose): Rearrange close sequence.
Analysis from Wen Congyang.
When an incoming RPC message is ready for processing,
virNetServerClientDispatchRead()
will invoke the 'dispatchFunc' callback. This is set to
virNetServerDispatchNewMessage
This function puts the message + client in a queue for processing by the thread
pool. The thread pool worker function is
virNetServerHandleJob
The first thing this does is acquire an extra reference on the 'client'.
Unfortunately, between the time the message+client are put on the thread pool
queue, and the time the worker runs, the client object may have had its last
reference removed.
We clearly need to add the reference to the client object before putting the
client on the processing queue.
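A minimal sketch of the ordering (simplified refcounting; the queue call
is only shown as a comment): the reference is taken while the dispatcher
still holds a live client, before the job is queued, and the worker no
longer adds one of its own:

  struct refClient {
      int refs;
  };

  static void clientRef(struct refClient *cl)
  {
      cl->refs++;
  }

  static int dispatchNewMessage(struct refClient *cl, void *msg)
  {
      clientRef(cl);                 /* keep the client alive for the job */
      /* queueJob(pool, cl, msg);       hand both over to the thread pool;
       * on failure the reference would be dropped again. */
      (void)msg;
      return 0;
  }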
* src/rpc/virnetserverclient.c: Add a reference to the client when
invoking the dispatch function
* src/rpc/virnetserver.c: Don't acquire a reference to the client
when in the worker thread
When unregistering an I/O callback from a virNetSocket object,
there is still a chance that an event may come in on the callback.
In this case it is possible that the virNetSocket might have been
freed already. Make use of a virFreeCallback when registering
the I/O callbacks and hold a reference for the entire time the
callback is set.
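A sketch of the lifetime rule (simplified refcounting; the event-loop
registration is shown only as a comment): the socket is referenced while
its I/O callback is registered, and the free callback run when the watch
is removed drops that reference:

  struct watchedSocket {
      int refs;
  };

  static void socketRef(struct watchedSocket *s)
  {
      s->refs++;
  }

  static void socketUnref(struct watchedSocket *s)
  {
      if (--s->refs == 0) {
          /* actually free the socket here */
      }
  }

  /* Registered as the free callback for the file handle watch. */
  static void socketWatchFreed(void *opaque)
  {
      socketUnref(opaque);   /* reference held for the callback's lifetime */
  }

  static void socketAddIOCallback(struct watchedSocket *s)
  {
      socketRef(s);          /* stray events can no longer hit freed memory */
      /* virEventAddHandle(fd, events, handler, s, socketWatchFreed); */
      (void)socketWatchFreed;
  }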
* src/rpc/virnetsocket.c: Register a free function for the
file handle watch
* src/rpc/virnetsocket.h, src/rpc/virnetserverservice.c,
src/rpc/virnetserverclient.c, src/rpc/virnetclient.c: Add
a free function for the socket I/O watches
Continuation of commit 313ac7fd, and enforce things with a syntax
check.
Technically, virNetServerClientCalculateHandleMode is not printing
a mode_t, but rather a collection of VIR_EVENT_HANDLE_* bits;
however, these bits are < 8, so there is no difference in the
output, and that was the easiest way to silence the new syntax check.
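A tiny example of the convention being enforced (an illustrative logging
call, not libvirt's VIR_DEBUG):

  #include <stdio.h>
  #include <sys/types.h>

  static void debugOpenArgs(int oflags, mode_t mode)
  {
      /* bitmask flags in hex, permission bits in octal */
      fprintf(stderr, "flags=0x%x mode=0%o\n", oflags, (unsigned int)mode);
  }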
* cfg.mk (sc_flags_debug): New syntax check.
(exclude_file_name_regexp--sc_flags_debug): Add exemptions.
* src/fdstream.c (virFDStreamOpenFileInternal): Print flags in
hex, mode_t in octal.
* src/libvirt-qemu.c (virDomainQemuMonitorCommand)
(virDomainQemuAttach): Likewise.
* src/locking/lock_driver_nop.c (virLockManagerNopInit): Likewise.
* src/locking/lock_driver_sanlock.c (virLockManagerSanlockInit):
Likewise.
* src/locking/lock_manager.c: Likewise.
* src/qemu/qemu_migration.c: Likewise.
* src/qemu/qemu_monitor.c: Likewise.
* src/rpc/virnetserverclient.c
(virNetServerClientCalculateHandleMode): Print mode with %o.
The dispatch for the CLOSE RPC call was invoking the method
virNetServerClientClose(). This caused the client connection
to be immediately terminated. This meant the reply to the
final RPC message was never sent. Prior to the RPC rewrite
we merely flagged the connection for closing, and actually
closed it when the next RPC call dispatch had completed.
* daemon/remote.c: Flag connection for a delayed close
* daemon/stream.c: Update to use new API for closing
failed connection
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h:
Add support for a delayed connection close. Rename the
virNetServerClientMarkClose method to virNetServerClientImmediateClose
to clarify its semantics
When sending back the final OK or ERROR message on completion
of a stream, we were not decrementing the 'nrequests' tracker
on the client. With the default requests limit of '5', this
meant once a client had created 5 streams, they are unable to
process any further RPC calls. There was also a bug when
handling an error from decoding a message length header, which
meant a client connection would not immediately be closed.
* src/rpc/virnetserverclient.c: Fix release of request after
stream completion & mark client for close on error
Spotted by Coverity. If we don't update tmp each time through
the loop, then if the filter being removed was not the head of
the list, we accidentally lose all filters prior to the one we
wanted to remove.
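A sketch of the corrected loop shape over a simplified singly linked
filter list; the point is that the link pointer must advance every
iteration:

  struct msgFilter {
      int id;
      struct msgFilter *next;
  };

  static void removeFilter(struct msgFilter **head, int id)
  {
      struct msgFilter **prev = head;

      while (*prev) {
          struct msgFilter *cur = *prev;
          if (cur->id == id) {
              *prev = cur->next;    /* unlink only this entry */
              /* free(cur) as appropriate */
              return;
          }
          prev = &cur->next;        /* the advance the buggy loop skipped */
      }
  }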
* src/rpc/virnetserverclient.c (virNetServerClientRemoveFilter):
Don't lose unrelated filters.
To save on memory reallocation, virNetMessage instances that
have been transmitted, may be reused for a subsequent incoming
message. We forgot to clear out the old data of the message
fully, which caused later confusion upon read.
* src/rpc/virnetserverclient.c: memset entire message before
reusing it
The virNetServerClient object had a hardcoded limit of 10 requests
per client. Extend constructor to allow it to be passed in as a
configurable variable. Wire this up to the 'max_client_requests'
config parameter in libvirtd
* daemon/libvirtd.c: Pass max_client_requests into services
* src/rpc/virnetserverservice.c, src/rpc/virnetserverservice.h: Pass
nrequests_client_max to clients
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h: Allow
configurable request limit
When a filter steals an RPC message, that message must
not be freed, except by the filter code itself
* src/rpc/virnetserverclient.c: Don't free stolen RPC
messages
To facilitate creation of new daemons providing XDR RPC services,
pull a lot of the libvirtd daemon code into a set of reusable
objects.
* virNetServer: A server contains one or more services which
accept incoming clients. It maintains the list of active
clients. It has a list of RPC programs which can be used
by clients. When clients produce a complete RPC message,
the server passes this onto the corresponding program for
handling, and queues any response back with the client.
* virNetServerClient: Encapsulates a single client connection.
All I/O for the client is handled, reading & writing RPC
messages.
* virNetServerProgram: Handles processing and dispatch of
RPC method calls for a single RPC (program,version).
Multiple programs can be registered with the server.
* virNetServerService: Encapsulates socket(s) listening for
new connections. Each service listens on a single host/port,
but may have multiple sockets if on a dual IPv4/6 host.
Each new daemon now merely has to define the list of RPC procedures
& their handlers. It does not need to deal with any network-related
functionality at all.
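As a rough outline of how a daemon built on these classes fits together
(argument lists are elided, so read this as shorthand rather than
compilable code with exact signatures):

  /* srv  = virNetServerNew(...);            create the server object     */
  /* svc  = virNetServerServiceNewTCP(...);  listener socket(s)           */
  /* virNetServerAddService(srv, svc, ...);                               */
  /* prog = virNetServerProgramNew(...);     one RPC (program, version)   */
  /* virNetServerAddProgram(srv, prog);                                   */
  /* virNetServerRun(srv);                   accept clients, read RPC     */
  /*                                         messages, dispatch, reply    */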