Commit Graph

118 Commits

Author SHA1 Message Date
Daniel P. Berrange
b2c6231647 Fix sending/receiving of FDs when stream returns EAGAIN
The code calling sendfd/recvfd was mistakenly assuming those
calls would never block. They can in fact return EAGAIN and
this is causing us to drop the client connection when blocking
occurs while sending/receiving FDs.

Fixing this is a little hairy on the incoming side, since at
the point where we see the EAGAIN, we already thought we had
finished receiving all data for the packet. So we play a little
trick to reset bufferOffset again and go back into polling for
more data.

* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Update
  virNetSocketSendFD/RecvFD to return 0 on EAGAIN, or 1
  on success
* src/rpc/virnetclient.c: Move decoding of header & fds
  out of virNetClientCallDispatch and into virNetClientIOHandleInput.
  Handle blocking when sending/receiving FDs
* src/rpc/virnetmessage.h: Add a 'donefds' field to track
  how many FDs we've sent / received
* src/rpc/virnetserverclient.c: Handle blocking when
  sending/receiving FDs
2011-11-07 09:26:59 +00:00
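
As a rough illustration of the 0/1/-1 convention the commit above describes, a
non-blocking FD send over a Unix socket might look like the sketch below;
send_fd_once is a hypothetical helper, not the actual virNetSocketSendFD
implementation.

/* Sketch of the 0/1/-1 convention described above; send_fd_once is a
 * hypothetical helper, not libvirt's virNetSocketSendFD. */
#include <errno.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int
send_fd_once(int sock, int fd)
{
    struct msghdr msg;
    struct iovec iov;
    char byte = 0;
    char control[CMSG_SPACE(sizeof(int))];
    struct cmsghdr *cmsg;

    memset(&msg, 0, sizeof(msg));
    memset(control, 0, sizeof(control));

    /* At least one byte of real data must accompany the ancillary FD */
    iov.iov_base = &byte;
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = control;
    msg.msg_controllen = sizeof(control);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    if (sendmsg(sock, &msg, 0) < 0) {
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return 0;   /* not an error: caller retries when socket is writable */
        return -1;      /* genuine failure */
    }
    return 1;           /* FD transmitted */
}
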
Michal Privoznik
b7b5e0c833 ServerClient: Flush cached data
If the daemon is using SASL it reads client data into a cache. This
cache is big (usually 65KB) and can thus contain two or more messages.
However, on a socket event we can dispatch only one message, so if we
read two messages at once the second will never be dispatched: once the
data is already sitting in the cache, no further socket event will
arrive to trigger it.
Moreover, when dispatching from the cache we need to remember to honour
the client's max requests limit.
2011-11-01 15:58:05 +01:00
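
A loose sketch of the flushing behaviour described above; all names
(demo_client, demo_flush_cache, the field names) are illustrative, not
libvirt's.

/* Hypothetical sketch, not libvirt's API: drain complete messages from a
 * SASL read-ahead cache while honouring the per-client request limit. */
#include <stddef.h>

struct demo_client {
    size_t nrequests;       /* requests currently being processed */
    size_t nrequests_max;   /* e.g. max_client_requests from libvirtd.conf */
    size_t cached;          /* complete messages sitting in the cache */
};

static void
demo_flush_cache(struct demo_client *c)
{
    /* Without this loop, only the message that triggered the socket
     * event would be dispatched; anything else already read into the
     * cache would wait indefinitely for another event. */
    while (c->cached > 0 && c->nrequests < c->nrequests_max) {
        c->cached--;
        c->nrequests++;     /* message handed off for dispatch */
    }
}
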
Daniel P. Berrange
3ae0ab67e6 Extend RPC server to allow FD passing
The RPC server classes are extended to allow FDs to be received
from clients with calls. There is currently no way for a
procedure to pass FDs back to the client with replies.

* daemon/remote.c, src/rpc/gendispatch.pl: Change virNetMessageHeaderPtr
  param to virNetMessagePtr in dispatcher impls
* src/rpc/virnetserver.c, src/rpc/virnetserverclient.c,
  src/rpc/virnetserverprogram.c, src/rpc/virnetserverprogram.h:
  Extend to support FD passing
2011-10-28 10:43:00 +01:00
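
Roughly, the point of passing virNetMessagePtr instead of just the header is
that a dispatcher can then reach the FDs that arrived with the call. An
illustrative (non-libvirt) handler might look like this:

/* Illustrative only: with the whole message passed in, a dispatcher can
 * reach the FDs received alongside the call.  Demo types, not libvirt's. */
#include <stddef.h>

typedef struct {
    unsigned proc;        /* decoded header fields (placeholder)      */
    size_t nfds;          /* file descriptors received with the call  */
    int *fds;
} demo_message;

/* New-style handler: takes the message, not just its header */
static int
demo_dispatch_open_tunnel(void *client, demo_message *msg)
{
    (void)client;
    if (msg->nfds < 1)
        return -1;        /* this procedure expects one FD from the client */
    int fd = msg->fds[0];
    /* ... use fd ... */
    (void)fd;
    return 0;
}
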
Daniel P. Berrange
ddf3bd32ce Rewrite all the DTrace/SystemTAP probing
The libvirtd daemon had a few crude SystemTap probes. Some of
these were broken during the RPC rewrite. The new modular RPC
code is structured in a way that allows much more effective
tracing. Instead of trying to hook up the original probes,
define a new set of probes for the RPC and event code.

The master probes file is now src/probes.d.  This contains
probes for virNetServerClientPtr, virNetClientPtr, virSocketPtr,
virNetTLSContextPtr and virNetTLSSessionPtr modules. Also add
probes for the poll event loop.

The src/dtrace2systemtap.pl script can convert the probes.d
file into a libvirt_probes.stp file to make using the probes
from systemtap much simpler.

The src/rpc/gensystemtap.pl script can generate a set of
systemtap functions for translating RPC enum values into
printable strings. This works for all RPC header enums (program,
type, status, procedure) and also the authentication enum

The PROBE macro will automatically generate a VIR_DEBUG
statement, so any place with a PROBE can remove any existing
manual DEBUG statements.

* daemon/libvirtd.stp, daemon/probes.d: Remove obsolete probing
* daemon/libvirtd.h: Remove probe macros
* daemon/Makefile.am: Remove all probe build/install rules
* daemon/remote.c: Update authentication probes
* src/dtrace2systemtap.pl, src/rpc/gensystemtap.pl: Scripts
  to generate STP files
* src/internal.h: Add probe macros
* src/probes.d: Master list of probes
* src/rpc/virnetclient.c, src/rpc/virnetserverclient.c,
  src/rpc/virnetsocket.c, src/rpc/virnettlscontext.c,
  src/util/event_poll.c: Insert probe points, removing any
  DEBUG statements that duplicate the info
2011-10-11 11:26:13 +01:00
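
The PROBE idea could be approximated as below; this is a simplified stand-in,
not the real macro from src/internal.h, and the probe name and fields are made
up.

/* Simplified stand-in for the idea behind the PROBE macro; the real macro
 * additionally expands to the static probe generated from src/probes.d.
 * This only demonstrates why separate debug statements become redundant:
 * the probe site itself emits a debug line. */
#include <stdio.h>

#define DEMO_PROBE(NAME, FMT, ...)                                   \
    do {                                                             \
        fprintf(stderr, "PROBE %s " FMT "\n", #NAME, __VA_ARGS__);   \
        /* the real macro would also fire the generated static probe here */ \
    } while (0)

int main(void)
{
    int fd = 7;
    DEMO_PROBE(rpc_server_client_new, "fd=%d", fd);   /* made-up probe name */
    return 0;
}
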
Daniel P. Berrange
b3fb288e52 Fix tracking of RPC messages wrt streams
Commit 2c85644b0b attempted to
fix a problem with tracking RPC messages from streams by doing

-            if (msg->header.type == VIR_NET_REPLY) {
+            if (msg->header.type == VIR_NET_REPLY ||
+                (msg->header.type == VIR_NET_STREAM &&
+                 msg->header.status != VIR_NET_CONTINUE)) {
                 client->nrequests--;

In other words, any stream packet with status NET_OK or NET_ERROR
would cause nrequests to be decremented. This is fine if the
packet comes from a synchronous virStreamFinish or virStreamAbort
API call, but wildly wrong if it comes from a server-initiated abort.
The latter resulted in 'nrequests' being decremented below zero.
This then causes all I/O for that client to be stopped.

Instead of trying to infer whether we need to decrement the
nrequests field, from the message type/status, introduce an
explicit 'bool tracked' field to mark whether the virNetMessagePtr
object is subject to tracking.

Also add a virNetMessageClear function to allow a message's
contents to be cleared out without clobbering the 'tracked'
field, as a naive memset() would do.

* src/rpc/virnetmessage.c, src/rpc/virnetmessage.h: Add
  a 'bool tracked' field and virNetMessageClear() API
* daemon/remote.c, daemon/stream.c, src/rpc/virnetclientprogram.c,
  src/rpc/virnetclientstream.c, src/rpc/virnetserverclient.c,
  src/rpc/virnetserverprogram.c: Switch over to use
  virNetMessageClear() and pass in the 'bool tracked' value
  when creating messages.
2011-09-01 10:52:35 +01:00
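
Why virNetMessageClear is needed rather than a plain memset is easy to show
with a toy structure; the layout below is illustrative, not the real
virNetMessage.

/* Illustrative only: clearing a reusable message while preserving the
 * tracking flag, which a whole-struct memset would wipe out. */
#include <stdbool.h>
#include <string.h>

struct demo_message {
    bool tracked;           /* counted against client->nrequests? */
    char header[24];        /* decoded RPC header (placeholder)   */
    char buffer[65536];     /* wire data (placeholder size)       */
    size_t bufferLength;
    size_t bufferOffset;
};

static void
demo_message_clear(struct demo_message *msg)
{
    bool tracked = msg->tracked;    /* save the flag ...          */
    memset(msg, 0, sizeof(*msg));   /* ... wipe everything else ... */
    msg->tracked = tracked;         /* ... and restore it          */
}
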
Daniel P. Berrange
f682c25308 Ensure client streams are closed when marking a client for close
Every active stream results in a reference being held on the
virNetServerClientPtr object. This meant that if a client quit
with any streams active, then although all I/O was stopped, the
virNetServerClientPtr object would leak. This caused libvirtd
to leak any file handles associated with open streams whenever a
client quit.

To fix this, when we call virNetServerClientClose there is a
callback invoked which lets the daemon release the streams
and thus the extra references

* daemon/remote.c: Add a hook to close all streams
* daemon/stream.c, daemon/stream.h: Add API for releasing
  all streams
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h:
  Allow registration of a hook to trigger when closing client
2011-08-16 14:38:11 -07:00
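
A hedged sketch of the close-hook pattern described above, using made-up names
rather than the virNetServerClient API:

/* Hypothetical sketch of the close-hook pattern, not libvirt's API. */
typedef struct demo_client demo_client;
typedef void (*demo_close_hook)(demo_client *client, void *opaque);

struct demo_client {
    int refs;
    demo_close_hook closeHook;      /* invoked once when the client closes */
    void *closeOpaque;
};

static void
demo_client_set_close_hook(demo_client *client, demo_close_hook hook, void *opaque)
{
    client->closeHook = hook;
    client->closeOpaque = opaque;
}

static void
demo_client_close(demo_client *client)
{
    /* Let the daemon release its streams (and the references they hold)
     * before the normal teardown continues. */
    if (client->closeHook)
        client->closeHook(client, client->closeOpaque);
    /* ... socket shutdown, unref, etc. ... */
}
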
Eric Blake
8d0be8fd57 rpc: avoid null deref
Detected by Coverity.

* src/rpc/virnetserverclient.c (virNetServerClientDispatchRead):
Avoid null deref on OOM.
2011-08-02 16:39:01 -06:00
Eric Blake
4e006b844f rpc: avoid libvirtd crash on unexpected client close
Steps to reproduce this problem (vm1 is not running):
for i in `seq 50`; do virsh managedsave vm1& done; killall virsh

Pre-patch, virNetServerClientClose could end up setting client->sock
to NULL prior to other cleanup functions trying to use client->sock.
This fixes things by checking for NULL in more places, and by deferring
the cleanup until after all queued messages have been served.

* src/rpc/virnetserverclient.c (virNetServerClientRegisterEvent)
(virNetServerClientGetFD, virNetServerClientIsSecure)
(virNetServerClientLocalAddrString)
(virNetServerClientRemoteAddrString): Check for closed socket.
(virNetServerClientClose): Rearrange close sequence.
Analysis from Wen Congyang.
2011-08-02 07:46:37 -06:00
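
The defensive checks amount to treating a NULL socket as "already closed" in
every accessor; a minimal illustration with demo types, not libvirt's:

/* Rough illustration of the NULL-socket guard; demo types only. */
struct demo_sock { int fd; };
struct demo_client { struct demo_sock *sock; };

static int
demo_client_get_fd(struct demo_client *client)
{
    /* client->sock may already have been cleared by a close racing
     * with queued work, so never dereference it unconditionally. */
    if (!client->sock)
        return -1;
    return client->sock->fd;
}
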
Daniel P. Berrange
0302391ee6 Fix race in ref counting when handling RPC jobs
When an incoming RPC message is ready for processing,

  virNetServerClientDispatchRead()

will invoke the 'dispatchFunc' callback. This is set to

  virNetServerDispatchNewMessage

This function puts the message + client in a queue for processing by the thread
pool. The thread pool worker function is

  virNetServerHandleJob

The first thing this does is acquire an extra reference on the 'client'.

Unfortunately, between the time the message+client are put on the thread pool
queue, and the time the worker runs, the client object may have had its last
reference removed.

We clearly need to add the reference to the client object before putting the
client on the processing queue

* src/rpc/virnetserverclient.c: Add a reference to the client when
  invoking the dispatch function
* src/rpc/virnetserver.c: Don't acquire a reference to the client
  when in the worker thread
2011-07-26 16:22:32 +01:00
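
The fix boils down to taking the client reference on the producer side, before
the job is queued, rather than in the worker. A minimal sketch with made-up
names:

/* Minimal sketch of "ref before queue"; demo types, not libvirt's API. */
typedef struct { int refs; } demo_client;
typedef struct { int serial; } demo_msg;

static void demo_client_ref(demo_client *c)   { c->refs++; }
static void demo_client_unref(demo_client *c) { c->refs--; }

/* Stand-in for handing the job to the thread pool */
static int demo_threadpool_queue(demo_client *c, demo_msg *m)
{ (void)c; (void)m; return 0; }

static int
demo_dispatch_new_message(demo_client *client, demo_msg *msg)
{
    /* Take the reference *now*: between queueing and the worker
     * running, the last other reference could be dropped. */
    demo_client_ref(client);
    if (demo_threadpool_queue(client, msg) < 0) {
        demo_client_unref(client);   /* queueing failed, give it back */
        return -1;
    }
    return 0;
}

static void
demo_handle_job(demo_client *client, demo_msg *msg)
{
    /* The worker no longer takes its own reference; it only drops
     * the one acquired when the job was queued. */
    (void)msg;                       /* ... process msg ... */
    demo_client_unref(client);
}
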
Daniel P. Berrange
7ea2ef4ce8 Use a virFreeCallback on virNetSocket to ensure safe release
When unregistering an I/O callback from a virNetSocket object,
there is still a chance that an event may come in on the callback.
In this case it is possible that the virNetSocket might have been
freed already. Make use of a virFreeCallback when registering
the I/O callbacks and hold a reference for the entire time the
callback is set.

* src/rpc/virnetsocket.c: Register a free function for the
  file handle watch
* src/rpc/virnetsocket.h, src/rpc/virnetserverservice.c,
  src/rpc/virnetserverclient.c, src/rpc/virnetclient.c: Add
  a free function for the socket I/O watches
2011-07-19 11:20:27 -04:00
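
The lifecycle trick, holding a reference for as long as the I/O watch exists
and dropping it from a free callback, might be illustrated as follows
(hypothetical names, not the virNetSocket or virEvent APIs):

/* Hypothetical illustration of pairing an I/O watch with a free callback
 * that drops the reference held for the watch's lifetime. */
typedef struct { int refs; int fd; } demo_socket;
typedef void (*demo_free_cb)(void *opaque);

static void demo_socket_ref(demo_socket *s)   { s->refs++; }
static void demo_socket_unref(demo_socket *s) { s->refs--; }

/* Stand-in for the event loop's "add handle" API */
static int demo_event_add_handle(int fd, void *opaque, demo_free_cb ff)
{ (void)fd; (void)opaque; (void)ff; return 0; }

/* Called by the event loop only after the watch can no longer fire */
static void demo_socket_watch_freed(void *opaque)
{
    demo_socket_unref(opaque);
}

static int demo_socket_add_io_callback(demo_socket *sock)
{
    demo_socket_ref(sock);           /* keep the socket alive for the watch */
    if (demo_event_add_handle(sock->fd, sock, demo_socket_watch_freed) < 0) {
        demo_socket_unref(sock);
        return -1;
    }
    return 0;
}
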
Eric Blake
06dbfa6d21 maint: print flags in hex during debug
Continuation of commit 313ac7fd; enforce the convention with a
syntax check.

Technically, virNetServerClientCalculateHandleMode is not printing
a mode_t, but rather a collection of VIR_EVENT_HANDLE_* bits;
however, these bits are < 8, so there is no difference in the
output, and that was the easiest way to silence the new syntax check.

* cfg.mk (sc_flags_debug): New syntax check.
(exclude_file_name_regexp--sc_flags_debug): Add exemptions.
* src/fdstream.c (virFDStreamOpenFileInternal): Print flags in
hex, mode_t in octal.
* src/libvirt-qemu.c (virDomainQemuMonitorCommand)
(virDomainQemuAttach): Likewise.
* src/locking/lock_driver_nop.c (virLockManagerNopInit): Likewise.
* src/locking/lock_driver_sanlock.c (virLockManagerSanlockInit):
Likewise.
* src/locking/lock_manager.c: Likewise.
* src/qemu/qemu_migration.c: Likewise.
* src/qemu/qemu_monitor.c: Likewise.
* src/rpc/virnetserverclient.c
(virNetServerClientCalculateHandleMode): Print mode with %o.
2011-07-13 07:47:45 -06:00
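
The convention being enforced, flags in hex and mode_t in octal, is just a
matter of conversion specifiers, for example:

/* Flags read naturally as bit patterns in hex; permissions in octal. */
#include <stdio.h>
#include <sys/stat.h>
#include <fcntl.h>

int main(void)
{
    int flags = O_WRONLY | O_CREAT | O_TRUNC;
    mode_t mode = 0644;

    printf("flags=0x%x mode=0%o\n", (unsigned)flags, (unsigned)mode);
    return 0;
}
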
Daniel P. Berrange
3cfdc57b85 Fix sending of reply to final RPC message
The dispatch for the CLOSE RPC call was invoking the method
virNetServerClientClose(). This caused the client connection
to be immediately terminated. This meant the reply to the
final RPC message was never sent. Prior to the RPC rewrite
we merely flagged the connection for closing, and actually
closed it when the next RPC call dispatch had completed.

* daemon/remote.c: Flag connection for a delayed close
* daemon/stream.c: Update to use new API for closing
  failed connection
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h:
  Add support for a delayed connection close. Rename the
  virNetServerClientMarkClose method to virNetServerClientImmediateClose
  to clarify its semantics
2011-07-08 16:19:57 +01:00
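
A minimal sketch of the delayed-close idea, assuming a simple flag checked
after the reply has been sent; field and function names are made up, not
libvirt's:

/* Minimal sketch of delayed close: the handler only sets a flag, and the
 * connection is torn down after the reply has been written.  Demo names. */
#include <stdbool.h>

typedef struct {
    bool delayedClose;      /* close once current RPC I/O completes */
    bool wantClose;         /* close immediately */
} demo_client;

/* CLOSE RPC handler: don't kill the connection here or the reply is lost */
static void demo_dispatch_close(demo_client *client)
{
    client->delayedClose = true;
}

/* Called once the reply for the current call has been fully sent */
static void demo_message_complete(demo_client *client)
{
    if (client->delayedClose)
        client->wantClose = true;   /* now it is safe to really close */
}
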
Daniel P. Berrange
2c85644b0b Fix release of outgoing stream confirmation/abort message
When sending back the final OK or ERROR message on completion
of a stream, we were not decrementing the 'nrequests' tracker
on the client. With the default requests limit of '5', this
meant that once a client had created 5 streams, it was unable to
process any further RPC calls.  There was also a bug when
handling an error from decoding a message length header, which
meant a client connection would not immediately be closed.

* src/rpc/virnetserverclient.c: Fix release of request after
  stream completion & mark client for close on error
2011-07-08 16:19:57 +01:00
Eric Blake
2aa83b43d3 rpc: fix logic bug
Spotted by Coverity.  If we don't update tmp each time through
the loop, then if the filter being removed was not the head of
the list, we accidentally lose all filters prior to the one we
wanted to remove.

* src/rpc/virnetserverclient.c (virNetServerClientRemoveFilter):
    Don't lose unrelated filters.
2011-07-04 09:45:21 +08:00
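
The underlying mistake is the classic singly-linked-list removal bug: the
"previous" pointer must advance on every iteration. A generic corrected loop
(demo types, not the virNetServerClient filter structures):

/* Generic corrected removal loop for a singly linked list; demo types. */
#include <stddef.h>

struct demo_filter {
    int id;
    struct demo_filter *next;
};

static void
demo_remove_filter(struct demo_filter **head, int id)
{
    struct demo_filter *prev = NULL;
    struct demo_filter *cur = *head;

    while (cur) {
        if (cur->id == id) {
            if (prev)
                prev->next = cur->next;   /* unlink from the middle/end */
            else
                *head = cur->next;        /* removing the head */
            break;
        }
        prev = cur;      /* the missing step: advance prev every iteration */
        cur = cur->next;
    }
}
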
Daniel P. Berrange
d840fe93b0 Ensure RPC message is cleared before being reused
To save on memory reallocation, virNetMessage instances that
have been transmitted, may be reused for a subsequent incoming
message. We forgot to clear out the old data of the message
fully, which caused later confusion upon read.

* src/rpc/virnetserverclient.c: memset entire message before
  reusing it
2011-06-30 18:04:01 +01:00
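
Conceptually the fix is just to wipe the whole structure before the buffer is
reused for the next incoming message, for example (toy struct; the later
commit b3fb288e52 replaces the plain memset with virNetMessageClear so the
'tracked' flag survives):

/* Sketch of reusing a message buffer: wipe it fully before the next
 * incoming message, so stale header/offset data cannot leak through. */
#include <string.h>

struct demo_message {
    char header[24];
    char buffer[65536];
    size_t bufferLength;
    size_t bufferOffset;
};

static void demo_message_recycle(struct demo_message *msg)
{
    memset(msg, 0, sizeof(*msg));
}
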
Daniel P. Berrange
27111b350f Fix hardcoded limit on client requests in RPC code
The virNetServerClient object had a hardcoded limit of 10 requests
per client. Extend the constructor to allow this limit to be passed in as a
configurable variable. Wire this up to the 'max_client_requests'
config parameter in libvirtd

* daemon/libvirtd.c: Pass max_client_requests into services
* src/rpc/virnetserverservice.c, src/rpc/virnetserverservice.h: Pass
  nrequests_client_max to clients
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h: Allow
  configurable request limit
2011-06-30 18:04:01 +01:00
Daniel P. Berrange
c9ede1cfba Avoid free'ing a filtered RPC message in the server
When a filter steals an RPC message, that message must
not be freed, except by the filter code itself

* src/rpc/virnetserverclient.c: Don't free stolen RPC
  messages
2011-06-29 11:08:59 +01:00
Daniel P. Berrange
4e00b1da8a Introduce generic RPC server objects
To facilitate creation of new daemons providing XDR RPC services,
pull a lot of the libvirtd daemon code into a set of reusable
objects.

 * virNetServer: A server contains one or more services which
   accept incoming clients. It maintains the list of active
   clients. It has a list of RPC programs which can be used
   by clients. When clients produce a complete RPC message,
   the server passes this onto the corresponding program for
   handling, and queues any response back with the client.

 * virNetServerClient: Encapsulates a single client connection.
   All I/O for the client is handled, reading & writing RPC
   messages.

 * virNetServerProgram: Handles processing and dispatch of
   RPC method calls for a single RPC (program,version).
   Multiple programs can be registered with the server.

 * virNetServerService: Encapsulates socket(s) listening for
   new connections. Each service listens on a single host/port,
   but may have multiple sockets if on a dual IPv4/6 host.

Each new daemon now merely has to define the list of RPC procedures
& their handlers. It does not need to deal with any network related
functionality at all.
2011-06-24 11:48:37 +01:00
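
Putting the pieces together, the object model described above can be
summarised with a toy structure; the types below are illustrative only, not
the real virNetServer* definitions:

/* Illustrative structure of the object model only; not libvirt's real types. */
#include <stddef.h>

typedef struct { int fd; } demo_service;                      /* listening socket(s)        */
typedef struct { unsigned program, version; } demo_program;   /* RPC (program, version) dispatch */
typedef struct { int fd; } demo_client;                       /* one client connection      */

typedef struct {
    demo_service *services;  size_t nservices;   /* where new clients come from   */
    demo_program *programs;  size_t nprograms;   /* how complete calls are routed */
    demo_client  *clients;   size_t nclients;    /* currently connected clients   */
} demo_server;
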