In the socket event handler for the RPC client we must deal with
read/write events before checking for EOF; otherwise we might close
the socket before we've read and acted upon the last RPC messages.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
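As an illustration of the ordering described above, here is a minimal, self-contained sketch (with made-up names rather than the actual virNetClient handler): readable and writable events are drained first, and only then is the hangup/EOF condition acted on.

    #include <poll.h>
    #include <stdio.h>

    /* Illustrative stand-ins for the real RPC read/write/close handlers */
    static void demo_handle_input(void)  { printf("reading pending RPC messages\n"); }
    static void demo_handle_output(void) { printf("flushing queued RPC messages\n"); }
    static void demo_handle_close(void)  { printf("closing the socket\n"); }

    /* Drain readable/writable events first; only then honour hangup/EOF,
     * so the last RPC messages are read and dispatched before teardown. */
    static void demo_socket_event(short revents)
    {
        if (revents & POLLIN)
            demo_handle_input();

        if (revents & POLLOUT)
            demo_handle_output();

        if (revents & (POLLHUP | POLLERR))
            demo_handle_close();
    }

    int main(void)
    {
        /* Simulate a final burst of data arriving together with hangup */
        demo_socket_event(POLLIN | POLLHUP);
        return 0;
    }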
Allow detection of socket close in virNetClient via a callback
function, triggered on any condition that causes the socket to
be closed.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
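A sketch of the kind of close-notification hook described above; the type and function names below are illustrative only, not the actual virNetClient API.

    #include <stdio.h>

    /* Illustrative close-reason values */
    typedef enum {
        DEMO_CLOSE_REASON_EOF,
        DEMO_CLOSE_REASON_ERROR,
        DEMO_CLOSE_REASON_KEEPALIVE,
    } demo_close_reason;

    typedef struct {
        void (*close_cb)(demo_close_reason reason, void *opaque);
        void *close_opaque;
    } demo_client;

    /* Register a callback invoked on any condition that closes the socket */
    static void demo_client_set_close_callback(demo_client *client,
                                               void (*cb)(demo_close_reason, void *),
                                               void *opaque)
    {
        client->close_cb = cb;
        client->close_opaque = opaque;
    }

    /* Every code path that closes the connection funnels through here */
    static void demo_client_mark_closed(demo_client *client, demo_close_reason reason)
    {
        if (client->close_cb)
            client->close_cb(reason, client->close_opaque);
    }

    static void on_close(demo_close_reason reason, void *opaque)
    {
        (void)opaque;
        printf("connection closed, reason=%d\n", (int)reason);
    }

    int main(void)
    {
        demo_client client = { NULL, NULL };
        demo_client_set_close_callback(&client, on_close, NULL);
        demo_client_mark_closed(&client, DEMO_CLOSE_REASON_EOF);
        return 0;
    }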
Currently if the keepalive timer triggers, the 'markClose'
flag is set on the virNetClient. A controlled shutdown will
then be performed. If an I/O error occurs while reading from or
writing to the connection, an error is raised back to the
caller, but the connection isn't marked for close. This
patch ensures that all I/O error scenarios always result
in the connection being marked for close.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
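A hedged sketch of the behaviour described above, with a hypothetical flag and helper rather than libvirt's internals: any read failure (or EOF) both reports the error and marks the connection so that a controlled shutdown follows.

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    typedef struct {
        int fd;
        bool want_close;   /* hypothetical equivalent of the 'markClose' flag */
    } demo_client;

    /* Read from the connection; on any I/O error, mark the client for
     * a controlled close in addition to reporting the error. */
    static ssize_t demo_client_read(demo_client *client, void *buf, size_t len)
    {
        ssize_t got = read(client->fd, buf, len);

        if (got < 0 && errno != EAGAIN && errno != EINTR) {
            fprintf(stderr, "read error: %s\n", strerror(errno));
            client->want_close = true;   /* previously only set by the keepalive timeout */
        }
        if (got == 0)
            client->want_close = true;   /* EOF also triggers a controlled shutdown */

        return got;
    }

    int main(void)
    {
        char buf[64];
        demo_client client = { .fd = -1, .want_close = false };  /* invalid fd forces an error */

        demo_client_read(&client, buf, sizeof(buf));
        printf("want_close=%d\n", client.want_close);
        return 0;
    }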
Since the FSF address can change from time to time, GNU now
recommends the following wording (http://www.gnu.org/licenses/gpl-howto.html):
You should have received a copy of the GNU General Public License
along with Foobar. If not, see <http://www.gnu.org/licenses/>.
This patch removes the explicit FSF address and uses the above instead
(of course, inserting 'Lesser' before 'General').
Except for a handful of security driver files, all others were changed
automatically; the copyright headers in the security files are not
complete, which is why they were updated manually:
src/security/security_selinux.h
src/security/security_driver.h
src/security/security_selinux.c
src/security/security_apparmor.h
src/security/security_apparmor.c
src/security/security_driver.c
First, poll() can't return EWOULDBLOCK, and second, we're checking errno
so far away from the poll() call that we've probably already trashed the
original errno value.
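A small sketch of the errno discipline implied above (an illustrative wrapper, not the actual code): capture errno immediately after poll() returns, before any other call can clobber it; poll() itself never sets EWOULDBLOCK.

    #include <errno.h>
    #include <poll.h>
    #include <stdio.h>
    #include <string.h>

    /* Keep the errno check right next to poll(), before logging or any
     * other library call can overwrite it. */
    static int demo_poll_once(struct pollfd *fds, nfds_t nfds, int timeout_ms)
    {
        int ret = poll(fds, nfds, timeout_ms);
        int saved_errno = errno;            /* snapshot immediately */

        if (ret < 0) {
            if (saved_errno == EINTR)
                return 0;                   /* interrupted; caller may retry */
            fprintf(stderr, "poll failed: %s\n", strerror(saved_errno));
            return -1;
        }
        return ret;
    }

    int main(void)
    {
        struct pollfd pfd = { .fd = 0, .events = POLLIN };
        int n = demo_poll_once(&pfd, 1, 0);
        printf("poll returned %d\n", n);
        return 0;
    }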
In addition to keepalive responses, we also need to send keepalive
requests from the client IO loop to properly detect a dead connection in
case a libvirt API is called from the main loop, which prevents any timers
from firing.
The previous commit removed the only usage of the 'all' parameter in
virKeepAliveStopInternal, which was actually the only reason for having
virKeepAliveStopInternal. This effectively reverts most of commit
6446a9e20c.
When a libvirt API is called from the main event loop (which seems to be
common in event-based glib apps), the client IO loop would properly
handle keepalive requests sent by a server but would not actually send
the responses because the main event loop is blocked by the API call. This
patch gets rid of the response timer; the thread that processes keepalive
requests is now also responsible for queueing the responses for delivery.
As non-blocking calls are no longer dropped, we don't really need to
care that much about their fate and wait for the thread with the buck
to process them. If another thread has the buck, we can just push a
non-blocking call to the queue and be done with it.
So far, we have been dropping non-blocking calls whenever sending them would
block. When a client is sending lots of stream calls (which are not
supposed to generate any reply), the assumption that having other calls
in a queue is sufficient to get a reply from the server doesn't work. I
tried to fix this in b1e374a7ac but
failed and reverted that commit.
With this patch, non-blocking calls are never dropped (unless the
connection is being closed) and will always be sent.
Normally, when every call has a thread associated with it, the thread
may get the buck and be in charge of sending all calls until its own
call is done. When we introduced non-blocking calls, we had to add
special handling of new non-blocking calls. This patch uses the event loop
to send data if there is no thread to take the buck, so that any
non-blocking calls left in the queue are properly sent without having to
handle them specially. It also avoids adding even more cruft to the client
IO loop in the following patches.
With this change in, non-blocking calls may see unpredictable delays in
delivery when the client has no event loop registered. However, the only
non-blocking calls we have are keepalives and we already require an event
loop for them, which makes this a non-issue until someone introduces new
non-blocking calls.
Currently, we allocate the buffer for RPC messages statically. This is
not such a pain while the RPC limits are small. However, if we ever want
to increase those limits, we need to allocate the buffer dynamically,
based on the RPC message length (the first 4 bytes). This decreases our
memory usage in most cases while remaining flexible enough for the
corner cases.
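A hedged sketch of the scheme described above, with illustrative names and an assumed upper bound: read the fixed 4-byte length word first and size the message buffer from it rather than from a static maximum.

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define DEMO_MSG_LEN_MAX (16 * 1024 * 1024)  /* illustrative upper bound */

    /* Given the first 4 bytes of an RPC message (big-endian total length),
     * allocate a buffer of exactly the right size instead of a fixed-size one. */
    static char *demo_alloc_msg_buffer(const unsigned char lenword[4], uint32_t *msg_len)
    {
        uint32_t len;

        memcpy(&len, lenword, sizeof(len));
        len = ntohl(len);

        if (len < 4 || len > DEMO_MSG_LEN_MAX) {
            fprintf(stderr, "bogus message length %u\n", (unsigned)len);
            return NULL;
        }

        *msg_len = len;
        return malloc(len);              /* grows/shrinks with the actual message */
    }

    int main(void)
    {
        unsigned char lenword[4];
        uint32_t len = 0;
        uint32_t wire = htonl(1024);
        char *buf;

        memcpy(lenword, &wire, sizeof(wire));
        buf = demo_alloc_msg_buffer(lenword, &len);
        printf("allocated %u bytes: %s\n", (unsigned)len, buf ? "ok" : "failed");
        free(buf);
        return 0;
    }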
This reverts commit b1e374a7ac, which was
rather bad since I failed to consider all sides of the issue. The main
things I didn't consider properly are:
- a thread which sends a non-blocking call waits for the thread with
the buck to process the call
- the code doesn't expect non-blocking calls to remain in the queue
unless they were already partially sent
Thus, the reverted patch actually breaks more than it fixes, and
clients (which may even be libvirtd during p2p migrations) will likely
end up in a deadlock.
Currently, non-blocking calls are either sent immediately or discarded
in case sending would block. This was implemented based on the
assumption that the non-blocking keepalive call is not needed as there
are other calls in the queue which would keep the connection alive.
However, if those calls are no-reply calls (such as those carrying
stream data), the remote party knows the connection is alive but since
we don't get any reply from it, we think the connection is dead.
This is most visible in tunnelled migration. If the migration takes longer
than the keepalive timeout (30s by default), it may be unexpectedly aborted
because the connection is considered dead.
With this patch, we only discard non-blocking calls when the last call
with a thread is completed and thus there is no thread left to keep
sending the remaining non-blocking calls.
The docs for virConnectSetKeepAlive() advertise that this function
should be able to disable keepalives on a negative or zero interval.
This patch removes the check that prohibited this and adds code to
disable keepalives on a negative/zero interval.
* src/libvirt.c: virConnectSetKeepAlive(): - remove check for negative
values
* src/rpc/virnetclient.c
* src/rpc/virnetclient.h: - add virNetClientKeepAliveStop() to disable
keepalive messages
* src/remote/remote_driver.c: remoteSetKeepAlive(): - add ability to
disable keepalives
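For reference, a minimal client-side usage sketch of the public API affected here; it assumes an event loop implementation is registered, and error handling is kept to a minimum.

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn;

        /* Keepalives need an event loop; a real client would also run it */
        if (virEventRegisterDefaultImpl() < 0)
            return 1;

        conn = virConnectOpen("qemu:///system");
        if (!conn) {
            fprintf(stderr, "failed to connect\n");
            return 1;
        }

        /* Enable keepalives: probe every 5 seconds, give up after 3 missed replies */
        if (virConnectSetKeepAlive(conn, 5, 3) < 0)
            fprintf(stderr, "failed to enable keepalives\n");

        /* With this patch, a zero or negative interval disables keepalives again */
        if (virConnectSetKeepAlive(conn, 0, 0) < 0)
            fprintf(stderr, "failed to disable keepalives\n");

        virConnectClose(conn);
        return 0;
    }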
The code is splattered with a mix of
sizeof foo
sizeof (foo)
sizeof(foo)
Standardize on sizeof(foo) and add a syntax check rule to
enforce it
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
A multi-threaded client with an event loop may crash if one of its threads
closes a connection while the event loop is in the middle of sending a
keep-alive message (either a request or a response). The window for this
race is inside virNetClientIOEventLoop(), between poll() and
virNetClientLock(). We should only close a connection directly if no-one
is using it, and defer the closing to the last user otherwise. So far we
only did so if the close was initiated by the keep-alive timeout.
If a client stream does not have any data to sink and has not received
EOF either, a dummy packet is sent to the daemon signalling that the client
is ready to sink some data. However, after we added an event loop to the
client, a race may occur:
Thread 1 calls virNetClientStreamRecvPacket and, since no data is cached
and the stream has no EOF set, it decides to send a dummy packet to the
server, which will send some data in turn. However, between this decision
and the actual message exchange with the server -
Thread 2 receives the last stream data from the server. Therefore EOF is
set on the stream and, if there were a call waiting (which there is not
yet), it would be woken up. However, Thread 1 hasn't sent anything so far,
so there is no call to wake up. Thread 1 then sends its dummy packet to the
daemon, which ignores it since no stream is associated with such a packet,
and therefore no reply will ever come.
This race causes the client to hang indefinitely.
When one thread passes the buck to another thread, it uses
virCondSignal to wake up the target thread. The variable
'haveTheBuck' is not updated in a race-free manner when
this occurs. The current thread sets it to false, and the
woken up thread sets it to true. There is a window where
a 3rd thread can come in and grab the buck.
Even if this didn't lead to crashes and deadlocks, it would
still result in unfairness in the buck-passing algorithm.
A better solution is to *never* set haveTheBuck to false
when we're passing the buck. Only set it to false when there
is no further thread waiting for the buck.
* src/rpc/virnetclient.c: Only set haveTheBuck to false
if no thread is waiting
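A condensed sketch of the rule above, using plain pthreads and illustrative names rather than libvirt's virCond/virMutex wrappers: the flag stays true across a handoff and is only cleared when no waiter remains.

    #include <pthread.h>
    #include <stdbool.h>

    /* Illustrative client state guarded by 'lock' */
    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t cond;
        bool have_the_buck;   /* someone is (or is about to be) driving I/O */
        int waiters;          /* threads queued waiting for their own call */
    } demo_client;

    /* Called with 'lock' held by the thread currently holding the buck,
     * once its own call has completed. */
    static void demo_pass_the_buck(demo_client *client)
    {
        if (client->waiters > 0) {
            /* Hand over without ever dropping have_the_buck to false,
             * so a third thread cannot sneak in and grab it. */
            pthread_cond_signal(&client->cond);
        } else {
            /* No one to pass to: only now does the buck become free */
            client->have_the_buck = false;
        }
    }

    int main(void)
    {
        demo_client client;

        pthread_mutex_init(&client.lock, NULL);
        pthread_cond_init(&client.cond, NULL);
        client.have_the_buck = true;
        client.waiters = 0;

        pthread_mutex_lock(&client.lock);
        demo_pass_the_buck(&client);   /* no waiters: the buck is released */
        pthread_mutex_unlock(&client.lock);
        return 0;
    }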
Commit fd06692544 tried to fix
a race condition in
commit fa9595003d
Author: Daniel P. Berrange <berrange@redhat.com>
Date: Fri Nov 11 15:28:41 2011 +0000
Explicitly track whether the buck is held in remote client
Unfortunately there is a second race condition whereby the
event loop can trigger due to incoming data to read. Revert
this fix, so a complete fix for the problem can be cleanly
applied
* src/rpc/virnetclient.c: Revert fd06692544
https://bugzilla.redhat.com/show_bug.cgi?id=648855 mentioned a
misuse of 'an' where 'a' is proper; that has since been fixed,
but a search found other problems (some were a spelling error for
'and', while most were fixed by 'a').
* daemon/stream.c: Fix grammar.
* src/conf/domain_conf.c: Likewise.
* src/conf/domain_event.c: Likewise.
* src/esx/esx_driver.c: Likewise.
* src/esx/esx_vi.c: Likewise.
* src/rpc/virnetclient.c: Likewise.
* src/rpc/virnetserverprogram.c: Likewise.
* src/storage/storage_backend_fs.c: Likewise.
* src/util/conf.c: Likewise.
* src/util/dnsmasq.c: Likewise.
* src/util/iptables.c: Likewise.
* src/xen/xen_hypervisor.c: Likewise.
* src/xen/xend_internal.c: Likewise.
* src/xen/xs_internal.c: Likewise.
* tools/virsh.c: Likewise.
Originally, the code checked whether another call was in the queue and
inferred ownership of the buck from that. Commit fa9595003d
added a separate variable to track the buck. As a result, a new
call might enter claiming it held the buck while another thread was
signalled to take the buck. This ends with two threads claiming they hold
the buck and entering poll(). This happens due to a race when waking up
threads on the client lock mutex.
This caused multi-threaded clients to hang, most prominently visible and
reproducible with Python-based clients, like virt-manager.
This patch makes threads that have been signalled to take the buck
re-check whether the buck is held by another thread.
Detected by Coverity. Leak introduced in commit 673adba.
Two separate bugs here:
1. call was not freed on all error paths
2. virCondDestroy was called even if virCondInit failed
Signed-off-by: Alex Jia <ajia@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
When another thread was dispatching while we wanted to send a
non-blocking call, we correctly queued the call and woke up the thread
but the thread just threw the call away since it forgot to recheck if
its socket was writable.
When virNetClientIOEventLoop is called for a non-blocking call and not
even a single byte can be sent from this call without blocking, we
properly report that to the caller, which then frees the call. But
we never remove the call from the call queue.
Due to the asynchronous nature of streams, we might continue to
receive some stream packets from the server even after we have
shutdown the stream on the client side. These should be discarded
silently, rather than raising an error in the RPC layer.
* src/rpc/virnetclient.c: Discard stream data silently
Add a new virNetClientSendNonBlock which returns 2 on a
full send, 1 on a partial send, 0 on no send, and -1 on error.
If a partial send occurs, then a subsequent call to any
of the virNetClientSend* APIs will finish any outstanding
I/O.
TODO: the virNetClientEvent event handler could be used
to speed up completion of partial sends if an event loop
is present.
* src/rpc/virnetsocket.h, src/rpc/virnetsocket.c: Add new
virNetSocketHasPendingData() API to test for cached
data pending send.
* src/rpc/virnetclient.c, src/rpc/virnetclient.h: Add new
virNetClientSendNonBlock() API to send messages without blocking
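A hedged sketch of how a caller might consume the 2/1/0/-1 contract described above; the names and the stub sender are illustrative, not the internal RPC types.

    #include <stdio.h>

    /* Illustrative return codes mirroring the 2/1/0/-1 contract above */
    enum {
        DEMO_SEND_ERROR   = -1,  /* fatal error, connection should be closed */
        DEMO_SEND_NOTHING =  0,  /* nothing sent, message stays fully queued */
        DEMO_SEND_PARTIAL =  1,  /* partial send; a later Send* call finishes it */
        DEMO_SEND_ALL     =  2,  /* message completely transmitted */
    };

    /* Stub standing in for the non-blocking send; always reports a full send */
    static int demo_send_nonblock(const char *msg)
    {
        printf("sent: %s\n", msg);
        return DEMO_SEND_ALL;
    }

    static int demo_try_send(const char *msg)
    {
        switch (demo_send_nonblock(msg)) {
        case DEMO_SEND_ALL:
            return 0;            /* done, message can be released */
        case DEMO_SEND_PARTIAL:
        case DEMO_SEND_NOTHING:
            return 0;            /* outstanding I/O completes on a later Send* call */
        case DEMO_SEND_ERROR:
        default:
            return -1;           /* caller tears the connection down */
        }
    }

    int main(void)
    {
        return demo_try_send("keepalive request") < 0 ? 1 : 0;
    }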
Stop multiplexing virNetClientSend for two different purposes,
instead add virNetClientSendWithReply and virNetClientSendNoReply
* src/rpc/virnetclient.c, src/rpc/virnetclient.h: Replace
virNetClientSend with virNetClientSendWithReply and
virNetClientSendNoReply
* src/rpc/virnetclientprogram.c, src/rpc/virnetclientstream.c:
Update for new API names
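A minimal sketch of the split, with hypothetical names: instead of one entry point multiplexed by a 'want reply' flag, two thin wrappers make the intent explicit at each call site.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { const char *payload; } demo_msg;

    /* Shared internal helper (previously the single multiplexed entry point) */
    static int demo_send_internal(demo_msg *msg, bool expect_reply)
    {
        printf("sending '%s' (reply %s)\n", msg->payload,
               expect_reply ? "expected" : "not expected");
        return 0;
    }

    /* Send a message and block until the matching reply arrives */
    static int demo_send_with_reply(demo_msg *msg)
    {
        return demo_send_internal(msg, true);
    }

    /* Send a message that never gets a reply (e.g. outgoing stream data) */
    static int demo_send_no_reply(demo_msg *msg)
    {
        return demo_send_internal(msg, false);
    }

    int main(void)
    {
        demo_msg call = { "rpc call" };
        demo_msg data = { "stream data" };
        return demo_send_with_reply(&call) || demo_send_no_reply(&data);
    }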
Remove some duplication by pulling the code for passing the
buck out into a helper method
* src/rpc/virnetclient.c: Introduce virNetClientIOEventLoopPassTheBuck
Instead of inferring whether the buck is held from the waitDispatch
pointer, use an explicit 'bool haveTheBuck' field
* src/rpc/virnetclient.c: Explicitly track the buck
Directly messing around with the linked list is potentially
dangerous. Introduce some helper APIs to deal with
manipulating the list
* src/rpc/virnetclient.c: Create linked list handlers
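A sketch of the kind of list helpers meant above, with illustrative names: appending to and unlinking from the singly linked call queue is centralised instead of being open-coded at each site.

    #include <stddef.h>
    #include <stdio.h>

    /* Illustrative call structure forming a singly linked queue */
    typedef struct demo_call demo_call;
    struct demo_call {
        const char *name;
        demo_call *next;
    };

    /* Append a call to the end of the queue */
    static void demo_call_queue(demo_call **head, demo_call *call)
    {
        demo_call **tail = head;
        while (*tail)
            tail = &(*tail)->next;
        call->next = NULL;
        *tail = call;
    }

    /* Unlink a call from the queue, wherever it sits */
    static void demo_call_remove(demo_call **head, demo_call *call)
    {
        demo_call **prev = head;
        while (*prev && *prev != call)
            prev = &(*prev)->next;
        if (*prev) {
            *prev = call->next;
            call->next = NULL;
        }
    }

    int main(void)
    {
        demo_call a = { "a", NULL }, b = { "b", NULL };
        demo_call *queue = NULL;

        demo_call_queue(&queue, &a);
        demo_call_queue(&queue, &b);
        demo_call_remove(&queue, &a);
        printf("head is now: %s\n", queue ? queue->name : "(empty)");
        return 0;
    }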
The code calling sendfd/recvfd was mistakenly assuming those
calls would never block. They can in fact return EAGAIN and
this is causing us to drop the client connection when blocking
occurs while sending/receiving FDs.
Fixing this is a little hairy on the incoming side, since at
the point where we see the EAGAIN, we already thought we had
finished receiving all data for the packet. So we play a little
trick to reset bufferOffset again and go back into polling for
more data.
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Update
virNetSocketSendFD/RecvFD to return 0 on EAGAIN, or 1
on success
* src/rpc/virnetclient.c: Move decoding of header & fds
out of virNetClientCallDispatch and into virNetClientIOHandleInput.
Handle blocking when sending/receiving FDs
* src/rpc/virnetmessage.h: Add a 'donefds' field to track
how many FDs we've sent / received
* src/rpc/virnetserverclient.c: Handle blocking when
sending/receiving FDs
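A hedged sketch of the 0-on-EAGAIN / 1-on-success convention described above for the FD send path, using a plain SCM_RIGHTS sendmsg() with illustrative names rather than the virNetSocket code.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/uio.h>
    #include <unistd.h>

    /* Send one file descriptor over a UNIX socket.
     * Returns 1 on success, 0 if the call would block (caller retries
     * when the socket is writable again), -1 on any other error. */
    static int demo_send_fd(int sock, int fd)
    {
        char byte = 0;
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        char control[CMSG_SPACE(sizeof(int))];
        struct msghdr msg = { 0 };
        struct cmsghdr *cmsg;

        memset(control, 0, sizeof(control));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = control;
        msg.msg_controllen = sizeof(control);

        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        if (sendmsg(sock, &msg, 0) < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return 0;        /* not an error: retry from the event loop */
            return -1;
        }
        return 1;
    }

    int main(void)
    {
        int sv[2];
        int ret;

        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
            return 1;

        ret = demo_send_fd(sv[0], STDIN_FILENO);
        printf("demo_send_fd returned %d\n", ret);
        close(sv[0]);
        close(sv[1]);
        return ret < 0;
    }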
Extend the RPC client code to allow file descriptors to be sent
to the server with calls, and received back with replies.
* src/remote/remote_driver.c: Stub extra args
* src/libvirt_private.syms, src/rpc/virnetclient.c,
src/rpc/virnetclient.h, src/rpc/virnetclientprogram.c,
src/rpc/virnetclientprogram.h: Extend APIs to allow
FD passing
The libvirtd daemon had a few crude SystemTap probes. Some of
these were broken during the RPC rewrite. The new modular RPC
code is structured in a way that allows much more effective
tracing. Instead of trying to hook up the original probes,
define a new set of probes for the RPC and event code.
The master probes file is now src/probes.d. This contains
probes for virNetServerClientPtr, virNetClientPtr, virSocketPtr,
virNetTLSContextPtr and virNetTLSSessionPtr modules. Also add
probes for the poll event loop.
The src/dtrace2systemtap.pl script can convert the probes.d
file into a libvirt_probes.stp file to make use from systemtap
much simpler.
The src/rpc/gensystemtap.pl script can generate a set of
systemtap functions for translating RPC enum values into
printable strings. This works for all RPC header enums (program,
type, status, procedure) and also the authentication enum
The PROBE macro will automatically generate a VIR_DEBUG
statement, so any place with a PROBE can remove any existing
manual DEBUG statements.
* daemon/libvirtd.stp, daemon/probes.d: Remove obsolete probing
* daemon/libvirtd.h: Remove probe macros
* daemon/Makefile.am: Remove all probe building/install rules
* daemon/remote.c: Update authentication probes
* src/dtrace2systemtap.pl, src/rpc/gensystemtap.pl: Scripts
to generate STP files
* src/internal.h: Add probe macros
* src/probes.d: Master list of probes
* src/rpc/virnetclient.c, src/rpc/virnetserverclient.c,
src/rpc/virnetsocket.c, src/rpc/virnettlscontext.c,
src/util/event_poll.c: Insert probe points, removing any
DEBUG statements that duplicate the info
commit 984840a2c2 removed the
notification of waiting calls when VIR_NET_CONTINUE messages
arrive. This was to fix the case of a virStreamAbort() call
being prematurely notified of completion.
The problem is that sometimes there are dummy calls from a
virStreamRecv() call waiting that *do* need to be notified.
These dummy calls should have a status VIR_NET_CONTINUE. So
re-add the notification upon VIR_NET_CONTINUE, but only if
the waiter also has a status of VIR_NET_CONTINUE.
* src/rpc/virnetclient.c: Notify waiting call if stream data
arrives
* src/rpc/virnetclientstream.c: Mark dummy stream read packet
with status VIR_NET_CONTINUE
If a client has initiated a stream abort, it will have a call
waiting for a reply in the queue. If more data continues to
arrive on the stream, the abort command could mistakenly get
signalled as complete. Remove the code from async data processing
that looked for waiting calls. Add a sanity check to ensure no
async call can ever be marked as needing a reply
* src/rpc/virnetclient.c: Ensure async data packets can't
trigger a reply