http://www.uhv.edu/ac/newsletters/writing/grammartip2009.07.01.htm
(and several other sites) suggest that 'onto' is best used only where
you could also insert 'up' just before it and still make sense
(compare 'he climbed [up] onto the roof' with 'passing the buck onto
the next thread', which fails the test). In many cases in the code
base, we really want the two-word 'on to', or even a simplification
to just 'on' or 'to'.
* docs/hacking.html.in: Use correct 'on to'.
* python/libvirt-override.c: Likewise.
* src/lxc/lxc_controller.c: Likewise.
* src/util/virpci.c: Likewise.
* daemon/THREADS.txt: Use simpler 'on'.
* docs/formatdomain.html.in: Better usage.
* docs/internals/rpc.html.in: Likewise.
* src/conf/domain_event.c: Likewise.
* src/rpc/virnetclient.c: Likewise.
* tests/qemumonitortestutils.c: Likewise.
* HACKING: Regenerate.
Signed-off-by: Eric Blake <eblake@redhat.com>
Despite the comment stating that the virNetClientIncomingEvent handler
should never be called with either client->haveTheBuck or
client->wantClose set, there is a sequence of events that may lead to
both booleans being true when virNetClientIncomingEvent is called.
When that happens, we must not immediately close the socket, as other
threads are waiting for the buck and would cause a SIGSEGV once woken
up after the socket was closed. We should also clear all remaining
calls in the queue after closing the socket.
The situation that can lead to the crash involves three threads, one
running the event loop and the other two calling libvirt APIs. The
event loop thread detects an event on client->sock and calls the
virNetClientIncomingEvent handler. But before the handler gets a chance
to lock client, the other two threads (T1 and T2) start calling some
APIs. T1 gets the buck and detects EOF on client->sock while processing
its RPC call. Since T2 is waiting for its own call, T1 passes the buck
on to it and unlocks client. But before T2 gets the signal, the event
loop thread wakes up, does its job, and closes client->sock. The crash
happens when T2 finally wakes up and tries to do its job using the
now-closed client->sock.
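The fix thus has two parts: the handler must leave the socket alone
while another thread holds the buck, and closing the socket must wake
up all queued calls so their threads bail out cleanly. A rough sketch
of the handler's resulting shape (names follow virnetclient.c, but
this is an illustration of the idea, not the literal diff):

    static void virNetClientIncomingEvent(virNetSocketPtr sock,
                                          int events,
                                          void *opaque)
    {
        virNetClientPtr client = opaque;

        virObjectLock(client);

        /* Another thread owns the buck; it will do the I/O and,
         * if needed, close the socket itself. */
        if (client->haveTheBuck)
            goto done;

        if (virNetClientIOHandleInput(client) < 0)
            client->wantClose = true;

        /* Closing also clears the call queue, waking up the waiting
         * threads so they fail gracefully instead of touching a dead
         * socket later. */
        if (client->wantClose)
            virNetClientCloseLocked(client);

    done:
        virObjectUnlock(client);
    }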
When converting to virObject, the probes on the 'Free' functions
were removed on the basis that the probe on virObjectFree
suffices. This puts a burden on people writing probe scripts
to identify which object is being disposed. This adds back probes
in the 'Dispose' functions and updates the rpc monitor systemtap
example to use them.
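Each 'Dispose' function gains a one-line probe naming the concrete
type, roughly like this (the PROBE macro and the probe name follow the
pattern used elsewhere in the tree; take it as a sketch):

    static void virNetClientDispose(void *obj)
    {
        virNetClientPtr client = obj;

        /* Fire a dtrace/systemtap probe identifying the concrete
         * object, rather than relying on the generic probe in
         * virObjectFree. */
        PROBE(RPC_CLIENT_DISPOSE,
              "client=%p", client);

        /* ...release the client's resources as before... */
    }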
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When creating the virClass object for virNetClient, we specified
virObject as the parent instead of virObjectLockable.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently all classes must inherit directly from virObject.
This patch allows for an arbitrarily deep hierarchy. There's not
much to this aside from chaining up the 'dispose' handlers from
each class and providing APIs to check types.
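Conceptually, registration and teardown then look like this (a sketch
of the pattern; the exact virClassNew signature and argument order may
differ):

    virClassPtr netClientClass;

    netClientClass = virClassNew(virClassForObjectLockable(), /* parent */
                                 "virNetClient",
                                 sizeof(virNetClient),
                                 virNetClientDispose);

    /* On the last virObjectUnref, virNetClientDispose runs first, then
     * the 'dispose' handler of each ancestor class in turn, and only
     * then is the object memory freed. virObjectIsClass(obj, klass)
     * is the type-check API: it matches the class itself or any
     * subclass of it. */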
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
A number of bugs in the handling of file descriptors received from
the server caused the FDs to be lost and leaked.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The libvirt coding standard is to use 'function(...args...)'
instead of 'function (...args...)'. A non-trivial number of
places did not follow this rule and are fixed in this patch.
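For example, a call written as

    virBufferAddLit (buf, "<domain>\n");

becomes

    virBufferAddLit(buf, "<domain>\n");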
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
https://www.gnu.org/licenses/gpl-howto.html recommends that
the 'If not, see <url>.' phrase be a separate sentence.
* tests/securityselinuxhelper.c: Remove doubled line.
* tests/securityselinuxtest.c: Likewise.
* globally: s/; If/. If/
e5a1bee07 introduced a regression in Boxes: when Boxes is left idle
(it's still doing some libvirt calls in the background), the
libvirt connection gets closed after a few minutes. What happens is
that this code in virNetClientIOHandleOutput gets triggered:
    if (!thecall)
        return -1; /* Shouldn't happen, but you never know... */
and after the changes in e5a1bee07, this causes the libvirt connection
to be closed.
Upon further investigation, what happens is that
virNetClientIOHandleOutput is called from gvir_event_handle_dispatch
in libvirt-glib, which is triggered because the client fd became
writable. However, between the time gvir_event_handle_dispatch
is called and the time the client lock is grabbed and
virNetClientIOHandleOutput is called, another thread runs and
completes the current call. 'thecall' is then NULL when the first
thread gets to run virNetClientIOHandleOutput.
After describing this situation on IRC, danpb suggested this:
11:37 < danpb> In that case I think the correct thing would be to change
'return -1' above to 'return 0' since that's not actually an
error - its a rare, but expected event
which is what this patch does. I've tested it against master
libvirt: with the patch I didn't get disconnected in ~10 minutes,
while without it the disconnect happens in less than 5 minutes.
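So the early return now reports 'nothing left to send' rather than an
error:

    if (!thecall)
        return 0; /* another thread completed the call while we were
                   * waiting for the client lock; rare, but expected */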
Unfortunately libssh2 doesn't support all types of host keys that can
be saved in the known_hosts file, and it does not report that parsing
of the file failed. This results in truncated known_hosts files when
the standard client also stores keys in other formats (e.g.
ecdsa-sha2-nistp256).
This patch changes the default location of the known_hosts file to the
libvirt private configuration directory, where it will be written only
by the libssh2 layer itself. This prevents trashing the user's
known_hosts file.
This patch adds a glue layer to enable using libssh2 code with the
network client code.
As in the original client implementation, shell code is sent to the
server to detect the correct options for netcat and to connect to
libvirt's unix socket.
Currently the virNetClientPtr constructor will always register
the async IO event handler and the keepalive objects. In the
case of the lock manager, there is no event loop available,
nor is keepalive support required. Split this setup out of the
constructor and into separate methods.
The remote driver will enable async IO and keepalives, while
the LXC driver will only enable async IO.
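A caller wanting the full behaviour then does something like this (the
registration method names illustrate the split; treat them as a sketch
of the new API):

    virNetClientPtr client;

    if (!(client = virNetClientNewUNIX(sockname, false, NULL)))
        return NULL;

    /* The remote driver enables both; the LXC driver enables only
     * async IO; the lock manager, with no event loop, enables
     * neither. */
    if (virNetClientRegisterAsyncIO(client) < 0 ||
        virNetClientRegisterKeepAlive(client) < 0)
        goto error;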
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Per the poll(2) man page, poll does not set errno=EAGAIN on interrupt;
it sets errno=EINTR. Have libvirt retry on the appropriate errno.
Under heavy load, a program of mine kept getting libvirt errors 'poll on
socket failed: Interrupted system call'. The signals were SIGCHLD from
processes forked by threads unrelated to those using libvirt.
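The fix amounts to the classic retry loop around poll(); a minimal
standalone sketch:

    #include <errno.h>
    #include <poll.h>

    static int poll_retry(struct pollfd *fds, nfds_t nfds, int timeout)
    {
        int ret;

        do {
            ret = poll(fds, nfds, timeout);
        } while (ret < 0 && errno == EINTR); /* e.g. interrupted by SIGCHLD */

        return ret;
    }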
In the socket event handler for the RPC client we must deal
with read/write events before checking for EOF; otherwise
we might close the socket before we've read and acted upon the
last RPC messages.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Allow detection of socket close in virNetClient via a callback
function, triggered on any condition that causes the socket to
be closed.
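Registration is along these lines (treat the exact signature as
illustrative):

    static void remoteClientCloseFunc(virNetClientPtr client,
                                      int reason,
                                      void *opaque)
    {
        /* Tell the upper layers the connection is gone, and why
         * (I/O error, EOF, keepalive timeout, explicit close, ...). */
    }

    virNetClientSetCloseCallback(client, remoteClientCloseFunc,
                                 opaque, NULL);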
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently if the keepalive timer triggers, the 'markClose'
flag is set on the virNetClient. A controlled shutdown will
then be performed. If an I/O error occurs during a read or
write on the connection, an error is raised back to the
caller, but the connection isn't marked for close. This
patch ensures that all I/O error scenarios always result
in the connection being marked for close.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The FSF address can change from time to time, and GNU now recommends
the following (http://www.gnu.org/licenses/gpl-howto.html):
    You should have received a copy of the GNU General Public License
    along with Foobar. If not, see <http://www.gnu.org/licenses/>.
This patch removes the explicit FSF address and uses the above
instead (of course, inserting 'Lesser' before 'General' where
appropriate). Except for a bunch of security driver files, all the
others are changed automatically; the copyright headers of the
security files are not complete, which is why those are done manually:
src/security/security_selinux.h
src/security/security_driver.h
src/security/security_selinux.c
src/security/security_apparmor.h
src/security/security_apparmor.c
src/security/security_driver.c
First, poll() can't return EWOULDBLOCK; and second, we're checking
errno so far away from the poll() call that we've probably already
trashed the original errno value.
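The rule being applied: capture errno immediately after the failing
call, before any logging or cleanup can clobber it. A sketch in the
style of the surrounding code:

    int rc = poll(fds, nfds, timeout);
    if (rc < 0) {
        int saved_errno = errno; /* save before any other call runs */

        if (saved_errno == EINTR)
            goto retry;
        virReportSystemError(saved_errno, "%s",
                             _("poll on socket failed"));
        return -1;
    }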
In addition to keepalive responses, we also need to send keepalive
requests from the client IO loop to properly detect a dead connection
in case a libvirt API is called from the main loop, which prevents any
timers from being called.
The previous commit removed the only usage of the 'all' parameter in
virKeepAliveStopInternal, which was actually the only reason for having
virKeepAliveStopInternal. This effectively reverts most of commit
6446a9e20c.
When a libvirt API is called from the main event loop (which seems to
be common in event-based glib apps), the client IO loop would properly
handle keepalive requests sent by a server but would not actually send
them, because the main event loop is blocked by the API call. This
patch gets rid of the response timer; the thread which processes
keepalive requests is now also responsible for queueing responses for
delivery.
As non-blocking calls are no longer dropped, we don't really need to
care that much about their fate or wait for the thread with the buck
to process them. If another thread has the buck, we can just push a
non-blocking call onto the queue and be done with it.
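In code, the fast path amounts to this (the names are illustrative of
the virnetclient.c internals, not a literal excerpt):

    if (call->nonBlock && client->haveTheBuck) {
        /* The buck owner is already driving the socket and will send
         * everything queued, including this call, so just enqueue. */
        virNetClientCallQueue(&client->waitDispatch, call);
        return 0;
    }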
So far, we were dropping non-blocking calls whenever sending them would
block. In case a client is sending lots of stream calls (which are not
supposed to generate any reply), the assumption that having other calls
in the queue is sufficient to get a reply from the server doesn't hold.
I tried to fix this in b1e374a7ac but
failed and reverted that commit.
With this patch, non-blocking calls are never dropped (unless the
connection is being closed) and will always be sent.
Normally, when every call has a thread associated with it, the thread
may get the buck and be in charge of sending all calls until its own
call is done. When we introduced non-blocking calls, we had to add
special handling for them. This patch uses the event loop to send data
when there is no thread to take the buck, so that any non-blocking
calls left in the queue are properly sent without special-casing them.
It also avoids adding even more cruft to the client IO loop in the
following patches.
With this change, non-blocking calls may see unpredictable delivery
delays when the client has no event loop registered. However, the only
non-blocking calls we have are keepalives, and we already require an
event loop for them, which makes this a non-issue until someone
introduces new non-blocking calls.
Currently, we allocate the buffer for RPC messages statically.
This is not such a pain while the RPC limits are small. However, if we
ever want to increase those limits, we need to allocate the buffer
dynamically, based on the RPC message length (= the first 4 bytes).
That way we decrease our memory usage in most cases and still stay
flexible enough for the corner cases.
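Reading a message then becomes a two-step affair: read the 4-byte
length word, validate it against the protocol limit, and only then
allocate the payload buffer. A simplified sketch of the idea
(read_full() is a hypothetical stand-in for a loop over read(); field
and macro names follow the rpc sources):

    uint32_t len_word;
    size_t len;

    /* The wire format starts with a 4-byte big-endian length word. */
    if (read_full(fd, &len_word, sizeof(len_word)) < 0)
        return -1;
    len = ntohl(len_word);

    /* Sanity-check against the protocol limit before allocating. */
    if (len < sizeof(len_word) || len > VIR_NET_MESSAGE_MAX)
        return -1;

    /* Allocate only as much as this message actually needs. */
    if (VIR_ALLOC_N(msg->buffer, len) < 0)
        return -1;
    msg->bufferLength = len;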