In the socket event handler for the RPC client we must deal
with read/write events before checking for EOF, otherwise
we might close the socket before we've read & acted upon the
last RPC messages.
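Roughly, that means the event callback has to look something like this
(the flag names are the generic virEvent ones; the helper functions are
made up for illustration, not the actual patch):
    static void
    incomingEvent(virNetSocketPtr sock, int events, void *opaque)
    {
        virNetClientPtr client = opaque;
        /* service pending I/O first so queued replies get consumed... */
        if (events & (VIR_EVENT_HANDLE_READABLE | VIR_EVENT_HANDLE_WRITABLE))
            clientHandleIO(client);            /* hypothetical helper */
        /* ...and only then react to hangup/error by closing the socket */
        if (events & (VIR_EVENT_HANDLE_HANGUP | VIR_EVENT_HANDLE_ERROR))
            clientMarkClose(client);           /* hypothetical helper */
    }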
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Allow detection of socket close in virNetClient via a callback
function, triggered on any condition that causes the socket to
be closed.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently if the keepalive timer triggers, the 'markClose'
flag is set on the virNetClient. A controlled shutdown will
then be performed. If an I/O error occurs during read or
write of the connection an error is raised back to the
caller, but the connection isn't marked for close. This
patch ensures that all I/O error scenarios always result
in the connection being marked for close.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Any time we have a string with no % passed through gettext, a
translator can inject a % to cause a stack overread. When there
is nothing to format, it's easier to ask for a string that cannot
be used as a formatter, by using a trivial "%s" format instead.
In the past, we have used --disable-nls to catch some of the
offenders, but that doesn't get run very often, and many more
uses have crept in. Syntax check to the rescue!
The syntax check can catch uses such as
virReportError(code,
_("split "
"string"));
by using a sed script to fold context lines into one pattern
space before checking for a string without %.
This patch is just mechanical insertion of %s; there are probably
several messages touched by this patch where we would be better
off giving the user more information than a fixed string.
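The mechanical change looks like this (an illustrative diff, not taken
from any one file):
    -    virReportError(VIR_ERR_INTERNAL_ERROR,
    -                   _("some fixed diagnostic"));
    +    virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
    +                   _("some fixed diagnostic"));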
* cfg.mk (sc_prohibit_diagnostic_without_format): New rule.
* src/datatypes.c (virUnrefConnect, virGetDomain)
(virUnrefDomain, virGetNetwork, virUnrefNetwork, virGetInterface)
(virUnrefInterface, virGetStoragePool, virUnrefStoragePool)
(virGetStorageVol, virUnrefStorageVol, virGetNodeDevice)
(virGetSecret, virUnrefSecret, virGetNWFilter, virUnrefNWFilter)
(virGetDomainSnapshot, virUnrefDomainSnapshot): Add %s wrapper.
* src/lxc/lxc_driver.c (lxcDomainSetBlkioParameters)
(lxcDomainGetBlkioParameters): Likewise.
* src/conf/domain_conf.c (virSecurityDeviceLabelDefParseXML)
(virDomainDiskDefParseXML, virDomainGraphicsDefParseXML):
Likewise.
* src/conf/network_conf.c (virNetworkDNSHostsDefParseXML)
(virNetworkDefParseXML): Likewise.
* src/conf/nwfilter_conf.c (virNWFilterIsValidChainName):
Likewise.
* src/conf/nwfilter_params.c (virNWFilterVarValueCreateSimple)
(virNWFilterVarAccessParse): Likewise.
* src/libvirt.c (virDomainSave, virDomainSaveFlags)
(virDomainRestore, virDomainRestoreFlags)
(virDomainSaveImageGetXMLDesc, virDomainSaveImageDefineXML)
(virDomainCoreDump, virDomainGetXMLDesc)
(virDomainMigrateVersion1, virDomainMigrateVersion2)
(virDomainMigrateVersion3, virDomainMigrate, virDomainMigrate2)
(virStreamSendAll, virStreamRecvAll)
(virDomainSnapshotGetXMLDesc): Likewise.
* src/nwfilter/nwfilter_dhcpsnoop.c (virNWFilterSnoopReqLeaseDel)
(virNWFilterDHCPSnoopReq): Likewise.
* src/openvz/openvz_driver.c (openvzUpdateDevice): Likewise.
* src/openvz/openvz_util.c (openvzKBPerPages): Likewise.
* src/qemu/qemu_cgroup.c (qemuSetupCgroup): Likewise.
* src/qemu/qemu_command.c (qemuBuildHubDevStr, qemuBuildChrChardevStr)
(qemuBuildCommandLine): Likewise.
* src/qemu/qemu_driver.c (qemuDomainGetPercpuStats): Likewise.
* src/qemu/qemu_hotplug.c (qemuDomainAttachNetDevice): Likewise.
* src/rpc/virnetsaslcontext.c (virNetSASLSessionGetIdentity):
Likewise.
* src/rpc/virnetsocket.c (virNetSocketNewConnectUNIX)
(virNetSocketSendFD, virNetSocketRecvFD): Likewise.
* src/storage/storage_backend_disk.c
(virStorageBackendDiskBuildPool): Likewise.
* src/storage/storage_backend_fs.c
(virStorageBackendFileSystemProbe)
(virStorageBackendFileSystemBuild): Likewise.
* src/storage/storage_backend_rbd.c
(virStorageBackendRBDOpenRADOSConn): Likewise.
* src/storage/storage_driver.c (storageVolumeResize): Likewise.
* src/test/test_driver.c (testInterfaceChangeBegin)
(testInterfaceChangeCommit, testInterfaceChangeRollback):
Likewise.
* src/vbox/vbox_tmpl.c (vboxListAllDomains): Likewise.
* src/xenxs/xen_sxpr.c (xenFormatSxprDisk, xenFormatSxpr):
Likewise.
* src/xenxs/xen_xm.c (xenXMConfigGetUUID, xenFormatXMDisk)
(xenFormatXM): Likewise.
Since the FSF address can change from time to time, GNU now
recommends the following (http://www.gnu.org/licenses/gpl-howto.html):
You should have received a copy of the GNU General Public License
along with Foobar. If not, see <http://www.gnu.org/licenses/>.
This patch removes the explicit FSF address and uses the above instead
(of course, inserting 'Lesser' before 'General').
Except for a bunch of security driver files, all others were changed
automatically; the copyright headers of the security files are not
complete, which is why they were done manually:
src/security/security_selinux.h
src/security/security_driver.h
src/security/security_selinux.c
src/security/security_apparmor.h
src/security/security_apparmor.c
src/security/security_driver.c
Update the libvirtd dispatch code to use virReportError
instead of the virNetError custom macro
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
To allow virNetServerRun/virNetServerQuit to be invoked multiple
times, we must reset the 'quit' flag in virNetServerRun
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
In the delayed close mode, we're just waiting for final data to
be written back to the client. While waiting, we should not
bother to read more data from the client.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Instead of only removing the ending newline character, it is
better to remove all standard whitespace characters for the
sake of log format.
One example of why we have to do this:
After three incorrect password inputs, the virsh command
virsh -c qemu://remoteserver/system will report an error like:
: Connection reset by peerey,gssapi-keyex,gssapi-with-mic,password).
But it should be:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
: Connection reset by peer
The reason is that we dropped the newline, but have a '\r' left.
The terminal interprets it as "move the cursor back to the start
of the current line", so the error string is messed up.
The virNetServerMDNSTimeoutNew method was casting a long long
to an int when reporting errors. This should just be using
%lld instead of %d, avoiding the need to cast
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Commit 32a9aac switched libvirt to use the XDG base directories
to locate most of its data/config. In particular, the per-user socket
for qemu:///session is now stored in the XDG runtime directory.
This directory is located by looking at the XDG_RUNTIME_DIR environment
variable, with a fallback to ~/.cache/libvirt if this variable is not
set.
When the daemon is autospawned because a client application wants
to use qemu:///session, the daemon is run in a clean environment
which does not contain XDG_RUNTIME_DIR. It will create its socket
in ~/.cache/libvirt. If the client application has XDG_RUNTIME_DIR
set, it will not look for the socket in the fallback place, and will
fail to connect to the autospawned daemon.
This patch adds XDG_RUNTIME_DIR to the daemon environment before
auto-starting it. I've done this in virNetSocketForkDaemon rather
than in virCommandAddEnvPassCommon as I wasn't sure we want to pass
these variables to other commands libvirt spawns. XDG_CACHE_HOME
and XDG_CONFIG_HOME are also added to the daemon env as it makes use
of those as well.
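A minimal sketch of the addition (virCommandAddEnvPass copies the named
variable from the caller's environment if it is set; the surrounding
command construction in virNetSocketForkDaemon is omitted):
    /* inside virNetSocketForkDaemon(), roughly: */
    virCommandAddEnvPass(cmd, "XDG_CACHE_HOME");
    virCommandAddEnvPass(cmd, "XDG_CONFIG_HOME");
    virCommandAddEnvPass(cmd, "XDG_RUNTIME_DIR");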
Refactor the RPC server dispatcher code so that if 'max_workers==0'
the entire server will run single threaded. This is useful for
use cases where there will only ever be 1 client connected
which serializes its requests
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The callback that is invoked when a new RPC client is
initialized does not have any opaque parameter. Add
one so that custom data can be passed into the callback
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The test of the ref count is not protected by the lock, which is unsafe because
the ref count may have been changed by other threads during the test.
This patch fixes this.
When shutting down libvirtd, the virNetServer shutdown can deadlock
if there are in-flight jobs being handled by virNetServerHandleJob().
virNetServerFree() will acquire the virNetServer lock and call
virThreadPoolFree() to terminate the workers, waiting for the workers
to finish. But in-flight workers will attempt to acquire the
virNetServer lock, resulting in deadlock.
Fix the deadlock by unlocking the virNetServer lock before calling
virThreadPoolFree(). This is safe since the virNetServerPtr object
is ref-counted and only decrementing the ref count needs to be
protected. Additionally, there is no need to re-acquire the lock
after virThreadPoolFree() completes as all the workers have
terminated.
First, poll() can't return EWOULDBLOCK, and second, we're checking errno
so far away from the poll() call that we've probably already trashed the
original errno value.
In addition to keepalive responses, we also need to send keepalive
requests from the client IO loop to properly detect a dead connection in case
a libvirt API is called from the main loop, which prevents any timers from
being called.
The previous commit removed the only usage of the ``all'' parameter in
virKeepAliveStopInternal, which was actually the only reason for having
virKeepAliveStopInternal. This effectively reverts most of commit
6446a9e20c.
When a libvirt API is called from the main event loop (which seems to be
common in event-based glib apps), the client IO loop would properly
handle keepalive requests sent by a server but would not actually send
them because the main event loop is blocked with the API. This patch
gets rid of the response timer; the thread which is processing keepalive
requests is now also responsible for queueing responses for delivery.
Add virKeepAliveTimeout and virKeepAliveTrigger APIs that can be used to
set poll timeouts and trigger keepalive timer. virKeepAliveTrigger
checks if it is called too early and does nothing in that case.
The code that needs to be run every keepalive interval of inactivity was
only called from a timer and thus from the main event loop. We will need
to call the code directly from another place.
As non-blocking calls are no longer dropped, we don't really need to
care that much about their fate and wait for the thread with the buck
to process them. If another thread has the buck, we can just push a
non-blocking call to the queue and be done with it.
So far, we were dropping non-blocking calls whenever sending them would
block. In case a client is sending lots of stream calls (which are not
supposed to generate any reply), the assumption that having other calls
in a queue is sufficient to get a reply from the server doesn't work. I
tried to fix this in b1e374a7ac but
failed and reverted that commit.
With this patch, non-blocking calls are never dropped (unless the
connection is being closed) and will always be sent.
Normally, when every call has a thread associated with it, the thread
may get the buck and be in charge of sending all calls until its own
call is done. When we introduced non-blocking calls, we had to add
special handling of new non-blocking calls. This patch uses event loop
to send data if there is no thread to get the buck so that any
non-blocking calls left in the queue are properly sent without having to
handle them specially. It also avoids adding even more cruft to client
IO loop in the following patches.
With this change in, non-blocking calls may see unpredictable delays in
delivery when the client has no event loop registered. However, the only
non-blocking calls we have are keepalives and we already require event
loop for them, which makes this a non-issue until someone introduces new
non-blocking calls.
My latest patch for RPC rework (a2c304f687) introduced a memory leak.
virNetMessageEncodeHeader() is calling VIR_ALLOC_N(msg->buffer, ...)
despite the fact that msg->buffer isn't VIR_FREE()'d on all paths calling
the function. Therefore, rather than injecting a free statement, switch to
VIR_REALLOC_N().
Since we are allocating RPC buffer dynamically, we can increase limits
for max. size of RPC message and RPC string. This is needed to cover
some corner cases where libvirt is run on such huge machines that their
capabilities XML is 4 times bigger than our current limit. This leaves
users unable to even connect.
Currently, we are allocating buffer for RPC messages statically.
This is not such a pain when RPC limits are small. However, if we ever
want to increase those limits, we need to allocate the buffer dynamically,
based on RPC message len (= the first 4 bytes). Therefore we will
decrease our mem usage in most cases and still be flexible enough in
corner cases.
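Conceptually the receive path becomes something like this (a sketch with
illustrative variable names, not the actual code):
    /* the first 4 bytes on the wire carry the total message length */
    uint32_t want;
    memcpy(&want, lenbuf, sizeof(want));
    want = ntohl(want);
    if (want > VIR_NET_MESSAGE_MAX)            /* still enforce the upper bound */
        return -1;
    if (VIR_REALLOC_N(msg->buffer, want) < 0)  /* grow only as much as needed */
        return -1;
    msg->bufferLength = want;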
Remove the uid param from virGetUserConfigDirectory,
virGetUserCacheDirectory, virGetUserRuntimeDirectory,
and virGetUserDirectory
These functions were universally called with the
results of getuid() or geteuid(). To make it practical
to port to Win32, remove the uid parameter and hardcode
geteuid()
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When you try to connect to a socket in the abstract namespace,
the error will be ECONNREFUSED for a non-listening daemon. With
the non-abstract namespace though, you instead get ENOENT. Add
a check for this extra errno when auto-spawning the daemon
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This reverts commit b1e374a7ac, which was
rather bad since I failed to consider all sides of the issue. The main
things I didn't consider properly are:
- a thread which sends a non-blocking call waits for the thread with
the buck to process the call
- the code doesn't expect non-blocking calls to remain in the queue
unless they were already partially sent
Thus, the reverted patch actually breaks more than what it fixes and
clients (which may even be libvirtd during p2p migrations) will likely
end up in a deadlock.
Currently, non-blocking calls are either sent immediately or discarded
in case sending would block. This was implemented based on the
assumption that the non-blocking keepalive call is not needed as there
are other calls in the queue which would keep the connection alive.
However, if those calls are no-reply calls (such as those carrying
stream data), the remote party knows the connection is alive but since
we don't get any reply from it, we think the connection is dead.
This is most visible in tunnelled migration. If it happens to be longer
than keepalive timeout (30s by default), it may be unexpectedly aborted
because the connection is considered to be dead.
With this patch, we only discard non-blocking calls when the last call
with a thread is completed and thus there is no thread left to keep
sending the remaining non-blocking calls.
The docs for virConnectSetKeepAlive() advertise that this function
should be able to disable keepalives on negative or zero interval time.
This patch removes the check that prohibited this and adds code to
disable keepalives on negative/zero interval.
* src/libvirt.c: virConnectSetKeepAlive(): - remove check for negative
values
* src/rpc/virnetclient.c
* src/rpc/virnetclient.h: - add virNetClientKeepAliveStop() to disable
keepalive messages
* src/remote/remote_driver.c: remoteSetKeepAlive(): -add ability to
disable keepalives
POSIX says that sa_sigaction is only safe to use if sa_flags
includes SA_SIGINFO; conversely, sa_handler is only safe to
use when flags excludes that bit. Gnulib doesn't guarantee
an implementation of SA_SIGINFO, but does guarantee that
if SA_SIGINFO is undefined, we can safely define it to 0 as
long as we don't dereference the 2nd or 3rd argument of
any handler otherwise registered via sa_sigaction.
Based on a report by Wen Congyang.
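In practice the pattern is roughly the following (a sketch; the other
fatal signals and the handler body are omitted):
    #ifndef SA_SIGINFO
    # define SA_SIGINFO 0   /* per gnulib this is safe as long as the handler
                             * never touches its siginfo/context arguments */
    #endif
    struct sigaction sig_action;
    memset(&sig_action, 0, sizeof(sig_action));
    sig_action.sa_flags = SA_SIGINFO;             /* required for sa_sigaction */
    sig_action.sa_sigaction = virNetServerFatalSignal;
    sigaction(SIGQUIT, &sig_action, NULL);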
* src/rpc/virnetserver.c (SA_SIGINFO): Stub for mingw.
(virNetServerSignalHandler): Avoid bogus dereference.
(virNetServerFatalSignal, virNetServerNew): Set flags properly.
(virNetServerAddSignalHandler): Drop unneeded #ifdef.
DBus connection. The HAL device code further requires that
the DBus connection is integrated with the event loop and
provides such glue logic itself.
The forthcoming FirewallD integration also requires a
dbus connection with event loop integration. Thus we need
to pull the current event loop glue out of the HAL driver.
Thus we create src/util/virdbus.{c,h} files. This contains
just one method virDBusGetSystemBus() which obtains a handle
to the single shared system bus instance, with event glue
automagically setup.
The code is splattered with a mix of
sizeof foo
sizeof (foo)
sizeof(foo)
Standardize on sizeof(foo) and add a syntax check rule to
enforce it
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Commit 5d4b0c4c80 tried to fix certain classes of VPATH builds,
but was too limited. In particular, Guannan Ren reported:
> For example: The libvirt source code resides in /home/testuser,
> I make dist in /tmp/buildvpath, the XDR routine .c file will
> include full path of the header file like:
>
> #include "/home/testuser/src/rpc/virnetprotocol.h"
> #include "internal.h"
> #include <arpa/inet.h>
>
> If we distribute the tarball to another machine to compile,
> it will report error as follows:
>
> rpc/virnetprotocol.c:7:59: fatal error:
> /home/testuser/src/rpc/virnetprotocol.h: No such file or directory
* src/rpc/genprotocol.pl: Fix more include lines.
On 64-bit platforms, unsigned long and unsigned long long are
identical, so we don't have to worry about overflow checks.
On 32-bit platforms, anywhere we narrow unsigned long long back
to unsigned long, we have to worry about overflow; it's easier
to do this in one place by having most of the code use the same
or wider types, and only doing the narrowing at the last minute.
Therefore, the memory set commands remain unsigned long, and
the memory get command now centralizes the overflow check into
libvirt.c, so that drivers don't have to repeat the work.
This also fixes a bug where xen returned the wrong value on
failure (most APIs return -1 on failure, but getMaxMemory
must return 0 on failure).
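The centralized check amounts to something like this (a sketch of the
libvirt.c side; error reporting is abridged):
    unsigned long long ret;
    ret = conn->driver->domainGetMaxMemory(domain);
    if (ret == 0)
        goto error;                    /* drivers now signal failure with 0 */
    if ((unsigned long)ret != ret) {   /* only possible on 32-bit hosts */
        /* raise an overflow error rather than silently truncating */
        goto error;
    }
    return ret;                        /* now known to fit in unsigned long */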
* src/driver.h (virDrvDomainGetMaxMemory): Use long long.
* src/libvirt.c (virDomainGetMaxMemory): Raise overflow.
* src/test/test_driver.c (testGetMaxMemory): Fix driver.
* src/rpc/gendispatch.pl (name_to_ProcName): Likewise.
* src/xen/xen_hypervisor.c (xenHypervisorGetMaxMemory): Likewise.
* src/xen/xen_driver.c (xenUnifiedDomainGetMaxMemory): Likewise.
* src/xen/xend_internal.c (xenDaemonDomainGetMaxMemory):
Likewise.
* src/xen/xend_internal.h (xenDaemonDomainGetMaxMemory):
Likewise.
* src/xen/xm_internal.c (xenXMDomainGetMaxMemory): Likewise.
* src/xen/xm_internal.h (xenXMDomainGetMaxMemory): Likewise.
* src/xen/xs_internal.c (xenStoreDomainGetMaxMemory): Likewise.
* src/xen/xs_internal.h (xenStoreDomainGetMaxMemory): Likewise.
* src/xenapi/xenapi_driver.c (xenapiDomainGetMaxMemory):
Likewise.
* src/esx/esx_driver.c (esxDomainGetMaxMemory): Likewise.
* src/libxl/libxl_driver.c (libxlDomainGetMaxMemory): Likewise.
* src/qemu/qemu_driver.c (qemudDomainGetMaxMemory): Likewise.
* src/lxc/lxc_driver.c (lxcDomainGetMaxMemory): Likewise.
* src/uml/uml_driver.c (umlDomainGetMaxMemory): Likewise.
A multi-threaded client with an event loop may crash if one of its threads
closes a connection while the event loop is in the middle of sending a
keep-alive message (either request or response). The right place to handle
this is inside virNetClientIOEventLoop() between poll() and
virNetClientLock(). We should only close a connection directly if no-one
is using it and defer the closing to the last user otherwise. So far we
only did so if the close was initiated by keep-alive timeout.
Nuke the last vestiges of printing pid_t values with the wrong
types, at least in code compiled on mingw64. There may be other
places, but for now they are only compiled on systems where the
existing %d doesn't trigger gcc warnings.
* src/rpc/virnetsocket.c (virNetSocketNew): Use %lld and casting,
rather than assuming any particular int type for pid_t.
* src/util/command.c (virCommandRunAsync, virPidWait)
(virPidAbort): Likewise.
(verify): Drop a now stale assertion.
Our HACKING discourages use of malloc and free, for at least
a couple of years now. But we weren't enforcing it, until now :)
For now, I've exempted python and tests, and will clean those up
in subsequent patches. Examples should be permanently exempt,
since anyone copying our examples won't have use of our
internal-only memory.h via libvirt_util.la.
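For reference, the raw idiom the new rule rejects versus the memory.h
replacement (a rough sketch):
    /* rejected by sc_prohibit_raw_allocation */
    ptr = malloc(sizeof(*ptr));
    free(ptr);
    /* preferred */
    if (VIR_ALLOC(ptr) < 0) {
        virReportOOMError();
        return -1;
    }
    VIR_FREE(ptr);   /* also sets ptr back to NULL */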
* cfg.mk (sc_prohibit_raw_allocation): New rule.
(exclude_file_name_regexp--sc_prohibit_raw_allocation): and
exemptions.
* src/cpu/cpu.c (cpuDataFree): Avoid false positive.
* src/conf/network_conf.c (virNetworkDNSSrvDefParseXML): Fix
offenders.
* src/libxl/libxl_conf.c (libxlMakeDomBuildInfo, libxlMakeVfb)
(libxlMakeDeviceModelInfo): Likewise.
* src/rpc/virnetmessage.c (virNetMessageSaveError): Likewise.
* tools/virsh.c (_vshMalloc, _vshCalloc): Likewise.
Qemu is adding the ability to do a partial rebase. That is, given:
base <- intermediate <- current
virDomainBlockPull will produce:
current
but qemu now has the ability to leave base in the chain, to produce:
base <- current
Note that current qemu can only do a forward merge, and only with
the current image as the destination, which is fully described by
this API without flags. But in the future, it may be possible to
enhance this API for additional scenarios by using flags:
Merging the current image back into a previous image (that is,
undoing a live snapshot), could be done by passing base as the
destination and flags with a bit requesting a backward merge.
Merging any other part of the image chain, whether forwards (the
backing image contents are pulled into the newer file) or backwards
(the deltas recorded in the newer file are merged back into the
backing file), could also be done by passing a new flag that says
that base should be treated as an XML snippet rather than an
absolute path name, where the XML could then supply the additional
instructions of which part of the image chain is being merged into
any other part.
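The new entry point ends up with a signature along these lines (mirroring
virDomainBlockPull plus the extra base argument):
    int virDomainBlockRebase(virDomainPtr dom,
                             const char *disk,
                             const char *base,   /* NULL: flatten, like BlockPull */
                             unsigned long bandwidth,
                             unsigned int flags);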
* include/libvirt/libvirt.h.in (virDomainBlockRebase): New
declaration.
* src/libvirt.c (virDomainBlockRebase): Implement it.
* src/libvirt_public.syms (LIBVIRT_0.9.10): Export it.
* src/driver.h (virDrvDomainBlockRebase): New driver callback.
* src/rpc/gendispatch.pl (long_legacy): Add exemption.
* docs/apibuild.py (long_legacy_functions): Likewise.
This API allows a domain to be put into one of the S# ACPI states.
Currently, S3 and S4 are supported. These states are shared
with virNodeSuspendForDuration.
However, for now we don't support any duration other than zero.
The same applies to flags.
The RPC generator transforms methods matching certain
patterns like 'id' or 'uuid', etc but does not anchor
its matches to the end of the word. So if a method
contains 'id' in the middle (eg virIdentity) then the
RPC generator munges that.
* src/rpc/gendispatch.pl: Anchor matches
To avoid a namespace clash with forthcoming identity APIs,
rename the virNet*GetLocalIdentity() APIs to have the form
virNet*GetUNIXIdentity()
* daemon/remote.c, src/libvirt_private.syms: Update
for renamed APIs
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h,
src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: s/LocalIdentity/UNIXIdentity/
If a client stream does not have any data to sink and has not received
EOF either, a dummy packet is sent to the daemon signalling that the client
is ready to sink some data. However, after we added an event loop to the
client a race may occur:
Thread 1 calls virNetClientStreamRecvPacket and, since no data is cached
nor has the stream seen EOF, it decides to send a dummy packet to the server,
which will send some data in turn. However, between this decision and the
actual message exchange with the server -
Thread 2 receives the last stream data from the server. Therefore EOF is set
on the stream and if there is a call waiting (which there is not yet) it is
woken up. However, Thread 1 hasn't sent anything so far, so there is no call
to be woken up. So this thread sends the dummy packet to the daemon, which
ignores it as no stream is associated with such a packet, and therefore
no reply will ever come.
This race causes the client to hang indefinitely.
The RPC code had several latent memory leaks and an attempt to
free the wrong string, but thankfully nothing triggered them
(blkiotune was the only one returning a string, and always as
the last parameter). Also, our cleanups for rpcgen ended up
nuking a line of code that renders VIR_TYPED_PARAM_INT broken,
because it was the only use of 'i' in a function, even though
it was a member usage rather than a standalone declaration.
* daemon/remote.c (remoteSerializeTypedParameters): Free the
correct array element.
(remoteDispatchDomainGetSchedulerParameters)
(remoteDispatchDomainGetSchedulerParametersFlags)
(remoteDispatchDomainBlockStatsFlags)
(remoteDispatchDomainGetMemoryParameters): Don't leak strings.
* src/rpc/genprotocol.pl: Don't nuke member-usage of 'buf' or 'i'.
When one thread passes the buck to another thread, it uses
virCondSignal to wake up the target thread. The variable
'haveTheBuck' is not updated in a race-free manner when
this occurs. The current thread sets it to false, and the
woken up thread sets it to true. There is a window where
a 3rd thread can come in and grab the buck.
Even if this didn't lead to crashes & deadlocks, this would
still result in unfairness in the buckpassing algorithm.
A better solution is to *never* set haveTheBuck to false
when we're passing the buck. Only set it to false when there
is no further thread waiting for the buck.
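A sketch of the intended hand-off (the queue helper is hypothetical; the
real code walks the call list directly):
    /* our own call is done; decide what happens to the buck */
    if ((nextCall = callQueueNextWaiter(client)) != NULL) {
        /* pass the buck: leave haveTheBuck set so a third thread
         * cannot sneak in before 'nextCall' wakes up */
        virCondSignal(&nextCall->cond);
    } else {
        client->haveTheBuck = false;   /* nobody left to take it */
    }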
* src/rpc/virnetclient.c: Only set haveTheBuck to false
if no thread is waiting
Commit fd06692544 tried to fix
a race condition in
commit fa9595003d
Author: Daniel P. Berrange <berrange@redhat.com>
Date: Fri Nov 11 15:28:41 2011 +0000
Explicitly track whether the buck is held in remote client
Unfortunately there is a second race condition whereby the
event loop can trigger due to incoming data to read. Revert
this fix, so a complete fix for the problem can be cleanly
applied
* src/rpc/virnetclient.c: Revert fd06692544
Currently if you try to connect to a local libvirtd when
libvirtd is not in $PATH, you'll get an error
error: internal error invalid use of command API
This is because remoteFindDaemonPath() returns NULL, which
causes us to pass NULL into virNetSocketConnectUNIX which
in turn causes us to pass NULL into virCommandNewArgList.
Adding missing error checks improves this to
error: internal error Unable to locate libvirtd daemon in $PATH
* src/remote/remote_driver.c: Report error if libvirtd
cannot be found
* src/rpc/virnetsocket.c: Report error if caller requested
spawning of daemon, but provided no binary path
https://bugzilla.redhat.com/show_bug.cgi?id=648855 mentioned a
misuse of 'an' where 'a' is proper; that has since been fixed,
but a search found other problems (some were a spelling error for
'and', while most were fixed by 'a').
* daemon/stream.c: Fix grammar.
* src/conf/domain_conf.c: Likewise.
* src/conf/domain_event.c: Likewise.
* src/esx/esx_driver.c: Likewise.
* src/esx/esx_vi.c: Likewise.
* src/rpc/virnetclient.c: Likewise.
* src/rpc/virnetserverprogram.c: Likewise.
* src/storage/storage_backend_fs.c: Likewise.
* src/util/conf.c: Likewise.
* src/util/dnsmasq.c: Likewise.
* src/util/iptables.c: Likewise.
* src/xen/xen_hypervisor.c: Likewise.
* src/xen/xend_internal.c: Likewise.
* src/xen/xs_internal.c: Likewise.
* tools/virsh.c: Likewise.
The RPC fixups needed on Linux are also needed on cygwin, and
worked without further tweaking to the list of fixups. Also,
unlike BSD, Cygwin exports 'struct ifreq', but unlike Linux,
Cygwin lacks the ioctls that we were using 'struct ifreq' to
access. This patch allows compilation under cygwin.
* src/rpc/genprotocol.pl: Also perform fixups on cygwin.
* src/util/virnetdev.c (HAVE_STRUCT_IFREQ): Also require AF_PACKET
definition.
* src/util/virnetdevbridge.c (virNetDevSetupControlFull): Only
compile if SIOCBRADDBR works.
Originally, the code checked whether another call was in the queue and
inferred ownership of the buck from that. Commit fa9595003d
added a separate variable to track the buck. That allowed a new call to
enter claiming it had the buck, while another thread was signalled to take
the buck. This ends with two threads claiming they hold
the buck and entering poll(). This happens due to a race on waking up
threads on the client lock mutex.
This caused multi-threaded clients to hang, most prominently visible and
reproducible with python based clients, like virt-manager.
This patch causes threads that have been signalled to take the buck to
re-check whether the buck is held by another thread.
Detected by Coverity. Leak introduced in commit 673adba.
Two separate bugs here:
1. call was not freed on all error paths
2. virCondDestroy was called even if virCondInit failed
Signed-off-by: Alex Jia <ajia@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
When another thread was dispatching while we wanted to send a
non-blocking call, we correctly queued the call and woke up the thread
but the thread just threw the call away since it forgot to recheck if
its socket was writable.
When spawning an ssh connection, the environment variables
DISPLAY, SSH_ASKPASS, ... are passed. However XAUTHORITY,
which is necessary if the .Xauthority is in a non-default
place, was not passed.
Signed-off-by: Christian Franke <nobody@nowhere.ws>
The keepalive program has two procedures: PING and PONG.
Both are used only in asynchronous messages and the sender doesn't wait
for any reply. However, the party which receives PING messages is
supposed to react by sending a PONG message back to the other party, but no
explicit binding between PING and PONG messages is made. For backward
compatibility, neither server nor client is allowed to send keepalive
messages before checking that the remote party supports them.
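Conceptually the program is tiny, something like the following (procedure
names as described above; the numbering is illustrative):
    enum virKeepAliveProcedure {
        KEEPALIVE_PROC_PING = 1,  /* async; no reply, but a PONG is expected */
        KEEPALIVE_PROC_PONG = 2   /* async; sent in reaction to a PING */
    };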
When virNetClientIOEventLoop is called for a non-blocking call and not
even a single byte can be sent from this call without blocking, we
properly reported that to the caller, which properly frees the call. But
we never removed the call from the call queue.
Due to the asynchronous nature of streams, we might continue to
receive some stream packets from the server even after we have
shutdown the stream on the client side. These should be discarded
silently, rather than raising an error in the RPC layer.
* src/rpc/virnetclient.c: Discard stream data silently
Add a new virNetClientSendNonBlock which returns 2 on
full send, 1 on partial send, 0 on no send, -1 on error
If a partial send occurs, then a subsequent call to any
of the virNetClientSend* APIs will finish any outstanding
I/O.
TODO: the virNetClientEvent event handler could be used
to speed up completion of partial sends if an event loop
is present.
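From a caller's point of view the return values are used roughly like this
(hypothetical caller):
    rv = virNetClientSendNonBlock(client, msg);
    if (rv < 0)
        goto error;       /* send failed */
    if (rv == 0 || rv == 1) {
        /* nothing, or only part of the message, went out; a later
         * virNetClientSend* call (or the event handler) finishes it */
    }
    /* rv == 2: message fully sent */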
* src/rpc/virnetsocket.h, src/rpc/virnetsocket.c: Add new
virNetSocketHasPendingData() API to test for cached
data pending send.
* src/rpc/virnetclient.c, src/rpc/virnetclient.h: Add new
virNetClientSendNonBlock() API to send non-blocking API
Stop multiplexing virNetClientSend for two different purposes,
instead add virNetClientSendWithReply and virNetClientSendNoReply
* src/rpc/virnetclient.c, src/rpc/virnetclient.h: Replace
virNetClientSend with virNetClientSendWithReply and
virNetClientSendNoReply
* src/rpc/virnetclientprogram.c, src/rpc/virnetclientstream.c:
Update for new API names
Remove some duplication by pulling the code for passing the
buck out into a helper method
* src/rpc/virnetclient.c: Introduce virNetClientIOEventLoopPassTheBuck
Instead of inferring whether the buck is held from the waitDispatch
pointer, use an explicit 'bool haveTheBuck' field
* src/rpc/virnetclient.c: Explicitly track the buck
Directly messing around with the linked list is potentially
dangerous. Introduce some helper APIs to deal with
manipulating the list
* src/rpc/virnetclient.c: Create linked list handlers
The src/util/network.c file is a dumping ground for many different
APIs. Split it up into 5 pieces, along functional lines
- src/util/virnetdevbandwidth.c: virNetDevBandwidth type & helper APIs
- src/util/virnetdevvportprofile.c: virNetDevVPortProfile type & helper APIs
- src/util/virsocketaddr.c: virSocketAddr and APIs
- src/conf/netdev_bandwidth_conf.c: XML parsing / formatting
for virNetDevBandwidth
- src/conf/netdev_vport_profile_conf.c: XML parsing / formatting
for virNetDevVPortProfile
* src/util/network.c, src/util/network.h: Split into 5 pieces
* src/conf/netdev_bandwidth_conf.c, src/conf/netdev_bandwidth_conf.h,
src/conf/netdev_vport_profile_conf.c, src/conf/netdev_vport_profile_conf.h,
src/util/virnetdevbandwidth.c, src/util/virnetdevbandwidth.h,
src/util/virnetdevvportprofile.c, src/util/virnetdevvportprofile.h,
src/util/virsocketaddr.c, src/util/virsocketaddr.h: New pieces
* daemon/libvirtd.h, daemon/remote.c, src/conf/domain_conf.c,
src/conf/domain_conf.h, src/conf/network_conf.c,
src/conf/network_conf.h, src/conf/nwfilter_conf.h,
src/esx/esx_util.h, src/network/bridge_driver.c,
src/qemu/qemu_conf.c, src/rpc/virnetsocket.c,
src/rpc/virnetsocket.h, src/util/dnsmasq.h, src/util/interface.h,
src/util/iptables.h, src/util/macvtap.c, src/util/macvtap.h,
src/util/virnetdev.h, src/util/virnetdevtap.c,
tools/virsh.c: Update include files
Send and receive string typed parameters across RPC. This also
completes the back-compat mentioned in the previous patch - the
only time we have an older client talking to a newer server is
if RPC is in use, so filtering out strings during RPC prevents
returning an unknown type to the older client.
* src/remote/remote_protocol.x (remote_typed_param_value): Add
another union value.
* daemon/remote.c (remoteDeserializeTypedParameters): Handle
strings on rpc.
(remoteSerializeTypedParameters): Likewise; plus filter out
strings when replying to older clients. Adjust callers.
* src/remote/remote_driver.c (remoteFreeTypedParameters)
(remoteSerializeTypedParameters)
(remoteDeserializeTypedParameters): Handle strings on rpc.
* src/rpc/gendispatch.pl: Properly clean up typed arrays.
* src/remote_protocol-structs: Update.
Based on an initial patch by Hu Tao, with feedback from
Daniel P. Berrange.
Signed-off-by: Eric Blake <eblake@redhat.com>
The socket address APIs in src/util/network.h either take the
form virSocketAddrXXX, virSocketXXX or virSocketXXXAddr.
Sanitize this so everything is virSocketAddrXXXX, and ensure
that the virSocketAddr parameter is always the first one.
* src/util/network.c, src/util/network.h: Sanitize socket
address API naming
* src/conf/domain_conf.c, src/conf/network_conf.c,
src/conf/nwfilter_conf.c, src/network/bridge_driver.c,
src/nwfilter/nwfilter_ebiptables_driver.c,
src/nwfilter/nwfilter_learnipaddr.c,
src/qemu/qemu_command.c, src/rpc/virnetsocket.c,
src/util/dnsmasq.c, src/util/iptables.c,
src/util/virnetdev.c, src/vbox/vbox_tmpl.c: Update for
API renaming
The code calling sendfd/recvfd was mistakenly assuming those
calls would never block. They can in fact return EAGAIN and
this is causing us to drop the client connection when blocking
occurs while sending/receiving FDs.
Fixing this is a little hairy on the incoming side, since at
the point where we see the EAGAIN, we already thought we had
finished receiving all data for the packet. So we play a little
trick to reset bufferOffset again and go back into polling for
more data.
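The convention for the socket helpers then looks roughly like this from the
caller's side (error handling abridged):
    rv = virNetSocketSendFD(sock, fd);
    if (rv < 0)
        goto error;   /* real error: the connection will be dropped */
    if (rv == 0) {
        /* would have blocked (EAGAIN): keep the FD queued and retry
         * next time the socket reports itself writable */
    } else {
        /* rv == 1: this FD went out, move on to the next one (donefds++) */
    }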
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Update
virNetSocketSendFD/RecvFD to return 0 on EAGAIN, or 1
on success
* src/rpc/virnetclient.c: Move decoding of header & fds
out of virNetClientCallDispatch and into virNetClientIOHandleInput.
Handling blocking when sending/receiving FDs
* src/rpc/virnetmessage.h: Add a 'donefds' field to track
how many FDs we've sent / received
* src/rpc/virnetserverclient.c: Handling blocking when
sending/receiving FDs
I ran into the following build failure:
$ mkdir -p build1 build2/a/very/deep/hierarcy
$ cd build2/a/very/deep/hierarcy
$ ../../../../../configure && make
$ cd ../../../../build1
$ ../configure && make
...
../../src/remote/remote_protocol.c:7:55: fatal error: ../../../../../src/remote/remote_protocol.h: No such file or directory
Turns out that we were sometimes generating the remote_protocol.c
file with information from the VPATH build, which is bad, since
any file shipped in the tarball should be idempotent no matter how
deep the VPATH build tree that created it.
* src/rpc/genprotocol.pl: Don't embed VPATH into generated file.
If daemon is using SASL it reads client data into a cache. This cache is
big (usually 65KB) and can thus contain 2 or more messages. However,
on socket event we can dispatch only one message. So if we read two
messages at once, the second will not be dispatched, because the data now
sits in the cache rather than on the socket and no further socket event
will be triggered for it.
Moreover, when dispatching the cache we need to remember to take care
of client max requests limit.
The RPC server classes are extended to allow FDs to be received
from clients with calls. There is not currently any way for a
procedure to pass FDs back to the client with replies
* daemon/remote.c, src/rpc/gendispatch.pl: Change virNetMessageHeaderPtr
param to virNetMessagePtr in dispatcher impls
* src/rpc/virnetserver.c, src/rpc/virnetserverclient.c,
src/rpc/virnetserverprogram.c, src/rpc/virnetserverprogram.h:
Extend to support FD passing
Extend the RPC client code to allow file descriptors to be sent
to the server with calls, and received back with replies.
* src/remote/remote_driver.c: Stub extra args
* src/libvirt_private.syms, src/rpc/virnetclient.c,
src/rpc/virnetclient.h, src/rpc/virnetclientprogram.c,
src/rpc/virnetclientprogram.h: Extend APIs to allow
FD passing
Define two new RPC message types VIR_NET_CALL_WITH_FDS and
VIR_NET_REPLY_WITH_FDS. These message types are equivalent
to VIR_NET_CALL and VIR_NET_REPLY, except that between the
message header, and payload there is a 32-bit integer field
specifying how many file descriptors have been passed.
The actual file descriptors are sent/recv'd out of band.
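In other words, the framing for the *_WITH_FDS types is (a conceptual
diagram, not generated code):
    /*
     *  +--------------------+----------+---------------------+
     *  | RPC message header | u32 nfds | XDR-encoded payload |
     *  +--------------------+----------+---------------------+
     *
     * The nfds file descriptors themselves are not in the byte stream;
     * they travel out of band via SCM_RIGHTS on the UNIX socket.
     */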
* src/rpc/virnetmessage.c, src/rpc/virnetmessage.h,
src/libvirt_private.syms: Add support for handling
passed file descriptors
* src/rpc/virnetprotocol.x: Extend protocol for FD
passing
Add APIs to the virNetSocket object, to allow file descriptors
to be sent/received over UNIX domain socket connections
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h,
src/libvirt_private.syms: Add APIs for FD send/recv
The libvirtd daemon had a few crude system tap probes. Some of
these were broken during the RPC rewrite. The new modular RPC
code is structured in a way that allows much more effective
tracing. Instead of trying to hook up the original probes,
define a new set of probes for the RPC and event code.
The master probes file is now src/probes.d. This contains
probes for virNetServerClientPtr, virNetClientPtr, virSocketPtr
virNetTLSContextPtr and virNetTLSSessionPtr modules. Also add
probes for the poll event loop.
The src/dtrace2systemtap.pl script can convert the probes.d
file into a libvirt_probes.stp file to make its use from systemtap
much simpler.
The src/rpc/gensystemtap.pl script can generate a set of
systemtap functions for translating RPC enum values into
printable strings. This works for all RPC header enums (program,
type, status, procedure) and also the authentication enum
The PROBE macro will automatically generate a VIR_DEBUG
statement, so any place with a PROBE can remove any existing
manual DEBUG statements.
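A typical probe point then looks like this (the probe name and arguments
are illustrative):
    PROBE(RPC_SERVER_CLIENT_NEW,
          "client=%p sock=%p",
          client, sock);
    /* expands to the dtrace/systemtap probe and, implicitly, a VIR_DEBUG
     * line with the same format string */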
* daemon/libvirtd.stp, daemon/probes.d: Remove obsolete probing
* daemon/libvirtd.h: Remove probe macros
* daemon/Makefile.am: Remove all probe buildings/install
* daemon/remote.c: Update authentication probes
* src/dtrace2systemtap.pl, src/rpc/gensystemtap.pl: Scripts
to generate STP files
* src/internal.h: Add probe macros
* src/probes.d: Master list of probes
* src/rpc/virnetclient.c, src/rpc/virnetserverclient.c,
src/rpc/virnetsocket.c, src/rpc/virnettlscontext.c,
src/util/event_poll.c: Insert probe points, removing any
DEBUG statements that duplicate the info
Pull the call to gnutls_x509_crt_get_dn up into a higher function
so that the 'dname' variable will be available for probe points
* src/rpc/virnettlscontext.c: Pull gnutls_x509_crt_get_dn up
one level
If we receive an error on the stream, set the EOF marker so
that any further (bogus) incoming data is dropped.
* src/rpc/virnetclientstream.c: Set EOF on stream
If we send back an unknown program error for async messages,
we will confuse the client because they only expect replies
for method calls. Just log & drop any invalid async messages
* src/rpc/virnetserver.c: Don't send error for async messages
Commit 597fe3cee6 accidentally
introduced a deadlock when reporting an unknown RPC program.
The virNetServerDispatchNewMessage method is called with
the client locked, and must therefore not attempt to send
any RPC messages back to the client. Only once the incoming
message is passed off to the virNetServerHandleJob worker
is it safe to start sending messages back
* src/rpc/virnetserver.c: Delay checking for unknown RPC
program until in worker thread
Do not crash if virStreamFinish is called after error.
==11000== Invalid read of size 4
==11000== at 0x373A8099A0: pthread_mutex_lock (pthread_mutex_lock.c:51)
==11000== by 0x4C7CADE: virMutexLock (threads-pthread.c:85)
==11000== by 0x4D57C31: virNetClientStreamRaiseError (virnetclientstream.c:203)
==11000== by 0x4D385E4: remoteStreamFinish (remote_driver.c:3541)
==11000== by 0x4D182F9: virStreamFinish (libvirt.c:14157)
==11000== by 0x40FDC4: cmdScreenshot (virsh.c:3075)
==11000== by 0x42BA40: vshCommandRun (virsh.c:14922)
==11000== by 0x42ECCA: main (virsh.c:16381)
==11000== Address 0x59b86c0 is 16 bytes inside a block of size 216 free'd
==11000== at 0x4A06928: free (vg_replace_malloc.c:427)
==11000== by 0x4C69E2B: virFree (memory.c:310)
==11000== by 0x4D57B56: virNetClientStreamFree (virnetclientstream.c:184)
==11000== by 0x4D3DB7A: remoteDomainScreenshot (remote_client_bodies.h:1812)
==11000== by 0x4CFD245: virDomainScreenshot (libvirt.c:2903)
==11000== by 0x40FB73: cmdScreenshot (virsh.c:3029)
==11000== by 0x42BA40: vshCommandRun (virsh.c:14922)
==11000== by 0x42ECCA: main (virsh.c:16381)
Mostly straight-forward, although this is the first API that
returns a new snapshot based on a snapshot rather than a domain.
* src/remote/remote_protocol.x
(REMOTE_PROC_DOMAIN_SNAPSHOT_GET_PARENT): New rpc.
(remote_domain_snapshot_get_parent_args)
(remote_domain_snapshot_get_parent_ret): New structs.
* src/rpc/gendispatch.pl: Adjust generator.
* src/remote/remote_driver.c (remote_driver): Use it.
* src/remote_protocol-structs: Update.
commit 984840a2c2 removed the
notification of waiting calls when VIR_NET_CONTINUE messages
arrive. This was to fix the case of a virStreamAbort() call
being prematurely notified of completion.
The problem is that sometimes there are dummy calls from a
virStreamRecv() call waiting that *do* need to be notified.
These dummy calls should have a status VIR_NET_CONTINUE. So
re-add the notification upon VIR_NET_CONTINUE, but only if
the waiter also has a status of VIR_NET_CONTINUE.
* src/rpc/virnetclient.c: Notify waiting call if stream data
arrives
* src/rpc/virnetclientstream.c: Mark dummy stream read packet
with status VIR_NET_CONTINUE
Libvirt special-cases a specific VIR_ERR_RPC from the remote driver
back into VIR_ERR_NO_SUPPORT on the client, so that clients can
handle missing rpc functions the same whether the hypervisor driver
is local or remote. However, commit c1b22644 introduced a regression:
VIR_FROM_THIS changed from VIR_FROM_REMOTE to VIR_FROM_RPC, so the
special casing no longer works if the server uses the newer error
domain.
* src/rpc/virnetclientprogram.c
(virNetClientProgramDispatchError): Also cater to 0.9.3 and newer.
* src/rpc/virnettlscontext.c: fix memory leak on
virNetTLSContextValidCertificate.
* Detected in valgrind run:
==25667==
==25667== 6,085 (44 direct, 6,041 indirect) bytes in 1 blocks are definitely
lost in loss record 326 of 351
==25667== at 0x4005447: calloc (vg_replace_malloc.c:467)
==25667== by 0x4F2791F3: _asn1_add_node_only (structure.c:53)
==25667== by 0x4F27997A: _asn1_copy_structure3 (structure.c:421)
==25667== by 0x4F276A50: _asn1_append_sequence_set (element.c:144)
==25667== by 0x4F2743FF: asn1_der_decoding (decoding.c:1194)
==25667== by 0x4F22B9CC: gnutls_x509_crt_import (x509.c:229)
==25667== by 0x805274B: virNetTLSContextCheckCertificate
(virnettlscontext.c:1009)
==25667== by 0x804DE32: testTLSSessionInit (virnettlscontexttest.c:693)
==25667== by 0x804F14D: virtTestRun (testutils.c:140)
==25667==
==25667== 23,188 (88 direct, 23,100 indirect) bytes in 11 blocks are definitely
lost in loss record 346 of 351
==25667== at 0x4005447: calloc (vg_replace_malloc.c:467)
==25667== by 0x4F22B841: gnutls_x509_crt_init (x509.c:50)
==25667== by 0x805272B: virNetTLSContextCheckCertificate
(virnettlscontext.c:1003)
==25667== by 0x804DDD1: testTLSSessionInit (virnettlscontexttest.c:673)
==25667== by 0x804F14D: virtTestRun (testutils.c:140)
* How to reproduce?
% cd libvirt && ./configure && make && make -C tests valgrind
or
% valgrind -v --leak-check=full ./tests/virnettlscontexttest
Signed-off-by: Alex Jia <ajia@redhat.com>
This patch annotates APIs with low or high priority.
The low priority set MUST contain all APIs which might eventually access
the monitor (and thus block indefinitely). Other APIs may be marked as
high priority. However, some must be (e.g. domainDestroy).
For high priority calls (HPC), there are some high priority workers
(HPW) created in the pool. A HPW can execute only HPCs, although a normal
worker can process any call regardless of priority. Therefore, only those
APIs which are guaranteed to finish in a reasonably small amount of time
can be marked as HPC.
The size of this HPC pool is static, because HPCs are expected to finish
quickly, so jobs assigned to this pool will be served quickly.
It can be configured in libvirtd.conf via the prio_workers variable.
The default is set to 5.
To mark an API with low or high priority, append priority:{low|high} to
its comment in src/remote/remote_protocol.x. This is similar to
autogen|skipgen. If not marked, the generator assumes low as the default.
Commit 2c85644b0b attempted to
fix a problem with tracking RPC messages from streams by doing
- if (msg->header.type == VIR_NET_REPLY) {
+ if (msg->header.type == VIR_NET_REPLY ||
+ (msg->header.type == VIR_NET_STREAM &&
+ msg->header.status != VIR_NET_CONTINUE)) {
client->nrequests--;
In other words any stream packet with status NET_OK or NET_ERROR
would cause nrequests to be decremented. This is great if the
packet is from a synchronous virStreamFinish or virStreamAbort
API call, but wildly wrong if it is from a server initiated abort.
The latter resulted in 'nrequests' being decremented below zero.
This then causes all I/O for that client to be stopped.
Instead of trying to infer whether we need to decrement the
nrequests field, from the message type/status, introduce an
explicit 'bool tracked' field to mark whether the virNetMessagePtr
object is subject to tracking.
Also add a virNetMessageClear function to allow a message
contents to be cleared out, without adversely impacting the
'tracked' field as a naive memset() would do
* src/rpc/virnetmessage.c, src/rpc/virnetmessage.h: Add
a 'bool tracked' field and virNetMessageClear() API
* daemon/remote.c, daemon/stream.c, src/rpc/virnetclientprogram.c,
src/rpc/virnetclientstream.c, src/rpc/virnetserverclient.c,
src/rpc/virnetserverprogram.c: Switch over to use
virNetMessageClear() and pass in the 'bool tracked' value
when creating messages.
The bufferOffset has been initialized to zero in virNetMessageEncodePayloadRaw(),
so we use bufferLength to represent the length of the message which is going
to be sent to the client side.
From: Matthias Bolte <matthias.bolte@googlemail.com>
Tested-by: Jim Fehlig <jfehlig@novell.com>
Matthias provided this patch to fix an issue I encountered in the
generator with APIs containing call-by-ref long type, e.g.
int virDomainMigrateGetMaxSpeed(virDomainPtr domain,
unsigned long *bandwidth,
unsigned int flags);
In case we add a new program in the future (we did that in the past and
we are going to do it again soon) the current daemon will behave badly with
a new client that wants to use the new program. Before the RPC rewrite we
used to just send an error reply to any request with unknown program.
With the RPC rewrite in 0.9.3 the daemon just closes the connection
through which such request was sent. This patch fixes this regression.
When spice_tls is set but listen_tls is not, we don't initialize
GnuTLS library. So any later gnutls call (e.g. during migration,
where we initialize a certificate) will access uninitialized GnuTLS
internal structs and throw an error.
Although we might now initialize GnuTLS twice, it is safe according
to the documentation:
This function can be called many times,
but will only do something the first time.
This patch creates 2 functions, virNetTLSInit and virNetTLSDeinit,
in line with what is written above.
If a client had initiated a stream abort, it will have a call
waiting for a reply in the queue. If more data continues to
arrive on the stream, the abort command could mistakenly get
signalled as complete. Remove the code from async data processing
that looked for waiting calls. Add a sanity check to ensure no
async call can ever be marked as needing a reply
* src/rpc/virnetclient.c: Ensure async data packets can't
trigger a reply
If a stream gets a server initiated abort, the client may still
send an abort request before it receives the server side abort.
This causes the server to send back another abort for the
stream. Since the protocol defines that abort is the last thing
to be sent, the client gets confused by this second abort from
the server. If the stream is already shutdown, just drop any
client requested abort, rather than sending back another message.
This fixes the regression from previous versions.
Tested as follows
In one virsh session
virsh # start foo
virsh # console foo
In other virsh session
virsh # destroy foo
The first virsh session should be able to continue issuing
commands without error. Prior to this patch it saw
virsh # list
error: Failed to list active domains
error: An error occurred, but the cause is unknown
virsh # list
error: Failed to list active domains
error: no call waiting for reply with prog 536903814 vers 1 serial 9
* src/rpc/virnetserverprogram.c: Drop abort requests
for streams which no longer exist
Every active stream results in a reference being held on the
virNetServerClientPtr object. This meant that if a client quit
with any streams active, although all I/O was stopped the
virNetServerClientPtr object would leak. This causes libvirtd
to leak any file handles associated with open streams when a
client quits
To fix this, when we call virNetServerClientClose there is a
callback invoked which lets the daemon release the streams
and thus the extra references
* daemon/remote.c: Add a hook to close all streams
* daemon/stream.c, daemon/stream.h: Add API for releasing
all streams
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h:
Allow registration of a hook to trigger when closing client
When trying to use any SASL authentication for TCP sockets by
setting auth_tls = "sasl" in libvirtd.conf on server side, the
client will hang because the SASL session is re-locked instead of
dropping the lock when exiting virNetSASLSessionExtKeySize()
* src/rpc/virnetsaslcontext.c: virNetSASLSessionExtKeySize drop the
lock on exit
This patch introduces an internal RPC API "virNetServerClose", which
is standalone from "virNetServerFree". It closes all the socket fds,
and unlinks the unix socket paths, regardless of whether the socket
is still referenced or not.
This is to address regression bug:
https://bugzilla.redhat.com/show_bug.cgi?id=725702
In virNetServerNew, Coverity didn't realize that srv->mdnsGroupName
can only be non-NULL if mdnsGroupName was non-NULL.
In virNetServerRun, Coverity didn't realize that the array is non-NULL
if the array count is non-zero.
* src/rpc/virnetserver.c (virNetServerNew): Use alternate pointer.
(virNetServerRun): Give coverity a hint.
Detected by Coverity. Freeing the wrong variable results in both
a memory leak and the likelihood of the caller dereferencing through
a freed pointer.
* src/rpc/virnettlscontext.c (virNetTLSSessionNew): Free correct
variable.
Detected by Coverity. We want to compare the result of fnmatch 'rv',
not our pre-set return value 'ret'.
* src/rpc/virnetsaslcontext.c (virNetSASLContextCheckIdentity):
Check correct variable.
Spotted by Coverity. Gnutls documents that buffer must be NULL
if gnutls_x509_crt_get_key_purpose_oid is to be used to determine
the correct size needed for allocating a buffer.
* src/rpc/virnettlscontext.c
(virNetTLSContextCheckCertKeyPurpose): Initialize buffer.
Spotted by coverity. If pipe2 fails, then we attempt to close
uninitialized fds, which may result in a double-close.
* src/rpc/virnetserver.c (virNetServerSignalSetup): Initialize fds.
Steps to reproduce this problem (vm1 is not running):
for i in `seq 50`; do virsh managedsave vm1& done; killall virsh
Pre-patch, virNetServerClientClose could end up setting client->sock
to NULL prior to other cleanup functions trying to use client->sock.
This fixes things by checking for NULL in more places, and by deferring
the cleanup until after all queued messages have been served.
* src/rpc/virnetserverclient.c (virNetServerClientRegisterEvent)
(virNetServerClientGetFD, virNetServerClientIsSecure)
(virNetServerClientLocalAddrString)
(virNetServerClientRemoteAddrString): Check for closed socket.
(virNetServerClientClose): Rearrange close sequence.
Analysis from Wen Congyang.
Without this, cygwin failed to compile:
In file included from ../src/rpc/virnetmessage.h:24,
from ../src/rpc/virnetclient.h:27,
from remote/remote_driver.c:31:
../src/rpc/virnetprotocol.h:9:21: error: rpc/rpc.h: No such file or directory
With that fixed, compilation warned:
rpc/virnetsocket.c: In function 'virNetSocketNewListenUNIX':
rpc/virnetsocket.c:347: warning: format '%d' expects type 'int', but argument 8 has type 'gid_t' [-Wformat]
rpc/virnetsocket.c: In function 'virNetSocketGetLocalIdentity':
rpc/virnetsocket.c:743: warning: pointer targets in passing argument 5 of 'getsockopt' differ in signedness
* src/Makefile.am (libvirt_driver_remote_la_CFLAGS)
(libvirt_net_rpc_client_la_CFLAGS)
(libvirt_net_rpc_server_la_CFLAGS): Include XDR_CFLAGS, for rpc
headers on cygwin.
* src/rpc/virnetsocket.c (virNetSocketNewListenUNIX)
(virNetSocketGetLocalIdentity): Avoid compiler warnings.
On RHEL 5, with gcc 4.1.2:
rpc/virnetsaslcontext.c: In function 'virNetSASLSessionUpdateBufSize':
rpc/virnetsaslcontext.c:396: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
* src/rpc/virnetsaslcontext.c (virNetSASLSessionUpdateBufSize):
Use a union to work around gcc warning.
When an incoming RPC message is ready for processing,
virNetServerClientDispatchRead()
will invoke the 'dispatchFunc' callback. This is set to
virNetServerDispatchNewMessage
This function puts the message + client in a queue for processing by the thread
pool. The thread pool worker function is
virNetServerHandleJob
The first thing this does is acquire an extra reference on the 'client'.
Unfortunately, between the time the message+client are put on the thread pool
queue, and the time the worker runs, the client object may have had its last
reference removed.
We clearly need to add the reference to the client object before putting the
client on the processing queue
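The fix boils down to the following ordering (a sketch; job construction
and error paths are abridged):
    /* in virNetServerDispatchNewMessage(), before queueing the job */
    virNetServerClientRef(client);
    if (virThreadPoolSendJob(srv->workers, job) < 0) {
        virNetServerClientFree(client);   /* undo the ref we just took */
        goto error;
    }
    /* virNetServerHandleJob() no longer takes its own extra reference */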
* src/rpc/virnetserverclient.c: Add a reference to the client when
invoking the dispatch function
* src/rpc/virnetserver.c: Don't acquire a reference to the client
when in the worker thread
The virNetSASLContext, virNetSASLSession, virNetTLSContext and
virNetTLSSession classes previously relied on their owners
(virNetClient / virNetServer / virNetServerClient) to provide
locking protection for concurrent usage. When virNetSocket
gained its own locking code, this invalidated the implicit
safety the SASL/TLS modules relied on. Thus we need to give
them all explicit locking of their own via new mutexes.
* src/rpc/virnetsaslcontext.c, src/rpc/virnettlscontext.c: Add
a mutex per object
When setting up a server socket, we must skip EADDRINUSE errors
from bind, since the IPv6 socket bind may have already bound to
the IPv4 socket too. If we don't manage to bind to any sockets
at all though, we should then report the EADDRINUSE error as
normal.
This fixes the case where libvirtd would not exit if some other
program was listening on its TCP/TLS ports.
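The bind loop logic becomes roughly the following (a sketch, with an
assumed counter of successfully bound sockets):
    /* inside the getaddrinfo() loop */
    if (bind(fd, runp->ai_addr, runp->ai_addrlen) < 0) {
        if (errno != EADDRINUSE) {
            virReportSystemError(errno, "%s", _("Unable to bind to port"));
            goto error;
        }
        addrInUse = true;   /* maybe the IPv6 socket already covers it */
        VIR_FORCE_CLOSE(fd);
        continue;
    }
    nsocks++;
    /* after the loop: only now is EADDRINUSE fatal */
    if (nsocks == 0 && addrInUse) {
        virReportSystemError(EADDRINUSE, "%s", _("Unable to bind to port"));
        goto error;
    }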
* src/rpc/virnetsocket.c: Report EADDRINUSE
When libvirtd starts up it will sanity check its own certs,
and before libvirt clients connect to a remote server they
will sanity check their own certs. This patch allows such
sanity checking to be skipped. There is no strong reason to
need to do this, other than to bypass possible libvirt bugs
in sanity checking, or for testing purposes.
libvirt.conf gains tls_no_sanity_certificate parameter to
go along with tls_no_verify_certificate. The remote driver
client URIs gain a no_sanity URI parameter
* daemon/test_libvirtd.aug, daemon/libvirtd.conf,
daemon/libvirtd.c, daemon/libvirtd.aug: Add parameter to
allow cert sanity checks to be skipped
* src/remote/remote_driver.c: Add no_sanity parameter to
skip cert checks
* src/rpc/virnettlscontext.c, src/rpc/virnettlscontext.h:
Add new parameter for skipping sanity checks independently
of skipping session cert validation checks
There is some commonality between the code for sanity checking
certs when initializing libvirt and the code for validating
certs during a live TLS session handshake. This patchset splits
up the sanity checking function into several smaller functions
each doing a specific type of check. The cert validation code
is then updated to also call into these functions
* src/rpc/virnettlscontext.c: Refactor cert validation code
The gnutls_certificate_type_set_priority method is deprecated.
Since we already set the default gnutls priority, it was not
serving any useful purpose and can be removed
* src/rpc/virnettlscontext.c: Remove gnutls_certificate_type_set_priority
call
If the virStateInitialize call fails we must shut down libvirtd
since drivers will not be available. Just free'ing the virNetServer
is not sufficient; we must send a SIGTERM to ourselves so that
we interrupt the event loop and trigger an orderly shutdown
* daemon/libvirtd.c: Kill ourselves if state init fails
* src/rpc/virnetserver.c: Add some debugging to event loop
The generator can handle everything except virDomainGetBlockJobInfo().
* src/remote/remote_protocol.x: provide defines for the new entry points
* src/remote/remote_driver.c daemon/remote.c: implement the client and
server side for virDomainGetBlockJobInfo.
* src/remote_protocol-structs: structure definitions for protocol verification
* src/rpc/gendispatch.pl: Permit some unsigned long parameters
The only 'void name(void)' style procedure in the protocol is 'close', which
is handled specially, but programming errors such as a missing _args or
_ret suffix on the structs in the .x files can also create such a situation
by accident. Making the generator aware of this avoids bogus errors from the
generator such as:
Use of uninitialized value in exists at ./rpc/gendispatch.pl line 967.
This also allows us to get rid of the -c option and the special-case code
for the 'close' procedure, as the generator now handles it correctly.
Reported by Michal Privoznik
Though we prefer users to have SSH keys set up, virt-manager users still
depend on remote SSH connections to launch a password dialog. This fixes
launching ssh-askpass.
Fix suggested by danpb
If a key purpose or usage field is marked as non-critical in the
certificate, then a data mismatch is not (ordinarily) a cause for
rejecting the connection
* src/rpc/virnettlscontext.c: Honour key usage/purpose criticality
If key usage or purpose data is not present in the cert, the
RFC recommends that access be allowed. Also fix checking of
key usage to include requirements for client/server certs,
and fix key purpose checking to treat data as a list of bits
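Roughly, the decision logic for key usage looks like the following sketch (checkKeyUsage is illustrative; it assumes gnutls' gnutls_x509_crt_get_key_usage() and the GNUTLS_KEY_* bits): missing data is allowed, a mismatch is only fatal when the extension is marked critical.

#include <gnutls/gnutls.h>
#include <gnutls/x509.h>

static int checkKeyUsage(gnutls_x509_crt_t cert, unsigned int needed)
{
    unsigned int usage = 0, critical = 0;
    int status = gnutls_x509_crt_get_key_usage(cert, &usage, &critical);

    if (status == GNUTLS_E_REQUESTED_DATA_NOT_AVAILABLE)
        return 0;                 /* no usage data in cert: allow */
    if (status < 0)
        return -1;                /* real error reading the cert */

    if ((usage & needed) == needed)
        return 0;                 /* all required bits present */

    return critical ? -1 : 0;     /* reject only if marked critical */
}

/* e.g. for a server cert one might require
 *   checkKeyUsage(cert, GNUTLS_KEY_DIGITAL_SIGNATURE |
 *                       GNUTLS_KEY_KEY_ENCIPHERMENT) */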
Gnutls requires that certificates have basic constraints present
to be used as a CA certificate. OpenSSL doesn't add this data
by default, so add a sanity check to catch this situation. Also
validate that the key usage and key purpose constraints contain
correct data
* src/rpc/virnettlscontext.c: Add sanity checking of certificate
constraints
If the libvirt daemon or libvirt client is configured with bogus
certificates, it is very unhelpful to only find out about this
when a TLS connection is actually attempted. Not least because
the error messages you get back for failures are incredibly
obscure.
This adds some basic sanity checking of certificates at the
time the virNetTLSContext object is created. This is at libvirt
startup, or when creating a virNetClient instance.
This checks that the certificate expiry/start dates are valid
and that the certificate is actually signed by the CA that is
loaded.
* src/rpc/virnettlscontext.c: Add certificate sanity checks
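The date portion of those checks amounts to something like this sketch (checkCertDates is illustrative; it assumes gnutls' activation/expiration accessors):

#include <time.h>
#include <gnutls/gnutls.h>
#include <gnutls/x509.h>

static int checkCertDates(gnutls_x509_crt_t cert)
{
    time_t now = time(NULL);
    time_t start = gnutls_x509_crt_get_activation_time(cert);
    time_t end = gnutls_x509_crt_get_expiration_time(cert);

    if (start == (time_t)-1 || end == (time_t)-1)
        return -1;            /* couldn't read the validity fields */
    if (now < start)
        return -1;            /* certificate not yet active */
    if (now > end)
        return -1;            /* certificate has expired */
    return 0;
}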
Since the I/O callback registered against virNetSocket will
hold a reference on the virNetClient, we can't rely on
virNetClientFree being able to close the network connection.
The last reference will only go away when the event callback
fires (likely due to EOF from the server).
This is sub-optimal and can potentially cause a leak of the
virNetClient object if the server does not explicitly
close the socket itself
* src/remote/remote_driver.c: Explicitly close the client
object when disconnecting
* src/rpc/virnetclient.c, src/rpc/virnetclient.h: Add a
virNetClientClose method
When unregistering an I/O callback from a virNetSocket object,
there is still a chance that an event may come in on the callback.
In this case it is possible that the virNetSocket might have been
freed already. Make use of a virFreeCallback when registering
the I/O callbacks and hold a reference for the entire time the
callback is set.
* src/rpc/virnetsocket.c: Register a free function for the
file handle watch
* src/rpc/virnetsocket.h, src/rpc/virnetserverservice.c,
src/rpc/virnetserverclient.c, src/rpc/virnetclient.c: Add
a free function for the socket I/O watches
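A sketch of the pattern using the public virEventAddHandle() API (Sock and its ref helpers are stand-ins for virNetSocket, and are not thread-safe here): the watch owns a reference, released only from the free callback, which the event loop invokes once the handler can no longer fire.

#include <stdlib.h>
#include <libvirt/libvirt.h>

typedef struct { int refs; int fd; } Sock;

static void sockRef(Sock *s)   { s->refs++; }
static void sockUnref(Sock *s) { if (--s->refs == 0) free(s); }

static void sockEvent(int watch, int fd, int events, void *opaque)
{
    /* handle I/O; 'opaque' is still valid because the watch holds
     * its own reference on the socket */
    (void)watch; (void)fd; (void)events; (void)opaque;
}

static void sockFreeCB(void *opaque)
{
    sockUnref(opaque);          /* drop the reference held by the watch */
}

static int sockAddIOWatch(Sock *s)
{
    int watch;

    sockRef(s);                 /* reference now owned by the watch */
    watch = virEventAddHandle(s->fd, VIR_EVENT_HANDLE_READABLE,
                              sockEvent, s, sockFreeCB);
    if (watch < 0)
        sockUnref(s);           /* registration failed, give it back */
    return watch;
}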
Remove the need for a virNetSocket object to be protected by
locks from the object using it, by introducing its own native
locking and reference counting
* src/rpc/virnetsocket.c: Add locking & reference counting
If we get an I/O error in the async event callback for an RPC
client, we might not have consumed all pending data off the
wire. This could result in the callback being immediately
invoked again. At which point the same I/O might occur. And
we're invoked again. And again...And again...
Unregistering the async event callback if an error occurs is
a good safety net. The real error will be seen when the next
RPC method is invoked
* src/rpc/virnetclient.c: Unregister event callback on error
These typos were introduced by the file renaming in commit b17b4afaf.
src/remote/qemu_protocol.x \
src/remote/remote_protocol.x \
src/rpc/gendispatch.pl:
s/remote_generator/gendispatch/
src/rpc/genprotocol.pl:
s/remote\/remote_protocol/remote_protocol/
If the server successfully validates the client cert, it will send
back a single byte, under TLS. If it fails, it will close the
connection. In this case, we were just reporting the standard
I/O error. The original RPC code had a special case hack for the
GNUTLS_E_UNEXPECTED_PACKET_LENGTH error code to make us report
a more useful error message
* src/rpc/virnetclient.c: Return ENOMSG if we get
GNUTLS_E_UNEXPECTED_PACKET_LENGTH
* src/rpc/virnettlscontext.c: Report cert failure if we
see ENOMSG
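In the read path this amounts to mapping the GNUTLS error onto an errno the caller can recognise; a sketch (tlsRead is illustrative, assuming gnutls_record_recv()):

#include <errno.h>
#include <gnutls/gnutls.h>

static ssize_t tlsRead(gnutls_session_t session, char *buf, size_t len)
{
    ssize_t ret = gnutls_record_recv(session, buf, len);
    if (ret < 0) {
        if (ret == GNUTLS_E_UNEXPECTED_PACKET_LENGTH)
            errno = ENOMSG;     /* likely a failed client cert check */
        else
            errno = EIO;        /* genuine I/O error */
        return -1;
    }
    return ret;
}

The TLS layer above can then report a certificate-validation failure when it sees ENOMSG, rather than a generic I/O error.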
Rather than trying to clean up the ssh child ourselves, and risk
subtle differences from the socket creation error path, we can
just use the new APIs.
* src/rpc/virnetsocket.c (virNetSocketFree): Use new function.
Continuation of commit 313ac7fd, and enforce things with a syntax
check.
Technically, virNetServerClientCalculateHandleMode is not printing
a mode_t, but rather a collection of VIR_EVENT_HANDLE_* bits;
however, these bits are < 8, so there is no different in the
output, and that was the easiest way to silence the new syntax check.
* cfg.mk (sc_flags_debug): New syntax check.
(exclude_file_name_regexp--sc_flags_debug): Add exemptions.
* src/fdstream.c (virFDStreamOpenFileInternal): Print flags in
hex, mode_t in octal.
* src/libvirt-qemu.c (virDomainQemuMonitorCommand)
(virDomainQemuAttach): Likewise.
* src/locking/lock_driver_nop.c (virLockManagerNopInit): Likewise.
* src/locking/lock_driver_sanlock.c (virLockManagerSanlockInit):
Likewise.
* src/locking/lock_manager.c: Likewise.
* src/qemu/qemu_migration.c: Likewise.
* src/qemu/qemu_monitor.c: Likewise.
* src/rpc/virnetserverclient.c
(virNetServerClientCalculateHandleMode): Print mode with %o.
When replacing the default SEGV/ABORT/BUS signal handlers you
can't rely on the process being terminated after your custom
handler runs. It is necessary to manually restore the default
handler and then re-raise the signal
* src/rpc/virnetserver.c: Restore default handler and raise
signal
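The idiom is the standard one; a minimal sketch (crashHandler is illustrative):

#include <signal.h>
#include <string.h>

static void crashHandler(int sig)
{
    struct sigaction dfl;

    /* ... custom crash handling (e.g. log a stack trace) ... */

    memset(&dfl, 0, sizeof(dfl));
    dfl.sa_handler = SIG_DFL;
    sigaction(sig, &dfl, NULL);  /* restore default disposition */
    raise(sig);                  /* re-raise so the process still dies
                                  * with the expected signal status */
}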
This tweaks the RPC generator to cope with some naming
conventions used for the QEMU specific APIs
* daemon/remote.c: Server side dispatcher
* src/remote/remote_driver.c: Client side dispatcher
* src/remote/qemu_protocol.x: Wire protocol definition
* src/rpc/gendispatch.pl: Use '$structprefix' in method
names, fix QEMU flags and fix dispatcher method names
Set StrictHostKeyChecking=no to auto-accept new ssh host keys if the
no_verify extra parameter was specified. This won't disable host key
checking for already known hosts. Includes a test and documentation.
The dispatch for the CLOSE RPC call was invoking the method
virNetServerClientClose(). This caused the client connection
to be immediately terminated. This meant the reply to the
final RPC message was never sent. Prior to the RPC rewrite
we merely flagged the connection for closing, and actually
closed it when the next RPC call dispatch had completed.
* daemon/remote.c: Flag connection for a delayed close
* daemon/stream.c: Update to use new API for closing
failed connection
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h:
Add support for a delayed connection close. Rename the
virNetServerClientMarkClose method to virNetServerClientImmediateClose
to clarify its semantics
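The delayed-close shape is roughly the following sketch (Client, handleClose and finishDispatch are illustrative names, not the real virNetServerClient API): the CLOSE handler only sets a flag, and the connection is torn down after the reply has been queued for transmission.

#include <stdbool.h>

typedef struct {
    bool delayedClose;
    /* ... socket, message queues ... */
} Client;

static void handleClose(Client *client)
{
    client->delayedClose = true;   /* don't close mid-dispatch */
}

static void finishDispatch(Client *client)
{
    /* ... queue the reply for transmission ... */
    if (client->delayedClose) {
        /* the reply is queued, so it is now safe to close */
        /* clientCloseSocket(client); */
    }
}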
When sending back the final OK or ERROR message on completion
of a stream, we were not decrementing the 'nrequests' tracker
on the client. With the default requests limit of '5', this
meant that once a client had created 5 streams, it was unable to
process any further RPC calls. There was also a bug when
handling an error from decoding a message length header, which
meant a client connection would not immediately be closed.
* src/rpc/virnetserverclient.c: Fix release of request after
stream completion & mark client for close on error
In one exit path we forgot to free the virNetMessage object causing
a large memory leak for streams which send a lot of data. Some other
paths were calling VIR_FREE directly instead of virNetMessageFree
although this was (currently) harmless.
* src/rpc/virnetclientstream.c: Fix leak of msg object
* src/rpc/virnetclientprogram.c: Call virNetMessageFree instead
of VIR_FREE
The virNetTLSContextNew was being passed key/cert parameters in
the wrong order. This wasn't immediately visible because if
virNetTLSContextNewPath was used, a second bug reversed the order
of those parameters again.
Only if the paths were manually specified in /etc/libvirt/libvirtd.conf
did the bug appear
* src/rpc/virnettlscontext.c: Fix order of params passed to
virNetTLSContextNew
Coverity noted that 4 out of 5 calls to virNetClientStreamRaiseError
checked the return value. This case expects a particular value, so
warn if our expectations went wrong due to some bug elsewhere.
* src/rpc/virnetclient.c (virNetClientCallDispatchStream): Warn on
unexpected scenario.
Detected by Coverity. The leak is on an error path, but I'm not
sure whether that path is likely to be triggered in practice.
* src/rpc/virnetserverservice.c (virNetServerServiceAccept): Plug leak.
Spotted by Coverity. If we don't update tmp each time through
the loop, then if the filter being removed was not the head of
the list, we accidentally lose all filters prior to the one we
wanted to remove.
* src/rpc/virnetserverclient.c (virNetServerClientRemoveFilter):
Don't lose unrelated filters.
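The fixed removal loop looks roughly like this sketch (Filter and removeFilter are illustrative, not the actual virNetServerClient code); the essential point is that the previous-link pointer must advance on every iteration:

#include <stddef.h>

typedef struct Filter Filter;
struct Filter { int id; Filter *next; };

static void removeFilter(Filter **head, int id)
{
    Filter **prev = head;
    while (*prev) {
        Filter *cur = *prev;
        if (cur->id == id) {
            *prev = cur->next;   /* unlink just this node */
            break;
        }
        prev = &cur->next;       /* the step the buggy code skipped */
    }
}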
Detected by Coverity. No real harm in leaving these, but fixing
them cuts down on the noise for future analysis.
* src/rpc/virnetserver.c (virNetServerAddService): Delete unused
entry.
* src/util/sysinfo.c (virSysinfoRead): Delete dead assignment to
base.
Detected by Coverity. Both are instances of bad things happening
if pipe2 fails; the virNetClientNew failure could free garbage,
and virNetSocketNewConnectCommand could close random fds.
Note: POSIX doesn't guarantee the contents of fd[0] and fd[1]
after pipe failure: http://austingroupbugs.net/view.php?id=467
We may need to introduce a virPipe2 wrapper that guarantees
that on pipe failure, the fds are explicitly set to -1, rather
than our current state of assuming the fds are unchanged from
their value prior to the failed pipe call.
* src/rpc/virnetclient.c (virNetClientNew): Initialize variable.
* src/rpc/virnetsocket.c (virNetSocketNewConnectCommand):
Likewise.
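A sketch of the wrapper suggested above (virPipe2 is hypothetical; pipe2() itself is a Linux/glibc call): guarantee the fds are -1 on failure so callers can safely test or close them afterwards.

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

static int virPipe2(int fds[2], int flags)
{
    fds[0] = fds[1] = -1;        /* defined state even if pipe2 fails */
    if (pipe2(fds, flags) < 0) {
        fds[0] = fds[1] = -1;    /* POSIX leaves the contents unspecified */
        return -1;
    }
    return 0;
}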
We ignore any stream data packets which come in for streams which
are not registered, since these packets are async and do not have
a reply. If we get a stream control packet though we must send back
an actual error, otherwise a (broken) client may hang forever
making it hard to diagnose the client bug.
* src/rpc/virnetserverprogram.c: Send back error for unexpected
stream control messages
If a message packet for an invalid stream is received it is just
free'd. This is not good because it doesn't let the client RPC
request counter decrement. If a stream is shut down with pending
packets the message also isn't released properly because of an
incorrect header type
* daemon/stream.c: Fix message header type
* src/rpc/virnetserverprogram.c: Send dummy reply instead of
free'ing ignored stream message
To save on memory reallocation, virNetMessage instances that
have been transmitted may be reused for a subsequent incoming
message. We forgot to clear out the old data of the message
fully, which caused later confusion upon read.
* src/rpc/virnetserverclient.c: memset entire message before
reusing it
The virNetServerClient object had a hardcoded limit of 10 requests
per client. Extend constructor to allow it to be passed in as a
configurable variable. Wire this up to the 'max_client_requests'
config parameter in libvirtd
* daemon/libvirtd.c: Pass max_client_requests into services
* src/rpc/virnetserverservice.c, src/rpc/virnetserverservice.h: Pass
nrequests_client_max to clients
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h: Allow
configurable request limit
When the remote client receives end of file on the stream
it never invokes the stream callback. Applications relying
on async event driven I/O will thus never see the EOF
condition on the stream
* src/rpc/virnetclient.c, src/rpc/virnetclientstream.c:
Ensure EOF is dispatched
The client stream object can be used independently of the
virNetClientPtr object, so must have full locking of its
own and not rely on any caller.
* src/remote/remote_driver.c: Remove locking around stream
callback
* src/rpc/virnetclientstream.c: Add locking to all APIs
and callbacks
When a filter steals an RPC message, that message must
not be freed, except by the filter code itself
* src/rpc/virnetserverclient.c: Don't free stolen RPC
messages
Improve log messages issued when encountering a bogus
message length to include the actual length and the
limit violated
* src/rpc/virnetmessage.c: Improve log messages
On stream completion it is necessary to send back a
message with an empty payload. The message header was
not being filled out correctly, since we were not writing
any payload. Add a method for encoding an empty payload
which updates the message headers correctly.
* src/rpc/virnetmessage.c, src/rpc/virnetmessage.h: Add
a virNetMessageEncodePayloadEmpty method
* src/rpc/virnetserverprogram.c: Write empty payload on
stream completion
The RPC client treats failure to register a socket watch
as non-fatal, since we do not mandate that a libvirt client
application provide an event loop implementation. It is
thus inappropriate to log a message at VIR_LOG_WARN
* src/rpc/virnetsocket.c: Lower logging level
If a streams error is raised, virNetClientIOEventLoop
returns 0, but an error is set. Check for this and
propagate it if present
* src/rpc/virnetclient.c: Propagate streams error
This guts the libvirtd daemon, removing all its networking and
RPC handling code. Instead it calls out to the new virServerPtr
APIs for all its RPC & networking work
As a fallout all libvirtd daemon error reporting now takes place
via the normal internal error reporting APIs. There is no need
to call separate error reporting APIs in RPC code, nor should
code use VIR_WARN/VIR_ERROR for reporting fatal problems anymore.
* daemon/qemu_dispatch_*.h, daemon/remote_dispatch_*.h: Remove
old generated dispatcher code
* daemon/qemu_dispatch.h, daemon/remote_dispatch.h: New dispatch
code
* daemon/dispatch.c, daemon/dispatch.h: Remove obsoleted code
* daemon/remote.c, daemon/remote.h: Rewrite for new dispatch
APIs
* daemon/libvirtd.c, daemon/libvirtd.h: Remove all networking
code
* daemon/stream.c, daemon/stream.h: Update for new APIs
* daemon/Makefile.am: Link to libvirt-net-rpc-server.la
This guts the current remote driver, removing all its networking
handling code. Instead it calls out to the new virClientPtr and
virClientProgramPtr APIs for all RPC & networking work.
* src/Makefile.am: Link remote driver with generic RPC code
* src/remote/remote_driver.c: Gut code, replacing with RPC
API calls
* src/rpc/gendispatch.pl: Update for changes in the way
streams are handled
Move the daemon/remote_generator.pl to src/rpc/gendispatch.pl
and move the src/remote/rpcgen_fix.pl to src/rpc/genprotocol.pl
* daemon/Makefile.am: Update for new name/location of generator
* src/Makefile.am: Update for new name/location of generator
To facilitate creation of new clients using XDR RPC services,
pull a lot of the remote driver code into a set of reusable
objects.
- virNetClient: Encapsulates a socket connection to a
remote RPC server. Handles all the network I/O for
reading/writing RPC messages. Delegates RPC encoding
and decoding to the registered programs
- virNetClientProgram: Handles processing and dispatch
of RPC messages for a single RPC (program,version).
A program can register to receive async events
from a client
- virNetClientStream: Handles generic I/O stream
integration to RPC layer
Each new client program now merely needs to define the list of
RPC procedures & events it wants and their handlers. It does
not need to deal with any of the network I/O functionality at
all.
Allow RPC servers to advertise themselves using MDNS,
via Avahi
* src/rpc/virnetserver.c, src/rpc/virnetserver.h: Allow
registration of MDNS services via avahi
* src/rpc/virnetserverservice.c, src/rpc/virnetserverservice.h: Add
API to fetch the listen port number
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Add API to
fetch the local port number
* src/rpc/virnetservermdns.c, src/rpc/virnetservermdns.h: Represent
an MDNS advertisement
To facilitate creation of new daemons providing XDR RPC services,
pull a lot of the libvirtd daemon code into a set of reusable
objects.
* virNetServer: A server contains one or more services which
accept incoming clients. It maintains the list of active
clients. It has a list of RPC programs which can be used
by clients. When clients produce a complete RPC message,
the server passes this onto the corresponding program for
handling, and queues any response back with the client.
* virNetServerClient: Encapsulates a single client connection.
All I/O for the client is handled, reading & writing RPC
messages.
* virNetServerProgram: Handles processing and dispatch of
RPC method calls for a single RPC (program,version).
Multiple programs can be registered with the server.
* virNetServerService: Encapsulates socket(s) listening for
new connections. Each service listens on a single host/port,
but may have multiple sockets if on a dual IPv4/6 host.
Each new daemon now merely has to define the list of RPC procedures
& their handlers. It does not need to deal with any network related
functionality at all.
This extends the basic virNetSocket APIs to allow them to have
a handle to the TLS/SASL session objects, once established.
This ensures that any data reads/writes are automagically
passed through the TLS/SASL encryption layers if required.
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Wire up
SASL/TLS encryption
This provides two modules for handling SASL
* virNetSASLContext provides the process-wide state, currently
just a whitelist of usernames on the server and a one time
library init call
* virNetSASLSession provides the per-connection state, ie the
SASL session itself. This also includes APIs for providing
data encryption/decryption once the session is established
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnetsaslcontext.c, src/rpc/virnetsaslcontext.h: Generic
SASL handling code
This provides two modules for handling TLS
* virNetTLSContext provides the process-wide state, in particular
all the x509 credentials, DH params and x509 whitelists
* virNetTLSSession provides the per-connection state, ie the
TLS session itself.
The virNetTLSContext provides APIs for validating a TLS session's
x509 credentials. The virNetTLSSession includes APIs for performing
the initial TLS handshake and sending/receiving encrypted data
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnettlscontext.c, src/rpc/virnettlscontext.h: Generic
TLS handling code
Introduces a simple wrapper around the raw POSIX sockets APIs
and name resolution APIs. Allows for easy creation of client
and server sockets with correct usage of name resolution APIs
for protocol agnostic socket setup.
It can listen for UNIX and TCP stream sockets.
It can connect to UNIX or TCP streams directly, or indirectly
to UNIX sockets via an SSH tunnel or an external command
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Generic
sockets APIs
* tests/Makefile.am: Add socket test
* tests/virnetsockettest.c: New test case
* tests/testutils.c: Avoid overriding LIBVIRT_DEBUG settings
* tests/ssh.c: Dumb helper program for SSH tunnelling tests
This provides a new struct that contains a buffer for the RPC
message header+payload, as well as a decoded copy of the message
header. There is an API for applying an XDR encoding & decoding
of the message headers and payloads. There are also APIs for
maintaining a simple FIFO queue of message instances.
Expected usage scenarios are:
To send a message
msg = virNetMessageNew()
...fill in msg->header fields..
virNetMessageEncodeHeader(msg)
...look at msg->header fields to determine payload filter
virNetMessageEncodePayload(msg, xdrfilter, data)
...send msg->bufferLength worth of data from buffer
To receive a message
msg = virNetMessageNew()
...read VIR_NET_MESSAGE_LEN_MAX of data into buffer
virNetMessageDecodeLength(msg)
...read msg->bufferLength-msg->bufferOffset of data into buffer
virNetMessageDecodeHeader(msg)
...look at msg->header fields to determine payload filter
virNetMessageDecodePayload(msg, xdrfilter, data)
...run payload processor
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnetmessage.c, src/rpc/virnetmessage.h: Internal
message handling API.
* testutils.c, testutils.h: Helper for printing binary differences
* virnetmessagetest.c: Validate all XDR encoding/decoding
This patch defines the basics of a generic RPC protocol in XDR.
This is wire ABI compatible with the original remote_protocol.x.
It takes everything except for the RPC calls / events from that
protocol
- The basic header virNetMessageHeader (aka remote_message_header)
- The error object virNetMessageError (aka remote_error)
- Two dummy objects virNetMessageDomain & virNetMessageNetwork
sadly needed to keep virNetMessageError ABI compatible with
the old remote_error
The RPC protocol supports method calls, async events and
bidirectional data streams as before
* src/Makefile.am: Add rules for generating RPC code from
protocol & define a new libvirt-net-rpc.la helper library
* src/rpc/virnetprotocol.x: New generic RPC protocol