Coverity spotted a use of a possibly undefined variable. If a server is
restarting as a result of an update, the JSON file that keeps the current
values of some variables will not contain the new variables. This is
the case for @max_anonymous_clients too. We correctly query whether
"max_anonymous_clients" is present in the JSON; however, we do not
set a sane default if it is absent.
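A minimal sketch of the shape of the fix, assuming the virJSONValue
helpers used elsewhere in this file ('object' is the parsed JSON
document; the variable name and error label are illustrative):

    unsigned int max_anonymous_clients = 0;   /* sane default when the key is absent */

    if (virJSONValueObjectHasKey(object, "max_anonymous_clients") &&
        virJSONValueObjectGetNumberUint(object, "max_anonymous_clients",
                                        &max_anonymous_clients) < 0)
        goto error;   /* key present but malformed */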
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=992980
This config tunable allows users to set the maximum number of clients
that have been accepted but not yet authenticated.
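For illustration only, the tunable would be set in libvirtd.conf along
these lines (the value shown is made up, not the patch's default):

    # Cap the number of accepted but not yet authenticated connections
    max_anonymous_clients = 20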
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The counter is incremented for each unauthenticated client added to the
server and decremented whenever a client authenticates.
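A rough sketch of the counting logic; the field and helper names are
assumed, not taken verbatim from the patch:

    /* when registering a freshly accepted client */
    if (virNetServerClientNeedAuth(client))
        srv->nclients_unauth++;    /* accepted, not yet authenticated */

    /* ... and once that client completes authentication ... */
    srv->nclients_unauth--;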
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Systemd has not forgotten about the case where a client service needs
to wait for a daemon service to initialize and start accepting new
clients. Setting a dependency in the client is not enough, as systemd
doesn't know when the daemon has initialized itself and started
accepting new clients. However, it offers a mechanism to solve this:
the daemon calls a special systemd function by which it announces "I'm
ready to accept new clients". This is exactly what we need with
libvirt-guests (client) and libvirtd (daemon). So now, with this
change, libvirt-guests.service is invoked no sooner than
libvirtd.service calls the systemd notify function.
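Conceptually, the daemon side boils down to a readiness notification
once initialization completes. A sketch using the libsystemd helper
(libvirt may implement the notification protocol itself rather than
linking against libsystemd):

    #include <systemd/sd-daemon.h>

    /* ... after drivers are initialized and sockets accept clients ... */
    sd_notify(0, "READY=1");   /* tell systemd to start dependent units */

On the unit side this pairs with Type=notify in libvirtd.service and
an After=libvirtd.service ordering in libvirt-guests.service.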
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
There are still two places where we use a 1-bit wide unsigned integer
to store a boolean. There's no real need for this, and these
occurrences can be replaced with 'bool'.
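For example (the field name is illustrative):

    /* before: a 1-bit wide unsigned bitfield standing in for a boolean */
    unsigned int wantClose : 1;

    /* after: plain bool states the intent directly */
    bool wantClose;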
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Currently, even if the max_clients limit is hit, we accept() an incoming
connection request but close it immediately. This has the disadvantage
of not using the listen() queue. We should accept() only those clients
we know we can serve and let all others wait in the (limited) queue.
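The idea in sketch form, with assumed function and field names: toggle
polling of the listening sockets so the kernel's backlog does the
queueing:

    /* when full, stop watching the listening sockets; pending
     * connections wait in the listen() backlog instead of being
     * accept()ed and immediately closed */
    if (srv->nclients == srv->nclients_max)
        virNetServerUpdateServices(srv, false);

    /* ... later, when a client disconnects and frees a slot ... */
    if (srv->nclients == srv->nclients_max - 1)
        virNetServerUpdateServices(srv, true);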
Convert the type of loop iterators named 'i', 'j', 'k',
'ii', 'jj', 'kk' to be 'size_t' instead of 'int' or
'unsigned int', also sanitizing 'ii', 'jj', 'kk' to use
the normal 'i', 'j', 'k' naming.
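For example (the loop body is illustrative):

    /* before */
    int ii;
    for (ii = 0; ii < srv->nservices; ii++)
        virObjectUnref(srv->services[ii]);

    /* after */
    size_t i;
    for (i = 0; i < srv->nservices; i++)
        virObjectUnref(srv->services[i]);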
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
We need to drop the server lock before calling virObjectUnlock(client)
since, if we held the last reference to the client, its dispose
callback would be called, and that could try to lock the server
and cause a deadlock. This is exactly what happens when there is only
one QEMU domain running and it is marked to be autodestroyed when the
connection dies. This results in the qemuProcessAutoDestroy ->
qemuProcessStop -> virNetServerRemoveShutdownInhibition call sequence,
where the last function locks the server.
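In sketch form (the surrounding code shape is assumed):

    virObjectLock(srv);
    /* ... drop the client from srv->clients ... */
    virObjectUnlock(srv);      /* release the server lock first */

    virObjectUnlock(client);   /* may trigger the dispose callback, which
                                * can re-enter the server lock via
                                * qemuProcessAutoDestroy -> qemuProcessStop
                                * -> virNetServerRemoveShutdownInhibition */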
Currently all classes must inherit directly from virObject. This
change allows for an arbitrarily deep hierarchy. There's not much
to it aside from chaining up the 'dispose' handlers from
each class and providing APIs to check types.
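A sketch of the resulting pattern; the subclass and the exact
virClassNew() argument list are illustrative:

    /* declare a subclass of some existing class */
    myClass = virClassNew(parentClass,
                          "myObject",
                          sizeof(myObject),
                          myObjectDispose);

    /* On the final virObjectUnref(), dispose handlers run from the
     * subclass up through each parent to virObject itself, and the
     * type-check APIs can test membership anywhere in the hierarchy. */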
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
RHEL 6.3 uses dbus-devel-1.2.24, which lacks support for the
DBUS_TYPE_UNIX_FD define (contrast with Fedora 18 using 1.6.8).
But since it is an older dbus, it also lacks support for the shutdown
inhibitions provided by newer systemd. This fixes a compilation
failure introduced in commit 31330926.
* src/rpc/virnetserver.c (virNetServerAddShutdownInhibition):
Compile out if dbus is too old.
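The shape of the fix, simplified:

    #if defined(HAVE_DBUS) && defined(DBUS_TYPE_UNIX_FD)
        /* register the inhibition over DBus */
    #else
        VIR_DEBUG("Dbus too old, not inhibiting shutdown");
    #endif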
Use the freedesktop inhibition DBus service to prevent host
shutdown or session logout while any VMs are running.
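Roughly, the call made to logind looks like this (a trimmed sketch; the
real code adds error handling and extracts the returned file
descriptor):

    DBusMessage *msg = dbus_message_new_method_call(
        "org.freedesktop.login1",           /* service */
        "/org/freedesktop/login1",          /* object path */
        "org.freedesktop.login1.Manager",   /* interface */
        "Inhibit");                         /* method */

    const char *what = "shutdown";
    const char *who  = "Libvirt";
    const char *why  = "Virtual machines are running";
    const char *mode = "delay";

    dbus_message_append_args(msg,
                             DBUS_TYPE_STRING, &what,
                             DBUS_TYPE_STRING, &who,
                             DBUS_TYPE_STRING, &why,
                             DBUS_TYPE_STRING, &mode,
                             DBUS_TYPE_INVALID);

The reply carries a UNIX file descriptor; the inhibition stays active
for as long as that FD is held open, and closing it once the last VM
stops releases the lock.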
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently, to deal with auto-shutdown, libvirtd must periodically
poll all stateful drivers. This sucks because it requires
acquiring both the driver lock and locks on every single virtual
machine. Instead, pass an "inhibit" callback to virStateInitialize
which drivers can invoke whenever they want to inhibit shutdown
due to the existence of active VMs.
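In sketch form (the typedef and driver fields are assumptions based on
the description above):

    typedef void (*virStateInhibitCallback)(bool inhibit, void *opaque);

    /* a driver invokes the callback as its count of active VMs
     * transitions between zero and non-zero */
    driver->inhibitCallback(true, driver->inhibitOpaque);    /* first VM started */
    /* ... */
    driver->inhibitCallback(false, driver->inhibitOpaque);   /* last VM stopped */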
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Add two new APIs virNetServerNewPostExecRestart and
virNetServerPreExecRestart which allow a virNetServerPtr
object to be created from a JSON object and saved to a
JSON object, for the purpose of re-exec'ing a process.
This includes serialization of all registered services
and clients.
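A sketch of the intended flow around re-exec; the persistence step in
the middle is assumed:

    /* before exec(): snapshot the server state as JSON */
    virJSONValuePtr state = virNetServerPreExecRestart(srv);
    /* ... serialize 'state' somewhere the new process can find it,
     * then re-exec ourselves ... */

    /* after exec(): rebuild the server, with its registered services
     * and clients, from that JSON */
    virNetServerPtr srv = virNetServerNewPostExecRestart(state /*, ... */);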
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Add two new APIs virNetServerClientNewPostExecRestart and
virNetServerClientPreExecRestart which allow a virNetServerClientPtr
object to be created from a JSON object and saved to a
JSON object, for the purpose of re-exec'ing a process.
This includes serialization of the connected socket associated
with the client.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
https://www.gnu.org/licenses/gpl-howto.html recommends that
the 'If not, see <url>.' phrase be a separate sentence.
* tests/securityselinuxhelper.c: Remove doubled line.
* tests/securityselinuxtest.c: Likewise.
* globally: s/; If/. If/
Currently the virNetServerDispatchNewClient both creates the
virNetServerClientPtr instance and registers it with the
virNetServerPtr internal state. Split the client registration
code out into a separate virNetServerAddClient method to
allow future reuse from other contexts.
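After the split, the dispatch path reduces to roughly this (signatures
assumed):

    static int
    virNetServerDispatchNewClient(virNetServerServicePtr svc,
                                  virNetSocketPtr clientsock,
                                  void *opaque)
    {
        virNetServerPtr srv = opaque;
        virNetServerClientPtr client;

        if (!(client = virNetServerClientNew(clientsock /*, ... */)))
            return -1;

        /* registration now lives in its own reusable method */
        return virNetServerAddClient(srv, client);
    }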
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently there is a hook function that is invoked when a
new client connection comes in, which allows an app to
setup private data. This setup will make it difficult to
serialize client state during process re-exec(). Change to
a model where the app registers a callback when creating
the virNetServerPtr instance, which is used to allocate
the client private data immediately during virNetServerClientPtr
construction.
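In sketch form; the callback typedef and the hypothetical
remoteClientNew hook are illustrative:

    typedef void *(*virNetServerClientPrivNewFunc)(virNetServerClientPtr client,
                                                   void *opaque);

    /* registered once, when the server itself is created ... */
    srv = virNetServerNew(/* ... */, remoteClientNew, NULL /* opaque */);

    /* ... and invoked from inside virNetServerClientNew(), so the
     * private data exists as soon as the client object does */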
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently the virNetServerServicePtr is responsible for
creating the virNetServerClientPtr instance when accepting
a new connection. Change this so that the virNetServerServicePtr
merely gives virNetServerPtr a virNetSocketPtr instance. The
virNetServerPtr can then create the virNetServerClientPtr
as it desires.
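The dispatch callback changes shape roughly like this (typedef name and
exact parameters assumed):

    /* before: the service built the client itself */
    typedef int (*virNetServerServiceDispatchFunc)(virNetServerServicePtr svc,
                                                   virNetServerClientPtr client,
                                                   void *opaque);

    /* after: the service hands over only the accepted socket and
     * virNetServer constructs the client however it wants */
    typedef void (*virNetServerServiceDispatchFunc)(virNetServerServicePtr svc,
                                                    virNetSocketPtr sock,
                                                    void *opaque);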
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Commit 1f6f723 missed a step. At first I was worried that scrubbing
the conditionals would lead to a runtime failure when compiled without
avahi, but my testing makes it appear that the runtime error will only
occur if the .conf files in /etc request mdns advertisement. The
old behavior was to silently ignore the request, so this is actually
better behavior: failing only when the config requests the
impossible.
* src/rpc/virnetserver.c: Drop HAVE_AVAHI conditionals; all
callers already passed NULL if mdns_adv was not configured.
Since the FSF address can change from time to time, GNU now
recommends the following (http://www.gnu.org/licenses/gpl-howto.html):
You should have received a copy of the GNU General Public License
along with Foobar. If not, see <http://www.gnu.org/licenses/>.
This patch removes the explicit FSF address and uses the above instead
(of course, inserting 'Lesser' before 'General' where appropriate).
Except for a bunch of security driver files, all others were changed
automatically; the copyright headers of the security files are not
complete, which is why they were changed manually:
src/security/security_selinux.h
src/security/security_driver.h
src/security/security_selinux.c
src/security/security_apparmor.h
src/security/security_apparmor.c
src/security/security_driver.c
To allow virNetServerRun/virNetServerQuit to be invoked multiple
times, we must reset the 'quit' flag in virNetServerRun.
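The fix in sketch form (lock helpers and field names assumed):

    void virNetServerRun(virNetServerPtr srv)
    {
        virNetServerLock(srv);
        srv->quit = false;      /* reset, so Run can follow an earlier Quit */

        while (!srv->quit) {
            /* ... run one iteration of the event loop ... */
        }
        virNetServerUnlock(srv);
    }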
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Refactor the RPC server dispatcher code so that if 'max_workers==0'
the entire server will run single threaded. This is useful for
use cases where there will only ever be a single client connected,
which serializes its requests.
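Sketch of the resulting dispatch decision; the surrounding names are
assumed:

    if (srv->workers) {
        /* normal case: queue the job for a worker thread */
        if (virThreadPoolSendJob(srv->workers, 0, job) < 0)
            goto error;
    } else {
        /* max_workers == 0: no pool, process the message inline,
         * single threaded */
        virNetServerProcessMsg(srv, client, prog, msg);
    }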
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The callback that is invoked when a new RPC client is
initialized does not have any opaque parameter. Add
one so that custom data can be passed into the callback.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The test of the ref count is not protected by a lock, which is unsafe
because the ref count may be changed by other threads during the test.
This patch fixes that.
When shutting down libvirtd, the virNetServer shutdown can deadlock
if there are in-flight jobs being handled by virNetServerHandleJob().
virNetServerFree() will acquire the virNetServer lock and call
virThreadPoolFree() to terminate the workers, waiting for the workers
to finish. But in-flight workers will attempt to acquire the
virNetServer lock, resulting in deadlock.
Fix the deadlock by unlocking the virNetServer lock before calling
virThreadPoolFree(). This is safe since the virNetServerPtr object
is ref-counted and only decrementing the ref count needs to be
protected. Additionally, there is no need to re-acquire the lock
after virThreadPoolFree() completes as all the workers have
terminated.
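In sketch form:

    void virNetServerFree(virNetServerPtr srv)
    {
        virNetServerLock(srv);
        if (--srv->refs > 0) {     /* refcount change stays under the lock */
            virNetServerUnlock(srv);
            return;
        }

        /* drop the lock before tearing down the pool, so in-flight
         * workers can acquire it and finish their jobs */
        virNetServerUnlock(srv);
        virThreadPoolFree(srv->workers);   /* waits for workers to exit */

        /* no re-lock needed: every worker has terminated by now */
        /* ... release remaining resources ... */
    }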
POSIX says that sa_sigaction is only safe to use if sa_flags
includes SA_SIGINFO; conversely, sa_handler is only safe to
use when flags excludes that bit. Gnulib doesn't guarantee
an implementation of SA_SIGINFO, but does guarantee that
if SA_SIGINFO is undefined, we can safely define it to 0 as
long as we don't dereference the 2nd or 3rd argument of
any handler otherwise registered via sa_sigaction.
Based on a report by Wen Congyang.
* src/rpc/virnetserver.c (SA_SIGINFO): Stub for mingw.
(virNetServerSignalHandler): Avoid bogus dereference.
(virNetServerFatalSignal, virNetServerNew): Set flags properly.
(virNetServerAddSignalHandler): Drop unneeded #ifdef.
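The stub plus the registration pattern it enables, trimmed down
(signum stands for whichever signal is being registered):

    /* mingw: gnulib may leave SA_SIGINFO undefined, so stub it to 0 */
    #ifndef SA_SIGINFO
    # define SA_SIGINFO 0
    #endif

    struct sigaction sig_action;

    memset(&sig_action, 0, sizeof(sig_action));
    sig_action.sa_sigaction = virNetServerSignalHandler;
    sig_action.sa_flags = SA_SIGINFO;   /* POSIX requirement for sa_sigaction */
    sigemptyset(&sig_action.sa_mask);
    sigaction(signum, &sig_action, NULL);

    /* inside the handler, the 2nd and 3rd arguments must not be
     * dereferenced when SA_SIGINFO expanded to 0 */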
The HAL driver requires a DBus connection. The HAL device code
further requires that the DBus connection is integrated with the
event loop, and provides such glue logic itself.
The forthcoming FirewallD integration also requires a
DBus connection with event loop integration. Thus we need
to pull the current event loop glue out of the HAL driver.
To this end we create the src/util/virdbus.{c,h} files. These contain
just one method, virDBusGetSystemBus(), which obtains a handle
to the single shared system bus instance, with the event glue
automagically set up.
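Usage then reduces to the following (the error-reporting convention is
assumed):

    #include "virdbus.h"

    DBusConnection *sysbus;

    if (!(sysbus = virDBusGetSystemBus()))
        return -1;   /* bus unavailable; error already reported */

    /* the returned connection is the shared system bus instance,
     * already integrated with the libvirt event loop */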
The RPC server classes are extended to allow FDs to be received
from clients with calls. There is currently no way for a
procedure to pass FDs back to the client with replies.
* daemon/remote.c, src/rpc/gendispatch.pl: Change virNetMessageHeaderPtr
param to virNetMessagePtr in dispatcher impls
* src/rpc/virnetserver.c, src/rpc/virnetserverclient.c,
src/rpc/virnetserverprogram.c, src/rpc/virnetserverprogram.h:
Extend to support FD passing
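A sketch of what a dispatcher can now do with the full message; the
helper name and message fields are assumed:

    static int
    remoteDispatchExample(/* ... */ virNetMessagePtr msg /* ... */)
    {
        int fd;

        /* FDs sent by the client arrive attached to the call message */
        if (msg->nfds < 1)
            return -1;
        if ((fd = virNetMessageDupFD(msg, 0)) < 0)
            return -1;

        /* ... use fd; replies cannot yet carry FDs back ... */
        return 0;
    }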