075bb5f1aa
The virStateInitialize() call for starting up stateful drivers may
require that the event loop is already running. Thus it is necessary
to start the event loop before this call. At the same time, network
clients must not be processed until after virStateInitialize() has
completed. The qemudListenUnix() and remoteListenTCP() methods must
therefore not register file handle watches, merely open the network
sockets & listen() on them. This means clients can connect and are
queued, pending completion of initialization. The qemudRunLoop()
method is moved into a background thread that is started early, to
allow access to the event loop during driver initialization. The main
process thread leader pretty much does nothing once the daemon is
running, merely waiting for the event loop thread to quit.

* daemon/libvirtd.c, daemon/libvirtd.h: Move event loop into a
  background thread
* daemon/THREADING.txt: Rewrite docs to better reflect reality
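
To make that ordering concrete, here is a minimal sketch of the
startup sequence, assuming plain pthreads. This is not the actual
libvirtd code: run_event_loop() is a hypothetical stand-in for
qemudRunLoop(), and the numbered comments mark where the real daemon
opens its sockets, calls virStateInitialize(), and registers watches.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical stand-in for qemudRunLoop(): sits in poll() on
     * registered file handles and dispatches timers until asked to
     * quit. */
    static void *run_event_loop(void *opaque)
    {
        (void)opaque;
        /* ... poll() / dispatch loop ... */
        return NULL;
    }

    int main(void)
    {
        pthread_t loop;

        /* 1. Open and listen() on the network sockets, but do NOT
         *    register file handle watches yet; connecting clients
         *    simply queue up, unprocessed. */

        /* 2. Start the event loop in a background thread, because
         *    driver initialization may need a running event loop. */
        if (pthread_create(&loop, NULL, run_event_loop, NULL) != 0) {
            perror("pthread_create");
            return EXIT_FAILURE;
        }

        /* 3. Initialize the stateful drivers (virStateInitialize()
         *    in libvirtd). */

        /* 4. Only now register watches on the listener sockets so
         *    the queued clients finally get serviced. */

        /* 5. The process leader has nothing more to do: wait for
         *    the event loop thread to quit, i.e. an orderly
         *    shutdown. */
        pthread_join(loop, NULL);
        return EXIT_SUCCESS;
    }
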
Threading in the libvirtd daemon
================================

To allow efficient processing of RPC requests, the libvirtd daemon
makes use of threads.

- The process leader. This is the initial thread of control
  when the daemon starts running. It is responsible for
  initializing all the state, and starting the event loop.
  Once that's all done, this thread does nothing except
  wait for the event loop to quit, thus indicating an orderly
  shutdown is required.

- The event loop. This thread runs the event loop, sitting
  in poll() on all monitored file handles, and calculating
  and dispatching any timers that may be registered. When
  this thread quits, the entire daemon will shut down.

- The workers. These 'n' threads all sit around waiting to
  process incoming RPC requests. Since RPC requests may take
  a long time to complete, with long idle periods, there will
  be quite a few workers running (see the sketch below).

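As a rough illustration (again, not the actual libvirtd code), such
a pool might be spun up as follows; N_WORKERS, worker() and
start_workers() are hypothetical names, and the worker body is
fleshed out in a later sketch:

    #include <pthread.h>

    #define N_WORKERS 5   /* a fixed 'n' for this sketch */

    /* Hypothetical worker body: each thread blocks until a job is
     * queued, processes it, and goes back to waiting. */
    static void *worker(void *opaque)
    {
        (void)opaque;
        for (;;) {
            /* wait on the job condition, dequeue an RPC job, run it */
        }
    }

    static int start_workers(pthread_t *tids)
    {
        for (int i = 0; i < N_WORKERS; i++) {
            if (pthread_create(&tids[i], NULL, worker, NULL) != 0)
                return -1;   /* thread creation failed */
        }
        return 0;
    }
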
The use of threads obviously requires locking to ensure safety when
accessing/changing data structures.

- The top level lock is on 'struct qemud_server'. This must be
  held before acquiring any other lock.

- Each 'struct qemud_client' object has a lock. The server lock
  must be held before acquiring it. Once the client lock is acquired,
  the server lock can (optionally) be dropped, as sketched below.

- The event loop has its own self-contained lock. You can ignore
  this as a caller of virEvent APIs.

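A minimal sketch of that lock ordering, assuming ordinary pthread
mutexes; the struct layouts shown are invented for illustration, not
the real qemud_server / qemud_client definitions:

    #include <pthread.h>

    struct qemud_server { pthread_mutex_t lock; /* ... */ };
    struct qemud_client { pthread_mutex_t lock; /* ... */ };

    /* Honour the ordering rule: server lock first, then the client
     * lock; once the client lock is held, the server lock may
     * (optionally) be dropped. */
    static void with_client_locked(struct qemud_server *srv,
                                   struct qemud_client *client)
    {
        pthread_mutex_lock(&srv->lock);      /* top level lock first   */
        pthread_mutex_lock(&client->lock);   /* then the client's lock */
        pthread_mutex_unlock(&srv->lock);    /* optionally drop server */

        /* ... work on the client object ... */

        pthread_mutex_unlock(&client->lock);
    }
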
The server lock is used in conjunction with a condition variable
to pass jobs from the event loop thread to the workers. The main
event loop thread handles I/O from the client socket, and once a
complete RPC message has been read off the wire (and optionally
decrypted), it will be placed onto the 'dx' job queue for the
associated client object. The job condition will be signalled and
a worker will wake up and process it.

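The event loop side of that handoff might look roughly like this;
struct server, struct job and enqueue_job() are hypothetical
simplifications (the queue is collapsed into a single server-wide
list rather than the real per-client 'dx' queue):

    #include <pthread.h>

    struct job {
        struct job *next;   /* a decoded (and decrypted) RPC message */
    };

    struct server {
        pthread_mutex_t lock;       /* the server lock             */
        pthread_cond_t  job_cond;   /* signalled when a job lands  */
        struct job     *dx;         /* pending job queue           */
    };

    /* Event loop side: a complete RPC message has been read off the
     * wire; queue it and wake one worker. */
    static void enqueue_job(struct server *srv, struct job *j)
    {
        pthread_mutex_lock(&srv->lock);
        j->next = srv->dx;
        srv->dx = j;
        pthread_cond_signal(&srv->job_cond);
        pthread_mutex_unlock(&srv->lock);
    }
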
The worker thread must quickly drop its locks on the server and
client to allow the main event loop thread to continue running
with its other work. Critically important is that no libvirt
API call will ever be made with the server or client locks held.

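Continuing the same hypothetical sketch (reusing struct server and
struct job from above), the worker side waits, dequeues, and drops
the lock before doing any real work:

    /* Worker side: wait for a job, dequeue it, and drop the lock
     * BEFORE processing, so no libvirt API call is ever made with
     * the server (or client) lock held. */
    static void *worker_loop(void *opaque)
    {
        struct server *srv = opaque;

        for (;;) {
            pthread_mutex_lock(&srv->lock);
            while (srv->dx == NULL)
                pthread_cond_wait(&srv->job_cond, &srv->lock);

            struct job *j = srv->dx;
            srv->dx = j->next;
            pthread_mutex_unlock(&srv->lock);  /* dropped quickly */

            /* ... process the RPC request, calling libvirt APIs ... */
        }
    }
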
-- End