QEMU Driver Threading: The Rules
================================

|
This document describes how thread safety is ensured throughout
|
|
|
|
the QEMU driver. The criteria for this model are:
|
|
|
|
|
 - Objects must never be exclusively locked for any prolonged time
 - Code which sleeps must be able to time out after a suitable period
 - Must be safe against dispatch of asynchronous events from the monitor


Basic locking primitives
------------------------

There are a number of locks on various objects:

 * struct qemud_driver: RWLock

   This is the top-level lock on the entire driver. Every API call in
   the QEMU driver is blocked while this is held, though some internal
   callbacks may still run asynchronously. This lock must never be held
   for anything which sleeps/waits (i.e. monitor commands).

   When obtaining the driver lock, under *NO* circumstances may any
   lock be held on a virDomainObjPtr. Doing so *WILL* result in
   deadlock.

 * virDomainObjPtr: Mutex

   Will be locked after calling any of the virDomainFindBy{ID,Name,UUID}
   methods.

   The lock must be held when changing/reading any variable in the
   virDomainObjPtr.

   Once the lock is held, you must *NOT* try to lock the driver. You must
   release all virDomainObjPtr locks before locking the driver, or deadlock
   *WILL* occur.

   If the lock needs to be dropped & then re-acquired for a short period
   of time, the reference count must be incremented first using
   virDomainObjRef(). If the reference count is incremented in this way,
   it is not necessary to hold the driver lock when re-acquiring the
   dropped lock, since the reference count prevents the object being
   freed by another thread.

   This lock must not be held for anything which sleeps/waits (i.e.
   monitor commands).

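   For illustration, a minimal sketch of the drop & re-acquire pattern
   (assuming 'obj' is a virDomainObjPtr that is currently locked):

     virDomainObjRef(obj);
     virDomainObjUnlock(obj);

     ...do something that must not hold the lock...

     virDomainObjLock(obj);          /* safe: our reference keeps obj alive */
     if (virDomainObjUnref(obj) == 0)
         obj = NULL;                 /* we dropped the last reference */
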
 * qemuMonitorPrivatePtr: Job conditions

   Since the virDomainObjPtr lock must not be held during sleeps, the job
   conditions provide additional protection for code making updates.

   The QEMU driver uses two kinds of job conditions: asynchronous and
   normal.

   The asynchronous job condition is used for long-running jobs (such as
   migration) that consist of several monitor commands and for which it
   is desirable to allow calling a limited set of other monitor commands
   while the job is running. This allows clients to, e.g., query
   statistical data, cancel the job, or change its parameters.

   The normal job condition is used by all other jobs to get exclusive
   access to the monitor, and also by every monitor command issued by an
   asynchronous job. When acquiring the normal job condition, the job must
   specify what kind of action it is about to take; this is checked
   against the set of jobs allowed by any asynchronous job that is
   running. If the job is incompatible with the current asynchronous job,
   it must wait until the asynchronous job ends and then try to acquire
   the job condition again.

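   Conceptually the compatibility check looks roughly like the sketch
   below (modelled on the driver's internal helper; the allowed set is
   assumed to be kept as a bit mask in priv->job.mask):

     static bool
     qemuDomainJobAllowed(qemuDomainObjPrivatePtr priv, enum qemuDomainJob job)
     {
         /* Any job may run when no async job is active; otherwise the
          * job must be listed in the async job's allowed-job mask. */
         return !priv->job.asyncJob || (priv->job.mask & JOB_MASK(job)) != 0;
     }
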
   Immediately after acquiring the virDomainObjPtr lock, any method
   which intends to update state must acquire either the asynchronous
   or the normal job condition. The virDomainObjPtr lock is released
   while blocking on these condition variables. Once the job condition
   is acquired, a method can safely release the virDomainObjPtr lock
   whenever it hits a piece of code which may sleep/wait, and
   re-acquire it after the sleep/wait. Whenever an asynchronous job
   wants to talk to the monitor, it needs to acquire a nested job (a
   special kind of normal job) to obtain exclusive access to the
   monitor.

   Since the virDomainObjPtr lock was dropped while waiting for the
   job condition, it is possible that the domain is no longer active
   when the condition is finally obtained. The monitor lock is only
   safe to grab after verifying that the domain is still active.

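   That verification typically takes the following (illustrative) shape,
   where 'endjob' is a hypothetical cleanup label in the caller:

     if (!virDomainObjIsActive(obj)) {
         qemuReportError(VIR_ERR_OPERATION_INVALID,
                         "%s", _("domain is not running"));
         goto endjob;
     }
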
 * qemuMonitorPtr: Mutex

   Lock to be used when invoking any monitor command to ensure safety
   with respect to any asynchronous events that may be dispatched from
   the monitor. It should be acquired before running a command.

   The job condition *MUST* be held before acquiring the monitor lock.

   The virDomainObjPtr lock *MUST* be held before acquiring the monitor
   lock.

   The virDomainObjPtr lock *MUST* then be released when invoking the
   monitor command.

   The driver lock *MUST* be released when invoking monitor commands.

   This ensures that the virDomainObjPtr & driver are both unlocked while
   sleeping/waiting for the monitor response.


Helper methods
--------------

To lock the driver

  qemuDriverLock()
    - Acquires the driver lock

  qemuDriverUnlock()
    - Releases the driver lock


To lock the virDomainObjPtr

  virDomainObjLock()
    - Acquires the virDomainObjPtr lock

  virDomainObjUnlock()
    - Releases the virDomainObjPtr lock


To acquire the normal job condition

  qemuDomainObjBeginJob()           (if driver is unlocked)
    - Increments ref count on virDomainObjPtr
    - Waits until the job is compatible with the current async job or no
      async job is running
    - Waits on the job.cond condition (using the virDomainObjPtr mutex)
      while 'job.active != 0'
    - Rechecks if the job is still compatible and repeats waiting if it
      isn't
    - Sets job.active to the job type

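  As an illustrative sketch, the wait loop inside qemuDomainObjBeginJob()
  is roughly the following (timeouts and error handling omitted; the real
  code waits with a deadline so that it can give up after a while):

      virDomainObjRef(obj);
  retry:
      while (!qemuDomainJobAllowed(priv, job))
          virCondWait(&priv->job.asyncCond, &obj->lock);
      while (priv->job.active)
          virCondWait(&priv->job.cond, &obj->lock);
      if (!qemuDomainJobAllowed(priv, job))
          goto retry;     /* an incompatible async job started meanwhile */
      priv->job.active = job;
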
  qemuDomainObjBeginJobWithDriver() (if driver needs to be locked)
    - Increments ref count on virDomainObjPtr
    - Unlocks driver
    - Waits until the job is compatible with the current async job or no
      async job is running
    - Waits on the job.cond condition (using the virDomainObjPtr mutex)
      while 'job.active != 0'
    - Rechecks if the job is still compatible and repeats waiting if it
      isn't
    - Sets job.active to the job type
    - Unlocks virDomainObjPtr
    - Locks driver
    - Locks virDomainObjPtr

   NB: this variant is required in order to comply with lock ordering
   rules for virDomainObjPtr vs. driver

  qemuDomainObjEndJob()
    - Sets job.active to 0
    - Signals on job.cond condition
    - Decrements ref count on virDomainObjPtr

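  The ending side is correspondingly simple; as a sketch:

      priv->job.active = QEMU_JOB_NONE;
      virCondSignal(&priv->job.cond);
      if (virDomainObjUnref(obj) == 0)
          obj = NULL;     /* the domain object is gone */
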

To acquire the asynchronous job condition

  qemuDomainObjBeginAsyncJob()      (if driver is unlocked)
    - Increments ref count on virDomainObjPtr
    - Waits until no async job is running
    - Waits on the job.cond condition (using the virDomainObjPtr mutex)
      while 'job.active != 0'
    - Rechecks if any async job was started while waiting on job.cond
      and repeats waiting in that case
    - Sets job.asyncJob to the asynchronous job type

  qemuDomainObjBeginAsyncJobWithDriver() (if driver needs to be locked)
    - Increments ref count on virDomainObjPtr
    - Unlocks driver
    - Waits until no async job is running
    - Waits on the job.cond condition (using the virDomainObjPtr mutex)
      while 'job.active != 0'
    - Rechecks if any async job was started while waiting on job.cond
      and repeats waiting in that case
    - Sets job.asyncJob to the asynchronous job type
    - Unlocks virDomainObjPtr
    - Locks driver
    - Locks virDomainObjPtr

   NB: this variant is required in order to comply with lock ordering
   rules for virDomainObjPtr vs. driver

  qemuDomainObjEndAsyncJob()
    - Sets job.asyncJob to 0
    - Broadcasts on job.asyncCond condition
    - Decrements ref count on virDomainObjPtr


To acquire the QEMU monitor lock

  qemuDomainObjEnterMonitor()
    - Acquires the qemuMonitorObjPtr lock
    - Releases the virDomainObjPtr lock

  qemuDomainObjExitMonitor()
    - Releases the qemuMonitorObjPtr lock
    - Acquires the virDomainObjPtr lock

   NB: caller must take care to drop the driver lock if necessary

   These functions must not be used by an asynchronous job.


To acquire the QEMU monitor lock with the driver lock held

  qemuDomainObjEnterMonitorWithDriver()
    - Acquires the qemuMonitorObjPtr lock
    - Releases the virDomainObjPtr lock
    - Releases the driver lock

  qemuDomainObjExitMonitorWithDriver()
    - Releases the qemuMonitorObjPtr lock
    - Acquires the driver lock
    - Acquires the virDomainObjPtr lock

   NB: caller must take care to drop the driver lock if necessary

   These functions must not be used inside an asynchronous job.


To acquire the QEMU monitor lock with the driver lock held and as part
of an asynchronous job

  qemuDomainObjEnterMonitorAsync()
    - Validates that the right async job is still running
    - Acquires the qemuMonitorObjPtr lock
    - Releases the virDomainObjPtr lock
    - Releases the driver lock
    - Validates that the VM is still active

  qemuDomainObjExitMonitorWithDriver()
    - Releases the qemuMonitorObjPtr lock
    - Acquires the driver lock
    - Acquires the virDomainObjPtr lock

   NB: caller must take care to drop the driver lock if necessary

   These functions are for use inside an asynchronous job; the caller
   must check for a return of -1 (VM not running, so nothing to exit).
   Helper functions may also call these with QEMU_ASYNC_JOB_NONE when
   used from a sync job (such as when first starting a domain).


To keep a domain alive while waiting on a remote command, starting
with the driver lock held

  qemuDomainObjEnterRemoteWithDriver()
    - Increments ref count on virDomainObjPtr
    - Releases the virDomainObjPtr lock
    - Releases the driver lock

  qemuDomainObjExitRemoteWithDriver()
    - Acquires the driver lock
    - Acquires the virDomainObjPtr lock
    - Decrements ref count on virDomainObjPtr


Design patterns
---------------

 * Accessing or updating something with just the driver

     qemuDriverLock(driver);

     ...do work...

     qemuDriverUnlock(driver);


 * Accessing something directly to do with a virDomainObjPtr

     virDomainObjPtr obj;

     qemuDriverLock(driver);
     obj = virDomainFindByUUID(driver->domains, dom->uuid);
     qemuDriverUnlock(driver);

     ...do work...

     virDomainObjUnlock(obj);


 * Accessing something directly to do with a virDomainObjPtr and driver

     virDomainObjPtr obj;

     qemuDriverLock(driver);
     obj = virDomainFindByUUID(driver->domains, dom->uuid);

     ...do work...

     virDomainObjUnlock(obj);
     qemuDriverUnlock(driver);


 * Updating something directly to do with a virDomainObjPtr

     virDomainObjPtr obj;

     qemuDriverLock(driver);
     obj = virDomainFindByUUID(driver->domains, dom->uuid);
     qemuDriverUnlock(driver);

     qemuDomainObjBeginJob(obj, QEMU_JOB_TYPE);

     ...do work...

     qemuDomainObjEndJob(obj);

     virDomainObjUnlock(obj);


 * Invoking a monitor command on a virDomainObjPtr

     virDomainObjPtr obj;
     qemuDomainObjPrivatePtr priv;

     qemuDriverLock(driver);
     obj = virDomainFindByUUID(driver->domains, dom->uuid);
     qemuDriverUnlock(driver);

     qemuDomainObjBeginJob(obj, QEMU_JOB_TYPE);

     ...do prep work...

     if (virDomainObjIsActive(obj)) {
         qemuDomainObjEnterMonitor(obj);
         qemuMonitorXXXX(priv->mon);
         qemuDomainObjExitMonitor(obj);
     }

     ...do final work...

     qemuDomainObjEndJob(obj);
     virDomainObjUnlock(obj);


 * Invoking a monitor command on a virDomainObjPtr with the driver locked too

     virDomainObjPtr obj;
     qemuDomainObjPrivatePtr priv;

     qemuDriverLock(driver);
     obj = virDomainFindByUUID(driver->domains, dom->uuid);

     qemuDomainObjBeginJobWithDriver(obj, QEMU_JOB_TYPE);

     ...do prep work...

     if (virDomainObjIsActive(obj)) {
         qemuDomainObjEnterMonitorWithDriver(driver, obj);
         qemuMonitorXXXX(priv->mon);
         qemuDomainObjExitMonitorWithDriver(driver, obj);
     }

     ...do final work...

     qemuDomainObjEndJob(obj);
     virDomainObjUnlock(obj);
     qemuDriverUnlock(driver);


 * Running an asynchronous job with the driver lock held

     virDomainObjPtr obj;
     qemuDomainObjPrivatePtr priv;

     qemuDriverLock(driver);
     obj = virDomainFindByUUID(driver->domains, dom->uuid);

     qemuDomainObjBeginAsyncJobWithDriver(obj, QEMU_ASYNC_JOB_TYPE);
     qemuDomainObjSetAsyncJobMask(obj, allowedJobs);

     ...do prep work...

     if (qemuDomainObjEnterMonitorAsync(driver, obj,
                                        QEMU_ASYNC_JOB_TYPE) < 0) {
         /* domain died in the meantime */
         goto error;
     }
     ...start qemu job...
     qemuDomainObjExitMonitorWithDriver(driver, obj);

     while (!finished) {
         if (qemuDomainObjEnterMonitorAsync(driver, obj,
                                            QEMU_ASYNC_JOB_TYPE) < 0) {
             /* domain died in the meantime */
             goto error;
         }
         ...monitor job progress...
         qemuDomainObjExitMonitorWithDriver(driver, obj);

         virDomainObjUnlock(obj);
         sleep(aWhile);
         virDomainObjLock(obj);
     }

     ...do final work...

     qemuDomainObjEndAsyncJob(obj);
     virDomainObjUnlock(obj);
     qemuDriverUnlock(driver);


 * Coordinating with a remote server for migration

     virDomainObjPtr obj;
     qemuDomainObjPrivatePtr priv;

     qemuDriverLock(driver);
     obj = virDomainFindByUUID(driver->domains, dom->uuid);

     qemuDomainObjBeginAsyncJobWithDriver(obj, QEMU_ASYNC_JOB_TYPE);

     ...do prep work...

     if (virDomainObjIsActive(obj)) {
         qemuDomainObjEnterRemoteWithDriver(driver, obj);
         ...communicate with remote...
         qemuDomainObjExitRemoteWithDriver(driver, obj);
         /* domain may have been stopped while we were talking to remote */
         if (!virDomainObjIsActive(obj)) {
             qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                             _("guest unexpectedly quit"));
         }
     }

     ...do final work...

     qemuDomainObjEndAsyncJob(obj);
     virDomainObjUnlock(obj);
     qemuDriverUnlock(driver);


Summary
-------

  * Respect the lock ordering rules: never lock the driver if anything
    else is already locked

  * Don't hold locks in code which sleeps: unlock the driver &
    virDomainObjPtr when using the monitor