QEMU Driver Threading: The Rules
================================

This document describes how thread safety is ensured throughout
the QEMU driver. The criteria for this model are:

 - Objects must never be exclusively locked for any prolonged time
 - Code which sleeps must be able to time out after a suitable period
 - Must be safe against dispatch of asynchronous events from the monitor
Basic locking primitives
------------------------

There are a number of locks on various objects:

 * virQEMUDriverPtr

   The qemu_conf.h file has inline comments describing the locking
   needs for each field. Any field marked immutable or self-locking
   can be accessed without the driver lock. For other fields there
   are typically helper APIs in qemu_conf.c that provide serialized
   access to the data. No code outside qemu_conf.c should ever
   acquire this lock.
 * virDomainObjPtr

   Will be locked after calling any of the
   virDomainObjListFindBy{ID,Name,UUID} methods. However, the preferred
   method is qemuDomObjFromDomain(), which uses
   virDomainObjListFindByUUIDRef(); this also increases the reference
   counter and finds the domain in the domain list without blocking all
   other lookups.

   Once the domain is locked and its reference count increased, the
   preferred way of decrementing the reference counter and unlocking
   the domain is the virDomainObjEndAPI() function.

   The lock must be held when changing/reading any variable in the
   virDomainObjPtr.
   If the lock needs to be dropped and then re-acquired for a short
   period of time, the reference count must be incremented first using
   virDomainObjRef(). There is no need to increase the reference count
   if qemuDomObjFromDomain() was used to look up the domain; in that
   case one reference was already added by that function.

   This lock must not be held for anything which sleeps/waits (i.e.
   monitor commands).
 * qemuMonitorPrivatePtr: Job conditions

   Since the virDomainObjPtr lock must not be held during sleeps, the
   job conditions provide additional protection for code making
   updates.

   The QEMU driver uses two kinds of job conditions: asynchronous and
   normal.

   The asynchronous job condition is used for long running jobs (such
   as migration) that consist of several monitor commands and for
   which it is desirable to allow calling a limited set of other
   monitor commands while the job is running. This allows clients to,
   e.g., query statistical data, cancel the job, or change its
   parameters.

   The normal job condition is used by all other jobs to get exclusive
   access to the monitor, and also by every monitor command issued by
   an asynchronous job. When acquiring the normal job condition, the
   job must specify what kind of action it is about to take; this is
   checked against the set of allowed jobs in case an asynchronous job
   is running. If the job is incompatible with the current
   asynchronous job, it needs to wait until the asynchronous job ends
   and then try to acquire the job again.
   Immediately after acquiring the virDomainObjPtr lock, any method
   which intends to update state must acquire either the asynchronous
   or the normal job condition. The virDomainObjPtr lock is released
   while blocking on these condition variables. Once the job condition
   is acquired, a method can safely release the virDomainObjPtr lock
   whenever it hits a piece of code which may sleep/wait, and
   re-acquire it after the sleep/wait. Whenever an asynchronous job
   wants to talk to the monitor, it needs to acquire a nested job (a
   special kind of normal job) to obtain exclusive access to the
   monitor.

   Since the virDomainObjPtr lock was dropped while waiting for the
   job condition, it is possible that the domain is no longer active
   when the condition is finally obtained. The monitor lock is only
   safe to grab after verifying that the domain is still active.
 * qemuMonitorPtr: Mutex

   Lock to be used when invoking any monitor command, to ensure safety
   with respect to any asynchronous events that may be dispatched from
   the monitor. It should be acquired before running a command.

   The job condition *MUST* be held before acquiring the monitor lock.

   The virDomainObjPtr lock *MUST* be held before acquiring the
   monitor lock.

   The virDomainObjPtr lock *MUST* then be released when invoking the
   monitor command.
Helper methods
--------------

To lock the virDomainObjPtr:

  virObjectLock()
    - Acquires the virDomainObjPtr lock

  virObjectUnlock()
    - Releases the virDomainObjPtr lock


To acquire the normal job condition:

  qemuDomainObjBeginJob()
    - Waits until the job is compatible with the current async job, or
      no async job is running
    - Waits on the job.cond condition (using the virDomainObjPtr
      mutex) while 'job.active != 0'
    - Rechecks whether the job is still compatible and repeats waiting
      if it isn't
    - Sets job.active to the job type

  qemuDomainObjEndJob()
    - Sets job.active to 0
    - Signals on the job.cond condition

To acquire the asynchronous job condition:

  qemuDomainObjBeginAsyncJob()
    - Waits until no async job is running
    - Waits on the job.cond condition (using the virDomainObjPtr
      mutex) while 'job.active != 0'
    - Rechecks whether any async job was started while waiting on
      job.cond and repeats waiting in that case
    - Sets job.asyncJob to the asynchronous job type

  qemuDomainObjEndAsyncJob()
    - Sets job.asyncJob to 0
    - Broadcasts on the job.asyncCond condition

To acquire the QEMU monitor lock:

  qemuDomainObjEnterMonitor()
    - Acquires the qemuMonitorObjPtr lock
    - Releases the virDomainObjPtr lock

  qemuDomainObjExitMonitor()
    - Releases the qemuMonitorObjPtr lock
    - Acquires the virDomainObjPtr lock
  These functions must not be used by an asynchronous job.

  Note that the virDomainObj is unlocked during the time spent in the
  monitor and may change in the meantime; e.g. if QEMU dies,
  qemuProcessStop may free the live domain definition and put the
  persistent definition back in vm->def. Callers should check the
  return value of ExitMonitor to see whether the domain is still
  alive.

To acquire the QEMU monitor lock as part of an asynchronous job:

  qemuDomainObjEnterMonitorAsync()
    - Validates that the right async job is still running
    - Acquires the qemuMonitorObjPtr lock
    - Releases the virDomainObjPtr lock
    - Validates that the VM is still active
  qemuDomainObjExitMonitor()
    - Releases the qemuMonitorObjPtr lock
    - Acquires the virDomainObjPtr lock

  These functions are for use inside an asynchronous job; the caller
  must check for a return of -1 (VM not running, so nothing to exit).
  Helper functions may also call this with QEMU_ASYNC_JOB_NONE when
  used from a sync job (such as when first starting a domain).

To keep a domain alive while waiting on a remote command:

  qemuDomainObjEnterRemote()
    - Releases the virDomainObjPtr lock

  qemuDomainObjExitRemote()
    - Acquires the virDomainObjPtr lock
Design patterns
---------------

 * Accessing something directly to do with a virDomainObjPtr

     virDomainObjPtr obj;

     obj = qemuDomObjFromDomain(dom);

     ...do work...

     virDomainObjEndAPI(&obj);

 * Updating something directly to do with a virDomainObjPtr

     virDomainObjPtr obj;

     obj = qemuDomObjFromDomain(dom);

     qemuDomainObjBeginJob(obj, QEMU_JOB_TYPE);

     ...do work...

     qemuDomainObjEndJob(obj);

     virDomainObjEndAPI(&obj);

 * Invoking a monitor command on a virDomainObjPtr

     virDomainObjPtr obj;
     qemuDomainObjPrivatePtr priv;

     obj = qemuDomObjFromDomain(dom);
     priv = obj->privateData;

     qemuDomainObjBeginJob(obj, QEMU_JOB_TYPE);

     ...do prep work...

     if (virDomainObjIsActive(obj)) {
         qemuDomainObjEnterMonitor(obj);
         qemuMonitorXXXX(priv->mon);
         qemuDomainObjExitMonitor(obj);
     }

     ...do final work...

     qemuDomainObjEndJob(obj);

     virDomainObjEndAPI(&obj);
|
2009-11-03 18:26:32 +00:00
|
|
|
|
|
|
|
|
2013-02-06 18:17:20 +00:00
|
|
|
* Running asynchronous job
|
2011-06-30 09:23:50 +00:00
|
|
|
|
|
|
|
virDomainObjPtr obj;
|
|
|
|
qemuDomainObjPrivatePtr priv;
|
|
|
|
|
qemu: completely rework reference counting
There is one problem that causes various errors in the daemon. When
domain is waiting for a job, it is unlocked while waiting on the
condition. However, if that domain is for example transient and being
removed in another API (e.g. cancelling incoming migration), it get's
unref'd. If the first call, that was waiting, fails to get the job, it
unref's the domain object, and because it was the last reference, it
causes clearing of the whole domain object. However, when finishing the
call, the domain must be unlocked, but there is no way for the API to
know whether it was cleaned or not (unless there is some ugly temporary
variable, but let's scratch that).
The root cause is that our APIs don't ref the objects they are using and
all use the implicit reference that the object has when it is in the
domain list. That reference can be removed when the API is waiting for
a job. And because each domain doesn't do its ref'ing, it results in
the ugly checking of the return value of virObjectUnref() that we have
everywhere.
This patch changes qemuDomObjFromDomain() to ref the domain (using
virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI() which
should be the only function in which the return value of
virObjectUnref() is checked. This makes all reference counting
deterministic and makes the code a bit clearer.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
2014-12-04 13:41:36 +00:00
|
|
|
obj = qemuDomObjFromDomain(dom);
|
2011-06-30 09:23:50 +00:00
|
|
|
|
2013-02-06 18:17:20 +00:00
|
|
|
qemuDomainObjBeginAsyncJob(obj, QEMU_ASYNC_JOB_TYPE);
|
2011-06-30 09:23:50 +00:00
|
|
|
qemuDomainObjSetAsyncJobMask(obj, allowedJobs);
|
|
|
|
|
|
|
|
...do prep work...
|
|
|
|
|
     if (qemuDomainObjEnterMonitorAsync(driver, obj,
                                        QEMU_ASYNC_JOB_TYPE) < 0) {
         /* domain died in the meantime */
         goto error;
     }
     ...start qemu job...
     qemuDomainObjExitMonitor(driver, obj);

     while (!finished) {
         if (qemuDomainObjEnterMonitorAsync(driver, obj,
                                            QEMU_ASYNC_JOB_TYPE) < 0) {
             /* domain died in the meantime */
             goto error;
         }
         ...monitor job progress...
         qemuDomainObjExitMonitor(driver, obj);

         virObjectUnlock(obj);
         sleep(aWhile);
         virObjectLock(obj);
     }

     ...do final work...

     qemuDomainObjEndAsyncJob(obj);
     virDomainObjEndAPI(&obj);

 * Coordinating with a remote server for migration

     virDomainObjPtr obj;
     qemuDomainObjPrivatePtr priv;

     obj = qemuDomObjFromDomain(dom);

     qemuDomainObjBeginAsyncJob(obj, QEMU_ASYNC_JOB_TYPE);

     ...do prep work...

     if (virDomainObjIsActive(vm)) {
         qemuDomainObjEnterRemote(obj);
         ...communicate with remote...
         qemuDomainObjExitRemote(obj);

         /* domain may have been stopped while we were talking to remote */
         if (!virDomainObjIsActive(vm)) {
             qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                             _("guest unexpectedly quit"));
         }
     }

     ...do final work...

     qemuDomainObjEndAsyncJob(obj);
     virDomainObjEndAPI(&obj);