QEMU Driver Threading: The Rules
================================

.. contents::

This document describes how thread safety is ensured throughout
the QEMU driver. The criteria for this model are:

- Objects must never be exclusively locked for any prolonged time
- Code which sleeps must be able to time out after a suitable period
- Must be safe against dispatch of asynchronous events from the monitor


Basic locking primitives
------------------------

There are a number of locks on various objects:

``virQEMUDriver``

The ``qemu_conf.h`` file has inline comments describing the locking
needs for each field. Any field marked immutable or self-locking
can be accessed without the driver lock. For other fields there
are typically helper APIs in ``qemu_conf.c`` that provide serialized
access to the data. No code outside ``qemu_conf.c`` should ever
acquire this lock.
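
For example, per-driver configuration is read through one of those helper
APIs rather than by poking at the driver structure directly. A minimal
sketch (the function name is illustrative), assuming the
``virQEMUDriverGetConfig()`` helper, which hands back a new reference to an
immutable config snapshot::

  static void
  exampleReadConfig(virQEMUDriver *driver)
  {
      /* No driver lock needed: the helper returns a referenced,
       * immutable snapshot of the configuration. */
      g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);

      /* Read-only use of the snapshot; it stays valid even if the
       * driver reloads its configuration concurrently. */
      VIR_DEBUG("libDir=%s", cfg->libDir);
  }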

``virDomainObj``

Will be locked and the reference counter will be increased after calling
any of the ``virDomainObjListFindBy{ID,Name,UUID}`` methods. The preferred way
of decrementing the reference counter and unlocking the domain is using the
``virDomainObjEndAPI()`` function.

The lock must be held when changing or reading any variable in the
``virDomainObj``.

This lock must not be held for anything which sleeps/waits (e.g. monitor
commands).
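
A short sketch of that lookup/cleanup pairing, using the list lookup
directly (the design patterns below use the ``qemuDomObjFromDomain()``
wrapper instead)::

  virDomainObj *vm = NULL;

  /* Returned locked and with an extra reference held */
  if (!(vm = virDomainObjListFindByUUID(driver->domains, dom->uuid)))
      return -1;

  /* ...read or update fields of vm while the lock is held... */

  /* Drops the reference and unlocks the object in one call */
  virDomainObjEndAPI(&vm);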

``qemuMonitorPrivatePtr`` job conditions

Since the ``virDomainObj`` lock must not be held during sleeps, the job
conditions provide additional protection for code making updates.

The QEMU driver uses three kinds of job conditions: asynchronous, agent
and normal.

The asynchronous job condition is used for long-running jobs (such as
migration) that consist of several monitor commands and for which it is
desirable to allow calling a limited set of other monitor commands
while such a job is running. This allows clients to, e.g., query
statistical data, cancel the job, or change parameters of the job.

The normal job condition is used by all other jobs to get exclusive
access to the monitor and also by every monitor command issued by an
asynchronous job. When acquiring the normal job condition, the job must
specify what kind of action it is about to take; this is checked
against the allowed set of jobs in case an asynchronous job is
running. If the job is incompatible with the current asynchronous job,
it needs to wait until the asynchronous job ends and try to acquire
the job again.

The agent job condition is used when a thread wishes to talk to the qemu
guest agent monitor. It is possible to acquire just an agent job
(``virDomainObjBeginAgentJob``) or just a normal job
(``virDomainObjBeginJob``), but not both at the same time. Holding an agent
job and a normal job would allow an unresponsive or malicious agent to block
normal libvirt APIs and potentially result in a denial of service. Which type
of job to grab depends on whether the caller wishes to communicate only with
the agent socket or only with the qemu monitor socket.

Immediately after acquiring the ``virDomainObj`` lock, any method
which intends to update state must acquire an asynchronous, normal or
agent job. The ``virDomainObj`` lock is released while blocking on
these condition variables. Once the job condition is acquired, a
method can safely release the ``virDomainObj`` lock whenever it hits
a piece of code which may sleep/wait, and re-acquire it after the
sleep/wait. Whenever an asynchronous job wants to talk to the
monitor, it needs to acquire a nested job (a special kind of normal
job) to obtain exclusive access to the monitor.

Since the ``virDomainObj`` lock was dropped while waiting for the
job condition, it is possible that the domain is no longer active
when the condition is finally obtained. The monitor lock is only
safe to grab after verifying that the domain is still active.
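
The job conditions are tracked by a small amount of bookkeeping kept in the
domain's private data and manipulated by the helper methods described below.
A simplified, illustrative sketch of that state (field names follow the
helper descriptions below; the real structure carries additional members
such as the owning thread and start time)::

  struct jobState {
      virCond cond;                   /* signalled whenever 'active' changes */
      virDomainJob active;            /* currently running normal job        */

      virCond asyncCond;              /* broadcast when the async job ends   */
      virDomainAsyncJob asyncJob;     /* currently running asynchronous job  */
      unsigned int mask;              /* normal jobs allowed while asyncJob
                                       * is running                          */

      virDomainAgentJob agentActive;  /* currently running agent job         */
  };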

``qemuMonitor`` mutex

Lock to be used when invoking any monitor command to ensure safety
with respect to any asynchronous events that may be dispatched from the
monitor. It should be acquired before running a command.

The job condition *MUST* be held before acquiring the monitor lock.

The ``virDomainObj`` lock *MUST* be held before acquiring the monitor
lock.

The ``virDomainObj`` lock *MUST* then be released when invoking the
monitor command.


Helper methods
--------------

To lock the ``virDomainObj``

``virObjectLock()``
  - Acquires the ``virDomainObj`` lock

``virObjectUnlock()``
  - Releases the ``virDomainObj`` lock


To acquire the normal job condition

``virDomainObjBeginJob()``
  - Waits until the job is compatible with current async job or no
    async job is running
  - Waits for ``job.cond`` condition ``job.active != 0`` using ``virDomainObj``
    mutex
  - Rechecks if the job is still compatible and repeats waiting if it
    isn't
  - Sets ``job.active`` to the job type

``virDomainObjEndJob()``
  - Sets ``job.active`` to 0
  - Signals on ``job.cond`` condition


To acquire the agent job condition

``virDomainObjBeginAgentJob()``
  - Waits until there is no other agent job set
  - Sets ``job.agentActive`` to the job type

``virDomainObjEndAgentJob()``
  - Sets ``job.agentActive`` to 0
  - Signals on ``job.cond`` condition


To acquire the asynchronous job condition

``virDomainObjBeginAsyncJob()``
  - Waits until no async job is running
  - Waits for ``job.cond`` condition ``job.active != 0`` using ``virDomainObj``
    mutex
  - Rechecks if any async job was started while waiting on ``job.cond``
    and repeats waiting in that case
  - Sets ``job.asyncJob`` to the asynchronous job type

``virDomainObjEndAsyncJob()``
  - Sets ``job.asyncJob`` to 0
  - Broadcasts on ``job.asyncCond`` condition


To acquire the QEMU monitor lock

``qemuDomainObjEnterMonitor()``
  - Acquires the ``qemuMonitorObj`` lock
  - Releases the ``virDomainObj`` lock

``qemuDomainObjExitMonitor()``
  - Releases the ``qemuMonitorObj`` lock
  - Acquires the ``virDomainObj`` lock

These functions must not be used by an asynchronous job.


To acquire the QEMU monitor lock as part of an asynchronous job

``qemuDomainObjEnterMonitorAsync()``
  - Validates that the right async job is still running
  - Acquires the ``qemuMonitorObj`` lock
  - Releases the ``virDomainObj`` lock
  - Validates that the VM is still active

``qemuDomainObjExitMonitor()``
  - Releases the ``qemuMonitorObj`` lock
  - Acquires the ``virDomainObj`` lock

These functions are for use inside an asynchronous job; the caller
must check for a return of -1 (VM not running, so nothing to exit).

Helper functions may also call this with ``VIR_ASYNC_JOB_NONE`` when
used from a sync job (such as when first starting a domain).
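
For instance, a helper shared between sync and async callers might look like
this sketch (the helper name is hypothetical, and the ``virDomainAsyncJob``
parameter type is assumed to match the ``VIR_ASYNC_JOB_*`` constants; the
caller passes its own async job type, or ``VIR_ASYNC_JOB_NONE`` when it
holds a normal job)::

  static int
  exampleSharedHelper(virQEMUDriver *driver,
                      virDomainObj *obj,
                      virDomainAsyncJob asyncJob)
  {
      qemuDomainObjPrivate *priv = obj->privateData;

      /* Fails with -1 only if the domain is no longer running or the
       * expected async job is no longer active. */
      if (qemuDomainObjEnterMonitorAsync(driver, obj, asyncJob) < 0)
          return -1;

      qemuMonitorXXXX(priv->mon);

      qemuDomainObjExitMonitor(obj);
      return 0;
  }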


To keep a domain alive while waiting on a remote command

``qemuDomainObjEnterRemote()``
  - Releases the ``virDomainObj`` lock

``qemuDomainObjExitRemote()``
  - Acquires the ``virDomainObj`` lock


Design patterns
---------------

* Accessing something directly to do with a ``virDomainObj``::

    virDomainObj *obj;

    obj = qemuDomObjFromDomain(dom);

    ...do work...

    virDomainObjEndAPI(&obj);

* Updating something directly to do with a ``virDomainObj``::

    virDomainObj *obj;

    obj = qemuDomObjFromDomain(dom);

    virDomainObjBeginJob(obj, VIR_JOB_TYPE);

    ...do work...

    virDomainObjEndJob(obj);
    virDomainObjEndAPI(&obj);

* Invoking a monitor command on a ``virDomainObj``::

    virDomainObj *obj;
    qemuDomainObjPrivate *priv;

    obj = qemuDomObjFromDomain(dom);
    priv = obj->privateData;

    virDomainObjBeginJob(obj, VIR_JOB_TYPE);

    ...do prep work...

    if (virDomainObjIsActive(obj)) {
        qemuDomainObjEnterMonitor(obj);
        qemuMonitorXXXX(priv->mon);
        qemuDomainObjExitMonitor(obj);
    }

    ...do final work...

    virDomainObjEndJob(obj);
    virDomainObjEndAPI(&obj);

* Invoking an agent command on a ``virDomainObj``::

    virDomainObj *obj;
    qemuAgent *agent;

    obj = qemuDomObjFromDomain(dom);

    virDomainObjBeginAgentJob(obj, VIR_AGENT_JOB_TYPE);

    ...do prep work...

    if (!qemuDomainAgentAvailable(obj, true))
        goto cleanup;

    agent = qemuDomainObjEnterAgent(obj);
    qemuAgentXXXX(agent, ..);
    qemuDomainObjExitAgent(obj, agent);

    ...do final work...

    virDomainObjEndAgentJob(obj);
    virDomainObjEndAPI(&obj);

* Running an asynchronous job::

    virDomainObj *obj;
    qemuDomainObjPrivate *priv;

    obj = qemuDomObjFromDomain(dom);

    virDomainObjBeginAsyncJob(obj, VIR_ASYNC_JOB_TYPE);
    qemuDomainObjSetAsyncJobMask(obj, allowedJobs);

    ...do prep work...

    if (qemuDomainObjEnterMonitorAsync(driver, obj,
                                       VIR_ASYNC_JOB_TYPE) < 0) {
        /* domain died in the meantime */
        goto error;
    }
    ...start qemu job...
    qemuDomainObjExitMonitor(obj);

    while (!finished) {
        if (qemuDomainObjEnterMonitorAsync(driver, obj,
                                           VIR_ASYNC_JOB_TYPE) < 0) {
            /* domain died in the meantime */
            goto error;
        }
        ...monitor job progress...
        qemuDomainObjExitMonitor(obj);

        virObjectUnlock(obj);
        sleep(aWhile);
        virObjectLock(obj);
    }

    ...do final work...

    virDomainObjEndAsyncJob(obj);
    virDomainObjEndAPI(&obj);

* Coordinating with a remote server for migration::

    virDomainObj *obj;
    qemuDomainObjPrivate *priv;

    obj = qemuDomObjFromDomain(dom);

    virDomainObjBeginAsyncJob(obj, VIR_ASYNC_JOB_TYPE);

    ...do prep work...

    if (virDomainObjIsActive(obj)) {
        qemuDomainObjEnterRemote(obj);
        ...communicate with remote...
        qemuDomainObjExitRemote(obj);
        /* domain may have been stopped while we were talking to remote */
        if (!virDomainObjIsActive(obj)) {
            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                           _("guest unexpectedly quit"));
        }
    }

    ...do final work...

    virDomainObjEndAsyncJob(obj);
    virDomainObjEndAPI(&obj);