Currently, we attempt to run a sync job and an async job at the same
time. This means that the monitor commands for the two jobs can be run
in any order.
In the function qemuDomainObjEnterMonitorInternal():
    if (priv->job.active == QEMU_JOB_NONE && priv->job.asyncJob) {
        if (qemuDomainObjBeginNestedJob(driver, obj) < 0)
We check whether the caller is an async job using priv->job.active and
priv->job.asyncJob. But when an async job is running, and a sync job is
also running at the time of the check, then priv->job.active is not
QEMU_JOB_NONE. So we cannot check whether the caller is an async job
inside qemuDomainObjEnterMonitorInternal(), and must instead put the
burden on the caller to tell us when an async command wants to do a
nested job.
Once the burden is on the caller, only async monitor enters need to
worry about whether the VM is still running; for sync monitor enters,
the internal return is always 0, so lots of ignore_value() calls can be
dropped.
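The resulting caller pattern looks roughly like this (a sketch only:
the enum value and qemuMonitorDoSomething() are illustrative
assumptions, and exact signatures may differ):

    /* An async job must now explicitly request a nested job when
     * entering the monitor; only this path can fail if the VM has
     * gone away in the meantime: */
    if (qemuDomainObjEnterMonitorAsync(driver, vm,
                                       QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
        goto cleanup;
    ret = qemuMonitorDoSomething(priv->mon);   /* hypothetical command */
    qemuDomainObjExitMonitorWithDriver(driver, vm);

    /* A sync monitor enter can no longer fail, so the old
     * ignore_value() wrappers go away: */
    qemuDomainObjEnterMonitor(driver, vm);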
* src/qemu/THREADS.txt: Reflect new rules.
* src/qemu/qemu_domain.h (qemuDomainObjEnterMonitorAsync): New
prototype.
* src/qemu/qemu_process.h (qemuProcessStartCPUs)
(qemuProcessStopCPUs): Add parameter.
* src/qemu/qemu_migration.h (qemuMigrationToFile): Likewise.
(qemuMigrationWaitForCompletion): Make static.
* src/qemu/qemu_domain.c (qemuDomainObjEnterMonitorInternal): Add
parameter.
(qemuDomainObjEnterMonitorAsync): New function.
(qemuDomainObjEnterMonitor, qemuDomainObjEnterMonitorWithDriver):
Update callers.
* src/qemu/qemu_driver.c (qemuDomainSaveInternal)
(qemudDomainCoreDump, doCoreDump, processWatchdogEvent)
(qemudDomainSuspend, qemudDomainResume, qemuDomainSaveImageStartVM)
(qemuDomainSnapshotCreateActive, qemuDomainRevertToSnapshot):
Likewise.
* src/qemu/qemu_process.c (qemuProcessStopCPUs)
(qemuProcessFakeReboot, qemuProcessRecoverMigration)
(qemuProcessRecoverJob, qemuProcessStart): Likewise.
* src/qemu/qemu_migration.c (qemuMigrationToFile)
(qemuMigrationWaitForCompletion, qemuMigrationUpdateJobStatus)
(qemuMigrationJobStart, qemuDomainMigrateGraphicsRelocate)
(doNativeMigrate, doTunnelMigrate, qemuMigrationPerformJob)
(qemuMigrationPerformPhase, qemuMigrationFinish)
(qemuMigrationConfirm): Likewise.
* src/qemu/qemu_hotplug.c: Drop unneeded ignore_value.
Given a PID, the QEMU driver reads /proc/$PID/cmdline and
/proc/$PID/environ to get the configuration. This is fed
into the ARGV->XML converter to build an XML configuration
for the process.

/proc/$PID/exe is resolved to identify the full command
binary path.
After checking for name/uuid uniqueness, an attempt is
made to connect to the monitor socket. If successful,
'info status' and 'info kvm' are issued to determine
whether the CPUs are running and whether KVM is enabled.
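A minimal standalone sketch of the first step, recovering argv from
the proc filesystem (illustrative only; the driver uses its own file
helpers rather than this code):

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    /* /proc/$PID/cmdline holds the original argv as a sequence of
     * NUL-terminated strings; walk the buffer NUL by NUL. */
    static int
    readCmdline(pid_t pid)
    {
        char path[64];
        char buf[8192];
        size_t len, i;
        FILE *fp;

        snprintf(path, sizeof(path), "/proc/%d/cmdline", (int) pid);
        if (!(fp = fopen(path, "r")))
            return -1;
        len = fread(buf, 1, sizeof(buf) - 1, fp);
        fclose(fp);
        buf[len] = '\0';

        for (i = 0; i < len; i += strlen(buf + i) + 1)
            printf("arg: %s\n", buf + i);
        return 0;
    }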
* src/qemu/qemu_driver.c: Implement virDomainQemuAttach
* src/qemu/qemu_process.h, src/qemu/qemu_process.c: Add
qemuProcessAttach to connect to the monitor of an
existing QEMU process
If an application is using libvirt + KVM as a piece of its
internal infrastructure to perform a specific task, it can
be desirable to guarantee the VM dies when the virConnectPtr
disconnects from libvirtd. This ensures the app can't leak
any VMs it was using. Adding VIR_DOMAIN_START_AUTOKILL as
a flag when starting guests enables this to be done.
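For example, a client could start a transient guest like this (a
sketch; 'xml' is assumed to hold a valid domain description):

    #include <libvirt/libvirt.h>

    int main(void)
    {
        const char *xml = "...domain XML...";      /* assumed */
        virConnectPtr conn = virConnectOpen("qemu:///system");
        virDomainPtr dom;

        if (!conn)
            return 1;

        /* The guest's lifetime is tied to this connection. */
        dom = virDomainCreateXML(conn, xml, VIR_DOMAIN_START_AUTOKILL);

        /* ... perform the task the guest exists for ... */

        if (dom)
            virDomainFree(dom);
        virConnectClose(conn);  /* a still-running guest is killed here */
        return 0;
    }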
* include/libvirt/libvirt.h.in: Add VIR_DOMAIN_START_AUTOKILL
* src/qemu/qemu_driver.c: Support automatic killing of guests
upon connection close
* tools/virsh.c: Add --autokill flag to 'start' and 'create'
commands
Sometimes it is useful to be able to automatically destroy a guest when
a connection is closed. For example, kill an incoming migration if
the client managing the migration dies. This introduces a map between
guest 'uuid' strings and virConnectPtr objects. When a connection is
closed, any associated guests are killed off.
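A self-contained sketch of the idea (libvirt's real implementation
uses its internal hash table and the qemuProcess APIs listed below;
destroyGuestByUuid() is a hypothetical stand-in):

    #include <stdlib.h>
    #include <string.h>

    /* Map a guest UUID string to the connection that registered it. */
    struct autoKillEntry {
        char uuid[37];                   /* 36 chars + trailing NUL */
        void *conn;                      /* stands in for virConnectPtr */
        struct autoKillEntry *next;
    };

    static struct autoKillEntry *autoKillList;

    static int
    autoKillAdd(const char *uuid, void *conn)
    {
        struct autoKillEntry *e = calloc(1, sizeof(*e));

        if (!e)
            return -1;
        strncpy(e->uuid, uuid, sizeof(e->uuid) - 1);
        e->conn = conn;
        e->next = autoKillList;
        autoKillList = e;
        return 0;
    }

    /* Run from connection close: kill every guest registered against
     * the connection that is going away. */
    static void
    autoKillRunOnClose(void *conn)
    {
        struct autoKillEntry **prev = &autoKillList, *e;

        while ((e = *prev)) {
            if (e->conn == conn) {
                /* destroyGuestByUuid(e->uuid);  hypothetical helper */
                *prev = e->next;
                free(e);
            } else {
                prev = &e->next;
            }
        }
    }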
* src/qemu/qemu_conf.h: Add autokill hash table to qemu driver
* src/qemu/qemu_process.c, src/qemu/qemu_process.h: Add APIs
for performing autokill of guests associated with a connection
* src/qemu/qemu_driver.c: Initialize autodestroy map
This is implemented only in drivers which use virDomainObj; drivers
that query the hypervisor for domain status need to be updated
separately in case their hypervisor supports this functionality.
The reason is also saved into the domain state XML, so if a domain is
not running (i.e., no state XML exists) the reason will be lost across
a libvirtd restart. I think this is an acceptable limitation.
Introduce a virProcessKill function that can be safely called
even when the job mutex is held. This allows virDomainDestroy
to kill any VM even if it is asleep in a monitor job. The PID
will die and the thread asleep on the monitor will then wake
up, releasing the job mutex.
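Such a helper might look roughly like this (approximate; the timeout
and signal sequence are assumptions, not the actual implementation):

    #include <errno.h>
    #include <signal.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Signals the process directly and touches no domain or job locks,
     * so it is safe to call while another thread holds the job mutex
     * inside the monitor. */
    static int
    virProcessKill(pid_t pid)
    {
        int i;

        if (kill(pid, SIGTERM) < 0 && errno != ESRCH)
            return -1;

        /* Give the process a chance to exit cleanly, then force it. */
        for (i = 0; i < 15; i++) {
            if (kill(pid, 0) < 0 && errno == ESRCH)
                return 0;                /* already gone */
            usleep(200 * 1000);
        }
        return kill(pid, SIGKILL);
    }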
* src/qemu/qemu_driver.c: Kill process before using qemuProcessStop
to ensure job is released
* src/qemu/qemu_process.c: Add virProcessKill for killing off
QEMU processes
Move the qemudStartVMDaemon and qemudShutdownVMDaemon
methods into a separate file, renaming them to
qemuProcessStart and qemuProcessStop. All helper methods
called by these are also moved & renamed to match.
* src/Makefile.am: Add qemu_process.c/.h
* src/qemu/qemu_command.c: Add qemuDomainAssignPCIAddresses
* src/qemu/qemu_command.h: Add VNC port min/max
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add
domain event queue helpers
* src/qemu/qemu_driver.c, src/qemu/qemu_driver.h: Remove
all QEMU process startup/shutdown functions
* src/qemu/qemu_process.c, src/qemu/qemu_process.h: Add
all QEMU process startup/shutdown functions