This patch is mostly code motion - moving some functions out
of qemu_driver and into qemu_domain so they can be reused by
multiple qemu_* files (since qemu_driver.h must not grow).
It also adds a new helper function, qemuDomainRemoveInactive,
which will be used in the next patch.
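A sketch of the helper's likely shape (illustrative, not a quote of
the final code):

    /* The caller must hold the lock on 'vm'. */
    void
    qemuDomainRemoveInactive(struct qemud_driver *driver,
                             virDomainObjPtr vm)
    {
        /* discard any snapshot metadata before dropping the domain */
        if (qemuDomainSnapshotDiscardAllMetadata(driver, vm) < 0)
            VIR_WARN("unable to remove all snapshots for domain %s",
                     vm->def->name);
        virDomainRemoveInactive(&driver->domains, vm);
    }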
* src/qemu/qemu_domain.h (qemuFindQemuImgBinary)
(qemuDomainSnapshotWriteMetadata, qemuDomainSnapshotForEachQcow2)
(qemuDomainSnapshotDiscard, qemuDomainSnapshotDiscardAll)
(qemuDomainRemoveInactive): New prototypes.
(struct qemu_snap_remove): New struct.
* src/qemu/qemu_domain.c (qemuDomainRemoveInactive)
(qemuDomainSnapshotDiscardAllMetadata): New functions.
(qemuFindQemuImgBinary, qemuDomainSnapshotWriteMetadata)
(qemuDomainSnapshotForEachQcow2, qemuDomainSnapshotDiscard)
(qemuDomainSnapshotDiscardAll): Move here...
* src/qemu/qemu_driver.c (qemuFindQemuImgBinary)
(qemuDomainSnapshotWriteMetadata, qemuDomainSnapshotForEachQcow2)
(qemuDomainSnapshotDiscard, qemuDomainSnapshotDiscardAll): ...from
here.
(qemuDomainUndefineFlags): Update caller.
* src/conf/domain_conf.c (virDomainRemoveInactive): Doc fixes.
gcc warns when building libvirt 0.9.5 on a 32-bit machine:
qemu/qemu_migration.c: In function 'qemuMigrationToFile':
qemu/qemu_migration.c:2727:38: error: large integer implicitly truncated to unsigned type [-Woverflow]
* src/qemu/qemu_domain.h (QEMU_DOMAIN_FILE_MIG_BANDWIDTH_MAX): Cap
to long when building for a 32-bit platform.
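A minimal sketch of such a conditional cap (the exact macro body is
an assumption, not a quote of the fix):

    # include <limits.h>
    # if ULONG_MAX == 4294967295UL
    /* 32-bit platform: a long cannot hold the 64-bit maximum */
    #  define QEMU_DOMAIN_FILE_MIG_BANDWIDTH_MAX (LONG_MAX / (1024 * 1024))
    # else
    #  define QEMU_DOMAIN_FILE_MIG_BANDWIDTH_MAX (INT64_MAX / (1024 * 1024))
    # endif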
The maximum bandwidth that can be consumed when migrating a domain
is better classified as an operational parameter of the domain than
as a configuration parameter. As such, store this parameter in the
qemuDomainObjPrivate structure.
This patch adds an optional limit on the size of the BeginJob queue.
When the limit is active, any attempt to queue a job beyond that
level will fail. To enable this feature, assign the desired value to
the max_queued variable in qemu.conf; setting it to 0 turns the limit
off.
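For example, in qemu.conf (the value is illustrative):

    # allow at most 10 pending BeginJob waiters; 0 disables the limit
    max_queued = 10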
Currently, we attempt to run a sync job and an async job at the same
time, which means that the monitor commands for the two jobs can run
in any order.
In the function qemuDomainObjEnterMonitorInternal():
    if (priv->job.active == QEMU_JOB_NONE && priv->job.asyncJob) {
        if (qemuDomainObjBeginNestedJob(driver, obj) < 0)
We check whether the caller is an async job via priv->job.active and
priv->job.asyncJob. But when an async job is running and a sync job
is also running at the time of the check, priv->job.active is not
QEMU_JOB_NONE. So we cannot check whether the caller is an async job
inside qemuDomainObjEnterMonitorInternal(), and must instead put the
burden on the caller to tell us when an async command wants to do a
nested job.
Once the burden is on the caller, only async monitor enters need to
worry about whether the VM is still running; for sync monitor enters,
the internal return is always 0, so lots of ignore_value calls can be
dropped.
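The new entry point's likely shape (a sketch; the exact signature is
inferred from the file entries below):

    /* The caller passes the async job it owns (or QEMU_ASYNC_JOB_NONE);
     * only async callers may trigger a nested job internally. */
    int qemuDomainObjEnterMonitorAsync(struct qemud_driver *driver,
                                       virDomainObjPtr obj,
                                       enum qemuDomainAsyncJob asyncJob)
        ATTRIBUTE_RETURN_CHECK;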
* src/qemu/THREADS.txt: Reflect new rules.
* src/qemu/qemu_domain.h (qemuDomainObjEnterMonitorAsync): New
prototype.
* src/qemu/qemu_process.h (qemuProcessStartCPUs)
(qemuProcessStopCPUs): Add parameter.
* src/qemu/qemu_migration.h (qemuMigrationToFile): Likewise.
(qemuMigrationWaitForCompletion): Make static.
* src/qemu/qemu_domain.c (qemuDomainObjEnterMonitorInternal): Add
parameter.
(qemuDomainObjEnterMonitorAsync): New function.
(qemuDomainObjEnterMonitor, qemuDomainObjEnterMonitorWithDriver):
Update callers.
* src/qemu/qemu_driver.c (qemuDomainSaveInternal)
(qemudDomainCoreDump, doCoreDump, processWatchdogEvent)
(qemudDomainSuspend, qemudDomainResume, qemuDomainSaveImageStartVM)
(qemuDomainSnapshotCreateActive, qemuDomainRevertToSnapshot):
Likewise.
* src/qemu/qemu_process.c (qemuProcessStopCPUs)
(qemuProcessFakeReboot, qemuProcessRecoverMigration)
(qemuProcessRecoverJob, qemuProcessStart): Likewise.
* src/qemu/qemu_migration.c (qemuMigrationToFile)
(qemuMigrationWaitForCompletion, qemuMigrationUpdateJobStatus)
(qemuMigrationJobStart, qemuDomainMigrateGraphicsRelocate)
(doNativeMigrate, doTunnelMigrate, qemuMigrationPerformJob)
(qemuMigrationPerformPhase, qemuMigrationFinish)
(qemuMigrationConfirm): Likewise.
* src/qemu/qemu_hotplug.c: Drop unneeded ignore_value.
qemuMigrationUpdateJobStatus (called in a loop by migration
and save tasks) uses qemuDomainObjEnterMonitorWithDriver;
however, that function ended up starting a nested job without
releasing the driver.
Since no one else is making nested calls, we can inline the
internal functions to properly track driver_locked.
* src/qemu/qemu_domain.h (qemuDomainObjBeginNestedJob)
(qemuDomainObjBeginNestedJobWithDriver)
(qemuDomainObjEndNestedJob): Drop unused prototypes.
* src/qemu/qemu_domain.c (qemuDomainObjEnterMonitorInternal):
Reflect driver lock to nested job.
(qemuDomainObjBeginNestedJob)
(qemuDomainObjBeginNestedJobWithDriver)
(qemuDomainObjEndNestedJob): Drop unused functions.
The compiler might optimize based on our declaration that something
is unused. Putting that declaration in the header risks getting out
of sync with the actual implementation, so it belongs only in the .c
files. We were mostly compliant, and a new syntax check will help us
in the future.
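A minimal illustration of the rule (names hypothetical):

    /* foo.h: headers only declare the API, no ATTRIBUTE_UNUSED */
    int fooCompute(int used, int spare);

    /* foo.c: the attribute stays with the implementation */
    int fooCompute(int used, int spare ATTRIBUTE_UNUSED)
    {
        return used * 2;
    }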
* cfg.mk (sc_avoid_attribute_unused_in_header): New syntax check.
* src/nodeinfo.h (nodeGetCPUStats, nodeGetMemoryStats): Delete
attribute already present in .c file.
* src/qemu/qemu_domain.h (qemuDomainEventFlush): Likewise.
* src/util/virterror_internal.h (virReportErrorHelper): Parameters
are actually used by .c file.
* src/xenxs/xen_sxpr.h (xenFormatSxprDisk): Adjust prototype.
* src/xenxs/xen_sxpr.c (xenFormatSxprDisk): Delete unused argument.
(xenFormatSxpr): Adjust caller.
* src/xen/xend_internal.c (xenDaemonAttachDeviceFlags)
(xenDaemonUpdateDeviceFlags): Likewise.
Suggested by Daniel Veillard.
Asynchronous jobs may take a long time to finish and may consist of
several phases which we need to know about to help with
recovery/rollback after libvirtd restarts.
Query commands are safe to be called during long-running jobs (such
as migration). This patch makes them all work without the need to
special-case every single one of them.
The patch introduces a new job.asyncCond condition and an associated
job.asyncJob field, which are dedicated to asynchronous (from the
qemu monitor point of view) jobs that can take an arbitrarily long
time to finish while the qemu monitor is still usable for other
commands.
The existing job.active (and the job.cond condition) is used for all
other synchronous jobs (including the commands run during an async
job).
The locking schema is changed to use these two conditions. While an
asyncJob is active, only a restricted set of synchronous jobs is
allowed (the set can differ according to the particular asyncJob), so
any method that communicates with the qemu monitor needs to check
whether it is allowed to be executed during the current asyncJob (if
any). Once the check passes, the method needs to acquire job.cond as
usual to ensure no other command is running. Since the domain object
lock is released during that time, an asyncJob could have started in
the meantime, so the method needs to recheck the first condition.
Then, normal jobs set job.active and asynchronous jobs set
job.asyncJob and optionally change the list of allowed job groups.
Since asynchronous jobs only set job.asyncJob, other allowed commands
can still be run while the domain object is unlocked (when
communicating with a remote libvirtd or sleeping). To protect its own
internal synchronous commands, the asynchronous job needs to start a
special nested job before entering the qemu monitor. The nested job
doesn't check asyncJob; it only acquires job.cond and sets job.active
to block other jobs.
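A condensed sketch of that acquisition pattern (simplified; field and
helper names like job.mask and JOB_MASK_ALLOWED() are assumptions):

    retry:
        /* wait until this job type is allowed by any active async job */
        while (priv->job.asyncJob != QEMU_ASYNC_JOB_NONE &&
               !JOB_MASK_ALLOWED(priv->job.mask, job)) {
            if (virCondWait(&priv->job.asyncCond, &obj->lock) < 0)
                goto error;
        }
        /* wait until no other synchronous job is running */
        while (priv->job.active != QEMU_JOB_NONE) {
            if (virCondWait(&priv->job.cond, &obj->lock) < 0)
                goto error;
        }
        /* virCondWait drops the domain object lock, so an asyncJob may
         * have started in the meantime; recheck before claiming the job */
        if (priv->job.asyncJob != QEMU_ASYNC_JOB_NONE &&
            !JOB_MASK_ALLOWED(priv->job.mask, job))
            goto retry;
        priv->job.active = job;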
Avoid re-formatting the pidfile path every time we need it. Create
it once when starting the guest, and preserve it until the guest is
shut down.
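The new util helper presumably looks like this (signature assumed
from the file entries below):

    /* Read a pid from the file at 'path' into '*pid'; returns 0 on
     * success (error convention assumed). */
    int virFileReadPidPath(const char *path, pid_t *pid);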
* src/libvirt_private.syms, src/util/util.c,
src/util/util.h: Add virFileReadPidPath
* src/qemu/qemu_domain.h: Add pidfile field
* src/qemu/qemu_process.c: Store pidfile path in qemuDomainObjPrivate
The drivers were accepting domain configs without checking whether
those were actually meant for them. For example, the LXC driver
happily accepted configs with type QEMU.
Add a check for the expected domain types to the virDomainDefParse*
functions.
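A sketch of such a check (the parameter name and error message are
assumptions):

    /* each parser caller passes a bitmask of the domain types it
     * supports, e.g. (1 << VIR_DOMAIN_VIRT_QEMU) | (1 << VIR_DOMAIN_VIRT_KVM) */
    if (!(expectedVirtTypes & (1 << def->virtType))) {
        virDomainReportError(VIR_ERR_INTERNAL_ERROR,
                             _("unexpected domain type %s"),
                             virDomainVirtTypeToString(def->virtType));
        goto error;
    }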
For a controlled shutdown we issue a 'system_powerdown' command to
the QEMU monitor. This triggers an ACPI event which (most) guest OSes
wire up to a controlled shutdown. There is no equivalent ACPI event
to trigger a controlled reboot. This patch attempts to fake a reboot.
- In qemuDomainObjPrivatePtr we have a bool fakeReboot
flag.
- The virDomainReboot method sets this flag and then
triggers a normal 'system_powerdown'.
- The QEMU process is started with '-no-shutdown'
so that the guest CPUs pause when it powers off the
guest
- When we receive the 'POWEROFF' event from the QEMU JSON
  monitor, if fakeReboot is not set we invoke qemuProcessKill
  and shutdown continues normally
- If fakeReboot was set, we spawn a background thread
  which issues 'system_reset' to perform a warm reboot
  of the guest hardware, then issues 'cont' to start the
  CPUs again (see the sketch below)
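A simplified sketch of that background thread (enter/exit-monitor
calls and error handling elided; not the verbatim implementation):

    static void
    qemuProcessFakeReboot(void *opaque)
    {
        virDomainObjPtr vm = opaque;
        qemuDomainObjPrivatePtr priv = vm->privateData;

        /* warm reset of the guest hardware */
        qemuMonitorSystemReset(priv->mon);
        /* 'cont': restart the CPUs that paused on poweroff */
        qemuMonitorStartCPUs(priv->mon, NULL);
    }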
* src/qemu/qemu_command.c: Add -no-shutdown flag if
we have JSON support
* src/qemu/qemu_domain.h: Add 'fakeReboot' flag to
qemuDomainObjPrivate struct
* src/qemu/qemu_driver.c: Fake reboot using the
system_powerdown command if JSON support is available
* src/qemu/qemu_monitor.c, src/qemu/qemu_monitor.h,
src/qemu/qemu_monitor_json.c, src/qemu/qemu_monitor_json.h,
src/qemu/qemu_monitor_text.c, src/qemu/qemu_monitor_text.h: Add
binding for system_reset command
* src/qemu/qemu_process.c: Reset the guest & start CPUs if
fakeReboot is set
The QEMU driver integrates with the lock manager infrastructure in a
number of key places:
* During startup, a lock is acquired in between the fork & exec
* During startup, the libvirtd process acquires a lock before
setting file labelling
* During shutdown, the libvirtd process acquires a lock
before restoring file labelling
* During hotplug, unplug & media change the libvirtd process
holds a lock while setting/restoring labels
The main content lock is only ever held by the QEMU child process,
or libvirtd during VM shutdown. The rest of the operations only
require libvirtd to hold the metadata locks, relying on the active
QEMU still holding the content lock.
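The new qemu.conf knob might look like this (the parameter name and
value here are assumptions):

    # name of the lock manager plugin to use for VM disk content,
    # e.g. "sanlock"; the default is a no-op implementation
    lock_manager = "sanlock"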
* src/qemu/qemu_conf.c, src/qemu/qemu_conf.h,
src/qemu/libvirtd_qemu.aug, src/qemu/test_libvirtd_qemu.aug:
Add config parameter for configuring lock managers
* src/qemu/qemu_driver.c: Add calls to the lock manager
Update the qemuDomainMigrateBegin method so that it accepts
an optional incoming XML document. This will be validated
for ABI compatibility against the current domain config,
and if this check passes, will be passed back out for use
by the qemuDomainMigratePrepare method on the target.
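A sketch of the Begin-side validation (glue code assumed; argument
lists abbreviated):

    if (xmlin) {
        virDomainDefPtr def = virDomainDefParseString(driver->caps, xmlin,
                                                      VIR_DOMAIN_XML_INACTIVE);
        if (!def)
            goto cleanup;
        if (!virDomainDefCheckABIStability(vm->def, def)) {
            virDomainDefFree(def);
            goto cleanup;
        }
        /* the validated XML is formatted back out for Prepare */
    }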
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h,
src/qemu/qemu_migration.c: Allow custom XML to be passed
Originally, most libvirt domain-specific calls would block during a
migration. This patch implements a new mechanism that allows specific
calls (blkstat/blkinfo) to be executed in that situation. In the long
term it'd be desirable to have a more general solution that marks
further APIs as migration-safe without needing special-case code.
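A sketch of the handshake from the API side (field and flag names are
assumptions based on the file entries below):

    /* raise the signal, then wait for the migration loop to service
     * the request and clear the flag */
    priv->job.signals |= QEMU_JOB_SIGNAL_BLKINFO;
    while (priv->job.signals & QEMU_JOB_SIGNAL_BLKINFO) {
        if (virCondWait(&priv->job.signalCond, &vm->lock) < 0)
            break;
    }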
* src/qemu/qemu_migration.c: add some additional job signal
flags for doing blkstat/blkinfo during a migration
* src/qemu/qemu_domain.c: add a condition variable that can be
used to efficiently wait for the migration code to clear the
signal flag
* src/qemu/qemu_driver.c: execute blkstat/blkinfo using the
job signal flags during migration
As well as taint warnings going to the main libvirt log,
add taint warnings to the per-domain logfile:
Domain id=3 is tainted: high-privileges
Domain id=3 is tainted: disk-probing
Domain id=3 is tainted: shell-scripts
Domain id=3 is tainted: custom-monitor
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Enhance
qemuDomainTaint to also log to the domain logfile
* src/qemu/qemu_driver.c: Pass -1 for logFD to taint methods to
auto-append to logfile
* src/qemu/qemu_process.c: Pass open logFD at startup for taint
methods
The qemuDomainAppendLog method allows writing a formatted string
to the end of the domain logfile, optionally opening it if needed.
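Its likely shape (signature assumed):

    /* logFD of -1 (assumed convention) means open the logfile on demand */
    void qemuDomainAppendLog(struct qemud_driver *driver,
                             virDomainObjPtr vm,
                             int logFD,
                             const char *fmt, ...)
        ATTRIBUTE_FMT_PRINTF(4, 5);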
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add
qemuDomainAppendLog
Move the qemuProcessLogReadFD and qemuProcessLogFD methods
into qemu_domain.c, renaming them to qemuDomainCreateLog
and qemuDomainOpenLog.
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add
qemuDomainCreateLog and qemuDomainOpenLog.
* src/qemu/qemu_process.c: Remove qemuProcessLogFD
and qemuProcessLogReadFD
Wire up logging of VM tainting to the QEMU driver
- If running QEMU as root user/group or without capabilities
being cleared
- If passing custom QEMU command line args
- If issuing custom QEMU monitor commands
- If using a network interface config with an associated
shell script
- If using a disk config relying on format probing
The warnings appear per-VM in the main libvirtd logs:
11:56:17.571: 10832: warning : qemuDomainObjTaint:712 : Domain id=1 name='l2' uuid=c7a3edbd-edaf-9455-926a-d65c16db1802 is tainted: high-privileges
11:56:17.571: 10832: warning : qemuDomainObjTaint:712 : Domain id=1 name='l2' uuid=c7a3edbd-edaf-9455-926a-d65c16db1802 is tainted: disk-probing
The taint flags are reset when the VM is stopped.
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Helper APIs
for logging taint warnings
* src/qemu/qemu_driver.c: Log tainting with custom QEMU monitor
commands and disk/net hotplug with unsupported configs
* src/qemu/qemu_process.c: Log tainting at startup based on
unsupported configs
To cope with the QEMU binary being changed while a VM is running, it
is necessary to persist the original qemu capabilities at the time
the VM is booted.
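The status XML addition might look roughly like this (element and
flag names are assumptions):

    <qemuCaps>
      <flag name='vnc-colon'/>
      <flag name='no-reboot'/>
    </qemuCaps>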
* src/qemu/qemu_capabilities.c, src/qemu/qemu_capabilities.h: Add
an enum for a string rep of every capability
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Support for
storing capabilities in the domain status XML
* src/qemu/qemu_process.c: Populate & free QEMU capabilities at
domain startup
Enhance the QEMU migration monitoring loop, so that it can get
a signal to change migration speed on the fly
* src/qemu/qemu_domain.h: Add signal for changing speed on the fly
* src/qemu/qemu_driver.c: Wire up virDomainMigrateSetSpeed driver
* src/qemu/qemu_migration.c: Support signal for changing speed
The introduction of the v3 migration protocol, along with
support for migration cookies, will significantly expand
the size of the migration code. Move it all to a separate
file to make it more manageable
The functions are not moved 100%. The API entry points
remain in the main QEMU driver, but once the public
virDomainPtr is resolved to the internal virDomainObjPtr,
all following code is moved.
This will allow the new v3 API entry points to call into the
same shared internal migration functions
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add
qemuDomainFormatXML helper method
* src/qemu/qemu_driver.c: Remove all migration code
* src/qemu/qemu_migration.c, src/qemu/qemu_migration.h: Add
all migration code.
Move the qemudStartVMDaemon and qemudShutdownVMDaemon
methods into a separate file, renaming them to
qemuProcessStart, qemuProcessStop. All helper methods
called by these are also moved & renamed to match
* src/Makefile.am: Add qemu_process.c/.h
* src/qemu/qemu_command.c: Add qemuDomainAssignPCIAddresses
* src/qemu/qemu_command.h: Add VNC port min/max
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add
domain event queue helpers
* src/qemu/qemu_driver.c, src/qemu/qemu_driver.h: Remove
all QEMU process startup/shutdown functions
* src/qemu/qemu_process.c, src/qemu/qemu_process.h: Add
all QEMU process startup/shutdown functions
To allow the APIs to be used from separate files, move the domain
lock / job helper code into qemu_domain.c
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add domain lock
/ job code
* src/qemu/qemu_driver.c: Remove domain lock / job code
Move the code for handling the QEMU virDomainObjPtr private
data, and custom XML namespace into a separate file
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: New file
for private data & namespace code
* src/qemu/qemu_driver.c, src/qemu/qemu_driver.h: Remove
private data & namespace code
* src/qemu/qemu_driver.h, src/qemu/qemu_command.h: Update
includes
* src/Makefile.am: Add src/qemu/qemu_domain.c