The previous patch removed all snapshots, but not the directory
where the snapshots lived, which is still a form of stale data.
* src/qemu/qemu_domain.c (qemuDomainRemoveInactive): Wipe any
snapshot directory.
This patch is mostly code motion - moving some functions out
of qemu_driver and into qemu_domain so they can be reused by
multiple qemu_* files (since qemu_driver.h must not grow).
It also adds a new helper function, qemuDomainRemoveInactive,
which will be used in the next patch.
* src/qemu/qemu_domain.h (qemuFindQemuImgBinary)
(qemuDomainSnapshotWriteMetadata, qemuDomainSnapshotForEachQcow2)
(qemuDomainSnapshotDiscard, qemuDomainSnapshotDiscardAll)
(qemuDomainRemoveInactive): New prototypes.
(struct qemu_snap_remove): New struct.
* src/qemu/qemu_domain.c (qemuDomainRemoveInactive)
(qemuDomainSnapshotDiscardAllMetadata): New functions.
(qemuFindQemuImgBinary, qemuDomainSnapshotWriteMetadata)
(qemuDomainSnapshotForEachQcow2, qemuDomainSnapshotDiscard)
(qemuDomainSnapshotDiscardAll): Move here...
* src/qemu/qemu_driver.c (qemuFindQemuImgBinary)
(qemuDomainSnapshotWriteMetadata, qemuDomainSnapshotForEachQcow2)
(qemuDomainSnapshotDiscard, qemuDomainSnapshotDiscardAll): ...from
here.
(qemuDomainUndefineFlags): Update caller.
* src/conf/domain_conf.c (virDomainRemoveInactive): Doc fixes.
The maximum bandwidth that can be consumed when migrating a domain
is better classified as an operational parameter than as a configuration
parameter of the domain. As such, store this parameter in the
qemuDomainObjPrivate structure.
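A rough sketch of where the value ends up (the field name and units are
assumptions, not necessarily what qemu_domain.h uses):
    /* Sketch only: runtime-only migration state lives in the per-domain
     * private data rather than in the persistent domain definition. */
    struct qemuDomainObjPrivateSketch {
        /* ... other runtime state: jobs, monitor, capabilities ... */
        unsigned long migMaxBandwidth;  /* current migration bandwidth cap */
    };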
This patch adds an optional limit on the BeginJob queue size. When the
limit is set, any attempt to queue a job beyond it fails. To enable the
feature, assign the desired value to the max_queued variable in qemu.conf;
setting it to 0 turns it off.
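A minimal qemu.conf sketch (the value 10 is only an example):
    # Maximum number of jobs allowed to queue up waiting for a domain's
    # job lock; further BeginJob attempts fail once the limit is reached.
    # Setting it to 0 turns the limit off.
    max_queued = 10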
Currently, we attempt to run a sync job and an async job at the same time,
which means the monitor commands for the two jobs can run in any order.
In the function qemuDomainObjEnterMonitorInternal():
if (priv->job.active == QEMU_JOB_NONE && priv->job.asyncJob) {
if (qemuDomainObjBeginNestedJob(driver, obj) < 0)
We check whether the caller is an async job via priv->job.active and
priv->job.asyncJob. But when an async job is running, and a sync job is
also running at the time of the check, then priv->job.active is not
QEMU_JOB_NONE. So we cannot check whether the caller is an async job
in the function qemuDomainObjEnterMonitorInternal(), and must instead
put the burden on the caller to tell us when an async command wants
to do a nested job.
Once the burden is on the caller, only async monitor enters need
to worry about whether the VM is still running; for sync monitor enters,
the internal return is always 0, so lots of ignore_value calls can be dropped.
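As an illustration, a minimal sketch of an async-job caller after this
change (the helper name is taken from the prototypes below, while the
wrapper function, enum value and exact argument list are assumptions):
    /* Sketch: the async job identifies itself so the helper can open a
     * nested job before touching the monitor; the call can fail if the
     * domain went away while waiting, so the result must be checked. */
    static int doSomethingOnMonitor(struct qemud_driver *driver,
                                    virDomainObjPtr vm)
    {
        qemuDomainObjPrivatePtr priv = vm->privateData;
        int ret = -1;
        if (qemuDomainObjEnterMonitorAsync(driver, vm,
                                           QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
            goto cleanup;                  /* nested job refused or VM gone */
        ret = qemuMonitorSystemPowerdown(priv->mon); /* any monitor command */
        qemuDomainObjExitMonitorWithDriver(driver, vm);
    cleanup:
        return ret;
    }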
* src/qemu/THREADS.txt: Reflect new rules.
* src/qemu/qemu_domain.h (qemuDomainObjEnterMonitorAsync): New
prototype.
* src/qemu/qemu_process.h (qemuProcessStartCPUs)
(qemuProcessStopCPUs): Add parameter.
* src/qemu/qemu_migration.h (qemuMigrationToFile): Likewise.
(qemuMigrationWaitForCompletion): Make static.
* src/qemu/qemu_domain.c (qemuDomainObjEnterMonitorInternal): Add
parameter.
(qemuDomainObjEnterMonitorAsync): New function.
(qemuDomainObjEnterMonitor, qemuDomainObjEnterMonitorWithDriver):
Update callers.
* src/qemu/qemu_driver.c (qemuDomainSaveInternal)
(qemudDomainCoreDump, doCoreDump, processWatchdogEvent)
(qemudDomainSuspend, qemudDomainResume, qemuDomainSaveImageStartVM)
(qemuDomainSnapshotCreateActive, qemuDomainRevertToSnapshot):
Likewise.
* src/qemu/qemu_process.c (qemuProcessStopCPUs)
(qemuProcessFakeReboot, qemuProcessRecoverMigration)
(qemuProcessRecoverJob, qemuProcessStart): Likewise.
* src/qemu/qemu_migration.c (qemuMigrationToFile)
(qemuMigrationWaitForCompletion, qemuMigrationUpdateJobStatus)
(qemuMigrationJobStart, qemuDomainMigrateGraphicsRelocate)
(doNativeMigrate, doTunnelMigrate, qemuMigrationPerformJob)
(qemuMigrationPerformPhase, qemuMigrationFinish)
(qemuMigrationConfirm): Likewise.
* src/qemu/qemu_hotplug.c: Drop unneeded ignore_value.
qemuMigrationUpdateJobStatus (called in a loop by migration
and save tasks) uses qemuDomainObjEnterMonitorWithDriver;
however, that function ended up starting a nested job without
releasing the driver lock.
Since no one else is making nested calls, we can inline the
internal functions to properly track driver_locked.
* src/qemu/qemu_domain.h (qemuDomainObjBeginNestedJob)
(qemuDomainObjBeginNestedJobWithDriver)
(qemuDomainObjEndNestedJob): Drop unused prototypes.
* src/qemu/qemu_domain.c (qemuDomainObjEnterMonitorInternal):
Reflect driver lock to nested job.
(qemuDomainObjBeginNestedJob)
(qemuDomainObjBeginNestedJobWithDriver)
(qemuDomainObjEndNestedJob): Drop unused functions.
Asynchronous jobs may take a long time to finish and may consist of
several phases which we need to know about to help with recovery/rollback
after libvirtd restarts.
Query commands are safe to be called during long running jobs (such as
migration). This patch makes them all work without the need to
special-case every single one of them.
The patch introduces a new job.asyncCond condition and an associated
job.asyncJob which are dedicated to asynchronous (from the qemu monitor
point of view) jobs that can take an arbitrarily long time to finish while
the qemu monitor is still usable for other commands.
The existing job.active (and the job.cond condition) is used for all other
synchronous jobs (including the commands run during an async job).
The locking schema is changed to use these two conditions. While an asyncJob
is active, only a limited set of synchronous jobs is allowed (the set can
differ for each particular asyncJob), so any method that communicates with
the qemu monitor needs to check whether it is allowed to be executed during
the current asyncJob (if any). Once the check passes, the method acquires
job.cond as usual to ensure no other command is running. Since the domain
object lock is released during that time, an asyncJob could have been
started in the meantime, so the method needs to recheck the first condition.
Then, normal jobs set job.active and asynchronous jobs set job.asyncJob and
optionally change the list of allowed job groups.
Since asynchronous jobs only set job.asyncJob, other allowed commands
can still be run while the domain object is unlocked (when communicating
with a remote libvirtd or sleeping). To protect its own internal synchronous
commands, the asynchronous job needs to start a special nested job
before entering the qemu monitor. The nested job doesn't check asyncJob; it
only acquires job.cond and sets job.active to block other jobs.
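A rough sketch of the per-domain job state this implies (only job.cond,
job.active, job.asyncCond and job.asyncJob are named by the patch; the
struct name and the mask field are assumptions):
    struct qemuDomainJobObj {
        virCond cond;                      /* coordinates normal (sync) jobs */
        enum qemuDomainJob active;         /* currently running sync job     */
        virCond asyncCond;                 /* coordinates async jobs         */
        enum qemuDomainAsyncJob asyncJob;  /* currently running async job    */
        unsigned long long mask;           /* sync jobs allowed while the
                                              async job runs (assumed name)  */
    };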
EnterMonitor and ExitMonitor methods are very similar to their
*WithDriver variants; consolidate them into EnterMonitorInternal and
ExitMonitorInternal to avoid (mainly future) code duplication.
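The resulting shape is roughly the following sketch (signatures are
assumptions; the *Internal naming and the driver_locked flag come from
this patch series):
    /* Sketch of the consolidation: both public variants forward to one
     * helper that only differs in whether the driver lock is held. */
    static void
    qemuDomainObjEnterMonitorInternal(struct qemud_driver *driver,
                                      bool driver_locked,
                                      virDomainObjPtr obj)
    {
        /* shared body: enter the monitor and drop the domain object lock,
           plus the driver lock when driver_locked is true */
    }
    void qemuDomainObjEnterMonitor(struct qemud_driver *driver,
                                   virDomainObjPtr obj)
    {
        qemuDomainObjEnterMonitorInternal(driver, false, obj);
    }
    void qemuDomainObjEnterMonitorWithDriver(struct qemud_driver *driver,
                                             virDomainObjPtr obj)
    {
        qemuDomainObjEnterMonitorInternal(driver, true, obj);
    }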
Coverity warns if the majority of callers check a function for
errors but a few don't; in qemu_audit and qemu_domain, the
choice not to check for failures was safe. In qemu_command, a
failure to generate a uuid can only occur on a bad pointer.
* src/qemu/qemu_audit.c (qemuAuditCgroup): Ignore failure to get
cgroup controller.
* src/qemu/qemu_domain.c (qemuDomainObjEnterMonitor)
(qemuDomainObjEnterMonitorWithDriver): Ignore failure to get
timestamp.
* src/qemu/qemu_command.c (qemuParseCommandLine): Check for error.
The code emitting taint warnings mistakenly assumed that guests
run from the QEMU session driver were tainted
for having high privileges. This is of course nonsense,
since the session driver is always unprivileged.
* src/qemu/qemu_domain.c: Don't warn for high privileges in
non-privileged QEMU
Since virEventRegisterDefaultImpl is now a public API, callers need
a way to invoke the default registered Handle and Timeout functions. We
already have general functions for these internally, so promote
them to the public API.
v2:
Actually add APIs to libvirt.h
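For reference, a minimal sketch of an application using these calls once
they are public (the 1-second timer and its callback are only an example):
    #include <libvirt/libvirt.h>
    static void tick(int timer, void *opaque)
    {
        /* periodic work; 'timer' is the id from virEventAddTimeout() */
    }
    int main(void)
    {
        virEventRegisterDefaultImpl();              /* install default loop */
        virEventAddTimeout(1000, tick, NULL, NULL); /* now a public call    */
        for (;;) {
            if (virEventRunDefaultImpl() < 0)       /* run one iteration    */
                break;
        }
        return 0;
    }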
The QEMU driver integrates with the lock manager infrastructure in a number
of key places:
* During startup, a lock is acquired in between the fork & exec
* During startup, the libvirtd process acquires a lock before
setting file labelling
* During shutdown, the libvirtd process acquires a lock
before restoring file labelling
* During hotplug, unplug & media change, the libvirtd process
holds a lock while setting/restoring labels
The main content lock is only ever held by the QEMU child process,
or libvirtd during VM shutdown. The rest of the operations only
require libvirtd to hold the metadata locks, relying on the active
QEMU still holding the content lock.
* src/qemu/qemu_conf.c, src/qemu/qemu_conf.h,
src/qemu/libvirtd_qemu.aug, src/qemu/test_libvirtd_qemu.aug:
Add config parameter for configuring lock managers
* src/qemu/qemu_driver.c: Add calls to the lock manager
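The new qemu.conf knob might be used like this (a sketch; "sanlock" is only
an example value and its availability depends on how libvirt was built):
    # Lock manager plugin used to protect disk content while a domain runs
    lock_manager = "sanlock"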
Update the qemuDomainMigrateBegin method so that it accepts
an optional incoming XML document. This will be validated
for ABI compatibility against the current domain config,
and if this check passes, will be passed back out for use
by the qemuDomainMigratePrepare method on the target
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h,
src/qemu/qemu_migration.c: Allow custom XML to be passed
Originally, most libvirt domain-specific calls blocked during a migration.
A new mechanism has been implemented to allow specific calls
(blkstat/blkinfo) to be executed in this situation.
In the long term it would be desirable to have a more general
solution that marks further APIs as migration safe, without needing
special-case code.
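In outline, the requesting side works roughly like this sketch (every
identifier here is illustrative of the mechanism, not necessarily what the
patch adds):
    /* Sketch: the query API raises a request flag, then waits on a
     * condition variable until the migration loop has issued the monitor
     * command on its behalf and cleared the flag again. */
    priv->job.signals |= QEMU_JOB_SIGNAL_BLKSTAT;       /* illustrative flag */
    while (priv->job.signals & QEMU_JOB_SIGNAL_BLKSTAT) {
        if (virCondWait(&priv->job.signalCond, &vm->lock) < 0)
            goto cleanup;                                /* wait failed */
    }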
* src/qemu/qemu_migration.c: add some additional job signal
flags for doing blkstat/blkinfo during a migration
* src/qemu/qemu_domain.c: add a condition variable that can be
used to efficiently wait for the migration code to clear the
signal flag
* src/qemu/qemu_driver.c: execute blkstat/blkinfo using the
job signal flags during migration
These VIR_XXXX0 APIs are confusing; use the non-0-suffix APIs instead.
How do these conversions work? The magic is the gcc extension of ##.
When __VA_ARGS__ is empty, "##" will swallow the "," in "fmt," to
avoid a compile error.
example (origin -> after CPP):
    high_level_api("%d", a_int)  ->  low_level_api("%d", a_int)
    high_level_api("a string")   ->  low_level_api("a string")
About 400 conversions.
8 special conversions:
    VIR_XXXX0("")                    -> VIR_XXXX("msg")  (avoid empty format; 2 conversions)
    VIR_XXXX0(string_literal_with_%) -> VIR_XXXX(%->%%)  (0 conversions)
    VIR_XXXX0(non_string_literal)    -> VIR_XXXX("%s", non_string_literal)  (for security; 6 conversions)
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
As well as taint warnings going to the main libvirt log,
add taint warnings to the per-domain logfile
Domain id=3 is tainted: high-privileges
Domain id=3 is tainted: disk-probing
Domain id=3 is tainted: shell-scripts
Domain id=3 is tainted: custom-monitor
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Enhance
qemuDomainTaint to also log to the domain logfile
* src/qemu/qemu_driver.c: Pass -1 for logFD to taint methods to
auto-append to logfile
* src/qemu/qemu_process.c: Pass open logFD at startup for taint
methods
The qemuDomainAppendLog method allows writing a formatted string
to the end of the domain logfile, optionally opening it if needed.
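Its shape is roughly the following sketch (the exact parameter list and the
logFD == -1 convention are assumptions based on the description):
    /* Sketch only: append a printf-style message to the domain logfile;
     * when logFD is -1 the logfile is opened as needed, otherwise the
     * caller's already-open descriptor is used. */
    int qemuDomainAppendLog(struct qemud_driver *driver,
                            virDomainObjPtr obj,
                            int logFD,
                            const char *fmt, ...) ATTRIBUTE_FMT_PRINTF(4, 5);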
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add
qemuDomainAppendLog
Move the qemuProcessLogReadFD and qemuProcessLogFD methods
into qemu_domain.c, renaming them to qemuDomainCreateLog
and qemuDomainOpenLog.
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add
qemuDomainCreateLog and qemuDomainOpenLog.
* src/qemu/qemu_process.c: Remove qemuProcessLogFD
and qemuProcessLogReadFD
Wire up logging of VM tainting to the QEMU driver
- If running QEMU as root user/group or without capabilities
being cleared
- If passing custom QEMU command line args
- If issuing custom QEMU monitor commands
- If using a network interface config with an associated
shell script
- If using a disk config relying on format probing
The per-VM warnings appear in the main libvirtd logs:
11:56:17.571: 10832: warning : qemuDomainObjTaint:712 : Domain id=1 name='l2' uuid=c7a3edbd-edaf-9455-926a-d65c16db1802 is tainted: high-privileges
11:56:17.571: 10832: warning : qemuDomainObjTaint:712 : Domain id=1 name='l2' uuid=c7a3edbd-edaf-9455-926a-d65c16db1802 is tainted: disk-probing
The taint flags are reset when the VM is stopped.
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Helper APIs
for logging taint warnings
* src/qemu/qemu_driver.c: Log tainting with custom QEMU monitor
commands and disk/net hotplug with unsupported configs
* src/qemu/qemu_process.c: Log tainting at startup based on
unsupported configs
We already have virAsprintf, so picking a similar name helps convey
the similar purpose. Furthermore, the prefix V before printf
generally implies 'va_list', even though this variant took '...', and
the old name got in the way of adding a new va_list version.
global rename performed with:
$ git grep -l virBufferVSprintf \
| xargs -L1 sed -i 's/virBufferVSprintf/virBufferAsprintf/g'
then revert the changes in ChangeLog-old.
To cope with the QEMU binary being changed while a VM is running,
it is necessary to persist the original qemu capabilities at the
time the VM is booted.
* src/qemu/qemu_capabilities.c, src/qemu/qemu_capabilities.h: Add
an enum for a string rep of every capability
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Support for
storing capabilities in the domain status XML
* src/qemu/qemu_process.c: Populate & free QEMU capabilities at
domain startup
In qemuDomainObjBeginJobWithDriver, when virCondWaitUntil times out,
the function tries to call qemuDriverLock with the virDomainObj locked,
which causes a deadlock. This patch fixes it.
Add the compiler attribute to ensure we don't introduce any more
ref bugs like the one just patched in commit 9741f34, then explicitly
mark the remaining places in the code that are safe.
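For illustration, the pattern boils down to gcc's warn_unused_result
(a sketch; libvirt's real definitions live in internal.h and gnulib):
    #define ATTRIBUTE_RETURN_CHECK __attribute__((__warn_unused_result__))
    /* Prototype marked so that every caller must look at the result ... */
    int virDomainObjUnref(virDomainObjPtr dom) ATTRIBUTE_RETURN_CHECK;
    static void example_caller(virDomainObjPtr dom)
    {
        /* ... and the few places where ignoring it is safe say so
           explicitly via gnulib's ignore_value(). */
        ignore_value(virDomainObjUnref(dom));
    }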
* src/qemu/qemu_monitor.h (qemuMonitorUnref): Mark
ATTRIBUTE_RETURN_CHECK.
* src/conf/domain_conf.h (virDomainObjUnref): Likewise.
* src/conf/domain_conf.c (virDomainObjParseXML)
(virDomainLoadStatus): Fix offenders.
* src/openvz/openvz_conf.c (openvzLoadDomains): Likewise.
* src/vmware/vmware_conf.c (vmwareLoadDomains): Likewise.
* src/qemu/qemu_domain.c (qemuDomainObjBeginJob)
(qemuDomainObjBeginJobWithDriver)
(qemuDomainObjExitRemoteWithDriver): Likewise.
* src/qemu/qemu_monitor.c (QEMU_MONITOR_CALLBACK): Likewise.
Suggested by Daniel P. Berrange.
Steps to reproduce this bug:
# cat test.sh
#! /bin/bash -x
virsh start domain
sleep 5
virsh qemu-monitor-command domain 'cpu_set 2 online' --hmp
# while true; do ./test.sh ; done
Then libvirtd will crash.
The reason is that we take a reference on obj when we open the monitor,
and drop that reference when we free the monitor.
When the monitor's reference count drops to 0, the monitor is freed
automatically and the obj reference is dropped with it.
But the function qemuDomainObjExitMonitorWithDriver() drops this
reference a second time when the monitor's reference count is 0.
This causes obj to be freed in the function qemuDomainObjEndJob().
When we then start the domain again, libvirtd crashes in the function
virDomainObjListSearchName(), because we pass a null pointer (obj->def->name)
to strcmp().
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
The introduction of the v3 migration protocol, along with
support for migration cookies, will significantly expand
the size of the migration code. Move it all to a separate
file to make it more manageable
The functions are not moved 100%. The API entry points
remain in the main QEMU driver, but once the public
virDomainPtr is resolved to the internal virDomainObjPtr,
all following code is moved.
This will allow the new v3 API entry points to call into the
same shared internal migration functions
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add
qemuDomainFormatXML helper method
* src/qemu/qemu_driver.c: Remove all migration code
* src/qemu/qemu_migration.c, src/qemu/qemu_migration.h: Add
all migration code.
Move the qemudStartVMDaemon and qemudShutdownVMDaemon
methods into a separate file, renaming them to
qemuProcessStart, qemuProcessStop. All helper methods
called by these are also moved & renamed to match
* src/Makefile.am: Add qemu_process.c/.h
* src/qemu/qemu_command.c: Add qemuDomainAssignPCIAddresses
* src/qemu/qemu_command.h: Add VNC port min/max
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add
domain event queue helpers
* src/qemu/qemu_driver.c, src/qemu/qemu_driver.h: Remove
all QEMU process startup/shutdown functions
* src/qemu/qemu_process.c, src/qemu/qemu_process.h: Add
all QEMU process startup/shutdown functions
To allow the APIs to be used from separate files, move the domain
lock / job helper code into qemu_domain.c
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add domain lock
/ job code
* src/qemu/qemu_driver.c: Remove domain lock / job code
Move the code for handling the QEMU virDomainObjPtr private
data, and custom XML namespace into a separate file
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: New file
for private data & namespace code
* src/qemu/qemu_driver.c, src/qemu/qemu_driver.h: Remove
private data & namespace code
* src/qemu/qemu_driver.h, src/qemu/qemu_command.h: Update
includes
* src/Makefile.am: Add src/qemu/qemu_domain.c