The functions for manipulating pidfiles are in util/util.{c,h}.
We will shortly be adding some further pidfile-related functions.
To avoid growing util.c even more, this moves the pidfile-related
functions into a dedicated virpidfile.{c,h}. The functions are
also all renamed to have a 'virPidFile' name prefix.
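For illustration, a typical caller after the rename might look like
this (a sketch; the prototype is assumed to mirror the old
virFileWritePid helper):

    #include <sys/types.h>
    #include <unistd.h>
    #include "virpidfile.h"   /* the new in-tree header */

    /* Assumed prototype from virpidfile.h:
     *   int virPidFileWrite(const char *dir, const char *name, pid_t pid);
     */
    static int
    recordDaemonPid(const char *stateDir, const char *daemonName)
    {
        /* record our own PID under <stateDir>/<daemonName>.pid */
        return virPidFileWrite(stateDir, daemonName, getpid());
    }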
* util/util.h, util/util.c: Remove all pidfile code
* util/virpidfile.c, util/virpidfile.h: Add new APIs for pidfile
handling.
* lxc/lxc_controller.c, lxc/lxc_driver.c, network/bridge_driver.c,
qemu/qemu_process.c: Add virpidfile.h include and adapt for API
renames
Detected by ccc-analyzer, reported by Alex Jia.
qemuProcessStart always calls qemuProcessWaitForMonitor with a
non-negative position, but qemuProcessAttach always calls it with -1.
In the latter case, there is no log file we can scrape, so we
also should not be trying to scrape the logs if the qemu process
died at the very end.
* src/qemu/qemu_process.c (qemuProcessWaitForMonitor): Don't try
to read from log in qemuProcessAttach case.
The value stored to 'ret' is never read; in fact, the 'cleanup' section
directly returns -1 when the function fails, so remove this dead assignment.
* src/qemu/qemu_process.c: kill dead assignment.
Signed-off-by: Alex Jia <ajia@redhat.com>
Currently, we attempt to run a sync job and an async job at the same
time. It means that the monitor commands for the two jobs can be run in
any order.
In the function qemuDomainObjEnterMonitorInternal():
if (priv->job.active == QEMU_JOB_NONE && priv->job.asyncJob) {
if (qemuDomainObjBeginNestedJob(driver, obj) < 0)
We check whether the caller is an async job by priv->job.active and
priv->job.asyncJob. But when an async job is running, and a sync job is
also running at the time of the check, then priv->job.active is not
QEMU_JOB_NONE. So we cannot check whether the caller is an async job
in the function qemuDomainObjEnterMonitorInternal(), and must instead
put the burden on the caller to tell us when an async command wants
to do a nested job.
Once the burden is on the caller, then only async monitor enters need
to worry about whether the VM is still running; for sync monitor enter,
the internal return is always 0, so lots of ignore_value can be dropped.
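As an example, the new call pattern for an async job looks roughly like
this (a sketch; the enum value and exact signatures are assumptions
based on the file list below):

    /* An async job names itself when entering the monitor, so the
     * internals know to begin a nested job; only this async variant
     * can fail, namely when the VM died while waiting for the nested
     * job to begin. */
    if (qemuDomainObjEnterMonitorAsync(driver, vm,
                                       QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
        goto cleanup;
    ret = qemuMonitorMigrateToFd(priv->mon, flags, fd);
    qemuDomainObjExitMonitorWithDriver(driver, vm);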
* src/qemu/THREADS.txt: Reflect new rules.
* src/qemu/qemu_domain.h (qemuDomainObjEnterMonitorAsync): New
prototype.
* src/qemu/qemu_process.h (qemuProcessStartCPUs)
(qemuProcessStopCPUs): Add parameter.
* src/qemu/qemu_migration.h (qemuMigrationToFile): Likewise.
(qemuMigrationWaitForCompletion): Make static.
* src/qemu/qemu_domain.c (qemuDomainObjEnterMonitorInternal): Add
parameter.
(qemuDomainObjEnterMonitorAsync): New function.
(qemuDomainObjEnterMonitor, qemuDomainObjEnterMonitorWithDriver):
Update callers.
* src/qemu/qemu_driver.c (qemuDomainSaveInternal)
(qemudDomainCoreDump, doCoreDump, processWatchdogEvent)
(qemudDomainSuspend, qemudDomainResume, qemuDomainSaveImageStartVM)
(qemuDomainSnapshotCreateActive, qemuDomainRevertToSnapshot):
Likewise.
* src/qemu/qemu_process.c (qemuProcessStopCPUs)
(qemuProcessFakeReboot, qemuProcessRecoverMigration)
(qemuProcessRecoverJob, qemuProcessStart): Likewise.
* src/qemu/qemu_migration.c (qemuMigrationToFile)
(qemuMigrationWaitForCompletion, qemuMigrationUpdateJobStatus)
(qemuMigrationJobStart, qemuDomainMigrateGraphicsRelocate)
(doNativeMigrate, doTunnelMigrate, qemuMigrationPerformJob)
(qemuMigrationPerformPhase, qemuMigrationFinish)
(qemuMigrationConfirm): Likewise.
* src/qemu/qemu_hotplug.c: Drop unneeded ignore_value.
Although most functions in libvirt return 0 on success and < 0 on
failure, there are a few functions lingering around that return errno
(a positive value) on failure, and sometimes code calling those
functions incorrectly assumes the <0 standard. I noticed one of these
the other day when auditing networkStartDhcpDaemon after Guido Gunther
found a place where success was improperly returned on failure (that
patch has been acked and is pending a push). The problem was that it
expected the return value from virFileReadPid to be < 0 on failure,
but it was actually positive (it also neglected to set the return
code in this case, similar to the bug found by Guido).
This led to the realization that *all* of the virFile*Pid functions in
util.c return errno on failure. This patch remedies that
problem by changing them all to return -errno on failure, and makes
any necessary changes to callers of the functions. (In the meantime, I
also properly set the return code on failure of virFileReadPid in
networkStartDhcpDaemon).
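A minimal sketch of the new convention (a hypothetical helper, not the
actual util.c code):

    #include <errno.h>
    #include <stdio.h>
    #include <sys/types.h>

    /* New convention: return 0 on success, -errno on failure, so
     * callers can rely on the usual "< 0 means error" idiom. */
    static int
    readPidFile(const char *path, pid_t *pid)
    {
        FILE *fp = fopen(path, "r");
        int value;

        if (!fp)
            return -errno;            /* e.g. -ENOENT; always negative */

        if (fscanf(fp, "%d", &value) != 1) {
            fclose(fp);
            return -EINVAL;
        }
        fclose(fp);
        *pid = value;
        return 0;
    }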
When an operation started by virDomainBlockPull completes (either with
success or with failure), raise an event to indicate the final status.
This API allows users to avoid polling on virDomainGetBlockJobInfo if
they would prefer to use an event mechanism.
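A sketch of how an application might consume the event instead of
polling (the callback shape is an assumption based on this series):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    /* Invoked when a block pull job finishes (signature assumed): */
    static void
    blockJobDone(virConnectPtr conn, virDomainPtr dom, const char *disk,
                 int type, int status, void *opaque)
    {
        fprintf(stderr, "block job on %s ended with status %d\n",
                disk, status);
    }

    /* Registration uses the generic any-event API: */
    static int
    watchBlockJobs(virConnectPtr conn, virDomainPtr dom)
    {
        return virConnectDomainEventRegisterAny(conn, dom,
                   VIR_DOMAIN_EVENT_ID_BLOCK_JOB,
                   VIR_DOMAIN_EVENT_CALLBACK(blockJobDone),
                   NULL, NULL);
    }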
* daemon/remote.c: Dispatch events to client
* include/libvirt/libvirt.h.in: Define event ID and callback signature
* src/conf/domain_event.c, src/conf/domain_event.h,
src/libvirt_private.syms: Extend API to handle the new event
* src/qemu/qemu_driver.c: Connect to the QEMU monitor event
for block_stream completion and emit a libvirt block pull event
* src/remote/remote_driver.c: Receive and dispatch events to application
* src/remote/remote_protocol.x: Wire protocol definition for the event
* src/remote_protocol-structs: structure definitions for protocol verification
* src/qemu/qemu_monitor.c, src/qemu/qemu_monitor.h,
src/qemu/qemu_monitor_json.c: Watch for BLOCK_STREAM_COMPLETED event
from QEMU monitor
The network driver needs to assign physical devices for use by modes
that use macvtap, keeping track of which physical devices are in use
(and how many instances, when the devices can be shared). Three calls
are added (a usage sketch follows the list):
networkAllocateActualDevice - finds a physical device for use by the
domain, and sets up the virDomainActualNetDef accordingly.
networkNotifyActualDevice - handles the case where the domain was
already running but libvirtd was restarted; each already-running
domain notifies the driver about which interfaces it is using.
networkReleaseActualDevice - decrements the usage count of the
allocated physical device, and frees the virDomainActualNetDef to
avoid later accidentally using the device.
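A sketch of the intended lifecycle in the qemu driver (prototypes
assumed from the descriptions above; virDomainNetDefPtr is libvirt's
own type):

    /* Assumed prototypes, following the descriptions above: */
    int networkAllocateActualDevice(virDomainNetDefPtr net);
    int networkNotifyActualDevice(virDomainNetDefPtr net);
    int networkReleaseActualDevice(virDomainNetDefPtr net);

    /* Guest startup: reserve a physical device and fill in the
     * virDomainActualNetDef before building the qemu command line. */
    static int
    prepareInterface(virDomainNetDefPtr net)
    {
        if (networkAllocateActualDevice(net) < 0)
            return -1;
        return 0;
    }

    /* Guest shutdown: drop the usage count and free the ActualNetDef
     * so the device cannot be accidentally reused. */
    static void
    teardownInterface(virDomainNetDefPtr net)
    {
        networkReleaseActualDevice(net);
    }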
bridge_driver.[hc] - the new APIs. When WITH_NETWORK is false, these
functions are all #defined to be "0" in the .h file (effectively
becoming a NOP) to prevent link errors.
qemu_(command|driver|hotplug|process).c - add calls to the above APIs
in the appropriate places.
tests/Makefile.am - we need to include libvirt_driver_network.la
whenever libvirt_driver_qemu.la is linked, to avoid unreferenced
symbols (in functions that are never called by the test
programs...)
The qemu driver accesses fields in the virDomainNetDef directly, but
with the advent of the virDomainActualNetDef, some pieces of
information may be found in a different place (the ActualNetDef) if
the network connection is of type='network' and that network is of
forward type='bridge|private|vepa|passthrough'. The previous patch
added functions to mask this difference from callers - they hide the
decision making process and just pick the value from the proper place.
This patch uses those functions in the qemu driver as a first step in
making qemu work with the new network types. At this point, the
virDomainActualNetDef is guaranteed always NULL, so the GetActualX()
function will return exactly what the def->X that's being replaced
would have returned (i.e. bisecting is not compromised).
There is one place (in qemu_driver.c) where the internal details of
the NetDef are directly manipulated by the code, so the GetActual
functions cannot be used there without additional code; that
file will be treated in a separate patch.
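For instance, the substitution looks roughly like this (accessor names
assumed from the previous patch; types come from domain_conf.h):

    #include "domain_conf.h"   /* in-tree header: NetDef types, accessors */

    /* Sketch: replace direct field access with the GetActual helpers,
     * which consult the ActualNetDef when present and fall back to
     * the NetDef otherwise. */
    static const char *
    getBridgeForNic(virDomainNetDefPtr net)
    {
        /* previously: return net->data.bridge.brname; */
        if (virDomainNetGetActualType(net) == VIR_DOMAIN_NET_TYPE_BRIDGE)
            return virDomainNetGetActualBridgeName(net);
        return NULL;
    }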
The virtPortProfile in the domain interface struct is now a separately
allocated object *pointed to by* (rather than contained in) the main
virDomainNetDef object. This is done to make it easier to figure out
when a virtualPortProfile has/hasn't been specified in a particular
config.
There were two APIs in driver.c that were silently masking flags
bits prior to calling out to the drivers, and several others
that were explicitly masking flags bits. This is not
forward-compatible - if we ever define that many flags in the
future, then talking to an old server that masks out the
flags would be indistinguishable from talking to a new server
that can honor the flag. In general, libvirt.c should forward
_all_ flags on to drivers, and only the drivers should reject
unknown flags.
In the case of virDrvSecretGetValue, the solution is to separate
the internal driver callback function to have two parameters
instead of one, with only one parameter affected by the public
API. In the case of virDomainGetXMLDesc, it turns out that
no one was ever mixing VIR_DOMAIN_XML_INTERNAL_STATUS with
the dumpxml path in the first place; that internal flag was
only used in saving and restoring state files, which happened
to be in functions internal to a single file, so there is no
mixing of the internal flag with a public flags argument.
Additionally, virDomainMemoryStats passed a flags argument
over RPC, but not to the driver.
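The resulting division of labor, sketched with libvirt's virCheckFlags
helper macro (the function name here is illustrative):

    /* libvirt.c forwards flags untouched; each driver rejects what it
     * does not understand. virCheckFlags(allowed, retval) returns
     * 'retval' with an error set if any other bit is present. */
    static char *
    exampleDomainGetXMLDesc(virDomainPtr dom, unsigned int flags)
    {
        virCheckFlags(VIR_DOMAIN_XML_SECURE | VIR_DOMAIN_XML_INACTIVE,
                      NULL);
        /* ... produce and return the XML document ... */
        return NULL;
    }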
* src/driver.h (VIR_DOMAIN_XML_FLAGS_MASK)
(VIR_SECRET_GET_VALUE_FLAGS_MASK): Delete.
(virDrvSecretGetValue): Separate out internal flags.
(virDrvDomainMemoryStats): Provide missing flags argument.
* src/driver.c (verify): Drop unused check.
* src/conf/domain_conf.h (virDomainObjParseFile): Delete
declaration.
(virDomainXMLInternalFlags): Move...
* src/conf/domain_conf.c: ...here. Delete redundant include.
(virDomainObjParseFile): Make static.
* src/libvirt.c (virDomainGetXMLDesc, virSecretGetValue): Update
clients.
(virDomainMemoryPeek, virInterfaceGetXMLDesc)
(virDomainMemoryStats, virDomainBlockPeek, virNetworkGetXMLDesc)
(virStoragePoolGetXMLDesc, virStorageVolGetXMLDesc)
(virNodeNumOfDevices, virNodeListDevices, virNWFilterGetXMLDesc):
Don't mask unknown flags.
* src/interface/netcf_driver.c (interfaceGetXMLDesc): Reject
unknown flags.
* src/secret/secret_driver.c (secretGetValue): Update clients.
* src/remote/remote_driver.c (remoteSecretGetValue)
(remoteDomainMemoryStats): Likewise.
* src/qemu/qemu_process.c (qemuProcessGetVolumeQcowPassphrase):
Likewise.
* src/qemu/qemu_driver.c (qemudDomainMemoryStats): Likewise.
* daemon/remote.c (remoteDispatchDomainMemoryStats): Likewise.
When creating new qemu process we saved domain status XML only after the
process was fully setup and running. In case libvirtd was killed before
the whole process finished, once libvirtd started again it didn't know
anything about the new process and we ended up with an orphaned qemu
process. Let's save the domain status XML as soon as we know the PID so
that libvirtd can kill the process on restart.
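A sketch of the reordering (helper names as in the surrounding code of
that era; treat the exact signatures as assumptions):

    /* Old order: start qemu, set up the monitor, THEN save status XML.
     * New order: save status XML as soon as the PID is known. */
    if (virFileReadPid(driver->stateDir, vm->def->name, &vm->pid) < 0)
        goto cleanup;

    /* Persist now: if libvirtd is killed past this point, the
     * restarted daemon can still find (and kill) the qemu process. */
    if (virDomainSaveStatus(driver->caps, driver->stateDir, vm) < 0)
        goto cleanup;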
Query commands are safe to be called during long-running jobs (such as
migration). This patch makes them all work without the need to
special-case every single one of them.
The patch introduces new job.asyncCond condition and associated
job.asyncJob which are dedicated to asynchronous (from qemu monitor
point of view) jobs that can take arbitrarily long time to finish while
qemu monitor is still usable for other commands.
The existing job.active (and job.cond condition) is used for all other
synchronous jobs (including the commands run during an async job).
The locking schema is changed to use these two conditions. While an
asyncJob is active, only a limited set of synchronous jobs is allowed
(the set can differ for each particular asyncJob), so any method that
communicates with the qemu monitor needs to check whether it is allowed
to be executed during the current asyncJob (if any). Once the check
passes, the method needs to acquire job.cond as usual to ensure no
other command is running. Since the domain object lock is released
during that time, an asyncJob could have been started in the meantime,
so the method needs to recheck the first condition. Then, normal jobs
set job.active and asynchronous jobs set job.asyncJob and optionally
change the list of allowed job groups.
Since asynchronous jobs only set job.asyncJob, other allowed commands
can still be run while the domain object is unlocked (when communicating
with a remote libvirtd or sleeping). To protect its own internal synchronous
commands, the asynchronous job needs to start a special nested job
before entering qemu monitor. The nested job doesn't check asyncJob, it
only acquires job.cond and sets job.active to block other jobs.
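A compressed sketch of these rules (simplified types; the real code
lives in qemu_domain.c and uses libvirt's own locking wrappers):

    #include <pthread.h>

    struct job {                   /* simplified mirror of priv->job */
        pthread_cond_t cond;       /* signalled when job.active clears */
        pthread_cond_t asyncCond;  /* signalled when job.asyncJob clears */
        int active;                /* current sync job, 0 = none */
        int asyncJob;              /* current async job, 0 = none */
        unsigned int mask;         /* sync jobs allowed during asyncJob */
    };

    /* Begin a sync job; 'lock' is the domain object lock, held on entry. */
    static void
    beginJob(pthread_mutex_t *lock, struct job *job, int newJob)
    {
        for (;;) {
            /* wait until the async job (if any) permits newJob */
            while (job->asyncJob && !(job->mask & (1U << newJob)))
                pthread_cond_wait(&job->asyncCond, lock);
            /* wait until no other sync job is running */
            while (job->active)
                pthread_cond_wait(&job->cond, lock);
            /* an async job may have started while we slept on job.cond,
             * so recheck the first condition before claiming the slot */
            if (!job->asyncJob || (job->mask & (1U << newJob)))
                break;
        }
        job->active = newJob;
    }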
The LXC and UML drivers can both make use of auditing. Move
the qemu_audit.{c,h} files to src/conf/domain_audit.{c,h}.
* src/conf/domain_audit.c: Rename from src/qemu/qemu_audit.c
* src/conf/domain_audit.h: Rename from src/qemu/qemu_audit.h
* src/Makefile.am: Remove qemu_audit.{c,h}, add domain_audit.{c,h}
* src/qemu/qemu_audit.h, src/qemu/qemu_cgroup.c,
src/qemu/qemu_command.c, src/qemu/qemu_driver.c,
src/qemu/qemu_hotplug.c, src/qemu/qemu_migration.c,
src/qemu/qemu_process.c: Update for changed audit API names
Given a PID, the QEMU driver reads /proc/$PID/cmdline and
/proc/$PID/environ to get the configuration. This is fed
into the ARGV->XML converter to build an XML configuration
for the process.
/proc/$PID/exe is resolved to identify the full command
binary path.
After checking for name/uuid uniqueness, an attempt is
made to connect to the monitor socket. If successful,
'info status' and 'info kvm' are issued to determine
whether the CPUs are running and if KVM is enabled.
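For illustration, resolving the binary path can be done like this (a
standalone sketch, not the driver's actual helper):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>

    /* Resolve /proc/$PID/exe to find the full binary path; cmdline and
     * environ are read similarly, as NUL-separated string lists. */
    static int
    getProcessBinary(pid_t pid, char *buf, size_t buflen)
    {
        char path[64];
        ssize_t n;

        snprintf(path, sizeof(path), "/proc/%d/exe", (int)pid);
        n = readlink(path, buf, buflen - 1); /* doesn't NUL-terminate */
        if (n < 0)
            return -1;
        buf[n] = '\0';
        return 0;
    }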
* src/qemu/qemu_driver.c: Implement virDomainQemuAttach
* src/qemu/qemu_process.h, src/qemu/qemu_process.c: Add
qemuProcessAttach to connect to the monitor of an
existing QEMU process
Avoid re-formatting the pidfile path every time we need it. Create
it once when starting the guest, and preserve it until the guest
is shut down.
* src/libvirt_private.syms, src/util/util.c,
src/util/util.h: Add virFileReadPidPath
* src/qemu/qemu_domain.h: Add pidfile field
* src/qemu/qemu_process.c: Store pidfile path in qemuDomainObjPrivate
Some callers expected virFileMakePath to set errno, some expected
it to return an errno value. Unify this to return 0 on success and
-1 on error. Set errno to report detailed error information.
Also optimize virFileMakePath if stat fails with an errno different
from ENOENT.
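A sketch of the unified contract (hypothetical helper; the recursive
parent creation is elided):

    #include <errno.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    /* Return 0 on success, -1 on error with errno set. The
     * optimization: if stat() fails with anything other than ENOENT,
     * bail out immediately instead of trying to create parents. */
    static int
    makePath(const char *path)
    {
        struct stat st;

        if (stat(path, &st) == 0)
            return 0;                /* already exists */
        if (errno != ENOENT)
            return -1;               /* real error; errno already set */

        /* ... recursively create parent directories here ... */

        if (mkdir(path, 0777) < 0 && errno != EEXIST)
            return -1;
        return 0;
    }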
Add a new attribute to the <seclabel> XML to allow resource
relabelling to be enabled with static label usage.
<seclabel model='selinux' type='static' relabel='yes'>
<label>system_u:system_r:svirt_t:s0:c392,c662</label>
</seclabel>
* docs/schemas/domain.rng: Add relabel attribute
* src/conf/domain_conf.c, src/conf/domain_conf.h: Parse
the 'relabel' attribute
* src/qemu/qemu_process.c: Unconditionally clear out the
'imagelabel' attribute
* src/security/security_apparmor.c: Skip based on 'relabel'
attribute instead of label type
* src/security/security_selinux.c: Skip based on 'relabel'
attribute instead of label type and fill in <imagelabel>
attribute if relabel is enabled.
Normally the dynamic labelling mode will always use a base
label of 'svirt_t' for VMs. Introduce a <baselabel> field
in the <seclabel> XML to allow this base label to be changed,
e.g.
<seclabel type='dynamic' model='selinux'>
<baselabel>system_u:object_r:virt_t:s0</baselabel>
</seclabel>
* docs/schemas/domain.rng: Add <baselabel>
* src/conf/domain_conf.c, src/conf/domain_conf.h: Parsing
of base label
* src/qemu/qemu_process.c: Don't reset 'model' attribute if
a base label is specified
* src/security/security_apparmor.c: Refuse to support base label
* src/security/security_selinux.c: Use 'baselabel' when generating
label, if available
The libvirt sanlock plugin is intentionally leaking a file
descriptor to QEMU. To enable QEMU to use this FD under
SELinux, it must be labelled correctly. We don't want to use
the svirt_image_t for this, since QEMU must not be allowed
to actually use the FD. So instead we label it with svirt_t
using virSecurityManagerSetProcessFDLabel.
* src/locking/domain_lock.c, src/locking/domain_lock.h,
src/locking/lock_driver.h, src/locking/lock_driver_nop.c,
src/locking/lock_driver_sanlock.c, src/locking/lock_manager.c,
src/locking/lock_manager.h: Optionally pass an FD back to
the hypervisor for security driver labelling
* src/qemu/qemu_process.c: label the lock manager plugin
FD with the process label
The virSecurityManagerSetFDLabel method is used to label
file descriptors associated with disk images. There will
shortly be a need to label other file descriptors in a
different way. So the current name is ambiguous. Rename
the method to virSecurityManagerSetImageFDLabel to clarify
its purpose.
* src/libvirt_private.syms,
src/qemu/qemu_migration.c, src/qemu/qemu_process.c,
src/security/security_apparmor.c, src/security/security_dac.c,
src/security/security_driver.h, src/security/security_manager.c,
src/security/security_manager.h, src/security/security_selinux.c,
src/security/security_stack.c: s/FDLabel/ImageFDLabel/
Use NUMA's older nodemask_t (fixed-size map) rather than the newer
'struct bitmask' (variable-size) in order to still compile on RHEL 5,
with its numactl-devel-0.9.8.
* src/qemu/qemu_process.c [HAVE_NUMA]: Prefer back-compat mode.
(qemuProcessInitNumaMemoryPolicy): Use older nodemask_t.
If an application is using libvirt + KVM as a piece of its
internal infrastructure to perform a specific task, it can
be desirable to guarantee the VM dies when the virConnectPtr
disconnects from libvirtd. This ensures the app can't leak
any VMs it was using. Adding VIR_DOMAIN_START_AUTOKILL as
a flag when starting guests enables this to be done.
* include/libvirt/libvirt.h.in: Add VIR_DOMAIN_START_AUTOKILL
* src/qemu/qemu_driver.c: Support automatic killing of guests
upon connection close
* tools/virsh.c: Add --autokill flag to 'start' and 'create'
commands
Sometimes it is useful to be able to automatically destroy a guest when
a connection is closed. For example, kill an incoming migration if
the client managing the migration dies. This introduces a map between
guest 'uuid' strings and virConnectPtr objects. When a connection is
closed, any associated guests are killed off.
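An illustrative sketch of the bookkeeping (the real driver uses
libvirt's virHash table; this standalone version uses a tiny fixed
array, and all names here are hypothetical):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    struct autoKillEntry {
        char uuid[VIR_UUID_STRING_BUFLEN];
        virConnectPtr conn;
    };

    static struct autoKillEntry autoKillMap[64];
    static size_t nAutoKill;

    /* Register a guest for autodestroy when 'conn' closes. */
    static int
    autoKillAdd(const char *uuid, virConnectPtr conn)
    {
        if (nAutoKill >= sizeof(autoKillMap)/sizeof(autoKillMap[0]))
            return -1;
        snprintf(autoKillMap[nAutoKill].uuid,
                 sizeof(autoKillMap[nAutoKill].uuid), "%s", uuid);
        autoKillMap[nAutoKill].conn = conn;
        nAutoKill++;
        return 0;
    }

    /* Called on connection close: destroy every guest it registered. */
    static void
    autoKillRunOnClose(virConnectPtr conn)
    {
        size_t i;
        for (i = 0; i < nAutoKill; i++) {
            virDomainPtr dom;
            if (autoKillMap[i].conn != conn)
                continue;
            dom = virDomainLookupByUUIDString(conn, autoKillMap[i].uuid);
            if (dom) {
                virDomainDestroy(dom);
                virDomainFree(dom);
            }
        }
    }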
* src/qemu/qemu_conf.h: Add autokill hash table to qemu driver
* src/qemu/qemu_process.c, src/qemu/qemu_process.h: Add APIs
for performing autokill of guests associated with a connection
* src/qemu/qemu_driver.c: Initialize autodestroy map
For controlled shutdown we issue a 'system_powerdown' command
to the QEMU monitor. This triggers an ACPI event which (most)
guest OSes wire up to a controlled shutdown. There is no equivalent
ACPI event to trigger a controlled reboot. This patch attempts
to fake a reboot.
- In qemuDomainObjPrivatePtr we have a bool fakeReboot
flag.
- The virDomainReboot method sets this flag and then
triggers a normal 'system_powerdown'.
- The QEMU process is started with '-no-shutdown'
so that the guest CPUs pause when the guest
powers off
- When we receive the 'POWEROFF' event from the QEMU JSON
monitor and fakeReboot is not set, we invoke the
qemuProcessKill command and shutdown continues
normally
- If fakeReboot was set, we spawn a background thread
which issues 'system_reset' to perform a warm reboot
of the guest hardware. Then it issues 'cont' to
start the CPUs again (see the sketch below)
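A sketch of the event handler logic (monitor binding names follow the
file list below; treat the exact signatures as assumptions):

    static void
    handlePowerOffEvent(virDomainObjPtr vm)
    {
        qemuDomainObjPrivatePtr priv = vm->privateData;

        if (!priv->fakeReboot) {
            qemuProcessKill(vm);          /* normal shutdown path */
            return;
        }

        /* Fake reboot: warm-reset the virtual hardware, then restart
         * the CPUs. In the real code this runs in a background thread
         * so the event loop is not blocked. */
        qemuMonitorSystemReset(priv->mon);
        qemuMonitorStartCPUs(priv->mon, NULL);
    }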
* src/qemu/qemu_command.c: Add -no-shutdown flag if
we have JSON support
* src/qemu/qemu_domain.h: Add 'fakeReboot' flag to
qemuDomainObjPrivate struct
* src/qemu/qemu_driver.c: Fake reboot using the
system_powerdown command if JSON support is available
* src/qemu/qemu_monitor.c, src/qemu/qemu_monitor.h,
src/qemu/qemu_monitor_json.c, src/qemu/qemu_monitor_json.h,
src/qemu/qemu_monitor_text.c, src/qemu/qemu_monitor_text.h: Add
binding for system_reset command
* src/qemu/qemu_process.c: Reset the guest & start CPUs if
fakeReboot is set
The following patch addresses the problem that when a PASSTHROUGH
mode DIRECT NIC connection is made, the MAC address of the NIC is
not automatically set to the configured VM MAC and
reset back again afterwards.
The patch fixes this by setting and resetting the MAC,
remembering the previous setting while the VM is running.
This also works if libvirtd is restarted while the VM is running.
The patch passes make syntax-check.
This is implemented by setting the NUMA policy between fork and exec,
as a hook using libnuma. Only memory tuning on the domain process is
supported currently.
For a nodemask that is out of range, a soft warning is reported
instead of a hard error in the libvirt layer. (The kernel stays
silent as long as at least one set bit in the nodemask is valid on
the host; e.g. for a host with two NUMA nodes, the kernel is silent
for nodemask "01010101".) So a soft warning is the only thing libvirt
can do, as one might want to specify the NUMA policy for a node that
doesn't exist yet but may be hotplugged soon.
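A sketch of the hook's core, using the libnuma version-1 compat API
(the helper and its arguments are illustrative):

    #define NUMA_VERSION1_COMPATIBILITY 1   /* fixed-size nodemask_t API */
    #include <numa.h>

    /* Bind the about-to-exec domain process's memory to the given
     * host NUMA nodes, in 'strict' mode. */
    static int
    bindMemoryToNodes(const int *nodes, size_t nnodes)
    {
        nodemask_t mask;
        size_t i;

        if (numa_available() < 0)
            return -1;                  /* host has no NUMA support */

        nodemask_zero(&mask);
        for (i = 0; i < nnodes; i++) {
            if (nodes[i] >= NUMA_NUM_NODES)
                continue;   /* out-of-range node: soft warning only */
            nodemask_set(&mask, nodes[i]);
        }

        numa_set_bind_policy(1);        /* strict bind, not preferred */
        numa_set_membind(&mask);
        numa_set_bind_policy(0);
        return 0;
    }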
When an operation started by virDomainBlockPullAll completes (either with
success or with failure), raise an event to indicate the final status. This
allows an API user to avoid polling on virDomainBlockPullInfo if they would
prefer to use the event mechanism.
* daemon/remote.c: Dispatch events to client
* include/libvirt/libvirt.h.in: Define event ID and callback signature
* src/conf/domain_event.c, src/conf/domain_event.h,
src/libvirt_private.syms: Extend API to handle the new event
* src/qemu/qemu_driver.c: Connect to the QEMU monitor event
for block_stream completion and emit a libvirt block pull event
* src/remote/remote_driver.c: Receive and dispatch events to application
* src/remote/remote_protocol.x: Wire protocol definition for the event
* src/qemu/qemu_monitor.c, src/qemu/qemu_monitor.h,
src/qemu/qemu_monitor_json.c: Watch for BLOCK_STREAM_COMPLETED event
from QEMU monitor
Signed-off-by: Adam Litke <agl@us.ibm.com>
If qemu supports -chardev, our char frontend aliases are e.g. 'charserial0',
not just 'serial0'. Typically we don't use this code path because the
ptys are scraped from stdout.
There is a case where CPU affinities for qemu vcpus don't work
correctly. For example, if only one vcpupin setting entry is provided
and its setting is not for vcpu0, it is not applied.
# virsh dumpxml VM
...
<vcpu>4</vcpu>
<cputune>
<vcpupin vcpu='3' cpuset='9-11'/>
</cputune>
...
# virsh start VM
Domain VM started
# virsh vcpuinfo VM
VCPU: 0
CPU: 31
State: running
CPU time: 2.5s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
VCPU: 1
CPU: 12
State: running
CPU time: 0.9s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
VCPU: 2
CPU: 30
State: running
CPU time: 1.5s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
VCPU: 3
CPU: 13
State: running
CPU time: 1.7s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
This patch fixes this problem.
Signed-off-by: Taku Izumi <izumi.taku@jp.fujitsu.com>
The error code for virKillProcess is returned in the errno variable,
not the return value. This mistake caused the logs to be filled with
errors when shutting down QEMU processes.
* src/qemu/qemu_process.c: Fix process kill check.
Detected by Coverity. This leaked a cpumap on every iteration
of the loop. Leak introduced in commit 1cc4d02 (v0.9.0).
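The shape of the fix, sketched with libvirt's allocation macros
(VIR_ALLOC_N/VIR_FREE from the in-tree memory.h; the function body is
simplified):

    /* Hoist the allocation out of the per-vcpu loop so the map is
     * allocated and freed exactly once. */
    static int
    setVcpuAffinities(int nvcpus, int maplen)
    {
        unsigned char *cpumap = NULL;
        int i, ret = -1;

        if (VIR_ALLOC_N(cpumap, maplen) < 0)
            goto cleanup;

        for (i = 0; i < nvcpus; i++) {
            memset(cpumap, 0, maplen);
            /* ... set the bits for vcpu i and apply the affinity ... */
        }
        ret = 0;

    cleanup:
        VIR_FREE(cpumap);
        return ret;
    }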
* src/qemu/qemu_process.c (qemuProcessSetVcpuAffinites): Plug
leak, and hoist allocation outside loop.
The QEMU driver integrates with the lock manager infrastructure in a
number of key places:
* During startup, a lock is acquired in between the fork & exec
* During startup, the libvirtd process acquires a lock before
setting file labelling
* During shutdown, the libvirtd process acquires a lock
before restoring file labelling
* During hotplug, unplug & media change, the libvirtd process
holds a lock while setting/restoring labels
The main content lock is only ever held by the QEMU child process,
or libvirtd during VM shutdown. The rest of the operations only
require libvirtd to hold the metadata locks, relying on the active
QEMU still holding the content lock.
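A sketch of the first integration point (the domain_lock.h signature
and the driver's lockManager field are assumptions for this
illustration):

    #include <stdbool.h>
    #include <unistd.h>

    /* Assumed prototype from domain_lock.h: */
    int virDomainLockProcessStart(virLockManagerPluginPtr plugin,
                                  virDomainObjPtr vm, bool paused, int *fd);

    static pid_t
    spawnQemuWithLock(struct qemud_driver *driver, virDomainObjPtr vm,
                      const char *binary, char *const argv[])
    {
        pid_t pid = fork();

        if (pid == 0) {
            /* child: take the content lock between fork and exec, so
             * the lease's lifetime matches the qemu process itself */
            if (virDomainLockProcessStart(driver->lockManager, vm,
                                          true /* paused */, NULL) < 0)
                _exit(1);           /* refuse to run without the lease */
            execv(binary, argv);
            _exit(1);               /* exec failed */
        }
        return pid;
    }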
* src/qemu/qemu_conf.c, src/qemu/qemu_conf.h,
src/qemu/libvirtd_qemu.aug, src/qemu/test_libvirtd_qemu.aug:
Add config parameter for configuring lock managers
* src/qemu/qemu_driver.c: Add calls to the lock manager