Found when attempting to build on Fedora 17 alpha with:
./autogen.sh --system --enable-compile-warnings=error
(this same build command works without problem on Fedora 16). Since
the consumer of the qemuProcessReconnectData doesn't assume that the
other fields of the struct are initialized (although it uses them
internally), the simpler solution is to just switch to C99-style
struct initialization (which doesn't require specification of all
fields).
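For illustration, a minimal sketch of the style change (the member names below are placeholders, not the real qemuProcessReconnectData definition): with C99 designated initializers any member not named is implicitly zero-initialized, which keeps the stricter warnings quiet.

    struct reconnectData {
        void *conn;      /* placeholder members, for illustration only */
        void *driver;
        void *payload;
    };

    void example(void *conn, void *driver)
    {
        /* old positional style (warns about missing initializers):
         *     struct reconnectData data = { conn, driver };           */
        struct reconnectData data = { .conn = conn, .driver = driver };
        (void)data;      /* 'payload' is implicitly initialized to NULL */
    }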
Return statements whose argument was enclosed in parentheses were modified
and the parentheses removed. The whole change was scripted; here is how:
List of files was obtained using this command:
git grep -l -e '\<return\s*([^()]*\(([^()]*)[^()]*\)*)\s*;' | \
grep -e '\.[ch]$' -e '\.py$'
Found files were modified with this command:
sed -i -e \
's_^\(.*\<return\)\s*(\(\([^()]*([^()]*)[^()]*\)*\))\s*\(;.*$\)_\1 \2\4_' \
-e 's_^\(.*\<return\)\s*(\([^()]*\))\s*\(;.*$\)_\1 \2\3_'
Then checked for nonsense.
The whole command looks like this:
git grep -l -e '\<return\s*([^()]*\(([^()]*)[^()]*\)*)\s*;' | \
grep -e '\.[ch]$' -e '\.py$' | xargs sed -i -e \
's_^\(.*\<return\)\s*(\(\([^()]*([^()]*)[^()]*\)*\))\s*\(;.*$\)_\1 \2\4_' \
-e 's_^\(.*\<return\)\s*(\([^()]*\))\s*\(;.*$\)_\1 \2\3_'
This introduces a new running reason, VIR_DOMAIN_RUNNING_WAKEUP, and a new
started lifecycle event detail, VIR_DOMAIN_EVENT_STARTED_WAKEUP.
When a wakeup event is emitted, a domain which had entered
VIR_DOMAIN_PMSUSPENDED is transitioned to "running"
with reason VIR_DOMAIN_RUNNING_WAKEUP, and a new domain lifecycle
event is emitted with detail VIR_DOMAIN_EVENT_STARTED_WAKEUP.
This patch introduces a new event type for the QMP event
SUSPEND:
VIR_DOMAIN_EVENT_ID_PMSUSPEND
The event doesn't take any data, but considering there might
be a reason for the suspend in the future, the callback definition is:
typedef void
(*virConnectDomainEventSuspendCallback)(virConnectPtr conn,
                                        virDomainPtr dom,
                                        int reason,
                                        void *opaque);
"reason" is currently unused; 0 is always passed.
This patch introduces a new event type for the QMP event
WAKEUP:
VIR_DOMAIN_EVENT_ID_PMWAKEUP
The event doesn't take any data, but considering there might
be a reason for the wakeup in the future, the callback definition is:
typedef void
(*virConnectDomainEventWakeupCallback)(virConnectPtr conn,
                                       virDomainPtr dom,
                                       int reason,
                                       void *opaque);
"reason" is currently unused; 0 is always passed.
This patch introduces a new event type for the QMP event
DEVICE_TRAY_MOVED, which occurs when the tray of a removable
disk is moved (i.e. opened or closed):
VIR_DOMAIN_EVENT_ID_TRAY_CHANGE
The event's data includes the device alias and the reason for the
tray status change, i.e. why the tray status was changed.
The callback definition for the event is therefore:
typedef enum {
    VIR_DOMAIN_EVENT_TRAY_CHANGE_OPEN = 0,
    VIR_DOMAIN_EVENT_TRAY_CHANGE_CLOSE,
#ifdef VIR_ENUM_SENTINELS
    VIR_DOMAIN_EVENT_TRAY_CHANGE_LAST
#endif
} virDomainEventTrayChangeReason;
typedef void
(*virConnectDomainEventTrayChangeCallback)(virConnectPtr conn,
                                           virDomainPtr dom,
                                           const char *devAlias,
                                           int reason,
                                           void *opaque);
numad is a user-level daemon that monitors NUMA topology and process
resource consumption to facilitate good NUMA resource alignment of
applications/virtual machines, improving performance and minimizing
the cost of remote memory latencies. It provides a pre-placement
advisory interface, so significant processes can be pre-bound to
nodes with sufficient available resources.
More details: http://fedoraproject.org/wiki/Features/numad
"numad -w ncpus:memory_amount" is the advisory interface numad
provides currently.
This patch adds support for this by introducing a new XML attribute
for <vcpu>, e.g.
<vcpu placement="auto">4</vcpu>
<vcpu placement="static" cpuset="1-10,^6">4</vcpu>
The advisory nodeset returned from numad will be printed
in the domain's dumped XML, e.g.
<vcpu placement="auto" cpuset="1-10,^6">4</vcpu>
If placement is "auto", the number of vcpus and the current memory
amount specified in the domain XML will be used to build the numad
command line (numad uses MB for the memory amount):
numad -w $num_of_vcpus:$current_memory_amount / 1024
The advisory nodeset returned from numad will then be used to set the
domain process's CPU affinity (e.g. in qemuProcessInitCpuAffinity).
If the user specifies both a CPU affinity policy (e.g.
<vcpu cpuset="1-10,^7,^8">4</vcpu>) and placement == "auto",
the specified CPU affinity will be overridden.
Only QEMU/KVM drivers support it now.
See docs update in patch for more details.
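As a rough illustration of the "auto" placement flow (not the actual libvirt code, which uses its virCommand helpers; the helper below is hypothetical), the driver effectively runs numad with the vcpu count and memory in MB and reads back a nodeset string:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical helper: ask numad for an advisory nodeset.  Memory is
     * passed to numad in MB, hence the division of the KiB value by 1024. */
    static int numadAdvise(unsigned int vcpus, unsigned long long memKiB,
                           char *nodeset, size_t len)
    {
        char cmd[128];
        FILE *fp;

        snprintf(cmd, sizeof(cmd), "numad -w %u:%llu", vcpus, memKiB / 1024);
        if (!(fp = popen(cmd, "r")))
            return -1;
        if (!fgets(nodeset, (int)len, fp)) {   /* e.g. "0-3" or "1-10,^6" */
            pclose(fp);
            return -1;
        }
        nodeset[strcspn(nodeset, "\n")] = '\0';
        return pclose(fp) == 0 ? 0 : -1;
    }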
Even though the documentation says that setting (tls-)port to -1 is a
legacy compat style for enabling autoport, we roughly do this for VNC.
However, in the case of SPICE we should auto-enable autoport only if both
port and tlsPort equal -1, since the documentation says autoport controls
both.
Previously, startupPolicy='requisite' determined cold boot by checking
migrateFrom != NULL. That means that if a domain was started up with
migrateFrom set, we didn't require the disk source path and allowed it
to be dropped. However, on snapshot-revert the domain is not migrated,
yet according to the documentation requisite should drop the disk source
in that case as well.
Using 'unsigned long' for memory values is risky on 32-bit platforms,
as a PAE guest can have more than 4GiB memory. Our API is
(unfortunately) locked at 'unsigned long' and a scale of 1024, but
the rest of our system should consistently use 64-bit values,
especially since the previous patch centralized overflow checking.
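A minimal standalone illustration of the risk (not libvirt code): once a KiB-scaled memory value is converted to bytes, a 32-bit 'unsigned long' wraps for guests of 4GiB and above, while 'unsigned long long' does not.

    #include <stdio.h>

    int main(void)
    {
        unsigned long memKiB = 6UL * 1024 * 1024;   /* a 6 GiB guest, in KiB */
        /* On a 32-bit host this multiplication wraps around...             */
        unsigned long bytesNarrow = memKiB * 1024;
        /* ...while a 64-bit type holds the exact value.                    */
        unsigned long long bytesWide = (unsigned long long)memKiB * 1024;

        printf("unsigned long:      %lu\n", bytesNarrow);
        printf("unsigned long long: %llu\n", bytesWide);
        return 0;
    }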
* src/conf/domain_conf.h (_virDomainDef): Always use 64-bit values
for memory. Change hugepage_backed to a bool.
* src/conf/domain_conf.c (virDomainDefParseXML)
(virDomainDefCheckABIStability, virDomainDefFormatInternal): Fix
clients.
* src/vmx/vmx.c (virVMXFormatConfig): Likewise.
* src/xenxs/xen_sxpr.c (xenParseSxpr, xenFormatSxpr): Likewise.
* src/xenxs/xen_xm.c (xenXMConfigGetULongLong): New function.
(xenXMConfigGetULong, xenXMConfigSetInt): Avoid truncation.
(xenParseXM, xenFormatXM): Fix clients.
* src/phyp/phyp_driver.c (phypBuildLpar): Likewise.
* src/openvz/openvz_driver.c (openvzDomainSetMemoryInternal):
Likewise.
* src/vbox/vbox_tmpl.c (vboxDomainDefineXML): Likewise.
* src/qemu/qemu_command.c (qemuBuildCommandLine): Likewise.
* src/qemu/qemu_process.c (qemuProcessStart): Likewise.
* src/qemu/qemu_monitor.h (qemuMonitorGetBalloonInfo): Likewise.
* src/qemu/qemu_monitor_text.h (qemuMonitorTextGetBalloonInfo):
Likewise.
* src/qemu/qemu_monitor_text.c (qemuMonitorTextGetBalloonInfo):
Likewise.
* src/qemu/qemu_monitor_json.h (qemuMonitorJSONGetBalloonInfo):
Likewise.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONGetBalloonInfo):
Likewise.
* src/qemu/qemu_driver.c (qemudDomainGetInfo)
(qemuDomainGetXMLDesc): Likewise.
* src/uml/uml_conf.c (umlBuildCommandLine): Likewise.
No thanks to 64-bit windows, with 64-bit pid_t, we have to avoid
constructs like 'int pid'. Our API in libvirt-qemu cannot be
changed without breaking ABI; but then again, libvirt-qemu can
only be used on systems that support UNIX sockets, which rules
out Windows (even if qemu could be compiled there) - so for all
points on the call chain that interact with this API decision,
we require a different variable name to make it clear that we
audited the use for safety.
Adding a syntax-check rule only solves half the battle; anywhere
that uses printf on a pid_t still needs to be converted, but that
will be a separate patch.
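As a reminder of the kind of conversion that follow-up patch will need (an illustrative snippet, not part of this patch): pid_t has no dedicated printf conversion, so the portable idiom is to cast to a wide signed type.

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = getpid();

        /* printf("%d", pid) silently truncates if pid_t is wider than int */
        printf("pid %lld\n", (long long)pid);
        return 0;
    }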
* cfg.mk (sc_correct_id_types): New syntax check.
* src/libvirt-qemu.c (virDomainQemuAttach): Document why we didn't
use pid_t for pid, and validate for overflow.
* include/libvirt/libvirt-qemu.h (virDomainQemuAttach): Tweak name
for syntax check.
* src/vmware/vmware_conf.c (vmwareExtractPid): Likewise.
* src/driver.h (virDrvDomainQemuAttach): Likewise.
* tools/virsh.c (cmdQemuAttach): Likewise.
* src/remote/qemu_protocol.x (qemu_domain_attach_args): Likewise.
* src/qemu_protocol-structs (qemu_domain_attach_args): Likewise.
* src/util/cgroup.c (virCgroupPidCode, virCgroupKillInternal):
Likewise.
* src/qemu/qemu_command.c (qemuParseProcFileStrings): Likewise.
(qemuParseCommandLinePid): Use pid_t for pid.
* daemon/libvirtd.c (daemonForkIntoBackground): Likewise.
* src/conf/domain_conf.h (_virDomainObj): Likewise.
* src/probes.d (rpc_socket_new): Likewise.
* src/qemu/qemu_command.h (qemuParseCommandLinePid): Likewise.
* src/qemu/qemu_driver.c (qemudGetProcessInfo, qemuDomainAttach):
Likewise.
* src/qemu/qemu_process.c (qemuProcessAttach): Likewise.
* src/qemu/qemu_process.h (qemuProcessAttach): Likewise.
* src/uml/uml_driver.c (umlGetProcessInfo): Likewise.
* src/util/virnetdev.h (virNetDevSetNamespace): Likewise.
* src/util/virnetdev.c (virNetDevSetNamespace): Likewise.
* tests/testutils.c (virtTestCaptureProgramOutput): Likewise.
* src/conf/storage_conf.h (_virStoragePerms): Use mode_t, uid_t,
and gid_t rather than int.
* src/security/security_dac.c (virSecurityDACSetOwnership): Likewise.
* src/conf/storage_conf.c (virStorageDefParsePerms): Avoid
compiler warning.
* src/qemu/qemu_process.c (qemuFindAgentConfig): avoid crashing libvirtd
due to dereferencing a NULL pointer.
* How to reproduce?
1. Use virsh edit to add the following XML to the guest configuration:
<channel type='pty'>
<target type='virtio'/>
</channel>
2. virsh start <domain>
or
% virt-install -n foo -r 1024 --disk path=/var/lib/libvirt/images/foo.img,size=1 \
--channel pty,target_type=virtio -l <installation tree>
Signed-off-by: Alex Jia <ajia@redhat.com>
This patch allows libvirt to add interfaces to already
existing Open vSwitch bridges. The following syntax can be
used in the domain XML file:
<interface type='bridge'>
<mac address='52:54:00:d0:3f:f2'/>
<source bridge='ovsbr'/>
<virtualport type='openvswitch'>
<parameters interfaceid='921a80cd-e6de-5a2e-db9c-ab27f15a6e1d'/>
</virtualport>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x03' function='0x0'/>
</interface>
or, if libvirt should auto-generate the interfaceid, use the
following syntax:
<interface type='bridge'>
<mac address='52:54:00:d0:3f:f2'/>
<source bridge='ovsbr'/>
<virtualport type='openvswitch'>
</virtualport>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x03' function='0x0'/>
</interface>
It is also possible to pass an optional profileid. To do that,
use the following syntax:
<interface type='bridge'>
<source bridge='ovsbr'/>
<mac address='00:55:1a:65:a2:8d'/>
<virtualport type='openvswitch'>
<parameters interfaceid='921a80cd-e6de-5a2e-db9c-ab27f15a6e1d'
profileid='test-profile'/>
</virtualport>
</interface>
To create an Open vSwitch bridge, install Open vSwitch and
run the following command:
ovs-vsctl add-br ovsbr
The current default method of terminating the qemu process is to send
a SIGTERM, wait for up to 1.6 seconds for it to cleanly shutdown, then
send a SIGKILL and wait for up to 1.4 seconds more for the process to
terminate. This is problematic because occasionally 1.6 seconds is not
long enough for the qemu process to flush its disk buffers, so the
guest's disk ends up in an inconsistent state.
Since this only occasionally happens when the timeout prior to SIGKILL
is 1.6 seconds, this patch increases that timeout to 10 seconds. At
the very least, this should reduce the occurrence from "occasionally"
to "extremely rarely". (Once SIGKILL is sent, it waits another 5
seconds for the process to die before returning).
Note that in the cases where it takes less than this for qemu to
shutdown cleanly, libvirt will *not* wait for any longer than it would
without this patch - qemuProcessKill polls the process and returns as
soon as it is gone.
This patch is based on an earlier patch by Eric Blake which was never
committed:
https://www.redhat.com/archives/libvir-list/2011-November/msg00243.html
Aside from rebasing, this patch only drops the driver lock once (prior
to the first time the function sleeps), then leaves it dropped until
it returns (Eric's patch would drop and re-acquire the lock around
each call to sleep).
At the time Eric sent his patch, the response (from Dan Berrange) was
that, while it wasn't a good thing to be holding the driver lock while
sleeping, we really need to rethink locking wrt the driver object,
switching to a finer-grained approach that locks individual items
within the driver object separately to allow for greater concurrency.
This is a good plan, and at the time it made sense to not apply the
patch because there was no known bug related to the driver lock being
held in this function.
However, we now know that the length of the wait in qemuProcessKill is
sometimes too short to allow the qemu process to fully flush its disk
cache before SIGKILL is sent, so we need to lengthen the timeout (in
order to improve the situation with management applications until they
can be updated to use the new VIR_DOMAIN_DESTROY_GRACEFUL flag added
in commit 72f8a7f197). But, if we
lengthen the timeout, we also lengthen the amount of time that all
other threads in libvirtd are essentially blocked from doing anything
(since just about everything needs to acquire the driver lock, if only
for long enough to get a pointer to a domain).
The solution is to modify qemuProcessKill to drop the driver lock
while sleeping, as proposed in Eric's patch. Then we can increase the
timeout with a clear conscience, and thus at least lower the chances
that someone running with existing management software will suffer the
consequences of qemu's disk cache not being flushed.
In the meantime, we still should work on Dan's proposal to make
locking within the driver object more fine grained.
(NB: although I couldn't find any instance where qemuProcessKill() was
called with no jobs active for the domain (or some other guarantee
that the current thread had at least one refcount on the domain
object), this patch still follows Eric's method of temporarily adding
a ref prior to unlocking the domain object, because I couldn't
convince myself 100% that this was the case.)
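For clarity, here is a stripped-down sketch of the locking pattern described above, using plain pthread primitives rather than libvirt's driver/domain lock helpers (names and structure are illustrative only):

    #include <pthread.h>
    #include <stdbool.h>
    #include <unistd.h>

    struct ctx {
        pthread_mutex_t driverLock;   /* global driver lock            */
        pthread_mutex_t vmLock;       /* per-domain object lock        */
        int vmRefs;                   /* keeps the domain object alive */
    };

    static bool processGone(void) { return false; }   /* placeholder */

    /* Called with both locks held; returns with only vmLock held. */
    static void killWithTimeout(struct ctx *c)
    {
        c->vmRefs++;                              /* extra ref while unlocked */
        pthread_mutex_unlock(&c->driverLock);     /* drop driver lock once    */

        for (int i = 0; i < 100 && !processGone(); i++) {
            pthread_mutex_unlock(&c->vmLock);     /* never sleep holding locks */
            usleep(100 * 1000);
            pthread_mutex_lock(&c->vmLock);
        }

        c->vmRefs--;                  /* driver lock stays dropped on return */
    }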
In the future (in my next patch, in fact) we may want to make decisions
depending on whether qemu has a given monitor command or not.
Therefore, we want to set a qemuCaps flag instead of querying
the monitor each time we are about to make that decision.
When libvirt's virDomainDestroy API is shutting down the qemu process,
it first sends SIGTERM, then waits for 1.6 seconds and, if it sees the
process still there, sends a SIGKILL.
There have been reports that this behavior can lead to data loss
because the guest running in qemu doesn't have time to flush its disk
cache buffers before it's unceremoniously whacked.
This patch maintains that default behavior, but provides a new flag
VIR_DOMAIN_DESTROY_GRACEFUL to alter the behavior. If this flag is set
in the call to virDomainDestroyFlags, SIGKILL will never be sent to
the qemu process; instead, if the timeout is reached and the qemu
process still exists, virDomainDestroy will return an error.
Once this patch is in, the recommended method for applications to call
virDomainDestroyFlags will be with VIR_DOMAIN_DESTROY_GRACEFUL
included. If that fails, then the application can decide if and when
to call virDomainDestroyFlags again without
VIR_DOMAIN_DESTROY_GRACEFUL (to force the issue with SIGKILL).
(Note that this does not address the issue of existing applications
that have not yet been modified to use VIR_DOMAIN_DESTROY_GRACEFUL.
That is a separate patch.)
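A short usage sketch of the recommended calling pattern (hypothetical wrapper, real API calls):

    #include <stdbool.h>
    #include <libvirt/libvirt.h>

    static int shutdownGuest(virDomainPtr dom, bool escalate)
    {
        /* Ask for a graceful destroy first: SIGTERM only, never SIGKILL. */
        if (virDomainDestroyFlags(dom, VIR_DOMAIN_DESTROY_GRACEFUL) == 0)
            return 0;

        if (!escalate)
            return -1;          /* caller decides if/when to force the issue */

        /* Fall back to the default behaviour, which may send SIGKILL. */
        return virDomainDestroyFlags(dom, 0);
    }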
This patch revises the qemuProcessStart() function so that qemu
processes retain CAP_SYS_RAWIO if needed, and in that case adds a
taint flag to the domain.
Signed-off-by: Taku Izumi <izumi.taku@jp.fujitsu.com>
Signed-off-by: Shota Hirae <m11g1401@hibikino.ne.jp>
There is now a standard QEMU guest agent that can be installed
and given a virtio serial channel
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/f16x86_64.agent'/>
<target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>
The protocol that runs over the guest agent is JSON based and
very similar to the JSON monitor. We can't use exactly the same
code because there are some odd differences in the way messages
and errors are structured. The qemu_agent.c file is based on
a combination and simplification of qemu_monitor.c and
qemu_monitor_json.c
* src/qemu/qemu_agent.c, src/qemu/qemu_agent.h: Support for
talking to the agent for shutdown
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add thread
helpers for talking to the agent
* src/qemu/qemu_process.c: Connect to agent whenever starting
a guest
* src/qemu/qemu_monitor_json.c: Make variable static
When sVirt is integrated with the LXC driver, it will be necessary
to invoke the security driver APIs using only a virDomainDefPtr,
since the lxc_container.c code has no virDomainObjPtr available.
Aside from two functions which want obj->pid, every bit of the
security driver code only touches obj->def. So we don't need to
pass a virDomainObjPtr into the security drivers, a virDomainDefPtr
is sufficient. Two functions also gain a 'pid_t pid' argument.
* src/qemu/qemu_driver.c, src/qemu/qemu_hotplug.c,
src/qemu/qemu_migration.c, src/qemu/qemu_process.c,
src/security/security_apparmor.c,
src/security/security_dac.c,
src/security/security_driver.h,
src/security/security_manager.c,
src/security/security_manager.h,
src/security/security_nop.c,
src/security/security_selinux.c,
src/security/security_stack.c: Change all security APIs to use a
virDomainDefPtr instead of virDomainObjPtr
This *kind of* addresses:
https://bugzilla.redhat.com/show_bug.cgi?id=772395
(it doesn't eliminate the failure to start, but causes libvirt to give
a better idea about the cause of the failure).
If a guest uses a kvm emulator (e.g. /usr/bin/qemu-kvm) and the guest
is started when kvm isn't available (either because virtualization is
unavailable / has been disabled in the BIOS, or the kvm modules
haven't been loaded for some reason), a semi-cryptic error message is
logged:
libvirtError: internal error Child process (LC_ALL=C
PATH=/sbin:/usr/sbin:/bin:/usr/bin /usr/bin/qemu-kvm -device ? -device
pci-assign,? -device virtio-blk-pci,? -device virtio-net-pci,?) status
unexpected: exit status 1
This patch notices at process start that a guest needs kvm, and checks
for the presence of /dev/kvm (a reasonable indicator that kvm is
available) before trying to execute the qemu binary. If kvm isn't
available, a more useful (too verbose??) error is logged.
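A rough sketch of the check (simplified; the real patch reports the error through libvirt's error APIs and with its own wording):

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static bool kvmUsable(bool guestNeedsKvm)
    {
        if (guestNeedsKvm && access("/dev/kvm", F_OK) != 0) {
            fprintf(stderr,
                    "Domain requires KVM, but it is not available. Check that "
                    "virtualization is enabled in the host BIOS and that the "
                    "kvm modules are loaded (kvm, kvm_intel or kvm_amd)\n");
            return false;
        }
        return true;
    }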
This patch adds a max_files option to qemu.conf which can be used to
override the system default limit on the number of open files that are
allowed for the qemu user.
When destroying a domain qemuDomainDestroy kills its qemu process and
starts a new job, which means it unlocks the domain object and locks it
again after some time. Although the object is usually unlocked for a
pretty short time, chances are another thread processing an EOF event on
qemu monitor is able to lock the object first and does all the cleanup
by itself. This leads to wrong shutoff reason and lifecycle event detail
and virDomainDestroy API incorrectly reporting failure to destroy an
inactive domain.
Reported by Charlie Smurthwaite.
A preparatory patch for DHCP snooping where we want to be able to
identify a VM's interface using the tuple of
<VM UUID, interface MAC address>. We assume that MAC addresses could
possibly be re-used between different networks (VLANs) and thus do not
want to rely solely on the MAC address to identify an interface.
At the current 'final destination' in virNWFilterInstantiate I am leaving
the vmuuid parameter as ATTRIBUTE_UNUSED until the DHCP snooping patches arrive.
(we may not post the DHCP snooping patches for 0.9.9, though)
Mostly this is a pretty trivial patch. On the lowest layers, in lxc_driver
and uml_conf, I am passing the virDomainDefPtr around until I am passing
only the VM's uuid into the NWFilter calls.
This patch cleans up return codes in the nwfilter subsystem.
Some functions in nwfilter_conf.c (validators and formatters) are
keeping their bool return for now and I am converting their return
code to true/false.
All other functions now have failure return codes of -1 and success
of 0.
[I searched for all occurrences of ' 1;' and checked all 'if '
statements and adapted where needed. After that I did a grep for
'NWFilter' in the source tree.]
With security_driver set to "none" in /etc/libvirt/qemu.conf,
libvirtd would crash when attempting to attach to an existing
qemu process. Only copy the security model if it actually exists.
During virDomainDestroy, QEMU may emit a SHUTDOWN event as a response to
SIGTERM, and since the domain object is still locked, the event is
processed after the domain is destroyed. We need to ignore this event in
that case to avoid changing the domain state from shutoff to shutdown.
When a QEMU guest finishes its shutdown sequence, qemu stops the virtual
CPUs and, when started with -no-shutdown, waits for us to kill it using
SIGTERM. Since QEMU is flushing its internal buffers, some time may pass
before QEMU actually dies. We mistakenly used the "paused" state (and
events) for this, which is quite confusing since users may see a domain
going to pause while they expect it to shut down. Since we already have
"shutdown" state with "the domain is being shut down" semantics, we
should use it for this state.
However, the state didn't have a corresponding event, so I created one
and called its detail VIR_DOMAIN_EVENT_SHUTDOWN_FINISHED (guest OS
finished its shutdown sequence) with the intent to add
VIR_DOMAIN_EVENT_SHUTDOWN_STARTED in the future if we have a
sufficiently capable guest agent that can notify us when guest OS starts
to shutdown.
The virTimestamp and virTimeMs functions in src/util/util.h
duplicate functionality from virtime.h, in a non-async signal
safe manner. Remove them, and convert all code over to the new
APIs.
* src/util/util.c, src/util/util.h: Delete virTimeMs and virTimestamp
* src/lxc/lxc_driver.c, src/qemu/qemu_domain.c,
src/qemu/qemu_driver.c, src/qemu/qemu_migration.c,
src/qemu/qemu_process.c, src/util/event_poll.c: Convert to use
virtime APIs
Now that we support multiple consoles per domain,
vm->def->console[0] can still remain an alias
for vm->def->serial[0]; however, we need to copy
its source definition as well, otherwise we regress
on virDomainOpenConsole.
Rename virNetDevMacVLanCreate to virNetDevMacVLanCreateWithVPortProfile
and virNetDevMacVLanDelete to virNetDevMacVLanDeleteWithVPortProfile
To make way for renaming the other macvlan creation APIs in
interface.c
* util/virnetdevmacvlan.c, util/virnetdevmacvlan.h,
qemu/qemu_command.c, qemu/qemu_hotplug.c, qemu/qemu_process.c:
Rename APIs
Rename the macvtap.c file to virnetdevmacvlan.c to reflect its
functionality. Move the port profile association code out into
virnetdevvportprofile.c. Make the APIs available unconditionally
to callers
* src/util/macvtap.h: rename to src/util/virnetdevmacvlan.h,
* src/util/macvtap.c: rename to src/util/virnetdevmacvlan.c
* src/util/virnetdevvportprofile.c, src/util/virnetdevvportprofile.h:
Pull in vport association code
* src/Makefile.am, src/conf/domain_conf.h, src/qemu/qemu_conf.c,
src/qemu/qemu_conf.h, src/qemu/qemu_driver.c: Update include
paths & remove conditional compilation
In preparation for code re-organization, rename the Macvtap
management APIs to have the following patterns
virNetDevMacVLanXXXXX - macvlan/macvtap interface management
virNetDevVPortProfileXXXX - virtual port profile management
* src/util/macvtap.c, src/util/macvtap.h: Rename APIs
* src/conf/domain_conf.c, src/network/bridge_driver.c,
src/qemu/qemu_command.c, src/qemu/qemu_command.h,
src/qemu/qemu_driver.c, src/qemu/qemu_hotplug.c,
src/qemu/qemu_migration.c, src/qemu/qemu_process.c,
src/qemu/qemu_process.h: Update for renamed APIs
While Xen only has a single paravirt console, UML and
QEMU both support multiple paravirt consoles. The LXC
driver can also be trivially made to support multiple
consoles. This patch extends the XML to allow multiple
<console> elements in the XML. It also makes the UML
and QEMU drivers support this config.
* src/conf/domain_conf.c, src/conf/domain_conf.h: Allow
multiple <console> devices
* src/lxc/lxc_driver.c, src/xen/xen_driver.c,
src/xenxs/xen_sxpr.c, src/xenxs/xen_xm.c: Update for
internal API changes
* src/security/security_selinux.c, src/security/virt-aa-helper.c:
Only label consoles that aren't a copy of the serial device
* src/qemu/qemu_command.c, src/qemu/qemu_driver.c,
src/qemu/qemu_process.c, src/uml/uml_conf.c,
src/uml/uml_driver.c: Support multiple console devices
* tests/qemuxml2xmltest.c, tests/qemuxml2argvtest.c: Extra
tests for multiple virtio consoles. Set QEMU_CAPS_CHARDEV
for all console /channel tests
* tests/qemuxml2argvdata/qemuxml2argv-channel-virtio-auto.args,
tests/qemuxml2argvdata/qemuxml2argv-channel-virtio.args
tests/qemuxml2argvdata/qemuxml2argv-console-virtio.args: Update
for correct chardev syntax
* tests/qemuxml2argvdata/qemuxml2argv-console-virtio-many.args,
tests/qemuxml2argvdata/qemuxml2argv-console-virtio-many.xml: New
test file
Rather than making all clients of monitor commands that are JSON-only
check whether yajl support was compiled in, it is simpler to just
avoid setting the capability bit up front if we can't use the capability.
* src/qemu/qemu_capabilities.c (qemuCapsComputeCmdFlags): Only set
capability bit if we also have yajl library to use it.
* src/qemu/qemu_driver.c (qemuDomainReboot): Drop #ifdefs.
* src/qemu/qemu_process.c (qemuProcessStart): Likewise.
* tests/qemuhelptest.c (testHelpStrParsing): Pass test even
without yajl.
* tests/qemuxml2argvtest.c (mymain): Simplify use of json flag.
* tests/qemuxml2argvdata/qemuxml2argv-disk-drive-error-*.args:
Update expected results to match.
If a disk source gets dropped because it is not accessible,
a management application might want to be informed about this. Therefore
we need to emit an event. The event presented in this patch
is, however, a bit of a superset of what is written above. The reason is
simple: the intention is for it to be easily expandable, e.g. to 'user
ejected disk in guest' events. Therefore, the callback gets the source
string, the disk alias (which should be unique within a domain) and a
reason (an integer).
This patch implements the on_missing feature in the qemu driver.
During qemu startup the accessibility of CDROMs
and floppy disks is checked. The source may get dropped
if it is unavailable and on_missing is set accordingly.
No event is emitted though; look for a follow-up patch.
This patch is rather cosmetic as it only moves device alias
assignment from command line construction to just before it.
However, it is needed in connection with the previous and next patches.
If the daemon is restarted and we reconnect to the monitor, the cdrom
media may have been ejected meanwhile. In that case we don't want to show
it in the domain XML, or require it on the migration destination.
The 'info block' monitor command is used to check the disk status.
If a domain started with -no-shutdown shuts down while libvirtd is not
running, it will be seen as paused when libvirtd reconnects to it. Use
the paused reason to detect if a domain was stopped because of shutdown
and finish the process just as if a SHUTDOWN event is delivered from
qemu.
Commit 282fe1f0 documented that transient domains will auto-delete
any snapshot metadata when the last reference to the domain is
removed, and that management apps are in charge of grabbing any
snapshot metadata prior to that point. However, this was not
actually implemented for qemu until now.
* src/qemu/qemu_driver.c (qemudDomainCreate)
(qemuDomainDestroyFlags, qemuDomainSaveInternal)
(qemudDomainCoreDump, qemuDomainRestoreFlags, qemudDomainDefine)
(qemuDomainUndefineFlags, qemuDomainMigrateConfirm3)
(qemuDomainRevertToSnapshot): Clean up snapshot metadata.
* src/qemu/qemu_migration.c (qemuMigrationPrepareAny)
(qemuMigrationPerformJob, qemuMigrationPerformPhase)
(qemuMigrationFinish): Likewise.
* src/qemu/qemu_process.c (qemuProcessHandleMonitorEOF)
(qemuProcessReconnect, qemuProcessReconnectHelper)
(qemuProcessAutoDestroyDom): Likewise.
This patch is mostly code motion - moving some functions out
of qemu_driver and into qemu_domain so they can be reused by
multiple qemu_* files (since qemu_driver.h must not grow).
It also adds a new helper function, qemuDomainRemoveInactive,
which will be used in the next patch.
* src/qemu/qemu_domain.h (qemuFindQemuImgBinary)
(qemuDomainSnapshotWriteMetadata, qemuDomainSnapshotForEachQcow2)
(qemuDomainSnapshotDiscard, qemuDomainSnapshotDiscardAll)
(qemuDomainRemoveInactive): New prototypes.
(struct qemu_snap_remove): New struct.
* src/qemu/qemu_domain.c (qemuDomainRemoveInactive)
(qemuDomainSnapshotDiscardAllMetadata): New functions.
(qemuFindQemuImgBinary, qemuDomainSnapshotWriteMetadata)
(qemuDomainSnapshotForEachQcow2, qemuDomainSnapshotDiscard)
(qemuDomainSnapshotDiscardAll): Move here...
* src/qemu/qemu_driver.c (qemuFindQemuImgBinary)
(qemuDomainSnapshotWriteMetadata, qemuDomainSnapshotForEachQcow2)
(qemuDomainSnapshotDiscard, qemuDomainSnapshotDiscardAll): ...from
here.
(qemuDomainUndefineFlags): Update caller.
* src/conf/domain_conf.c (virDomainRemoveInactive): Doc fixes.
* src/qemu/qemu_process.c: If the true branch of
'if (qemuDomainObjEndJob(driver, obj) == 0)' is taken, then 'obj' is NULL
and virDomainObjIsActive(obj) and virDomainObjUnref(obj) will dereference
a NULL pointer.
Signed-off-by: Alex Jia <ajia@redhat.com>
Once virDomainReboot is called for a domain, a guest-OS-initiated
shutdown would always result in a reboot instead of a shutdown; only
virDomainShutdown would actually shut such a domain down. That's because
we forgot to reset the fakeReboot flag once we asked the domain to reboot.
Qemu sends a STOP event as part of the shutdown process. Detect such a
STOP event and consider shutdown to be the reason for emitting it. That's
the best we can do until qemu provides us the reason directly in the STOP
event. This allows us to report a shutdown reason for the paused state so that
apps can detect domains that failed to finish the shutdown process
(e.g., because qemu is buggy and doesn't exit on SIGTERM or it is
blocked in flushing disk buffers).
Ever since we introduced fake reboot, we call qemuProcessKill as a
reaction to SHUTDOWN event. Unfortunately, qemu doesn't guarantee it
flushed all internal buffers before sending SHUTDOWN, in which case
killing the process forcibly may result in (virtual) disk corruption.
By sending just SIGTERM without SIGKILL we give qemu time to flush
all buffers and exit. Once qemu exits, we will see an EOF on the monitor
connection and tear down the domain. In case qemu ignores SIGTERM or
just hangs there, the process stays running but that's not any different
from a possible hang anytime during the shutdown process so I think it's
just fine.
Also qemu (since 0.14 until it's fixed) has a bug in SIGTERM processing
which causes it not to exit but instead send new SHUTDOWN event and keep
waiting. I think the best we can do is to ignore duplicate SHUTDOWN
events to avoid a SHUTDOWN-SIGTERM loop and leave the domain in paused
state.
When a domain is rebooted using libvirt API, we use fake reboot
consisting of shutting down and resetting the domain. Thus we see a
SHUTDOWN event and set the gotShutdown flag. But we never reset it, and
if the domain crashes after it was rebooted this way, we consider it a
normal shutdown rather than a crash.
Commit 4454a9efc7 changed shutoff reason
from VIR_DOMAIN_SHUTOFF_CRASHED to VIR_DOMAIN_SHUTOFF_FAILED in case we
see an unexpected EOF on monitor connection. But FAILED reason is
dedicated for domains that fail to start. CRASHED reason is the right
one to use in this situation.
This patch fixes the bug shown in bugzilla 738778. It's not an nwfilter problem but a connection sharing / closure issue.
https://bugzilla.redhat.com/show_bug.cgi?id=738778
Depending on the speed and number of CPUs of the machine you are using, you may not see this bug every time.
This patch enables modifying network device configuration using the
virUpdateDeviceFlags API method. Matching of devices is accomplished
using MAC addresses.
While updating live configuration of a running domain, the user is
allowed only to change link state of the interface. Additional
modifications may be added later. For now the code checks for
unsupported changes and thereafter changes the link state, if
applicable.
When updating persistent configuration of guest's network interface the
whole configuration (except for the MAC address) may be modified and
is stored for the next startup.
* src/qemu/qemu_driver.c - Add dispatching of virUpdateDevice for
network devices update (live/config)
* src/qemu/qemu_hotplug.c - add setting of the initial link state on live
device addition; add a function to change network device configuration
(for now it supports only changing of the link state)
* src/qemu/qemu_hotplug.h - headers for the above functions
* src/qemu/qemu_process.c - set link states before virtual machine
start; qemu does not support setting this on the command line.
If the libvirt daemon gets restarted and there is (at least) one
unresponsive qemu, the startup procedure hangs. This patch creates
one thread per VM in which we try to reconnect to the monitor. Therefore,
blocking in one thread will not affect other APIs.
This patch annotates APIs with low or high priority.
The low-priority set MUST contain all APIs which might eventually access
the monitor (and thus block indefinitely). Other APIs may be marked as
high priority; however, some must be (e.g. domainDestroy).
For high priority calls (HPC), some high priority workers
(HPW) are created in the pool. A HPW can execute only HPCs, although a
normal worker can process any call regardless of priority. Therefore,
only those APIs which are guaranteed to finish in a reasonably small
amount of time can be marked as HPC.
The size of the HPW pool is static, because HPCs are expected to finish
quickly, so jobs assigned to this pool will be served quickly.
It can be configured in libvirtd.conf via the prio_workers variable.
The default is 5.
To mark an API with low or high priority, append priority:{low|high} to
its comment in src/remote/remote_protocol.x. This is similar to
autogen|skipgen. If not marked, the generator assumes low as the default.
I got confused when 'virsh domblkinfo dom disk' required the
path to a disk (which can be ambiguous, since a single file
can back multiple disks), rather than the unambiguous target
device name that I was using in disk snapshots. So, in true
developer fashion, I went for the best of both worlds - all
interfaces that operate on a disk (aka block) now accept
either the target name or the unambiguous path to the backing
file used by the disk.
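The lookup logic boils down to something like the following sketch (simplified stand-in types; the real implementation is virDomainDiskIndexByName in domain_conf.c):

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    struct disk { const char *dst; const char *src; };  /* target, source path */

    static int diskIndexByName(const struct disk *disks, size_t ndisks,
                               const char *name, bool allowAmbiguous)
    {
        int candidate = -1;

        for (size_t i = 0; i < ndisks; i++) {
            if (strcmp(disks[i].dst, name) == 0)
                return (int)i;                   /* target names are unique   */
            if (disks[i].src && strcmp(disks[i].src, name) == 0) {
                if (candidate >= 0 && !allowAmbiguous)
                    return -1;                   /* same file backs two disks */
                candidate = (int)i;
            }
        }
        return candidate;
    }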
* src/conf/domain_conf.h (virDomainDiskIndexByName): Add
parameter.
(virDomainDiskPathByName): New prototype.
* src/libvirt_private.syms (domain_conf.h): Export it.
* src/conf/domain_conf.c (virDomainDiskIndexByName): Also allow
searching by path, and decide whether ambiguity is okay.
(virDomainDiskPathByName): New function.
(virDomainDiskRemoveByName, virDomainSnapshotAlignDisks): Update
callers.
* src/qemu/qemu_driver.c (qemudDomainBlockPeek)
(qemuDomainAttachDeviceConfig, qemuDomainUpdateDeviceConfig)
(qemuDomainGetBlockInfo, qemuDiskPathToAlias): Likewise.
* src/qemu/qemu_process.c (qemuProcessFindDomainDiskByPath):
Likewise.
* src/libxl/libxl_driver.c (libxlDomainAttachDeviceDiskLive)
(libxlDomainDetachDeviceDiskLive, libxlDomainAttachDeviceConfig)
(libxlDomainUpdateDeviceConfig): Likewise.
* src/uml/uml_driver.c (umlDomainBlockPeek): Likewise.
* src/xen/xend_internal.c (xenDaemonDomainBlockPeek): Likewise.
* docs/formatsnapshot.html.in: Update documentation.
* tools/virsh.pod (domblkstat, domblkinfo): Likewise.
* docs/schemas/domaincommon.rng (diskTarget): Tighten pattern on
disk targets.
* docs/schemas/domainsnapshot.rng (disksnapshot): Update to match.
* tests/domainsnapshotxml2xmlin/disk_snapshot.xml: Update test.
It is not possible to change the label of a TCP socket once it
has been opened. When creating a TCP socket care must be taken
to ensure the socket creation label is set & then cleared.
Remove the bogus call to virSecurityManagerSetProcessFDLabel
from the lock driver guest setup code and instead make use of
virSecurityManagerSetSocketLabel
The APIs are designed to label a socket in a way that the libvirt daemon
itself is able to access it (i.e., in SELinux the label is virtd_t based
as opposed to svirt_* we use for labeling resources that need to be
accessed by a vm). The new name reflects this.
Changing the current vm, and writing that change to the file
system, all before a new qemu starts, is risky; it's hard to
roll back if starting the new qemu fails for some reason.
Instead of abusing vm->current_snapshot and making the command
line generator decide whether the current snapshot warrants
using -loadvm, it is better to just directly pass a snapshot all
the way through the call chain if it is to be loaded.
This frees up the last use of snapshot->def->active for qemu's
use, so the next patch can repurpose that field for tracking
which snapshot is current.
* src/qemu/qemu_command.c (qemuBuildCommandLine): Don't use active
field of snapshot.
* src/qemu/qemu_process.c (qemuProcessStart): Add a parameter.
* src/qemu/qemu_process.h (qemuProcessStart): Update prototype.
* src/qemu/qemu_migration.c (qemuMigrationPrepareAny): Update
callers.
* src/qemu/qemu_driver.c (qemudDomainCreate)
(qemuDomainSaveImageStartVM, qemuDomainObjStart)
(qemuDomainRevertToSnapshot): Likewise.
(qemuDomainSnapshotSetCurrentActive)
(qemuDomainSnapshotSetCurrentInactive): Delete unused functions.
The functions for manipulating pidfiles are in util/util.{c,h}.
We will shortly be adding some further pidfile related functions.
To avoid further growing util.c, this moves the pidfile related
functions into a dedicated virpidfile.{c,h}. The functions are
also all renamed to have 'virPidFile' as their name prefix
* util/util.h, util/util.c: Remove all pidfile code
* util/virpidfile.c, util/virpidfile.h: Add new APIs for pidfile
handling.
* lxc/lxc_controller.c, lxc/lxc_driver.c, network/bridge_driver.c,
qemu/qemu_process.c: Add virpidfile.h include and adapt for API
renames
Detected by ccc-analyzer, reported by Alex Jia.
qemuProcessStart always calls qemuProcessWaitForMonitor with a
non-negative position, but qemuProcessAttach always calls with -1.
In the latter case, there is no log file we can scrape, so we
also should not be trying to scrape the logs if the qemu process
died at the very end.
* src/qemu/qemu_process.c (qemuProcessWaitForMonitor): Don't try
to read from log in qemuProcessAttach case.
The value stored to 'ret' is never read; in fact, the 'cleanup' section
will directly return -1 when the function fails, so remove this dead
assignment.
* src/qemu/qemu_process.c: kill dead assignment.
Signed-off-by: Alex Jia <ajia@redhat.com>
Currently, we attempt to run a sync job and an async job at the same
time. That means the monitor commands for the two jobs can be run in any
order.
In the function qemuDomainObjEnterMonitorInternal():
if (priv->job.active == QEMU_JOB_NONE && priv->job.asyncJob) {
if (qemuDomainObjBeginNestedJob(driver, obj) < 0)
We check whether the caller is an async job using priv->job.active and
priv->job.asyncJob. But when an async job is running, and a sync job is
also running at the time of the check, then priv->job.active is not
QEMU_JOB_NONE. So we cannot check whether the caller is an async job
in the function qemuDomainObjEnterMonitorInternal(), and must instead
put the burden on the caller to tell us when an async command wants
to do a nested job.
Once the burden is on the caller, then only async monitor enters need
to worry about whether the VM is still running; for sync monitor enter,
the internal return is always 0, so lots of ignore_value can be dropped.
* src/qemu/THREADS.txt: Reflect new rules.
* src/qemu/qemu_domain.h (qemuDomainObjEnterMonitorAsync): New
prototype.
* src/qemu/qemu_process.h (qemuProcessStartCPUs)
(qemuProcessStopCPUs): Add parameter.
* src/qemu/qemu_migration.h (qemuMigrationToFile): Likewise.
(qemuMigrationWaitForCompletion): Make static.
* src/qemu/qemu_domain.c (qemuDomainObjEnterMonitorInternal): Add
parameter.
(qemuDomainObjEnterMonitorAsync): New function.
(qemuDomainObjEnterMonitor, qemuDomainObjEnterMonitorWithDriver):
Update callers.
* src/qemu/qemu_driver.c (qemuDomainSaveInternal)
(qemudDomainCoreDump, doCoreDump, processWatchdogEvent)
(qemudDomainSuspend, qemudDomainResume, qemuDomainSaveImageStartVM)
(qemuDomainSnapshotCreateActive, qemuDomainRevertToSnapshot):
Likewise.
* src/qemu/qemu_process.c (qemuProcessStopCPUs)
(qemuProcessFakeReboot, qemuProcessRecoverMigration)
(qemuProcessRecoverJob, qemuProcessStart): Likewise.
* src/qemu/qemu_migration.c (qemuMigrationToFile)
(qemuMigrationWaitForCompletion, qemuMigrationUpdateJobStatus)
(qemuMigrationJobStart, qemuDomainMigrateGraphicsRelocate)
(doNativeMigrate, doTunnelMigrate, qemuMigrationPerformJob)
(qemuMigrationPerformPhase, qemuMigrationFinish)
(qemuMigrationConfirm): Likewise.
* src/qemu/qemu_hotplug.c: Drop unneeded ignore_value.
Although most functions in libvirt return 0 on success and < 0 on
failure, there are a few functions lingering around that return errno
(a positive value) on failure, and sometimes code calling those
functions incorrectly assumes the <0 standard. I noticed one of these
the other day when auditing networkStartDhcpDaemon after Guido Gunther
found a place where success was improperly returned on failure (that
patch has been acked and is pending a push). The problem was that it
expected the return value from virFileReadPid to be < 0 on failure,
but it was actually positive (it also neglected to set the return
code in this case, similar to the bug found by Guido).
This all led to the fact that *all* of the virFile*Pid functions in
util.c are returning errno on failure. This patch remedies that
problem by changing them all to return -errno on failure, and makes
any necessary changes to callers of the functions. (In the meantime, I
also properly set the return code on failure of virFileReadPid in
networkStartDhcpDaemon).
When an operation started by virDomainBlockPull completes (either with
success or with failure), raise an event to indicate the final status.
This allows API users to avoid polling virDomainGetBlockJobInfo if
they would prefer to use an event mechanism.
* daemon/remote.c: Dispatch events to client
* include/libvirt/libvirt.h.in: Define event ID and callback signature
* src/conf/domain_event.c, src/conf/domain_event.h,
src/libvirt_private.syms: Extend API to handle the new event
* src/qemu/qemu_driver.c: Connect to the QEMU monitor event
for block_stream completion and emit a libvirt block pull event
* src/remote/remote_driver.c: Receive and dispatch events to application
* src/remote/remote_protocol.x: Wire protocol definition for the event
* src/remote_protocol-structs: structure definitions for protocol verification
* src/qemu/qemu_monitor.c, src/qemu/qemu_monitor.h,
src/qemu/qemu_monitor_json.c: Watch for BLOCK_STREAM_COMPLETED event
from QEMU monitor
The network driver needs to assign physical devices for use by modes
that use macvtap, keeping track of which physical devices are in use
(and how many instances, when the devices can be shared). Three calls
are added:
networkAllocateActualDevice - finds a physical device for use by the
domain, and sets up the virDomainActualNetDef accordingly.
networkNotifyActualDevice - assumes that the domain was already
running, but libvirtd was restarted, and needs to be notified by each
already-running domain about what interfaces they are using.
networkReleaseActualDevice - decrements the usage count of the
allocated physical device, and frees the virDomainActualNetDef to
avoid later accidentally using the device.
bridge_driver.[hc] - the new APIs. When WITH_NETWORK is false, these
functions are all #defined to be "0" in the .h file (effectively
becoming a NOP) to prevent link errors.
qemu_(command|driver|hotplug|process).c - add calls to the above APIs
in the appropriate places.
tests/Makefile.am - we need to include libvirt_driver_network.la
whenever libvirt_driver_qemu.la is linked, to avoid unreferenced
symbols (in functions that are never called by the test
programs...)
The qemu driver accesses fields in the virDomainNetDef directly, but
with the advent of the virDomainActualNetDef, some pieces of
information may be found in a different place (the ActualNetDef) if
the network connection is of type='network' and that network is of
forward type='bridge|private|vepa|passthrough'. The previous patch
added functions to mask this difference from callers - they hide the
decision making process and just pick the value from the proper place.
This patch uses those functions in the qemu driver as a first step in
making qemu work with the new network types. At this point, the
virDomainActualNetDef is guaranteed always NULL, so the GetActualX()
functions will return exactly what the def->X they replace
would have returned (i.e. bisecting is not compromised).
There is one place (in qemu_driver.c) where the internal details of
the NetDef are directly manipulated by the code, so the GetActual
functions cannot be used there without extra additional code; that
file will be treated in a separate patch.
The virtPortProfile in the domain interface struct is now a separately
allocated object *pointed to by* (rather than contained in) the main
virDomainNetDef object. This is done to make it easier to figure out
when a virtualPortProfile has/hasn't been specified in a particular
config.
There were two API in driver.c that were silently masking flags
bits prior to calling out to the drivers, and several others
that were explicitly masking flags bits. This is not
forward-compatible - if we ever have that many flags in the
future, then talking to an old server that masks out the
flags would be indistinguishable from talking to a new server
that can honor the flag. In general, libvirt.c should forward
_all_ flags on to drivers, and only the drivers should reject
unknown flags.
In the case of virDrvSecretGetValue, the solution is to separate
the internal driver callback function to have two parameters
instead of one, with only one parameter affected by the public
API. In the case of virDomainGetXMLDesc, it turns out that
no one was ever mixing VIR_DOMAIN_XML_INTERNAL_STATUS with
the dumpxml path in the first place; that internal flag was
only used in saving and restoring state files, which happened
to be in functions internal to a single file, so there is no
mixing of the internal flag with a public flags argument.
Additionally, virDomainMemoryStats passed a flags argument
over RPC, but not to the driver.
* src/driver.h (VIR_DOMAIN_XML_FLAGS_MASK)
(VIR_SECRET_GET_VALUE_FLAGS_MASK): Delete.
(virDrvSecretGetValue): Separate out internal flags.
(virDrvDomainMemoryStats): Provide missing flags argument.
* src/driver.c (verify): Drop unused check.
* src/conf/domain_conf.h (virDomainObjParseFile): Delete
declaration.
(virDomainXMLInternalFlags): Move...
* src/conf/domain_conf.c: ...here. Delete redundant include.
(virDomainObjParseFile): Make static.
* src/libvirt.c (virDomainGetXMLDesc, virSecretGetValue): Update
clients.
(virDomainMemoryPeek, virInterfaceGetXMLDesc)
(virDomainMemoryStats, virDomainBlockPeek, virNetworkGetXMLDesc)
(virStoragePoolGetXMLDesc, virStorageVolGetXMLDesc)
(virNodeNumOfDevices, virNodeListDevices, virNWFilterGetXMLDesc):
Don't mask unknown flags.
* src/interface/netcf_driver.c (interfaceGetXMLDesc): Reject
unknown flags.
* src/secret/secret_driver.c (secretGetValue): Update clients.
* src/remote/remote_driver.c (remoteSecretGetValue)
(remoteDomainMemoryStats): Likewise.
* src/qemu/qemu_process.c (qemuProcessGetVolumeQcowPassphrase):
Likewise.
* src/qemu/qemu_driver.c (qemudDomainMemoryStats): Likewise.
* daemon/remote.c (remoteDispatchDomainMemoryStats): Likewise.
When creating a new qemu process we saved the domain status XML only
after the process was fully set up and running. If libvirtd was killed
before the whole procedure finished, once libvirtd started again it
didn't know anything about the new process and we ended up with an
orphaned qemu process. Let's save the domain status XML as soon as we
know the PID so that libvirtd can kill the process on restart.
Query commands are safe to be called during long running jobs (such as
migration). This patch makes them all work without the need to
special-case every single one of them.
The patch introduces a new job.asyncCond condition and an associated
job.asyncJob which are dedicated to asynchronous (from qemu monitor
point of view) jobs that can take arbitrarily long time to finish while
qemu monitor is still usable for other commands.
The existing job.active (and the job.cond condition) is used for all
other synchronous jobs (including the commands run during an async job).
The locking schema is changed to use these two conditions. While asyncJob
is active, only an allowed set of synchronous jobs may run (the set can
be different according to the particular asyncJob), so any method that
communicates to qemu monitor needs to check if it is allowed to be
executed during current asyncJob (if any). Once the check passes, the
method needs to normally acquire job.cond to ensure no other command is
running. Since domain object lock is released during that time, asyncJob
could have been started in the meantime so the method needs to recheck
the first condition. Then, normal jobs set job.active and asynchronous
jobs set job.asyncJob and optionally change the list of allowed job
groups.
Since asynchronous jobs only set job.asyncJob, other allowed commands
can still be run when domain object is unlocked (when communicating to
remote libvirtd or sleeping). To protect its own internal synchronous
commands, the asynchronous job needs to start a special nested job
before entering qemu monitor. The nested job doesn't check asyncJob, it
only acquires job.cond and sets job.active to block other jobs.
The LXC and UML drivers can both make use of auditing. Move
the qemu_audit.{c,h} files to src/conf/domain_audit.{c,h}
* src/conf/domain_audit.c: Rename from src/qemu/qemu_audit.c
* src/conf/domain_audit.h: Rename from src/qemu/qemu_audit.h
* src/Makefile.am: Remove qemu_audit.{c,h}, add domain_audit.{c,h}
* src/qemu/qemu_audit.h, src/qemu/qemu_cgroup.c,
src/qemu/qemu_command.c, src/qemu/qemu_driver.c,
src/qemu/qemu_hotplug.c, src/qemu/qemu_migration.c,
src/qemu/qemu_process.c: Update for changed audit API names
Given a PID, the QEMU driver reads /proc/$PID/cmdline and
/proc/$PID/environ to get the configuration. This is fed
into the ARGV->XML convertor to build an XML configuration
for the process.
/proc/$PID/exe is resolved to identify the full command
binary path
After checking for name/uuid uniqueness, an attempt is
made to connect to the monitor socket. If successful
then 'info status' and 'info kvm' are issued to determine
whether the CPUs are running and if KVM is enabled.
* src/qemu/qemu_driver.c: Implement virDomainQemuAttach
* src/qemu/qemu_process.h, src/qemu/qemu_process.c: Add
qemuProcessAttach to connect to the monitor of an
existing QEMU process
Avoid re-formatting the pidfile path every time we need it. Create
it once when starting the guest, and preserve it until the guest
is shut down.
* src/libvirt_private.syms, src/util/util.c,
src/util/util.h: Add virFileReadPidPath
* src/qemu/qemu_domain.h: Add pidfile field
* src/qemu/qemu_process.c: Store pidfile path in qemuDomainObjPrivate
Some callers expected virFileMakePath to set errno, some expected
it to return an errno value. Unify this to return 0 on success and
-1 on error. Set errno to report detailed error information.
Also optimize virFileMakePath if stat fails with an errno different
from ENOENT.
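A simplified sketch of the unified convention (single level only; the real virFileMakePath also builds missing parent directories): return 0 or -1, leave the detail in errno, and bail out early if stat() fails with anything other than ENOENT.

    #include <errno.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    static int makePath(const char *path)
    {
        struct stat st;

        if (stat(path, &st) == 0)
            return 0;                       /* already exists              */
        if (errno != ENOENT)
            return -1;                      /* unexpected error, errno set  */
        if (mkdir(path, 0777) < 0 && errno != EEXIST)
            return -1;
        return 0;
    }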
Add a new attribute to the <seclabel> XML to allow resource
relabelling to be enabled with static label usage.
<seclabel model='selinux' type='static' relabel='yes'>
<label>system_u:system_r:svirt_t:s0:c392,c662</label>
</seclabel>
* docs/schemas/domain.rng: Add relabel attribute
* src/conf/domain_conf.c, src/conf/domain_conf.h: Parse
the 'relabel' attribute
* src/qemu/qemu_process.c: Unconditionally clear out the
'imagelabel' attribute
* src/security/security_apparmor.c: Skip based on 'relabel'
attribute instead of label type
* src/security/security_selinux.c: Skip based on 'relabel'
attribute instead of label type and fill in <imagelabel>
attribute if relabel is enabled.
Normally the dynamic labelling mode will always use a base
label of 'svirt_t' for VMs. Introduce a <baselabel> field
in the <seclabel> XML to allow this base label to be changed,
e.g.
<seclabel type='dynamic' model='selinux'>
<baselabel>system_u:object_r:virt_t:s0</baselabel>
</seclabel>
* docs/schemas/domain.rng: Add <baselabel>
* src/conf/domain_conf.c, src/conf/domain_conf.h: Parsing
of base label
* src/qemu/qemu_process.c: Don't reset 'model' attribute if
a base label is specified
* src/security/security_apparmor.c: Refuse to support base label
* src/security/security_selinux.c: Use 'baselabel' when generating
label, if available
The libvirt sanlock plugin is intentionally leaking a file
descriptor to QEMU. To enable QEMU to use this FD under
SELinux, it must be labelled correctly. We don't want to use
the svirt_image_t for this, since QEMU must not be allowed
to actually use the FD. So instead we label it with svirt_t
using virSecurityManagerSetProcessFDLabel
* src/locking/domain_lock.c, src/locking/domain_lock.h,
src/locking/lock_driver.h, src/locking/lock_driver_nop.c,
src/locking/lock_driver_sanlock.c, src/locking/lock_manager.c,
src/locking/lock_manager.h: Optionally pass an FD back to
the hypervisor for security driver labelling
* src/qemu/qemu_process.c: label the lock manager plugin
FD with the process label
The virSecurityManagerSetFDLabel method is used to label
file descriptors associated with disk images. There will
shortly be a need to label other file descriptors in a
different way. So the current name is ambiguous. Rename
the method to virSecurityManagerSetImageFDLabel to clarify
its purpose
* src/libvirt_private.syms,
src/qemu/qemu_migration.c, src/qemu/qemu_process.c,
src/security/security_apparmor.c, src/security/security_dac.c,
src/security/security_driver.h, src/security/security_manager.c,
src/security/security_manager.h, src/security/security_selinux.c,
src/security/security_stack.c: s/FDLabel/ImageFDLabel/
Use NUMA's older nodemask_t (fixed-size map) rather than the newer
'struct bitmask' (variable-size) in order to still compile on RHEL 5,
with its numactl-devel-0.9.8.
* src/qemu/qemu_process.c [HAVE_NUMA]: Prefer back-compat mode.
(qemuProcessInitNumaMemoryPolicy): Use older nodemask_t.
If an application is using libvirt + KVM as a piece of its
internal infrastructure to perform a specific task, it can
be desirable to guarantee the VM dies when the virConnectPtr
disconnects from libvirtd. This ensures the app can't leak
any VMs it was using. Adding VIR_DOMAIN_START_AUTOKILL as
a flag when starting guests enables this to be done.
* include/libvirt/libvirt.h.in: Add VIR_DOMAIN_START_AUTOKILL
* src/qemu/qemu_driver.c: Support automatic killing of guests
upon connection close
* tools/virsh.c: Add --autokill flag to 'start' and 'create'
commands
Sometimes it is useful to be able to automatically destroy a guest when
a connection is closed. For example, kill an incoming migration if
the client managing the migration dies. This introduces a map between
guest 'uuid' strings and virConnectPtr objects. When a connection is
closed, any associated guests are killed off.
* src/qemu/qemu_conf.h: Add autokill hash table to qemu driver
* src/qemu/qemu_process.c, src/qemu/qemu_process.h: Add APIs
for performing autokill of guests associated with a connection
* src/qemu/qemu_driver.c: Initialize autodestroy map
For a controlled shutdown we issue a 'system_powerdown' command
to the QEMU monitor. This triggers an ACPI event which (most)
guest OSes wire up to a controlled shutdown. There is no equivalent
ACPI event to trigger a controlled reboot. This patch attempts
to fake a reboot.
- In qemuDomainObjPrivatePtr we have a bool fakeReboot
flag.
- The virDomainReboot method sets this flag and then
triggers a normal 'system_powerdown'.
- The QEMU process is started with '-no-shutdown'
so that the guest CPUs pause when it powers off the
guest
- When we receive the 'POWEROFF' event from QEMU JSON
monitor if fakeReboot is not set we invoke the
qemuProcessKill command and shutdown continues
normally
- If fakeReboot was set, we spawn a background thread
which issues 'system_reset' to perform a warm reboot
of the guest hardware. Then it issues 'cont' to
start the CPUs again
* src/qemu/qemu_command.c: Add -no-shutdown flag if
we have JSON support
* src/qemu/qemu_domain.h: Add 'fakeReboot' flag to
qemuDomainObjPrivate struct
* src/qemu/qemu_driver.c: Fake reboot using the
system_powerdown command if JSON support is available
* src/qemu/qemu_monitor.c, src/qemu/qemu_monitor.h,
src/qemu/qemu_monitor_json.c, src/qemu/qemu_monitor_json.h,
src/qemu/qemu_monitor_text.c, src/qemu/qemu_monitor_text.h: Add
binding for system_reset command
* src/qemu/qemu_process.c: Reset the guest & start CPUs if
fakeReboot is set
This patch addresses the problem that, when a PASSTHROUGH
mode DIRECT NIC connection is made, the MAC address of the NIC is
not automatically set to the configured VM MAC and
reset again afterwards.
The patch fixes this by setting and resetting the MAC, remembering
the previous setting for as long as the VM is running.
This also works if libvirtd is restarted while the VM is running.
The patch passes 'make syntax-check'.
Implemented by setting the NUMA policy between fork and exec as a hook,
using libnuma. Only memory tuning on the domain process is supported
currently.
If the nodemask is out of range, a soft warning is reported instead of
a hard error in the libvirt layer. (The kernel stays silent as long as
at least one set bit in the nodemask is valid on the host; e.g. for a
host with two NUMA nodes, the kernel silently accepts nodemask
"01010101".) So a soft warning is the only thing libvirt can do, as one
might want to specify the NUMA policy for a node that doesn't exist
yet but may arrive via hotplug soon.
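A standalone sketch of the kind of libnuma call involved, using the
older nodemask_t interface; the helper name and strictness handling
are illustrative, and the compatibility define is only needed with
newer numactl headers:
#define NUMA_VERSION1_COMPATIBILITY 1
#include <numa.h>

/* Bind this process's memory to the given NUMA nodes; intended to be
 * called between fork and exec, before the guest binary is started. */
static int bind_memory_to_nodes(const int *nodes, int nnodes)
{
    nodemask_t mask;
    int i;

    if (numa_available() < 0)
        return -1;                  /* host has no NUMA support */

    nodemask_zero(&mask);
    for (i = 0; i < nnodes; i++) {
        if (nodes[i] > numa_max_node())
            continue;               /* out-of-range node: soft warning only */
        nodemask_set(&mask, nodes[i]);
    }

    numa_set_bind_policy(1);        /* strict binding */
    numa_set_membind(&mask);
    return 0;
}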
When an operation started by virDomainBlockPullAll completes (either with
success or with failure), raise an event to indicate the final status. This
allows an API user to avoid polling on virDomainBlockPullInfo if they would
prefer to use the event mechanism.
* daemon/remote.c: Dispatch events to client
* include/libvirt/libvirt.h.in: Define event ID and callback signature
* src/conf/domain_event.c, src/conf/domain_event.h,
src/libvirt_private.syms: Extend API to handle the new event
* src/qemu/qemu_driver.c: Connect to the QEMU monitor event
for block_stream completion and emit a libvirt block pull event
* src/remote/remote_driver.c: Receive and dispatch events to application
* src/remote/remote_protocol.x: Wire protocol definition for the event
* src/qemu/qemu_monitor.c, src/qemu/qemu_monitor.h,
src/qemu/qemu_monitor_json.c: Watch for BLOCK_STREAM_COMPLETED event
from QEMU monitor
Signed-off-by: Adam Litke <agl@us.ibm.com>
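From an application's point of view, consuming the event would look
roughly like the sketch below. The event ID constant and the callback
signature are assumptions here (the commit defines them in
libvirt.h.in but does not spell them out), so treat those names as
placeholders; the registration API itself is the existing
virConnectDomainEventRegisterAny:
#include <stdio.h>
#include <libvirt/libvirt.h>

/* Assumed callback shape: reports which disk finished and its status. */
static void block_pull_done(virConnectPtr conn, virDomainPtr dom,
                            const char *path, int status, void *opaque)
{
    printf("block pull of %s finished, status %d\n", path, status);
}

static int watch_block_pull(virConnectPtr conn, virDomainPtr dom, int event_id)
{
    /* event_id stands in for the VIR_DOMAIN_EVENT_ID_* constant this
     * patch adds. */
    return virConnectDomainEventRegisterAny(conn, dom, event_id,
                        VIR_DOMAIN_EVENT_CALLBACK(block_pull_done),
                        NULL, NULL);
}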
If qemu supports -chardev, our char frontend aliases are e.g. 'charserial0',
not just 'serial0'. Typically we don't use this code path because the
ptys are scraped from stdout.
There is a case where CPU affinities for QEMU vCPUs don't work
correctly. For example, if only one vcpupin setting entry is provided
and it is not for vcpu0, the pinning is not applied.
# virsh dumpxml VM
...
<vcpu>4</vcpu>
<cputune>
<vcpupin vcpu='3' cpuset='9-11'/>
</cputune>
...
# virsh start VM
Domain VM started
# virsh vcpuinfo VM
VCPU: 0
CPU: 31
State: running
CPU time: 2.5s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
VCPU: 1
CPU: 12
State: running
CPU time: 0.9s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
VCPU: 2
CPU: 30
State: running
CPU time: 1.5s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
VCPU: 3
CPU: 13
State: running
CPU time: 1.7s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
This patch fixes this problem.
Signed-off-by: Taku Izumi <izumi.taku@jp.fujitsu.com>
The error code for virKillProcess is returned in the errno variable,
not the return value. This mistake caused the logs to be filled with
errors when shutting down QEMU processes
* src/qemu/qemu_process.c: Fix process kill check.
Detected by Coverity. This leaked a cpumap on every iteration
of the loop. Leak introduced in commit 1cc4d02 (v0.9.0).
* src/qemu/qemu_process.c (qemuProcessSetVcpuAffinites): Plug
leak, and hoist allocation outside loop.
The QEMU driver integrates with the lock manager infrastructure in a
number of key places:
* During startup, a lock is acquired in between the fork & exec
* During startup, the libvirtd process acquires a lock before
setting file labelling
* During shutdown, the libvirtd process acquires a lock
before restoring file labelling
* During hotplug, unplug & media change, the libvirtd process
holds a lock while setting/restoring labels
The main content lock is only ever held by the QEMU child process,
or libvirtd during VM shutdown. The rest of the operations only
require libvirtd to hold the metadata locks, relying on the active
QEMU still holding the content lock.
* src/qemu/qemu_conf.c, src/qemu/qemu_conf.h,
src/qemu/libvirtd_qemu.aug, src/qemu/test_libvirtd_qemu.aug:
Add config parameter for configuring lock managers
* src/qemu/qemu_driver.c: Add calls to the lock manager
Currently whenever there is any failure with parsing the monitor,
this is treated in the same way as end-of-file (ie QEMU quit).
The domain is terminated, if not already dead.
With this change, failures in parsing the monitor stream do not
result in the death of QEMU. The guest continues running unchanged,
but all further use of the monitor will be disabled.
The VMM_FAILURE event will be emitted, and the management application
can decide when to kill/restart the guest to regain control
* src/qemu/qemu_monitor.c, src/qemu/qemu_monitor.h: Run a
different callback for monitor EOF vs error conditions.
* src/qemu/qemu_process.c: Emit VMM_FAILURE event when monitor
fails
This only covers drivers which use virDomainObj; drivers that query the
hypervisor for domain status need to be updated separately in case their
hypervisor supports this functionality.
The reason is also saved into the domain state XML, so if a domain is not
running (i.e., no state XML exists) the reason will be lost across a
libvirtd restart. I think this is an acceptable limitation.
These VIR_XXXX0 APIs are confusing; use the non-0-suffix APIs instead.
How do these conversions work? The magic is the gcc "##" extension:
when __VA_ARGS__ is empty, "##" swallows the "," in "fmt," to
avoid a compile error.
Example (origin -> after CPP):
high_level_api("%d", a_int)   ->  low_level_api("%d", a_int)
high_level_api("a string")    ->  low_level_api("a string")
About 400 conversions.
8 special conversions:
VIR_XXXX0("") -> VIR_XXXX("msg") (avoid empty format) 2 conversions
VIR_XXXX0(string_literal_with_%) -> VIR_XXXX(%->%%) 0 conversions
VIR_XXXX0(non_string_literal) -> VIR_XXXX("%s", non_string_literal)
(for security) 6 conversions
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Introduce a virProcessKill function that can be safely called
even when the job mutex is held. This allows virDomainDestroy
to kill any VM even if it is asleep in a monitor job. The PID
will die and the thread asleep on the monitor will then wake
up releasing the job mutex.
* src/qemu/qemu_driver.c: Kill process before using qemuProcessStop
to ensure job is released
* src/qemu/qemu_process.c: Add virProcessKill for killing off
QEMU processes
As well as taint warnings going to the main libvirt log,
add taint warnings to the per-domain logfile
Domain id=3 is tainted: high-privileges
Domain id=3 is tainted: disk-probing
Domain id=3 is tainted: shell-scripts
Domain id=3 is tainted: custom-monitor
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Enhance
qemuDomainTaint to also log to the domain logfile
* src/qemu/qemu_driver.c: Pass -1 for logFD to taint methods to
auto-append to logfile
* src/qemu/qemu_process.c: Pass open logFD at startup for taint
methods
Move the qemuProcessLogReadFD and qemuProcessLogFD methods
into qemu_domain.c, renaming them to qemuDomainCreateLog
and qemuDomainOpenLog.
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add
qemuDomainCreateLog and qemuDomainOpenLog.
* src/qemu/qemu_process.c: Remove qemuProcessLogFD
and qemuProcessLogReadFD
Wire up logging of VM tainting to the QEMU driver
- If running QEMU as root user/group or without capabilities
being cleared
- If passing custom QEMU command line args
- If issuing custom QEMU monitor commands
- If using a network interface config with an associated
shell script
- If using a disk config relying on format probing
The warnings, per-VM appear in the main libvirtd logs
11:56:17.571: 10832: warning : qemuDomainObjTaint:712 : Domain id=1 name='l2' uuid=c7a3edbd-edaf-9455-926a-d65c16db1802 is tainted: high-privileges
11:56:17.571: 10832: warning : qemuDomainObjTaint:712 : Domain id=1 name='l2' uuid=c7a3edbd-edaf-9455-926a-d65c16db1802 is tainted: disk-probing
The taint flags are reset when the VM is stopped.
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Helper APIs
for logging taint warnings
* src/qemu/qemu_driver.c: Log tainting with custom QEMU monitor
commands and disk/net hotplug with unsupported configs
* src/qemu/qemu_process.c: Log tainting at startup based on
unsupported configs
To cope with the QEMU binary being changed while a VM is running,
it is necessary to persist the original qemu capabilities at the
time the VM is booted.
* src/qemu/qemu_capabilities.c, src/qemu/qemu_capabilities.h: Add
an enum for a string rep of every capability
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Support for
storing capabilities in the domain status XML
* src/qemu/qemu_process.c: Populate & free QEMU capabilities at
domain startup
This last minute addition caused a build failure
cc1: warnings being treated as errors
qemu/qemu_process.c: In function 'qemuProcessHandleWatchdog':
qemu/qemu_process.c:436:34: error: ignoring return value of 'virDomainObjUnref', declared with attribute warn_unused_result [-Wunused-result]
make[3]: *** [libvirt_driver_qemu_la-qemu_process.lo] Error 1
This patch does the following two things:
1. Hold an extra reference while handling the watchdog event.
If the domain is not persistent and qemu quits unexpectedly before
processWatchdogEvent() is called, vm will be freed and
processWatchdogEvent() would then access freed memory.
2. Unlock the qemu driver and vm before returning from processWatchdogEvent().
When processWatchdogEvent() failed, we only freed wdEvent, but forgot
to unlock the qemu driver and vm, and to free dumpfile.
This patch adds a max_processes option to qemu.conf which can be used to
override the system default limit on the number of processes that are
allowed to run as the qemu user.
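For reference, a plausible qemu.conf fragment (the values shown are
illustrative; 0 keeps the system default):
# /etc/libvirt/qemu.conf
# Override the process limit applied to the qemu user.
max_processes = 0
# e.g. to raise the limit:
# max_processes = 500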
Even with -Wuninitialized (which is part of autobuild.sh
--enable-compile-warnings=error), gcc does NOT catch this
use of an uninitialized variable:
{
if (cond)
goto error;
int a = 1;
error:
printf("%d", a);
}
which prints 0 (supposing the stack started life wiped) if
cond was true. Clang will catch it, but we don't use clang
as often. Using gcc -Wjump-misses-init catches it, but also
gives false positives:
{
if (cond)
goto error;
int a = 1;
return a;
error:
return 0;
}
Here, a was never used in the scope of the error block, so
declaring it after goto is technically fine (and clang agrees).
However, given that our HACKING already documents a preference
to C89 decl-before-statement, the false positive warning is
enough of a prod to comply with HACKING.
[Personally, I'd _really_ rather use C99 decl-after-statement
to minimize scope, but until gcc can efficiently and reliably
catch scoping and uninitialized usage bugs, I'll settle with
the compromise of enforcing a coding standard that happens to
reject false positives if it can also detect real bugs.]
* acinclude.m4 (LIBVIRT_COMPILE_WARNINGS): Add -Wjump-misses-init.
* src/util/util.c (__virExec): Adjust offenders.
* src/conf/domain_conf.c (virDomainTimerDefParseXML): Likewise.
* src/remote/remote_driver.c (doRemoteOpen): Likewise.
* src/phyp/phyp_driver.c (phypGetLparNAME, phypGetLparProfile)
(phypGetVIOSFreeSCSIAdapter, phypVolumeGetKey)
(phypGetStoragePoolDevice)
(phypVolumeGetPhysicalVolumeByStoragePool)
(phypVolumeGetPath): Likewise.
* src/vbox/vbox_tmpl.c (vboxNetworkUndefineDestroy)
(vboxNetworkCreate, vboxNetworkDumpXML)
(vboxNetworkDefineCreateXML): Likewise.
* src/xenapi/xenapi_driver.c (getCapsObject)
(xenapiDomainDumpXML): Likewise.
* src/xenapi/xenapi_utils.c (createVMRecordFromXml): Likewise.
* src/security/security_selinux.c (SELinuxGenNewContext):
Likewise.
* src/qemu/qemu_command.c (qemuBuildCommandLine): Likewise.
* src/qemu/qemu_hotplug.c (qemuDomainChangeEjectableMedia):
Likewise.
* src/qemu/qemu_process.c (qemuProcessWaitForMonitor): Likewise.
* src/qemu/qemu_monitor_text.c (qemuMonitorTextGetPtyPaths):
Likewise.
* src/qemu/qemu_driver.c (qemudDomainShutdown)
(qemudDomainBlockStats, qemudDomainMemoryPeek): Likewise.
* src/storage/storage_backend_iscsi.c
(virStorageBackendCreateIfaceIQN): Likewise.
* src/node_device/node_device_udev.c (udevProcessPCI): Likewise.
If strdup("x509dname") or strdup("saslUsername") success, but
strdup(x509dname) or strdup(saslUsername) failed, subject->nidentity
is not the num elements of subject->identities, and we will leak some
memory.
The qemu driver uses a 4K buffer for reading the qemu log file. This is
enough when only qemu's output is present in the log file. However, when
debugging messages are turned on, the intermediate libvirt process fills
the log with a bunch of debugging messages before it executes the qemu
binary. In such a case the buffer may become too small. But we are not
really interested in the libvirt messages, so they can be filtered out
of the buffer.
At domain startup, set CPU affinity and CPU shares according
to the cputune XML specified in the domain XML.
Modify "qemudDomainPinVcpu" to update the domain config for vcpupin,
and modify "qemuSetSchedulerParameters" to update the domain config
for CPU shares.
v1 - v2:
* Use "VIR_ALLOC_N" instead of "VIR_ALLOC_VAR"
* But keep raising an error when adding the vcpupin XML
entry fails, as I still don't have a better idea yet.
Add the compiler attribute to ensure we don't introduce any more
ref bugs like were just patched in commit 9741f34, then explicitly
mark the remaining places in code that are safe.
* src/qemu/qemu_monitor.h (qemuMonitorUnref): Mark
ATTRIBUTE_RETURN_CHECK.
* src/conf/domain_conf.h (virDomainObjUnref): Likewise.
* src/conf/domain_conf.c (virDomainObjParseXML)
(virDomainLoadStatus): Fix offenders.
* src/openvz/openvz_conf.c (openvzLoadDomains): Likewise.
* src/vmware/vmware_conf.c (vmwareLoadDomains): Likewise.
* src/qemu/qemu_domain.c (qemuDomainObjBeginJob)
(qemuDomainObjBeginJobWithDriver)
(qemuDomainObjExitRemoteWithDriver): Likewise.
* src/qemu/qemu_monitor.c (QEMU_MONITOR_CALLBACK): Likewise.
Suggested by Daniel P. Berrange.
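A standalone illustration of what the attribute buys us
(ATTRIBUTE_RETURN_CHECK is libvirt's wrapper around gcc's
warn_unused_result; the function below is a stand-in, not the real
unref implementation):
#define ATTRIBUTE_RETURN_CHECK __attribute__((__warn_unused_result__))

static int obj_unref(int *refs) ATTRIBUTE_RETURN_CHECK;

static int obj_unref(int *refs)
{
    return --*refs;                 /* caller must notice when this hits 0 */
}

int main(void)
{
    int refs = 2;
    obj_unref(&refs);               /* gcc: warning, return value ignored  */
    if (obj_unref(&refs) == 0)      /* checked use compiles cleanly        */
        return 0;
    return 1;
}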
Fix for bug https://bugzilla.redhat.com/show_bug.cgi?id=618970
The "prepare" hook is called very early in the VM statup process
before device labeling, so that it can allocate ressources not
managed by libvirt, such as DRBD, or for instance create missing
bridges and vlan interfaces.
* src/util/hooks.c src/util/hooks.h: add definitions for new hooks
VIR_HOOK_QEMU_OP_PREPARE and VIR_HOOK_QEMU_OP_RELEASE
* src/qemu/qemu_process.c: use them in qemuProcessStart and
qemuProcessStop()
THREADS.txt states that the contents of vm should not be read or
modified while the vm lock is not held, but that the lock must not
be held while performing a monitor command. This fixes all the
offenders that I could find.
* src/qemu/qemu_process.c (qemuProcessStartCPUs)
(qemuProcessInitPasswords, qemuProcessStart): Don't modify or
refer to vm state outside lock.
* src/qemu/qemu_driver.c (qemudDomainHotplugVcpus): Likewise.
* src/qemu/qemu_hotplug.c (qemuDomainChangeGraphicsPasswords):
Likewise.
Some qemu monitor event handlers were issuing an inadequate warning when
virDomainSaveStatus() failed. They copied the message from the I/O error
handler without customizing it to provide better information on why
virDomainSaveStatus() was called.
This was also found while investigating
https://bugzilla.redhat.com/show_bug.cgi?id=670848
An EOF on a domain's monitor socket results in an event being queued
to handle the EOF. The handler calls qemuProcessHandleMonitorEOF. If
it is a transient domain, this leads to a call to
virDomainRemoveInactive, which removes the domain from the driver's
hashtable and unref's it. Nowhere in this code is the qemu driver lock
acquired.
However, all modifications to the driver's domain hashtable *must* be
done while holding the driver lock, otherwise the hashtable can become
corrupt, and (even more likely) another thread could call a different
hashtable function and acquire a pointer to the domain that is in the
process of being destroyed.
To prevent such a disaster, qemuProcessHandleMonitorEOF must get the
qemu driver lock *before* it gets the DomainObj's lock, and hold it
until it is finished with the DomainObj. This guarantees that nobody
else modifies the hashtable at the same time, and that anyone who had
already gotten the DomainObj from the hashtable prior to this call has
finished with it before we remove/destroy it.
This was found while researching the root cause of:
https://bugzilla.redhat.com/show_bug.cgi?id=670848
virDomainUnref should only be called with the lock held for the
virDomainObj in question. However, when a transient qemu domain gets
EOF on its monitor socket, it queues an event which frees the monitor,
which unref's the virDomainObj without first locking it. If another
thread has already locked the virDomainObj, the modification of the
refcount could potentially be corrupted. In an extreme case, it could
also be potentially unlocked by virDomainObjFree, thus left open to
modification by anyone else who would have otherwise waited for the
lock (not to mention the fact that they would be accessing freed
data!).
The solution is to have qemuMonitorFree lock the domain object right
before unrefing it. Since the caller to qemuMonitorFree doesn't expect
this lock to be held, if the refcount doesn't go all the way to 0,
qemuMonitorFree must unlock it after the unref.
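The pattern, reduced to a standalone sketch with a generic refcounted
object (libvirt's real types and helpers differ, so this is
illustrative only):
#include <pthread.h>
#include <stdlib.h>

struct obj {
    pthread_mutex_t lock;
    int refs;
};

/* Drop one reference; returns the new count and frees the object
 * (including its mutex) when the count reaches zero. */
static int obj_unref_locked(struct obj *o)
{
    int refs = --o->refs;
    if (refs == 0) {
        pthread_mutex_unlock(&o->lock);
        pthread_mutex_destroy(&o->lock);
        free(o);
    }
    return refs;
}

/* What the monitor-free path now does, in spirit: lock before
 * unref'ing, and unlock afterwards only if the object survived. */
static void drop_monitor_owner_ref(struct obj *o)
{
    pthread_mutex_lock(&o->lock);
    if (obj_unref_locked(o) > 0)
        pthread_mutex_unlock(&o->lock);
}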
Relax the restriction that the hash table key must be a string
by allowing an arbitrary hash code generator + comparison func
to be provided
* util/hash.c, util/hash.h: Allow any pointer as a key
* internal.h: Include stdbool.h as standard.
* conf/domain_conf.c, conf/domain_conf.c,
conf/nwfilter_params.c, nwfilter/nwfilter_gentech_driver.c,
nwfilter/nwfilter_gentech_driver.h, nwfilter/nwfilter_learnipaddr.c,
qemu/qemu_command.c, qemu/qemu_driver.c,
qemu/qemu_process.c, uml/uml_driver.c,
xen/xm_internal.c: s/char */void */ in hash callbacks
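A standalone sketch of the kind of callback pair this change allows
(the signatures are illustrative, not the exact util/hash.h
prototypes): hashing and comparing fixed-size binary keys such as raw
UUIDs instead of NUL-terminated strings.
#include <string.h>
#include <stdint.h>
#include <stddef.h>

#define KEY_LEN 16   /* e.g. a raw 16-byte UUID */

/* FNV-1a over the raw key bytes. */
static uint32_t binary_key_hash(const void *key)
{
    const unsigned char *k = key;
    uint32_t h = 2166136261u;
    size_t i;
    for (i = 0; i < KEY_LEN; i++)
        h = (h ^ k[i]) * 16777619u;
    return h;
}

static int binary_key_equal(const void *a, const void *b)
{
    return memcmp(a, b, KEY_LEN) == 0;
}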
This is done for two reasons:
- we are getting very close to 64 flags which is the maximum we can use
with unsigned long long
- by using LL constants in an enum we already violate the C99 constraint
that enum values have to fit into an int (illustrated just below)
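A two-line illustration of the second point (the flag name is a
placeholder):
/* enum constants must fit in an int (C99 6.7.2.2), so once flags pass
 * bit 31 this is a constraint violation:
 *
 *     enum { VIR_SOME_FLAG = 1ULL << 32 };
 *
 * A plain macro (or typed constant) keeps the full 64-bit width: */
#define VIR_SOME_FLAG (1ULL << 32)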
Done mechanically with:
$ git grep -l '\bDEBUG0\? *(' | xargs -L1 sed -i 's/\bDEBUG0\? *(/VIR_&/'
followed by manual deletion of qemudDebug in daemon/libvirtd.c, along
with a single 'make syntax-check' fallout in the same file, and the
actual deletion in src/util/logging.h.
* src/util/logging.h (DEBUG, DEBUG0): Delete.
* daemon/libvirtd.h (qemudDebug): Likewise.
* global: Change remaining clients over to VIR_DEBUG counterpart.
Now that the virHash handling functions call virReportOOMError by
themselves when needed, users of the virHash API no longer need to
do it by themselves. Since users of the virHash API were not
consistently calling virReportOOMError after memory failures from
the virHash code, this has the added benefit of making OOM
reporting from this code more consistent and reliable.
Move the qemudStartVMDaemon and qemudShutdownVMDaemon
methods into a separate file, renaming them to
qemuProcessStart, qemuProcessStop. All helper methods
called by these are also moved & renamed to match
* src/Makefile.am: Add qemu_process.c/.h
* src/qemu/qemu_command.c: Add qemuDomainAssignPCIAddresses
* src/qemu/qemu_command.h: Add VNC port min/max
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add
domain event queue helpers
* src/qemu/qemu_driver.c, src/qemu/qemu_driver.h: Remove
all QEMU process startup/shutdown functions
* src/qemu/qemu_process.c, src/qemu/qemu_process.h: Add
all QEMU process startup/shutdown functions