Depending on the set of mingw packages installed, it is possible
that other .c files hit the mingw header pollution from the
virdbus.h file.
In file included from ../../src/rpc/virnetserver.c:39:0:
../../src/util/virdbus.h:41:35: error: expected ';', ',' or ')' before 'struct'
const char *interface,
^
* src/util/virdbus.h (virDBusCallMethod): Match .c file change.
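For context, mingw's COM headers effectively do "#define interface struct",
which is why the prototype above expands into "const char *struct" and no
longer parses. A hedged illustration of the kind of parameter rename that
avoids the clash ("iface" is just a stand-in; the real virDBusCallMethod
takes more arguments):

  /* with mingw headers:  #define interface struct  */
  int virDBusCallMethod(const char *iface /* was: "interface" */, ...);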
Signed-off-by: Eric Blake <eblake@redhat.com>
On platforms without decent group support, the build failed:
Cannot export virGetGroupList: symbol not defined
./.libs/libvirt_security_manager.a(libvirt_security_manager_la-security_dac.o): In function `virSecurityDACPreFork':
/home/eblake/libvirt-tmp/build/src/../../src/security/security_dac.c:248: undefined reference to `virGetGroupList'
collect2: error: ld returned 1 exit status
* src/util/virutil.c (virGetGroupList): Provide dummy implementation.
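A minimal sketch of this kind of stub, assuming a signature of the form
int virGetGroupList(uid_t, gid_t, gid_t **) and an illustrative guard macro
(neither is copied verbatim from the tree):

  #include <sys/types.h>
  #include <stddef.h>

  #ifndef HAVE_GETGROUPLIST      /* illustrative guard */
  /* Dummy: report an empty supplementary group list so callers still
   * link and behave sanely on platforms like mingw. */
  int
  virGetGroupList(uid_t uid, gid_t gid, gid_t **list)
  {
      (void)uid;
      (void)gid;
      *list = NULL;
      return 0;                  /* zero supplementary groups */
  }
  #endif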
Signed-off-by: Eric Blake <eblake@redhat.com>
Our recent conversion to make VIR_ALLOC report OOM wasn't
tested on mingw:
In file included from ../../src/util/virthread.c:29:0:
../../src/util/virthreadwin32.c: In function 'virCondWait':
../../src/util/virthreadwin32.c:166:81: error: 'VIR_FROM_THIS' undeclared (first use in this function)
if (VIR_REALLOC_N(c->waiters, c->nwaiters + 1) < 0) {
^
* src/util/virthreadwin32.c (VIR_FROM_THIS): Define.
Signed-off-by: Eric Blake <eblake@redhat.com>
The previous patch was incomplete.
CC libvirt_util_la-vircgroup.lo
../../src/util/vircgroup.c:70:12: error: 'virCgroupPartitionEscape' declared 'static' but never defined [-Werror=unused-function]
static int virCgroupPartitionEscape(char **path);
^
* src/util/vircgroup.c (virCgroupPartitionEscape): Move forward
declaration inside conditional.
Signed-off-by: Eric Blake <eblake@redhat.com>
The virCgroupValidateMachineGroup method calls some functions
which are only conditionally compiled, thus it too must be
made conditional. This fixes the build on non-Linux hosts.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
A VPATH build 'make check' was failing with:
GEN check-driverimpls
Can't open ../../src/../../src/lxc/lxc_monitor_protocol.h: No such file or directory at ../../src/check-driverimpls.pl line 29, <> line 27153.
Can't open ../../src/../../src/lxc/lxc_monitor_protocol.c: No such file or directory at ../../src/check-driverimpls.pl line 29, <> line 27153.
...
GEN check-aclrules
cannot read ../../src/../../src/remote/remote_protocol.x at ../../src/check-aclrules.pl line 128.
because $(srcdir) was being prepended to file names that already
included it.
* src/Makefile.am (check-driverimpls): Don't add srcdir twice.
Signed-off-by: Eric Blake <eblake@redhat.com>
When the legacy Xen driver probes with a NULL URI, and
finds itself running on Xen, it will set conn->uri. A
little bit later though it checks to see if libxl support
exists, and if so declines the driver. This leaves the
conn->uri set to 'xen:///', so if libxl also declines
it, it prevents probing of the QEMU driver.
Once a driver has set the conn->uri, it must *never*
decline an open request. So we must move the libxl
check earlier
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=981094
Commit 0ad9025ef introduced the qemu flag QEMU_CAPS_DEVICE_VIDEO_PRIMARY
for using -device VGA, -device cirrus-vga, -device vmware-svga and
-device qxl-vga. In practice, with -device qxl-vga the mouse cursor does
not show up in the guest window, as described in the bug above.
This patch uses -device for the primary video only when qemu >= 1.6,
which contains the fix for that bug.
Otherwise, with a new enough gcc compiling at -O2, the build fails with:
../../src/conf/domain_conf.c: In function ‘virDomainDeviceDefPostParse’:
../../src/conf/domain_conf.c:2821:29: error: ‘cnt’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
for (i = 0; i < *cnt; i++) {
^
../../src/conf/domain_conf.c:2795:20: note: ‘cnt’ was declared here
size_t i, *cnt;
^
../../src/conf/domain_conf.c:2794:30: error: ‘arrPtr’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
virDomainChrDefPtr **arrPtr;
^
* src/conf/domain_conf.c (virDomainChrGetDomainPtrs): Always
assign into output parameters.
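A simplified, hypothetical version of the pattern the fix applies (the real
virDomainChrGetDomainPtrs handles several character device types): assign
the output parameters on every path, so neither gcc nor the callers ever
see them uninitialized.

  #include <stddef.h>

  typedef struct { int dummy; } ChrDef;
  typedef struct { ChrDef **serials; size_t nserials; } DomainDef;

  void
  chrGetPtrs(DomainDef *def, int type, ChrDef ***arrPtr, size_t **cntPtr)
  {
      *arrPtr = NULL;            /* safe defaults on every path ...     */
      *cntPtr = NULL;            /* ... callers must check before use   */

      switch (type) {
      case 0:                    /* serial devices */
          *arrPtr = def->serials;
          *cntPtr = &def->nserials;
          break;
      /* ... other device types ... */
      }
  }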
Signed-off-by: Eric Blake <eblake@redhat.com>
Setting the default partition in libvirt_lxc means it is not
visible when querying the live XML. Move setting of the
default partition into libvirtd's virLXCProcessStart
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Decrementing the element counter when it was already 0 causes an invalid
free in virNetworkDefUpdateDNSHost if virNetworkDNSHostDefParseXML
fails and virNetworkDNSHostDefClear gets called twice.
virNetworkForwardDefClear left the number untouched even if it
freed all the elements.
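An illustration of the rule this change enforces, using a hypothetical
struct: a ...Clear() function that frees all the elements must also reset
the counter, otherwise a second Clear() (or a later update that decrements
the count) operates on stale state and frees invalid pointers.

  #include <stdlib.h>

  typedef struct { char **ifs; size_t nifs; } ForwardDef;

  void
  forwardDefClear(ForwardDef *def)
  {
      size_t i;

      for (i = 0; i < def->nifs; i++)
          free(def->ifs[i]);
      free(def->ifs);
      def->ifs = NULL;
      def->nifs = 0;             /* previously left untouched */
  }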
If the app has provided a whitelist of controllers to be used,
we skip detecting its mount point. We still, however, fill in
the placement info which later confuses the machine name
validation code. Skip detecting placement if the controller
mount point is not set
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When a VM has an 'emulator' child cgroup present, we must
strip off that suffix when detecting the cgroup for a
machine
Rename the virCgroupIsValidMachineGroup method to
virCgroupValidateMachineGroup to make it a bit clearer
that this isn't simply a boolean check; it will make
changes to the object.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The virCgroupIsValidMachine method does not need to be called from
outside the cgroups file now, so make it static.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Instead of requiring drivers to use a combination of calls
to virCgroupNewDetect and virCgroupIsValidMachine, combine
the two into virCgroupNewDetectMachine
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Both virStoragePoolFree and virStorageVolFree reset the last error,
which might lead to the cryptic message:
An error occurred, but the cause is unknown
When the volume wasn't found, virStorageVolFree was called with NULL,
leading to an error:
invalid storage volume pointer in virStorageVolFree
This patch changes it to:
Storage volume not found: no storage vol with matching name 'tomato'
Add protection such that the virCgroupRemove and
virCgroupKill* methods do not do anything to the root cgroup.
Killing all PIDs in the root cgroup does not end well.
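A sketch of the kind of guard described above (helper and function names
are hypothetical; the real code inspects the detected placement of each
controller):

  #include <stdbool.h>
  #include <string.h>

  static bool
  cgroupIsRoot(const char *placement)
  {
      return placement == NULL || strcmp(placement, "/") == 0;
  }

  int
  cgroupRemove(const char *placement)
  {
      if (cgroupIsRoot(placement))
          return 0;              /* never rmdir or kill in the root cgroup */
      /* ... remove the cgroup directory ... */
      return 0;
  }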
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Instead of requiring one API call to create a cgroup and
another to add a task to it, introduce a new API
virCgroupNewMachine which does both jobs at once. This
will make it easier for the later code that talks to systemd,
which performs the same job atomically.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
There's a race in the lxc driver causing a deadlock. If a domain is
destroyed immediately after being started, the deadlock can occur. When
the domain is started, the event loop tries to connect to the monitor.
If the connection succeeds, virLXCProcessMonitorInitNotify() is called
with @mon->client locked. The first thing that callee does is
virObjectLock(vm). So the order of locking is: 1) @mon->client, 2) @vm.
However, if there's another thread executing virDomainDestroy on the
very same domain, the first thing done there is locking the @vm. Then
the corresponding libvirt_lxc process is killed and the monitor is
closed by calling virLXCMonitorClose(). This callee tries to lock
@mon->client too, so the locking order is the reverse of the first
case. This situation results in a deadlock and an unresponsive libvirtd
(since the event loop is involved). The proper solution is to unlock
the @vm in virLXCMonitorClose prior to entering virNetClientClose().
See the backtrace as follows:
Thread 25 (Thread 0x7f1b7c9b8700 (LWP 16312)):
0 0x00007f1b80539714 in __lll_lock_wait () from /lib64/libpthread.so.0
1 0x00007f1b8053516c in _L_lock_516 () from /lib64/libpthread.so.0
2 0x00007f1b80534fbb in pthread_mutex_lock () from /lib64/libpthread.so.0
3 0x00007f1b82a637cf in virMutexLock (m=0x7f1b3c0038d0) at util/virthreadpthread.c:85
4 0x00007f1b82a4ccf2 in virObjectLock (anyobj=0x7f1b3c0038c0) at util/virobject.c:320
5 0x00007f1b82b861f6 in virNetClientCloseInternal (client=0x7f1b3c0038c0, reason=3) at rpc/virnetclient.c:696
6 0x00007f1b82b862f5 in virNetClientClose (client=0x7f1b3c0038c0) at rpc/virnetclient.c:721
7 0x00007f1b6ee12500 in virLXCMonitorClose (mon=0x7f1b3c007210) at lxc/lxc_monitor.c:216
8 0x00007f1b6ee129f0 in virLXCProcessCleanup (driver=0x7f1b68100240, vm=0x7f1b680ceb70, reason=VIR_DOMAIN_SHUTOFF_DESTROYED) at lxc/lxc_process.c:174
9 0x00007f1b6ee14106 in virLXCProcessStop (driver=0x7f1b68100240, vm=0x7f1b680ceb70, reason=VIR_DOMAIN_SHUTOFF_DESTROYED) at lxc/lxc_process.c:710
10 0x00007f1b6ee1aa36 in lxcDomainDestroyFlags (dom=0x7f1b5c002560, flags=0) at lxc/lxc_driver.c:1291
11 0x00007f1b6ee1ab1a in lxcDomainDestroy (dom=0x7f1b5c002560) at lxc/lxc_driver.c:1321
12 0x00007f1b82b05be5 in virDomainDestroy (domain=0x7f1b5c002560) at libvirt.c:2303
13 0x00007f1b835a7e85 in remoteDispatchDomainDestroy (server=0x7f1b857419d0, client=0x7f1b8574ae40, msg=0x7f1b8574acf0, rerr=0x7f1b7c9b7c30, args=0x7f1b5c004a50) at remote_dispatch.h:3143
14 0x00007f1b835a7d78 in remoteDispatchDomainDestroyHelper (server=0x7f1b857419d0, client=0x7f1b8574ae40, msg=0x7f1b8574acf0, rerr=0x7f1b7c9b7c30, args=0x7f1b5c004a50, ret=0x7f1b5c0029e0) at remote_dispatch.h:3121
15 0x00007f1b82b93704 in virNetServerProgramDispatchCall (prog=0x7f1b8573af90, server=0x7f1b857419d0, client=0x7f1b8574ae40, msg=0x7f1b8574acf0) at rpc/virnetserverprogram.c:435
16 0x00007f1b82b93263 in virNetServerProgramDispatch (prog=0x7f1b8573af90, server=0x7f1b857419d0, client=0x7f1b8574ae40, msg=0x7f1b8574acf0) at rpc/virnetserverprogram.c:305
17 0x00007f1b82b8c0f6 in virNetServerProcessMsg (srv=0x7f1b857419d0, client=0x7f1b8574ae40, prog=0x7f1b8573af90, msg=0x7f1b8574acf0) at rpc/virnetserver.c:163
18 0x00007f1b82b8c1da in virNetServerHandleJob (jobOpaque=0x7f1b8574dca0, opaque=0x7f1b857419d0) at rpc/virnetserver.c:184
19 0x00007f1b82a64158 in virThreadPoolWorker (opaque=0x7f1b8573cb10) at util/virthreadpool.c:144
20 0x00007f1b82a63ae5 in virThreadHelper (data=0x7f1b8574b9f0) at util/virthreadpthread.c:161
21 0x00007f1b80532f4a in start_thread () from /lib64/libpthread.so.0
22 0x00007f1b7fc4f20d in clone () from /lib64/libc.so.6
Thread 1 (Thread 0x7f1b83546740 (LWP 16297)):
0 0x00007f1b80539714 in __lll_lock_wait () from /lib64/libpthread.so.0
1 0x00007f1b8053516c in _L_lock_516 () from /lib64/libpthread.so.0
2 0x00007f1b80534fbb in pthread_mutex_lock () from /lib64/libpthread.so.0
3 0x00007f1b82a637cf in virMutexLock (m=0x7f1b680ceb80) at util/virthreadpthread.c:85
4 0x00007f1b82a4ccf2 in virObjectLock (anyobj=0x7f1b680ceb70) at util/virobject.c:320
5 0x00007f1b6ee13bd7 in virLXCProcessMonitorInitNotify (mon=0x7f1b3c007210, initpid=4832, vm=0x7f1b680ceb70) at lxc/lxc_process.c:601
6 0x00007f1b6ee11fd3 in virLXCMonitorHandleEventInit (prog=0x7f1b3c001f10, client=0x7f1b3c0038c0, evdata=0x7f1b8574a7d0, opaque=0x7f1b3c007210) at lxc/lxc_monitor.c:109
7 0x00007f1b82b8a196 in virNetClientProgramDispatch (prog=0x7f1b3c001f10, client=0x7f1b3c0038c0, msg=0x7f1b3c003928) at rpc/virnetclientprogram.c:259
8 0x00007f1b82b87030 in virNetClientCallDispatchMessage (client=0x7f1b3c0038c0) at rpc/virnetclient.c:1019
9 0x00007f1b82b876bb in virNetClientCallDispatch (client=0x7f1b3c0038c0) at rpc/virnetclient.c:1140
10 0x00007f1b82b87d41 in virNetClientIOHandleInput (client=0x7f1b3c0038c0) at rpc/virnetclient.c:1312
11 0x00007f1b82b88f51 in virNetClientIncomingEvent (sock=0x7f1b3c0044e0, events=1, opaque=0x7f1b3c0038c0) at rpc/virnetclient.c:1832
12 0x00007f1b82b9e1c8 in virNetSocketEventHandle (watch=3321, fd=54, events=1, opaque=0x7f1b3c0044e0) at rpc/virnetsocket.c:1695
13 0x00007f1b82a272cf in virEventPollDispatchHandles (nfds=21, fds=0x7f1b8574ded0) at util/vireventpoll.c:498
14 0x00007f1b82a27af2 in virEventPollRunOnce () at util/vireventpoll.c:645
15 0x00007f1b82a25a61 in virEventRunDefaultImpl () at util/virevent.c:273
16 0x00007f1b82b8e97e in virNetServerRun (srv=0x7f1b857419d0) at rpc/virnetserver.c:1097
17 0x00007f1b8359db6b in main (argc=2, argv=0x7ffff98dbaa8) at libvirtd.c:1512
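A minimal sketch of the fix described before the backtrace, using plain
pthread mutexes as stand-ins for the @vm and @mon->client locks:

  #include <pthread.h>

  typedef struct { pthread_mutex_t lock; } lockable;

  /* The dispatch path locks client first, then vm.  To avoid inverting
   * that order here, drop the vm lock before touching the client lock,
   * and re-acquire it before returning (the caller expects vm locked). */
  void
  monitorClose(lockable *vm, lockable *client)
  {
      pthread_mutex_unlock(&vm->lock);
      pthread_mutex_lock(&client->lock);
      /* ... tear down the monitor connection ... */
      pthread_mutex_unlock(&client->lock);
      pthread_mutex_lock(&vm->lock);
  }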
The avail_vcpu bitmap has to be allocated before it can be used (using
the maximum allowed value for that). Then for each available VCPU the
bit in the mask has to be set (libxl_bitmap_set takes a bit position
as an argument, not the number of bits to set).
Without this, I would always only get one VCPU for guests created
through libvirt/libxl.
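A sketch of that allocate-then-set pattern, using the libxl helpers
declared in libxl_utils.h (the surrounding libvirt code is paraphrased,
not copied):

  #include <libxl.h>
  #include <libxl_utils.h>

  /* Allocate the map for the maximum vcpu count, then set one bit per
   * available vcpu -- libxl_bitmap_set() takes a bit position, not a
   * count of bits. */
  int
  setAvailVcpus(libxl_ctx *ctx, libxl_domain_build_info *b_info,
                int maxvcpus, int vcpus)
  {
      int i;

      if (libxl_cpu_bitmap_alloc(ctx, &b_info->avail_vcpus, maxvcpus))
          return -1;
      libxl_bitmap_set_none(&b_info->avail_vcpus);
      for (i = 0; i < vcpus; i++)
          libxl_bitmap_set(&b_info->avail_vcpus, i);
      return 0;
  }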
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
The virCgroupAvailable() implementation calls getmntent_r()
without checking whether HAVE_GETMNTENT_R is defined, so it fails
to build on platforms without getmntent_r support.
Make virCgroupAvailable() simply return false when
HAVE_GETMNTENT_R is not defined.
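A minimal sketch of the fallback, assuming the bool virCgroupAvailable(void)
shape:

  #include <stdbool.h>

  #ifndef HAVE_GETMNTENT_R
  /* Without getmntent_r() there is no way to scan the mount table for a
   * cgroup mount, so report cgroups as unavailable. */
  bool
  virCgroupAvailable(void)
  {
      return false;
  }
  #endif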
The qemuOpenFile() function knew nothing about seclabels applied
to VMs only, so when a VM's seclabel differed from the "user:group"
in the configuration, there could be issues with opening files.
Make qemuOpenFile() VM-aware, but only optionally: passing a NULL
argument means skipping the VM seclabel info completely.
However, all current qemuOpenFile() calls look like they should use VM
seclabel info if there is any, so convert these calls as well.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=869053
Parsing 'user:group' is useful even outside the DAC security driver,
so expose the most abstract function, which has no DAC security driver
bits in it.
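Purely as an illustration of the kind of split being made reusable (a
hypothetical helper, not the function the commit exposes):

  #include <stdio.h>
  #include <string.h>

  /* Split "user:group" into its two parts; the group part is optional. */
  int
  parseUserGroup(const char *label, char *user, size_t userlen,
                 char *group, size_t grouplen)
  {
      const char *sep = strchr(label, ':');

      if (!sep) {
          snprintf(user, userlen, "%s", label);
          group[0] = '\0';
          return 0;
      }
      snprintf(user, userlen, "%.*s", (int)(sep - label), label);
      snprintf(group, grouplen, "%s", sep + 1);
      return 0;
  }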
Since PCI bridges, PCIe bridges, PCIe switches, and PCIe root ports
all share the same namespace, they are all defined as controllers of
type='pci' in libvirt (but with a differing model attribute). Each of
these controllers has a certain connection type upstream, allows
certain connection types downstream, and each can either allow a
single downstream connection at slot 0, or connections from slot 1 -
31.
Right now, we only support the pci-root and pci-bridge devices, both
of which only allow PCI devices to connect, and both which have usable
slots 1 - 31. In preparation for adding other types of controllers
that have different capabilities, this patch 1) adds info to the
qemuDomainPCIAddressBus object to indicate the capabilities, 2) sets
those capabilities appropriately for pci-root and pci-bridge devices,
and 3) validates that the controller being connected to is the proper
type when allocating slots or validating that a user-selected slot is
appropriate for a device.
Having this infrastructure in place will make it much easier to add
support for the other PCI controller types.
While it would be possible to do all the necessary checking by just
storing the controller model in the qemuDomainPCIAddressBus, it
greatly simplifies all the validation code to also keep a "flags",
"minSlot" and "maxSlot" for each - that way we can just check those
attributes rather than requiring a nearly identical switch statement
everywhere we need to validate compatibility.
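A sketch of the shape this gives each bus (member names come from the
description above; the enum values, array size, and helper are
illustrative):

  #include <stdbool.h>
  #include <stdint.h>

  typedef enum {
      QEMU_PCI_CONNECT_HOTPLUGGABLE = 1 << 0,
      QEMU_PCI_CONNECT_TYPE_PCI     = 1 << 1,
      /* later patches add PCIe, switch ports, ... */
  } qemuDomainPCIConnectFlags;

  typedef struct {
      qemuDomainPCIConnectFlags flags;  /* what may connect to this bus  */
      uint8_t minSlot;                  /* 1 for pci-root / pci-bridge   */
      uint8_t maxSlot;                  /* 31 for pci-root / pci-bridge  */
      uint8_t slots[32];                /* per-slot usage, slots 0 - 31  */
  } qemuDomainPCIAddressBus;

  /* Validation reduces to a flag/range check instead of a switch
   * statement (only the plain-PCI type bit is checked in this sketch). */
  bool
  busAcceptsDevice(const qemuDomainPCIAddressBus *bus,
                   qemuDomainPCIConnectFlags devFlags, uint8_t slot)
  {
      return (bus->flags & devFlags & QEMU_PCI_CONNECT_TYPE_PCI) != 0 &&
             slot >= bus->minSlot && slot <= bus->maxSlot;
  }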
You may notice many places where the flags are seemingly hard-coded to
QEMU_PCI_CONNECT_HOTPLUGGABLE | QEMU_PCI_CONNECT_TYPE_PCI
This is currently the correct value for all PCI devices, and in the
future will be the default, with small bits of code added to change to
the flags for the few devices which are the exceptions to this rule.
Finally, there are a few places with "FIXME" comments. Note that these
aren't indicating places that are broken for the currently
supported devices; they are places that will need fixing when support
for new PCI controller models is added.
To assure that there was no regression in the auto-allocation of PCI
addresses or auto-creation of integrated pci-root, ide, and usb
controllers, a new test case (pci-bridge-many-disks) has been added to
both the qemuxml2argv and qemuxml2xml tests. This new test defines a
domain with several dozen virtio disks but no pci-root or
pci-bridges. The .args file of the new test case was created using
libvirt sources from before this patch, and the test still passes
after this patch has been applied.
Although these two enums are named ..._LAST, they really had the value
of ..._SIZE. This patch changes their values so that, e.g.,
QEMU_PCI_ADDRESS_SLOT_LAST really is the slot number of the last slot
on a PCI bus.
The implicit IDE, USB, and video controllers provided by the PIIX3
chipset in the pc-* machinetypes are not present on other
machinetypes, so we shouldn't be doing the special checking for
them. This patch places those validation checks into a separate
function that is only called for machine types that have a PIIX3 chip
(which happens to be the i440fx-based pc-* machine types).
One qemuxml2argv test data file had to be changed - the
pseries-usb-multi test had included a piix3-usb-uhci device, which was
being placed at a specific address, and also had slot 2 auto reserved
for a video device, but the pseries virtual machine doesn't actually
have a PIIX3 chip, so even if there was a piix3-usb-uhci driver for
it, the device wouldn't need to reside at slot 1 function 2. I just
changed the .args file to have the generic slot info for the two
devices that results when the special PIIX3 code isn't executed.
qemuDomainPCIAddressBus was an array of QEMU_PCI_ADDRESS_SLOT_LAST
uint8_t's, which worked fine as long as every PCI bus was
identical. In the future, some PCI busses will allow connecting PCI
devices, and some will allow PCIe devices; also some will only allow
connection of a single device, while others will allow connecting 31
devices.
In order to keep track of that information for each bus, we need to
turn qemuDomainPCIAddressBus into a struct, for now with just one
member:
uint8_t slots[QEMU_PCI_ADDRESS_SLOT_LAST];
Additional members will come in later patches.
The item in qemuDomainPCIAddressSet that contains the array of
qemuDomainPCIAddressBus is now called "buses" to be more consistent
with the already existing "nbuses" (and with the new "slots" array).
Thanks to a lack of coordination between kernel and glibc folks,
it has been impossible to mix code using <linux/in.h> and
<netinet/in.h> for some time now (see for example commit c308a9a).
On at least RHEL 6, <linux/if_bridge.h> tries to use the kernel
side, and fails due to our desire to use the glibc side elsewhere:
In file included from /usr/include/linux/if_bridge.h:17,
from util/virnetdevbridge.c:42:
/usr/include/linux/in6.h:31: error: redefinition of ‘struct in6_addr’
/usr/include/linux/in6.h:48: error: redefinition of ‘struct sockaddr_in6’
/usr/include/linux/in6.h:56: error: redefinition of ‘struct ipv6_mreq’
Thankfully, the kernel layout of these structs is ABI-compatible,
they only differ in the type system presented to the C compiler.
While there are other versions of kernel headers that avoid the
problem, it is easier to just work around the issue than to expect
all developers to upgrade to working kernel headers.
* src/util/virnetdevbridge.c (includes): Coerce the kernel version
of in.h to not collide with the normal version.
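One way to coerce the two apart, sketched with an illustrative set of
renames (the exact identifiers handled by the workaround may differ):

  #include <netinet/in.h>         /* glibc definitions used elsewhere */

  /* Rename the kernel copies so <linux/if_bridge.h> can drag in
   * <linux/in6.h> without redefining the glibc structs. */
  #define in6_addr in6_addr_kernel
  #define sockaddr_in6 sockaddr_in6_kernel
  #define ipv6_mreq ipv6_mreq_kernel
  #include <linux/if_bridge.h>
  #undef in6_addr
  #undef sockaddr_in6
  #undef ipv6_mreq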
Signed-off-by: Eric Blake <eblake@redhat.com>
dbus 1.2.24 (on RHEL 6) lacks DBUS_TYPE_UNIX_FD; but as we aren't
trying to pass one of those anyway, we can just drop support for
it in our wrapper. Solves this build error introduced in commit
834c9c94:
CC libvirt_util_la-virdbus.lo
util/virdbus.c:242: error: 'DBUS_TYPE_UNIX_FD' undeclared here (not in a function)
* src/util/virdbus.c (virDBusBasicTypes): Drop support for unix fds.
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit id '4421e257' strdup'd devAlias, but didn't free it.
Running qemuhotplugtest under valgrind resulted in the following:
==7375== 9 bytes in 1 blocks are definitely lost in loss record 11 of 70
==7375== at 0x4A0887C: malloc (vg_replace_malloc.c:270)
==7375== by 0x37C1085D71: strdup (strdup.c:42)
==7375== by 0x4CBBD5F: virStrdup (virstring.c:554)
==7375== by 0x4CFF9CB: virDomainEventDeviceRemovedNew (domain_event.c:1174)
==7375== by 0x427791: qemuDomainRemoveChrDevice (qemu_hotplug.c:2508)
==7375== by 0x42C65D: qemuDomainDetachChrDevice (qemu_hotplug.c:3357)
==7375== by 0x41C94F: testQemuHotplug (qemuhotplugtest.c:115)
==7375== by 0x41D817: virtTestRun (testutils.c:168)
==7375== by 0x41C400: mymain (qemuhotplugtest.c:322)
==7375== by 0x41DF3A: virtTestMain (testutils.c:764)
==7375== by 0x37C1021A04: (below main) (libc-start.c:225)
Commit 'c8695053' resulted in the following Coverity error:
ERROR: REVERSE_INULL
FUNCTION: lxcProcessAutoDestroy
This is flagged because 'dom' is dereferenced (via 'dom->persistent')
before 'dom' itself is checked for NULL, so the NULL check of 'dom'
needs to come first.
Currently the LXC driver creates the VM's cgroup prior to
forking, and then libvirt_lxc moves the child process
into the cgroup. This won't work with systemd whose APIs
do the creation of cgroups + attachment of processes atomically.
Fortunately we can simply move the entire cgroup setup into
the libvirt_lxc child process. We make it take place before
fork'ing into the background, so by the time virCommandRun
returns in the LXC driver, the cgroup is guaranteed to be
present.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently the QEMU driver creates the VM's cgroup prior to
forking, and then uses a virCommand hook to move the child
into the cgroup. This won't work with systemd whose APIs
do the creation of cgroups + attachment of processes atomically.
Fortunately we have a handshake taking place between the
QEMU driver and the child process prior to QEMU being exec()d,
which was introduced to allow setup of disk locking. By good
fortune this synchronization point can be used to enable the
QEMU driver to do atomic setup of cgroups removing the use
of the hook script.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The virCgroupNewDomainDriver and virCgroupNewDriver methods
are obsolete now that we can auto-detect existing cgroup
placement. Delete them to reduce code bloat.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Use the new virCgroupNewDetect function to determine cgroup
placement of existing running VMs. This will allow the legacy
cgroups creation APIs to be removed entirely
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Add virCgroupIsValidMachine API to check whether an auto
detected cgroup is valid for a machine. This lets us
check if a VM has just been placed into some generic
shared cgroup, or worse, the root cgroup
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>