One of my previous patches touched the virSecurityLabel* APIs: some
functions were added to header files and some were renamed. However,
these changes were not reflected in libvirt_private.syms.
https://bugzilla.redhat.com/show_bug.cgi?id=923946
A <seclabel type='none'/> should be added only if there is no other
seclabel defined within the domain. The bug can be easily reproduced:
1) configure selinux seclabel for a domain
2) disable system's selinux and restart libvirtd
3) observe <seclabel type='none'/> being appended to a domain on its
startup
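For illustration, a domain configured as in step 1 typically carries an
explicit SELinux label such as the following (a made-up example, not
taken from the report), and after step 3 the spurious
<seclabel type='none'/> shows up next to it:
  <seclabel type='dynamic' model='selinux' relabel='yes'/>
  <seclabel type='none'/>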
virDomainDefGetSecurityLabelDef was modifying the domain XML: it tried
to find a seclabel corresponding to the given security driver and, if
the label wasn't found, it created one. That is wrong; it is the
security manager that should modify this part of the domain XML.
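A sketch of the intended behaviour after the change (illustrative only;
the "selinux" model string is just an example):
  seclabel = virDomainDefGetSecurityLabelDef(def, "selinux");
  if (!seclabel) {
      /* previously the lookup appended a brand new seclabel to the
       * definition as a side effect; now it simply returns NULL and
       * the security manager decides whether to add one */
  }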
When libvirtd loads active network configs from the network state
directory, it should release the class_id memory block that was
allocated while loading XML from the network config directory, because
virBitmapParse creates a new memory block for the class_id bitmap and
the old one is leaked.
This happens whenever at least one virtual network was already active.
==12234== 8,216 (24 direct, 8,192 indirect) bytes in 1 blocks are definitely \
lost in loss record 702 of 709
==12234== at 0x4A06B2F: calloc (vg_replace_malloc.c:593)
==12234== by 0x37AB04D77D: virAlloc (in /usr/lib64/libvirt.so.0.1000.3)
==12234== by 0x37AB04EF89: virBitmapNew (in /usr/lib64/libvirt.so.0.1000.3)
==12234== by 0x37AB0BFB37: virNetworkAssignDef (in /usr/lib64/libvirt.so.0.1000.3)
==12234== by 0x37AB0BFD31: ??? (in /usr/lib64/libvirt.so.0.1000.3)
==12234== by 0x37AB0BFE92: virNetworkLoadAllConfigs (in /usr/lib64/libvirt.so.0.1000.3)
==12234== by 0x10650E5A: ??? (in /usr/lib64/libvirt/connection-driver/libvirt_driver_network.so)
==12234== by 0x37AB0EB72F: virStateInitialize (in /usr/lib64/libvirt.so.0.1000.3)
==12234== by 0x40DE04: ??? (in /usr/sbin/libvirtd)
==12234== by 0x37AB0832E8: ??? (in /usr/lib64/libvirt.so.0.1000.3)
==12234== by 0x3796807D14: start_thread (in /usr/lib64/libpthread-2.16.so)
==12234== by 0x37960F246C: clone (in /usr/lib64/libc-2.16.so)
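A minimal sketch of the fix pattern (not the actual patch; the object
and field names are taken from the trace above): drop the bitmap that
was allocated while parsing the config directory before the state file
is parsed into a fresh one:
  /* sketch: network->class_id was already allocated (virBitmapNew)
   * when the config file was parsed; free it before virBitmapParse()
   * stores a newly allocated bitmap there, otherwise the first
   * allocation is leaked */
  virBitmapFree(network->class_id);
  network->class_id = NULL;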
iptables-1.4.18 removed the long deprecated "state" match.
Use "conntrack" instead in forwarding rules.
Fixes openSUSE bug https://bugzilla.novell.com/811251#811251.
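For example, a forwarding rule for established connections changes from
the deprecated form to the conntrack equivalent (illustrative rules, not
copied verbatim from libvirt's output):
  -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
  -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT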
Despite the comment stating virNetClientIncomingEvent handler should
never be called with either client->haveTheBuck or client->wantClose
set, there is a sequence of events that may lead to both booleans being
true when virNetClientIncomingEvent is called. However, when that
happens, we must not immediately close the socket as there are other
threads waiting for the buck and they would cause SIGSEGV once they are
woken up after the socket was closed. We should also clear all
remaining calls in the queue after closing the socket.
The situation that can lead to the crash involves three threads, one of
them running event loop and the other two calling libvirt APIs. The
event loop thread detects an event on client->sock and calls
virNetClientIncomingEvent handler. But before the handler gets a chance
to lock client, the other two threads (T1 and T2) start calling some
APIs. T1 gets the buck and detects EOF on client->sock while processing
its RPC call. Since T2 is waiting for its own call, T1 passes the buck
on to it and unlocks client. But before T2 gets the signal, the event
loop thread wakes up, does its job and closes client->sock. The crash
happens when T2 actually wakes up and tries to do its job using a closed
client->sock.
When we write a log message into a log file, we separate the thread ID
from the timestamp using ": ". However, when storing the message into
the ring buffer, the separator was omitted, e.g.:
2013-02-27 11:49:11.852+00003745: ...
instead of the expected
2013-02-27 11:49:11.852+0000: 3745: ...
#virsh detach-device $guest usb.xml
error: Failed to detach device from usb2.xml
error: operation failed: host usb device vendor=0x0951 \
product=0x1625 not found
This regression is due to a typo in the matching function. The first
argument is always the usb device that we are checking for. If the
usb xml file provided by the user contains bus and device info, we
search by those; otherwise, we use the vendor and product info.
The bug occurred only when detaching a usb device with no bus and
device info provided in the usb xml file.
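For reference, a hostdev like the following (an illustrative snippet,
not the reporter's usb.xml) exercises the broken path, since only
vendor and product are given:
  <hostdev mode='subsystem' type='usb'>
    <source>
      <vendor id='0x0951'/>
      <product id='0x1625'/>
    </source>
  </hostdev>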
f946462e14 changed behavior by setting
VIR_DOMAIN_DEVICE_ADDRESS_TYPE_PCI upfront. If we do so before invoking
qemuDomainPCIAddressEnsureAddr, we merely try to reserve the PCI slot
via qemuDomainPCIAddressReserveSlot instead of reserving a new address
via qemuDomainPCIAddressSetNextAddr, which fails with:
$ ~/run-tck-test domain/200-disk-hotplug.t
./scripts/domain/200-disk-hotplug.t .. # Creating a new transient domain
./scripts/domain/200-disk-hotplug.t .. 1/5 # Attaching the new disk /var/lib/jenkins/jobs/libvirt-tck-build/workspace/scratchdir/200-disk-hotplug/extra.img
# Failed test 'disk has been attached'
# at ./scripts/domain/200-disk-hotplug.t line 67.
# died: Sys::Virt::Error (libvirt error code: 1, message: internal error unable to reserve PCI address 0:0:0.0
# )
Since we switched from the direct host migration scheme to the one
where we connect to the destination and then just pass an FD to qemu,
we have uncovered a qemu bug. Qemu expects the migration FD to block.
However, we are passing a nonblocking one, which results in cryptic
error messages like:
qemu: warning: error while loading state section id 2
load of migration failed
The bug is already known to the Qemu folks, but we should work around
already-released Qemus. The patch was originally proposed by Stefan
Hajnoczi <stefanha@gmail.com>.
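The workaround boils down to switching the FD back to blocking mode
before handing it to qemu; a minimal sketch using plain fcntl(2)
(libvirt has its own helper for this, the raw form is shown only for
illustration):
  #include <fcntl.h>

  /* clear O_NONBLOCK so qemu sees a blocking migration fd */
  static int make_fd_blocking(int fd)
  {
      int flags = fcntl(fd, F_GETFL);
      if (flags < 0)
          return -1;
      return fcntl(fd, F_SETFL, flags & ~O_NONBLOCK);
  }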
virConnectOpenAuth didn't require 'name' to be specified (VIR_DEBUG
used NULLSTR() for the output) and by default, if name == NULL, the
default connection uri is used. This was not indicated in the
documentation and wasn't checked for in other APIs' VIR_DEBUG outputs.
When the description of an argument to a libvirt C API is prefixed
with "(optional)" or "optional", the corresponding python argument is
now generated as an optional argument, for example:
* virDomainSaveFlags:
* @domain: a domain object
* @to: path for the output file
* @dxml: (optional) XML config for adjusting guest xml used on restore
* @flags: bitwise-OR of virDomainSaveRestoreFlags
the corresponding python API is
saveFlags(self, to, dxml=None, flags=0)
The following python APIs are changed to:
blockCommit(self, disk, base, top, bandwidth=0, flags=0)
blockPull(self, disk, bandwidth=0, flags=0)
blockRebase(self, disk, base, bandwidth=0, flags=0)
migrate(self, dconn, flags=0, dname=None, uri=None, bandwidth=0)
migrate2(self, dconn, dxml=None, flags=0, dname=None, uri=None, bandwidth=0)
migrateToURI(self, duri, flags=0, dname=None, bandwidth=0)
migrateToURI2(self, dconnuri=None, miguri=None, dxml=None, flags=0, \
dname=None, bandwidth=0)
saveFlags(self, to, dxml=None, flags=0)
migrate(self, domain, flags=0, dname=None, uri=None, bandwidth=0)
migrate2(self, domain, dxml=None, flags=0, dname=None, uri=None, bandwidth=0)
restoreFlags(self, frm, dxml=None, flags=0)
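A quick illustration of the effect (hypothetical calls, assuming dom is
a virDomain object):
  dom.saveFlags("/tmp/demo.save")   # dxml and flags can now be omitted
  dom.blockPull("vda")              # bandwidth defaults to 0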
This reverts commit 5ac846e42e.
After further discussions with Alon Levy, I learned the following:
The use of '-vga qxl' vs. '-device qxl-vga' is completely orthogonal
to whether ram_size can be exposed. Downstream distros are interested
in backporting support for multi-head qxl, but this can be done in
one of two ways:
1. Support one head per PCI device. If you do this, then it makes
sense to have full control over the PCI address of each device. For
full control, you need '-device qxl-vga' instead of '-vga qxl'.
2. Support multiple heads through a single PCI device. If you do
this, then you need to allocate more RAM to that PCI device (enough
ram to cover the multiple screens). Here, the device is hard-coded
to 0:0:2.0, both in qemu and libvirt code.
Apparently, backporting ram_size changes to allow multiple heads in
a single device is much easier than backporting multiple device
support. Furthermore, the presence or absence of qxl-vga.surfaces
is no different than the presence or absence of qxl-vga.ram_size;
both properties can be applied regardless of whether you have one
PCI device (-vga qxl) or multiple (-device qxl-vga), so this property
is NOT a good witness of whether '-device qxl-vga' support has been
backported.
Downstream RHEL will NOT be using this patch; and worse, leaving this
patch in risks doing the wrong thing if compiling upstream libvirt
on RHEL, so the best course of action is to revert it. That means
that libvirt will go back to only using '-device qxl-vga' for qemu
>= 1.2, but this is just fine because we know of no distros that plan
on backporting multiple PCI address support to any older version of
qemu. Meanwhile, downstream can still use ram_size to pack multiple
heads through a single PCI device.
Right now, libvirt-guests gives awkward output. It's possible to
force faster failure by setting /etc/sysconfig/libvirt-guests to use:
ON_SHUTDOWN=shutdown
PARALLEL_SHUTDOWN=0
SHUTDOWN_TIMEOUT=1
ON_BOOT=ignore
at which point, we see:
$ service libvirt-guests restart
Running guests on default URI: a, b, d, c
Shutting down guests on default URI...
Starting shutdown on guest: a
Shutdown of guest a failed to complete in time.Starting shutdown on guest: b
Shutdown of guest b failed to complete in time.Starting shutdown on guest: d
Shutdown of guest d failed to complete in time.Starting shutdown on guest: c
Shutdown of guest c failed to complete in time.libvirt-guests is configured not to start any guests on boot
* tools/libvirt-guests.sh.in (shutdown_guest): Add missing newline.
Reported by Xuesong Zhang.
virPCIGetVirtualFunctions returns 0 even if there is no "virtfn"
entry under the device sysfs path.
And virPCIGetVirtualFunctions returns -1 when it fails to get
the PCI config space of one VF, while still keeping the
VFs already detected.
That's why udevProcessPCI and gather_pci_cap use logic like:
  if (!virPCIGetVirtualFunctions(syspath,
                                 &data->pci_dev.virtual_functions,
                                 &data->pci_dev.num_virtual_functions) ||
      data->pci_dev.num_virtual_functions > 0)
      data->pci_dev.flags |= VIR_NODE_DEV_CAP_FLAG_PCI_VIRTUAL_FUNCTION;
to tag the PCI device with the "virtual_function" cap.
However, this results in a VF also getting the "virtual_function" cap.
This patch fixes it by:
* Ignoring, with a warning, a VF whose PCI config space cannot be read
  (given that the successfully detected VFs are kept, it makes sense
  not to give up on the failure of one VF either), so
  virPCIGetVirtualFunctions will not return -1 except on out of memory.
* Freeing the allocated *virtual_functions on out of memory
And thus the logic can be changed to:
  int ret = virPCIGetVirtualFunctions(syspath,
                                      &data->pci_dev.virtual_functions,
                                      &data->pci_dev.num_virtual_functions);

  if (ret < 0)  /* now only happens on out of memory */
      goto out;

  if (data->pci_dev.num_virtual_functions > 0)
      data->pci_dev.flags |= VIR_NODE_DEV_CAP_FLAG_PCI_VIRTUAL_FUNCTION;
This abstracts nodeDeviceVportCreateDelete into a util function,
virManageVport, which can then be used by later storage patches
(to support persistent vHBAs, I don't want to create the vHBA
using the public API, which would not be good).
This enriches the HBA's xml by dumping the maximum number of vports
and the number of vports in use. The format is like:
<capability type='vport_ops'>
<max_vports>164</max_vports>
<vports>5</vports>
</capability>
* docs/formatnode.html.in: (Document the new XML)
* docs/schemas/nodedev.rng: (Add the schema)
* src/conf/node_device_conf.h: (New member for data.scsi_host)
* src/node_device/node_device_linux_sysfs.c: (Collect the value of
max_vports and vports)
This adds two util functions (virIsCapableFCHost and virIsCapableVport)
and renames the helper check_fc_host_linux to detect_scsi_host_caps.
check_capable_vport_linux is removed, as it's abstracted into the util
function virIsCapableVport. detect_scsi_host_caps now detects both
the fc_host and vport_ops capabilities. "stat(2)" is replaced with
"access(2)" for simplicity.
* src/util/virutil.h:
- Declare virIsCapableFCHost and virIsCapableVport
* src/util/virutil.c:
- Implement virIsCapableFCHost and virIsCapableVport
* src/node_device/node_device_linux_sysfs.c:
- Remove check_capable_vport_linux
- Rename check_fc_host_linux as detect_scsi_host_caps, and refactor
it a bit to detect both fc_host and vport_ops capabilities
* src/node_device/node_device_driver.h:
- Change/remove the related declarations
* src/node_device/node_device_udev.c: (Use detect_scsi_host_caps)
* src/node_device/node_device_hal.c: (Likewise)
* src/node_device/node_device_driver.c: (Likewise)
The use of 'stat' in nodeDeviceVportCreateDelete is only to check
whether the file exists or not; that's a bit of overkill, and it is
safe to replace it with the access(2) wrapper (virFileExists).
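Roughly, the check changes from the stat(2) form to the wrapper (a
sketch only; 'path' is an assumed variable name):
  struct stat st;
  if (stat(path, &st) == 0) {
      /* file exists */
  }
becomes:
  if (virFileExists(path)) {
      /* file exists */
  }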
"open_wwn_file" in node_device_linux_sysfs.c is redundant, on one
hand it duplicates the work of virFileReadAll; on the other hand it's
wasteful to keep a separate function for it, as it has no other users.
So I don't see why the file opening work cannot be done in
"read_wwn_linux".
"read_wwn_linux" can be abstracted into a util function, since all it
does is read a sysfs entry.
So this patch removes "open_wwn_file" and abstracts "read_wwn_linux"
into a util function "virReadFCHost" (a more general name because,
after these changes, it can read any of the fc_host entries).
* src/util/virutil.h: (Declare virReadFCHost)
* src/util/virutil.c: (Implement virReadFCHost)
* src/node_device/node_device_linux_sysfs.c: (Remove open_wwn_file,
and read_wwn_linux)
* src/node_device/node_device_driver.h: (Remove the declaration of
  read_wwn_linux, and the related macros)
* src/libvirt_private.syms: (Export virReadFCHost)
VIR_CONNECT_LIST_NODE_DEVICES_CAP_FC_HOST filters the FC HBAs,
and VIR_CONNECT_LIST_NODE_DEVICES_CAP_VPORTS filters the FC HBAs
which support vports.
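Assuming the matching capability names are wired up in virsh, the new
flags can be exercised with, e.g.:
  virsh nodedev-list --cap fc_host
  virsh nodedev-list --cap vports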
Presumably it was created for the fc_host and vport_ops capabilities,
but there is already enum virNodeDevScsiHostCapFlags for them;
enum virNodeDevHBACapType is unused, and both
VIR_ENUM_DECL and VIR_ENUM_IMPL actually use the wrong enum name
"virNodeDevHBACap".
The docs assumed the command always works for QEMU and other
hypervisors. As the memory is changed using the balloon mechanism, a
live increase of the maximum memory limit isn't supported. Fix the
docs to mention this limitation.
For a root filesystem with type=file or type=block, the LXC
container was forgetting to actually mount it, before doing
the pivot root step.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently the lxc controller sets up the devpts instance on
$rootfsdef->src, but this only works if $rootfsdef is using
type=mount. To support type=block or type=file for the root
filesystem, we must use /var/lib/libvirt/lxc/$NAME.devpts
for the temporary devpts mount in the controller
Instead of using /var/lib/libvirt/lxc/$NAME for the FUSE
filesystem, use /var/lib/libvirt/lxc/$NAME.fuse. This allows
room for other temporary mounts in the same directory
Some of the LXC callbacks did not lock the virDomainObjPtr
instance. This caused transient errors like
error: Failed to start domain busy-mount
error: cannot rename file '/var/run/libvirt/lxc/busy-mount.xml.new' as '/var/run/libvirt/lxc/busy-mount.xml': No such file or directory
as 2 threads tried to update the status file concurrently
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The virDomainGetRootFilesystem was only returning filesystems
with type=mount. This is bogus - any type of filesystem is
valid as the root, if dst=/.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Noticed that parsing bond interface XML containing the miimon element
fails
<interface type="bond" name="bond0">
...
<bond mode="active-backup">
<miimon freq="100" carrier="netif"/>
...
</bond>
</interface>
This configuration does not contain the optional updelay and downdelay
attributes, but parsing will fail due to returning the result of
virXPathULong (-1 when the attribute doesn't exist) from
virInterfaceDefParseBond after examining the updelay attribute.
While fixing this bug, clean up the function to use virXPathInt instead
of virXPathULong, and store the result directly instead of using a tmp
variable. Using virXPathInt actually fixes a potential silent
truncation bug noted by Eric Blake.
Also, there is no cleanup in the error label. Remove the label,
returning failure where failure occurs and success if the end of the
function is reached.
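A sketch of the intended handling (not the exact patch; the XPath
expression and error message are assumptions): an absent optional
attribute (virXPathInt returning -1) is fine, only a malformed value
(-2) is an error:
  if (virXPathInt("string(./miimon/@updelay)", ctxt,
                  &def->data.bond.updelay) == -2) {
      virReportError(VIR_ERR_XML_ERROR, "%s",
                     _("invalid updelay attribute in bond interface"));
      return -1;
  }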
Don't print the pool option name if it's null.
Before:
virsh # vol-name vol
error: failed to get vol 'vol', specifying --(null) might help
error: Storage volume not found: no storage vol with matching path vol
After:
virsh # vol-name vol
error: failed to get vol 'vol'
error: Storage volume not found: no storage vol with matching path vol
Bug: https://bugzilla.redhat.com/show_bug.cgi?id=924571
The 'nodeset' variable was never initialized, causing a later
VIR_FREE(nodeset) to free uninitialized memory.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This does nothing more than adding the new device and capability.
The device is present since QEMU 1.2.0.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
A better way to do this would be to use a configuration file like
[iscsi "target-name"]
user = name
password = pwd
and pass it via -readconfig. This would remove the username and password
from the "ps" output. For now, however, keep this solution.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Only sheepdog actually required it in the code, and we can use 7000 as the
default---the same value that QEMU uses for the simple "sheepdog:VOLUME"
syntax. With this change, the schema can be fixed to allow no port.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
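For instance, a sheepdog disk source can now omit the port attribute
entirely (illustrative XML, the volume and host names are made up):
  <disk type='network' device='disk'>
    <source protocol='sheepdog' name='myvolume'>
      <host name='sheep.example.org'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>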
libiscsi provides a userspace iSCSI initiator.
The main advantage over the kernel initiator is that it is very
easy to provide different initiator names for VMs on the same host.
Thus libiscsi supports usage of persistent reservations in the VM,
which otherwise would only be possible with NPIV.
libiscsi uses "iscsi" as the scheme, not "iscsi+tcp". We can change
this in the tests (in a backwards-compatible manner, because
QEMU uses TCP as the default transport for both Gluster and NBD).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>