f946462e14 changed behavior by setting
VIR_DOMAIN_DEVICE_ADDRESS_TYPE_PCI upfront. If we do so before invoking
qemuDomainPCIAddressEnsureAddr, we merely try to reserve the PCI slot via
qemuDomainPCIAddressReserveSlot instead of reserving a new address via
qemuDomainPCIAddressSetNextAddr, which fails with:
$ ~/run-tck-test domain/200-disk-hotplug.t
./scripts/domain/200-disk-hotplug.t .. # Creating a new transient domain
./scripts/domain/200-disk-hotplug.t .. 1/5 # Attaching the new disk /var/lib/jenkins/jobs/libvirt-tck-build/workspace/scratchdir/200-disk-hotplug/extra.img
# Failed test 'disk has been attached'
# at ./scripts/domain/200-disk-hotplug.t line 67.
# died: Sys::Virt::Error (libvirt error code: 1, message: internal error unable to reserve PCI address 0:0:0.0
# )
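For illustration only, the difference between the two code paths can be
sketched as the following self-contained toy (the type and helper names are
made up; this is not libvirt's implementation):

    /* Toy stand-ins; not libvirt's real structures or functions. */
    enum addr_type { ADDR_TYPE_NONE = 0, ADDR_TYPE_PCI };
    struct pci_addr { unsigned int domain, bus, slot, function; };
    struct dev_info { enum addr_type type; struct pci_addr addr; };

    /* Stand-in for qemuDomainPCIAddressReserveSlot: reserving the zeroed
     * address 0:0:0.0 is exactly the failure quoted in the test output. */
    static int reserve_slot(const struct pci_addr *a)
    {
        if (a->domain == 0 && a->bus == 0 && a->slot == 0 && a->function == 0)
            return -1;  /* "unable to reserve PCI address 0:0:0.0" */
        return 0;
    }

    /* Stand-in for qemuDomainPCIAddressSetNextAddr: pick a free slot first. */
    static int assign_next_addr(struct dev_info *info)
    {
        info->addr.slot = 3;        /* pretend slot 3 is the next free one */
        info->type = ADDR_TYPE_PCI;
        return reserve_slot(&info->addr);
    }

    static int ensure_pci_addr(struct dev_info *info)
    {
        if (info->type == ADDR_TYPE_PCI)      /* type set upfront ...        */
            return reserve_slot(&info->addr); /* ... reserves 0:0:0.0, fails */
        return assign_next_addr(info);        /* otherwise a slot is chosen  */
    }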
Since we switched from the direct host migration scheme to the one
where we connect to the destination and then just pass an FD to
QEMU, we have uncovered a QEMU bug. QEMU expects the migration FD to
block. However, we are passing a non-blocking one, which results in
cryptic error messages like:
qemu: warning: error while loading state section id 2
load of migration failed
The bug is already known to the QEMU folks, but we should work around
already released QEMU versions. The patch was originally proposed by Stefan
Hajnoczi <stefanha@gmail.com>
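The workaround boils down to clearing O_NONBLOCK on the FD before handing
it to QEMU, along these lines (a sketch; the helper name is illustrative,
not necessarily the one used in the patch):

    #include <fcntl.h>

    /* Make an already-open descriptor blocking so QEMU reads the
     * incoming migration stream without hitting EAGAIN. */
    static int make_fd_blocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL);

        if (flags < 0)
            return -1;
        if ((flags & O_NONBLOCK) &&
            fcntl(fd, F_SETFL, flags & ~O_NONBLOCK) < 0)
            return -1;
        return 0;
    }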
virConnectOpenAuth didn't require 'name' to be specified (VIR_DEBUG
used NULLSTR() for the output) and by default, if name == NULL, the
default connection URI is used. This was not indicated in the
documentation and wasn't checked for in other APIs' VIR_DEBUG outputs.
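For illustration, a caller relying on that default-URI behaviour looks
roughly like this (standard libvirt API calls; error handling kept minimal):

    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        /* name == NULL: libvirt picks the default connection URI, the
         * behaviour the documentation now spells out explicitly. */
        virConnectPtr conn = virConnectOpenAuth(NULL, virConnectAuthPtrDefault, 0);
        char *uri;

        if (!conn)
            return 1;
        if ((uri = virConnectGetURI(conn))) {
            printf("connected to %s\n", uri);
            free(uri);
        }
        virConnectClose(conn);
        return 0;
    }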
When arguments are prefixed with the string "(optional)" or "optional" in
the descriptions of libvirt C APIs, these arguments will be set as
optional arguments in python, for example:
* virDomainSaveFlags:
* @domain: a domain object
* @to: path for the output file
* @dxml: (optional) XML config for adjusting guest xml used on restore
* @flags: bitwise-OR of virDomainSaveRestoreFlags
the corresponding python API is
saveFlags(self, to, dxml=None, flags=0)
The following python APIs are changed to:
blockCommit(self, disk, base, top, bandwidth=0, flags=0)
blockPull(self, disk, bandwidth=0, flags=0)
blockRebase(self, disk, base, bandwidth=0, flags=0)
migrate(self, dconn, flags=0, dname=None, uri=None, bandwidth=0)
migrate2(self, dconn, dxml=None, flags=0, dname=None, uri=None, bandwidth=0)
migrateToURI(self, duri, flags=0, dname=None, bandwidth=0)
migrateToURI2(self, dconnuri=None, miguri=None, dxml=None, flags=0, \
dname=None, bandwidth=0)
saveFlags(self, to, dxml=None, flags=0)
migrate(self, domain, flags=0, dname=None, uri=None, bandwidth=0)
migrate2(self, domain, dxml=None, flags=0, dname=None, uri=None, bandwidth=0)
restoreFlags(self, frm, dxml=None, flags=0)
This reverts commit 5ac846e42e.
After further discussions with Alon Levy, I learned the following:
The use of '-vga qxl' vs. '-device qxl-vga' is completely orthogonal
to whether ram_size can be exposed. Downstream distros are interested
in backporting support for multi-head qxl, but this can be done in
one of two ways:
1. Support one head per PCI device. If you do this, then it makes
sense to have full control over the PCI address of each device. For
full control, you need '-device qxl-vga' instead of '-vga qxl'.
2. Support multiple heads through a single PCI device. If you do
this, then you need to allocate more RAM to that PCI device (enough
ram to cover the multiple screens). Here, the device is hard-coded
to 0:0:2.0, both in qemu and libvirt code.
Apparently, backporting ram_size changes to allow multiple heads in
a single device is much easier than backporting multiple device
support. Furthermore, the presence or absence of qxl-vga.surfaces
is no different than the presence or absence of qxl-vga.ram_size;
both properties can be applied regardless of whether you have one
PCI device (-vga qxl) or multiple (-device qxl-vga), so this property
is NOT a good witness of whether '-device qxl-vga' support has been
backported.
Downstream RHEL will NOT be using this patch; and worse, leaving this
patch in risks doing the wrong thing if compiling upstream libvirt
on RHEL, so the best course of action is to revert it. That means
that libvirt will go back to only using '-device qxl-vga' for qemu
>= 1.2, but this is just fine because we know of no distros that plan
on backporting multiple PCI address support to any older version of
qemu. Meanwhile, downstream can still use ram_size to pack multiple
heads through a single PCI device.
Right now, libvirt-guests gives awkward output. It's possible to
force faster failure by setting /etc/sysconfig/libvirt-guests to use:
ON_SHUTDOWN=shutdown
PARALLEL_SHUTDOWN=0
SHUTDOWN_TIMEOUT=1
ON_BOOT=ignore
at which point, we see:
$ service libvirt-guests restart
Running guests on default URI: a, b, d, c
Shutting down guests on default URI...
Starting shutdown on guest: a
Shutdown of guest a failed to complete in time.Starting shutdown on guest: b
Shutdown of guest b failed to complete in time.Starting shutdown on guest: d
Shutdown of guest d failed to complete in time.Starting shutdown on guest: c
Shutdown of guest c failed to complete in time.libvirt-guests is configured not to start any guests on boot
* tools/libvirt-guests.sh.in (shutdown_guest): Add missing newline.
Reported by Xuesong Zhang.
virPCIGetVirtualFunctions returns 0 even if there is no "virtfn"
entry under the device sysfs path.
And virPCIGetVirtualFunctions returns -1 when it fails to get
the PCI config space of one VF, while keeping the VFs already
detected.
That's why udevProcessPCI and gather_pci_cap use logic like:
    if (!virPCIGetVirtualFunctions(syspath,
                                   &data->pci_dev.virtual_functions,
                                   &data->pci_dev.num_virtual_functions) ||
        data->pci_dev.num_virtual_functions > 0)
        data->pci_dev.flags |= VIR_NODE_DEV_CAP_FLAG_PCI_VIRTUAL_FUNCTION;
to tag the PCI device with the "virtual_function" cap.
However, this results in a VF also getting the "virtual_function" cap.
This patch fixes it by:
* Ignoring, with a warning, any VF whose PCI config space cannot be
  read (given that the successfully detected VFs are kept anyway, it
  makes sense not to give up on the failure of one VF either), so that
  virPCIGetVirtualFunctions will not return -1 except on out of memory.
* Freeing the allocated *virtual_functions when out of memory
And thus the logic can be changed to:
    int ret = virPCIGetVirtualFunctions(syspath,
                                        &data->pci_dev.virtual_functions,
                                        &data->pci_dev.num_virtual_functions);

    if (ret < 0)  /* out of memory */
        goto out;

    if (data->pci_dev.num_virtual_functions > 0)
        data->pci_dev.flags |= VIR_NODE_DEV_CAP_FLAG_PCI_VIRTUAL_FUNCTION;
This abstracts nodeDeviceVportCreateDelete into a util function
virManageVport, which can be further used by later storage patches
(to support persistent vHBAs; I don't want to create the vHBA
using the public API there, which would not be good).
This enriches the HBA's XML by dumping the maximum number of vports
and the number of vports in use. The format is like:
<capability type='vport_ops'>
<max_vports>164</max_vports>
<vports>5</vports>
</capability>
* docs/formatnode.html.in: (Document the new XML)
* docs/schemas/nodedev.rng: (Add the schema)
* src/conf/node_device_conf.h: (New member for data.scsi_host)
* src/node_device/node_device_linux_sysfs.c: (Collect the value of
max_vports and vports)
This adds two util functions (virIsCapableFCHost and virIsCapableVport),
and renames the helper check_fc_host_linux to detect_scsi_host_caps;
check_capable_vport_linux is removed, as it's abstracted into the util
function virIsCapableVport. detect_scsi_host_caps now detects both
the fc_host and vport_ops capabilities. "stat(2)" is replaced with
"access(2)", which is all that is needed for an existence check.
* src/util/virutil.h:
- Declare virIsCapableFCHost and virIsCapableVport
* src/util/virutil.c:
- Implement virIsCapableFCHost and virIsCapableVport
* src/node_device/node_device_linux_sysfs.c:
- Remove check_capable_vport_linux
- Rename check_fc_host_linux as detect_scsi_host_caps, and refactor
it a bit to detect both fc_host and vport_ops capabilities
* src/node_device/node_device_driver.h:
- Change/remove the related declarations
* src/node_device/node_device_udev.c: (Use detect_scsi_host_caps)
* src/node_device/node_device_hal.c: (Likewise)
* src/node_device/node_device_driver.c (Likewise)
The use of 'stat' in nodeDeviceVportCreateDelete is only to check
whether the file exists or not; that's a bit of overkill, and it is
safe to replace it with the access(2) wrapper virFileExists.
"open_wwn_file" in node_device_linux_sysfs.c is redundant, on one
hand it duplicates work of virFileReadAll, on the other hand, it's
waste to use a function for it, as there is no other users of it.
So I don't see why the file opening work cannot be done in
"read_wwn_linux".
"read_wwn_linux" can be abstracted as an util function. As what all
it does is to read the sysfs entry.
So this patch removes "open_wwn_file", and abstract "read_wwn_linux"
as an util function "virReadFCHost" (a more general name, because
after changes, it can read each of the fc_host entry now).
* src/util/virutil.h: (Declare virReadFCHost)
* src/util/virutil.c: (Implement virReadFCHost)
* src/node_device/node_device_linux_sysfs.c: (Remove open_wwn_file,
and read_wwn_linux)
* src/node_device/node_device_driver.h: (Remove the declaration of
read_wwn_linux, and the related macros)
* src/libvirt_private.syms: (Export virReadFCHost)
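For illustration, such a generic reader boils down to something like this
(a sketch only; the real virReadFCHost signature and path handling may
differ):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Read one fc_host attribute, e.g. "port_name" or "max_npiv_vports",
     * and return it as a freshly allocated string without the trailing
     * newline, or NULL on error. */
    static char *read_fc_host_entry(int host, const char *entry)
    {
        char path[256];
        char buf[256];
        FILE *fp;

        snprintf(path, sizeof(path),
                 "/sys/class/fc_host/host%d/%s", host, entry);
        if (!(fp = fopen(path, "r")))
            return NULL;
        if (!fgets(buf, sizeof(buf), fp)) {
            fclose(fp);
            return NULL;
        }
        fclose(fp);
        buf[strcspn(buf, "\n")] = '\0';
        return strdup(buf);
    }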
This adds VIR_CONNECT_LIST_NODE_DEVICES_CAP_FC_HOST to filter the FC HBAs,
and VIR_CONNECT_LIST_NODE_DEVICES_CAP_VPORTS to filter the FC HBAs
which support vports.
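For example, a client can then list only the NPIV-capable HBAs like this
(plain libvirt API usage; error handling trimmed):

    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen(NULL);
        virNodeDevicePtr *devs = NULL;
        int ndevs, i;

        if (!conn)
            return 1;

        /* Only FC HBAs that can create vports (i.e. NPIV-capable). */
        ndevs = virConnectListAllNodeDevices(conn, &devs,
                                             VIR_CONNECT_LIST_NODE_DEVICES_CAP_VPORTS);
        for (i = 0; i < ndevs; i++) {
            printf("%s\n", virNodeDeviceGetName(devs[i]));
            virNodeDeviceFree(devs[i]);
        }
        free(devs);
        virConnectClose(conn);
        return 0;
    }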
I guess it was created for the fc_host and vport_ops capabilities,
but there is already enum virNodeDevScsiHostCapFlags for them;
enum virNodeDevHBACapType is unused, and actually both
VIR_ENUM_DECL and VIR_ENUM_IMPL use the wrong enum name
"virNodeDevHBACap".
The docs assumed the command always works for QEMU and other
hypervisors. As this is done using the balloon mechanism, a live
increase of the maximum memory limit isn't supported. Fix the docs to
mention this limitation.
For a root filesystem with type=file or type=block, the LXC
container was forgetting to actually mount it, before doing
the pivot root step.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently the lxc controller sets up the devpts instance on
$rootfsdef->src, but this only works if $rootfsdef is using
type=mount. To support type=block or type=file for the root
filesystem, we must use /var/lib/libvirt/lxc/$NAME.devpts
for the temporary devpts mount in the controller
Instead of using /var/lib/libvirt/lxc/$NAME for the FUSE
filesystem, use /var/lib/libvirt/lxc/$NAME.fuse. This allows
room for other temporary mounts in the same directory
Some of the LXC callbacks did not lock the virDomainObjPtr
instance. This caused transient errors like
error: Failed to start domain busy-mount
error: cannot rename file '/var/run/libvirt/lxc/busy-mount.xml.new' as '/var/run/libvirt/lxc/busy-mount.xml': No such file or directory
as 2 threads tried to update the status file concurrently
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The virDomainGetRootFilesystem was only returning filesystems
with type=mount. This is bogus - any type of filesystem is
valid as the root, if dst=/.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Noticed that parsing bond interface XML containing the miimon element
fails:
<interface type="bond" name="bond0">
...
<bond mode="active-backup">
<miimon freq="100" carrier="netif"/>
...
</bond>
</interface>
This configuration does not contain the optional updelay and downdelay
attributes, but parsing will fail because virInterfaceDefParseBond
returns the result of virXPathULong (a -1 when the attribute doesn't
exist) after examining the updelay attribute.
While fixing this bug, clean up the function to use virXPathInt instead
of virXPathULong, and store the result directly instead of using a tmp
variable. Using virXPathInt actually fixes a potential silent
truncation bug noted by Eric Blake.
Also, there is no cleanup in the error label. Remove the label,
returning failure where failure occurs and success if the end of the
function is reached.
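The shape of the updelay handling after the fix is roughly this fragment
(a sketch, assuming virXPathInt's usual return convention of 0 on success,
-1 when the attribute is absent and -2 when it is malformed; error
reporting elided):

    int ret;

    /* updelay is optional: -1 just means the attribute is missing and
     * must not fail the parse; only a malformed value is an error. */
    ret = virXPathInt("string(./miimon/@updelay)", ctxt,
                      &def->data.bond.updelay);
    if (ret == -2) {
        /* report "invalid miimon updelay" here */
        return -1;
    }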
Don't print the pool option name if it's null.
Before:
virsh # vol-name vol
error: failed to get vol 'vol', specifying --(null) might help
error: Storage volume not found: no storage vol with matching path vol
After:
virsh # vol-name vol
error: failed to get vol 'vol'
error: Storage volume not found: no storage vol with matching path vol
Bug: https://bugzilla.redhat.com/show_bug.cgi?id=924571
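The fix amounts to a conditional around the error message in the volume
lookup helper, roughly (a sketch; vshError is virsh's error reporter and
n holds the volume name the user passed):

    if (pooloptname)
        vshError(ctl, _("failed to get vol '%s', specifying --%s might help"),
                 n, pooloptname);
    else
        vshError(ctl, _("failed to get vol '%s'"), n);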
The 'nodeset' variable was never initialized, causing a later
VIR_FREE(nodeset) to free uninitialized memory.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This does nothing more than adding the new device and capability.
The device is present since QEMU 1.2.0.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
A better way to do this would be to use a configuration file like
[iscsi "target-name"]
user = name
password = pwd
and pass it via -readconfig. This would remove the username and password
from the "ps" output. For now, however, keep this solution.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Only sheepdog actually required it in the code, and we can use 7000 as the
default---the same value that QEMU uses for the simple "sheepdog:VOLUME"
syntax. With this change, the schema can be fixed to allow no port.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
libiscsi provides a userspace iSCSI initiator.
The main advantage over the kernel initiator is that it is very
easy to provide different initiator names for VMs on the same host.
Thus libiscsi supports usage of persistent reservations in the VM,
which otherwise would only be possible with NPIV.
libiscsi uses "iscsi" as the scheme, not "iscsi+tcp". We can change
this in the tests (while remaining backwards-compatible manner, because
QEMU uses TCP as the default transport for both Gluster and NBD).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The following four functions have not changed because default arguments
have to come after positional arguments. Changing them would break the
binding APIs.
migrate(self, dconn, flags, dname, uri, bandwidth):
migrate2(self, dconn, dxml, flags, dname, uri, bandwidth):
migrateToURI(self, duri, flags, dname, bandwidth):
migrateToURI2(self, dconnuri, miguri, dxml, flags, dname, bandwidth):
The 'trang' utility, which is able to transform '.rng' files into
'.rnc' files, reported some errors in our schemas that weren't caught
by the tools we use in the build. I haven't added a test for this,
but the validity can be checked by the following command:
trang -I rng -O rnc domain.rng domain.rnc
There were unescaped minuses in regular expressions, and we were
constraining int (which is by default in the range [-2^31; 2^31-1])
to a maximum of 2^32. But what we wanted was exactly an unsignedInt.
This patch adds three macros to the virsh source tree that help to
easily check for mutually exclusive parameters.
VSH_EXCLUSIVE_OPTIONS_EXPR has four arguments, two expressions to check
and two names of the parameters to print in the message.
VSH_EXCLUSIVE_OPTIONS is more specific and checks the command structure
for the parameters using vshCommandOptBool.
VSH_EXCLUSIVE_OPTIONS_VAR is meant to check boolean variables with the
same name as the parameters.
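Intended usage inside a virsh command handler would look roughly like this
(a sketch; --running and --paused are made-up options, and only one of the
two checks would normally be needed):

    static bool
    cmdExample(vshControl *ctl, const vshCmd *cmd)
    {
        bool running = vshCommandOptBool(cmd, "running");
        bool paused = vshCommandOptBool(cmd, "paused");

        /* Reject --running together with --paused; the macro reports the
         * conflict via ctl/cmd from the surrounding scope and bails out. */
        VSH_EXCLUSIVE_OPTIONS("running", "paused");

        /* The same check expressed on local booleans that are named
         * after the options. */
        VSH_EXCLUSIVE_OPTIONS_VAR(running, paused);

        /* ... actual command work ... */
        return true;
    }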
The addition of the emulator pinning APIs didn't do the right thing
for the python bindings. The default generator produced unusable
code for them.
This patch switches to proper code, as in the case of domain vCPU pinning.
This change can be classified as a python API breaker, but in the state
the code was in before, I doubt anyone was able to use it successfully.