Swap out all instances of cpu_set_t and replace them with virBitmap,
which some of the code was already using anyway.
The changes are pretty mechanical, with one notable exception: an
assumption has been added on the maximum value we can run into while
reading either socket_id or core_id.
While this specific assumption was not in place before, we were
using cpu_set_t improperly: we neither made sure never to set any bit
past CPU_SETSIZE nor explicitly allocated bigger bitmaps; in fact
the default size of a cpu_set_t, 1024, is way too low to run our
testsuite, which includes core_id values in the 2000s.
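To illustrate the difference, here is a rough sketch of the change; ID_MAX
stands in for the newly assumed upper bound on socket_id/core_id and is a
made-up name, not the actual constant used:

/* before: fixed-size cpu_set_t, silently limited to CPU_SETSIZE (1024) */
cpu_set_t sock_map;
CPU_ZERO(&sock_map);
CPU_SET(socket_id, &sock_map);   /* undefined if socket_id >= CPU_SETSIZE */

/* after: explicitly sized virBitmap, with the new upper bound enforced */
virBitmapPtr sock_bitmap = NULL;
if (socket_id > ID_MAX ||
    !(sock_bitmap = virBitmapNew(ID_MAX + 1)) ||
    virBitmapSetBit(sock_bitmap, socket_id) < 0)
    goto error;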
The new name makes it clear that the returned bitmap contains the
information about which CPUs are online, not e.g. which CPUs are
present.
No behavioral change.
If the cpu/present file is not available, we assume that the kernel
is too old to support non-consecutive CPU ids and return a bitmap
with all the bits set to represent this fact. This assumption is
already exploited in nodeGetCPUCount().
This means users of this API can expect the information to always
be available unless an error has occurred, and no longer need to
treat the NULL return value as a special case.
The error message has been updated as well.
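A minimal sketch of the fallback path, assuming helper names such as
virFileExists()/virBitmapSetAll(), with present_path and max_cpus as
illustrative placeholders:

/* kernel too old to expose cpu/present: assume consecutive CPU ids */
if (!virFileExists(present_path)) {
    if (!(bitmap = virBitmapNew(max_cpus)))
        return NULL;              /* only real errors yield NULL now */
    virBitmapSetAll(bitmap);      /* mark every id in 0..max_cpus-1 */
    return bitmap;
}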
The original name was confusing because the function returns the number
of CPUs, not the maximum CPU id. The comment above the function has
been updated to reflect this.
No behavioral changes.
During the recent refactoring/cleanups, a bug was introduced
that caused all CPUs to be reported as online unless the sysfs
cpu/present file was available.
This commit fixes the fallback code path by building the directory
path passed to virNodeGetCpuValue() correctly.
When cleaning up the data (taken from a running system) for inclusion
I went a little too far and deleted a bunch of links that should have
been left alone. The test worked despite this because it was going
through a fallback code path.
A few other files are affected as well: again, the data is taken from
a running system, so even though we would probably be okay if we
just added the links, aligning everything is definitely safer.
The scope name, even according to our docs, is
"machine-$DRIVER\x2d$VMNAME.scope". virSystemdMakeScopeName would use the
resource partition name instead of "machine-" if one was specified, thus
creating invalid scope paths.
This makes libvirt drop cgroups for a VM that uses a custom resource
partition upon reconnecting, since the detected scope name would not
match the expected name generated by virSystemdMakeScopeName.
The error is exposed by the following log entry:
debug : virCgroupValidateMachineGroup:302 : Name 'machine-qemu\x2dtestvm.scope' for controller 'cpu' does not match 'testvm', 'testvm.libvirt-qemu' or 'machine-test-qemu\x2dtestvm.scope'
for a "/machine/test" resource and "testvm" vm.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1238570
As of Fedora polkit-0.113-2, polkit-devel only pulls in polkit-libs, not
the full polkit package, but we need the latter for pkcheck, otherwise our
configure test fails.
Reuse the vshBlockJobWait infrastructure to refactor cmdBlockCommit to
use the common code. This additionally fixes a bug with newer qemu
versions, where an active commit with --pivot would fail to pivot, since
qemu reaches 100% completion but the job doesn't switch to the
synchronized phase right away.
Introduce a helper function that provides the logic for waiting for block
job completion, so that the three open-coded places can be unified and
improved.
This patch introduces the whole logic and uses it to fix
cmdBlockPull. The vshBlockJobWait function provides common logic for
block job waiting that should be robust enough to work across all
previous versions of libvirt. Since virsh allows passing user-provided
strings as paths of block devices, we can't reliably use block job events
to detect block job states, so the function contains a great deal
of fallback logic.
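A simplified sketch of the fallback polling idea (not the actual
vshBlockJobWait code, which also wires up events and timeouts):

virDomainBlockJobInfo info;
int rc;

for (;;) {
    if ((rc = virDomainGetBlockJobInfo(dom, path, &info, 0)) < 0)
        return -1;                  /* querying the job failed */
    if (rc == 0)
        break;                      /* no job left: completed or aborted */
    if (info.end != 0 && info.cur == info.end)
        break;                      /* progress reports completion */
    usleep(500 * 1000);             /* poll again after a short sleep */
}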
A few parts of the code looked at the current progress and assumed that
a two-phase block job is in the _READY state as soon as the progress
reached 100% (info.cur == info.end). In current versions of qemu this
assumption is invalid and qemu exposes a new flag 'ready' in the
query-block-jobs output that is set to true if the job is actually
finished.
This patch adds internal data handling for reading the 'ready' flag and
acting appropriately as long as the flag is present.
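For reference, the flag can be inspected directly in the QMP output, e.g.
(field layout abbreviated, values illustrative):

$ virsh qemu-monitor-command vm --pretty '{"execute": "query-block-jobs"}'
...
    "device": "drive-virtio-disk0",
    "len": 10737418240,
    "offset": 10737418240,
    "ready": true,
...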
While this still doesn't fix the virsh client problem with two-phase
block jobs and the --pivot option, it at least improves the error
message:
$ virsh blockcommit --wait --verbose vm vda --base vda[1] --active --pivot
Block commit: [100 %]error: failed to pivot job for disk vda
error: internal error: unable to execute QEMU command 'block-job-complete': The active block job for device 'drive-virtio-disk0' cannot be completed
to
$ virsh blockcommit --wait --verbose VM vda --base vda[1] --active --pivot
Block commit: [100 %]error: failed to pivot job for disk vda
error: block copy still active: disk 'vda' not ready for pivot yet
Use the VSH_EXCLUSIVE_OPTIONS macro to exclude combinations of --pivot
and --keep-overlay, and refactor the enforcing of the --wait option and
other flags that imply --wait.
Use VSH_EXCLUSIVE_OPTIONS_VAR to interlock incompatible options.
Since a variable named 'abort' would conflict with older compilers, use
VSH_EXCLUSIVE_OPTIONS for the --abort option.
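A hedged sketch of the two macro forms (the local variable names are
illustrative, not necessarily the ones used in cmdBlockJob):

bool pivot = vshCommandOptBool(cmd, "pivot");
bool info = vshCommandOptBool(cmd, "info");

VSH_EXCLUSIVE_OPTIONS_VAR(pivot, info);   /* interlocks the two local bools */
VSH_EXCLUSIVE_OPTIONS("abort", "pivot");  /* looks the options up by name, so
                                           * no local 'abort' variable needed */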
Add functionality to libvirt that allows it to query the interface for
the availability of the RDMA and tx-udp_tnl-segmentation offloading NIC
capabilities.
Here is an example of the feature XML definition:
<device>
  <name>net_eth4_90_e2_ba_5e_a5_45</name>
  <path>/sys/devices/pci0000:00/0000:00:03.0/0000:08:00.1/net/eth4</path>
  <parent>pci_0000_08_00_1</parent>
  <capability type='net'>
    <interface>eth4</interface>
    <address>90:e2:ba:5e:a5:45</address>
    <link speed='10000' state='up'/>
    <feature name='rx'/>
    <feature name='tx'/>
    <feature name='sg'/>
    <feature name='tso'/>
    <feature name='gso'/>
    <feature name='gro'/>
    <feature name='rxvlan'/>
    <feature name='txvlan'/>
    <feature name='rxhash'/>
    <feature name='rdma'/>
    <feature name='txudptnl'/>
    <capability type='80203'/>
  </capability>
</device>
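For reference, XML like the above is the kind of output one would get
from, e.g.:
$ virsh nodedev-dumpxml net_eth4_90_e2_ba_5e_a5_45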
Currently, build fails on FreeBSD with:
CC libvirt_driver_la-nodeinfo.lo
nodeinfo.c:1941:56: error: use of undeclared identifier 'SYSFS_SYSTEM_PATH'
const char *prefix = sysfs_prefix ? sysfs_prefix : SYSFS_SYSTEM_PATH;
^
1 error generated.
This is caused by commit b97b3048 that added sysfs_prefix to
nodeCapsInitNUMA and used SYSFS_SYSTEM_PATH.
Fix it by unconditionally defining SYSFS_SYSTEM_PATH instead of defining
it under #ifdef __linux__.
If one calls update-device with information that is not updatable,
libvirt reports success even though no data were updated. The example
used in the bug linked below updates a device with <boot order='2'/>,
which, in my opinion, is a valid thing to request from the user's
perspective, mainly since we properly error out if the user wants to
update such data on, for example, a network device.
And since there are many things that might happen (update-device on a
disk basically only knows how to change removable media), check for
what's changing. Moreover, since the function might be usable in other
drivers (updating only the disk path is a valid possibility), let's
abstract it so it works on any two disks.
We can't possibly check for everything since for many fields our code
does not properly differentiate between default and unspecified values.
Even though this could be changed, I don't feel like it's worth the
complexity so it's not the aim of this patch.
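A hypothetical sketch of the abstraction; the field selection is
illustrative, not the exact set of checks:

static bool
diskSourceOnlyDiffers(const virDomainDiskDef *orig,
                      const virDomainDiskDef *disk)
{
    /* reject anything we don't know how to update on a live disk */
    if (orig->device != disk->device ||
        orig->bus != disk->bus ||
        orig->info.bootIndex != disk->info.bootIndex)
        return false;

    return true;   /* only the (changeable) media/source differs */
}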
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1007228
After the upgrade to perl-5.22.0, it started complaining about one of our
scripts. The thing is that even though the script works, perl wants all
curly brackets escaped properly. The change is not functional; it merely
gets rid of the following error:
Unescaped left brace in regex is deprecated, passed through in regex;
marked by <-- HERE in m/^enum { <-- HERE / at -e line 3.
There is one more error like this that I'm getting, but it is because of
GNU automake bug #21001:
https://debbugs.gnu.org/cgi/bugreport.cgi?bug=21001
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Allow specifying the maximum number of heads to the QXL driver.
This can actually be a compatibility problem, as heads in the XML
configuration used to be set by default to '1'.
Signed-off-by: Frediano Ziglio <fziglio@redhat.com>
Currently, when trying to virsh pool-define/virsh pool-build a new
'dir' pool, if the target directory already exists, virsh
pool-build/virStoragePoolBuild will error out. This is a change of
behaviour compared to e.g. libvirt 1.2.13.
This is caused by the wrong type being used for the dir_create_flags
variable in virStorageBackendFileSystemBuild: it's defined as a bool
but is used as a flag bit field, so it should be unsigned int (this
matches the type virDirCreate expects for this variable).
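To illustrate why the type matters (made-up flag names, plain C):

enum { FLAG_A = (1 << 0), FLAG_B = (1 << 1) };

bool wrong = FLAG_B;          /* bool collapses any non-zero value to 1 */
unsigned int right = FLAG_B;  /* keeps the actual bit */

/* (wrong & FLAG_B) == 0 here, so the flag is silently lost,
 * while (right & FLAG_B) != 0 as intended */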
This should fix https://bugzilla.gnome.org/show_bug.cgi?id=752417 (GNOME
Boxes) and https://bugzilla.redhat.com/show_bug.cgi?id=1244080
(downstream virt-manager).
The auto-spawn code would originally attempt to spawn the
daemon for both ENOENT and ECONNREFUSED errors from connect().
The various refactorings eventually lost this, so we now only
spawn the daemon on ENOENT. The result is that if the daemon exits
uncleanly, so that the socket is left in the filesystem, we
will never be able to auto-spawn the daemon again.
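A simplified sketch of the intended check (error handling and the retry
loop omitted):

if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
    if (errno == ENOENT || errno == ECONNREFUSED) {
        /* no socket, or a stale socket left by an unclean daemon exit:
         * try to auto-spawn the daemon, then retry the connection */
    } else {
        /* genuine error: report it */
    }
}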
The information on companion controllers we give in our documentation is
rather sparse. For example, it looks like any controller can be used as
a companion one. Also, when using ich9-uhci2, for example, we are able
to set some sensible defaults, but it might get confusing for the user
as we don't do that for all controller models.
https://bugzilla.redhat.com/show_bug.cgi?id=1069590
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Resolve an error reporting bug introduced by commit id '761491e', which
just took the return value of virStorageBackendRBDCreateImage and used it
as the basis for the message generated. This would report EPERM regardless
of the error seen.
We used to look at the librbd code version and depending on that
we would invoke rbd_create3() or rbd_create().
Since librbd version 0.67.9 we can however tell RBD that it should
create rbd format 2 images even if we invoke rbd_create().
The fewer options we pass to librbd, the more we can lean on the sane
defaults it uses.
For rbd_create3() we had things like the stripe count and unit hardcoded
in libvirt, and that might cause problems down the road.
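A rough sketch of the idea, assuming 'rbd_default_format' is the librbd
option in question (error handling trimmed):

int order = 0;

/* ask librbd for format 2 images instead of hardcoding striping
 * parameters through rbd_create3() */
rados_conf_set(cluster, "rbd_default_format", "2");

if (rbd_create(ioctx, name, capacity, &order) < 0)
    goto cleanup;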
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Commit ed8155eafb documented that
the mhz field in virNodeInfo might be 0 if the frequency is unknown.
Modify virsh to know about that.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
This patch adds a test for the qemu command line generation.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Jason J. Herne <jjherne@us.ibm.com>
Reviewed-by: Stefan Zimmermann <stzi@linux.vnet.ibm.com>
For s390-ccw-virtio machines the default bus type is set to ccw.
Specifying an address element allows overriding the default.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Jason J. Herne <jjherne@us.ibm.com>
Reviewed-by: Stefan Zimmermann <stzi@linux.vnet.ibm.com>
Add the 9pfs support for virtio-ccw that was recently added in qemu.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Jason J. Herne <jjherne@us.ibm.com>
Reviewed-by: Stefan Zimmermann <stzi@linux.vnet.ibm.com>
Some drivers don't expose available huge page sizes in the
capabilities XML. For instance, the LXC driver is one of those.
This has the downside that when virsh tries to get
aggregated info on free pages across all NUMA nodes, it fails.
The problem is that the virNodeGetFreePages() API expects
the caller to pass an array of the page sizes they are interested in.
In virsh, this array is filled from the capabilities from
'/capabilities/host/cpu/pages' XPath. As said, in LXC
there's no such XPath and therefore virsh fails currently.
But hey, we can fallback: the page sizes are exposed under
'/capabilities/host/topology/cells/cell/pages'. The page
size can be collected from there, and voilà the command
works again. But now we must make sure that there are no
duplicates in the array passed to the public API. Otherwise
we won't get as beautiful an output as we are getting now.
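A minimal sketch of the deduplication step (variable names illustrative):

size_t i, j, npages = 0;

for (i = 0; i < ncell_pages; i++) {
    bool seen = false;

    for (j = 0; j < npages; j++) {
        if (pagesizes[j] == cell_pagesizes[i]) {
            seen = true;
            break;
        }
    }

    if (!seen)
        pagesizes[npages++] = cell_pagesizes[i];   /* keep each size once */
}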
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
There's this condition:
flags & VIR_DOMAIN_AFFECT_CURRENT && virDomainIsActive(dom)
which can never be true since VIR_DOMAIN_AFFECT_CURRENT has a hardcoded
value of zero. Therefore the virDomainIsActive() call is dead code.
However, the condition could make sense if it is rewritten as follows:
!(flags & VIR_DOMAIN_AFFECT_CONFIG) && virDomainIsActive(dom)
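Illustrated in context (simplified), the rewritten check only touches the
live state when the change is not limited to the persistent config:

if (!(flags & VIR_DOMAIN_AFFECT_CONFIG) && virDomainIsActive(dom) == 1) {
    /* operate on the running domain */
}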
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If we are migrating to a UNIX socket, we accept() a connection
from qemu and use that FD to set up a tunnel. However, the FD is
not closed as often as it should be.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Make sure sysfs_prefix, when present, is always the first argument
to a function; don't use a different name to refer to it; check
whether it is NULL, and hence SYSFS_SYSTEM_PATH should be used, only
when using it directly and not just passing it down to another
function; always pass down the same value we've been passed when
calling another function.
In commit 641a145d73 I've added code that
resets the balloon memory value to full size prior to resuming the vCPUs
since the size certainly was not reduced at that point.
Since qemuProcessStart is also used in code paths with already booted
up guests (migration, save/restore), the assumption is not entirely true
since the guest might have already been running before.
This patch adds a function that queries the monitor rather than using
the full size since a balloon event would not be reissued in case we are
recovering a saved migration state.
Additionally, the new function is also used when reconnecting to a VM
after a libvirtd restart, since we might have missed a few balloon events
while libvirtd was not running.
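A rough sketch of the idea, under the assumption that
qemuMonitorGetBalloonInfo() is the monitor query used; the surrounding
names are illustrative, not the actual new helper:

unsigned long long balloon;

/* ask qemu for the actual balloon size instead of assuming full size */
if (qemuMonitorGetBalloonInfo(mon, &balloon) < 0)
    balloon = vm->def->mem.cur_balloon;   /* fall back to the stored value */
else
    vm->def->mem.cur_balloon = balloon;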
https://bugzilla.redhat.com/show_bug.cgi?id=1232663
In one of my previous patches (bcd9a564) I've tried to fix the problem
that we blindly assumed strict NUMA mode for guests. This led to
several problems, like us pinning a domain onto a nodeset via libnuma
along with CGroups. Once the nodeset was changed by the user, well, it did
not result in the desired effect. See the original commit for more info.
But the commit I wrote had a bug: when NUMA parameters are changed on
a running domain we require the domain to be strictly pinned onto a
nodeset. Due to a typo a condition was mis-evaluated.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>