While a zero allocation in safezero should be fine, it isn't when we use
posix_fallocate, which returns EINVAL on a zero allocation.
While we could skip the zero allocation only in safezero_posix_fallocate,
it's an optimization to do it for all allocations.
This fixes VM installation via virtinst for me, which otherwise aborts
like:
Starting install...
Retrieving file linux... | 5.9 MB 00:01 ...
Retrieving file initrd.gz... | 29 MB 00:07 ...
ERROR Couldn't create storage volume 'virtinst-linux.sBgds4': 'cannot fill file '/var/lib/libvirt/boot/virtinst-linux.sBgds4': Invalid argument'
The error was introduced by e30297b0 as spotted by Chunyan Liu
(cherry picked from commit 269d39afe5)
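The shape of the fix is a simple guard before the allocation call; a minimal sketch, with an illustrative function name rather than the actual libvirt code:

    #include <errno.h>
    #include <fcntl.h>

    static int
    alloc_sketch(int fd, off_t offset, off_t len)
    {
        int rc;

        if (len == 0)                /* nothing to allocate; avoids EINVAL */
            return 0;

        rc = posix_fallocate(fd, offset, len);
        if (rc != 0) {               /* returns an errno value, does not set errno */
            errno = rc;
            return -1;
        }
        return 0;
    }

Doing the check in the generic safezero path rather than only in the posix_fallocate backend also short-circuits the other backends, which is the optimization mentioned above.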
There are a couple of reports of things failing in this area (bug 1259070),
but it's tough to tell what's going wrong without stderr from
qemu-bridge-helper. So let's report stderr in the error message.
A couple of new examples:
virbr0 is inactive:
internal error: /usr/libexec/qemu-bridge-helper --use-vnet --br=virbr0 --fd=21: failed to communicate with bridge helper: Transport endpoint is not connected
stderr=failed to get mtu of bridge `virbr0': No such device
bridge isn't on the ACL:
internal error: /usr/libexec/qemu-bridge-helper --use-vnet --br=br0 --fd=21: failed to communicate with bridge helper: Transport endpoint is not connected
stderr=access denied by acl file
(cherry picked from commit db35beaa1d)
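A generic sketch of the technique (libvirt's vircommand helpers can do the equivalent internally; this is not the actual bridge-helper code): run the helper with its stderr redirected into a pipe so whatever it printed can be appended to the error message.

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Run argv[0] and capture (the first chunk of) its stderr into errbuf.
     * Returns 0 if the helper exited with status 0, -1 otherwise. */
    static int
    run_helper_capture_stderr(char *const argv[], char *errbuf, size_t buflen)
    {
        int errpipe[2];
        pid_t pid;
        ssize_t n;
        int status;

        if (pipe(errpipe) < 0 || (pid = fork()) < 0)
            return -1;

        if (pid == 0) {                      /* child: stderr -> pipe */
            dup2(errpipe[1], STDERR_FILENO);
            close(errpipe[0]);
            close(errpipe[1]);
            execv(argv[0], argv);
            _exit(127);
        }

        close(errpipe[1]);
        n = read(errpipe[0], errbuf, buflen - 1);
        errbuf[n > 0 ? n : 0] = '\0';
        close(errpipe[0]);

        if (waitpid(pid, &status, 0) < 0)
            return -1;
        return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
    }

On failure the caller can then include errbuf in the reported error, which is what produces the stderr= lines shown above.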
Qemu reports physical size 0 for block devices. Since 15fa84acbb
changed the behavior of qemuDomainGetBlockInfo to just query the monitor,
this created a regression: we no longer reported the size correctly.
This patch adds code to refresh the physical size of a block device by
opening it and seeking to the end, and uses it both in
qemuDomainGetBlockInfo and in qemuDomainGetStatsOneBlock, which had been
broken in this respect ever since it was introduced.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1250982
(cherry picked from commit 8dc2725925)
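A minimal sketch of the refresh, with illustrative names rather than the exact libvirt helper: since qemu reports 0 for the physical size of block devices, open the device and seek to the end to find out how big it really is.

    #include <fcntl.h>
    #include <unistd.h>

    static int
    refresh_physical_size(const char *path, unsigned long long *physical)
    {
        int fd;
        off_t end;

        if ((fd = open(path, O_RDONLY)) < 0)
            return -1;

        end = lseek(fd, 0, SEEK_END);    /* actual size of the block device */
        close(fd);
        if (end == (off_t)-1)
            return -1;

        *physical = end;
        return 0;
    }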
Well, in 8ad126e6 we tried to fix a memory corruption problem.
However, the fix was not as good as it could be. I mean, the
commit has one line more than it should have. I've noticed this output
just recently:
# ./run valgrind --leak-check=full --show-reachable=yes ./tools/virsh domblklist gentoo
==17019== Memcheck, a memory error detector
==17019== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==17019== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
==17019== Command: /home/zippy/work/libvirt/libvirt.git/tools/.libs/virsh domblklist gentoo
==17019==
Target Source
------------------------------------------------
fda /var/lib/libvirt/images/fd.img
vda /var/lib/libvirt/images/gentoo.qcow2
hdc /home/zippy/tmp/install-amd64-minimal-20150402.iso
==17019== Thread 2:
==17019== Invalid read of size 4
==17019== at 0x4EFF5B4: virObjectUnref (virobject.c:258)
==17019== by 0x5038CFF: remoteClientCloseFunc (remote_driver.c:552)
==17019== by 0x5069D57: virNetClientCloseLocked (virnetclient.c:685)
==17019== by 0x506C848: virNetClientIncomingEvent (virnetclient.c:1852)
==17019== by 0x5082136: virNetSocketEventHandle (virnetsocket.c:1913)
==17019== by 0x4ECD64E: virEventPollDispatchHandles (vireventpoll.c:509)
==17019== by 0x4ECDE02: virEventPollRunOnce (vireventpoll.c:658)
==17019== by 0x4ECBF00: virEventRunDefaultImpl (virevent.c:308)
==17019== by 0x130386: vshEventLoop (vsh.c:1864)
==17019== by 0x4F1EB07: virThreadHelper (virthread.c:206)
==17019== by 0xA8462D3: start_thread (in /lib64/libpthread-2.20.so)
==17019== by 0xAB441FC: clone (in /lib64/libc-2.20.so)
==17019== Address 0x139023f4 is 4 bytes inside a block of size 240 free'd
==17019== at 0x4C2B1F0: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==17019== by 0x4EA8949: virFree (viralloc.c:582)
==17019== by 0x4EFF6D0: virObjectUnref (virobject.c:273)
==17019== by 0x4FE74D6: virConnectClose (libvirt.c:1390)
==17019== by 0x13342A: virshDeinit (virsh.c:406)
==17019== by 0x134A37: main (virsh.c:950)
The problem is that when registering remoteClientCloseFunc(), it's
conn->closeCallback which is ref'd, but in the function itself it's
conn->closeCallback->conn that is unref'd. This causes an imbalance
in reference counting. Moreover, there's no need for the remote driver
to increase/decrease the conn refcount since it's not used anywhere;
it's merely passed to the client-registered callback. And for that
purpose it's correctly ref'd in virConnectRegisterCloseCallback() and
then unref'd in virConnectUnregisterCloseCallback().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(cherry picked from commit e689300770)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit id '7c2d65dde2' changed the default value of mode to be -1 if not
supplied in the XML, which should cause creation of the volume using the
default mode of VIR_STORAGE_DEFAULT_VOL_PERM_MODE; however, the check for
whether to use the default or the provided value still compared mode
against '0'. This patch fixes the issue by checking whether 'mode' was
actually provided in the XML and using that value if so.
(cherry picked from commit 691dd388ae)
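A sketch of the corrected check; the macro values below are stand-ins, not the real libvirt definitions. The point is that -1 now means "no <mode> given in the XML", so that is what the code must compare against when deciding whether to fall back to the default.

    #include <sys/types.h>

    #define VOL_MODE_UNSET ((mode_t)-1)   /* stand-in for "not supplied in the XML" */
    #define VOL_DEFAULT_MODE 0600         /* stand-in for VIR_STORAGE_DEFAULT_VOL_PERM_MODE */

    static mode_t
    vol_effective_mode(mode_t requested)
    {
        /* The broken check compared against 0, which conflated an explicit
         * mode of 0 with "unset"; checking for the -1 marker fixes that. */
        return requested == VOL_MODE_UNSET ? VOL_DEFAULT_MODE : requested;
    }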
Commit id '155ca616' added the 'refreshVol' API. In an NFS root-squash
environment it was possible that if the volume just created from XML
wasn't properly created with the right uid/gid and/or mode, then the
follow-up refreshVol would fail to open the volume in order to get the
allocation/capacity values. This would leave the volume on the server
and cause a libvirtd crash, because 'voldef' would still be in the pool
list while the cleanup code would free it.
In an NFS root-squashed environment the 'vol-delete' command will fail to
'unlink' the target volume since it was created under a different uid:gid.
This code continues the concepts introduced in virFileOpenForked and
virDirCreate[NoFork] with respect to running the unlink command under
the uid/gid of the child. Unlike the other two, don't retry on EACCES
(that's why we're here doing this now).
(cherry picked from commit 35847860f6)
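A generic sketch of the technique (not the actual libvirt helper): fork a child, drop to the uid/gid that owns the volume, and unlink from there, since on a root-squashing NFS export root itself is not allowed to.

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int
    unlink_as(const char *path, uid_t uid, gid_t gid)
    {
        pid_t pid = fork();
        int status;

        if (pid < 0)
            return -1;

        if (pid == 0) {                           /* child */
            if (setgid(gid) < 0 || setuid(uid) < 0)
                _exit(1);                         /* drop group first, then user */
            _exit(unlink(path) == 0 ? 0 : 1);
        }

        if (waitpid(pid, &status, 0) < 0)
            return -1;
        return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
    }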
This reverts commit 1ce7c1d20c,
which introduced a significant semantic change to the
virDomainGetInfo() API. Additionally, the change was only
made to 2 of the 15 virt drivers.
Conflicts:
src/qemu/qemu_driver.c
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
(cherry picked from commit 60acb38abb)
When stopping a domain on the destination host after a failed migration,
we need to avoid resetting security labels since the domain is still
running on the source host. While we were correctly doing so in some
cases, there were still some paths which did this wrong.
https://bugzilla.redhat.com/show_bug.cgi?id=1242904
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
In addition to checking the current asynchronous job,
qemuMigrationJobIsActive reports an error if the current job does not
match the one we asked for. Let's just check the job directly since we
are not interested in the error in qemuProcessHandleMonitorEOF.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If the destination libvirt doesn't support memory hotplug (all the
support was introduced by adding new elements), the destination would
attempt to start qemu with an invalid configuration. The worst part is
that qemu might hang in such a situation.
Fix this by sending a required migration feature called 'memory-hotplug'
to the destination. If the destination doesn't recognize it, it will
fail the migration.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1248350
So far qemu-nbd is run even if the nbd kernel module isn't loaded. This
leads to errors when the user starts an lxc container, even though
libvirt could easily load the nbd module automatically.
Our atomic increment (virAtomicIntInc) uses (if available) the gcc
__sync_add_and_fetch builtin. In the qemu driver, though, we'd profit
more from the __sync_fetch_and_add builtin. To keep things simple, this
patch adjusts the qemu driver initialization rather than adding a new
atomic increment macro.
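The difference between the two builtins in a self-contained example: both add atomically, but one returns the value before the addition and the other the value after, which is why the qemu driver prefers __sync_fetch_and_add here.

    #include <stdio.h>

    int main(void)
    {
        int counter = 5;

        int old_val = __sync_fetch_and_add(&counter, 1);  /* returns 5, counter becomes 6 */
        int new_val = __sync_add_and_fetch(&counter, 1);  /* returns 7, counter becomes 7 */

        printf("old_val=%d new_val=%d counter=%d\n", old_val, new_val, counter);
        return 0;
    }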
virDomainDeleteConfig is meant to delete the persistent config and thus
it resets vm->autostart. Copy parts of qemuProcessRemoveDomainStatus to
a new helper to avoid using the incorrect function.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1230071
The remoteDomainOpenGraphicsFD method was using the wrong RPC
arg struct remote_domain_open_graphics_args instead of
remote_domain_open_graphics_fd_args. Fortunately both structs
had identical contents so there was no functional bug, but to
avoid confusing future maintainers, we should fix it.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The first hunk changes the use of srcdir to top_srcdir so it complies
with other rules in the Makefile. The second one removes the need for
remote_protocol.h in admin_protocol.h, as was suggested and worked in
earlier, but this one line was apparently missed. The last one just
removes the 'remote' naming from the admin protocol specification, so
it's cleaner.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Commit d506a51aeb meant to check for
QEMU_CAPS_DRIVE_IOTUNE_MAX, but checked for QEMU_CAPS_DRIVE_IOTUNE
instead. That's clearly visible from the diff, but it got in. Because
of that, we were supplying information unknown to QEMU if it wasn't new
enough, and we couldn't even properly handle the error, leading to
"Unexpected error". Also, iops_size came in at the same time as all the
other "_max" options, so check that we're not setting that either if
QEMU_CAPS_DRIVE_IOTUNE_MAX is not supported.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1224053
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
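A sketch of the corrected gating logic; the struct and flag names are illustrative stand-ins for the qemuCaps check. The *_max parameters and iops_size have to be refused unless the capability that actually covers them is present.

    #include <stdbool.h>

    typedef struct {
        bool iotune;        /* stands in for QEMU_CAPS_DRIVE_IOTUNE */
        bool iotune_max;    /* stands in for QEMU_CAPS_DRIVE_IOTUNE_MAX */
    } ExampleQemuCaps;

    static bool
    example_iotune_supported(const ExampleQemuCaps *caps,
                             bool wants_basic,
                             bool wants_max_or_iops_size)
    {
        if (wants_basic && !caps->iotune)
            return false;
        /* the fix: *_max options and iops_size need the _MAX capability */
        if (wants_max_or_iops_size && !caps->iotune_max)
            return false;
        return true;
    }

With this in place the error can be reported up front as unsupported by the QEMU binary instead of surfacing as an unexpected error from qemu.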
There are some non-0 default values in virDomainControllerDef (and
there will soon be more) that are easier not to forget if the
remembering is done by a single initializer function (rather than
inline code after allocating the object with a generic VIR_ALLOC()).
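A hedged sketch of the pattern; the type and field names are illustrative. The non-zero defaults live in one allocate-and-initialize helper instead of in inline code that every caller has to remember.

    #include <stdlib.h>

    typedef struct {
        int type;
        int model;          /* has a non-zero default */
        int queues;         /* has a non-zero default */
    } ExampleControllerDef;

    static ExampleControllerDef *
    exampleControllerDefNew(int type)
    {
        ExampleControllerDef *def = calloc(1, sizeof(*def));

        if (!def)
            return NULL;
        def->type = type;
        def->model = -1;    /* -1 means "not specified"; 0 would be a valid model */
        def->queues = -1;
        return def;
    }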
This loop occurs just after we've assured that all devices that
require a PCI address have been assigned one and that all necessary PCI
controllers have been added. It is the perfect place to add other
potentially auto-generated PCI controller attributes that are
dependent on the controller's PCI address (upcoming patch).
There is a convenient loop through all controllers at the end of the
function, but the patch to add new functionality will be cleaner if we
first rearrange that loop a bit.
Note that the loop originally was accessing info.addr.pci.bus prior to
determining that the pci part of the object was valid. This isn't
dangerous in any way, but seemed a bit ugly, so I fixed it.
The function that auto-assigns PCI addresses was written with the
hardcoded assumptions that any PCI bus would have slots available
starting at 1 and ending at 31. This isn't true for many types of
controllers (some have a single slot/port at 0, some have slots/ports
from 0 to 31). This patch updates that function to remove the
hardcoded assumptions. It will properly find/assign addresses for
devices that can only connect to pcie-(root|downstream)-port (which
have minSlot/maxSlot of 0/0) or a pcie-switch-upstream-port (0/31).
It still will not auto-create a new bus of the proper kind for these
connections when one doesn't exist; that task is for another day.
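A sketch of the generalized search; the types and the bitmask are illustrative. Instead of hardcoding slots 1..31, walk each bus's own minSlot..maxSlot range, which also covers buses with a single slot/port at 0.

    typedef struct {
        unsigned int minSlot;   /* e.g. 0 for a pcie-root-port, 1 for a pci-root bus */
        unsigned int maxSlot;   /* e.g. 0 for a pcie-root-port, 31 for a pci-root bus */
        unsigned int inUse;     /* bitmask of occupied slots, bit N == slot N */
    } ExamplePCIBus;

    static int
    exampleFindFreeSlot(const ExamplePCIBus *bus)
    {
        unsigned int slot;

        for (slot = bus->minSlot; slot <= bus->maxSlot; slot++) {
            if (!(bus->inUse & (1u << slot)))
                return (int)slot;
        }
        return -1;              /* no free slot on this bus */
    }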
In commit 155ca616e, a change was introduced that no longer allowed defining
volumes via XML with a capacity of '0'. Because we check for info.size_arg
to be non-zero, this use-case fails. This patch allows info.size_arg to be
zero if no backing store is specified.
Signed-off-by: Chris J Arges <chris.j.arges@canonical.com>
Commit id 'ac3ed2085' causes 'virsh nodedev-list --cap net' to fail
on any system without SYSFS_INFINIBAND_DIR (/sys/class/infiniband).
Rather than assume it's there and fail on the attempt to open the
non-existent directory, check if it's there - if not, return
success and move on. Also fix the caller to check < 0 upon return.
As reported by Suren Hajyan <shajyan@redhat.com> from a run of the unit tests.
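A sketch of the guard; the function name and error convention are illustrative. If the sysfs directory is missing there is simply nothing to report, so return success, and keep returning < 0 only for real errors so the caller's new check works.

    #include <sys/stat.h>

    static int
    example_list_ib_devices(const char *sysfs_dir)
    {
        struct stat sb;

        if (stat(sysfs_dir, &sb) < 0 || !S_ISDIR(sb.st_mode))
            return 0;           /* no /sys/class/infiniband: nothing to do */

        /* ... open the directory and enumerate devices, returning -1 on
         * real failures so callers can check "ret < 0" ... */
        return 0;
    }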
This reverts commit 7b401c3bda.
Until libvirt is able to differentiate whether heads='1' is just a
leftover from a previous libvirt or whether it was added by the user on
purpose, and also whether the domain was started with support for
qxl's max_outputs, we cannot incorporate this patch into the tree
for compatibility reasons.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
This makes the range and static host array management in
virNetworkDHCPDefParseXML() more similar to what is done in
virNetworkDefUpdateIPDHCPRange() and virNetworkDefUpdateIPDHCPHost() -
they use VIR_APPEND_ELEMENT rather than a combination of
VIR_REALLOC_N() and separate incrementing of the array size.
The one functional change here is that the contents of the last
(unsuccessful) virNetworkDHCPHostDef were previously leaked in certain
failure conditions; they are now properly cleaned up.
Bhyve as of r279225 (FreeBSD -CURRENT) or r284894 (FreeBSD 10-STABLE)
supports using UTC time offset via the '-u' argument to bhyve(8). By
default it's still using localtime.
Make the bhyve driver use UTC clock if it's requested by specifying
<clock offset='utc'> in domain XML and if the bhyve(8) binary supports
the '-u' flag.
Commit ac3ed20 breaks the build on FreeBSD with:
CC util/libvirt_util_la-virnetdev.lo
util/virnetdev.c:2967:1: error: unused function 'virNetDevRDMAFeature' [-Werror,-Wunused-function]
virNetDevRDMAFeature(const char *ifname,
^
So hide the virNetDevRDMAFeature function under the #ifdef 'SIOCETHTOOL'
and 'HAVE_STRUCT_IFREQ' section.
Pushed under the build breaker rule.
https://bugzilla.redhat.com/show_bug.cgi?id=1245476
We no longer return the errno after commit 0d7f45ae, and a clearer
error is already set in the vircgroup* code. Also, we would always
report the error "Operation not permitted", because the return value
is -1.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Move the calls to the respective functions from virNodeParseNode(),
which is executed once for every NUMA node, to
linuxNodeInfoCPUPopulate(), which is executed just once per host.
Keep track of what CPUs belong to the current node while walking
through the sysfs node entry, so we don't need to do it a second
time immediately afterwards.
This also allows us to loop through all CPUs that are part of a
node in guaranteed ascending order, which is something that is
required for some upcoming changes.
Swap out all instances of cpu_set_t and replace them with virBitmap,
which some of the code was already using anyway.
The changes are pretty mechanical, with one notable exception: an
assumption has been added on the maximum value we can run into while
reading either socket_id or core_id.
While this specific assumption was not in place before, we were
using cpu_set_t improperly by not making sure not to set any bit
past CPU_SETSIZE or explicitly allocating bigger bitmaps; in fact
the default size of a cpu_set_t, 1024, is way too low to run our
testsuite, which includes core_id values in the 2000s.
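A small illustration of why the swap matters (not libvirt code): a plain cpu_set_t only has room for CPU_SETSIZE bits, typically 1024, so core_id values in the 2000s simply do not fit unless a dynamically sized bitmap (virBitmap) or CPU_ALLOC() is used.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        int core_id = 2048;             /* values like this show up in the testsuite */
        cpu_set_t fixed;

        CPU_ZERO(&fixed);
        if (core_id >= CPU_SETSIZE)     /* CPU_SET(core_id, &fixed) would be out of range */
            printf("core_id %d does not fit in a fixed cpu_set_t (%d bits)\n",
                   core_id, CPU_SETSIZE);
        return 0;
    }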
The new name makes it clear that the returned bitmap contains the
information about which CPUs are online, not eg. which CPUs are
present.
No behavioral change.
If the cpu/present file is not available, we assume that the kernel
is too old to support non-consecutive CPU ids and return a bitmap
with all the bits set to represent this fact. This assumption is
already exploited in nodeGetCPUCount().
This means users of this API can expect the information to always
be available unless an error has occurred, and no longer need to
treat the NULL return value as a special case.
The error message has been updated as well.
The original name was confusing because the function returns the number
of CPUs, not the maximum CPU id. The comment above the function has
been updated to reflect this.
No behavioral changes.
During the recent refactoring/cleanups, a bug has been introduced
that caused all CPUs to be reported as online unless the sysfs
cpu/present file was available.
This commit fixes the fallback code path by building the directory
path passed to virNodeGetCpuValue() correctly.
The scope name, even according to our docs, is
"machine-$DRIVER\x2d$VMNAME.scope". virSystemdMakeScopeName would use
the resource partition name instead of "machine-" if one was specified,
thus creating invalid scope paths.
This makes libvirt drop cgroups for a VM that uses a custom resource
partition upon reconnecting, since the detected scope name would not
match the expected name generated by virSystemdMakeScopeName.
The error is exposed by the following log entry:
debug : virCgroupValidateMachineGroup:302 : Name 'machine-qemu\x2dtestvm.scope' for controller 'cpu' does not match 'testvm', 'testvm.libvirt-qemu' or 'machine-test-qemu\x2dtestvm.scope'
for a "/machine/test" resource and "testvm" vm.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1238570
A few parts of the code looked at the current progress and assumed that
a two-phase blockjob is in the _READY state as soon as the progress
reaches 100% (info.cur == info.end). In current versions of qemu this
assumption is invalid: qemu exposes a new flag 'ready' in the
query-block-jobs output that is set to true if the job is actually
finished.
This patch adds internal data handling for reading the 'ready' flag and
acting appropriately as long as the flag is present.
While this still doesn't fix the virsh client problem with two-phase
block jobs and the --pivot option, it at least improves the error
message:
$ virsh blockcommit --wait --verbose vm vda --base vda[1] --active --pivot
Block commit: [100 %]error: failed to pivot job for disk vda
error: internal error: unable to execute QEMU command 'block-job-complete': The active block job for device 'drive-virtio-disk0' cannot be completed
to
$ virsh blockcommit --wait --verbose VM vda --base vda[1] --active --pivot
Block commit: [100 %]error: failed to pivot job for disk vda
error: block copy still active: disk 'vda' not ready for pivot yet
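A sketch of the decision change; the struct and field names are illustrative. When qemu reports the 'ready' flag, trust it; only fall back to the old cur == end heuristic when the flag is absent.

    #include <stdbool.h>

    typedef struct {
        unsigned long long cur;
        unsigned long long end;
        int ready;      /* -1: not reported by qemu, 0: not ready, 1: ready */
    } ExampleBlockJobInfo;

    static bool
    exampleBlockJobIsReady(const ExampleBlockJobInfo *info)
    {
        if (info->ready >= 0)
            return info->ready == 1;    /* qemu told us explicitly */
        return info->cur == info->end;  /* legacy heuristic for older qemu */
    }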
Adding functionality to libvirt that will allow it to query the
interface for the availability of the RDMA and tx-udp_tnl-segmentation
offloading NIC capabilities.
Here is an example of the feature XML definition:
<device>
<name>net_eth4_90_e2_ba_5e_a5_45</name>
<path>/sys/devices/pci0000:00/0000:00:03.0/0000:08:00.1/net/eth4</path>
<parent>pci_0000_08_00_1</parent>
<capability type='net'>
<interface>eth4</interface>
<address>90:e2:ba:5e:a5:45</address>
<link speed='10000' state='up'/>
<feature name='rx'/>
<feature name='tx'/>
<feature name='sg'/>
<feature name='tso'/>
<feature name='gso'/>
<feature name='gro'/>
<feature name='rxvlan'/>
<feature name='txvlan'/>
<feature name='rxhash'/>
<feature name='rdma'/>
<feature name='txudptnl'/>
<capability type='80203'/>
</capability>
</device>
Currently, the build fails on FreeBSD with:
CC libvirt_driver_la-nodeinfo.lo
nodeinfo.c:1941:56: error: use of undeclared identifier 'SYSFS_SYSTEM_PATH'
const char *prefix = sysfs_prefix ? sysfs_prefix : SYSFS_SYSTEM_PATH;
^
1 error generated.
This is caused by commit b97b3048 that added sysfs_prefix to
nodeCapsInitNUMA and used SYSFS_CPU_PATH.
Fix it by unconditionally defining SYSFS_CPU_PATH instead of defining it
under #ifdef __linux__.
If one calls update-device with information that is not updatable,
libvirt reports success even though no data were updated. The example
used in the bug linked below updates a device with <boot order='2'/>,
which, in my opinion, is a valid thing to request from the user's
perspective, mainly since we properly error out if the user wants to
update such data on a network device, for example.
And since there are many things that might happen (update-device on a
disk basically knows just how to change removable media), check for
what's changing; moreover, since the function might be usable in other
drivers (updating only the disk path is a valid possibility), let's
abstract it for any two disks.
We can't possibly check for everything since for many fields our code
does not properly differentiate between default and unspecified values.
Even though this could be changed, I don't feel like it's worth the
complexity so it's not the aim of this patch.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1007228
After an upgrade to perl-5.22.0, perl started complaining about one of
our scripts. The thing is that even though the script works, perl wants
all curly brackets escaped properly. The change is not functional; it
merely gets rid of the following error:
Unescaped left brace in regex is deprecated, passed through in regex;
marked by <-- HERE in m/^enum { <-- HERE / at -e line 3.
There is one more error like this that I'm getting, but it is because of
GNU automake bug #21001:
https://debbugs.gnu.org/cgi/bugreport.cgi?bug=21001
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Allows specifying the maximum number of heads for the QXL driver.
This can actually be a compatibility problem, as 'heads' in the XML
configuration was set by default to '1'.
Signed-off-by: Frediano Ziglio <fziglio@redhat.com>