We forbid access to /usr/share/, but (at least on Debian-based systems)
the Open Virtual Machine Firmware files needed for booting UEFI virtual
machines in QEMU live in /usr/share/ovmf/. Therefore, we need to add
that directory to the list of read only paths.
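The net effect, shown here only as an illustrative sketch and not the
actual patch, is that the AppArmor policy ends up with a read-only rule
roughly like:
  /usr/share/ovmf/** r,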
A similar patch was suggested by Jamie Strandboge <jamie@canonical.com>
on https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1483071.
(cherry picked from commit 2f01cfdf05)
First check overrides, then read-only files, then restricted access
itself.
This allows us to mark files for read-only access whose parents were
already restricted to read-write.
Based on a proposal by Martin Kletzander
(cherry picked from commit d25a5e087a)
Commit a2c5d16a70 switched to generating
libvirt_admin.syms, but forgot to add the generated file into
.gitignore, hence causing tree pollution post-build.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
(cherry picked from commit 14d84db863)
Since we're linking this into libvirtd, we need some symbols to be
public but not part of the public API, so mark them as
LIBVIRT_ADMIN_PRIVATE_<VERSION>, as we do with libvirt.
Making all other symbols local makes sure we don't accidentally leak
unwanted ones.
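For illustration, a private section of such a .syms file (which becomes
a linker version script) looks roughly like the following; the version
number and symbol name are placeholders, not the real contents of
libvirt_admin.syms:
  LIBVIRT_ADMIN_PRIVATE_1.2.17 {
      global:
          adminExampleInternalSymbol;
      local:
          *;
  };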
(cherry picked from commit a2c5d16a70)
This makes it consistent with the other FLAGS in this file and reduces
clutter in the diff when adding new entries.
(cherry picked from commit 6d71d54812)
https://bugzilla.redhat.com/show_bug.cgi?id=1251886
Since iothread_id == 0 is an invalid value for QEMU, let's point
that out specifically. For the IOThreadDel code, the failure would
have ended up being a failure to find the IOThread ID; for the
IOThreadAdd code, however, an IOThread 0 was added and that isn't good.
It seems that during the many reviews/edits to the code the check for
iothread_id == 0 being invalid was lost - it could originally have
been in the API code but was requested to be moved; I cannot remember.
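A minimal sketch of the kind of check this adds (illustrative only; the
exact placement and message may differ):
  if (iothread_id == 0) {
      virReportError(VIR_ERR_INVALID_ARG, "%s",
                     _("invalid iothread_id value of 0"));
      goto cleanup;
  }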
(cherry picked from commit 32c6b1908b)
When looking up a domain, we try a lookup by ID, UUID and NAME in
turn, while not really caring which of those lookups succeeds.
The problem is that if any of them fails, we dispatch the error from the
driver, and that means setting both the thread-local and the global
error. Let's say the last lookup (by NAME) succeeds and resets the
thread-local error as any other API does, but leaves the global error
unchanged. If the underlying virsh command does not succeed afterwards,
our cleanup routine in vshCommandRun ensures that no libvirt error will
be forgotten, and that's exactly where this stale global error comes in
incorrectly.
# virsh domif-setlink 123 vnet1 up
error: interface (target: vnet1) not found
error: Domain not found: no domain with matching id 123
This patch also resets the global error, which would otherwise cause
some minor confusion in the reported error messages.
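Roughly, the lookup pattern becomes the following sketch (illustrative;
vshResetLibvirtError() is the virsh helper that drops the saved global
error):
  virDomainPtr dom = NULL;
  int id;
  if (virStrToLong_i(str, NULL, 10, &id) == 0)
      dom = virDomainLookupByID(conn, id);
  if (!dom)
      dom = virDomainLookupByUUIDString(conn, str);
  if (!dom)
      dom = virDomainLookupByName(conn, str);
  if (dom)
      vshResetLibvirtError();  /* discard errors left from the failed lookups */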
https://bugzilla.redhat.com/show_bug.cgi?id=1254152
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Erik Skultety <eskultet@redhat.com>
(cherry picked from commit 70f56dd72c)
Ever since commit e44b0269, 64-bit mingw compilation fails with:
../../src/util/virprocess.c: In function 'virProcessGetPids':
../../src/util/virprocess.c:628:50: error: passing argument 4 of 'virStrToLong_i' from incompatible pointer type [-Werror=incompatible-pointer-types]
if (virStrToLong_i(ent->d_name, NULL, 10, &tmp_pid) < 0)
^
In file included from ../../src/util/virprocess.c:59:0:
../../src/util/virstring.h:53:5: note: expected 'int *' but argument is of type 'pid_t * {aka long long int *}'
int virStrToLong_i(char const *s,
^
cc1: all warnings being treated as errors
Although mingw won't be using this function, it does compile the
file, and the fix is relatively simple.
* src/util/virprocess.c (virProcessGetPids): Don't assume pid_t
fits in int.
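One possible shape of such a fix, as a sketch (the actual commit may
differ in detail): parse into a wider type and assign afterwards.
  long long tmp;
  if (virStrToLong_ll(ent->d_name, NULL, 10, &tmp) < 0)
      continue;
  tmp_pid = tmp;  /* pid_t may be int or long long depending on the platform */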
Signed-off-by: Eric Blake <eblake@redhat.com>
(cherry picked from commit 0a617b53d4)
Otherwise the error is just
error: Failed to create domain from test1.xml
error: failed to retrieve file descriptor for interface: Transport endpoint is not connected
since we don't get a sensible error after the fork.
(cherry picked from commit 151ba02293)
Pinning information for the emulatorpin and vcpupin calls has been
returned from our own data, without querying cgroups, for some time
now. However, not all of that data was utilized: when automatic
placement is used, the information is not returned for the calls
mentioned above. Since the numad hint in private data is properly
saved/restored, we can safely use it to return accurate information.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1162947
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
(cherry picked from commit 776924e376)
The numad hint stored in priv->autoNodeset is information that gets lost
during daemon restart. And because we would like to use that
information in the future, we also need to save it in the status XML.
For the sake of tests, we need to initialize nnumaCell_max to some
value, so that the restoration doesn't fail in our test suite. There is
no need to fill in the actual NUMA cell data since the recalculating
function virCapabilitiesGetCpusForNodemask() will not fail; it will just
skip filling in the bitmap data, which we don't use in tests anyway.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
(cherry picked from commit 8ce86722d7)
This needs a reorder of XML option definitions. It might come in handy
one day.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
(cherry picked from commit 7c8028cda9)
When parsing private domain data, there are two paths that are flawed.
They are both error paths, just from different parts of the function.
One of them can call free() on an uninitialized pointer. Initialization
to NULL is enough here. The other one is a bit trickier to explain, but
as easy as the first one to fix. We create capabilities, parse them and
then assign them into the private data pointer inside the domain object.
If, however, we fail anywhere from that point on, the error path unrefs
the capabilities and then, when the domain object is being cleaned up,
qemuDomainObjPrivateFree() tries to unref them as well. That causes a
segfault. Setting the pointer to NULL upon successful addition to the
private data is enough.
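A sketch of the resulting pattern (all names here are illustrative, not
the exact libvirt code):
  virObjectPtr caps = NULL;        /* NULL-init fixes the free()-of-garbage path */
  if (!(caps = parseCaps(ctxt)))   /* hypothetical parse/allocate step */
      goto error;
  priv->caps = caps;               /* ownership moves into the private data */
  caps = NULL;                     /* so the error path cannot unref it again */
  if (parseRest(ctxt, priv) < 0)   /* any later failure */
      goto error;
  return 0;
 error:
  virObjectUnref(caps);            /* safe: either NULL or still locally owned */
  return -1;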
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
(cherry picked from commit 92ddffdbd3)
If you pass <disk><serial> XML to UpdateDevice, and the original device
didn't have a <serial> block, libvirtd crashes trying to read the original
NULL serial string.
Use _NULLABLE string comparisons to avoid the crash. A couple of other
properties needed the change too.
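A sketch of the comparison style involved (variable names and the error
message are illustrative):
  if (STRNEQ_NULLABLE(orig->serial, dev->serial)) {
      virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
                     _("cannot modify disk serial on device update"));
      return -1;
  }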
(cherry picked from commit c7790408d7)
When running the test suite using "unshare -n" we might have IPv6 but
no configured addresses. Due to AI_ADDRCONFIG, getaddrinfo then fails
with EAI_NONAME, which we should treat as IPv6 being unavailable.
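A minimal sketch of the check (illustrative, not the exact libvirt
code):
  struct addrinfo hints = { 0 };
  struct addrinfo *res = NULL;
  int rc;
  hints.ai_flags = AI_ADDRCONFIG;
  hints.ai_family = AF_INET6;
  hints.ai_socktype = SOCK_STREAM;
  rc = getaddrinfo("::1", NULL, &hints, &res);
  if (rc == EAI_NONAME)       /* AI_ADDRCONFIG and no configured IPv6 address */
      ipv6_available = false; /* hypothetical flag */
  else if (rc == 0)
      freeaddrinfo(res);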
(cherry picked from commit fbb27088ee)
This fixes the crash described here:
https://www.redhat.com/archives/libvir-list/2015-August/msg00162.html
In short, we were calling ioctl(SIOCETHTOOL) pointing to a too-short
object that was a local on the stack, resulting in the memory past the
end of the object being overwritten. This was because the struct used
by the ETHTOOL_GFEATURES command of SIOCETHTOOL ends with a 0-length
array, but we were telling ethtool that it could use 2 elements of that
array.
The fix is to allocate the necessary memory with VIR_ALLOC_VAR(),
including the extra length needed for a 2 element array at the end.
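A sketch of the allocation, assuming VIR_ALLOC_VAR's usual "struct plus
N trailing elements" semantics (struct and command names come from
linux/ethtool.h):
  struct ethtool_gfeatures *g_cmd;
  if (VIR_ALLOC_VAR(g_cmd, struct ethtool_get_features_block, 2) < 0)
      return -1;
  g_cmd->cmd = ETHTOOL_GFEATURES;
  g_cmd->size = 2;  /* the 2 blocks we advertise to ethtool now really exist */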
(cherry picked from commit bfaaa2b681)
Commit a6f9af8292 added checking for address collisions between
starting and ending addresses of forwarding addresses, but forgot that
there might be no addresses set at all.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
(cherry picked from commit 1f24c1494a)
This is a public library; it shouldn't include anything that is
internal. Including the library in its current state in an example
application fails the preprocessor phase.
(cherry picked from commit eefec56b47)
nwfilter uses iptables and ebtables, which only work properly on
tap-based network connections (*not* on macvtap, for example), but we
just ignore any <filterref> elements for other types of networks,
potentially giving users a false sense of security.
This patch checks the network type and fails/logs an error if any
domain <interface> has a <filterref> when the connection isn't using a
tap device.
This resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=1180011
(cherry picked from commit f4f1d18dc4)
This patch modifies virSocketAddrGetRange() to function properly when
the containing network/prefix of the address range isn't known, for
example in the case of the NAT range of a virtual network (since it is
a range of addresses on the *host*, not within the network itself). We
then take advantage of this new functionality to validate the NAT
range of a virtual network.
Extra test cases are also added to verify that virSocketAddrGetRange()
works properly in both positive and negative cases when the network
pointer is NULL.
This is the *real* fix for:
https://bugzilla.redhat.com/show_bug.cgi?id=985653
Commits 1e334a and 48e8b9 had earlier been pushed as fixes for that
bug, but I had neglected to read the report carefully, so instead of
fixing validation for the NAT range, I had fixed validation for the
DHCP range. sigh.
(cherry picked from commit a6f9af8292)
https://bugzilla.redhat.com/show_bug.cgi?id=1022292
The following XML really does not make any sense:
<inbound average="-1" burst="-2" peak="-3" floor="-4"/>
There can't be a negative packet rate. Well, so far we haven't
assigned any meaning to negative values. So reject them before users
harm themselves, because otherwise we turn the negative numbers into
really big values.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(cherry picked from commit 2a5d3f227d)
By specifying parentIndex in a call to virNetworkUpdate(), it was
possible to direct libvirt to add a dhcp range or static host of a
non-matching address family to the <dhcp> element of an <ip>. For
example, given:
<ip address='192.168.122.1' netmask='255.255.255.0'/>
<ip family='ipv6' address='2001:db6:ca3:45::1' prefix='64'/>
you could provide a static host entry with an IPv4 address, and
specify that it be added to the 2nd <ip> element (index 1):
virsh net-update default add ip-dhcp-host --parent-index 1 \
'<host mac="52:54:00:00:00:01" ip="192.168.122.45"/>'
This would be happily added with no error (and no concern of any
possible future consequences).
This patch checks that any dhcp range or host element being added to a
network ip's <dhcp> subelement has addresses of the same family as the
ip element they are being added to.
This resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=1184736
(cherry picked from commit 6a21bc119e)
If a pci address had a function number out of range, the error message
would be:
Insufficient specification for PCI address
which is logged by virDevicePCIAddressParseXML() after
virDevicePCIAddressIsValid returns a failure.
This patch enhances virDevicePCIAddressIsValid() to optionally report
the error itself (since it is the place that decides which part of the
address is "invalid"), and uses that feature when calling from
virDevicePCIAddressParseXML(), so that the error will be more useful,
e.g.:
Invalid PCI address function=0x8, must be <= 7
Previously, virDevicePCIAddressIsValid didn't check for the
theoretical limits of domain or bus, only for slot or function. While
adding the log messages, we also correct that omission. (The RNG for
PCI addresses already enforces this limit, which by the way means that
we can't add any negative tests for this - as far as I know our
domainschematest has no provisions for passing XML that is supposed to
fail.)
Note that virDevicePCIAddressIsValid() can only check against the
absolute maximum attribute values for *any* possible PCI controller,
not for the actual maximums of the specific controller that this
device is attaching to; fortunately there is later more specific
validation for guest-side PCI addresses when building the set of
assigned PCI addresses. For host-side PCI addresses (e.g. for
<hostdev> and for network device pools), we rely on the error that
will be logged when it is found that the device doesn't actually
exist.
This resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=1004596
(cherry picked from commit f8fe8f0345)
https://bugzilla.redhat.com/show_bug.cgi?id=1176020
Some users think this is a good idea:
<vcpu placement='static'>4</vcpu>
<cpu mode='host-model'>
<model fallback='allow'/>
<numa>
<cell id='0' cpus='0-1' memory='1048576' unit='KiB'/>
<cell id='1' cpus='9-10' memory='2097152' unit='KiB'/>
</numa>
</cpu>
It's not. Let's therefore introduce a check and discourage them from
doing so.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(cherry picked from commit 82af954c52)
This function should return the greatest CPU number set in
/domain/cpu/numa/cell/@cpus. The idea is that we should compare
the returned value against the /domain/vcpu value. Yes, there exist
users who think the following is a good idea:
<vcpu placement='static'>4</vcpu>
<cpu mode='host-model'>
<model fallback='allow'/>
<numa>
<cell id='0' cpus='0-1' memory='1048576' unit='KiB'/>
<cell id='1' cpus='9-10' memory='2097152' unit='KiB'/>
</numa>
</cpu>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(cherry picked from commit 8f2535dec1)
Commit 7e72de4 didn't consider the hotplug scenarios. This patch
addresses the hotplug case: if at least one of the PCI functions is
owned by a guest, hotplug of the other functions/devices in the same
IOMMU group to the same guest now goes through successfully.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
(cherry picked from commit e3810db34f)
Libvirt doesn't reliably know the location of the backing chain when
pre-creating images for non-shared migration. This isn't a problem for
full copy, but incremental copy requires the information.
Forbid pre-creating the image in cases where incremental migration is
required. This limitation can perhaps be lifted once libvirt fully
supports loading the backing chain information from the XML.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1249587
(cherry picked from commit 6da3b694cc)
https://bugzilla.redhat.com/show_bug.cgi?id=1250287
When running domfsinfo in quiet mode, we cannot get any
useful information (we just get \n); this is because
we didn't use vshPrint to print the useful information.
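The distinction, roughly (a sketch; the virDomainFSInfo field names come
from the public API, the format strings are illustrative): output from
vshPrintExtra() is suppressed by --quiet, output from vshPrint() is not.
  vshPrintExtra(ctl, "%-36s %-8s\n", _("Mountpoint"), _("Name")); /* header */
  vshPrint(ctl, "%-36s %-8s\n", info->mountpoint, info->name);    /* data */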
Signed-off-by: Luyao Huang <lhuang@redhat.com>
(cherry picked from commit ee6160b549)
In gnutls 3.4.3 there is a regression in the loading of private
keys via gnutls_x509_privkey_import. We already have a workaround
to deal with failures on older gnutls, but the error code that
the new gnutls returns is different. Extend the workaround so that
it checks for GNUTLS_E_REQUESTED_DATA_NOT_AVAILABLE too.
See also the gnutls bug: https://bugzilla.redhat.com/show_bug.cgi?id=1250020
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit 3433180ec8)
If cpuset is disabled or not available, libvirt must not use it,
mainly for actions that do not need it and can use sched_setaffinity()
or numa_membind() instead, because they would otherwise fail without
good reason.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1244664
Signed-off-by: Luyao Huang <lhuang@redhat.com>
(cherry picked from commit 1439eb32af)
While a zero allocation in safezero should be fine, it isn't when we
use posix_fallocate, which returns EINVAL on a zero allocation.
While we could skip the zero allocation only in
safezero_posix_fallocate, skipping it for all allocations is an
optimization.
This fixes VM installation via virtinst for me, which otherwise aborts
like:
Starting install...
Retrieving file linux... | 5.9 MB 00:01 ...
Retrieving file initrd.gz... | 29 MB 00:07 ...
ERROR Couldn't create storage volume 'virtinst-linux.sBgds4': 'cannot fill file '/var/lib/libvirt/boot/virtinst-linux.sBgds4': Invalid argument'
The error was introduced by e30297b0 as spotted by Chunyan Liu
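A minimal sketch of the idea (not the exact libvirt code): bail out
early on a zero-length request so posix_fallocate() never sees len == 0.
  static int
  do_fallocate(int fd, off_t offset, off_t len)
  {
      if (len == 0)
          return 0;   /* posix_fallocate() would return EINVAL for this */
      return posix_fallocate(fd, offset, len) == 0 ? 0 : -1;
  }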
(cherry picked from commit 269d39afe5)
There are a couple of reports of things failing in this area
(bug 1259070), but it's tough to tell what's going wrong without stderr
from qemu-bridge-helper. So let's report stderr in the error message.
A couple of new examples:
virbr0 is inactive:
internal error: /usr/libexec/qemu-bridge-helper --use-vnet --br=virbr0 --fd=21: failed to communicate with bridge helper: Transport endpoint is not connected
stderr=failed to get mtu of bridge `virbr0': No such device
bridge isn't on the ACL:
internal error: /usr/libexec/qemu-bridge-helper --use-vnet --br=br0 --fd=21: failed to communicate with bridge helper: Transport endpoint is not connected
stderr=access denied by acl file
(cherry picked from commit db35beaa1d)
Qemu reports a physical size of 0 for block devices. As 15fa84acbb
changed the behavior of qemuDomainGetBlockInfo to just query the
monitor, this created a regression since we didn't report the size
correctly any more.
This patch adds code to refresh the physical size of a block device by
opening it and seeking to the end, and uses it both in
qemuDomainGetBlockInfo and in qemuDomainGetStatsOneBlock, which had
been broken in this respect since it was introduced.
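The refresh amounts to something like the following sketch
(illustrative; variable names are hypothetical):
  int fd;
  off_t end;
  if ((fd = open(path, O_RDONLY)) < 0)
      return -1;
  if ((end = lseek(fd, 0, SEEK_END)) == (off_t) -1) {
      VIR_FORCE_CLOSE(fd);
      return -1;
  }
  physical = end;      /* the block device's real size in bytes */
  VIR_FORCE_CLOSE(fd);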
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1250982
(cherry picked from commit 8dc2725925)
Well, in 8ad126e6 we tried to fix a memory corruption problem.
However, the fix was not as good as it could be. I mean, the
commit has one line more than it should. I've noticed this output
just recently:
# ./run valgrind --leak-check=full --show-reachable=yes ./tools/virsh domblklist gentoo
==17019== Memcheck, a memory error detector
==17019== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==17019== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
==17019== Command: /home/zippy/work/libvirt/libvirt.git/tools/.libs/virsh domblklist gentoo
==17019==
Target Source
------------------------------------------------
fda /var/lib/libvirt/images/fd.img
vda /var/lib/libvirt/images/gentoo.qcow2
hdc /home/zippy/tmp/install-amd64-minimal-20150402.iso
==17019== Thread 2:
==17019== Invalid read of size 4
==17019== at 0x4EFF5B4: virObjectUnref (virobject.c:258)
==17019== by 0x5038CFF: remoteClientCloseFunc (remote_driver.c:552)
==17019== by 0x5069D57: virNetClientCloseLocked (virnetclient.c:685)
==17019== by 0x506C848: virNetClientIncomingEvent (virnetclient.c:1852)
==17019== by 0x5082136: virNetSocketEventHandle (virnetsocket.c:1913)
==17019== by 0x4ECD64E: virEventPollDispatchHandles (vireventpoll.c:509)
==17019== by 0x4ECDE02: virEventPollRunOnce (vireventpoll.c:658)
==17019== by 0x4ECBF00: virEventRunDefaultImpl (virevent.c:308)
==17019== by 0x130386: vshEventLoop (vsh.c:1864)
==17019== by 0x4F1EB07: virThreadHelper (virthread.c:206)
==17019== by 0xA8462D3: start_thread (in /lib64/libpthread-2.20.so)
==17019== by 0xAB441FC: clone (in /lib64/libc-2.20.so)
==17019== Address 0x139023f4 is 4 bytes inside a block of size 240 free'd
==17019== at 0x4C2B1F0: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==17019== by 0x4EA8949: virFree (viralloc.c:582)
==17019== by 0x4EFF6D0: virObjectUnref (virobject.c:273)
==17019== by 0x4FE74D6: virConnectClose (libvirt.c:1390)
==17019== by 0x13342A: virshDeinit (virsh.c:406)
==17019== by 0x134A37: main (virsh.c:950)
The problem is that when registering remoteClientCloseFunc(), it's
conn->closeCallback which is ref'd, but in the function itself it's
conn->closeCallback->conn that is unref'd. This causes an imbalance in
reference counting. Moreover, there's no need for the remote driver to
increase/decrease the conn refcount since it's not used anywhere; it's
merely passed to the client-registered callback. And for that purpose
it's correctly ref'd in virConnectRegisterCloseCallback() and then
unref'd in virConnectUnregisterCloseCallback().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(cherry picked from commit e689300770)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit id '7c2d65dde2' changed the default value of mode to be -1 if
not supplied in the XML, which should cause creation of the volume
using the default mode of VIR_STORAGE_DEFAULT_VOL_PERM_MODE; however,
the check used to decide between the default and the provided value was
whether mode was '0' or not.
This patch fixes the issue by checking whether 'mode' was provided in
the XML and using that value if so.
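In other words, the decision should look roughly like this sketch
(field names hypothetical):
  mode_t mode = (def->perms.mode == (mode_t) -1)
                ? VIR_STORAGE_DEFAULT_VOL_PERM_MODE
                : def->perms.mode;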
(cherry picked from commit 691dd388ae)
Commit id '155ca616' added the 'refreshVol' API. In an NFS root-squash
environment it was possible that if the volume just created from XML
wasn't properly created with the right uid/gid and/or mode, then the
follow-up refreshVol would fail to open the volume in order to get the
allocation/capacity values. This would leave the volume on the server
and cause a libvirtd crash, because 'voldef' would be in the pool list
but the cleanup code would free it.
(cherry picked from commit db9277a39b)
In an NFS root-squashed environment the 'vol-delete' command will fail to
'unlink' the target volume since it was created under a different uid:gid.
This code continues the concepts introduced in virFileOpenForked and
virDirCreate[NoFork] with respect to running the unlink command under
the uid/gid of the child. Unlike the other two, don't retry on EACCES
(that's why we're here doing this now).
(cherry picked from commit 35847860f6)
This reverts commit 1ce7c1d20c,
which introduced a significant semantic change to the
virDomainGetInfo() API. Additionally, the change was only
made to 2 of the 15 virt drivers.
Conflicts:
src/qemu/qemu_driver.c
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
(cherry picked from commit 60acb38abb)
When stopping a domain on the destination host after a failed migration,
we need to avoid resetting security labels since the domain is still
running on the source host. While we were correctly doing so in some
cases, there were still some paths which did this wrong.
https://bugzilla.redhat.com/show_bug.cgi?id=1242904
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
In addition to checking the current asynchronous job,
qemuMigrationJobIsActive reports an error if the current job does not
match the one we asked for. Let's just check the job directly since we
are not interested in the error in qemuProcessHandleMonitorEOF.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If the destination libvirt doesn't support memory hotplug (since all
the support was introduced by adding new elements), the destination
would attempt to start qemu with an invalid configuration. The worse
part is that qemu might hang in such a situation.
Fix this by sending a required migration feature called
'memory-hotplug' to the destination. If the destination doesn't
recognize it, it will fail the migration.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1248350
So far qemu-nbd is run even if the nbd kernel module isn't loaded.
This leads to errors when the user starts their LXC container, even
though libvirt could easily load the nbd module automatically.
Our atomic increment (virAtomicIntInc) uses (if available) the gcc
__sync_add_and_fetch builtin. In the qemu driver, though, we'd profit
more from the __sync_fetch_and_add builtin. To keep it simple, this
patch adjusts the qemu driver initialization rather than adding a new
atomic increment macro.
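For reference, the two gcc builtins differ only in which value they
return:
  int x = 5;
  int a = __sync_fetch_and_add(&x, 1);  /* a == 5, x == 6: returns the old value */
  int b = __sync_add_and_fetch(&x, 1);  /* b == 7, x == 7: returns the new value */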
virDomainDeleteConfig is meant to delete the persistent config and thus
it resets vm->autostart. Copy parts of qemuProcessRemoveDomainStatus to
a new helper to avoid using the incorrect function.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1230071
The remoteDomainOpenGraphicsFD method was using the wrong RPC
arg struct remote_domain_open_graphics_args instead of
remote_domain_open_graphics_fd_args. Fortunately both structs
had identical contents so there was no functional bug, but to
avoid confusing future maintainers, we should fix it.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The first hunk changes the use of srcdir to top_srcdir so it complies
with other rules in the Makefile. The second one removes the need for
remote_protocol.h in admin_protocol.h, as was suggested and worked in
earlier, but this one line was apparently missed. The last one just
removes the 'remote' naming from the admin protocol specification, just
so it's cleaner.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>