On RHEL 5, I was getting a segfault trying to start libvirtd:
virNodeParseSocket was failing, we were not checking for the error,
and we therefore ended up calling CPU_SET(-1, &sock_map). But if
there is no topology/physical_package_id file, we can simply assume
that the CPU belongs to socket 0.
* src/nodeinfo.c (virNodeGetCpuValue): Change bool into
default_value.
(virNodeParseSocket): Allow for default value when file is missing,
different from fatal error on reading file.
(virNodeParseNode): Update call sites to fail on error.
(cherry picked from commit 47976b484c)
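As a rough sketch of the behaviour described above (hypothetical helper
names, not the actual nodeinfo.c code): a missing sysfs file yields the
caller-supplied default, a real read error yields -1, and the caller
fails instead of ever reaching CPU_SET(-1, ...):

    #define _GNU_SOURCE            /* for cpu_set_t / CPU_SET */
    #include <errno.h>
    #include <sched.h>
    #include <stdio.h>

    /* Read one integer from a sysfs file; a missing file yields
     * 'default_value', a genuine read/parse failure yields -1. */
    static int
    read_cpu_value(const char *path, int default_value)
    {
        FILE *f = fopen(path, "r");
        int value;

        if (!f)
            return errno == ENOENT ? default_value : -1;
        if (fscanf(f, "%d", &value) != 1)
            value = -1;
        fclose(f);
        return value;
    }

    static int
    add_cpu_socket(const char *path, cpu_set_t *sock_map)
    {
        int sock = read_cpu_value(path, 0);   /* no file: assume socket 0 */

        if (sock < 0)
            return -1;                        /* never CPU_SET(-1, ...) */
        CPU_SET(sock, sock_map);
        return 0;
    }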
Ever since commit 7b21981c started generating AUTHORS, we now have
the situation that if you flip between two branches in the same
git repository that cross that commit boundary, then 'make' will
fail due to automake complaining about AUTHORS not existing. The
simplest solution is to realize that if AUTHORS does not exist,
then we flipped branches so we will need to rerun bootstrap
anyway, and rerunning bootstrap ensures AUTHORS will exist in time.
* cfg.mk (_update_required): Also depend on AUTHORS.
(cherry picked from commit 71d125620d)
The lack of initialization of 'opts' caused a SEGV in the
cleanup: path if the root->src directory did not exist
(cherry picked from commit 3782814d4a)
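The general pattern of the fix, as a sketch (build_opts and the
surrounding logic are made up, not the actual code): every pointer
that the cleanup: path frees has to be initialized before the first
goto that can reach it:

    #include <stdlib.h>
    #include <unistd.h>

    extern char *build_opts(const char *src);   /* hypothetical */

    static int
    do_mount(const char *src)
    {
        char *opts = NULL;           /* must start out NULL ...          */
        int ret = -1;

        if (access(src, F_OK) < 0)   /* root->src missing in the report  */
            goto cleanup;            /* ... because we can get here      */
                                     /* before opts is ever assigned     */
        if (!(opts = build_opts(src)))
            goto cleanup;

        ret = 0;
     cleanup:
        free(opts);                  /* safe: free(NULL) is a no-op */
        return ret;
    }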
The connection to ESX 5.1 has been broken since g1e7cd39; the fix
in bab7752c helped a bit, but still missed a spot, so the connection
now succeeds but some APIs (for example defineXML) don't work. The
two missing cases are added in this patch to fix that.
(cherry picked from commit 9c294e6f9a)
qemu is sensitive to the order of arguments passed. Hence, if a
device requires a controller, the controller's cmd string must
precede the device's cmd string. The same applies to controllers
themselves, when for instance a ccid controller requires a usb
controller. So controllers form a partial ordering in which they
should be added to the qemu cmd line.
(cherry picked from commit 0f720ab35a)
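A minimal illustration of that partial ordering (the ranks and types
here are invented for the example, not libvirt's actual tables):
controllers that others depend on are sorted first, so their
command-line strings are emitted first:

    #include <stdlib.h>

    /* Invented ranks for the example: a controller that others depend on
     * gets a lower rank, so its command-line string is emitted first
     * (e.g. usb before ccid). */
    enum ctrl_type { CTRL_PCI, CTRL_USB, CTRL_CCID, CTRL_OTHER };

    static int
    rank(enum ctrl_type t)
    {
        switch (t) {
        case CTRL_PCI:  return 0;
        case CTRL_USB:  return 1;   /* must precede CCID ...        */
        case CTRL_CCID: return 2;   /* ... which plugs into the USB */
        default:        return 3;   /* controller                   */
        }
    }

    static int
    cmp_ctrl(const void *a, const void *b)
    {
        return rank(*(const enum ctrl_type *)a) -
               rank(*(const enum ctrl_type *)b);
    }

    /* Sort controllers into an order that respects their dependencies
     * before their -device strings (and the devices that plug into
     * them) are appended to the qemu command line. */
    static void
    order_controllers(enum ctrl_type *ctrls, size_t n)
    {
        qsort(ctrls, n, sizeof(*ctrls), cmp_ctrl);
    }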
Some FDs may not implement fdatasync() functionality,
e.g. pipes. In that case EINVAL or EROFS is returned.
We don't want to fail or report any error in that case.
Reported-by: Christophe Fergeau <cfergeau@redhat.com>
(cherry picked from commit 46325e5131)
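A sketch of that policy (not the actual libvirt helper): only errors
other than EINVAL/EROFS are treated as real failures of fdatasync():

    #include <errno.h>
    #include <unistd.h>

    /* Sync an FD, treating "this FD cannot be synced" (EINVAL/EROFS,
     * e.g. a pipe) as success rather than as a failure. */
    static int
    fdatasync_tolerant(int fd)
    {
        if (fdatasync(fd) < 0 &&
            errno != EINVAL && errno != EROFS)
            return -1;          /* a real error worth reporting */
        return 0;
    }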
When pausing the guest while migration is running (to speed up
convergence), the virDomainSuspend API checks whether the migration
job is active before entering the job. This creates a race if
virDomainSuspend is called while the job is active but the job ends
before the Suspend API enters it (this would require that the
migration is aborted), causing an incorrect event to be emitted.
(cherry picked from commit d0fc6dc831)
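The shape of the fix, as a generic sketch (all names are placeholders,
not the qemu driver's API): the reason for the pause is determined only
after the job has been entered, so a migration that ended in the
meantime cannot produce the wrong event:

    struct domain;                                  /* placeholder type */
    enum { PAUSED_USER, PAUSED_MIGRATION };

    extern int  begin_job(struct domain *dom);      /* all hypothetical */
    extern void end_job(struct domain *dom);
    extern int  migration_job_active(struct domain *dom);
    extern void pause_vcpus(struct domain *dom, int reason);

    static int
    suspend_domain(struct domain *dom)
    {
        int reason;

        if (begin_job(dom) < 0)              /* serialize with migration  */
            return -1;

        reason = migration_job_active(dom)   /* re-checked under the job, */
                 ? PAUSED_MIGRATION          /* not before entering it    */
                 : PAUSED_USER;

        pause_vcpus(dom, reason);
        end_job(dom);
        return 0;
    }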
The network driver didn't clean up config files when a network was
destroyed, only when it was undefined, leaving behind files for
transient networks.
This patch splits the cleanup code out into a helper function that
handles the cleanup when the inactive network object is being removed,
and re-uses this code when getting rid of inactive networks.
(cherry picked from commit e87af617fc)
The hosts file was created in the network definition function. This
patch moves the creation of the file to the point where dnsmasq is
started.
(cherry picked from commit 23ae3fe425)
This needs to be done before the container starts. Turning
off the mknod capability is noticed by systemd, which will
no longer attempt to create device nodes.
This eliminates SELinux AVC messages and ugly failure messages in the journal.
(cherry picked from commit 2e03b08ead)
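One way to express the idea as a sketch (illustrative only; the LXC
driver manages the whole capability set, not a single bit): drop
CAP_MKNOD from the bounding set before exec'ing the container's init:

    #include <sys/prctl.h>
    #include <linux/capability.h>

    /* Remove CAP_MKNOD from the capability bounding set in the child
     * before exec'ing the container's init, so mknod() inside the
     * container fails and systemd skips creating device nodes. */
    static int
    drop_mknod_capability(void)
    {
        if (prctl(PR_CAPBSET_DROP, CAP_MKNOD, 0, 0, 0) < 0)
            return -1;
        return 0;
    }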
Currently, when we are doing a (managed) save, we insert the
iohelper between qemu and the OS. A pipe is created, the writing
end is passed to qemu and the reading end to the iohelper, which
reads the data and writes it into the given file. However, since
write() is asynchronous, the data may still sit in OS caches, so in
some (corner) cases all migration data may have been read and
written (though not physically), and both qemu and the iohelper
report success. With some non-local filesystems, where ENOSPC is
only detected every so often, we can end up in a situation where
every operation succeeded but the data never reaches the disk.
Therefore we ought to sync the caches to make sure the data has
reached the block device on the remote host.
(cherry picked from commit f32e3a2dd6)
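A simplified sketch of that flow (not the actual iohelper code): copy
everything from the pipe into the save file, then sync before claiming
success, tolerating FDs that cannot be synced as in the earlier
fdatasync fix:

    #include <errno.h>
    #include <unistd.h>

    /* Copy everything qemu wrote into the pipe out to the save file,
     * then force it past the page cache before reporting success. */
    static int
    copy_and_sync(int pipefd, int filefd)
    {
        char buf[64 * 1024];
        ssize_t got;

        while ((got = read(pipefd, buf, sizeof(buf))) > 0) {
            if (write(filefd, buf, got) != got)   /* short writes omitted */
                return -1;
        }
        if (got < 0)
            return -1;

        if (fdatasync(filefd) < 0 &&
            errno != EINVAL && errno != EROFS)    /* cf. the pipe case above */
            return -1;
        return 0;
    }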
https://bugzilla.redhat.com/show_bug.cgi?id=871312
Recent fixes took almost all the right steps to pin the emulator to
the cpuset of the whole domain when <emulatorpin> isn't specified,
but qemudDomainGetEmulatorPinInfo still reports all the CPUs even
when a cpuset is specified. This patch fixes that.
(cherry picked from commit 10c5212b10)
Three FORWARD chain rules and two INPUT chain rules are added when
a network is started, but only the FORWARD chain rules are removed
when the network is destroyed.
(cherry picked from commit adaa7ab653)
This patch resolves: https://bugzilla.redhat.com/show_bug.cgi?id=871201
If libvirt is restarted after updating the dnsmasq or radvd packages,
a subsequent "virsh net-destroy" will fail to kill the dnsmasq/radvd
process.
The problem is that when libvirtd restarts, it re-reads the dnsmasq
and radvd pidfiles, then does a sanity check on each pid it finds,
including checking that the symbolic link in /proc/$pid/exe actually
points to the same file as the path used by libvirt to execute the
binary in the first place. If this fails, libvirt assumes that the
process is no longer alive.
But if the original binary has been replaced, the link in /proc is set
to "$binarypath (deleted)" (it literally has the string " (deleted)"
appended to the link text stored in the filesystem), so even if a new
binary exists in the same location, attempts to resolve the link will
fail.
In the end, not only is the old dnsmasq/radvd not terminated when the
network is stopped, but a new dnsmasq can't be started when the
network is later restarted (because the original process is still
listening on the ports that the new process wants).
The solution is, when the initial "use stat to check for identical
inodes" check for identity between /proc/$pid/exe and $binpath fails,
to check /proc/$pid/exe for a link ending with " (deleted)" and if so,
truncate that part of the link and compare what's left with the
original binarypath.
A twist to this problem is that on systems with "merged" /sbin and
/usr/sbin (i.e. /sbin is really just a symlink to /usr/sbin; Fedora
17+ is an example of this), libvirt may have started the process using
one path, but /proc/$pid/exe lists a different path (indeed, on F17
this is the case - libvirtd uses /sbin/dnsmasq, but /proc/$pid/exe
shows "/usr/sbin/dnsmasq"). The further bit of code to resolve this is
to call virFileResolveAllLinks() on both the original binarypath and
on the truncated link we read from /proc/$pid/exe, and compare the
results.
The resulting code still succeeds in all the same cases it did before,
but also succeeds if the binary was deleted or replaced after it was
started.
(cherry picked from commit 7bafe009d9)
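The check can be sketched roughly like this (a simplified, hypothetical
helper with minimal error handling, standing in for the actual
virProcess/virFile code):

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Does /proc/$pid/exe (even if marked " (deleted)") refer to the same
     * program as binpath, after resolving every symlink on both sides? */
    static int
    pid_matches_binary(pid_t pid, const char *binpath)
    {
        char proc[64], link[PATH_MAX];
        char *resolved_link = NULL, *resolved_bin = NULL;
        ssize_t len;
        int ret = 0;

        snprintf(proc, sizeof(proc), "/proc/%lld/exe", (long long)pid);
        if ((len = readlink(proc, link, sizeof(link) - 1)) < 0)
            return 0;
        link[len] = '\0';

        /* "$binarypath (deleted)" -> "$binarypath" */
        if (len > 10 && strcmp(link + len - 10, " (deleted)") == 0)
            link[len - 10] = '\0';

        resolved_link = realpath(link, NULL);   /* virFileResolveAllLinks-ish */
        resolved_bin  = realpath(binpath, NULL);
        if (resolved_link && resolved_bin)
            ret = strcmp(resolved_link, resolved_bin) == 0;

        free(resolved_link);
        free(resolved_bin);
        return ret;
    }

Comparing the fully resolved paths is what makes the /sbin vs
/usr/sbin merge case come out equal as well.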
After separating 5.x and 5.1 versions of ESX, we forgot to add 5.1
to the list of allowed connections, so connections to 5.1 have failed
since v1.0.0-rc1-5-g1e7cd39
(cherry picked from commit bab7752c0c)
This resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=881480
These three functions:
virDomainNetGetActualBridgeName
virDomainNetGetActualDirectDev
virDomainNetGetActualDirectMode
return attributes that are in a union whose contents are interpreted
differently depending on actual->type, and so they should only
return non-0 when actual->type is 'bridge' (in the first case) or
'direct' (in the other two cases). But I had neglected to do that, so
...DirectDev() was returning bridge.brname (which happens to share the
same spot in the union with direct.linkdev) if actual->type was
'bridge', and ...BridgeName was returning direct.linkdev when
actual->type was 'direct'.
How does this involve Bug 881480 (which was about the inability to
switch between two networks that both have "<forward mode='bridge'/>
<bridge name='xxx'/>")? Whenever the return value of
virDomainNetGetActualDirectDev() for the new and old network
definitions doesn't match, qemuDomainChangeNet() requires a "complete
reconnect" of the device, which qemu currently doesn't
support. ...DirectDev() *should* have been returning NULL for old and
new, but was instead returning the old and new bridge names, which
differ.
(The other two functions weren't causing any behavioral problems in
virDomainChangeNet(), but their problem and fix were identical, so I
included them in this same patch.)
(cherry picked from commit 3738cf41f1)
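The rule being enforced, as a sketch with an invented struct (not
libvirt's virDomainActualNetDef): a union member is only returned when
the discriminant says it is the active one:

    /* An invented discriminated union standing in for the actual net def. */
    enum actual_type { ACTUAL_BRIDGE, ACTUAL_DIRECT };

    struct actual_net {
        enum actual_type type;
        union {
            struct { char *brname; }  bridge;  /* valid when type == BRIDGE */
            struct { char *linkdev; } direct;  /* valid when type == DIRECT */
        } data;
    };

    static const char *
    get_bridge_name(const struct actual_net *actual)
    {
        if (actual->type != ACTUAL_BRIDGE)
            return NULL;                       /* the check that was missing */
        return actual->data.bridge.brname;
    }

    static const char *
    get_direct_dev(const struct actual_net *actual)
    {
        if (actual->type != ACTUAL_DIRECT)
            return NULL;
        return actual->data.direct.linkdev;
    }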
This bug resolves CVE-2012-3411, which is described in the following
bugzilla report:
https://bugzilla.redhat.com/show_bug.cgi?id=833033
The following report is specifically for libvirt on Fedora:
https://bugzilla.redhat.com/show_bug.cgi?id=874702
In short, a dnsmasq instance run with the intention of listening for
DHCP/DNS requests only on a libvirt virtual network (which is
constructed using a Linux host bridge) would also answer queries sent
from outside the virtualization host.
This patch takes advantage of a new dnsmasq option "--bind-dynamic",
which causes the listening socket to be set up such that it will
only receive those requests that actually come in via the bridge
interface. In order for this behavior to actually occur, not only must
"--bind-interfaces" be replaced with "--bind-dynamic", but also all
"--listen-address" options must be replaced with a single
"--interface" option. In full:
--bind-interfaces --except-interface lo --listen-address x.x.x.x ...
(with --listen-address possibly repeated) is replaced with:
--bind-dynamic --interface virbrX
Of course libvirt can't use this new option if the host's dnsmasq
doesn't have it, but we still want libvirt to function (because the
great majority of libvirt installations, which only have mode='nat'
networks using RFC1918 private address ranges (e.g. 192.168.122.0/24),
are immune to this vulnerability from anywhere beyond the local subnet
of the host), so we use the new dnsmasqCaps API to check if dnsmasq
supports the new option and, if not, we use the "old" option style
instead. To ensure that this permissiveness doesn't lead to a
vulnerable system, we do check for non-private addresses in this case,
and refuse to start the network if both a) we are using the old-style
options, and b) the network has a publicly routable IP
address. Hopefully this will provide the proper balance of not being
disruptive to those not practically affected, and making sure that
those who *are* affected get their dnsmasq upgraded.
(--bind-dynamic was added to dnsmasq in upstream commit
54dd393f3938fc0c19088fbd319b95e37d81a2b0, which was included in
dnsmasq-2.63)
This new function returns true if the given address is in the range of
any "private" or "local" networks as defined in RFC1918 (IPv4) or
RFC3484/RFC4193 (IPv6); otherwise it returns false.
These ranges are:
192.168.0.0/16
172.16.0.0/12
10.0.0.0/8
FC00::/7
FEC0::/10
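For the IPv4 part, the check amounts to something like this sketch (not
the actual virSocketAddrIsPrivate implementation, which also covers the
IPv6 ranges):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* True if an IPv4 address falls inside one of the RFC1918 blocks. */
    static bool
    ipv4_is_private(struct in_addr addr)
    {
        uint32_t val = ntohl(addr.s_addr);

        return (val & 0xff000000) == 0x0a000000 ||   /* 10.0.0.0/8     */
               (val & 0xfff00000) == 0xac100000 ||   /* 172.16.0.0/12  */
               (val & 0xffff0000) == 0xc0a80000;     /* 192.168.0.0/16 */
    }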
In order to optionally take advantage of new features in dnsmasq when
the host's version of dnsmasq supports them, but still be able to run
on hosts that don't support the new features, we need to be able to
detect the version of dnsmasq running on the host, and possibly
determine from the help output which options this dnsmasq supports.
This patch implements a greatly simplified version of the capabilities
code we already have for qemu. A dnsmasqCaps object can be created and
populated either by running the binary on disk, by reading a file with
the concatenated output of "dnsmasq --version; dnsmasq --help", or by
examining a buffer in memory that contains the concatenated output of
those two commands. Simple functions to retrieve capabilities flags,
the version number, and the path of the binary are also included.
bridge_driver.c creates a single dnsmasqCaps object at driver startup,
and disposes of it at driver shutdown. Any time it must be used, the
dnsmasqCapsRefresh method is called - it checks the mtime of the
binary, and re-runs the checks if the binary has changed.
networkxml2argvtest.c creates 2 "artificial" dnsmasqCaps objects at
startup - one "restricted" (doesn't support --bind-dynamic) and one
"full" (does support --bind-dynamic). Some of the test cases use one
and some the other, to make sure both code paths are tested.
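A greatly reduced sketch of the detection step (struct and function
names, and the exact "Dnsmasq version" output format, are assumptions):
scan the concatenated --version/--help text for the version number and
the options of interest:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct dnsmasq_caps {
        unsigned long version;      /* e.g. 2.63 encoded as 2063000 */
        bool bind_dynamic;
    };

    static void
    parse_dnsmasq_output(struct dnsmasq_caps *caps, const char *buf)
    {
        unsigned int major = 0, minor = 0;
        const char *ver = strstr(buf, "Dnsmasq version ");

        if (ver && sscanf(ver, "Dnsmasq version %u.%u", &major, &minor) == 2)
            caps->version = major * 1000000UL + minor * 1000UL;

        caps->bind_dynamic = strstr(buf, "--bind-dynamic") != NULL;
    }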
When libvirt cannot find a suitable CPU model for the host CPU (easily
reproducible by running libvirt in a guest), it would not provide CPU
topology in the capabilities XML either, even though the CPU topology
is known and can be queried by virNodeGetInfo. With this patch, CPU
topology will always be provided in the capabilities XML regardless of
the presence of a CPU model.
(cherry picked from commit f1c7010040)
Conflicts:
src/qemu/qemu_capabilities.c
src/qemu/qemu_command.c
The new code uses capabilities caching.
In Fedora 16, we quit enabling cgconfig because systemd set up
default cgroups that were good enough for our use. But in F17,
when we switched to systemd, we reverted and started up cgconfig
again. See also the tail of this thread:
https://www.redhat.com/archives/libvir-list/2012-October/msg01657.html
* libvirt.spec.in (with_systemd): Rely on systemd for cgroups.
(cherry picked from commit b61eadf3c6)
The string comparison logic was inverted and matched the first drive
that does *not* have the name we search for.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 23d47b33a2)
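The shape of the bug, as a tiny illustration (invented list type, not
the QEMU monitor code): with the comparison inverted, the loop returns
the first drive whose name does not match:

    #include <stddef.h>
    #include <string.h>

    struct drive { const char *name; struct drive *next; };

    static struct drive *
    find_drive(struct drive *head, const char *name)
    {
        struct drive *d;

        for (d = head; d; d = d->next)
            if (strcmp(d->name, name) == 0)  /* buggy version tested != 0 */
                return d;
        return NULL;
    }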
The QEMU -drive id= begins with libvirt's QEMU host drive prefix
("drive-"), which is stripped off in several places two convert between
host ("-drive") and guest ("-device") device names.
In the case of BlkIoTune it is unnecessary to strip the QEMU host drive
prefix because we operate on "info block"/"query-block" output that uses
host drive names.
Stripping the prefix incorrectly caused string comparisons to fail since
we were comparing the guest device name against the host device name.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 04ee70bfda)
Found this when building on RHEL5:
parallels/parallels_storage.c: In function 'parallelsStorageOpen':
parallels/parallels_storage.c:180: error: 'for' loop initial declaration used outside C99 mode
(and a similar error in parallels_driver.c). This was in spite of
configuring with "-Wno-error".
(cherry picked from commit 73ebd86d73)
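The C89-friendly form gcc on RHEL 5 expects, shown on a trivial
example: declare the loop variable before the for statement instead of
inside it:

    #include <stddef.h>

    static int
    sum(const int *vals, size_t n)
    {
        size_t i;                   /* not: for (size_t i = 0; ...) */
        int total = 0;

        for (i = 0; i < n; i++)
            total += vals[i];
        return total;
    }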
Replace '%' by '&' for correct escaping of '>' in Domain specification.
Signed-off-by: Philipp Hahn <hahn@univention.de>
(cherry picked from commit 7083cdc7bd)
This was found during testing of the fix for:
https://bugzilla.redhat.com/show_bug.cgi?id=868483
networkValidate was supposed to check for the existence of multiple
portgroups and report an error if this was encountered. It did, but
there were two problems:
1) Even though it logged an error, it still returned success, allowing
the operation to continue.
2) It could exit the portgroup checking loop early (or possibly not
even do it once) if a vlan tag was supplied in the base network config
or one of the portgroups.
This patch fixes networkValidate to return failure in addition to
logging the error, and also changes it to not exit the portgroup
checking loop early. The logic was a bit off in the checking for vlan
anyway, and it's intertwined with fixing the early loop exit, so I
fixed that as well. Now it correctly checks for combinations where a
<virtualport> is specified in the base network def and <vlan> is given
in a portgroup, as well as the opposite (<vlan> in base network def
and <virtualport> in portgroup), and ignores the case of a disallowed
vlan when using *no* portgroup if there is a default portgroup (since
in that case there is no way to not use any portgroup).
(cherry picked from commit d8aae15aa1)
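The control-flow part of the fix can be sketched like this (struct
layout, helper names, and the duplicate-default check are illustrative,
not the bridge driver's code): every portgroup is visited, and a
detected problem makes the function fail rather than merely log:

    #include <stdbool.h>
    #include <stddef.h>

    struct portgroup { bool is_default; /* ... */ };
    struct net_def   { size_t nportgroups; struct portgroup *portgroups; };

    extern void report_error(const char *msg);                  /* hypothetical */
    extern bool portgroup_allowed(const struct net_def *def,
                                  const struct portgroup *pg);  /* hypothetical */

    static int
    validate_network(const struct net_def *def)
    {
        bool default_seen = false;
        size_t i;

        for (i = 0; i < def->nportgroups; i++) {
            const struct portgroup *pg = &def->portgroups[i];

            if (pg->is_default) {
                if (default_seen) {
                    report_error("multiple default portgroups");
                    return -1;       /* was: error logged but 0 returned */
                }
                default_seen = true;
            }

            /* the vlan/virtualport combination check now runs for every
             * portgroup -- no early exit from this loop */
            if (!portgroup_allowed(def, pg)) {
                report_error("unsupported vlan/portgroup combination");
                return -1;
            }
        }
        return 0;
    }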
Also remove warnings for upcoming versions. There haven't been any
compatibility problems with new ESX versions over the whole lifetime
of the ESX driver, so I don't expect any in the future.
Update documentation to mention vSphere 5.x support.
(cherry picked from commit 1e7cd39511)
In commit 371ddc98, I mistakenly added the check for sysctl
version 9 after setting the hypercall version to 1, which will
fail with
error : xenHypervisorDoV1Op:967 : Unable to issue hypervisor
ioctl 3166208: Function not implemented
This check should be included along with the others that use
hypercall version 2.
(cherry picked from commit 9785f2b6f2)
When restoring selinux labels after a VM is stopped, any non-standard
path that doesn't have a default selinux label causes the process
to stop and exit early. This isn't really an error condition IMO.
Of course the selinux API could be erroring for some other reason
but hopefully that's rare enough to not need explicit handling.
Common example here is storing disk images in a non-standard location
like under /mnt.
(cherry picked from commit 767be8be72)
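A simplified sketch of the relaxed behaviour (assuming matchpathcon()
reports a missing default context via ENOENT; the security driver's
real code differs): a path with no default label is skipped instead of
aborting the whole restore pass:

    #include <errno.h>
    #include <sys/types.h>
    #include <selinux/selinux.h>

    static int
    restore_default_label(const char *path, mode_t mode)
    {
        char *con = NULL;
        int ret;

        if (matchpathcon(path, mode, &con) < 0) {
            if (errno == ENOENT)
                return 0;        /* no default label (e.g. /mnt/...): skip */
            return -1;           /* some other selinux failure             */
        }

        ret = setfilecon(path, con);
        freecon(con);
        return ret;
    }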
Rename the 'wait' parameter to 'loop'.
This silences the warning:
storage/storage_backend.c:1348:34: error: declaration of 'wait' shadows
a global declaration [-Werror=shadow]
and fixes the build with -Werror.
--
Note: loop is pool backwards.
(cherry picked from commit b326765c80)
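A trivial illustration of the collision and the rename (the example
function is made up):

    #include <stdbool.h>
    #include <sys/wait.h>    /* declares wait(2), which the old name shadowed */

    static int
    poll_for_result(bool loop)   /* was: bool wait */
    {
        return loop ? 10 : 1;    /* number of attempts, say */
    }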
virStorageVolLookupByPath is an API call that virt-manager uses
quite a bit when dealing with storage. This call uses BackendStablePath,
which has several usleep() heuristics that can be tripped up
and hang virt-manager for a while.
Current example: an empty mpath pool pointing to /dev/mapper makes
_any_ calls to virStorageVolLookupByPath take 5 seconds.
The sleep heuristics are actually only needed in certain cases
when we are waiting for new storage to appear, so let's skip the
timeout steps when calling from LookupByPath.
(cherry picked from commit 77eff5eeb2)
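A sketch of the resulting control flow (find_link_in_dir and the retry
numbers are placeholders, not the storage backend's code): the
usleep()/retry heuristic only runs when the caller is actually waiting
for new storage to appear:

    #include <stdbool.h>
    #include <stddef.h>
    #include <unistd.h>

    extern char *find_link_in_dir(const char *dir, const char *dev); /* hypothetical */

    static char *
    stable_path(const char *devpath, const char *pooldir, bool loop)
    {
        int retries = loop ? 100 : 1;    /* ~5s of polling vs. one attempt */

        while (retries--) {
            char *found = find_link_in_dir(pooldir, devpath);
            if (found)
                return found;
            if (retries)
                usleep(50 * 1000);
        }
        return NULL;
    }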
It might take some time until the LUN's stable path shows up on the
initiator host, and although the time window is not foreseeable, this
patch adds a timeout for the stable path discovery process as a
better-than-nothing fix.
(cherry picked from commit de7f0774c3)