For some time now, the pinning information for emulatorpin and vcpupin
calls has been returned from our own data instead of querying cgroups.
However, not all of that data is utilized: when automatic placement is
used, no information is returned for the calls mentioned above. Since
the numad hint in the private data is properly saved and restored, we
can safely use it to return accurate information.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1162947
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The numad hint stored in priv->autoNodeset is information that gets lost
during a daemon restart. Because we would like to use that information
in the future, we also need to save it in the status XML.
For the sake of tests, we need to initialize nnumaCell_max to some
value so that the restoration doesn't fail in our test suite. There is
no need to fill in the actual NUMA cell data since the recalculating
function virCapabilitiesGetCpusForNodemask() will not fail; it will just
skip filling in the bitmap, which we don't use in the tests anyway.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
When parsing private domain data, there are two paths that are flawed.
They are both error paths, just from different parts of the function.
One of them can call free() on an uninitialized pointer; initializing
it to NULL is enough here. The other one is a bit trickier to explain,
but as easy as the first one to fix. We create capabilities, parse them
and then assign them into the private data pointer inside the domain
object. If, however, anything fails after that point, the error path
unrefs the capabilities and then, when the domain object is being
cleaned up, qemuDomainObjPrivateFree() tries to unref them as well.
That causes a segfault. Setting the local pointer to NULL upon
successful addition to the private data is enough.
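A minimal, self-contained sketch of the second fix, using a refcounted
object analogous to the capabilities (the names below are illustrative,
not libvirt's API):

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for a refcounted capabilities object (illustrative only). */
typedef struct { int refs; } Caps;
typedef struct { Caps *caps; } Priv;

static Caps *caps_new(void)
{
    Caps *c = calloc(1, sizeof(*c));
    if (c)
        c->refs = 1;
    return c;
}

static void caps_unref(Caps *c)
{
    if (!c)
        return;
    if (--c->refs == 0)
        free(c);
}

static int parse_private_data(Priv *priv, int fail_later)
{
    Caps *caps = NULL;          /* fix #1: initialize, so the error path
                                 * never frees an uninitialized pointer */
    int ret = -1;

    if (!(caps = caps_new()))
        goto error;

    priv->caps = caps;          /* ownership moved into the private data */
    caps = NULL;                /* fix #2: drop the local reference so the
                                 * error path below can't unref it again */

    if (fail_later)             /* simulate a failure after the handover */
        goto error;

    ret = 0;
 error:
    caps_unref(caps);           /* safe: either NULL or still ours */
    return ret;
}

int main(void)
{
    Priv priv = { NULL };
    parse_private_data(&priv, 1);
    caps_unref(priv.caps);      /* analogue of qemuDomainObjPrivateFree() */
    return 0;
}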
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1210587 (completed)
When generating the default drive address for a SCSI <disk> device,
check the generated address to ensure it doesn't conflict with a SCSI
<hostdev> address. The <disk> address generation algorithm uses the
<target> "dev" name to determine on which controller and unit to place
the device. Since a SCSI <hostdev> device doesn't require a target
device name, its placement in the guest SCSI address space could
conflict. For instance, if a SCSI <hostdev> exists at controller=0
unit=0 and an attempt is made to hotplug 'sda' into the guest, there
would be a conflict because 'sda' also maps to controller=0 unit=0.
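For illustration, a minimal sketch of the kind of name-based mapping
described above; the single-letter parsing and the 7-units-per-controller
layout are assumptions for illustration, not libvirt's exact code:

#include <stdio.h>
#include <string.h>

/* Hypothetical helper: map a single-letter SCSI target name to an index. */
static int disk_name_to_index(const char *dev)
{
    if (strncmp(dev, "sd", 2) != 0 ||
        dev[2] < 'a' || dev[2] > 'z' || dev[3] != '\0')
        return -1;              /* multi-letter names omitted for brevity */
    return dev[2] - 'a';
}

int main(void)
{
    const char *dev = "sda";
    int idx = disk_name_to_index(dev);

    if (idx < 0)
        return 1;

    /* Assumed layout: 7 usable units per SCSI controller.  'sda' lands on
     * controller=0 unit=0, which is where the conflict with an existing
     * <hostdev> at that address comes from. */
    printf("%s -> controller=%d unit=%d\n", dev, idx / 7, idx % 7);
    return 0;
}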
https://bugzilla.redhat.com/show_bug.cgi?id=1210587 (partial)
If a SCSI subsystem <hostdev> element address is provided, we need to
make sure the provided address doesn't conflict with an existing or
libvirt-generated address for a SCSI <disk> element. We can handle
this condition in device post processing since we're not generating an
address based on some target name; rather, it's either generated based
on available space or provided by the user. If the user provides one
that conflicts, then we need to disallow the change.
This fixes the issue where the domain XML provides an <address> for
the <hostdev> but not for the <disk> element, and the provided address
ends up being the same address generated for the <disk>. A <disk>
address is generated from its assigned <target> 'dev' name prior to
the check/validation of the <hostdev> address value.
Hot-unplugging a disk from a guest that supports hot-unplugging generates an error
in the libvirt log when running QEMU with the "-msg timestamp=on" flag.
2015-08-06 10:48:59.945+0000: 11662: error : qemuMonitorTextDriveDel:2594 :
operation failed: deleting drive-virtio-disk4 drive failed:
2015-08-06T10:48:59.945058Z Device 'drive-virtio-disk4' not found
This error is caused by the HMP results being prefixed with a timestamp,
which makes parsing the output with STRPREFIX unreliable. Using strstr
ensures that parsing works whether or not the results carry the prefix.
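A small standalone illustration of the difference (plain strncmp stands
in for libvirt's STRPREFIX macro; the reply strings are taken from the
log above):

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *needle  = "Device ";
    const char *plain   = "Device 'drive-virtio-disk4' not found";
    const char *stamped = "2015-08-06T10:48:59.945058Z "
                          "Device 'drive-virtio-disk4' not found";

    /* Prefix matching (the STRPREFIX approach) only hits the plain form. */
    printf("prefix: plain=%d stamped=%d\n",
           strncmp(plain,   needle, strlen(needle)) == 0,
           strncmp(stamped, needle, strlen(needle)) == 0);

    /* strstr finds the needle in both forms. */
    printf("strstr: plain=%d stamped=%d\n",
           strstr(plain,   needle) != NULL,
           strstr(stamped, needle) != NULL);
    return 0;
}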
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Frank Schreuder <fschreuder@transip.nl>
This reverts commit ede34470fd, which
was apparently written based on testing performed before commits
1e15be1 and 9a12b6 were pushed upstream. Once those two patches are in
place, commit ede34470 is redundant, and can even cause
incorrect/unexpected behavior when auto-assigning addresses for
virtio-net devices.
In my previous commit d7f5c88961 I tried to introduce support for
inbound.floor, but the code change was incomplete. This is the change
needed to fully enable the feature.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
There's a check right at the beginning of the function that
shortcuts if the function was called with all-NULL arguments.
However, this was meant just as a fool-proof check so that we
don't crash if the function is used in a bad manner. Anyway, it
makes Coverity unhappy, as it then thinks any of the arguments
could be NULL. Well, with the current state of the code they can't.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
It may happen that an interface doesn't have any bandwidth set yet
and a new one is to be set. In that case, @ifaceBand will be NULL,
which causes trouble later in the code when deciding what to do.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit 6983d6d2 tried to improve parseRateStr but broke the build
instead for compilers that were not able to properly introspect the for
loop indexed by the enum, resulting in the following error:
virsh-domain.c: In function 'parseRateStr':
virsh-domain.c:916:13: error: 'field_name' may be used uninitialized in this function [-Werror=maybe-uninitialized]
vshError(ctl, _("malformed %s field"), field_name);
^
virsh-domain.c:915:13: error: 'tmp' may be used uninitialized in this function [-Werror=maybe-uninitialized]
if (virStrToLong_ullp(token, NULL, 10, tmp) < 0) {
^
Rather than trying to fix the code, refactor the function again by
reusing virStringSplit.
If you pass <disk><serial> XML to UpdateDevice, and the original device
didn't have a <serial> block, libvirtd crashes trying to read the original
NULL serial string.
Use _NULLABLE string comparisons to avoid the crash. A couple of other
properties needed the change too.
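A minimal sketch of the NULL-tolerant comparison idea, in the spirit of
libvirt's STREQ_NULLABLE (the helper and values below are illustrative):

#include <stdio.h>
#include <string.h>

/* Two NULLs compare equal, NULL never equals a real string, and strcmp()
 * is only reached with two non-NULL arguments. */
static int streq_nullable(const char *a, const char *b)
{
    if (!a || !b)
        return a == b;
    return strcmp(a, b) == 0;
}

int main(void)
{
    const char *old_serial = NULL;          /* original <disk> had no <serial> */
    const char *new_serial = "WD-1234";     /* serial supplied via UpdateDevice */

    /* A plain strcmp(old_serial, new_serial) would dereference NULL here. */
    printf("serial changed: %s\n",
           streq_nullable(old_serial, new_serial) ? "no" : "yes");
    return 0;
}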
Commit e8d5517 updated the domain post-parse to automatically add
pcie-root et al for certain ARM "virt" machinetypes, but didn't update
the function qemuDomainSupportsPCI() which is called later on when we
are auto-assigning PCI addresses and default settings for the PCI
controller <model> and <target> attributes. The result was that PCI
addresses weren't assigned, and the controllers didn't have their
attribute default values set, leading to an error when the domain was
started, e.g.:
internal error: autogenerated dmi-to-pci-bridge options not set
This patch adds the same check made in the earlier patch to
qemuDomainSupportsPCI(), so that PCI address auto-assignment and
target/model default values will be set.
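A hedged sketch of the kind of machine-type check the commit describes;
the helper, its name and the exact machine strings are assumptions for
illustration, not libvirt's code:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool supports_pci(const char *arch, const char *machine)
{
    /* Most architectures get PCI unconditionally. */
    if (strcmp(arch, "armv7l") != 0 && strcmp(arch, "aarch64") != 0)
        return true;

    /* ARM/aarch64: only the "virt" machine types grow a pcie-root. */
    return strcmp(machine, "virt") == 0 ||
           strncmp(machine, "virt-", 5) == 0;
}

int main(void)
{
    printf("%d %d\n",
           supports_pci("aarch64", "virt"),
           supports_pci("aarch64", "vexpress-a15"));
    return 0;
}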
When running the test suite using "unshare -n" we might have IPv6 but no
configured addresses. Due to AI_ADDRCONFIG, getaddrinfo() then fails with
EAI_NONAME, which we should treat as IPv6 being unavailable.
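A self-contained sketch of a probe along these lines; the "localhost"
lookup and the exact error handling are assumptions for illustration,
not libvirt's actual check:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <sys/types.h>

static bool have_ipv6(void)
{
    struct addrinfo hints;
    struct addrinfo *res = NULL;
    int ret;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET6;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_ADDRCONFIG;   /* only succeed if IPv6 is configured */

    ret = getaddrinfo("localhost", NULL, &hints, &res);
    if (ret == 0) {
        freeaddrinfo(res);
        return true;
    }

    if (ret == EAI_NONAME)
        return false;   /* no configured IPv6 address, e.g. under unshare -n */

    /* Other failures would deserve reporting in real code; the sketch just
     * treats them as "unavailable" as well. */
    return false;
}

int main(void)
{
    printf("IPv6 %s\n", have_ipv6() ? "available" : "unavailable");
    return 0;
}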
This fixes the crash described here:
https://www.redhat.com/archives/libvir-list/2015-August/msg00162.html
In short, we were calling ioctl(SIOCETHTOOL) pointing to a too-short
local object on the stack, resulting in the memory past the end of the
object being overwritten. This was because the struct used by the
ETHTOOL_GFEATURES command of SIOCETHTOOL ends with a 0-length array,
but we were telling ethtool that it could use 2 elements of the array.
The fix is to allocate the necessary memory with VIR_ALLOC_VAR(),
including the extra length needed for a 2-element array at the end.
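A standalone sketch of the allocation pattern (plain calloc() stands in
for VIR_ALLOC_VAR(), and the interface name is an assumption for
illustration):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *ifname = argc > 1 ? argv[1] : "eth0"; /* assumed interface */
    struct ifreq ifr;
    struct ethtool_gfeatures *cmd;
    int fd;

    /* struct ethtool_gfeatures ends in a zero-length 'features' array, so
     * room for the requested blocks must be allocated explicitly: here,
     * 2 blocks, enough for the first 64 feature bits. */
    cmd = calloc(1, sizeof(*cmd) + 2 * sizeof(cmd->features[0]));
    if (!cmd)
        return 1;
    cmd->cmd = ETHTOOL_GFEATURES;
    cmd->size = 2;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (char *)cmd;

    fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd >= 0 && ioctl(fd, SIOCETHTOOL, &ifr) == 0)
        printf("first feature block: 0x%x\n", cmd->features[0].active);

    if (fd >= 0)
        close(fd);
    free(cmd);
    return 0;
}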
Well, there are just two places that need adjustment:
qemuDomainGetInterfaceParameters - to report the @floor
qemuDomainSetInterfaceParameters - now that the function has been
fixed, we can allow updating @floor too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
We have a function parseRateStr() that parses --inbound and
--outbound arguments to both attach-interface and domiftune.
Now that we have all the virTypedParams macros needed for QoS,
let's parse the floor attribute too. The extended format for the
arguments then looks like this:
--inbound average[,peak[,burst[,floor]]]
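A self-contained sketch of parsing such a comma-separated tuple; the
helper below is illustrative and not parseRateStr() itself:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    unsigned long long average, peak, burst, floor;
} RateSketch;

static int parse_rate(const char *str, RateSketch *rate)
{
    char *copy = strdup(str);
    char *saveptr = NULL;
    char *token;
    unsigned long long *fields[] = { &rate->average, &rate->peak,
                                     &rate->burst, &rate->floor };
    size_t i = 0;
    int ret = -1;

    if (!copy)
        return -1;

    for (token = strtok_r(copy, ",", &saveptr); token;
         token = strtok_r(NULL, ",", &saveptr)) {
        char *end;
        if (i >= 4)             /* more than four fields is malformed */
            goto cleanup;
        *fields[i++] = strtoull(token, &end, 10);
        if (*end != '\0')       /* non-numeric field is malformed */
            goto cleanup;
    }
    ret = i > 0 ? 0 : -1;       /* at least "average" must be present */
 cleanup:
    free(copy);
    return ret;
}

int main(void)
{
    RateSketch r = { 0 };
    if (parse_rate("100,200,256,50", &r) == 0)
        printf("average=%llu peak=%llu burst=%llu floor=%llu\n",
               r.average, r.peak, r.burst, r.floor);
    return 0;
}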
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The function is used to parse a comma-delimited tuple into a
virNetDevBandwidth structure. So far only three out of four fields are
supported: average, peak and burst. The single missing field is floor.
Well, the parsing works, but I think we can do better, especially
since we will need to parse floor too in the very near future.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
As sketched in previous commits, imagine the following scenario:
virsh # domiftune gentoo vnet0
inbound.average: 100
inbound.peak : 0
inbound.burst : 0
outbound.average: 100
outbound.peak : 0
outbound.burst : 0
virsh # domiftune gentoo vnet0 --inbound 0
virsh # shutdown gentoo
Domain gentoo is being shutdown
virsh # list --all
error: Failed to list domains
error: Cannot recv data: Connection reset by peer
Program received signal SIGSEGV, Segmentation fault.
0x00007fffe80ea221 in networkUnplugBandwidth (net=0x7fff9400c1a0, iface=0x7fff940ea3e0) at network/bridge_driver.c:4881
4881 net->floor_sum -= ifaceBand->in->floor;
This is rather unfortunate. We should not SIGSEGV here. The
problem is that while in the second step the inbound QoS was
cleared out, the network part of it was not updated (moreover, we
don't report that vnet0 had inbound.floor set). The internal
structure therefore still had some fragments left (e.g.
class_id). So when qemuProcessStop() started to clean up the
environment it got to networkUnplugBandwidth(). Here, class_id is
set, so the function assumes that there is an inbound QoS. That
actually is a fair assumption to make; there's no need for a
special QoS box in the network's QoS when there's no QoS to set.
Anyway, the problem is not in networkUnplugBandwidth() but rather
in qemuDomainSetInterfaceParameters(), which completely forgot
that the QoS is dispersed (some parts are set directly on the
interface itself, some on the bridge the interface is plugged
into).
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So, if a domain vNIC's bandwidth has been successfully set, it's
possible that, because @floor is set on the network's bridge, this
part may need updating too. And that's exactly what this function
does. While the previous commit introduced a function to check if
@floor can be satisfied, this one does all the hard work. In
general, there may be three, well, four possibilities:
1) No change in the @floor value (either it remains unset, or its
value hasn't changed)
2) The @floor value has changed from a non-zero to a non-zero
value
3) A new @floor is to be set
4) The old @floor must be cleared out
The difference between 2), 3) and 4) is that while in 2) the QoS
tree on the network's bridge already has a special class for the
vNIC, in 3) the class must be created from scratch, and in 4) it
must be removed. Fortunately, we have helpers for all three
interesting cases.
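A tiny sketch of the decision described above; the names and the helper
are illustrative, not libvirt's API:

#include <stdio.h>

typedef enum { FLOOR_NOOP, FLOOR_UPDATE, FLOOR_PLUG, FLOOR_UNPLUG } FloorAction;

/* Decide what to do with the bridge-side class given the old and new
 * @floor values (0 meaning "unset"). */
static FloorAction floor_action(unsigned long long oldFloor,
                                unsigned long long newFloor)
{
    if (oldFloor == newFloor)
        return FLOOR_NOOP;      /* case 1: nothing changed */
    if (oldFloor && newFloor)
        return FLOOR_UPDATE;    /* case 2: adjust the existing class */
    if (newFloor)
        return FLOOR_PLUG;      /* case 3: create the class from scratch */
    return FLOOR_UNPLUG;        /* case 4: remove the class */
}

int main(void)
{
    printf("%d %d %d %d\n",
           floor_action(0, 0),      /* NOOP   */
           floor_action(100, 200),  /* UPDATE */
           floor_action(0, 100),    /* PLUG   */
           floor_action(100, 0));   /* UNPLUG */
    return 0;
}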
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When a domain vNIC's bandwidth is to be changed at runtime, it is
possible that the guaranteed minimal bandwidth (@floor) will change too.
Well, so far it can only be erased, because we still don't have an
implementation that allows setting it dynamically, so it's effectively
cleared by:
# virsh domiftune $dom vnet0 --inbound 0
However, that's slightly unfortunate. We do some checks on domain
startup to see if @floor can be guaranteed. We ought to do the same if
the QoS is changed at runtime.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This is no functional change. It's just that later in the series we
will need to pass class_id as an integer.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
There is no guarantee that an enum starts at a value of zero.
However, we are guaranteed that enum items are consecutive
integers. Moreover, it's a pity to define an enum to avoid using
magic constants but then use them anyway.
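A minimal standalone example of iterating over such an enum without
assuming it starts at zero (the enum itself is made up for
illustration):

#include <stdio.h>

typedef enum {
    MENU_FIRST = 3,      /* deliberately not zero */
    MENU_APPLE = MENU_FIRST,
    MENU_PEAR,
    MENU_PLUM,
    MENU_LAST
} Menu;

int main(void)
{
    /* start from the first enumerator, not from a magic 0 */
    for (int i = MENU_FIRST; i < MENU_LAST; i++)
        printf("item %d\n", i);
    return 0;
}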
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit a6f9af8292 added checking for address collisions between
starting and ending addresses of forwarding addresses, but forgot that
there might be no addresses set at all.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Unlike what happens on x86, on ppc64 you can't mix and match CPU
features to obtain the guest CPU you want regardless of the host
CPU, so the concept of model fallback doesn't apply.
Make sure CPU definitions emitted by the driver, eg. as output of
the cpuBaseline() and cpuUpdate() calls, reflect this fact.
All previously recognized CPU models (POWER7_v2.1, POWER7_v2.3,
POWER7+_v2.1 and POWER8_v1.0) are internally converted to the
corresponding generation name so that existing guests don't stop
working.
Use multiple PVRs per CPU model to reduce the number of models we
need to keep track of.
Remove specific CPU models (eg. POWER7+_v2.1): the corresponding
generic CPU model (eg. POWER7) should be used instead to ensure
the guest can be booted on any compatible host.
Get rid of all the entries that did not match any of the CPU
models supported by QEMU, like power8 and power8e.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1250977
This will allow us to perform PVR matching more broadly, eg. consider
both POWER8 and POWER8E CPUs to be the same even though they have
different PVR values.
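A hedged sketch of masked PVR matching; the exact PVR values and the
0xffff0000 family mask are assumptions for illustration, not what the
driver ships:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative model entry: a CPU model may list several PVRs, each
 * compared under a mask so that minor revisions are ignored. */
typedef struct {
    uint32_t value;
    uint32_t mask;
} PVREntry;

typedef struct {
    const char *name;
    const PVREntry *pvrs;
    size_t npvrs;
} CPUModelSketch;

static bool model_matches(const CPUModelSketch *model, uint32_t hostPvr)
{
    for (size_t i = 0; i < model->npvrs; i++) {
        if ((hostPvr & model->pvrs[i].mask) == model->pvrs[i].value)
            return true;
    }
    return false;
}

int main(void)
{
    /* Assumed family PVRs: POWER8E 0x004b0000, POWER8 0x004d0000. */
    static const PVREntry power8_pvrs[] = {
        { 0x004b0000, 0xffff0000 },
        { 0x004d0000, 0xffff0000 },
    };
    static const CPUModelSketch power8 = { "POWER8", power8_pvrs, 2 };

    printf("POWER8E host: %d\n", model_matches(&power8, 0x004b0201));
    printf("POWER8  host: %d\n", model_matches(&power8, 0x004d0200));
    return 0;
}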
The upcoming commits will make heavy modifications to the ppc64
driver, split so that it's easier to review the changes.
Instead of updating the test cases so that they pass, possibly
only to update them again with the following commit, disable them
for the time being.
Another commit will update them all in one go once all required
changes are in place.
This ensures that comparison of two CPU definitions will be consistent
regardless of whether it is performed using cpuCompare() or
cpuGuestData(). The x86 driver uses the exact same code.
Limitations of the POWER architecture mean that you can't run
eg. a POWER7 guest on a POWER8 host when using KVM. This applies
to all guests, not just those using VIR_CPU_MATCH_STRICT in the
CPU definition; in fact, exact and strict CPU matching are
basically the same on ppc64.
This means, of course, that hosts using different CPUs have to be
considered incompatible as well.
Change ppc64Compute(), called by cpuGuestData(), to reflect this
fact and update test cases accordingly.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1250977
ppc64Compute(), called by cpuNodeData(), is used not only to retrieve
the driver-specific data associated with a guest CPU definition, but
also to check whether said guest CPU is compatible with the host CPU.
If the user is not interested in the CPU data, it's perfectly fine
to pass a NULL pointer instead of a return location, and the
compatibility data returned should not be affected by this. One of
the checks, specifically the one on CPU model name, was however
only performed if the return location was non-NULL.
A test is considered successful if the obtained result matches
the expected result: if that's not the case, whether because a
test that was expected to succeed failed or because a test that
was supposed to fail succeeded, then something's not right and
we want the user to know about this.
On the other hand, if a failure that's unrelated to the bits
we're testing occurs, then the user should be notified even if
the test was expected to fail.
Use different values to tell these two situations apart.
Fix a test case that was wrongly expected to fail as well.
Use briefer checks, eg. (!model) instead of (model == NULL), and
avoid initializing to NULL a pointer that would be assigned in
the first line of the function anyway.
Also remove a pointless NULL assignment.
No functional changes.
Use the ppc64Driver prefix for all functions that are used to
fill in the cpuDriverPPC64 structure, ie. those that are going
to be called by the generic CPU code.
This makes it clear which functions are exported and which are
implementation details; it also gets rid of the ambiguity that
affected the ppc64DataFree() function which, despite what the
name suggested, was not related to ppc64DataCopy() and could
not be used to release the memory allocated for a
virCPUppc64Data* instance.
No functional changes.
This is a public library; it shouldn't include anything that is
internal. Including the library in its current state into an example
application fails at the preprocessor phase.
nwfilter uses iptables and ebtables, which only work properly on
tap-based network connections (*not* on macvtap, for example), but we
just ignore any <filterref> elements for other types of networks,
potentially giving users a false sense of security.
This patch checks the network type and fails/logs an error if any
domain <interface> has a <filterref> when the connection isn't using a
tap device.
This resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=1180011