The problem here is that there are some values that the kernel accepts
but does not set, for example 18446744073709551615, which acts the same
way as zero. Let's do the same thing we do with other tuning options
and re-read them right after they are set in order to keep our internal
structures up-to-date.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1165580
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The problem here is that there are some values that the kernel accepts
but does not set, for example 18446744073709551615, which acts the same
way as zero. Let's do the same thing we do with other tuning options
and re-read them right after they are set in order to keep our internal
structures up-to-date.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
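A minimal generic sketch of the set-then-re-read pattern described in
the two messages above, using plain stdio on a cgroup-style sysfs file
rather than libvirt's actual virCgroup helpers (the function name and
path handling are illustrative):

    #include <stdio.h>

    /* Write a value to a cgroup tunable and read back what the kernel
     * actually stored, since the two may differ. */
    static int
    setAndRereadTunable(const char *path, unsigned long long *value)
    {
        FILE *fp;

        if (!(fp = fopen(path, "w")))
            return -1;
        fprintf(fp, "%llu", *value);
        if (fclose(fp) != 0)
            return -1;

        /* The kernel may silently clamp or ignore the value (e.g.
         * 18446744073709551615 acting like 0), so re-read it to keep
         * internal structures up to date. */
        if (!(fp = fopen(path, "r")))
            return -1;
        if (fscanf(fp, "%llu", value) != 1) {
            fclose(fp);
            return -1;
        }
        fclose(fp);
        return 0;
    }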
This function translates device paths to a "major:minor" string, and
all virCgroupSetBlkioDevice* functions are modified to use it. It's a
cleanup with no functional change.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The function takes a string list and returns the first string in that
list that starts with the @prefix parameter, with the prefix itself
skipped since the caller already knows what it starts with (and for
easier manipulation in the future).
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
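A rough sketch of what such a helper looks like; the name and exact
signature here are illustrative, not necessarily the ones used in the
patch:

    #include <stddef.h>
    #include <string.h>

    /* Return the first string in the NULL-terminated list @strings that
     * starts with @prefix, with the prefix itself skipped. */
    static const char *
    stringListGetFirstWithPrefix(const char **strings, const char *prefix)
    {
        size_t plen = strlen(prefix);
        size_t i;

        for (i = 0; strings && strings[i]; i++) {
            if (strncmp(strings[i], prefix, plen) == 0)
                return strings[i] + plen;
        }
        return NULL;
    }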
https://bugzilla.redhat.com/show_bug.cgi?id=1251886
Since iothread_id == 0 is an invalid value for QEMU, let's point
that out specifically. For the IOThreadDel code, the failure would
have ended up being a failure to find the IOThread ID; however, for
the IOThreadAdd code an IOThread 0 was added, and that isn't good.
It seems that during the many reviews/edits to the code the check for
iothread_id = 0 being invalid was lost - it could have originally
been in the API code, but requested to be moved - I cannot remember.
The comment for the function indicated that iothread_id had to be
a positive non-zero value; however, that wasn't checked - that is,
a value of 0 is/was allowed by the API and was left up to the
hypervisor to reject.
More than likely this nuance was missed during the many "adjustments"
to the API in the review phase.
Allow 0 as an iothread_id and force the hypervisor to handle it.
The qemuDomainPinIOThread API will look up an iothread_id of 0,
not find it, and report that anyway.
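The check being added is roughly of this shape (a fragment sketch using
libvirt's error reporting helpers; the exact error code and wording may
differ from the actual patch):

    if (iothread_id == 0) {
        virReportError(VIR_ERR_INVALID_ARG, "%s",
                       _("invalid value of 0 for iothread_id"));
        return -1;
    }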
Just like in commit 704cf06, if virCgroup*() fails, the error is
already reported. There's no need to overwrite the error with a
generic one, possibly hiding the true root cause of the error.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Fix inconsistency between function description and actual
parameter name in virConfGetValue/virConfSetValue.
Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
Ever since commit e44b0269, 64-bit mingw compilation fails with:
../../src/util/virprocess.c: In function 'virProcessGetPids':
../../src/util/virprocess.c:628:50: error: passing argument 4 of 'virStrToLong_i' from incompatible pointer type [-Werror=incompatible-pointer-types]
if (virStrToLong_i(ent->d_name, NULL, 10, &tmp_pid) < 0)
^
In file included from ../../src/util/virprocess.c:59:0:
../../src/util/virstring.h:53:5: note: expected 'int *' but argument is of type 'pid_t * {aka long long int *}'
int virStrToLong_i(char const *s,
^
cc1: all warnings being treated as errors
Although mingw won't be using this function, it does compile the
file, and the fix is relatively simple.
* src/util/virprocess.c (virProcessGetPids): Don't assume pid_t
fits in int.
Signed-off-by: Eric Blake <eblake@redhat.com>
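One way to avoid the assumption, sketched here with plain strtoll()
rather than the exact virStrToLong_*() variant used in the patch:

    #include <errno.h>
    #include <stdlib.h>
    #include <sys/types.h>

    /* Parse a /proc directory entry name into a pid_t without assuming
     * that pid_t fits in an int (on 64-bit mingw it is a long long). */
    static int
    parsePid(const char *name, pid_t *pid)
    {
        char *end = NULL;
        long long val;

        errno = 0;
        val = strtoll(name, &end, 10);
        if (errno != 0 || end == name || *end != '\0')
            return -1;
        *pid = (pid_t)val;
        return 0;
    }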
It may happen that a user (mistakenly) wants to rename a domain to
itself, which is no rename at all. We should reject that with a
meaningful error message.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If this function fails, the error message is reported only in some
cases (e.g. OOM), but in others it's not (e.g. duplicate key). This
fact is painful and we should either not report an error at all or
report the error in all possible cases. I vote for the latter.
Unfortunately, since the key may be an arbitrary value (not
necessarily a string) we can't include it in the error message.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In 9190f0b0 we've tried to fix an OOM. And boy, was that fix
successful. But back then, the hash table implementation worked
strictly over string keys, which is not the case anymore. Hash
tables have this function keyCopy() which returns void *.
Therefore a local variable that temporarily holds the
intermediate return value from that function should be void *
too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Coverity complained that 'vm' wasn't initialized before jumping to
cleanup: and calling virDomainObjEndAPI if the VIR_STRDUP fails.
So I initialized vm = NULL, moved the VIR_STRDUP closer to its usage,
and used endjob as the goto target. There are lots of other reasons
for failure after that point.
In order to share as much of virsh's logic as possible with the
upcoming virt-admin client we need to split the virsh code into
virsh-specific and client-generic parts.
Since the majority of virsh methods should be generic enough to be
used by other clients, it's much easier to rename virsh-specific data
to virshX than to do it the other way around. This patch moves generic
virsh commands (including the info and opts structures) to the generic
module vsh.c.
Besides renaming methods and structures, this patch also introduces a
client-specific control structure, referenced as private data from the
original control structure, and a new global vsh initializer, which
currently doesn't do much, but there is potential for added
functionality in the future.
Lastly, it introduces client hooks which are especially necessary
during the client connection phase.
Currently supports only renaming inactive domains without snapshots.
Signed-off-by: Tomas Meszaros <exo@tty.sk>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Since commit 8728a56 we have two hash tables for the domain list so
that we can do an O(1) lookup regardless of whether we look up by UUID
or by name. Since renaming a domain does not change its UUID, we just
need to update the entry in the second hash table, where domains are
referenced by their name.
We will call both functions from qemuDomainRename().
Signed-off-by: Tomas Meszaros <exo@tty.sk>
Along with this new API, a new ACL that restricts the rename
capability is introduced too.
Signed-off-by: Tomas Meszaros <exo@tty.sk>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Otherwise the error is just
error: Failed to create domain from test1.xml
error: failed to retrieve file descriptor for interface: Transport endpoint is not connected
since we don't get a sensible error after the fork.
Pinning information for the emulatorpin and vcpupin calls has been
returned from our own data, without querying cgroups, for some time
now. However, not all the data were utilized. When automatic placement
is used the information is not returned for the calls mentioned above.
Since the numad hint in private data is properly saved/restored, we
can safely use it to return the true information.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1162947
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The numad hint stored in priv->autoNodeset is information that gets lost
during daemon restart. And because we would like to use that
information in the future, we also need to save it in the status XML.
For the sake of tests, we need to initialize nnumaCell_max to some
value, so that the restoration doesn't fail in our test suite. There
is no need to fill in the actual NUMA cell data since the
recalculating function virCapabilitiesGetCpusForNodemask() will not
fail; it will just skip filling in the data in the bitmap, which we
don't use in tests anyway.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
When parsing private domain data, there are two paths that are flawed.
They are both error paths, just from different parts of the function.
One of them can call free() on an uninitialized pointer;
initialization to NULL is enough here. The other one is a bit
trickier to explain, but as easy as the first one to fix. We create
capabilities, parse them and then assign them into the private data
pointer inside the domain object. If, however, we fail from that
point on, the error path unrefs the capabilities and then, when the
domain object is being cleaned up, qemuDomainObjPrivateFree() tries
to unref them as well. That causes a segfault. Setting the pointer
to NULL upon successful addition to the private data is enough.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1210587 (completed)
When generating the default drive address for a SCSI <disk> device,
check the generated address to ensure it doesn't conflict with a SCSI
<hostdev> address. The <disk> address generation algorithm uses the
<target> "dev" name to determine on which controller and unit to
place the device. Since a SCSI <hostdev> device doesn't require a
target device name, its placement on the guest SCSI address "could"
conflict. For instance, if a SCSI <hostdev> exists at controller=0
unit=0 and an attempt to hotplug 'sda' into the guest is made, there
would be a conflict if the <hostdev> is already using /dev/sda.
https://bugzilla.redhat.com/show_bug.cgi?id=1210587 (partial)
If a SCSI subsystem <hostdev> element address is provided, we need to
make sure the address provided doesn't conflict with an existing or
libvirt-generated address for a SCSI <disk> element. We can handle
this condition in device post processing since we're not generating an
address based on some target name - rather it's either generated based
on available space or provided by the user. If the user provides one
that conflicts, then we need to disallow the change.
This will fix the issue where the domain XML provided an <address> for
the <hostdev>, but not for the <disk> element, and the address
provided ends up being the same address used for the <disk>. A <disk>
address is generated from its assigned <target> 'dev' name prior to
the check/validation of the <hostdev> address value.
Hot-unplugging a disk from a guest that supports hot-unplugging generates an error
in the libvirt log when running QEMU with the "-msg timestamp=on" flag.
2015-08-06 10:48:59.945+0000: 11662: error : qemuMonitorTextDriveDel:2594 :
operation failed: deleting drive-virtio-disk4 drive failed:
2015-08-06T10:48:59.945058Z Device 'drive-virtio-disk4' not found
This error is caused by the HMP results being prefixed with a timestamp,
which makes parsing the output with STRPREFIX unreliable.
Using strstr ensures that parsing works whether or not the results are prefixed.
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Frank Schreuder <fschreuder@transip.nl>
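To illustrate the difference (the matched substring and function name
here are simplified examples, not the exact monitor code):

    #include <string.h>

    /* libvirt's STRPREFIX only matches at the very start of the reply */
    #define STRPREFIX(a, b) (strncmp((a), (b), strlen((b))) == 0)

    static int
    replyReportsMissingDevice(const char *reply)
    {
        /* With "-msg timestamp=on" the reply looks like
         *   "2015-08-06T10:48:59.945058Z Device '...' not found"
         * so STRPREFIX(reply, "Device ") fails, while strstr() still
         * finds the message regardless of any timestamp prefix. */
        return strstr(reply, "Device ") != NULL;
    }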
This reverts commit ede34470fd, which
was apparently written based on testing performed before commits
1e15be1 and 9a12b6 were pushed upstream. Once those two patches are in
place, commit ede34470 is redundant, and can even cause
incorrect/unexpected behavior when auto-assigning addresses for
virtio-net devices.
There's a check right at the beginning of the function that
shortcuts if the function was called with all NULL arguments.
However, this was meant just as a fool-proof check so that we
don't crash if the function is used in a bad manner. Anyway, it makes
Coverity unhappy as it then thinks any of the arguments could be
NULL. Well, with the current state of the code they can't.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
It may happen that an interface doesn't have any bandwidth set and
a new one is to be set. In that case, @ifaceBand will be NULL.
This will cause trouble later in the code when deciding what to
do.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If you pass <disk><serial> XML to UpdateDevice, and the original device
didn't have a <serial> block, libvirtd crashes trying to read the original
NULL serial string.
Use _NULLABLE string comparisons to avoid the crash. A couple other
properties needed the change too.
Commit e8d5517 updated the domain post-parse to automatically add
pcie-root et al for certain ARM "virt" machinetypes, but didn't update
the function qemuDomainSupportsPCI() which is called later on when we
are auto-assigning PCI addresses and default settings for the PCI
controller <model> and <target> attributes. The result was that PCI
addresses weren't assigned, and the controllers didn't have their
attribute default values set, leading to an error when the domain was
started, e.g.:
internal error: autogenerated dmi-to-pci-bridge options not set
This patch adds the same check made in the earlier patch to
qemuDomainSupportsPCI(), so that PCI address auto-assignment and
target/model default values will be set.
When running the test suite using "unshare -n" we might have IPv6 but
no configured addresses. Due to AI_ADDRCONFIG, getaddrinfo then fails
with EAI_NONAME, which we should treat as IPv6 being unavailable.
This fixes the crash described here:
https://www.redhat.com/archives/libvir-list/2015-August/msg00162.html
In short, we were calling ioctl(SIOCETHTOOL) pointing to a too-short
object that was a local variable on the stack, resulting in the memory
past the end of the object being overwritten. This was because the
struct used by the ETHTOOL_GFEATURES command of SIOCETHTOOL ends with
a 0-length array, but we were telling ethtool that it could use 2
elements of that array.
The fix is to allocate the necessary memory with VIR_ALLOC_VAR(),
including the extra length needed for a 2 element array at the end.
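The general shape of the fix, shown here with plain calloc() instead of
libvirt's VIR_ALLOC_VAR() macro:

    #include <stdlib.h>
    #include <linux/ethtool.h>

    /* struct ethtool_gfeatures ends in a zero-length array, so room for
     * the requested number of feature blocks must be allocated
     * explicitly rather than taken from the stack. */
    static struct ethtool_gfeatures *
    allocGFeatures(unsigned int nblocks)
    {
        struct ethtool_gfeatures *cmd;

        cmd = calloc(1, sizeof(*cmd) +
                        nblocks * sizeof(cmd->features[0]));
        if (!cmd)
            return NULL;
        cmd->cmd = ETHTOOL_GFEATURES;
        cmd->size = nblocks;  /* the kernel may now fill nblocks entries */
        return cmd;
    }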
Well, there are just two places that need adjustment:
qemuDomainGetInterfaceParameters - to report the @floor
qemuDomainSetInterfaceParameters - now that the function has been
fixed, we can allow updating @floor too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
As sketched in previous commits, imagine the following scenario:
virsh # domiftune gentoo vnet0
inbound.average: 100
inbound.peak : 0
inbound.burst : 0
outbound.average: 100
outbound.peak : 0
outbound.burst : 0
virsh # domiftune gentoo vnet0 --inbound 0
virsh # shutdown gentoo
Domain gentoo is being shutdown
virsh # list --all
error: Failed to list domains
error: Cannot recv data: Connection reset by peer
Program received signal SIGSEGV, Segmentation fault.
0x00007fffe80ea221 in networkUnplugBandwidth (net=0x7fff9400c1a0, iface=0x7fff940ea3e0) at network/bridge_driver.c:4881
4881 net->floor_sum -= ifaceBand->in->floor;
This is rather unfortunate. We should not SIGSEGV here. The
problem is that while in the second step the inbound QoS was
cleared out, the network part of it was not updated (moreover, we
don't report that vnet0 had inbound.floor set). The internal
structure therefore still had some fragments left (e.g.
class_id). So when qemuProcessStop() started to clean up the
environment it got to networkUnplugBandwidth(). Here, class_id is
set, therefore the function assumes that there is an inbound QoS.
This actually is a fair assumption to make; there's no need for a
special QoS box in the network's QoS when there's no QoS to set.
Anyway, the problem is not in networkUnplugBandwidth() but rather
in qemuDomainSetInterfaceParameters(), which completely forgot
about the QoS being dispersed (some parts are set directly on the
interface itself, some on the bridge the interface is plugged
into).
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So, if a domain vNIC's bandwidth has been successfully set, it's
possible that because @floor is set on the network's bridge, this
part may need updating too. And that's exactly what this function
does. While the previous commit introduced a function to check
whether @floor can be satisfied, this one does all the hard work. In
general, there may be three, well four, possibilities:
1) No change in the @floor value (either it remains unset, or its
value hasn't changed)
2) The @floor value has changed from a non-zero to a non-zero
value
3) New @floor is to be set
4) Old @floor must be cleared out
The difference between 2), 3) and 4) is that while in 2) the QoS
tree on the network's bridge already has a special class for the
vNIC, in 3) the class must be created from scratch, and in 4) it must
be removed. Fortunately, we have helpers for all three
interesting cases.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When a domain vNIC's bandwidth is to be changed (at runtime) it is
possible that the guaranteed minimal bandwidth (@floor) will change
too. Well, so far it is, because we still don't have an implementation
that allows setting it dynamically, so it's effectively erased on:
# virsh domiftune $dom vnet0 --inbound 0
However, that's slightly unfortunate. We do some checks on domain
startup to see if @floor can be guaranteed. We ought to do the same if
the QoS is changed at runtime.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This is no functional change. It's just that later in the series we
will need to pass class_id as an integer.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
There is no guarantee that an enum starts with an item mapped onto the
value zero. However, we are guaranteed that enum items are
consecutive integers. Moreover, it's a pity to define an enum to
avoid using magic constants but then use them anyway.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
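A small illustration of the point, with a hypothetical enum rather than
the one touched by the patch:

    enum {
        VIR_EXAMPLE_FIRST = 3,  /* an enum need not start at 0... */
        VIR_EXAMPLE_SECOND,     /* ...but items are consecutive: 4 */
        VIR_EXAMPLE_THIRD,      /* 5 */
    };

    /* Index into an array by offsetting from the first item instead of
     * assuming the raw value (or a magic constant) is the index. */
    static int
    exampleToIndex(int item)
    {
        return item - VIR_EXAMPLE_FIRST;
    }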
Commit a6f9af8292 added checking for address collisions between
starting and ending addresses of forwarding addresses, but forgot that
there might be no addresses set at all.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Unlike what happens on x86, on ppc64 you can't mix and match CPU
features to obtain the guest CPU you want regardless of the host
CPU, so the concept of model fallback doesn't apply.
Make sure CPU definitions emitted by the driver, eg. as output of
the cpuBaseline() and cpuUpdate() calls, reflect this fact.
All previously recognized CPU models (POWER7_v2.1, POWER7_v2.3,
POWER7+_v2.1 and POWER8_v1.0) are internally converted to the
corresponding generation name so that existing guests don't stop
working.
Use multiple PVRs per CPU model to reduce the number of models we
need to keep track of.
Remove specific CPU models (eg. POWER7+_v2.1): the corresponding
generic CPU model (eg. POWER7) should be used instead to ensure
the guest can be booted on any compatible host.
Get rid of all the entries that did not match any of the CPU
models supported by QEMU, like power8 and power8e.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1250977
This will allow us to perform PVR matching more broadly, eg. consider
both POWER8 and POWER8E CPUs to be the same even though they have
different PVR values.
This ensures comparison of two CPU definitions will be consistent
regardless of whether it is performed using cpuCompare() or
cpuGuestData(). The x86 driver uses the exact same code.
Limitations of the POWER architecture mean that you can't run
eg. a POWER7 guest on a POWER8 host when using KVM. This applies
to all guests, not just those using VIR_CPU_MATCH_STRICT in the
CPU definition; in fact, exact and strict CPU matching are
basically the same on ppc64.
This means, of course, that hosts using different CPUs have to be
considered incompatible as well.
Change ppc64Compute(), called by cpuGuestData(), to reflect this
fact and update test cases accordingly.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1250977
ppc64Compute(), called by cpuNodeData(), is used not only to retrieve
the driver-specific data associated with a guest CPU definition, but
also to check whether said guest CPU is compatible with the host CPU.
If the user is not interested in the CPU data, it's perfectly fine
to pass a NULL pointer instead of a return location, and the
compatibility data returned should not be affected by this. One of
the checks, specifically the one on CPU model name, was however
only performed if the return location was non-NULL.
Use briefer checks, eg. (!model) instead of (model == NULL), and
avoid initializing to NULL a pointer that would be assigned in
the first line of the function anyway.
Also remove a pointless NULL assignment.
No functional changes.
Use the ppc64Driver prefix for all functions that are used to
fill in the cpuDriverPPC64 structure, ie. those that are going
to be called by the generic CPU code.
This makes it clear which functions are exported and which are
implementation details; it also gets rid of the ambiguity that
affected the ppc64DataFree() function which, despite what the
name suggested, was not related to ppc64DataCopy() and could
not be used to release the memory allocated for a
virCPUppc64Data* instance.
No functional changes.
nwfilter uses iptables and ebtables, which only work properly on
tap-based network connections (*not* on macvtap, for example), but we
just ignore any <filterref> elements for other types of networks,
potentially giving users a false sense of security.
This patch checks the network type and fails/logs an error if any
domain <interface> has a <filterref> when the connection isn't using a
tap device.
This resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=1180011
This patch modifies virSocketAddrGetRange() to function properly when
the containing network/prefix of the address range isn't known, for
example in the case of the NAT range of a virtual network (since it is
a range of addresses on the *host*, not within the network itself). We
then take advantage of this new functionality to validate the NAT
range of a virtual network.
Extra test cases are also added to verify that virSocketAddrGetRange()
works properly in both positive and negative cases when the network
pointer is NULL.
This is the *real* fix for:
https://bugzilla.redhat.com/show_bug.cgi?id=985653
Commits 1e334a and 48e8b9 had earlier been pushed as fixes for that
bug, but I had neglected to read the report carefully, so instead of
fixing validation for the NAT range, I had fixed validation for the
DHCP range. sigh.
https://bugzilla.redhat.com/show_bug.cgi?id=1022292
The following XML really does not make any sense:
<inbound average="-1" burst="-2" peak="-3" floor="-4"/>
There can't be a negative packet rate. Well, so far we haven't
assigned any meaning to negative values, so reject them before users
harm themselves, because otherwise we turn the negative numbers into
really big values.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
There's no need to set mon->fd to a dummy value since it's initialized
to the proper value just a few lines below.
Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
Since its introduction in 2011 (particularly in commit f4324e3292),
the option doesn't work. It just effectively disables all incoming
connections. That's because the client private data that contain the
'keepalive_supported' boolean are initialized to zeroes, so the bool
is false and the only other place where the bool is used is when
checking whether the client supports keepalive. Thus, according to
the server, no client supports keepalive.
Removing this instead of fixing it is better because a) apparently
nobody has ever tried it since 2011 (one month short of 4 years) and
b) we cannot know whether the client supports keepalive until we get
a ping or pong keepalive packet, and that won't happen until after
we have dispatched the ConnectOpen call.
Another two reasons are c) keepalive_required was tracked at the
server level, but keepalive_supported lived in the client's private
data (as did the check made in the remote layer), thus making all
other instances of virNetServer miss this feature unless they all
implemented it for themselves, and d) we can always add it back in
case there is a request and a use-case for it.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
So the API takes a @dumpformat argument. This is what makes it special
compared to virDomainCoreDump. The argument is there so that users can
choose the format of the resulting core dump file. And to ease the
choosing process we even have an enum with the supported values across
all the hypervisors. But we don't mention the enum in the function
description anywhere. Fix it!
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
By specifying parentIndex in a call to virNetworkUpdate(), it was
possible to direct libvirt to add a dhcp range or static host of a
non-matching address family to the <dhcp> element of an <ip>. For
example, given:
<ip address='192.168.122.1' netmask='255.255.255.0'/>
<ip family='ipv6' address='2001:db6:ca3:45::1' prefix='64'/>
you could provide a static host entry with an IPv4 address, and
specify that it be added to the 2nd <ip> element (index 1):
virsh net-update default add ip-dhcp-host --parent-index 1 \
'<host mac="52:54:00:00:00:01" ip="192.168.122.45"/>'
This would be happily added with no error (and no concern of any
possible future consequences).
This patch checks that any dhcp range or host element being added to a
network ip's <dhcp> subelement has addresses of the same family as the
ip element they are being added to.
This resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=1184736
This controller can be connected only to a port on a
pcie-switch-upstream-port. It provides a single hotpluggable port that
will accept any PCI or PCIe device, as well as any device requiring a
pcie-*-port (the only current example of such a device is the
pcie-switch-upstream-port).
The downstream ports of an x3130-upstream switch can each have one of
these plugged into them (and that is the only place they can be
connected). Each xio3130-downstream provides a single PCIe port that
can have PCI or PCIe devices hotplugged into it. Apparently an entire
set of x3130-upstream + several xio3130-downstreams can be hotplugged
as a unit, but it's not clear to me yet how that would be done, since
qemu only allows attaching a single device at a time.
This device will be used to implement the
"pcie-switch-downstream-port" model of pci controller.
This controller can be connected only to a pcie-root-port or a
pcie-switch-downstream-port (which will be added in a later patch),
which is the reason for the new connect type
VIR_PCI_CONNECT_TYPE_PCIE_PORT. A pcie-switch-upstream-port provides
32 ports (slot=0 to slot=31) on the downstream side, which can only
have pci controllers of model "pcie-switch-downstream-port" plugged
into them, which is the reason for the other new connect type
VIR_PCI_CONNECT_TYPE_PCIE_SWITCH.
This is the upstream part of a PCIe switch. It connects to a PCIe port
(but not PCI) on the upstream side, and can have up to 31
xio3130-downstream controllers (but no other types of devices)
connected to its downstream side.
This device will be used to implement the "pcie-switch-upstream-port"
model of pci controller.
This is backed by the qemu device ioh3420.
chassis and port from the <target> subelement are used to store/set the
respective qemu device options for the ioh3420. Currently, chassis is
set to be the index of the controller, and port is set to
"(slot << 3) + function" (per suggestion from Alex Williamson).
This controller can be connected (at domain startup time only - not
hotpluggable) only to a port on the pcie root complex ("pcie-root" in
libvirt config), hence the new connect type
VIR_PCI_CONNECT_TYPE_PCIE_ROOT. It provides a hotpluggable port that
will accept any PCI or PCIe device.
New attributes must be added to the controller <target> subelement for
this - chassis and port are guest-visible option values that will be
set by libvirt with values derived from the controller's index and pci
address information.
This is a PCIE "root port". It connects only to a port of the
integrated pcie.0 bus of a Q35 machine (can't be hotplugged), and
provides a single PCIe port that can have PCI or PCIe devices
hotplugged into it.
This device will be used to implement the "pcie-root-port" model of
pci controller.
This uses the new subelement/attribute in two ways:
1) If a "pci-bridge" pci controller has no chassisNr attribute, it
will automatically be set to the controller's index as soon as the
controller's PCI address is known (during
qemuDomainAssignPCIAddresses()).
2) When creating the commandline for a pci-bridge device, chassisNr
will be used to set qemu's chassis_nr option (rather than the previous
practice of hard-coding it to the controller's index).
There are some configuration options to some types of pci controllers
that are currently automatically derived from other parts of the
controller's configuration. For example, in qemu a pci-bridge
controller has an option that is called "chassis_nr"; up until now
libvirt has always set chassis_nr to the index of the pci-bridge. So
this:
<controller type='pci' model='pci-bridge' index='2'/>
will always result in:
-device pci-bridge,chassis_nr=2,...
on the qemu commandline. In the future we may decide there is a better
way to derive that option, but even in that case we will need for
existing domains to retain the same chassis_nr they were using in the
past - that is something that is visible to the guest so it is part of
the guest ABI and changing it would lead to problems for migrating
guests (or just guests with very picky OSes).
The <target> subelement has been added as a place to put the new
"chassisNr" attribute that will be filled in by libvirt when it
auto-generates the chassisNr; it will be saved in the config, then
reused any time the domain is started:
<controller type='pci' model='pci-bridge' index='2'>
<model type='pci-bridge'/>
<target chassisNr='2'/>
</controller>
The one oddity of all this is that if the controller configuration
is changed (for example to change the index or the pci address
where the controller is plugged in), the items in <target> will
*not* be re-generated, which might lead to conflict. I can't
really see any way around this, but fortunately if there is a
material conflict qemu will let us know and we will pass that on
to the user.
This patch provides qemu support for the contents of <model> in
<controller> for the two existing PCI controller types that need it
(i.e. the two controller types that are backed by a device that must
be specified on the qemu commandline):
1) pci-bridge - sets <model> name attribute default as "pci-bridge"
2) dmi-to-pci-bridge - sets <model> name attribute default as
"i82801b11-bridge".
These both match current hardcoded practice.
The defaults are set at the end of qemuDomainAssignPCIAddresses().
This can't be done earlier because some of the options that will be
autogenerated need full PCI address info for the controller, and
because qemuDomainAssignPCIAddresses() might create extra controllers
which would need default settings added, and that hasn't yet been done
at the time the PostParse callbacks are being run.
qemuDomainAssignPCIAddresses() is still called prior to the XML being
written to disk, though, so the autogenerated defaults are persistent.
qemu capabilities bits aren't checked when the domain is defined, but
rather when the commandline is actually created (so the domain can
possibly be defined on a host that doesn't yet have support for the
given device, or a host different from the one where it will
eventually be run). When the commandline is being generated we compare
the modelName to known qemu device names implementing the given type
of controller, and check the capabilities bit for that device.
This new subelement is used in PCI controllers: the toplevel
*attribute* "model" of a controller denotes what kind of PCI
controller is being described, e.g. a "dmi-to-pci-bridge",
"pci-bridge", or "pci-root". But in the future there will be different
implementations of some of those types of PCI controllers, which
behave similarly from libvirt's point of view (and so should have the
same model), but use a different device in qemu (and present
themselves as a different piece of hardware in the guest). In an ideal
world we (i.e. "I") would have thought of that back when the pci
controllers were added, and used some sort of type/class/model
notation (where class was used in the way we are now using model, and
model was used for the actual manufacturer's model number of a
particular family of PCI controller), but that opportunity is long
past, so as an alternative, this patch allows selecting a particular
implementation of a pci controller with the "name" attribute of the
<model> subelement, e.g.:
<controller type='pci' model='dmi-to-pci-bridge' index='1'>
<model name='i82801b11-bridge'/>
</controller>
In this case, "dmi-to-pci-bridge" is the kind of controller (one that
has a single PCIe port upstream, and 32 standard PCI ports downstream,
which are not hotpluggable), and the qemu device to be used to
implement this kind of controller is named "i82801b11-bridge".
Implementing the above now will allow us in the future to add a new
kind of dmi-to-pci-bridge that doesn't use qemu's i82801b11-bridge
device, but instead uses something else (which doesn't yet exist, but
qemu people have been discussing it), all without breaking existing
configs.
(note that for the existing "pci-bridge" type of PCI controller, both
the model attribute and <model> name are 'pci-bridge'. This is just a
coincidence, since it turns out that in this case the device name in
qemu really is a generic 'pci-bridge' rather than being the name of
some real-world chip)
If a pci address had a function number out of range, the error message
would be:
Insufficient specification for PCI address
which is logged by virDevicePCIAddressParseXML() after
virDevicePCIAddressIsValid returns a failure.
This patch enhances virDevicePCIAddressIsValid() to optionally report
the error itself (since it is the place that decides which part of the
address is "invalid"), and uses that feature when calling from
virDevicePCIAddressParseXML(), so that the error will be more useful,
e.g.:
Invalid PCI address function=0x8, must be <= 7
Previously, virDevicePCIAddressIsValid didn't check for the
theoretical limits of domain or bus, only for slot or function. While
adding log messages, we also correct that omission. (The RNG for PCI
addresses already enforces this limit, which by the way means that we
can't add any negative tests for this - as far as I know our
domainschematest has no provisions for passing XML that is supposed to
fail.)
Note that virDevicePCIAddressIsValid() can only check against the
absolute maximum attribute values for *any* possible PCI controller,
not for the actual maximums of the specific controller that this
device is attaching to; fortunately there is later more specific
validation for guest-side PCI addresses when building the set of
assigned PCI addresses. For host-side PCI addresses (e.g. for
<hostdev> and for network device pools), we rely on the error that
will be logged when it is found that the device doesn't actually
exist.
This resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=1004596
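A simplified sketch of the optional-reporting idea (the real function
checks all four address parts and reports through virReportError; the
limits below are the absolute PCI maximums mentioned above):

    #include <stdbool.h>
    #include <stdio.h>

    static bool
    pciAddressIsValid(unsigned int domain, unsigned int bus,
                      unsigned int slot, unsigned int function,
                      bool report)
    {
        if (function > 7) {
            if (report)
                fprintf(stderr,
                        "Invalid PCI address function=0x%x, must be <= 7\n",
                        function);
            return false;
        }
        if (slot > 0x1F || bus > 0xFF || domain > 0xFFFF) {
            if (report)
                fprintf(stderr, "Invalid PCI address %04x:%02x:%02x.%x\n",
                        domain, bus, slot, function);
            return false;
        }
        return true;
    }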
https://bugzilla.redhat.com/show_bug.cgi?id=1176020
Some users think this is a good idea:
<vcpu placement='static'>4</vcpu>
<cpu mode='host-model'>
<model fallback='allow'/>
<numa>
<cell id='0' cpus='0-1' memory='1048576' unit='KiB'/>
<cell id='1' cpus='9-10' memory='2097152' unit='KiB'/>
</numa>
</cpu>
It's not. Let's therefore introduce a check and discourage them from
doing so.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This function should return the greatest CPU number set in
/domain/cpu/numa/cell/@cpus. The idea is that we should compare the
returned value against the /domain/vcpu value. Yes, there exist
users who think the following is a good idea:
<vcpu placement='static'>4</vcpu>
<cpu mode='host-model'>
<model fallback='allow'/>
<numa>
<cell id='0' cpus='0-1' memory='1048576' unit='KiB'/>
<cell id='1' cpus='9-10' memory='2097152' unit='KiB'/>
</numa>
</cpu>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Qemu reports physical size 0 for block devices. Since 15fa84acbb
changed the behavior of qemuDomainGetBlockInfo to just query the
monitor, this created a regression, as we no longer reported the size
correctly.
This patch adds code to refresh the physical size of a block device by
opening it and seeking to the end, and uses it both in
qemuDomainGetBlockInfo and in qemuDomainGetStatsOneBlock, which had
been broken in this respect since it was introduced.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1250982
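The refresh amounts to something like this sketch (error handling
trimmed; the actual helper lives in the qemu driver and feeds both
APIs mentioned above):

    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* For block devices qemu reports physical size 0, so determine the
     * real size by opening the device and seeking to its end. */
    static int
    refreshPhysicalSize(const char *path, unsigned long long *physical)
    {
        int fd;
        off_t end;

        if ((fd = open(path, O_RDONLY)) < 0)
            return -1;
        end = lseek(fd, 0, SEEK_END);
        close(fd);
        if (end == (off_t)-1)
            return -1;
        *physical = end;
        return 0;
    }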
Commit 7e72de4 didn't consider hotplug scenarios. This patch addresses
the hotplug case whereby, if at least one of the PCI functions is
owned by a guest, the hotplug of other functions/devices in the same
IOMMU group to the same guest goes through successfully.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
The virtio-net-pci adapter is capable of using irqfd with vhost-net
only in MSI-X mode, which appears to be available only on the PCIe
bus, at least on ARM.
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
The legacy -net option works correctly only with embedded device
models, which do not require any bus specification. Therefore, we
should use -device for PCI hardware.
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Here we assume that if qemu supports the generic PCI host controller,
it is part of the virt machine and can be used for adding PCI devices.
In qemu this is actually a PCIe bus, so we also declare the multibus
capability so that the 0th bus is specified to qemu correctly as
'pcie.0'.
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This capability specifies that qemu can implement a generic PCI host
controller. It is often used for virtual environments, including ARM.
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Libvirt doesn't reliably know the location of the backing chain when
pre-creating images for non-shared migration. This isn't a problem for
full copy, but incremental copy requires the information.
Forbid pre-creating the image in cases where incremental migration is
required. This limitation can perhaps be lifted once libvirt fully
supports loading of backing chain information from the XML.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1249587
Only the symbols exported by the driver have been updated;
the driver implementation itself still uses the old names
internally.
No functional changes.
The driver only supports VIR_ARCH_PPC64 and VIR_ARCH_PPC64LE.
Just shuffling files around and updating the build system
accordingly. No functional changes.
The recent changes to perform SCSI device address checks during the
post parse callbacks ran afoul of the Coverity checker, since the
changes assumed that the 'xmlopt' parameter to
virDomainDeviceDefPostParse would be non-NULL (commit id 'ca2cf74e87');
however, what was missed is that there was an "if (xmlopt &&" check
being made, so Coverity believed a NULL 'xmlopt' could be possible.
Checking the various calling paths seemingly disproves that. If called
from virDomainDeviceDefParse, there were two other possible calls that
would end up dereferencing it, so that path could not pass NULL. If
called via virDomainDefPostParseDeviceIterator via
virDomainDefPostParse, there are two callers (virDomainDefParseXML and
qemuParseCommandLine) which deref xmlopt either directly or through
another call.
So I'm removing the check for non-NULL xmlopt.
Rather than providing a somewhat generic error message when the API
returns false, allow the caller to supply a "report = true" option in
order to cause virReportError calls that describe which of the 3 paths
caused the failure.
Some callers don't care about what caused the failure, they just want
a true/false - for those, calling with report = false is sufficient.
PowerPC pseries based VMs do not support a floppy disk controller.
This prohibits libvirt from creating a qemu command line with a
floppy device.
Signed-off-by: Kothapally Madhu Pavan <kmp@linux.vnet.ibm.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1180486
Signed-off-by: Ján Tomko <jtomko@redhat.com>
PowerPC pseries based VMs do not support a floppy disk controller.
This prohibits libvirt from adding a floppy disk to a PowerPC pseries
VM.
Signed-off-by: Kothapally Madhu Pavan <kmp@linux.vnet.ibm.com>
Rather than calling virDomainDiskDefAssignAddress during the parsing
of the XML, move the setting of disk addresses into the domain/device
post processing.
Commit id '37588b25', which introduced VIR_DOMAIN_DEF_PARSE_DISK_SOURCE
in order to avoid generating an address which wasn't required, will
not be affected by this, as all it cared about was processing the
source XML.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than calling virDomainHostdevAssignAddress during the parsing
of the XML, move the setting of a default hostdev address to domain/
device post processing.
Since the parse code no longer generates an address, we can remove
the virDomainDefMaybeAddHostdevSCSIcontroller since the call to
virDomainHostdevAssignAddress will attempt to add the controllers
that were not already defined in the XML.
This patch will also enforce that the address type is type 'drive'
when a SCSI subsystem <hostdev> element is provided with an <address>.
Signed-off-by: John Ferlan <jferlan@redhat.com>
If virDomainControllerSCSINextUnit fails to find a slot on the current
VIR_DOMAIN_CONTROLLER_TYPE_SCSI controller(s), try to add a new
controller; otherwise, there may be multiple unit=0 entries for the
same "next" controller.
Signed-off-by: John Ferlan <jferlan@redhat.com>
While searching the hostdevs, the drive type can be either *_TYPE_DRIVE
or *_TYPE_NONE. If the type is _TYPE_NONE on the first scsi_host, then
there is an erroneous "match" claiming that the address already exists.
Although this currently works by chance because hostdevs are added one
at a time and 'nhostdevs' would be zero, thus returning false for the
first hostdev added, a future patch will move the hostdev address
assignment into post processing, resulting in the bad match.
This code is only called by paths expecting either drive or none.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Add the xmlopt parameter that was saved during virDomainDefPostParse
to the parameters. A future patch will use it.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Modify virDomainDriveAddressIsUsedBy{Disk|Hostdev} and
virDomainSCSIDriveAddressIsUsed to take 'bus' and 'target'
parameters. These will be used by future patches for more complete
address conflict checks.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Since the only way virDomainHostdevAssignAddress can be called is from
within virDomainHostdevDefParseXML when hostdev->source.subsys.type is
VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_SCSI, there's no need for the redundant
check.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Compiler error:
../../src/nodeinfo.c: In function 'nodeGetThreadsPerSubcore':
../../src/nodeinfo.c:2393: error: label 'out' defined but not used [-Wunused-label]
../../src/nodeinfo.c:2352: error: unused parameter 'arch' [-Wunused-parameter]
The virDomainObjListRemove() function unlocks the domain that it's
given, due to legacy code. And because of that code, which should be
refactored, that last virObjectUnlock() cannot simply be removed. So
instead, lock it right back for qemu for now. All calls to
qemuDomainRemoveInactive() are followed by code that unlocks the domain
again, plus the domain should be locked during qemuDomainObjEndJob(),
so the right place to lock it is right after virDomainObjListRemove().
The only place where this would cause a problem is the autodestroy
callback, so we need to get another reference there and unref+unlock it
afterwards. Luckily, returning NULL from that function doesn't mean an
error; it only means that it doesn't need to be unlocked anymore.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The nodeinfo is reporting an incorrect number of cpus and an incorrect
host topology on PPC64 KVM hosts. The KVM hypervisor on PPC64 needs only
the primary thread in a core to be online, with the secondaries offlined.
While scheduling a guest in, the kvm scheduler wakes up the secondaries to
run in guest context.
The host scheduling of the guests happens at the core level (as only the
primary thread is online). The kvm scheduler exploits as many threads of
the core as needed by the guest. Further, starting with POWER8, the
processor allows splitting a physical core into multiple subcores with 2
or 4 threads each. Again, only the primary thread in a subcore is online
in the host. The KVM-PPC scheduler allows guests to exploit all the
offline threads in the subcore, by bringing them online when needed.
(Kernel patches on split-core: http://www.spinics.net/lists/kvm-ppc/msg09121.html)
The recent dynamic micro-threading changes in ppc-kvm make sure all the
offline cpus are utilized across guests, even across guests with
different cpu topologies.
(https://www.mail-archive.com/kvm@vger.kernel.org/msg115978.html)
Since the offline cpus are brought online in the guest context, it is safe
to count them as online. Nodeinfo today discounts these offline cpus from
the cpu count/topology calculation, so the nodeinfo output is not of any
help and the host appears overcommitted when it is actually not.
The patch carefully counts those offline threads whose primary threads are
online. The host topology displayed by nodeinfo is also fixed when the
host is in a valid kvm state.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Use an I/O vector (iovec) instead of one huge memory buffer, as
suggested in https://bugzilla.redhat.com/show_bug.cgi?id=1026137#c7.
This avoids doing memmove() on big buffers and performance doesn't
degrade if the source (virNetClientStreamQueuePacket()) is faster than
the sink (virNetClientStreamRecvPacket()).
Resolves: http://bugzilla.redhat.com/1026137
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
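The idea, sketched with a plain array of struct iovec (libvirt's actual
queue management and locking differ):

    #include <stdlib.h>
    #include <string.h>
    #include <sys/uio.h>

    /* Queue an incoming packet as its own iovec entry instead of
     * memmove()-ing everything into one growing buffer. */
    static int
    streamQueuePacket(struct iovec **iov, size_t *niov,
                      const void *data, size_t len)
    {
        struct iovec *tmp;
        void *copy;

        if (!(copy = malloc(len)))
            return -1;
        memcpy(copy, data, len);

        if (!(tmp = realloc(*iov, (*niov + 1) * sizeof(**iov)))) {
            free(copy);
            return -1;
        }
        *iov = tmp;
        (*iov)[*niov].iov_base = copy;
        (*iov)[*niov].iov_len = len;
        (*niov)++;
        return 0;
    }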
If cpuset is disabled or not available, libvirt must not use it. This
mainly matters for actions that do not need it and can use
sched_setaffinity() or numa_membind() instead, because they would
otherwise fail without good reason.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1244664
Signed-off-by: Luyao Huang <lhuang@redhat.com>
When stopping a domain on the destination host after a failed
migration, we need to avoid resetting security labels since the domain
is still running on the source host. While we were correctly doing so
in some cases, there were still some paths which did this wrong.
https://bugzilla.redhat.com/show_bug.cgi?id=1242904
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
In addition to checking the current asynchronous job,
qemuMigrationJobIsActive reports an error if the current job does not
match the one we asked for. Let's just check the job directly, since
we are not interested in the error in qemuProcessHandleMonitorEOF.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If the destination libvirt doesn't support memory hotplug, it would
attempt to start qemu with an invalid configuration, since all the
memory hotplug support was introduced by adding new elements. The
worse part is that qemu might hang in such a situation.
Fix this by sending a required migration feature called
'memory-hotplug' to the destination. If the destination doesn't
recognize it, it will fail the migration.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1248350
So far qemu-nbd is run even if the nbd kernel module isn't loaded.
This leads to errors when the user starts an lxc container, even
though libvirt could easily load the nbd module automatically.
Our atomic increment (virAtomicIntInc) uses (if available) the gcc
__sync_add_and_fetch builtin. In the qemu driver, though, we'd profit
more from the __sync_fetch_and_add builtin. To keep things simple,
this patch adjusts the qemu driver initialization rather than adding a
new atomic increment macro.
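For reference, a minimal illustration of the semantic difference
between the two gcc builtins (not libvirt's macro itself):

    /* Both builtins add atomically; they differ in the returned value:
     * __sync_fetch_and_add() returns the counter *before* the addition,
     * __sync_add_and_fetch() returns it *after* the addition. */
    static int
    nextId(int *counter)
    {
        return __sync_fetch_and_add(counter, 1);
    }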
virDomainDeleteConfig is meant to delete the persistent config and thus
it resets vm->autostart. Copy parts of qemuProcessRemoveDomainStatus to
a new helper to avoid using the incorrect function.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1230071
The remoteDomainOpenGraphicsFD method was using the wrong RPC
arg struct remote_domain_open_graphics_args instead of
remote_domain_open_graphics_fd_args. Fortunately both structs
had identical contents so there was no functional bug, but to
avoid confusing future maintainers, we should fix it.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The first hunk changes the use of srcdir to top_srcdir so it complies
with other rules in the Makefile. The second one removes the need for
remote_protocol.h in admin_protocol.h, as was suggested and worked in,
but this one line was apparently missed. The last one just removes the
'remote' naming from the admin protocol specification, just so it's
cleaner.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Commit d506a51aeb meant to check for QEMU_CAPS_DRIVE_IOTUNE_MAX, but
checked for QEMU_CAPS_DRIVE_IOTUNE instead. That's clearly visible
from the diff, but it got in anyway. Because of that, we were
supplying information unknown to QEMU if it wasn't new enough, and we
couldn't even properly handle the error, leading to an "Unexpected
error". Also, iops_size came at the same time as all the other "_max"
options, so also check whether we're not setting that if
QEMU_CAPS_DRIVE_IOTUNE_MAX is not supported.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1224053
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
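The kind of check being added looks roughly like this fragment; the
capability bit is the one named above, while the exact set of
blkdeviotune fields tested here is an approximation:

    if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_DRIVE_IOTUNE_MAX) &&
        (disk->blkdeviotune.total_bytes_sec_max ||
         disk->blkdeviotune.total_iops_sec_max ||
         disk->blkdeviotune.size_iops_sec)) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("this QEMU binary does not support the *_max or "
                         "iops_size block I/O throttling options"));
        return -1;
    }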
There are some non-0 default values in virDomainControllerDef (and
there will soon be more) that are easier not to forget if the
remembering is done by a single initializer function (rather than by
inline code after allocating the object with the generic VIR_ALLOC()).
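A sketch of the single-initializer approach; the function name and the
particular defaults shown are illustrative, not the actual ones:

    static virDomainControllerDefPtr
    controllerDefNew(int type)
    {
        virDomainControllerDefPtr def;

        if (VIR_ALLOC(def) < 0)
            return NULL;
        def->type = type;
        /* all non-0 defaults live in one place, e.g. (hypothetical): */
        def->model = -1;
        return def;
    }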
This loop occurs just after we've assured that all devices that
require a PCI address have been assigned one and all necessary PCI
controllers have been added. It is the perfect place to add other
potentially auto-generated PCI controller attributes that are
dependent on the controller's PCI address (upcoming patch).
There is a convenient loop through all controllers at the end of the
function, but the patch to add new functionality will be cleaner if we
first rearrange that loop a bit.
Note that the loop originally accessed info.addr.pci.bus prior to
determining that the pci part of the object was valid. This isn't
dangerous in any way, but it seemed a bit ugly, so I fixed it.
The function that auto-assigns PCI addresses was written with the
hardcoded assumptions that any PCI bus would have slots available
starting at 1 and ending at 31. This isn't true for many types of
controllers (some have a single slot/port at 0, some have slots/ports
from 0 to 31). This patch updates that function to remove the
hardcoded assumptions. It will properly find/assign addresses for
devices that can only connect to pcie-(root|downstream)-port (which
have minSlot/maxSlot of 0/0) or a pcie-switch-upstream-port (0/31).
It still will not auto-create a new bus of the proper kind for these
connections when one doesn't exist; that task is for another day.
In commit 155ca616e, a change was introduced that no longer allowed
defining volumes via XML with a capacity of '0'. Because we check that
info.size_arg is non-zero, this use case fails. This patch allows
info.size_arg to be zero if no backing store is specified.
Signed-off-by: Chris J Arges <chris.j.arges@canonical.com>
Commit id 'ac3ed2085' causes 'virsh nodedev-list --cap net' to fail
on any system without SYSFS_INFINIBAND_DIR (/sys/class/infiniband).
Rather than assume it's there and fail on the attempt to open the
non-existent directory, check if it's there - if not, return
success and move on. Also fix the caller to check for < 0 upon return.
As reported by Suren Hajyan <shajyan@redhat.com> from a run of the
unit tests.
This reverts commit 7b401c3bda.
Until libvirt is able to differentiate whether heads='1' is just a
leftover from a previous libvirt or whether it was added by the user
on purpose, and also whether the domain was started with support for
qxl's max_outputs, we cannot incorporate this patch into the tree for
compatibility reasons.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
This makes the range and static host array management in
virNetworkDHCPDefParseXML() more similar to what is done in
virNetworkDefUpdateIPDHCPRange() and virNetworkDefUpdateIPDHCPHost() -
they use VIR_APPEND_ELEMENT rather than a combination of
VIR_REALLOC_N() and separate incrementing of the array size.
The one functional change here is that the contents of the last
(unsuccessful) virNetworkDHCPHostDef were previously leaked in
certain failure conditions, but they are now properly cleaned up.
Bhyve as of r279225 (FreeBSD -CURRENT) or r284894 (FreeBSD 10-STABLE)
supports using a UTC time offset via the '-u' argument to bhyve(8).
By default it still uses localtime.
Make the bhyve driver use the UTC clock if it's requested by specifying
<clock offset='utc'> in the domain XML and if the bhyve(8) binary
supports the '-u' flag.
Commit ac3ed20 breaks build on FreeBSD with:
CC util/libvirt_util_la-virnetdev.lo
util/virnetdev.c:2967:1: error: unused function 'virNetDevRDMAFeature' [-Werror,-Wunused-function]
virNetDevRDMAFeature(const char *ifname,
^
So hide the virNetDevRDMAFeature function under the #ifdef
'SIOCETHTOOL' and 'HAVE_STRUCT_IFREQ' section.
Pushed under the build breaker rule.
https://bugzilla.redhat.com/show_bug.cgi?id=1245476
We won't return the errno after commit 0d7f45ae, and a clearer error
will be set in the code in vircgroup*. Also, we would always report
the error "Operation not permitted", because the return value is -1.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Move the calls to the respective functions from virNodeParseNode(),
which is executed once for every NUMA node, to
linuxNodeInfoCPUPopulate(), which is executed just once per host.
Keep track of what CPUs belong to the current node while walking
through the sysfs node entry, so we don't need to do it a second
time immediately afterwards.
This also allows us to loop through all CPUs that are part of a
node in guaranteed ascending order, which is something that is
required for some upcoming changes.
Swap out all instances of cpu_set_t and replace them with virBitmap,
which some of the code was already using anyway.
The changes are pretty mechanical, with one notable exception: an
assumption has been added on the maximum value we can run into while
reading either socket_id or core_id.
While this specific assumption was not in place before, we were
using cpu_set_t improperly by not making sure not to set any bit
past CPU_SETSIZE or explicitly allocating bigger bitmaps; in fact
the default size of a cpu_set_t, 1024, is way too low to run our
testsuite, which includes core_id values in the 2000s.