Changes in commit 'dec6d9df' caused a compilation failure in a RHEL6
CI build environment, so just rename the 'system' variable to 'syscap'.
cc1: warnings being treated as errors
../../src/conf/node_device_conf.c: In function 'virNodeDevCapSystemParseXML':
../../src/conf/node_device_conf.c:1415: error: declaration of 'system' shadows a global declaration [-Wshadow]
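A minimal reproduction of the clash, outside of libvirt (the function
name is made up): with warnings treated as errors, a local variable that
shadows the libc system(3) declaration breaks the build, and renaming
it is enough to fix it.

    #include <stdlib.h>     /* declares the global system(3) */

    static void
    exampleParseCaps(void)
    {
        /* int system = 0;   rejected: shadows the global system() */
        int syscap = 0;      /* renamed local compiles cleanly */

        (void)syscap;
    }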
In an effort to be consistent with the source module, alter the function
prototypes to follow a similar style to the source, with the "type" on one
line followed by the function name and arguments on subsequent lines,
each argument getting its own line.
Alter the format of the code to follow more recent style guidelines:
two empty lines between functions, function declarations with "[static]
type" on one line followed by the function name, and each function
argument on its own line, as sketched below.
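A hypothetical prototype, purely to illustrate the layout described
above (the name and arguments are made up); definitions follow the same
shape, with "static" in front of the type where applicable:

    int
    exampleDeviceDefParse(const char *xmlStr,
                          const char *name,
                          unsigned int flags);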
Move all the NodeDeviceObj APIs into their own module, virnodedeviceobj,
out of node_device_conf.
Purely code motion at this point, plus adjustments to build cleanly.
AArch64 kernels are technically capable of running armv7l binaries.
Though some vendors disable this feature during kernel build, we
need to allow it in LXC.
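A minimal sketch of the compatibility check this implies, with a
made-up helper name rather than the libvirt function:

    #include <stdbool.h>
    #include <string.h>

    static bool
    exampleArchCanRun(const char *hostArch, const char *guestArch)
    {
        if (strcmp(hostArch, guestArch) == 0)
            return true;

        /* AArch64 kernels can usually execute armv7l binaries, unless
         * the vendor disabled 32-bit compat support at build time. */
        return strcmp(hostArch, "aarch64") == 0 &&
               strcmp(guestArch, "armv7l") == 0;
    }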
Signed-off-by: Matwey V. Kornilov <matwey.kornilov@gmail.com>
All existing Haswell CPUID data were gathered from CPUs with broken TSX.
Let's add new data for Haswell with correct TSX implementation.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
All Intel Haswell processors (except Xeon E7 v3 with stepping >= 4) have
TSX disabled by a microcode update. As not all CPUs are guaranteed to be
patched with microcode updates, we need to explicitly disable TSX on
affected CPUs to avoid its accidental usage.
https://bugzilla.redhat.com/show_bug.cgi?id=1406791
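A rough sketch of the check this describes; when it returns true, the
hle and rtm features would be dropped from the detected CPU model. The
family/model/stepping values are an assumption based on public Haswell
documentation, not copied from the patch:

    #include <stdbool.h>

    static bool
    exampleHaswellTsxBroken(unsigned int family,
                            unsigned int model,
                            unsigned int stepping)
    {
        if (family != 6)
            return false;

        switch (model) {
        case 0x3c:  /* Haswell desktop */
        case 0x45:  /* Haswell ULT */
        case 0x46:  /* Haswell GT3e */
            return true;
        case 0x3f:  /* Haswell server; Xeon E7 v3 fixed TSX in stepping >= 4 */
            return stepping < 4;
        default:
            return false;
        }
    }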
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The original test didn't use family/model numbers to make better
decisions about the CPU model and thus mis-detected the model in the two
cases which are modified in this commit. The detected CPU models now
match those obtained from raw CPUID data.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Converted by running the following command, renaming the files as
*.new, and committing only the *.new files.
(cd tests/cputestdata; ./cpu-convert.py *.json)
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Instantiating "host" CPU and querying it using qom-get has been the only
way of probing host CPU via QEMU until 2.9.0 implemented
query-cpu-model-expansion for x86_64. Even though libvirt never really
used the old way its result can be easily converted into the one
produced by query-cpu-model-expansion. Thus we can reuse the original
test data and possible get new data from hosts where QEMU does not
support the new QMP command.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The static CPU model expansion is designed to return only the canonical
names of all CPU properties. To maintain backwards compatibility, libvirt
is stuck with different spellings of some of the features, so we need to
use the full expansion to get the additional spellings. In addition to
returning all spelling variants for all properties, the full expansion
will contain properties which are not guaranteed to be migration
compatible. Thus, we need to combine both expansions: first we call the
static expansion to limit the result to migratable properties, then we
use the result of the static expansion as an input to the full expansion
to get both canonical names and their aliases.
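For illustration, the two QMP requests in the order they would be sent;
the "base" model name and the single property shown in step 2 are
assumptions about what the static expansion returns, not captured output:

    #include <stdio.h>

    int main(void)
    {
        /* Step 1: static expansion of "host" - canonical names,
         * migratable properties only. */
        puts("{\"execute\": \"query-cpu-model-expansion\","
             " \"arguments\": {\"type\": \"static\","
             " \"model\": {\"name\": \"host\"}}}");

        /* Step 2: feed the model returned by step 1 back in and ask
         * for a full expansion, which adds the alias spellings. */
        puts("{\"execute\": \"query-cpu-model-expansion\","
             " \"arguments\": {\"type\": \"full\","
             " \"model\": {\"name\": \"base\", \"props\": {\"vmx\": true}}}}");
        return 0;
    }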
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Querying "host" CPU model expansion only makes sense for KVM. QEMU 2.9.0
introduces a new "max" CPU model which can be used to ask QEMU what the
best CPU it can provide to a TCG domain is.
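A minimal sketch of the resulting selection, with a made-up function name:

    #include <stdbool.h>

    static const char *
    exampleProbedModelName(bool kvmAvailable)
    {
        /* "host" is only meaningful with KVM; for TCG, QEMU 2.9.0 and
         * newer can be asked about "max" instead. */
        return kvmAvailable ? "host" : "max";
    }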
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
While query-cpu-model-expansion returns only boolean features on s390,
x86_64 reports some integer and string properties which we are
interested in.
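A rough sketch of the property shapes the parser has to handle; the type
names are made up and the properties in the comments are just examples:

    #include <stdbool.h>

    typedef enum {
        EXAMPLE_PROP_BOOLEAN,   /* e.g. "vmx": true */
        EXAMPLE_PROP_NUMBER,    /* e.g. "family": 6 */
        EXAMPLE_PROP_STRING,    /* e.g. "vendor": "GenuineIntel" */
    } examplePropType;

    typedef struct {
        const char *name;
        examplePropType type;
        union {
            bool boolean;
            long long number;
            const char *string;
        } value;
    } exampleProp;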
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
While reviewing a patch from Andrea that modified this test case, I
realized that although it was "properly failing" (it's a negative
test), it was failing for the wrong reason (the MULTIFUNCTION cap
wasn't set in the test case, so it was saying that multifunction=on
wasn't supported by the QEMU binary; instead it should have been
complaining that it had run out of PCI slots of the appropriate type
and couldn't automatically add any more).
This improper failure had started when I added the patch to
automatically aggregate pcie-root-ports onto multiple functions of
each pcie-root slot, but I hadn't noticed it because the test still
failed.
This patch corrects the test case to 1) set the MULTIFUNCTION flag in
the caps, and 2) attempt to add 241 pcie-root-ports to a domain. Since
there are 30 slots available on a pcie-root (slot 0 is reserved, and
slot 31 is used by the integrated SATA controller), each slot has 8
functions, and a pcie-root-port can only be placed on a function of a
slot on pcie-root, the maximum number of pcie-root-ports in any domain
is 30 slots * 8 functions per slot = 240.
The build system for libvirt correctly detects the location of blkid
using the PKG_CONFIG_PATH environment variable. The file blkid.pc states
that the include flags should be 'Cflags: -I${includedir}/blkid', but
libvirt searches for blkid.h inside ${includedir}/blkid/blkid, which is
wrong. Until now, the compilation of libvirt succeeded out of pure
luck, because -I/usr/include happened to be among the CFLAGS. This issue
was hit while compiling libvirt on Ubuntu 16.04.2 with bare-minimum dev
packages and a custom-compiled blkid kept in a non-standard $prefix.
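A minimal compile check illustrating the point, assuming CFLAGS are
taken verbatim from 'pkg-config --cflags blkid' (i.e.
-I${includedir}/blkid): the header then resolves as plain <blkid.h>, and
prefixing it with another blkid/ only works when -I/usr/include happens
to be on the command line as well:

    #include <blkid.h>

    int main(void)
    {
        blkid_probe pr = blkid_new_probe();

        blkid_free_probe(pr);
        return 0;
    }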
Signed-off-by: Nehal J Wani <nehaljw.kkd1@gmail.com>
The generated HTML will contain <ul></ul> otherwise, which
triggers an error during 'make check'.
The proper fix would be not to generate the problematic
HTML in the first place but, while I'm working on it, this
workaround will do.
virQEMUCapsHasPCIMultiBus() performs a version check on
the QEMU binary to figure out whether multiple buses are
supported, so to get the correct aliases assigned when
dealing with pSeries guests we need to spoof the version
accordingly in the test suite.
Due to the extra architecture-specific logic, it's already
necessary for users to call virQEMUCapsHasPCIMultiBus(),
so the capability itself is just a pointless distraction.
Our documentation states that the chardev logging file is truncated
unless append='on' is specified. QEMU also behaves the same way and
truncates the file unless we provide the argument. The new virtlogd
implementation did not honor this when the argument was missing and
continued to append to the file.
Truncate the file even when the 'append' attribute is not present, so
that both implementations behave the same and adhere to the docs.
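A minimal sketch of the intended behaviour (not the virtlogd code
itself): without append='on' the log file is truncated on open, matching
what QEMU does when it handles the chardev log file by itself:

    #include <fcntl.h>
    #include <stdbool.h>
    #include <sys/stat.h>

    static int
    exampleOpenChardevLog(const char *path, bool append)
    {
        int flags = O_WRONLY | O_CREAT | (append ? O_APPEND : O_TRUNC);

        return open(path, flags, S_IRUSR | S_IWUSR);
    }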
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1420205
This function is calling public APIs (virNodeDeviceLookupByName
etc.). That requires the driver lock to be unlocked and locked
again. If we, however, replace the public API calls with the
internal calls (which the public APIs call anyway), we can drop the
lock/unlock exercise.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The 'nodes' variable is overwritten after its first usage and possibly
leaked if any code in the first round of parsing goes to error.
Found by Coverity.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Arguably though, a function returning only on success is a very
interesting, although quite impractical, concept. Also, errno isn't
and shouldn't be preserved in this case, since it can be fed directly
to virReportSystemError.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
After eca76884ea, in case of an error in qemuDomainSetPrivatePaths()
during pretended start we jump to 'stop'. I changed this during
review from 'cleanup', which turned out to be the correct label after
all. Well, sort of. We can't call qemuProcessStop() as it decrements
driver->nactive, which we did not increment. However, it calls
virDomainObjRemoveTransientDef(), which is basically the only
function we need to call. So call that function and goto cleanup instead.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
While "x86" is a CPU sub driver name, it is not a recognized name of any
architecture known to libvirt. Let's use "x86_64" prefix which can be
used with virArch APIs.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The API is useful for creating virCPUData in a hypervisor driver from
data we got by querying the hypervisor.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The API is useful for creating virCPUData in a hypervisor driver from
data we got by querying the hypervisor.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The API is useful for creating virCPUData in a hypervisor driver from
data we got by querying the hypervisor.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The CPU driver provides APIs to create and free virCPUDataPtr. Thus all
APIs exported from the driver should work with that rather than
requiring the caller to pass a pointer to an internal part of the
structure.
In other words

    virCPUx86DataAddCPUID(cpudata, &cpuid)

is much better than the original

    virCPUx86DataAddCPUID(&cpudata->data.x86, &cpuid)
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The new API is called virCPUDataFree. Individual CPU drivers are no
longer required to implement their own freeing function unless they need
to free architecture-specific data from virCPUData.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>