Yajl has not seen much activity upstream recently.
Switch to using Jansson >= 2.5.
All the platforms we target on https://libvirt.org/platforms.html
have a version >= 2.7 listed on the sites below:
https://repology.org/metapackage/jansson/versions
https://build.opensuse.org/package/show/devel:libraries:c_c++/libjansson
However, Ubuntu 14.04 on Travis-CI only has 2.5. Set the requirement
to 2.5 since we don't use anything from newer versions.
Implement virJSONValue{From,To}String using Jansson, delete the yajl
code (and the related virJSONParser structure) and report an error
if someone explicitly specifies --with-yajl.
Also adjust the test data to account for Jansson's different whitespace
usage for empty arrays and tune up the specfile to keep 'make rpm'
working when bisecting.
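For illustration only, a minimal sketch of the Jansson entry points
involved (not the actual virJSONValue{From,To}String implementation,
which converts to and from virJSONValue):

  #include <stdio.h>
  #include <stdlib.h>
  #include <jansson.h>

  int main(void)
  {
      json_error_t err;
      /* parse a JSON document from a string */
      json_t *doc = json_loads("{\"vcpus\": 2, \"disks\": []}", 0, &err);

      if (!doc) {
          fprintf(stderr, "parse error at line %d: %s\n", err.line, err.text);
          return EXIT_FAILURE;
      }

      /* serialize it back to a compact string */
      char *str = json_dumps(doc, JSON_COMPACT);
      puts(str);

      free(str);
      json_decref(doc);
      return EXIT_SUCCESS;
  }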
Signed-off-by: Ján Tomko <jtomko@redhat.com>
When computing a baseline CPU for a specific hypervisor we have to make
sure to include only CPU features supported by the hypervisor. Otherwise
the computed CPU might not be usable for starting a new domain.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Collin Walling <walling@linux.ibm.com>
This is required for virCPUBaseline to accept a list of guest CPU
definitions since they do not have arch set.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Collin Walling <walling@linux.ibm.com>
The CPU contains the updated microcode for CVE-2017-5715.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The CPU contains the updated microcode for CVE-2017-5715.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The CPU contains the updated microcode for CVE-2017-5715.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The CPU contains the updated microcode for CVE-2017-5715.
The *-guest.xml and *-json.xml CPU definitions use the Skylake-Client
CPU model rather than Broadwell. This is similar to Xeon-E5-2650-v4 and
is caused by our CPU model selection code when no model matches the CPU
signature (family + model). We'd need to maintain a complete list of CPU
signatures for our CPU models to fix this.
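For context, the signature is derived from CPUID leaf 1 EAX roughly as
follows (an illustrative sketch, not libvirt's actual helper):

  #include <stdint.h>

  /* Illustrative sketch: combine the base and extended family/model
   * fields from CPUID leaf 1 EAX into the (family, model) signature. */
  static uint32_t exampleSignature(uint32_t eax)
  {
      uint32_t family = (eax >> 8) & 0x0f;
      uint32_t model = (eax >> 4) & 0x0f;

      if (family == 0x06 || family == 0x0f)
          model |= ((eax >> 16) & 0x0f) << 4;   /* extended model */
      if (family == 0x0f)
          family += (eax >> 20) & 0xff;         /* extended family */

      return (family << 8) | model;             /* illustrative packing */
  }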
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The CPU contains the updated microcode for CVE-2017-5715.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Right-aligning backslashes when defining macros or using complex
commands in Makefiles looks cute, but as soon as any change is
required to the code you end up with either distractingly broken
alignment or unnecessarily big diffs where most of the changes
are just pushing all backslashes a few characters to one side.
Generated using
$ git grep -El '[[:blank:]][[:blank:]]\\$' | \
    grep -E '*\.([chx]|am|mk)$$' | \
    while read f; do \
      sed -Ei 's/[[:blank:]]*[[:blank:]]\\$/ \\/g' "$f"; \
    done
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Some tests require JSON_MODELS to be parsed into qemuCaps and applied
when computing CPU models, and such tests cannot succeed if the QEMU
driver is disabled. Let's mark these tests with JSON_MODELS_REQUIRED and
skip the appropriate parts if building without QEMU.
On the other hand, CPU tests with JSON_MODELS should succeed even if
model definitions from QEMU are not parsed and applied. Let's explicitly
test this by repeating the tests without JSON_MODELS set.
This fixes the build with the QEMU driver disabled, e.g., on some
architectures on RHEL/CentOS.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Without the fix in the previous patch the JSON data from QEMU would be
interpreted as Haswell-noTSX, because x86DataFilterTSX would filter the
rtm and hle features as a result of the
family == 6 && model == 63 && stepping < 4
check even though this CPU has stepping == 4.
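The condition in question, as a standalone sketch (the real
x86DataFilterTSX operates on virCPUx86Data; this only illustrates the
check):

  #include <stdbool.h>

  /* Sketch of the check: TSX (rtm/hle) is filtered out for CPUs matching
   * the known-broken signature, i.e. family 6, model 63, stepping < 4. */
  static bool exampleHasBrokenTSX(unsigned int family,
                                  unsigned int model,
                                  unsigned int stepping)
  {
      return family == 6 && model == 63 && stepping < 4;
  }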
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
xsaveopt is artificially removed from the host to test a disabled
feature which is only included in QEMU's version of the CPU model.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
This CPU was incorrectly detected as SandyBridge before because the
number of additional <feature> elements was the same for both
SandyBridge and Westmere CPU models, but SandyBridge is newer (the CPU
signature does not help here because it doesn't match any signature
defined in cpu_map.xml). But since QEMU's version of SandyBridge CPU
model contains xsaveopt which needs to be disabled, Westmere becomes the
best CPU model when translating CPUID data to virCPUDef. Unfortunately,
this doesn't help with translating the data we got from QEMU and the CPU
model is still computed as SandyBridge in this case.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The unavailable features do not make any difference in this case,
because this is a SandyBridge CPU which has an empty list of unavailable
features.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
When testing cpuDecode for computing a guest CPU definition from CPUID
data (the CPU definition reported by domain capabilities), we need to
provide cpuDecode with the CPU models (and their usability blockers)
from QEMU, if they are available, in the same way cpuDecode is actually
used in the qemu driver.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Gather query-cpu-definitions results and use them for testing CPU model
usability blockers in CPUID to virCPUDef translation.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
If the actual result does not match our expectation, the tests would
not correctly show the difference if a CPU feature is disabled in the
expected result and the actual result does not mention it at all. The
test could complain about an unrelated CPU feature or it could even
crash in case the actual result contains no more features to go through.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The "preferred" parameter is not used by any caller of cpuDecode
anymore. It's only used internally in cpu_x86 to implement cpuBaseline.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
All APIs which expect a list of CPU models supported by hypervisors were
switched from taking char **models and int nmodels to accepting a pointer
to the virDomainCapsCPUModels object stored in domain capabilities. This
avoids the need to transform virDomainCapsCPUModelsPtr into a
NULL-terminated list of model names and also allows the various cpu
driver APIs to access additional details (such as usability) about each
CPU model.
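Schematically (prototypes are simplified and assume libvirt's internal
cpu and domain capabilities headers; they are not the verbatim
signatures):

  /* before: callers had to flatten the models into a plain list */
  int cpuDecodeOld(virCPUDefPtr cpu,
                   const virCPUData *data,
                   const char **models,
                   unsigned int nmodels);

  /* after: the domain capabilities object is passed directly, so the CPU
   * drivers can also look at per-model details such as usability */
  int cpuDecodeNew(virCPUDefPtr cpu,
                   const virCPUData *data,
                   virDomainCapsCPUModelsPtr models);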
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
In the past we updated host-model CPUs with host CPU data by adding a
model and features, but keeping the host-model mode. And since the CPU
model is not normally formatted for host-model CPU defs, we had to pass
the updateCPU flag to the formatting code to be able to properly output
updated host-model CPUs. Libvirt doesn't do this anymore; host-model
CPUs are turned into custom mode CPUs once updated with host CPU data
and thus there's no reason to keep the hacks inside the CPU XML
formatters.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
CPU features unknown to a hypervisor will not be present in dataDisabled
even though such features won't be enabled, simply because the hypervisor
does not know about them. Thus any features we asked for which are not in
dataEnabled should be considered disabled.
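In other words (a minimal sketch with hypothetical names, not the actual
libvirt code):

  #include <stdbool.h>
  #include <stddef.h>
  #include <string.h>

  /* Hypothetical helper: a requested feature counts as disabled whenever
   * it is missing from dataEnabled, even if dataDisabled never lists it. */
  static bool exampleFeatureIsDisabled(const char *feature,
                                       const char **enabled,
                                       size_t nenabled)
  {
      for (size_t i = 0; i < nenabled; i++) {
          if (strcmp(enabled[i], feature) == 0)
              return false;
      }
      return true;
  }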
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We already know from QEMU which CPU features will block migration. Let's
use this information to make a migratable copy of the host CPU model and
use it for updating guest CPU specification. This will allow us to drop
feature filtering from virCPUUpdate where it was just a hack.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If calling query-cpu-model-expansion on the 'host'/'max' CPU model with
the 'migratable' property set to false succeeds, we know QEMU is able to
tell us which features would disable migration. Thus we can mark all
enabled features as migratable.
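Conceptually (hypothetical names; not the actual capabilities-probing
code):

  #include <stdbool.h>
  #include <stddef.h>

  struct exampleFeature {
      const char *name;
      bool enabled;
      bool migratable;
  };

  /* Hypothetical sketch: once the migratable=false expansion is known to
   * work, every feature reported as enabled can be marked migratable. */
  static void exampleMarkMigratable(struct exampleFeature *features,
                                    size_t nfeatures)
  {
      for (size_t i = 0; i < nfeatures; i++) {
          if (features[i].enabled)
              features[i].migratable = true;
      }
  }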
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The test takes
x86_64-cpuid-Something-guest.xml CPU (the CPU libvirt would use for
host-model on a CPU described by x86_64-cpuid-Something.xml without
talking to QEMU about what it supports on the host)
and updates it according to CPUID data from QEMU:
x86_64-cpuid-Something-enabled.xml (reported as "feature-words"
property of the CPU device)
and
x86_64-cpuid-Something-disabled.xml (reported as "filtered-features"
property of the CPU device).
The result is compared to
x86_64-cpuid-Something-json.xml (the CPU libvirt would use as
host-model based on the reply from query-cpu-model-expansion).
The comparison is a bit tricky because the *-json.xml CPU contains fewer
disabled features. Only the features which are included in the base CPU
model but listed as disabled in *.json will be disabled in *-json.xml.
The CPU computed by virCPUUpdateLive from the test data will list all
features present in the host's CPUID data and not enabled in *.json as
disabled. The cpuTestUpdateLiveCompare function checks that the computed
and expected sets of enabled features match.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The public API flags are handled by the cpuBaselineXML wrapper. The
internal cpuBaseline API only needs to know whether it is supposed to
drop non-migratable features.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
cpuBaseline is responsible for computing a baseline CPU while feature
expansion is done by virCPUExpandFeatures. The cpuBaselineXML wrapper
(used by hypervisor drivers to implement virConnectBaselineCPU API)
calls cpuBaseline followed by virCPUExpandFeatures if requested by
VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES flag.
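Schematically, the wrapper does something along these lines (simplified
parameter lists, assuming libvirt's internal cpu.h; not the verbatim
code):

  /* Illustrative flow only */
  virCPUDefPtr
  exampleBaselineXML(virCPUDefPtr *cpus, unsigned int ncpus,
                     unsigned int flags)
  {
      virCPUDefPtr baseline;

      /* compute the baseline CPU; the internal API only needs to know
       * whether non-migratable features should be dropped */
      if (!(baseline = cpuBaseline(cpus, ncpus, NULL,
                                   !!(flags & VIR_CONNECT_BASELINE_CPU_MIGRATABLE))))
          return NULL;

      /* feature expansion is a separate step, done only on request */
      if ((flags & VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES) &&
          virCPUExpandFeatures(baseline->arch, baseline) < 0) {
          virCPUDefFree(baseline);
          return NULL;
      }

      return baseline;
  }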
The features in the three changed test files had to be sorted using
"sort -k 3" because virCPUExpandFeatures returns a sorted list of
features.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
All existing Haswell CPUID data were gathered from CPUs with broken TSX.
Let's add new data for Haswell with correct TSX implementation.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The original test didn't use family/model numbers to make better
decisions about the CPU model and thus mis-detected the model in the two
cases which are modified in this commit. The detected CPU models now
match those obtained from raw CPUID data.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>