Adds CPU selection infrastructure

Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using the cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can simply
call the cpuCompareXML() function with the host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in the guest's capabilities:
    virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
The actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for the i686 and
x86_64 architectures) or whether symbolic model names have to be used.
Typical use by hypervisors that prefer CPUID (such as VMware and Xen):
- convert the guest CPU configuration from the domain's XML into a set of
  raw data structures, each representing one of the feature policies:
      cpuEncode(conn, architecture, guest_cpu_config,
                &forced_data, &required_data, &optional_data,
                &disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
  to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
      cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
  allowed model names:
      cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
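To make the QEMU-style flow above concrete, here is a minimal sketch of how a
driver might compute the guest CPU definition before building its hypervisor
command line. The call signatures follow the commit message as written; the
wrapper function, its parameter names, and the virCPUDataPtr type for the raw
data are illustrative assumptions and may differ between libvirt versions.

    /* Illustrative sketch only -- not part of this commit. */
    static virCPUDefPtr
    exampleComputeGuestCPU(virConnectPtr conn,
                           virCPUDefPtr host_cpu,         /* e.g. decoded from cpuNodeData() */
                           virCPUDefPtr guest_cpu_config, /* <cpu> element of the domain XML */
                           const char **allowed_models,   /* models the hypervisor supports */
                           unsigned int n_allowed_models)
    {
        virCPUDataPtr data = NULL;  /* raw data type is an assumption */
        virCPUDefPtr guest = NULL;

        /* Compute raw CPUID data for the guest CPU. */
        if (cpuGuestData(conn, host_cpu, guest_cpu_config, &data) < 0)
            return NULL;

        /* Decode the raw data, restricting the result to known models. */
        if (VIR_ALLOC(guest) < 0 ||
            cpuDecode(conn, guest, data, n_allowed_models, allowed_models) < 0) {
            virCPUDefFree(guest);
            guest = NULL;
        }

        /* guest->model and guest->features can now be passed to the
         * hypervisor; 'data' should be released with the matching cpu
         * data free helper (omitted here). */
        return guest;
    }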
/*
 * cpu_x86.c: CPU driver for CPUs with x86 compatible CPUID instruction
 *
 * Copyright (C) 2009-2014 Red Hat, Inc.
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library.  If not, see
 * <http://www.gnu.org/licenses/>.
 *
 * Authors:
 *      Jiri Denemark <jdenemar@redhat.com>
 */

#include <config.h>

#include <stdint.h>

#include "virlog.h"
#include "viralloc.h"
#include "cpu.h"
#include "cpu_map.h"
#include "cpu_x86.h"
#include "virbuffer.h"
#include "virendian.h"
#include "virstring.h"
#include "virhostcpu.h"

#define VIR_FROM_THIS VIR_FROM_CPU

VIR_LOG_INIT("cpu.cpu_x86");

#define VENDOR_STRING_LENGTH 12

static const virCPUx86CPUID cpuidNull = { 0 };

static const virArch archs[] = { VIR_ARCH_I686, VIR_ARCH_X86_64 };

typedef struct _virCPUx86Vendor virCPUx86Vendor;
typedef virCPUx86Vendor *virCPUx86VendorPtr;
struct _virCPUx86Vendor {
    char *name;
    virCPUx86CPUID cpuid;
};

typedef struct _virCPUx86Feature virCPUx86Feature;
typedef virCPUx86Feature *virCPUx86FeaturePtr;
struct _virCPUx86Feature {
    char *name;
    virCPUx86Data data;
    bool migratable;
};


#define KVM_FEATURE_DEF(Name, Eax_in, Eax) \
    static virCPUx86CPUID Name ## _cpuid[] = { \
        { .eax_in = Eax_in, .eax = Eax }, \
    }

#define KVM_FEATURE(Name) \
    { \
        .name = (char *) Name, \
        .data = { \
            .len = ARRAY_CARDINALITY(Name ## _cpuid), \
            .data = Name ## _cpuid \
        } \
    }

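/* Illustration (not part of the original source): for a single feature the
 * two macros above expand to roughly the following.  KVM_FEATURE_DEF defines
 * the per-feature array of CPUID leaves:
 *
 *     static virCPUx86CPUID VIR_CPU_x86_KVM_CLOCKSOURCE_cpuid[] = {
 *         { .eax_in = 0x40000001, .eax = 0x00000001 },
 *     };
 *
 * and KVM_FEATURE produces the matching initializer entry used below in
 * x86_kvm_features[]:
 *
 *     {
 *         .name = (char *) VIR_CPU_x86_KVM_CLOCKSOURCE,
 *         .data = {
 *             .len = ARRAY_CARDINALITY(VIR_CPU_x86_KVM_CLOCKSOURCE_cpuid),
 *             .data = VIR_CPU_x86_KVM_CLOCKSOURCE_cpuid
 *         }
 *     }
 */
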
KVM_FEATURE_DEF(VIR_CPU_x86_KVM_CLOCKSOURCE,
                0x40000001, 0x00000001);
KVM_FEATURE_DEF(VIR_CPU_x86_KVM_NOP_IO_DELAY,
                0x40000001, 0x00000002);
KVM_FEATURE_DEF(VIR_CPU_x86_KVM_MMU_OP,
                0x40000001, 0x00000004);
KVM_FEATURE_DEF(VIR_CPU_x86_KVM_CLOCKSOURCE2,
                0x40000001, 0x00000008);
KVM_FEATURE_DEF(VIR_CPU_x86_KVM_ASYNC_PF,
                0x40000001, 0x00000010);
KVM_FEATURE_DEF(VIR_CPU_x86_KVM_STEAL_TIME,
                0x40000001, 0x00000020);
KVM_FEATURE_DEF(VIR_CPU_x86_KVM_PV_EOI,
                0x40000001, 0x00000040);
KVM_FEATURE_DEF(VIR_CPU_x86_KVM_PV_UNHALT,
                0x40000001, 0x00000080);
KVM_FEATURE_DEF(VIR_CPU_x86_KVM_CLOCKSOURCE_STABLE_BIT,
                0x40000001, 0x01000000);
KVM_FEATURE_DEF(VIR_CPU_x86_KVM_HV_RUNTIME,
                0x40000003, 0x00000001);
KVM_FEATURE_DEF(VIR_CPU_x86_KVM_HV_SYNIC,
                0x40000003, 0x00000004);
KVM_FEATURE_DEF(VIR_CPU_x86_KVM_HV_STIMER,
                0x40000003, 0x00000008);
KVM_FEATURE_DEF(VIR_CPU_x86_KVM_HV_RELAXED,
                0x40000003, 0x00000020);
KVM_FEATURE_DEF(VIR_CPU_x86_KVM_HV_SPINLOCKS,
                0x40000003, 0x00000022);
KVM_FEATURE_DEF(VIR_CPU_x86_KVM_HV_VAPIC,
                0x40000003, 0x00000030);
KVM_FEATURE_DEF(VIR_CPU_x86_KVM_HV_VPINDEX,
                0x40000003, 0x00000040);
KVM_FEATURE_DEF(VIR_CPU_x86_KVM_HV_RESET,
                0x40000003, 0x00000080);

static virCPUx86Feature x86_kvm_features[] =
{
    KVM_FEATURE(VIR_CPU_x86_KVM_CLOCKSOURCE),
    KVM_FEATURE(VIR_CPU_x86_KVM_NOP_IO_DELAY),
    KVM_FEATURE(VIR_CPU_x86_KVM_MMU_OP),
    KVM_FEATURE(VIR_CPU_x86_KVM_CLOCKSOURCE2),
    KVM_FEATURE(VIR_CPU_x86_KVM_ASYNC_PF),
    KVM_FEATURE(VIR_CPU_x86_KVM_STEAL_TIME),
    KVM_FEATURE(VIR_CPU_x86_KVM_PV_EOI),
    KVM_FEATURE(VIR_CPU_x86_KVM_PV_UNHALT),
    KVM_FEATURE(VIR_CPU_x86_KVM_CLOCKSOURCE_STABLE_BIT),
    KVM_FEATURE(VIR_CPU_x86_KVM_HV_RUNTIME),
    KVM_FEATURE(VIR_CPU_x86_KVM_HV_SYNIC),
    KVM_FEATURE(VIR_CPU_x86_KVM_HV_STIMER),
    KVM_FEATURE(VIR_CPU_x86_KVM_HV_RELAXED),
    KVM_FEATURE(VIR_CPU_x86_KVM_HV_SPINLOCKS),
    KVM_FEATURE(VIR_CPU_x86_KVM_HV_VAPIC),
    KVM_FEATURE(VIR_CPU_x86_KVM_HV_VPINDEX),
    KVM_FEATURE(VIR_CPU_x86_KVM_HV_RESET),
};

typedef struct _virCPUx86Model virCPUx86Model;
typedef virCPUx86Model *virCPUx86ModelPtr;
struct _virCPUx86Model {
    char *name;
    virCPUx86VendorPtr vendor;
    uint32_t signature;
    virCPUx86Data data;
};

typedef struct _virCPUx86Map virCPUx86Map;
typedef virCPUx86Map *virCPUx86MapPtr;
struct _virCPUx86Map {
    size_t nvendors;
    virCPUx86VendorPtr *vendors;
    size_t nfeatures;
    virCPUx86FeaturePtr *features;
    size_t nmodels;
    virCPUx86ModelPtr *models;
    size_t nblockers;
    virCPUx86FeaturePtr *migrate_blockers;
};

static virCPUx86MapPtr cpuMap;
static unsigned int microcodeVersion;

int virCPUx86DriverOnceInit(void);
VIR_ONCE_GLOBAL_INIT(virCPUx86Driver);

typedef enum {
    SUBSET,
    EQUAL,
    SUPERSET,
    UNRELATED
} virCPUx86CompareResult;

typedef struct _virCPUx86DataIterator virCPUx86DataIterator;
typedef virCPUx86DataIterator *virCPUx86DataIteratorPtr;
struct _virCPUx86DataIterator {
    const virCPUx86Data *data;
    int pos;
};

#define virCPUx86DataIteratorInit(data) \
    { data, -1 }


static bool
x86cpuidMatch(const virCPUx86CPUID *cpuid1,
              const virCPUx86CPUID *cpuid2)
{
    return (cpuid1->eax == cpuid2->eax &&
            cpuid1->ebx == cpuid2->ebx &&
            cpuid1->ecx == cpuid2->ecx &&
            cpuid1->edx == cpuid2->edx);
}


static bool
x86cpuidMatchMasked(const virCPUx86CPUID *cpuid,
                    const virCPUx86CPUID *mask)
{
    return ((cpuid->eax & mask->eax) == mask->eax &&
            (cpuid->ebx & mask->ebx) == mask->ebx &&
            (cpuid->ecx & mask->ecx) == mask->ecx &&
            (cpuid->edx & mask->edx) == mask->edx);
}


static void
x86cpuidSetBits(virCPUx86CPUID *cpuid,
                const virCPUx86CPUID *mask)
{
    if (!mask)
        return;

    cpuid->eax |= mask->eax;
    cpuid->ebx |= mask->ebx;
    cpuid->ecx |= mask->ecx;
    cpuid->edx |= mask->edx;
}


static void
x86cpuidClearBits(virCPUx86CPUID *cpuid,
                  const virCPUx86CPUID *mask)
{
    if (!mask)
        return;

    cpuid->eax &= ~mask->eax;
    cpuid->ebx &= ~mask->ebx;
    cpuid->ecx &= ~mask->ecx;
    cpuid->edx &= ~mask->edx;
}


static void
x86cpuidAndBits(virCPUx86CPUID *cpuid,
                const virCPUx86CPUID *mask)
{
    if (!mask)
        return;

    cpuid->eax &= mask->eax;
    cpuid->ebx &= mask->ebx;
    cpuid->ecx &= mask->ecx;
    cpuid->edx &= mask->edx;
}


static virCPUx86FeaturePtr
x86FeatureFind(virCPUx86MapPtr map,
               const char *name)
{
    size_t i;

    for (i = 0; i < map->nfeatures; i++) {
        if (STREQ(map->features[i]->name, name))
            return map->features[i];
    }

    return NULL;
}


static virCPUx86FeaturePtr
x86FeatureFindInternal(const char *name)
{
    size_t i;
    size_t count = ARRAY_CARDINALITY(x86_kvm_features);

    for (i = 0; i < count; i++) {
        if (STREQ(x86_kvm_features[i].name, name))
            return x86_kvm_features + i;
    }

    return NULL;
}


static int
virCPUx86CPUIDSorter(const void *a, const void *b)
{
    virCPUx86CPUID *da = (virCPUx86CPUID *) a;
    virCPUx86CPUID *db = (virCPUx86CPUID *) b;

    if (da->eax_in > db->eax_in)
        return 1;
    else if (da->eax_in < db->eax_in)
        return -1;

    if (da->ecx_in > db->ecx_in)
        return 1;
    else if (da->ecx_in < db->ecx_in)
        return -1;

    return 0;
}
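
/* Illustration (not part of the original source): a comparator like the one
 * above is typically handed to plain qsort() to keep an array of CPUID
 * leaves ordered by eax_in and then ecx_in:
 *
 *     qsort(data->data, data->len, sizeof(*data->data), virCPUx86CPUIDSorter);
 *
 * where 'data' points to a virCPUx86Data container as used throughout this
 * file.
 */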


/* skips all zero CPUID leaves */
static virCPUx86CPUID *
x86DataCpuidNext(virCPUx86DataIteratorPtr iterator)
{
    const virCPUx86Data *data = iterator->data;

    if (!data)
        return NULL;

    while (++iterator->pos < data->len) {
        if (!x86cpuidMatch(data->data + iterator->pos, &cpuidNull))
            return data->data + iterator->pos;
    }

    return NULL;
}
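
/* Illustration (not part of the original source): together with the
 * virCPUx86DataIteratorInit() macro, the function above gives a simple loop
 * over every non-zero CPUID leaf stored in a virCPUx86Data container:
 *
 *     virCPUx86DataIterator iter = virCPUx86DataIteratorInit(data);
 *     virCPUx86CPUID *cpuid;
 *
 *     while ((cpuid = x86DataCpuidNext(&iter)))
 *         process(cpuid);
 *
 * where process() is a placeholder for whatever the caller does with each
 * leaf.
 */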


static virCPUx86CPUID *
x86DataCpuid(const virCPUx86Data *data,
             const virCPUx86CPUID *cpuid)
|
|
|
{
|
Convert 'int i' to 'size_t i' in src/cpu/ files
Convert the type of loop iterators named 'i', 'j', k',
'ii', 'jj', 'kk', to be 'size_t' instead of 'int' or
'unsigned int', also santizing 'ii', 'jj', 'kk' to use
the normal 'i', 'j', 'k' naming
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
2013-07-08 14:09:33 +00:00
|
|
|
size_t i;
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
|
2013-10-07 13:26:17 +00:00
|
|
|
for (i = 0; i < data->len; i++) {
|
2016-05-20 08:59:13 +00:00
|
|
|
if (data->data[i].eax_in == cpuid->eax_in &&
|
|
|
|
data->data[i].ecx_in == cpuid->ecx_in)
|
2013-10-07 13:26:17 +00:00
|
|
|
return data->data + i;
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
}
|
|
|
|
|
2013-10-07 13:26:17 +00:00
|
|
|
return NULL;
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
}
|
|
|
|
|
2017-02-02 14:23:36 +00:00
|
|
|
static void
|
2016-06-07 07:38:53 +00:00
|
|
|
virCPUx86DataClear(virCPUx86Data *data)
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
{
|
2016-05-12 13:06:25 +00:00
|
|
|
if (!data)
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
return;
|
|
|
|
|
2013-10-07 13:26:17 +00:00
|
|
|
VIR_FREE(data->data);
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
|
2012-12-18 20:27:09 +00:00
|
|
|
static void
|
2017-02-02 14:37:40 +00:00
|
|
|
virCPUx86DataFree(virCPUDataPtr data)
|
2012-12-18 20:27:09 +00:00
|
|
|
{
|
|
|
|
if (!data)
|
|
|
|
return;
|
|
|
|
|
2016-06-07 07:38:53 +00:00
|
|
|
virCPUx86DataClear(&data->data.x86);
|
2012-12-18 20:27:09 +00:00
|
|
|
VIR_FREE(data);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2016-06-07 07:38:53 +00:00
|
|
|
static int
|
|
|
|
x86DataCopy(virCPUx86Data *dst, const virCPUx86Data *src)
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
{
|
Convert 'int i' to 'size_t i' in src/cpu/ files
Convert the type of loop iterators named 'i', 'j', k',
'ii', 'jj', 'kk', to be 'size_t' instead of 'int' or
'unsigned int', also santizing 'ii', 'jj', 'kk' to use
the normal 'i', 'j', 'k' naming
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
2013-07-08 14:09:33 +00:00
|
|
|
size_t i;
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
|
2016-06-07 07:38:53 +00:00
|
|
|
if (VIR_ALLOC_N(dst->data, src->len) < 0)
|
|
|
|
return -1;
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
|
2016-06-07 07:38:53 +00:00
|
|
|
dst->len = src->len;
|
|
|
|
for (i = 0; i < src->len; i++)
|
|
|
|
dst->data[i] = src->data[i];
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
|
2016-06-07 07:38:53 +00:00
|
|
|
return 0;
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
}


static int
virCPUx86DataAddCPUIDInt(virCPUx86Data *data,
                         const virCPUx86CPUID *cpuid)
{
    virCPUx86CPUID *existing;

    if ((existing = x86DataCpuid(data, cpuid))) {
        x86cpuidSetBits(existing, cpuid);
    } else {
        if (VIR_APPEND_ELEMENT_COPY(data->data, data->len,
                                    *((virCPUx86CPUID *)cpuid)) < 0)
            return -1;

        qsort(data->data, data->len,
              sizeof(virCPUx86CPUID), virCPUx86CPUIDSorter);
    }

    return 0;
}
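
/*
 * Note: if a leaf with the same eax_in/ecx_in already exists, its bits are
 * merged via x86cpuidSetBits; only a genuinely new leaf is appended, and the
 * array is then re-sorted so it stays ordered by eax_in/ecx_in.
 */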


static int
x86DataAdd(virCPUx86Data *data1,
           const virCPUx86Data *data2)
{
    virCPUx86DataIterator iter = virCPUx86DataIteratorInit(data2);
    virCPUx86CPUID *cpuid1;
    virCPUx86CPUID *cpuid2;

    while ((cpuid2 = x86DataCpuidNext(&iter))) {
        cpuid1 = x86DataCpuid(data1, cpuid2);

        if (cpuid1) {
            x86cpuidSetBits(cpuid1, cpuid2);
        } else {
            if (virCPUx86DataAddCPUIDInt(data1, cpuid2) < 0)
                return -1;
        }
    }

    return 0;
}


static void
x86DataSubtract(virCPUx86Data *data1,
                const virCPUx86Data *data2)
{
    virCPUx86DataIterator iter = virCPUx86DataIteratorInit(data1);
    virCPUx86CPUID *cpuid1;
    virCPUx86CPUID *cpuid2;

    while ((cpuid1 = x86DataCpuidNext(&iter))) {
        cpuid2 = x86DataCpuid(data2, cpuid1);
        x86cpuidClearBits(cpuid1, cpuid2);
    }
}


static void
x86DataIntersect(virCPUx86Data *data1,
                 const virCPUx86Data *data2)
{
    virCPUx86DataIterator iter = virCPUx86DataIteratorInit(data1);
    virCPUx86CPUID *cpuid1;
    virCPUx86CPUID *cpuid2;

    while ((cpuid1 = x86DataCpuidNext(&iter))) {
        cpuid2 = x86DataCpuid(data2, cpuid1);
        if (cpuid2)
            x86cpuidAndBits(cpuid1, cpuid2);
        else
            x86cpuidClearBits(cpuid1, cpuid1);
    }
}
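
/*
 * Worked example with hypothetical values for a single leaf (eax_in=0x1),
 * assuming x86cpuidSetBits ORs bits in and x86cpuidClearBits masks them out:
 * if data1 has ecx=0x09 and data2 has ecx=0x03 for that leaf, then
 * x86DataAdd(data1, data2) leaves ecx=0x0b (OR), x86DataSubtract(data1,
 * data2) leaves ecx=0x08 (AND NOT), and x86DataIntersect(data1, data2)
 * leaves ecx=0x01 (AND).
 */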


static bool
x86DataIsEmpty(virCPUx86Data *data)
{
    virCPUx86DataIterator iter = virCPUx86DataIteratorInit(data);

    return !x86DataCpuidNext(&iter);
}


static bool
x86DataIsSubset(const virCPUx86Data *data,
                const virCPUx86Data *subset)
{
    virCPUx86DataIterator iter = virCPUx86DataIteratorInit((virCPUx86Data *)subset);
    const virCPUx86CPUID *cpuid;
    const virCPUx86CPUID *cpuidSubset;

    while ((cpuidSubset = x86DataCpuidNext(&iter))) {
        if (!(cpuid = x86DataCpuid(data, cpuidSubset)) ||
            !x86cpuidMatchMasked(cpuid, cpuidSubset))
            return false;
    }

    return true;
}


/* also removes all detected features from data */
static int
x86DataToCPUFeatures(virCPUDefPtr cpu,
                     int policy,
                     virCPUx86Data *data,
                     virCPUx86MapPtr map)
{
    size_t i;

    for (i = 0; i < map->nfeatures; i++) {
        virCPUx86FeaturePtr feature = map->features[i];

        if (x86DataIsSubset(data, &feature->data)) {
            x86DataSubtract(data, &feature->data);
            if (virCPUDefAddFeature(cpu, feature->name, policy) < 0)
                return -1;
        }
    }

    return 0;
}


/* also removes bits corresponding to vendor string from data */
static virCPUx86VendorPtr
x86DataToVendor(const virCPUx86Data *data,
                virCPUx86MapPtr map)
{
    virCPUx86CPUID *cpuid;
    size_t i;

    for (i = 0; i < map->nvendors; i++) {
        virCPUx86VendorPtr vendor = map->vendors[i];

        if ((cpuid = x86DataCpuid(data, &vendor->cpuid)) &&
            x86cpuidMatchMasked(cpuid, &vendor->cpuid)) {
            x86cpuidClearBits(cpuid, &vendor->cpuid);
            return vendor;
        }
    }

    return NULL;
}


static int
virCPUx86VendorToCPUID(const char *vendor,
                       virCPUx86CPUID *cpuid)
{
    if (strlen(vendor) != VENDOR_STRING_LENGTH) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       _("Invalid CPU vendor string '%s'"), vendor);
        return -1;
    }

    cpuid->eax_in = 0;
    cpuid->ecx_in = 0;
    cpuid->ebx = virReadBufInt32LE(vendor);
    cpuid->edx = virReadBufInt32LE(vendor + 4);
    cpuid->ecx = virReadBufInt32LE(vendor + 8);

    return 0;
}
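
/*
 * Example: the 12-byte vendor string "GenuineIntel" is split into three
 * little-endian 32-bit chunks in the same order CPUID leaf 0 reports them:
 * ebx = "Genu", edx = "ineI", ecx = "ntel".
 */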


static uint32_t
x86MakeSignature(unsigned int family,
                 unsigned int model,
                 unsigned int stepping)
{
    uint32_t sig = 0;

    /*
     * CPU signature (eax from 0x1 CPUID leaf):
     *
     * |31 .. 28|27 .. 20|19 .. 16|15 .. 14|13 .. 12|11 .. 8|7 .. 4|3 .. 0|
     * |   R    | extFam | extMod |   R    | PType  |  Fam  | Mod  | Step |
     *
     * R        reserved
     * extFam   extended family (valid only if Fam == 0xf)
     * extMod   extended model
     * PType    processor type
     * Fam      family
     * Mod      model
     * Step     stepping
     *
     * family = eax[27:20] + eax[11:8]
     * model = eax[19:16] << 4 + eax[7:4]
     * stepping = eax[3:0]
     */

    /* extFam */
    if (family > 0xf) {
        sig |= (family - 0xf) << 20;
        family = 0xf;
    }

    /* extMod */
    sig |= (model >> 4) << 16;

    /* PType is always 0 */

    /* Fam */
    sig |= family << 8;

    /* Mod */
    sig |= (model & 0xf) << 4;

    /* Step */
    sig |= stepping & 0xf;

    return sig;
}
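
/*
 * Worked example: family 6, model 62, stepping 4 (e.g. an Ivy Bridge-EP
 * part) gives extMod = 62 >> 4 = 3, Fam = 6, Mod = 62 & 0xf = 0xe,
 * Step = 4, i.e. a signature of 0x000306e4.
 */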


static void
x86DataToSignatureFull(const virCPUx86Data *data,
                       unsigned int *family,
                       unsigned int *model,
                       unsigned int *stepping)
{
    virCPUx86CPUID leaf1 = { .eax_in = 0x1 };
    virCPUx86CPUID *cpuid;

    *family = *model = *stepping = 0;

    if (!(cpuid = x86DataCpuid(data, &leaf1)))
        return;

    *family = ((cpuid->eax >> 20) & 0xff) + ((cpuid->eax >> 8) & 0xf);
    *model = ((cpuid->eax >> 12) & 0xf0) + ((cpuid->eax >> 4) & 0xf);
    *stepping = cpuid->eax & 0xf;
}


/* Mask out irrelevant bits (R and Step) from processor signature. */
#define SIGNATURE_MASK 0x0fff3ff0

static uint32_t
x86DataToSignature(const virCPUx86Data *data)
{
    virCPUx86CPUID leaf1 = { .eax_in = 0x1 };
    virCPUx86CPUID *cpuid;

    if (!(cpuid = x86DataCpuid(data, &leaf1)))
        return 0;

    return cpuid->eax & SIGNATURE_MASK;
}
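
/*
 * Example: masking 0x000306e4 with SIGNATURE_MASK yields 0x000306e0, so two
 * CPUs differing only in stepping produce the same signature here.
 */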


static int
x86DataAddSignature(virCPUx86Data *data,
                    uint32_t signature)
{
    virCPUx86CPUID cpuid = { .eax_in = 0x1, .eax = signature };

    return virCPUx86DataAddCPUIDInt(data, &cpuid);
}


static virCPUDefPtr
x86DataToCPU(const virCPUx86Data *data,
             virCPUx86ModelPtr model,
             virCPUx86MapPtr map,
             virDomainCapsCPUModelPtr hvModel)
{
    virCPUDefPtr cpu;
    virCPUx86Data copy = VIR_CPU_X86_DATA_INIT;
    virCPUx86Data modelData = VIR_CPU_X86_DATA_INIT;
    virCPUx86VendorPtr vendor;

    if (VIR_ALLOC(cpu) < 0 ||
        VIR_STRDUP(cpu->model, model->name) < 0 ||
        x86DataCopy(&copy, data) < 0 ||
        x86DataCopy(&modelData, &model->data) < 0)
        goto error;

    if ((vendor = x86DataToVendor(&copy, map)) &&
        VIR_STRDUP(cpu->vendor, vendor->name) < 0)
        goto error;

    x86DataSubtract(&copy, &modelData);
    x86DataSubtract(&modelData, data);

    /* The hypervisor's version of the CPU model (hvModel) may contain
     * additional features which may be currently unavailable. Such features
     * block usage of the CPU model and we need to explicitly disable them.
     */
    if (hvModel && hvModel->blockers) {
        char **blocker;
        virCPUx86FeaturePtr feature;

        for (blocker = hvModel->blockers; *blocker; blocker++) {
            if ((feature = x86FeatureFind(map, *blocker)) &&
                !x86DataIsSubset(&copy, &feature->data))
                x86DataAdd(&modelData, &feature->data);
        }
    }

    /* because feature policy is ignored for host CPU */
    cpu->type = VIR_CPU_TYPE_GUEST;

    if (x86DataToCPUFeatures(cpu, VIR_CPU_FEATURE_REQUIRE, &copy, map) ||
        x86DataToCPUFeatures(cpu, VIR_CPU_FEATURE_DISABLE, &modelData, map))
        goto error;

 cleanup:
    virCPUx86DataClear(&modelData);
    virCPUx86DataClear(&copy);
    return cpu;

 error:
    virCPUDefFree(cpu);
    cpu = NULL;
    goto cleanup;
}
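
/*
 * In other words, x86DataToCPU expresses the raw data relative to the chosen
 * model: bits the data has beyond the model become "require" features, bits
 * the model (or a hypervisor blocker) has beyond the data become "disable"
 * features, and the matched vendor bits turn into cpu->vendor.
 */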


static void
x86VendorFree(virCPUx86VendorPtr vendor)
{
    if (!vendor)
        return;

    VIR_FREE(vendor->name);
    VIR_FREE(vendor);
}


static virCPUx86VendorPtr
x86VendorFind(virCPUx86MapPtr map,
              const char *name)
{
    size_t i;

    for (i = 0; i < map->nvendors; i++) {
        if (STREQ(map->vendors[i]->name, name))
            return map->vendors[i];
    }

    return NULL;
}


static virCPUx86VendorPtr
x86VendorParse(xmlXPathContextPtr ctxt,
               virCPUx86MapPtr map)
{
    virCPUx86VendorPtr vendor = NULL;
    char *string = NULL;

    if (VIR_ALLOC(vendor) < 0)
        goto error;

    vendor->name = virXPathString("string(@name)", ctxt);
    if (!vendor->name) {
        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                       _("Missing CPU vendor name"));
        goto error;
    }

    if (x86VendorFind(map, vendor->name)) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       _("CPU vendor %s already defined"), vendor->name);
        goto error;
    }

    string = virXPathString("string(@string)", ctxt);
    if (!string) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       _("Missing vendor string for CPU vendor %s"),
                       vendor->name);
        goto error;
    }

    if (virCPUx86VendorToCPUID(string, &vendor->cpuid) < 0)
        goto error;

 cleanup:
    VIR_FREE(string);
    return vendor;

 error:
    x86VendorFree(vendor);
    vendor = NULL;
    goto cleanup;
}
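
/*
 * The element parsed here is presumably a vendor entry from the CPU map XML,
 * along the lines of:
 *
 *     <vendor name='Intel' string='GenuineIntel'/>
 */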


static int
x86VendorsLoad(virCPUx86MapPtr map,
               xmlXPathContextPtr ctxt,
               xmlNodePtr *nodes,
               int n)
{
    virCPUx86VendorPtr vendor;
    size_t i;

    if (VIR_ALLOC_N(map->vendors, n) < 0)
        return -1;

    for (i = 0; i < n; i++) {
        ctxt->node = nodes[i];
        if (!(vendor = x86VendorParse(ctxt, map)))
            return -1;
        map->vendors[map->nvendors++] = vendor;
    }

    return 0;
}


static virCPUx86FeaturePtr
x86FeatureNew(void)
{
    virCPUx86FeaturePtr feature;

    if (VIR_ALLOC(feature) < 0)
        return NULL;

    return feature;
}


static void
x86FeatureFree(virCPUx86FeaturePtr feature)
{
    if (!feature)
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
return;
|
|
|
|
|
|
|
|
VIR_FREE(feature->name);
|
2016-06-07 07:38:53 +00:00
|
|
|
virCPUx86DataClear(&feature->data);
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
VIR_FREE(feature);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
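/*
 * Returns 1 if the feature called @name is present in @data, 0 if it
 * is not, and -1 (with an error reported) if the name is not known to
 * the CPU map at all.
 */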
static int
x86FeatureInData(const char *name,
                 const virCPUx86Data *data,
                 virCPUx86MapPtr map)
{
    virCPUx86FeaturePtr feature;

    if (!(feature = x86FeatureFind(map, name)) &&
        !(feature = x86FeatureFindInternal(name))) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       _("unknown CPU feature %s"), name);
        return -1;
    }

    if (x86DataIsSubset(data, &feature->data))
        return 1;
    else
        return 0;
}


static bool
x86FeatureIsMigratable(const char *name,
                       void *cpu_map)
{
    virCPUx86MapPtr map = cpu_map;
    size_t i;

    for (i = 0; i < map->nblockers; i++) {
        if (STREQ(name, map->migrate_blockers[i]->name))
            return false;
    }

    return true;
}


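/*
 * Builds a @separator-joined string of all feature names from the CPU
 * map whose CPUID bits are fully present in @data. A hypothetical
 * usage sketch (not taken from the callers in this file) would be:
 *
 *   char *names = x86FeatureNames(map, ", ", &cpuData->data);
 *
 * The returned string is allocated and must be freed by the caller.
 */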
static char *
x86FeatureNames(virCPUx86MapPtr map,
                const char *separator,
                virCPUx86Data *data)
{
    virBuffer ret = VIR_BUFFER_INITIALIZER;
    bool first = true;
    size_t i;

    virBufferAdd(&ret, "", 0);

    for (i = 0; i < map->nfeatures; i++) {
        virCPUx86FeaturePtr feature = map->features[i];
        if (x86DataIsSubset(data, &feature->data)) {
            if (!first)
                virBufferAdd(&ret, separator, -1);
            else
                first = false;

            virBufferAdd(&ret, feature->name, -1);
        }
    }

    return virBufferContentAndReset(&ret);
}


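/*
 * Parses a single <cpuid> element into @cpuid. All attributes are
 * hexadecimal; eax_in is mandatory while the remaining registers
 * default to zero when omitted. An illustrative element (not taken
 * from the real map) would be:
 *
 *   <cpuid eax_in='0x07' ecx_in='0x00' ebx='0x00000008'/>
 */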
static int
x86ParseCPUID(xmlXPathContextPtr ctxt,
              virCPUx86CPUID *cpuid)
{
    unsigned long eax_in, ecx_in;
    unsigned long eax, ebx, ecx, edx;
    int ret_eax_in, ret_ecx_in, ret_eax, ret_ebx, ret_ecx, ret_edx;

    memset(cpuid, 0, sizeof(*cpuid));

    eax_in = ecx_in = 0;
    eax = ebx = ecx = edx = 0;
    ret_eax_in = virXPathULongHex("string(@eax_in)", ctxt, &eax_in);
    ret_ecx_in = virXPathULongHex("string(@ecx_in)", ctxt, &ecx_in);
    ret_eax = virXPathULongHex("string(@eax)", ctxt, &eax);
    ret_ebx = virXPathULongHex("string(@ebx)", ctxt, &ebx);
    ret_ecx = virXPathULongHex("string(@ecx)", ctxt, &ecx);
    ret_edx = virXPathULongHex("string(@edx)", ctxt, &edx);

    if (ret_eax_in < 0 || ret_ecx_in == -2 ||
        ret_eax == -2 || ret_ebx == -2 || ret_ecx == -2 || ret_edx == -2)
        return -1;

    cpuid->eax_in = eax_in;
    cpuid->ecx_in = ecx_in;
    cpuid->eax = eax;
    cpuid->ebx = ebx;
    cpuid->ecx = ecx;
    cpuid->edx = edx;

    return 0;
}


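/*
 * Parses one <feature> element of the CPU map. An illustrative entry
 * (the authoritative definitions live in the cpu_map XML) looks like:
 *
 *   <feature name='vmx'>
 *     <cpuid eax_in='0x01' ecx='0x00000020'/>
 *   </feature>
 *
 * Features that must not be exposed to migratable guests carry
 * migratable='no' on the <feature> element.
 */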
static virCPUx86FeaturePtr
x86FeatureParse(xmlXPathContextPtr ctxt,
                virCPUx86MapPtr map)
{
    xmlNodePtr *nodes = NULL;
    virCPUx86FeaturePtr feature;
    virCPUx86CPUID cpuid;
    size_t i;
    int n;
    char *str = NULL;

    if (!(feature = x86FeatureNew()))
        goto error;

    feature->migratable = true;
    feature->name = virXPathString("string(@name)", ctxt);
    if (!feature->name) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       "%s", _("Missing CPU feature name"));
        goto error;
    }

    if (x86FeatureFind(map, feature->name)) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       _("CPU feature %s already defined"), feature->name);
        goto error;
    }

    str = virXPathString("string(@migratable)", ctxt);
    if (STREQ_NULLABLE(str, "no"))
        feature->migratable = false;

    n = virXPathNodeSet("./cpuid", ctxt, &nodes);
    if (n < 0)
        goto error;

    for (i = 0; i < n; i++) {
        ctxt->node = nodes[i];
        if (x86ParseCPUID(ctxt, &cpuid) < 0) {
            virReportError(VIR_ERR_INTERNAL_ERROR,
                           _("Invalid cpuid[%zu] in %s feature"),
                           i, feature->name);
            goto error;
        }
        if (virCPUx86DataAddCPUIDInt(&feature->data, &cpuid))
            goto error;
    }

 cleanup:
    VIR_FREE(nodes);
    VIR_FREE(str);
    return feature;

 error:
    x86FeatureFree(feature);
    feature = NULL;
    goto cleanup;
}


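/*
 * Loads all <feature> elements into @map. Features parsed with
 * migratable='no' are additionally collected in map->migrate_blockers,
 * which is what x86FeatureIsMigratable() consults above.
 */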
static int
x86FeaturesLoad(virCPUx86MapPtr map,
                xmlXPathContextPtr ctxt,
                xmlNodePtr *nodes,
                int n)
{
    virCPUx86FeaturePtr feature;
    size_t i;

    if (VIR_ALLOC_N(map->features, n) < 0)
        return -1;

    for (i = 0; i < n; i++) {
        ctxt->node = nodes[i];
        if (!(feature = x86FeatureParse(ctxt, map)))
            return -1;
        map->features[map->nfeatures++] = feature;

        if (!feature->migratable &&
            VIR_APPEND_ELEMENT(map->migrate_blockers,
                               map->nblockers,
                               feature) < 0)
            return -1;
    }

    return 0;
}


static virCPUx86ModelPtr
x86ModelNew(void)
{
    virCPUx86ModelPtr model;

    if (VIR_ALLOC(model) < 0)
        return NULL;

    return model;
}


static void
x86ModelFree(virCPUx86ModelPtr model)
{
    if (!model)
        return;

    VIR_FREE(model->name);
    virCPUx86DataClear(&model->data);
    VIR_FREE(model);
}


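/*
 * Returns a newly allocated copy of @model: the name and CPUID data
 * are duplicated, while the vendor pointer and signature are copied
 * as-is. Returns NULL on allocation failure; the copy must be freed
 * with x86ModelFree().
 */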
static virCPUx86ModelPtr
x86ModelCopy(virCPUx86ModelPtr model)
{
    virCPUx86ModelPtr copy;

    if (VIR_ALLOC(copy) < 0 ||
        VIR_STRDUP(copy->name, model->name) < 0 ||
        x86DataCopy(&copy->data, &model->data) < 0) {
        x86ModelFree(copy);
        return NULL;
    }

    copy->vendor = model->vendor;
    copy->signature = model->signature;

    return copy;
}


static virCPUx86ModelPtr
x86ModelFind(virCPUx86MapPtr map,
             const char *name)
{
    size_t i;

    for (i = 0; i < map->nmodels; i++) {
        if (STREQ(map->models[i]->name, name))
            return map->models[i];
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
return NULL;
|
|
|
|
}
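
/*
 * Usage sketch (illustrative, not part of the original file): x86ModelFind()
 * is a plain linear search over the models loaded from the CPU map, so a
 * caller only needs the map and a model name; a NULL return means the name
 * is not present in the map. The model name "SandyBridge" below is only an
 * example:
 *
 *     virCPUx86ModelPtr m = x86ModelFind(map, "SandyBridge");
 *     if (!m)
 *         return -1;
 */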

/*
 * Computes CPU model data from a CPU definition associated with features
 * matching @policy. If @policy equals -1, the computed model will describe
 * all CPU features, i.e., it will contain:
 *
 *      features from model
 *      + required and forced features
 *      - disabled and forbidden features
 */
static virCPUx86ModelPtr
x86ModelFromCPU(const virCPUDef *cpu,
                virCPUx86MapPtr map,
                int policy)
{
    virCPUx86ModelPtr model = NULL;
    size_t i;

    /* host CPU only contains required features; requesting other features
     * just returns an empty model
     */
    if (cpu->type == VIR_CPU_TYPE_HOST &&
        policy != VIR_CPU_FEATURE_REQUIRE &&
        policy != -1)
        return x86ModelNew();

    if (cpu->model &&
        (policy == VIR_CPU_FEATURE_REQUIRE || policy == -1)) {
        if (!(model = x86ModelFind(map, cpu->model))) {
            virReportError(VIR_ERR_INTERNAL_ERROR,
                           _("Unknown CPU model %s"), cpu->model);
            return NULL;
        }

        model = x86ModelCopy(model);
    } else {
        model = x86ModelNew();
    }

    if (!model)
        return NULL;

    for (i = 0; i < cpu->nfeatures; i++) {
        virCPUx86FeaturePtr feature;
        virCPUFeaturePolicy fpol;

        if (cpu->features[i].policy == -1)
            fpol = VIR_CPU_FEATURE_REQUIRE;
        else
            fpol = cpu->features[i].policy;

        if ((policy == -1 && fpol == VIR_CPU_FEATURE_OPTIONAL) ||
            (policy != -1 && fpol != policy))
            continue;

        if (!(feature = x86FeatureFind(map, cpu->features[i].name))) {
            virReportError(VIR_ERR_INTERNAL_ERROR,
                           _("Unknown CPU feature %s"), cpu->features[i].name);
            goto error;
        }

        if (policy == -1) {
            switch (fpol) {
            case VIR_CPU_FEATURE_FORCE:
            case VIR_CPU_FEATURE_REQUIRE:
                if (x86DataAdd(&model->data, &feature->data) < 0)
                    goto error;
                break;

            case VIR_CPU_FEATURE_DISABLE:
            case VIR_CPU_FEATURE_FORBID:
                x86DataSubtract(&model->data, &feature->data);
                break;

            /* coverity[dead_error_condition] */
            case VIR_CPU_FEATURE_OPTIONAL:
            case VIR_CPU_FEATURE_LAST:
                break;
            }
        } else if (x86DataAdd(&model->data, &feature->data) < 0) {
            goto error;
        }
    }

    return model;

 error:
    x86ModelFree(model);
    return NULL;
}
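
/*
 * Usage sketch (illustrative, not part of the original file): a caller that
 * needs the raw CPUID data for the model plus every feature the guest marks
 * as required would combine the map lookup and the feature loop above like
 * this; "def" stands for an already parsed virCPUDefPtr and is an assumption
 * of this example. On success, required->data holds the CPUID bits of the
 * model extended by all features using the "require" policy:
 *
 *     virCPUx86ModelPtr required;
 *
 *     if (!(required = x86ModelFromCPU(def, map, VIR_CPU_FEATURE_REQUIRE)))
 *         return -1;
 *     ...
 *     x86ModelFree(required);
 */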

static virCPUx86CompareResult
x86ModelCompare(virCPUx86ModelPtr model1,
                virCPUx86ModelPtr model2)
{
    virCPUx86CompareResult result = EQUAL;
    virCPUx86DataIterator iter1 = virCPUx86DataIteratorInit(&model1->data);
    virCPUx86DataIterator iter2 = virCPUx86DataIteratorInit(&model2->data);
    virCPUx86CPUID *cpuid1;
    virCPUx86CPUID *cpuid2;

    while ((cpuid1 = x86DataCpuidNext(&iter1))) {
        virCPUx86CompareResult match = SUPERSET;

        if ((cpuid2 = x86DataCpuid(&model2->data, cpuid1))) {
            if (x86cpuidMatch(cpuid1, cpuid2))
                continue;
            else if (!x86cpuidMatchMasked(cpuid1, cpuid2))
                match = SUBSET;
        }

        if (result == EQUAL)
            result = match;
        else if (result != match)
            return UNRELATED;
    }

    while ((cpuid2 = x86DataCpuidNext(&iter2))) {
        virCPUx86CompareResult match = SUBSET;

        if ((cpuid1 = x86DataCpuid(&model1->data, cpuid2))) {
            if (x86cpuidMatch(cpuid2, cpuid1))
                continue;
            else if (!x86cpuidMatchMasked(cpuid2, cpuid1))
                match = SUPERSET;
        }

        if (result == EQUAL)
            result = match;
        else if (result != match)
            return UNRELATED;
    }

    return result;
}
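
/*
 * Worked example of the comparison semantics (illustrative, feature names
 * invented for this note): if model1 enables the CPUID bits for sse2 and vmx
 * while model2 enables only sse2, the first loop finds every leaf of model1
 * covering the corresponding leaf of model2 and the second loop finds model2
 * fully contained in model1, so the result is SUPERSET. EQUAL is returned
 * when both models enable exactly the same bits, SUBSET when the roles are
 * reversed, and UNRELATED when each model has bits the other lacks.
 */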

static virCPUx86ModelPtr
x86ModelParse(xmlXPathContextPtr ctxt,
              virCPUx86MapPtr map)
{
    xmlNodePtr *nodes = NULL;
    virCPUx86ModelPtr model;
    char *vendor = NULL;
    size_t i;
    int n;

    if (!(model = x86ModelNew()))
        goto error;

    model->name = virXPathString("string(@name)", ctxt);
    if (!model->name) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       "%s", _("Missing CPU model name"));
        goto error;
    }

    if (virXPathNode("./model", ctxt)) {
        virCPUx86ModelPtr ancestor;
        char *name;

        name = virXPathString("string(./model/@name)", ctxt);
        if (!name) {
            virReportError(VIR_ERR_INTERNAL_ERROR,
                           _("Missing ancestor's name in CPU model %s"),
                           model->name);
            goto error;
        }

        if (!(ancestor = x86ModelFind(map, name))) {
            virReportError(VIR_ERR_INTERNAL_ERROR,
                           _("Ancestor model %s not found for CPU model %s"),
                           name, model->name);
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
VIR_FREE(name);
|
2016-05-17 08:59:28 +00:00
|
|
|
goto error;
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
VIR_FREE(name);
|
|
|
|
|
2010-07-02 15:51:59 +00:00
|
|
|
model->vendor = ancestor->vendor;
|
2018-01-05 16:43:03 +00:00
|
|
|
model->signature = ancestor->signature;
|
2016-06-07 07:38:53 +00:00
|
|
|
if (x86DataCopy(&model->data, &ancestor->data) < 0)
|
2016-05-17 08:59:28 +00:00
|
|
|
goto error;
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
}
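
    /* An explicit <signature> element overrides any signature inherited
     * from the ancestor model above. */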
    if (virXPathBoolean("boolean(./signature)", ctxt)) {
        unsigned int sigFamily = 0;
        unsigned int sigModel = 0;
        int rc;

        rc = virXPathUInt("string(./signature/@family)", ctxt, &sigFamily);
        if (rc < 0 || sigFamily == 0) {
            virReportError(VIR_ERR_INTERNAL_ERROR,
                           _("Invalid CPU signature family in model %s"),
                           model->name);
            goto cleanup;
        }

        rc = virXPathUInt("string(./signature/@model)", ctxt, &sigModel);
        if (rc < 0 || sigModel == 0) {
            virReportError(VIR_ERR_INTERNAL_ERROR,
                           _("Invalid CPU signature model in model %s"),
                           model->name);
            goto cleanup;
        }

        model->signature = x86MakeSignature(sigFamily, sigModel, 0);
    }
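
    /* Optional <vendor> element: the referenced vendor must already be
     * defined in the map. */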
    if (virXPathBoolean("boolean(./vendor)", ctxt)) {
        vendor = virXPathString("string(./vendor/@name)", ctxt);
        if (!vendor) {
            virReportError(VIR_ERR_INTERNAL_ERROR,
                           _("Invalid vendor element in CPU model %s"),
                           model->name);
            goto error;
        }

        if (!(model->vendor = x86VendorFind(map, vendor))) {
            virReportError(VIR_ERR_INTERNAL_ERROR,
                           _("Unknown vendor %s referenced by CPU model %s"),
                           vendor, model->name);
            goto error;
        }
    }
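
    /* Merge the CPUID bits of every <feature> listed by the model into
     * model->data. */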
    n = virXPathNodeSet("./feature", ctxt, &nodes);
    if (n < 0)
        goto error;

    for (i = 0; i < n; i++) {
        virCPUx86FeaturePtr feature;
        char *name;

        if (!(name = virXMLPropString(nodes[i], "name"))) {
            virReportError(VIR_ERR_INTERNAL_ERROR,
                           _("Missing feature name for CPU model %s"), model->name);
            goto error;
        }

        if (!(feature = x86FeatureFind(map, name))) {
            virReportError(VIR_ERR_INTERNAL_ERROR,
                           _("Feature %s required by CPU model %s not found"),
                           name, model->name);
            VIR_FREE(name);
            goto error;
        }
        VIR_FREE(name);

        if (x86DataAdd(&model->data, &feature->data))
            goto error;
    }

 cleanup:
    VIR_FREE(vendor);
    VIR_FREE(nodes);
    return model;

 error:
    x86ModelFree(model);
    model = NULL;
    goto cleanup;
}
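

/* For illustration only (not taken from the real cpu_map.xml): a fragment of
 * the shape x86ModelParse() above consumes. Element and attribute names match
 * the XPath expressions in the parser; the concrete values are made up.
 *
 *   <model name='Example-v2'>
 *     <model name='Example'/>              <!-- ancestor whose data is copied -->
 *     <signature family='6' model='42'/>   <!-- overrides inherited signature -->
 *     <vendor name='Intel'/>
 *     <feature name='sse4.1'/>
 *     <feature name='aes'/>
 *   </model>
 */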
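
/* Parse all <model> elements from @nodes into @map->models.
 * Returns 0 on success, -1 on failure. */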
static int
x86ModelsLoad(virCPUx86MapPtr map,
              xmlXPathContextPtr ctxt,
              xmlNodePtr *nodes,
              int n)
{
    virCPUx86ModelPtr model;
    size_t i;

    if (VIR_ALLOC_N(map->models, n) < 0)
        return -1;

    for (i = 0; i < n; i++) {
        ctxt->node = nodes[i];
        if (!(model = x86ModelParse(ctxt, map)))
            return -1;
        map->models[map->nmodels++] = model;
    }

    return 0;
}
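

/* Free a CPU map together with all vendors, features, and models it
 * contains. Safe to call with NULL. */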
static void
x86MapFree(virCPUx86MapPtr map)
{
    size_t i;

    if (!map)
        return;

    for (i = 0; i < map->nfeatures; i++)
        x86FeatureFree(map->features[i]);
    VIR_FREE(map->features);

    for (i = 0; i < map->nmodels; i++)
        x86ModelFree(map->models[i]);
    VIR_FREE(map->models);

    for (i = 0; i < map->nvendors; i++)
        x86VendorFree(map->vendors[i]);
    VIR_FREE(map->vendors);

    /* migrate_blockers only points to the features from map->features list,
     * which were already freed above
     */
    VIR_FREE(map->migrate_blockers);

    VIR_FREE(map);
}
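

/* Callback invoked by cpuMapLoad() for each element type found in the CPU
 * map XML; dispatches to the vendor, feature, or model loader. */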
static int
x86MapLoadCallback(cpuMapElement element,
                   xmlXPathContextPtr ctxt,
                   xmlNodePtr *nodes,
                   int n,
                   void *data)
{
    virCPUx86MapPtr map = data;

    switch (element) {
    case CPU_MAP_ELEMENT_VENDOR:
        return x86VendorsLoad(map, ctxt, nodes, n);
    case CPU_MAP_ELEMENT_FEATURE:
        return x86FeaturesLoad(map, ctxt, nodes, n);
    case CPU_MAP_ELEMENT_MODEL:
        return x86ModelsLoad(map, ctxt, nodes, n);
    case CPU_MAP_ELEMENT_LAST:
        break;
    }

    return 0;
}
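

/* Allocate a new x86 CPU map and populate it from the cpu_map XML via
 * cpuMapLoad(); returns NULL on failure. */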
static virCPUx86MapPtr
virCPUx86LoadMap(void)
{
    virCPUx86MapPtr map;

    if (VIR_ALLOC(map) < 0)
        return NULL;

    if (cpuMapLoad("x86", x86MapLoadCallback, map) < 0)
        goto error;

    return map;

 error:
    x86MapFree(map);
    return NULL;
}
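

/* One-time initialization of the x86 CPU driver: load the CPU map and
 * cache the host's microcode version. */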
int
virCPUx86DriverOnceInit(void)
{
    if (!(cpuMap = virCPUx86LoadMap()))
        return -1;

    microcodeVersion = virHostCPUGetMicrocodeVersion();

    return 0;
}
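

/* Return the lazily initialized global CPU map, or NULL if driver
 * initialization failed. */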
static virCPUx86MapPtr
virCPUx86GetMap(void)
{
    if (virCPUx86DriverInitialize() < 0)
        return NULL;

    return cpuMap;
}
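

/* Serialize raw CPUID data into <cpudata> XML (the inverse of
 * virCPUx86DataParse() below). The output has roughly this shape; the
 * register values are illustrative only and each <cpuid/> is emitted on a
 * single line (wrapped here for readability):
 *
 *   <cpudata arch='x86'>
 *     <cpuid eax_in='0x00000001' ecx_in='0x00000000'
 *            eax='0x000306a9' ebx='0x00100800' ecx='0x7fbae3ff' edx='0xbfebfbff'/>
 *   </cpudata>
 */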
static char *
virCPUx86DataFormat(const virCPUData *data)
{
    virCPUx86DataIterator iter = virCPUx86DataIteratorInit(&data->data.x86);
    virCPUx86CPUID *cpuid;
    virBuffer buf = VIR_BUFFER_INITIALIZER;

    virBufferAddLit(&buf, "<cpudata arch='x86'>\n");
    while ((cpuid = x86DataCpuidNext(&iter))) {
        virBufferAsprintf(&buf,
                          "  <cpuid eax_in='0x%08x' ecx_in='0x%08x'"
                          " eax='0x%08x' ebx='0x%08x'"
                          " ecx='0x%08x' edx='0x%08x'/>\n",
                          cpuid->eax_in, cpuid->ecx_in,
                          cpuid->eax, cpuid->ebx, cpuid->ecx, cpuid->edx);
    }
    virBufferAddLit(&buf, "</cpudata>\n");

    if (virBufferCheckError(&buf) < 0)
        return NULL;

    return virBufferContentAndReset(&buf);
}
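

/* Parse <cpudata> XML (as produced by virCPUx86DataFormat() above) back
 * into a virCPUData object; returns NULL on error. */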
static virCPUDataPtr
virCPUx86DataParse(xmlXPathContextPtr ctxt)
{
    xmlNodePtr *nodes = NULL;
    virCPUDataPtr cpuData = NULL;
    virCPUx86CPUID cpuid;
    size_t i;
    int n;

    n = virXPathNodeSet("/cpudata/cpuid", ctxt, &nodes);
    if (n <= 0) {
        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                       _("no x86 CPU data found"));
        goto error;
    }

    if (!(cpuData = virCPUDataNew(VIR_ARCH_X86_64)))
        goto error;

    for (i = 0; i < n; i++) {
        ctxt->node = nodes[i];
        if (x86ParseCPUID(ctxt, &cpuid) < 0) {
            virReportError(VIR_ERR_INTERNAL_ERROR,
                           _("failed to parse cpuid[%zu]"), i);
            goto error;
        }
        if (virCPUx86DataAddCPUID(cpuData, &cpuid) < 0)
            goto error;
    }

 cleanup:
    VIR_FREE(nodes);
    return cpuData;

 error:
    virCPUx86DataFree(cpuData);
    cpuData = NULL;
    goto cleanup;
}


/* A helper macro to exit the cpu computation function without writing
 * redundant code:
 * MSG: error message
 * CPU_DEF: a virCPUx86Data pointer with flags that are conflicting
 *
 * This macro generates the error string, outputs it into the logs, and
 * sets ret to VIR_CPU_COMPARE_INCOMPATIBLE.
 */
#define virX86CpuIncompatible(MSG, CPU_DEF) \
    do { \
        char *flagsStr = NULL; \
        if (!(flagsStr = x86FeatureNames(map, ", ", (CPU_DEF)))) { \
            virReportOOMError(); \
            goto error; \
        } \
        if (message && \
            virAsprintf(message, "%s: %s", _(MSG), flagsStr) < 0) { \
            VIR_FREE(flagsStr); \
            goto error; \
        } \
        VIR_DEBUG("%s: %s", MSG, flagsStr); \
        VIR_FREE(flagsStr); \
        ret = VIR_CPU_COMPARE_INCOMPATIBLE; \
    } while (0)
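
/* Usage sketch (illustrative only; everything except the macro itself is an
 * assumption about the calling function, which must provide `map`, `message`,
 * `ret`, and an `error:` label):
 *
 *   virX86CpuIncompatible(N_("Host CPU does not provide required features"),
 *                         &required->data);
 *   goto error;
 */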


static virCPUCompareResult
x86Compute(virCPUDefPtr host,
           virCPUDefPtr cpu,
           virCPUDataPtr *guest,
           char **message)
{
    virCPUx86MapPtr map = NULL;
    virCPUx86ModelPtr host_model = NULL;
    virCPUx86ModelPtr cpu_force = NULL;
    virCPUx86ModelPtr cpu_require = NULL;
    virCPUx86ModelPtr cpu_optional = NULL;
    virCPUx86ModelPtr cpu_disable = NULL;
    virCPUx86ModelPtr cpu_forbid = NULL;
    virCPUx86ModelPtr diff = NULL;
    virCPUx86ModelPtr guest_model = NULL;
    virCPUDataPtr guestData = NULL;
    virCPUCompareResult ret;
    virCPUx86CompareResult result;
    virArch arch;
    size_t i;

    if (cpu->arch != VIR_ARCH_NONE) {
        bool found = false;

        for (i = 0; i < ARRAY_CARDINALITY(archs); i++) {
            if (archs[i] == cpu->arch) {
                found = true;
                break;
            }
        }

        if (!found) {
            VIR_DEBUG("CPU arch %s does not match host arch",
                      virArchToString(cpu->arch));
            if (message &&
                virAsprintf(message,
                            _("CPU arch %s does not match host arch"),
                            virArchToString(cpu->arch)) < 0)
                goto error;
            return VIR_CPU_COMPARE_INCOMPATIBLE;
        }
        arch = cpu->arch;
    } else {
        arch = host->arch;
    }

    if (cpu->vendor &&
        (!host->vendor || STRNEQ(cpu->vendor, host->vendor))) {
        VIR_DEBUG("host CPU vendor does not match required CPU vendor %s",
                  cpu->vendor);
        if (message &&
            virAsprintf(message,
                        _("host CPU vendor does not match required "
                          "CPU vendor %s"),
                        cpu->vendor) < 0)
            goto error;

        return VIR_CPU_COMPARE_INCOMPATIBLE;
    }

    if (!(map = virCPUx86GetMap()) ||
        !(host_model = x86ModelFromCPU(host, map, -1)) ||
        !(cpu_force = x86ModelFromCPU(cpu, map, VIR_CPU_FEATURE_FORCE)) ||
        !(cpu_require = x86ModelFromCPU(cpu, map, VIR_CPU_FEATURE_REQUIRE)) ||
        !(cpu_optional = x86ModelFromCPU(cpu, map, VIR_CPU_FEATURE_OPTIONAL)) ||
        !(cpu_disable = x86ModelFromCPU(cpu, map, VIR_CPU_FEATURE_DISABLE)) ||
        !(cpu_forbid = x86ModelFromCPU(cpu, map, VIR_CPU_FEATURE_FORBID)))
        goto error;

    x86DataIntersect(&cpu_forbid->data, &host_model->data);
    if (!x86DataIsEmpty(&cpu_forbid->data)) {
        virX86CpuIncompatible(N_("Host CPU provides forbidden features"),
                              &cpu_forbid->data);
        goto cleanup;
    }

    /* first remove features that were inherited from the CPU model and were
     * explicitly forced, disabled, or made optional
     */
    x86DataSubtract(&cpu_require->data, &cpu_force->data);
    x86DataSubtract(&cpu_require->data, &cpu_optional->data);
    x86DataSubtract(&cpu_require->data, &cpu_disable->data);
    result = x86ModelCompare(host_model, cpu_require);
    if (result == SUBSET || result == UNRELATED) {
        x86DataSubtract(&cpu_require->data, &host_model->data);
        virX86CpuIncompatible(N_("Host CPU does not provide required "
                                 "features"),
                              &cpu_require->data);
        goto cleanup;
    }

    ret = VIR_CPU_COMPARE_IDENTICAL;

    if (!(diff = x86ModelCopy(host_model)))
        goto error;

    x86DataSubtract(&diff->data, &cpu_optional->data);
    x86DataSubtract(&diff->data, &cpu_require->data);
    x86DataSubtract(&diff->data, &cpu_disable->data);
    x86DataSubtract(&diff->data, &cpu_force->data);

    if (!x86DataIsEmpty(&diff->data))
        ret = VIR_CPU_COMPARE_SUPERSET;
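
    /*
     * Worked example (feature names are illustrative only): if the host
     * model provides { sse2, aes, x2apic } while the guest CPU requires
     * { sse2 }, marks { aes } optional and forces/disables nothing, then
     * diff = host - optional - require - disable - force = { x2apic },
     * which is non-empty, so the host is a superset of the requested CPU
     * rather than identical to it.
     */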

    if (ret == VIR_CPU_COMPARE_SUPERSET
        && cpu->type == VIR_CPU_TYPE_GUEST
        && cpu->match == VIR_CPU_MATCH_STRICT) {
        virX86CpuIncompatible(N_("Host CPU does not strictly match guest CPU: "
                                 "Extra features"),
                              &diff->data);
        goto cleanup;
    }

    if (guest) {
        if (!(guest_model = x86ModelCopy(host_model)))
            goto error;

        /* host_model->vendor may be NULL when the host CPU vendor is not
         * available; skip copying the vendor CPUID in that case to avoid
         * dereferencing a NULL pointer. */
        if (cpu->vendor && host_model->vendor &&
            virCPUx86DataAddCPUIDInt(&guest_model->data,
                                     &host_model->vendor->cpuid) < 0)
            goto error;

        if (x86DataAddSignature(&guest_model->data, host_model->signature) < 0)
            goto error;

        if (cpu->type == VIR_CPU_TYPE_GUEST
            && cpu->match == VIR_CPU_MATCH_EXACT)
            x86DataSubtract(&guest_model->data, &diff->data);

        if (x86DataAdd(&guest_model->data, &cpu_force->data))
            goto error;

        x86DataSubtract(&guest_model->data, &cpu_disable->data);

        if (!(guestData = virCPUDataNew(arch)) ||
            x86DataCopy(&guestData->data.x86, &guest_model->data) < 0)
            goto error;

        *guest = guestData;
    }

 cleanup:
    x86ModelFree(host_model);
    x86ModelFree(diff);
    x86ModelFree(cpu_force);
    x86ModelFree(cpu_require);
    x86ModelFree(cpu_optional);
    x86ModelFree(cpu_disable);
    x86ModelFree(cpu_forbid);
    x86ModelFree(guest_model);

    return ret;

 error:
    virCPUx86DataFree(guestData);
    ret = VIR_CPU_COMPARE_ERROR;
    goto cleanup;
}
#undef virX86CpuIncompatible
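
/*
 * As implemented above, x86Compute() returns VIR_CPU_COMPARE_ERROR on
 * internal failure, VIR_CPU_COMPARE_INCOMPATIBLE when the architecture,
 * vendor, forbidden-feature, required-feature, or strict-match checks
 * fail, VIR_CPU_COMPARE_IDENTICAL when the host provides exactly the
 * requested data, and VIR_CPU_COMPARE_SUPERSET when it provides more.
 * When 'guest' is non-NULL and the CPUs are compatible, *guest receives
 * the raw CPUID data computed for the guest CPU. virCPUx86Compare()
 * below is presumably the driver entry point wrapping this computation.
 */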


static virCPUCompareResult
virCPUx86Compare(virCPUDefPtr host,
                 virCPUDefPtr cpu,
                 bool failIncompatible)
{
    virCPUCompareResult ret = VIR_CPU_COMPARE_ERROR;
    virCPUx86MapPtr map;
    virCPUx86ModelPtr model = NULL;
    char *message = NULL;

    if (!host || !host->model) {
        if (failIncompatible) {
            virReportError(VIR_ERR_CPU_INCOMPATIBLE, "%s",
                           _("unknown host CPU"));
        } else {
            VIR_WARN("unknown host CPU");
            ret = VIR_CPU_COMPARE_INCOMPATIBLE;
        }
        goto cleanup;
    }

    ret = x86Compute(host, cpu, NULL, &message);

    if (ret == VIR_CPU_COMPARE_INCOMPATIBLE) {
        bool noTSX = false;

        if (STREQ_NULLABLE(cpu->model, "Haswell") ||
            STREQ_NULLABLE(cpu->model, "Broadwell")) {
            if (!(map = virCPUx86GetMap()))
                goto cleanup;

            if (!(model = x86ModelFromCPU(cpu, map, -1)))
                goto cleanup;

            noTSX = !x86FeatureInData("hle", &model->data, map) ||
                    !x86FeatureInData("rtm", &model->data, map);
        }

        if (failIncompatible) {
            ret = VIR_CPU_COMPARE_ERROR;
            if (message) {
                if (noTSX) {
                    virReportError(VIR_ERR_CPU_INCOMPATIBLE,
                                   _("%s; try using '%s-noTSX' CPU model"),
                                   message, cpu->model);
                } else {
                    virReportError(VIR_ERR_CPU_INCOMPATIBLE, "%s", message);
                }
            } else {
                if (noTSX) {
                    virReportError(VIR_ERR_CPU_INCOMPATIBLE,
                                   _("try using '%s-noTSX' CPU model"),
                                   cpu->model);
                } else {
                    virReportError(VIR_ERR_CPU_INCOMPATIBLE, NULL);
                }
            }
        }
    }

 cleanup:
    VIR_FREE(message);
    x86ModelFree(model);
    return ret;
}
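
/*
 * Illustrative sketch, not part of the original file: with
 * failIncompatible = true an incompatible CPU is reported through
 * virReportError() and folded into VIR_CPU_COMPARE_ERROR, so a caller
 * only needs to check for the error value:
 *
 *     virCPUCompareResult res = virCPUx86Compare(host, cpu, true);
 *
 *     if (res == VIR_CPU_COMPARE_ERROR)
 *         return -1;   // error (or incompatibility) already reported
 *     // res is now VIR_CPU_COMPARE_IDENTICAL or VIR_CPU_COMPARE_SUPERSET
 */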


/*
 * Checks whether a candidate model is a better fit for the CPU data than the
 * current model.
 *
 * Returns 0 if current is better,
 *         1 if candidate is better,
 *         2 if candidate is the best one (search should stop now).
 */
static int
x86DecodeUseCandidate(virCPUx86ModelPtr current,
                      virCPUDefPtr cpuCurrent,
                      virCPUx86ModelPtr candidate,
                      virCPUDefPtr cpuCandidate,
                      uint32_t signature,
                      const char *preferred,
                      bool checkPolicy)
{
    if (checkPolicy) {
        size_t i;
        for (i = 0; i < cpuCandidate->nfeatures; i++) {
            if (cpuCandidate->features[i].policy == VIR_CPU_FEATURE_DISABLE)
                return 0;
            cpuCandidate->features[i].policy = -1;
        }
    }

    if (preferred && STREQ(cpuCandidate->model, preferred)) {
        VIR_DEBUG("%s is the preferred model", cpuCandidate->model);
        return 2;
    }

    if (!cpuCurrent) {
        VIR_DEBUG("%s is better than nothing", cpuCandidate->model);
        return 1;
    }

    /* Ideally we want to select a model with family/model equal to
     * family/model of the real CPU. Once we found such model, we only
     * consider candidates with matching family/model.
     */
    if (signature &&
        current->signature == signature &&
        candidate->signature != signature) {
        VIR_DEBUG("%s differs in signature from matching %s",
                  cpuCandidate->model, cpuCurrent->model);
        return 0;
    }

    if (cpuCurrent->nfeatures > cpuCandidate->nfeatures) {
        VIR_DEBUG("%s results in shorter feature list than %s",
                  cpuCandidate->model, cpuCurrent->model);
        return 1;
    }

    /* Prefer a candidate with matching signature even though it would
     * result in longer list of features.
     */
    if (signature &&
        candidate->signature == signature &&
        current->signature != signature) {
        VIR_DEBUG("%s provides matching signature", cpuCandidate->model);
        return 1;
    }

    VIR_DEBUG("%s does not result in shorter feature list than %s",
              cpuCandidate->model, cpuCurrent->model);
    return 0;
}
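
/*
 * Illustrative sketch, not part of the original file: the 0/1/2 return
 * value above is meant to drive a best-candidate loop such as the one in
 * x86Decode() further below, roughly:
 *
 *     if ((rc = x86DecodeUseCandidate(model, cpuModel,
 *                                     candidate, cpuCandidate,
 *                                     signature, preferred, checkPolicy))) {
 *         virCPUDefFree(cpuModel);
 *         cpuModel = cpuCandidate;      // rc == 1 or 2: candidate wins
 *         model = candidate;
 *         if (rc == 2)
 *             break;                    // best possible match, stop searching
 *     } else {
 *         virCPUDefFree(cpuCandidate);  // rc == 0: keep the current model
 *     }
 */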


/**
 * Drop broken TSX features.
 */
static void
x86DataFilterTSX(virCPUx86Data *data,
                 virCPUx86VendorPtr vendor,
                 virCPUx86MapPtr map)
{
    unsigned int family;
    unsigned int model;
    unsigned int stepping;

    if (!vendor || STRNEQ(vendor->name, "Intel"))
        return;

    x86DataToSignatureFull(data, &family, &model, &stepping);

    if (family == 6 &&
        ((model == 63 && stepping < 4) ||
         model == 60 ||
         model == 69 ||
         model == 70)) {
        virCPUx86FeaturePtr feature;

        VIR_DEBUG("Dropping broken TSX");

        if ((feature = x86FeatureFind(map, "hle")))
            x86DataSubtract(data, &feature->data);

        if ((feature = x86FeatureFind(map, "rtm")))
            x86DataSubtract(data, &feature->data);
    }
}
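
/*
 * Note (editorial assumption, not from the original file): the family 6
 * model numbers checked above (60, 63 with stepping < 4, 69 and 70)
 * appear to correspond to Haswell-era Intel CPUs whose early steppings
 * shipped with defective TSX, which is why the "hle" and "rtm" features
 * are removed from the detected host data.
 */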


static int
x86Decode(virCPUDefPtr cpu,
          const virCPUx86Data *cpuData,
          virDomainCapsCPUModelsPtr models,
          const char *preferred,
          bool migratable)
{
    int ret = -1;
    virCPUx86MapPtr map;
    virCPUx86ModelPtr candidate;
    virCPUDefPtr cpuCandidate;
    virCPUx86ModelPtr model = NULL;
    virCPUDefPtr cpuModel = NULL;
    virCPUx86Data data = VIR_CPU_X86_DATA_INIT;
    virCPUx86Data copy = VIR_CPU_X86_DATA_INIT;
    virCPUx86Data features = VIR_CPU_X86_DATA_INIT;
    virCPUx86VendorPtr vendor;
    virDomainCapsCPUModelPtr hvModel = NULL;
    uint32_t signature;
    ssize_t i;
    int rc;

    if (!cpuData || x86DataCopy(&data, cpuData) < 0)
        return -1;

    if (!(map = virCPUx86GetMap()))
        goto cleanup;

    vendor = x86DataToVendor(&data, map);
    signature = x86DataToSignature(&data);

    x86DataFilterTSX(&data, vendor, map);

    /* Walk through the CPU models in reverse order to check newest
     * models first.
     */
    for (i = map->nmodels - 1; i >= 0; i--) {
        candidate = map->models[i];
        if (models &&
            !(hvModel = virDomainCapsCPUModelsGet(models, candidate->name))) {
            if (preferred && STREQ(candidate->name, preferred)) {
                if (cpu->fallback != VIR_CPU_FALLBACK_ALLOW) {
                    virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                                   _("CPU model %s is not supported by hypervisor"),
                                   preferred);
                    goto cleanup;
                } else {
                    VIR_WARN("Preferred CPU model %s not allowed by"
                             " hypervisor; closest supported model will be"
                             " used", preferred);
                }
            } else {
                VIR_DEBUG("CPU model %s not allowed by hypervisor; ignoring",
                          candidate->name);
            }
            continue;
        }

        /* Both vendor and candidate->vendor are pointers to a single list of
         * known vendors stored in the map.
         */
        if (vendor && candidate->vendor && vendor != candidate->vendor) {
            VIR_DEBUG("CPU vendor %s of model %s differs from %s; ignoring",
                      candidate->vendor->name, candidate->name, vendor->name);
            continue;
        }

        if (!(cpuCandidate = x86DataToCPU(&data, candidate, map, hvModel)))
            goto cleanup;
        cpuCandidate->type = cpu->type;

        if ((rc = x86DecodeUseCandidate(model, cpuModel,
                                        candidate, cpuCandidate,
                                        signature, preferred,
                                        cpu->type == VIR_CPU_TYPE_HOST))) {
            virCPUDefFree(cpuModel);
            cpuModel = cpuCandidate;
            model = candidate;
            if (rc == 2)
                break;
        } else {
            virCPUDefFree(cpuCandidate);
        }
    }

    if (!cpuModel) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       "%s", _("Cannot find suitable CPU model for given data"));
        goto cleanup;
    }

    /* Remove non-migratable features if requested
     * Note: this only works as long as no CPU model contains non-migratable
     * features directly */
    if (migratable) {
        i = 0;
        while (i < cpuModel->nfeatures) {
            if (x86FeatureIsMigratable(cpuModel->features[i].name, map)) {
                i++;
            } else {
                VIR_FREE(cpuModel->features[i].name);
                VIR_DELETE_ELEMENT_INPLACE(cpuModel->features, i,
                                           cpuModel->nfeatures);
            }
        }
    }

    if (vendor && VIR_STRDUP(cpu->vendor, vendor->name) < 0)
        goto cleanup;

    VIR_STEAL_PTR(cpu->model, cpuModel->model);
    VIR_STEAL_PTR(cpu->features, cpuModel->features);
    cpu->nfeatures = cpuModel->nfeatures;
    cpuModel->nfeatures = 0;
    cpu->nfeatures_max = cpuModel->nfeatures_max;
    cpuModel->nfeatures_max = 0;

    ret = 0;

 cleanup:
    virCPUDefFree(cpuModel);
    virCPUx86DataClear(&data);
    virCPUx86DataClear(&copy);
    virCPUx86DataClear(&features);
    return ret;
}


static int
x86DecodeCPUData(virCPUDefPtr cpu,
                 const virCPUData *data,
                 virDomainCapsCPUModelsPtr models)
{
    return x86Decode(cpu, &data->data.x86, models, NULL, false);
}


static int
x86EncodePolicy(virCPUx86Data *data,
                const virCPUDef *cpu,
                virCPUx86MapPtr map,
                virCPUFeaturePolicy policy)
{
    virCPUx86ModelPtr model;

    if (!(model = x86ModelFromCPU(cpu, map, policy)))
        return -1;

    *data = model->data;
    model->data.len = 0;
    model->data.data = NULL;
    x86ModelFree(model);

    return 0;
}
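
/*
 * Illustrative sketch, not part of the original file: x86EncodePolicy()
 * moves the CPUID array out of a temporary model, so the caller owns the
 * returned data and is expected to release it with virCPUx86DataClear():
 *
 *     virCPUx86Data forced = VIR_CPU_X86_DATA_INIT;
 *
 *     if (x86EncodePolicy(&forced, cpu, map, VIR_CPU_FEATURE_FORCE) < 0)
 *         return -1;
 *     ...
 *     virCPUx86DataClear(&forced);
 */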


static int
x86Encode(virArch arch,
          const virCPUDef *cpu,
          virCPUDataPtr *forced,
          virCPUDataPtr *required,
          virCPUDataPtr *optional,
          virCPUDataPtr *disabled,
          virCPUDataPtr *forbidden,
          virCPUDataPtr *vendor)
{
    virCPUx86MapPtr map = NULL;
    virCPUDataPtr data_forced = NULL;
    virCPUDataPtr data_required = NULL;
    virCPUDataPtr data_optional = NULL;
    virCPUDataPtr data_disabled = NULL;
    virCPUDataPtr data_forbidden = NULL;
    virCPUDataPtr data_vendor = NULL;

    if (forced)
        *forced = NULL;
    if (required)
        *required = NULL;
    if (optional)
        *optional = NULL;
    if (disabled)
        *disabled = NULL;
    if (forbidden)
        *forbidden = NULL;
    if (vendor)
        *vendor = NULL;

    if (!(map = virCPUx86GetMap()))
        goto error;

    if (forced &&
        (!(data_forced = virCPUDataNew(arch)) ||
         x86EncodePolicy(&data_forced->data.x86, cpu, map,
                         VIR_CPU_FEATURE_FORCE) < 0))
        goto error;
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
|
2016-06-07 07:38:53 +00:00
|
|
|
if (required &&
|
2017-02-02 11:19:13 +00:00
|
|
|
(!(data_required = virCPUDataNew(arch)) ||
|
|
|
|
x86EncodePolicy(&data_required->data.x86, cpu, map,
|
|
|
|
VIR_CPU_FEATURE_REQUIRE) < 0))
|
2016-06-07 07:38:53 +00:00
|
|
|
goto error;
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
|
2016-06-07 07:38:53 +00:00
|
|
|
if (optional &&
|
2017-02-02 11:19:13 +00:00
|
|
|
(!(data_optional = virCPUDataNew(arch)) ||
|
|
|
|
x86EncodePolicy(&data_optional->data.x86, cpu, map,
|
|
|
|
VIR_CPU_FEATURE_OPTIONAL) < 0))
|
2016-06-07 07:38:53 +00:00
|
|
|
goto error;
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
|
2016-06-07 07:38:53 +00:00
|
|
|
if (disabled &&
|
2017-02-02 11:19:13 +00:00
|
|
|
(!(data_disabled = virCPUDataNew(arch)) ||
|
|
|
|
x86EncodePolicy(&data_disabled->data.x86, cpu, map,
|
|
|
|
VIR_CPU_FEATURE_DISABLE) < 0))
|
2016-06-07 07:38:53 +00:00
|
|
|
goto error;
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
|
2016-06-07 07:38:53 +00:00
|
|
|
if (forbidden &&
|
2017-02-02 11:19:13 +00:00
|
|
|
(!(data_forbidden = virCPUDataNew(arch)) ||
|
|
|
|
x86EncodePolicy(&data_forbidden->data.x86, cpu, map,
|
|
|
|
VIR_CPU_FEATURE_FORBID) < 0))
|
2016-06-07 07:38:53 +00:00
|
|
|
goto error;
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
|
2010-07-02 15:51:59 +00:00
|
|
|
if (vendor) {
|
2016-05-11 08:47:21 +00:00
|
|
|
virCPUx86VendorPtr v = NULL;
|
2010-07-02 15:51:59 +00:00
|
|
|
|
|
|
|
if (cpu->vendor && !(v = x86VendorFind(map, cpu->vendor))) {
|
2012-07-18 12:16:38 +00:00
|
|
|
virReportError(VIR_ERR_OPERATION_FAILED,
|
|
|
|
_("CPU vendor %s not found"), cpu->vendor);
|
2010-07-02 15:51:59 +00:00
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
|
2017-02-02 11:19:13 +00:00
|
|
|
if (!(data_vendor = virCPUDataNew(arch)))
|
2010-07-02 15:51:59 +00:00
|
|
|
goto error;
|
|
|
|
|
2017-02-02 14:52:13 +00:00
|
|
|
if (v && virCPUx86DataAddCPUID(data_vendor, &v->cpuid) < 0)
|
2017-02-02 11:19:13 +00:00
|
|
|
goto error;
|
|
|
|
}
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
|
2012-12-18 20:27:09 +00:00
|
|
|
if (forced)
|
2017-02-02 11:19:13 +00:00
|
|
|
*forced = data_forced;
|
2012-12-18 20:27:09 +00:00
|
|
|
if (required)
|
2017-02-02 11:19:13 +00:00
|
|
|
*required = data_required;
|
2012-12-18 20:27:09 +00:00
|
|
|
if (optional)
|
2017-02-02 11:19:13 +00:00
|
|
|
*optional = data_optional;
|
2012-12-18 20:27:09 +00:00
|
|
|
if (disabled)
|
2017-02-02 11:19:13 +00:00
|
|
|
*disabled = data_disabled;
|
2012-12-18 20:27:09 +00:00
|
|
|
if (forbidden)
|
2017-02-02 11:19:13 +00:00
|
|
|
*forbidden = data_forbidden;
|
2012-12-18 20:27:09 +00:00
|
|
|
if (vendor)
|
2017-02-02 11:19:13 +00:00
|
|
|
*vendor = data_vendor;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
error:
|
2017-02-02 14:37:40 +00:00
|
|
|
virCPUx86DataFree(data_forced);
|
|
|
|
virCPUx86DataFree(data_required);
|
|
|
|
virCPUx86DataFree(data_optional);
|
|
|
|
virCPUx86DataFree(data_disabled);
|
|
|
|
virCPUx86DataFree(data_forbidden);
|
|
|
|
virCPUx86DataFree(data_vendor);
|
2013-10-08 16:20:10 +00:00
|
|
|
return -1;
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
}
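
/*
 * Host CPUID probing.  The helpers below are compiled only for 32-bit and
 * 64-bit x86 hosts, where the CPUID instruction is available.
 */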

#if defined(__i386__) || defined(__x86_64__)

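/* cpuidCall:
 * Execute the CPUID instruction with eax = cpuid->eax_in and
 * ecx = cpuid->ecx_in, storing the resulting eax, ebx, ecx, and edx
 * register values back into @cpuid.
 */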
static inline void
cpuidCall(virCPUx86CPUID *cpuid)
{
# if __x86_64__
    asm("xor %%ebx, %%ebx;" /* clear the other registers as some cpuid */
        "xor %%edx, %%edx;" /* functions may use them as additional arguments */
        "cpuid;"
        : "=a" (cpuid->eax),
          "=b" (cpuid->ebx),
          "=c" (cpuid->ecx),
          "=d" (cpuid->edx)
        : "a" (cpuid->eax_in),
          "c" (cpuid->ecx_in));
# else
    /* we need to avoid direct use of ebx for CPUID output as it is used
     * for global offset table on i386 with -fPIC
     */
    asm("push %%ebx;"
        "xor %%ebx, %%ebx;" /* clear the other registers as some cpuid */
        "xor %%edx, %%edx;" /* functions may use them as additional arguments */
        "cpuid;"
        "mov %%ebx, %1;"
        "pop %%ebx;"
        : "=a" (cpuid->eax),
          "=r" (cpuid->ebx),
          "=c" (cpuid->ecx),
          "=d" (cpuid->edx)
        : "a" (cpuid->eax_in),
          "c" (cpuid->ecx_in)
        : "cc");
# endif
}
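
/* Illustrative usage sketch (not part of the original source): reading
 * leaf 0x0 to learn the highest supported standard leaf and the vendor
 * signature:
 *
 *     virCPUx86CPUID leaf0 = { .eax_in = 0x0 };
 *     cpuidCall(&leaf0);
 *
 * After the call, leaf0.eax holds the highest standard leaf, while
 * leaf0.ebx, leaf0.edx, and leaf0.ecx hold the 12-byte vendor signature.
 */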

/* Leaf 0x04: deterministic cache parameters
 *
 * Sub leaf n+1 is invalid if eax[4:0] in sub leaf n equals 0.
 */
static int
cpuidSetLeaf4(virCPUDataPtr data,
              virCPUx86CPUID *subLeaf0)
{
    virCPUx86CPUID cpuid = *subLeaf0;

    if (virCPUx86DataAddCPUID(data, subLeaf0) < 0)
        return -1;

    while (cpuid.eax & 0x1f) {
        cpuid.ecx_in++;
        cpuidCall(&cpuid);
        if (virCPUx86DataAddCPUID(data, &cpuid) < 0)
            return -1;
    }
    return 0;
}

/* Leaf 0x07: structured extended feature flags enumeration
 *
 * Sub leaf n is invalid if n > eax in sub leaf 0.
 */
static int
cpuidSetLeaf7(virCPUDataPtr data,
              virCPUx86CPUID *subLeaf0)
{
    virCPUx86CPUID cpuid = { .eax_in = 0x7 };
    uint32_t sub;

    if (virCPUx86DataAddCPUID(data, subLeaf0) < 0)
        return -1;

    for (sub = 1; sub <= subLeaf0->eax; sub++) {
        cpuid.ecx_in = sub;
        cpuidCall(&cpuid);
        if (virCPUx86DataAddCPUID(data, &cpuid) < 0)
            return -1;
    }
    return 0;
}

/* Leaf 0x0b: extended topology enumeration
 *
 * Sub leaf n is invalid if it returns 0 in ecx[15:8].
 * Sub leaf n+1 is invalid if sub leaf n is invalid.
 * Some output values do not depend on ecx, thus sub leaf 0 provides
 * meaningful data even if it was (theoretically) considered invalid.
 */
static int
cpuidSetLeafB(virCPUDataPtr data,
              virCPUx86CPUID *subLeaf0)
{
    virCPUx86CPUID cpuid = *subLeaf0;

    while (cpuid.ecx & 0xff00) {
        if (virCPUx86DataAddCPUID(data, &cpuid) < 0)
            return -1;
        cpuid.ecx_in++;
        cpuidCall(&cpuid);
    }
    return 0;
}

/* Leaf 0x0d: processor extended state enumeration
 *
 * Sub leaves 0 and 1 are valid.
 * Sub leaf n (2 <= n < 32) is invalid if eax[n] from sub leaf 0 is not set
 * and ecx[n] from sub leaf 1 is not set.
 * Sub leaf n (32 <= n < 64) is invalid if edx[n-32] from sub leaf 0 is not set
 * and edx[n-32] from sub leaf 1 is not set.
 */
static int
cpuidSetLeafD(virCPUDataPtr data,
              virCPUx86CPUID *subLeaf0)
{
    virCPUx86CPUID cpuid = { .eax_in = 0xd };
    virCPUx86CPUID sub0;
    virCPUx86CPUID sub1;
    uint32_t sub;

    if (virCPUx86DataAddCPUID(data, subLeaf0) < 0)
        return -1;

    cpuid.ecx_in = 1;
    cpuidCall(&cpuid);
    if (virCPUx86DataAddCPUID(data, &cpuid) < 0)
        return -1;

    sub0 = *subLeaf0;
    sub1 = cpuid;
    for (sub = 2; sub < 64; sub++) {
        if (sub < 32 &&
            !(sub0.eax & (1 << sub)) &&
            !(sub1.ecx & (1 << sub)))
            continue;
        if (sub >= 32 &&
            !(sub0.edx & (1 << (sub - 32))) &&
            !(sub1.edx & (1 << (sub - 32))))
            continue;

        cpuid.ecx_in = sub;
        cpuidCall(&cpuid);
        if (virCPUx86DataAddCPUID(data, &cpuid) < 0)
            return -1;
    }
    return 0;
}
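
/* Example of the rule above (architectural CPUID behaviour, given for
 * illustration only): if bit 2 of eax in sub leaf 0 is set, the AVX (YMM)
 * state component exists, so sub leaf 2 is queried and reports the size
 * and offset of that save area.
 */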

/* Leaf 0x0f: L3 cached RDT monitoring capability enumeration
 * Leaf 0x10: RDT allocation enumeration
 *
 * res reports valid resource identification (ResID) starting at bit 1.
 * Values associated with each valid ResID are reported by ResID sub leaf.
 *
 * 0x0f: Sub leaf n is valid if edx[n] (= res[ResID]) from sub leaf 0 is set.
 * 0x10: Sub leaf n is valid if ebx[n] (= res[ResID]) from sub leaf 0 is set.
 */
static int
cpuidSetLeafResID(virCPUDataPtr data,
                  virCPUx86CPUID *subLeaf0,
                  uint32_t res)
{
    virCPUx86CPUID cpuid = { .eax_in = subLeaf0->eax_in };
    uint32_t sub;

    if (virCPUx86DataAddCPUID(data, subLeaf0) < 0)
        return -1;

    for (sub = 1; sub < 32; sub++) {
        if (!(res & (1 << sub)))
            continue;
        cpuid.ecx_in = sub;
        cpuidCall(&cpuid);
        if (virCPUx86DataAddCPUID(data, &cpuid) < 0)
            return -1;
    }
    return 0;
}
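
/* Example of the ResID rule (architectural CPUID behaviour, given for
 * illustration only): for leaf 0x0f, bit 1 of edx in sub leaf 0 indicates
 * L3 cache monitoring, so sub leaf 1 is queried for its details.
 */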

/* Leaf 0x12: SGX capability enumeration
 *
 * Sub leaves 0 and 1 are supported if ebx[2] from leaf 0x7 (SGX) is set.
 * Sub leaves n >= 2 are valid as long as eax[3:0] != 0.
 */
static int
cpuidSetLeaf12(virCPUDataPtr data,
               virCPUx86CPUID *subLeaf0)
{
    virCPUx86CPUID cpuid = { .eax_in = 0x7 };
    virCPUx86CPUID *cpuid7;

    if (!(cpuid7 = x86DataCpuid(&data->data.x86, &cpuid)) ||
        !(cpuid7->ebx & (1 << 2)))
        return 0;

    if (virCPUx86DataAddCPUID(data, subLeaf0) < 0)
        return -1;

    cpuid.eax_in = 0x12;
    cpuid.ecx_in = 1;
    cpuidCall(&cpuid);
    if (virCPUx86DataAddCPUID(data, &cpuid) < 0)
        return -1;

    cpuid.ecx_in = 2;
    cpuidCall(&cpuid);
    while (cpuid.eax & 0xf) {
        if (virCPUx86DataAddCPUID(data, &cpuid) < 0)
            return -1;
        cpuid.ecx_in++;
        cpuidCall(&cpuid);
    }
    return 0;
}

/* Leaf 0x14: processor trace enumeration
 *
 * Sub leaf 0 reports the maximum supported sub leaf in eax.
 */
static int
cpuidSetLeaf14(virCPUDataPtr data,
               virCPUx86CPUID *subLeaf0)
{
    virCPUx86CPUID cpuid = { .eax_in = 0x14 };
    uint32_t sub;

    if (virCPUx86DataAddCPUID(data, subLeaf0) < 0)
        return -1;

    for (sub = 1; sub <= subLeaf0->eax; sub++) {
        cpuid.ecx_in = sub;
        cpuidCall(&cpuid);
        if (virCPUx86DataAddCPUID(data, &cpuid) < 0)
            return -1;
    }
    return 0;
}

/* Leaf 0x17: SOC Vendor
 *
 * Sub leaf 0 is valid if eax >= 3.
 * Sub leaf 0 reports the maximum supported sub leaf in eax.
 */
static int
cpuidSetLeaf17(virCPUDataPtr data,
               virCPUx86CPUID *subLeaf0)
{
    virCPUx86CPUID cpuid = { .eax_in = 0x17 };
    uint32_t sub;

    if (subLeaf0->eax < 3)
        return 0;

    if (virCPUx86DataAddCPUID(data, subLeaf0) < 0)
        return -1;

    for (sub = 1; sub <= subLeaf0->eax; sub++) {
        cpuid.ecx_in = sub;
        cpuidCall(&cpuid);
        if (virCPUx86DataAddCPUID(data, &cpuid) < 0)
            return -1;
    }
    return 0;
}
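
/* cpuidSet:
 * Query all CPUID leaves from @base up to the maximum leaf reported in
 * eax of leaf @base, storing the results in @data.  Leaves with multiple
 * valid sub leaves are expanded by the cpuidSetLeaf* helpers above.
 *
 * Illustrative usage sketch (the actual caller is outside this excerpt;
 * the two base values below, the standard and extended CPUID ranges, are
 * given as an example only):
 *
 *     if (cpuidSet(0x0, data) < 0 ||
 *         cpuidSet(0x80000000, data) < 0)
 *         goto error;
 */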
static int
cpuidSet(uint32_t base, virCPUDataPtr data)
{
    int rc;
    uint32_t max;
    uint32_t leaf;
    virCPUx86CPUID cpuid = { .eax_in = base };

    cpuidCall(&cpuid);
    max = cpuid.eax;

    for (leaf = base; leaf <= max; leaf++) {
        cpuid.eax_in = leaf;
        cpuid.ecx_in = 0;
        cpuidCall(&cpuid);

        /* Handle CPUID leaves that depend on previously queried bits or
         * which provide additional sub leaves for ecx_in > 0
         */
        if (leaf == 0x4)
            rc = cpuidSetLeaf4(data, &cpuid);
        else if (leaf == 0x7)
            rc = cpuidSetLeaf7(data, &cpuid);
        else if (leaf == 0xb)
            rc = cpuidSetLeafB(data, &cpuid);
        else if (leaf == 0xd)
            rc = cpuidSetLeafD(data, &cpuid);
        else if (leaf == 0xf)
            rc = cpuidSetLeafResID(data, &cpuid, cpuid.edx);
        else if (leaf == 0x10)
            rc = cpuidSetLeafResID(data, &cpuid, cpuid.ebx);
        else if (leaf == 0x12)
            rc = cpuidSetLeaf12(data, &cpuid);
        else if (leaf == 0x14)
            rc = cpuidSetLeaf14(data, &cpuid);
        else if (leaf == 0x17)
            rc = cpuidSetLeaf17(data, &cpuid);
        else
            rc = virCPUx86DataAddCPUID(data, &cpuid);

        if (rc < 0)
            return -1;
    }

    return 0;
|
Adds CPU selection infrastructure
Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into virCPUDefPtr using cpuDecode() function.
When implementing virConnectCompareCPU(), a hypervisor driver can just
call cpuCompareXML() function with host CPU capabilities.
For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in guest's capabilities:
virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)
Actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for i686, x86_64
architectures) or symbolic names has to be used.
Typical use by hypervisors which prefer CPUID (such as VMware and Xen):
- convert guest CPU configuration from domain's XML into a set of raw
data structures each representing one of the feature policies:
cpuEncode(conn, architecture, guest_cpu_config,
&forced_data, &required_data, &optional_data,
&disabled_data, &forbidden_data)
- create a mask or whatever the hypervisor expects to see and pass it
to the hypervisor
Typical use by hypervisors with symbolic model names (such as QEMU):
- get raw CPU data for a computed guest CPU:
cpuGuestData(conn, host_cpu, guest_cpu_config, &data)
- decode raw data into virCPUDefPtr with a possible restriction on
allowed model names:
cpuDecode(conn, guest, data, n_allowed_models, allowed_models)
- pass guest->model and guest->features to the hypervisor
* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
2009-12-18 15:02:11 +00:00
|
|
|
}
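/* Probe the host CPU via the CPUID instruction: collect data for the basic
 * and extended leaf ranges, decode it into @cpu (optionally restricted to
 * the supplied list of CPU models), and record the host's microcode
 * version. */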
static int
virCPUx86GetHost(virCPUDefPtr cpu,
                 virDomainCapsCPUModelsPtr models)
{
    virCPUDataPtr cpuData = NULL;
    int ret = -1;

    if (virCPUx86DriverInitialize() < 0)
        goto cleanup;

    if (!(cpuData = virCPUDataNew(archs[0])))
        goto cleanup;

    if (cpuidSet(CPUX86_BASIC, cpuData) < 0 ||
        cpuidSet(CPUX86_EXTENDED, cpuData) < 0)
        goto cleanup;

    ret = x86DecodeCPUData(cpu, cpuData, models);
    cpu->microcodeVersion = microcodeVersion;

 cleanup:
    virCPUx86DataFree(cpuData);
    return ret;
}
#endif

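/* Compute a baseline CPU for a set of host CPUs: intersect the CPUID data
 * of all input CPUs (optionally narrowed to an explicit feature list),
 * verify that their vendors do not conflict, and decode the intersection
 * into a guest CPU definition. Returns NULL on error. */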
static virCPUDefPtr
virCPUx86Baseline(virCPUDefPtr *cpus,
                  unsigned int ncpus,
                  virDomainCapsCPUModelsPtr models,
                  const char **features,
                  bool migratable)
{
    virCPUx86MapPtr map = NULL;
    virCPUx86ModelPtr base_model = NULL;
    virCPUDefPtr cpu = NULL;
    size_t i;
    virCPUx86VendorPtr vendor = NULL;
    virCPUx86ModelPtr model = NULL;
    bool outputVendor = true;
    const char *modelName;
    bool matchingNames = true;
    virCPUDataPtr featData = NULL;

    if (!(map = virCPUx86GetMap()))
        goto error;

    if (!(base_model = x86ModelFromCPU(cpus[0], map, -1)))
        goto error;

    if (VIR_ALLOC(cpu) < 0)
        goto error;

    cpu->type = VIR_CPU_TYPE_GUEST;
    cpu->match = VIR_CPU_MATCH_EXACT;

    if (!cpus[0]->vendor) {
        outputVendor = false;
    } else if (!(vendor = x86VendorFind(map, cpus[0]->vendor))) {
        virReportError(VIR_ERR_OPERATION_FAILED,
                       _("Unknown CPU vendor %s"), cpus[0]->vendor);
        goto error;
    }

    modelName = cpus[0]->model;
    for (i = 1; i < ncpus; i++) {
        const char *vn = NULL;

        if (matchingNames && cpus[i]->model) {
            if (!modelName) {
                modelName = cpus[i]->model;
            } else if (STRNEQ(modelName, cpus[i]->model)) {
                modelName = NULL;
                matchingNames = false;
            }
        }

        if (!(model = x86ModelFromCPU(cpus[i], map, -1)))
            goto error;

        if (cpus[i]->vendor && model->vendor &&
            STRNEQ(cpus[i]->vendor, model->vendor->name)) {
            virReportError(VIR_ERR_OPERATION_FAILED,
                           _("CPU vendor %s of model %s differs from vendor %s"),
                           model->vendor->name, model->name, cpus[i]->vendor);
            goto error;
        }

        if (cpus[i]->vendor) {
            vn = cpus[i]->vendor;
        } else {
            outputVendor = false;
            if (model->vendor)
                vn = model->vendor->name;
        }

        if (vn) {
            if (!vendor) {
                if (!(vendor = x86VendorFind(map, vn))) {
                    virReportError(VIR_ERR_OPERATION_FAILED,
                                   _("Unknown CPU vendor %s"), vn);
                    goto error;
                }
            } else if (STRNEQ(vendor->name, vn)) {
                virReportError(VIR_ERR_OPERATION_FAILED,
                               "%s", _("CPU vendors do not match"));
                goto error;
            }
        }

        x86DataIntersect(&base_model->data, &model->data);
        x86ModelFree(model);
        model = NULL;
    }

    if (features) {
        virCPUx86FeaturePtr feat;

        if (!(featData = virCPUDataNew(archs[0])))
            goto cleanup;

        for (i = 0; features[i]; i++) {
            if ((feat = x86FeatureFind(map, features[i])) &&
                x86DataAdd(&featData->data.x86, &feat->data) < 0)
                goto cleanup;
        }

        x86DataIntersect(&base_model->data, &featData->data.x86);
    }

    if (x86DataIsEmpty(&base_model->data)) {
        virReportError(VIR_ERR_OPERATION_FAILED,
                       "%s", _("CPUs are incompatible"));
        goto error;
    }

    if (vendor &&
        virCPUx86DataAddCPUIDInt(&base_model->data, &vendor->cpuid) < 0)
        goto error;

    if (x86Decode(cpu, &base_model->data, models, modelName, migratable) < 0)
        goto error;

    if (STREQ_NULLABLE(cpu->model, modelName))
        cpu->fallback = VIR_CPU_FALLBACK_FORBID;

    if (!outputVendor)
        VIR_FREE(cpu->vendor);

 cleanup:
    x86ModelFree(base_model);
    virCPUx86DataFree(featData);

    return cpu;

 error:
    x86ModelFree(model);
    virCPUDefFree(cpu);
    cpu = NULL;
    goto cleanup;
}

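/* Turn @guest into a custom-mode copy of the host CPU model, keeping the
 * guest's vendor_id and applying its explicitly configured feature
 * policies on top of the host model. */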
static int
x86UpdateHostModel(virCPUDefPtr guest,
                   const virCPUDef *host)
{
    virCPUDefPtr updated = NULL;
    size_t i;
    int ret = -1;

    if (!(updated = virCPUDefCopyWithoutModel(host)))
        goto cleanup;

    updated->type = VIR_CPU_TYPE_GUEST;
    updated->mode = VIR_CPU_MODE_CUSTOM;
    if (virCPUDefCopyModel(updated, host, true) < 0)
        goto cleanup;

    if (guest->vendor_id) {
        VIR_FREE(updated->vendor_id);
        if (VIR_STRDUP(updated->vendor_id, guest->vendor_id) < 0)
            goto cleanup;
    }

    for (i = 0; i < guest->nfeatures; i++) {
        if (virCPUDefUpdateFeature(updated,
                                   guest->features[i].name,
                                   guest->features[i].policy) < 0)
            goto cleanup;
    }

    virCPUDefStealModel(guest, updated,
                        guest->mode == VIR_CPU_MODE_CUSTOM);
    guest->mode = VIR_CPU_MODE_CUSTOM;
    guest->match = VIR_CPU_MATCH_EXACT;
    ret = 0;

 cleanup:
    virCPUDefFree(updated);
    return ret;
}

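/* Update @guest with respect to @host: optional features are resolved to
 * require/disable depending on what the host supports, and host-model or
 * minimum-match CPUs are expanded into the host CPU model. */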
static int
virCPUx86Update(virCPUDefPtr guest,
                const virCPUDef *host)
{
    virCPUx86ModelPtr model = NULL;
    virCPUx86MapPtr map;
    int ret = -1;
    size_t i;

    if (!host) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("unknown host CPU model"));
        return -1;
    }

    if (!(map = virCPUx86GetMap()))
        return -1;

    if (!(model = x86ModelFromCPU(host, map, -1)))
        goto cleanup;

    for (i = 0; i < guest->nfeatures; i++) {
        if (guest->features[i].policy == VIR_CPU_FEATURE_OPTIONAL) {
            int supported = x86FeatureInData(guest->features[i].name,
                                             &model->data, map);
            if (supported < 0)
                goto cleanup;
            else if (supported)
                guest->features[i].policy = VIR_CPU_FEATURE_REQUIRE;
            else
                guest->features[i].policy = VIR_CPU_FEATURE_DISABLE;
        }
    }

    if (guest->mode == VIR_CPU_MODE_HOST_MODEL ||
        guest->match == VIR_CPU_MATCH_MINIMUM)
        ret = x86UpdateHostModel(guest, host);
    else
        ret = 0;

 cleanup:
    x86ModelFree(model);
    return ret;
}

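/* Reflect the features the hypervisor actually enabled or disabled in a
 * running domain back into @cpu. With check='full' any deviation from the
 * requested CPU is reported as an error instead of being silently merged
 * into the definition. */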
static int
virCPUx86UpdateLive(virCPUDefPtr cpu,
                    virCPUDataPtr dataEnabled,
                    virCPUDataPtr dataDisabled)
{
    virCPUx86MapPtr map;
    virCPUx86ModelPtr model = NULL;
    virCPUx86Data enabled = VIR_CPU_X86_DATA_INIT;
    virCPUx86Data disabled = VIR_CPU_X86_DATA_INIT;
    virBuffer bufAdded = VIR_BUFFER_INITIALIZER;
    virBuffer bufRemoved = VIR_BUFFER_INITIALIZER;
    char *added = NULL;
    char *removed = NULL;
    size_t i;
    int ret = -1;

    if (!(map = virCPUx86GetMap()))
        return -1;

    if (!(model = x86ModelFromCPU(cpu, map, -1)))
        goto cleanup;

    if (dataEnabled &&
        x86DataCopy(&enabled, &dataEnabled->data.x86) < 0)
        goto cleanup;

    if (dataDisabled &&
        x86DataCopy(&disabled, &dataDisabled->data.x86) < 0)
        goto cleanup;

    for (i = 0; i < map->nfeatures; i++) {
        virCPUx86FeaturePtr feature = map->features[i];

        if (x86DataIsSubset(&enabled, &feature->data) &&
            !x86DataIsSubset(&model->data, &feature->data)) {
            VIR_DEBUG("Feature '%s' enabled by the hypervisor", feature->name);
            if (cpu->check == VIR_CPU_CHECK_FULL)
                virBufferAsprintf(&bufAdded, "%s,", feature->name);
            else if (virCPUDefUpdateFeature(cpu, feature->name,
                                            VIR_CPU_FEATURE_REQUIRE) < 0)
                goto cleanup;
        }

        if (x86DataIsSubset(&disabled, &feature->data) ||
            (x86DataIsSubset(&model->data, &feature->data) &&
             !x86DataIsSubset(&enabled, &feature->data))) {
            VIR_DEBUG("Feature '%s' disabled by the hypervisor", feature->name);
            if (cpu->check == VIR_CPU_CHECK_FULL)
                virBufferAsprintf(&bufRemoved, "%s,", feature->name);
            else if (virCPUDefUpdateFeature(cpu, feature->name,
                                            VIR_CPU_FEATURE_DISABLE) < 0)
                goto cleanup;
        }
    }

    virBufferTrim(&bufAdded, ",", -1);
    virBufferTrim(&bufRemoved, ",", -1);

    if (virBufferCheckError(&bufAdded) < 0 ||
        virBufferCheckError(&bufRemoved) < 0)
        goto cleanup;

    added = virBufferContentAndReset(&bufAdded);
    removed = virBufferContentAndReset(&bufRemoved);

    if (added || removed) {
        if (added && removed)
            virReportError(VIR_ERR_OPERATION_FAILED,
                           _("guest CPU doesn't match specification: "
                             "extra features: %s, missing features: %s"),
                           added, removed);
        else if (added)
            virReportError(VIR_ERR_OPERATION_FAILED,
                           _("guest CPU doesn't match specification: "
                             "extra features: %s"),
                           added);
        else
            virReportError(VIR_ERR_OPERATION_FAILED,
                           _("guest CPU doesn't match specification: "
                             "missing features: %s"),
                           removed);
        goto cleanup;
    }

    if (cpu->check == VIR_CPU_CHECK_FULL &&
        !x86DataIsEmpty(&disabled)) {
        virReportError(VIR_ERR_OPERATION_FAILED, "%s",
                       _("guest CPU doesn't match specification"));
        goto cleanup;
    }

    ret = 0;

 cleanup:
    x86ModelFree(model);
    virCPUx86DataClear(&enabled);
    virCPUx86DataClear(&disabled);
    VIR_FREE(added);
    VIR_FREE(removed);
    virBufferFreeAndReset(&bufAdded);
    virBufferFreeAndReset(&bufRemoved);
    return ret;
}

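/* Check whether @cpu (its model plus explicitly listed features) contains
 * the feature called @name; returns 1 when present, 0 when not, -1 on
 * error. */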
static int
virCPUx86CheckFeature(const virCPUDef *cpu,
                      const char *name)
{
    int ret = -1;
    virCPUx86MapPtr map;
    virCPUx86ModelPtr model = NULL;

    if (!(map = virCPUx86GetMap()))
        return -1;

    if (!(model = x86ModelFromCPU(cpu, map, -1)))
        goto cleanup;

    ret = x86FeatureInData(name, &model->data, map);

 cleanup:
    x86ModelFree(model);
    return ret;
}

static int
virCPUx86DataCheckFeature(const virCPUData *data,
                          const char *name)
{
    virCPUx86MapPtr map;

    if (!(map = virCPUx86GetMap()))
        return -1;

    return x86FeatureInData(name, &data->data.x86, map);
}

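/* Return the number of CPU models in the CPU map; if @models is non-NULL,
 * fill it with a NULL-terminated list of the model names. */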
static int
virCPUx86GetModels(char ***models)
{
    virCPUx86MapPtr map;
    size_t i;

    if (!(map = virCPUx86GetMap()))
        return -1;

    if (models) {
        if (VIR_ALLOC_N(*models, map->nmodels + 1) < 0)
            goto error;

        for (i = 0; i < map->nmodels; i++) {
            if (VIR_STRDUP((*models)[i], map->models[i]->name) < 0)
                goto error;
        }
    }

    return map->nmodels;

 error:
    if (models) {
        virStringListFree(*models);
        *models = NULL;
    }
    return -1;
}

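/* Translate @cpu into one of the models from the supplied list: convert
 * the CPU into raw CPUID data (including vendor and signature) and decode
 * it again restricted to the allowed models, re-applying the original
 * feature policies. */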
static int
virCPUx86Translate(virCPUDefPtr cpu,
                   virDomainCapsCPUModelsPtr models)
{
    virCPUDefPtr translated = NULL;
    virCPUx86MapPtr map;
    virCPUx86ModelPtr model = NULL;
    size_t i;
    int ret = -1;

    if (!(map = virCPUx86GetMap()))
        goto cleanup;

    if (!(model = x86ModelFromCPU(cpu, map, -1)))
        goto cleanup;

    if (model->vendor &&
        virCPUx86DataAddCPUIDInt(&model->data, &model->vendor->cpuid) < 0)
        goto cleanup;

    if (x86DataAddSignature(&model->data, model->signature) < 0)
        goto cleanup;

    if (!(translated = virCPUDefCopyWithoutModel(cpu)))
        goto cleanup;

    if (x86Decode(translated, &model->data, models, NULL, false) < 0)
        goto cleanup;

    for (i = 0; i < cpu->nfeatures; i++) {
        virCPUFeatureDefPtr f = cpu->features + i;
        if (virCPUDefUpdateFeature(translated, f->name, f->policy) < 0)
            goto cleanup;
    }

    virCPUDefStealModel(cpu, translated, true);
    ret = 0;

 cleanup:
    virCPUDefFree(translated);
    x86ModelFree(model);
    return ret;
}

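/* Expand the CPU model in @cpu into an explicit feature list: every
 * feature included in the model is listed (with 'require' policy for
 * guest CPUs) and then overridden by the features already present in the
 * definition. */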
static int
virCPUx86ExpandFeatures(virCPUDefPtr cpu)
{
    virCPUx86MapPtr map;
    virCPUDefPtr expanded = NULL;
    virCPUx86ModelPtr model = NULL;
    bool host = cpu->type == VIR_CPU_TYPE_HOST;
    size_t i;
    int ret = -1;

    if (!(map = virCPUx86GetMap()))
        goto cleanup;

    if (!(expanded = virCPUDefCopy(cpu)))
        goto cleanup;

    virCPUDefFreeFeatures(expanded);

    if (!(model = x86ModelFind(map, cpu->model))) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       _("unknown CPU model %s"), cpu->model);
        goto cleanup;
    }

    if (!(model = x86ModelCopy(model)) ||
        x86DataToCPUFeatures(expanded, host ? -1 : VIR_CPU_FEATURE_REQUIRE,
                             &model->data, map) < 0)
        goto cleanup;

    for (i = 0; i < cpu->nfeatures; i++) {
        virCPUFeatureDefPtr f = cpu->features + i;

        if (!host &&
            f->policy != VIR_CPU_FEATURE_REQUIRE &&
            f->policy != VIR_CPU_FEATURE_DISABLE)
            continue;

        if (virCPUDefUpdateFeature(expanded, f->name, f->policy) < 0)
            goto cleanup;
    }

    virCPUDefFreeModel(cpu);

    ret = virCPUDefCopyModel(cpu, expanded, false);

 cleanup:
    virCPUDefFree(expanded);
    x86ModelFree(model);
    return ret;
}

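/* Create a copy of @cpu that keeps only migratable features, using
 * x86FeatureIsMigratable as the filter when copying the model. */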
static virCPUDefPtr
virCPUx86CopyMigratable(virCPUDefPtr cpu)
{
    virCPUDefPtr copy;
    virCPUx86MapPtr map;

    if (!(map = virCPUx86GetMap()))
        return NULL;

    if (!(copy = virCPUDefCopyWithoutModel(cpu)))
        return NULL;

    if (virCPUDefCopyModelFilter(copy, cpu, false,
                                 x86FeatureIsMigratable, map) < 0)
        goto error;

    return copy;

 error:
    virCPUDefFree(copy);
    return NULL;
}

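/* Make sure all features listed in @cpu are known to the CPU map; report
 * an error for the first unknown feature. */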
static int
virCPUx86ValidateFeatures(virCPUDefPtr cpu)
{
    virCPUx86MapPtr map;
    size_t i;

    if (!(map = virCPUx86GetMap()))
        return -1;

    for (i = 0; i < cpu->nfeatures; i++) {
        if (!x86FeatureFind(map, cpu->features[i].name)) {
            virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                           _("unknown CPU feature: %s"),
                           cpu->features[i].name);
            return -1;
        }
    }

    return 0;
}

int
virCPUx86DataAddCPUID(virCPUDataPtr cpuData,
                      const virCPUx86CPUID *cpuid)
{
    return virCPUx86DataAddCPUIDInt(&cpuData->data.x86, cpuid);
}

int
virCPUx86DataSetSignature(virCPUDataPtr cpuData,
                          unsigned int family,
                          unsigned int model,
                          unsigned int stepping)
{
    uint32_t signature = x86MakeSignature(family, model, stepping);

    return x86DataAddSignature(&cpuData->data.x86, signature);
}

int
virCPUx86DataSetVendor(virCPUDataPtr cpuData,
                       const char *vendor)
{
    virCPUx86CPUID cpuid = { 0 };

    if (virCPUx86VendorToCPUID(vendor, &cpuid) < 0)
        return -1;

    return virCPUx86DataAddCPUID(cpuData, &cpuid);
}

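/* Add the CPUID bits of the named feature to @cpuData; features not found
 * in the CPU map or the internal feature list are silently ignored. */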
int
virCPUx86DataAddFeature(virCPUDataPtr cpuData,
                        const char *name)
{
    virCPUx86FeaturePtr feature;
    virCPUx86MapPtr map;

    if (!(map = virCPUx86GetMap()))
        return -1;

    /* ignore unknown features */
    if (!(feature = x86FeatureFind(map, name)) &&
        !(feature = x86FeatureFindInternal(name)))
        return 0;

    if (x86DataAdd(&cpuData->data.x86, &feature->data) < 0)
        return -1;

    return 0;
}

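/* Entry points of the x86 CPU driver; getHost is only compiled in on
 * i686/x86_64 hosts, where the CPUID instruction can be executed
 * directly. */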
struct cpuArchDriver cpuDriverX86 = {
    .name = "x86",
    .arch = archs,
    .narch = ARRAY_CARDINALITY(archs),
    .compare = virCPUx86Compare,
    .decode = x86DecodeCPUData,
    .encode = x86Encode,
    .dataFree = virCPUx86DataFree,
#if defined(__i386__) || defined(__x86_64__)
    .getHost = virCPUx86GetHost,
#endif
    .baseline = virCPUx86Baseline,
    .update = virCPUx86Update,
    .updateLive = virCPUx86UpdateLive,
    .checkFeature = virCPUx86CheckFeature,
    .dataCheckFeature = virCPUx86DataCheckFeature,
    .dataFormat = virCPUx86DataFormat,
    .dataParse = virCPUx86DataParse,
    .getModels = virCPUx86GetModels,
    .translate = virCPUx86Translate,
    .expandFeatures = virCPUx86ExpandFeatures,
    .copyMigratable = virCPUx86CopyMigratable,
    .validateFeatures = virCPUx86ValidateFeatures,
};