Adds CPU selection infrastructure

Each driver supporting CPU selection must fill in host CPU capabilities.
When filling them, drivers for hypervisors running on the same node as
libvirtd can use cpuNodeData() to obtain raw CPU data. Other drivers,
such as VMware, need to implement their own way of getting such data.
Raw data can be decoded into a virCPUDefPtr using the cpuDecode()
function.

When implementing virConnectCompareCPU(), a hypervisor driver can just
call the cpuCompareXML() function with the host CPU capabilities.

For each guest for which a driver supports selecting CPU models, it must
set the appropriate feature in the guest's capabilities:

    virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0)

The actions needed when a domain is being created depend on whether the
hypervisor understands raw CPU data (currently CPUID for the i686 and
x86_64 architectures) or symbolic model names have to be used.

Typical use by hypervisors which prefer CPUID (such as VMware and Xen):

- convert the guest CPU configuration from the domain's XML into a set
  of raw data structures, each representing one of the feature policies:

      cpuEncode(conn, architecture, guest_cpu_config,
                &forced_data, &required_data, &optional_data,
                &disabled_data, &forbidden_data)

- create a mask or whatever the hypervisor expects to see and pass it
  to the hypervisor

Typical use by hypervisors with symbolic model names (such as QEMU), as
illustrated by the sketch after this message:

- get raw CPU data for a computed guest CPU:

      cpuGuestData(conn, host_cpu, guest_cpu_config, &data)

- decode the raw data into a virCPUDefPtr with a possible restriction on
  the allowed model names:

      cpuDecode(conn, guest, data, n_allowed_models, allowed_models)

- pass guest->model and guest->features to the hypervisor

* src/cpu/cpu.c src/cpu/cpu.h src/cpu/cpu_generic.c
  src/cpu/cpu_generic.h src/cpu/cpu_map.c src/cpu/cpu_map.h
  src/cpu/cpu_x86.c src/cpu/cpu_x86.h src/cpu/cpu_x86_data.h
* configure.in: check for the CPUID instruction
* src/Makefile.am: glue the new files in
* src/libvirt_private.syms: add new private symbols
* po/POTFILES.in: add new cpu files containing translatable strings
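As a rough illustration of the model-name (QEMU-style) flow described in
the message above, a driver-side helper could look roughly like the
sketch below. Only the cpuGuestData() and cpuDecode() call shapes come
from the message itself; the helper name, the union cpuData type, the
parameter list, and the return-value checks are assumptions made for
illustration.

    /* Sketch only: names, types, and error handling are illustrative,
     * not the exact libvirt signatures. */
    static int
    exampleComputeGuestCPU(virConnectPtr conn,
                           virCPUDefPtr host_cpu,          /* host CPU from capabilities */
                           virCPUDefPtr guest_cpu_config,  /* <cpu> element from domain XML */
                           virCPUDefPtr guest,             /* result to fill in */
                           unsigned int n_allowed_models,
                           const char **allowed_models)
    {
        union cpuData *data = NULL;

        /* compute raw CPU data for the guest against the host CPU */
        if (cpuGuestData(conn, host_cpu, guest_cpu_config, &data) < 0)
            return -1;

        /* decode the raw data into a model + feature list, optionally
         * restricted to models the hypervisor actually supports;
         * a real driver would also free @data when done with it */
        if (cpuDecode(conn, guest, data, n_allowed_models, allowed_models) < 0)
            return -1;

        /* guest->model and guest->features can now be passed on to the
         * hypervisor */
        return 0;
    }
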
/*
 * cpu_map.c: internal functions for handling CPU mapping configuration
 *
 * Copyright (C) 2009-2018 Red Hat, Inc.
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library. If not, see
 * <http://www.gnu.org/licenses/>.
 */

#include <config.h>

#include "viralloc.h"
#include "virfile.h"
#include "cpu.h"
#include "cpu_map.h"
#include "configmake.h"
#include "virstring.h"
#include "virlog.h"

#define VIR_FROM_THIS VIR_FROM_CPU

VIR_LOG_INIT("cpu.cpu_map");

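/*
 * loadData:
 *
 * Runs @callback for every child element matching @element under the
 * current XPath context node, with ctxt->node pointing at the element
 * and @name set to its "name" attribute. An error is reported if such
 * elements exist but no callback was registered for them.
 *
 * Returns 0 on success, -1 on failure.
 */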
static int
loadData(const char *mapfile,
         xmlXPathContextPtr ctxt,
         const char *element,
         cpuMapLoadCallback callback,
         void *data)
{
    int ret = -1;
    xmlNodePtr ctxt_node;
    xmlNodePtr *nodes = NULL;
    int n;
    size_t i;
    int rv;

    ctxt_node = ctxt->node;

    if ((n = virXPathNodeSet(element, ctxt, &nodes)) < 0)
        goto cleanup;

    if (n > 0 && !callback) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       _("Unexpected element '%s' in CPU map '%s'"), element, mapfile);
        goto cleanup;
    }

    for (i = 0; i < n; i++) {
        xmlNodePtr old = ctxt->node;
        char *name = virXMLPropString(nodes[i], "name");
        if (!name) {
            virReportError(VIR_ERR_INTERNAL_ERROR,
                           _("cannot find %s name in CPU map '%s'"), element, mapfile);
            goto cleanup;
        }
        VIR_DEBUG("Load %s name %s", element, name);
        ctxt->node = nodes[i];
        rv = callback(ctxt, name, data);
        ctxt->node = old;
        VIR_FREE(name);
        if (rv < 0)
            goto cleanup;
    }

    ret = 0;

 cleanup:
    ctxt->node = ctxt_node;
    VIR_FREE(nodes);

    return ret;
}

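/*
 * cpuMapLoadInclude:
 *
 * Loads a single CPU map include file (looked up via
 * virFileFindResource) and feeds its <vendor>, <feature>, and <model>
 * elements to the corresponding callbacks.
 *
 * Returns 0 on success, -1 on failure.
 */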
static int
cpuMapLoadInclude(const char *filename,
                  cpuMapLoadCallback vendorCB,
                  cpuMapLoadCallback featureCB,
                  cpuMapLoadCallback modelCB,
                  void *data)
{
    xmlDocPtr xml = NULL;
    xmlXPathContextPtr ctxt = NULL;
    int ret = -1;
    char *mapfile;

    if (!(mapfile = virFileFindResource(filename,
                                        abs_top_srcdir "/src/cpu_map",
                                        PKGDATADIR "/cpu_map")))
        return -1;

    VIR_DEBUG("Loading CPU map include from %s", mapfile);

    if (!(xml = virXMLParseFileCtxt(mapfile, &ctxt)))
        goto cleanup;

    ctxt->node = xmlDocGetRootElement(xml);

    if (loadData(mapfile, ctxt, "vendor", vendorCB, data) < 0)
        goto cleanup;

    if (loadData(mapfile, ctxt, "feature", featureCB, data) < 0)
        goto cleanup;

    if (loadData(mapfile, ctxt, "model", modelCB, data) < 0)
        goto cleanup;

    ret = 0;

 cleanup:
    xmlXPathFreeContext(ctxt);
    xmlFreeDoc(xml);
    VIR_FREE(mapfile);

    return ret;
}

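/*
 * loadIncludes:
 *
 * Processes every <include filename='...'> element under the current
 * context node by loading the referenced file with cpuMapLoadInclude().
 *
 * Returns 0 on success, -1 on failure.
 */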
static int
loadIncludes(xmlXPathContextPtr ctxt,
             cpuMapLoadCallback vendorCB,
             cpuMapLoadCallback featureCB,
             cpuMapLoadCallback modelCB,
             void *data)
{
    int ret = -1;
    xmlNodePtr ctxt_node;
    xmlNodePtr *nodes = NULL;
    int n;
    size_t i;

    ctxt_node = ctxt->node;

    n = virXPathNodeSet("include", ctxt, &nodes);
    if (n < 0)
        goto cleanup;

    for (i = 0; i < n; i++) {
        char *filename = virXMLPropString(nodes[i], "filename");
        if (!filename) {
            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                           _("Missing 'filename' in CPU map include"));
            goto cleanup;
        }
        VIR_DEBUG("Finding CPU map include '%s'", filename);
        if (cpuMapLoadInclude(filename, vendorCB, featureCB, modelCB, data) < 0) {
            VIR_FREE(filename);
            goto cleanup;
        }
        VIR_FREE(filename);
    }

    ret = 0;

 cleanup:
    ctxt->node = ctxt_node;
    VIR_FREE(nodes);

    return ret;
}

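/*
 * cpuMapLoad:
 *
 * Loads the CPU map for @arch from index.xml: the vendors, features,
 * and models defined directly under the matching <arch> element are
 * parsed first, followed by any included map files.
 *
 * Returns 0 on success, -1 on failure.
 */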
int cpuMapLoad(const char *arch,
               cpuMapLoadCallback vendorCB,
               cpuMapLoadCallback featureCB,
               cpuMapLoadCallback modelCB,
               void *data)
{
    xmlDocPtr xml = NULL;
    xmlXPathContextPtr ctxt = NULL;
    virBuffer buf = VIR_BUFFER_INITIALIZER;
    char *xpath = NULL;
    int ret = -1;
    char *mapfile;

    if (!(mapfile = virFileFindResource("index.xml",
                                        abs_top_srcdir "/src/cpu_map",
                                        PKGDATADIR "/cpu_map")))
        return -1;

    VIR_DEBUG("Loading '%s' CPU map from %s", NULLSTR(arch), mapfile);

    if (arch == NULL) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       "%s", _("undefined hardware architecture"));
        goto cleanup;
    }

    if (!(xml = virXMLParseFileCtxt(mapfile, &ctxt)))
        goto cleanup;

    virBufferAsprintf(&buf, "./arch[@name='%s']", arch);
    if (virBufferCheckError(&buf) < 0)
        goto cleanup;

    xpath = virBufferContentAndReset(&buf);

    ctxt->node = xmlDocGetRootElement(xml);

    if ((ctxt->node = virXPathNode(xpath, ctxt)) == NULL) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       _("cannot find CPU map for %s architecture"), arch);
        goto cleanup;
    }

    if (loadData(mapfile, ctxt, "vendor", vendorCB, data) < 0)
        goto cleanup;

    if (loadData(mapfile, ctxt, "feature", featureCB, data) < 0)
        goto cleanup;

    if (loadData(mapfile, ctxt, "model", modelCB, data) < 0)
        goto cleanup;

    if (loadIncludes(ctxt, vendorCB, featureCB, modelCB, data) < 0)
        goto cleanup;

    ret = 0;

 cleanup:
    xmlXPathFreeContext(ctxt);
    xmlFreeDoc(xml);
    VIR_FREE(xpath);
    VIR_FREE(mapfile);

    return ret;
}
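For completeness, a minimal sketch of how an architecture backend might
drive cpuMapLoad() follows. Only the entry point and the callback shape
(ctxt, name, data) come from the code above; the structure, the callback
implementation, and the "x86" arch name passed in are illustrative
assumptions.

    /* Hypothetical caller; everything except cpuMapLoad() and the
     * callback signature is illustrative. */
    typedef struct {
        size_t nelements;   /* stand-in for real vendor/feature/model tables */
    } exampleCPUMap;

    static int
    exampleElementParse(xmlXPathContextPtr ctxt ATTRIBUTE_UNUSED,
                        const char *name,
                        void *data)
    {
        exampleCPUMap *map = data;

        /* ctxt->node points at the <vendor>, <feature>, or <model> element
         * named @name; a real driver would parse its children here using
         * the virXPath*() helpers. */
        VIR_DEBUG("parsed CPU map element %s", name);
        map->nelements++;
        return 0;
    }

    static exampleCPUMap *
    exampleLoadMap(void)
    {
        exampleCPUMap *map;

        if (VIR_ALLOC(map) < 0)
            return NULL;

        /* vendors, features, and models are loaded in that order, first
         * from the matching <arch> element in index.xml and then from any
         * included map files */
        if (cpuMapLoad("x86", exampleElementParse, exampleElementParse,
                       exampleElementParse, map) < 0) {
            VIR_FREE(map);
            return NULL;
        }

        return map;
    }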