This patch extends the RNG schema to allow more information in the
topology output of the NUMA section in the capabilities XML.
The added elements are designed to provide management applications
with more information about the placement and topology of the
processors in the system.
A demonstration of the XML supported after this patch:
<capabilities>
  <host>
    <topology>
      <cells num='3'>
        <cell id='0'>
          <cpus num='4'> <!-- this is a node with Hyperthreading -->
            <cpu id='0' socket_id='0' core_id='0' siblings='0-1'/>
            <cpu id='1' socket_id='0' core_id='0' siblings='0-1'/>
            <cpu id='2' socket_id='0' core_id='1' siblings='2-3'/>
            <cpu id='3' socket_id='0' core_id='1' siblings='2-3'/>
          </cpus>
        </cell>
        <cell id='1'>
          <cpus num='4'> <!-- this is a node with modules (Bulldozer) -->
            <cpu id='4' socket_id='0' core_id='2' siblings='4-5'/>
            <cpu id='5' socket_id='0' core_id='3' siblings='4-5'/>
            <cpu id='6' socket_id='0' core_id='4' siblings='6-7'/>
            <cpu id='7' socket_id='0' core_id='5' siblings='6-7'/>
          </cpus>
        </cell>
        <cell id='2'>
          <cpus num='4'> <!-- this is a normal multi-core node -->
            <cpu id='8' socket_id='1' core_id='0' siblings='8'/>
            <cpu id='9' socket_id='1' core_id='1' siblings='9'/>
            <cpu id='10' socket_id='1' core_id='2' siblings='10'/>
            <cpu id='11' socket_id='1' core_id='3' siblings='11'/>
          </cpus>
        </cell>
      </cells>
    </topology>
  </host>
</capabilities>
The socket_id field identifies the physical socket the CPU is plugged
into. This ID may not be identical to the physical socket ID reported
by the kernel.
The core_id identifies a core within a socket; this field, too, may
not accurately reflect physical IDs.
The core_id is guaranteed to be unique within a cell and a socket, but
there may be duplicates between sockets. Only CPUs sharing a core_id
within one cell and one socket can be considered threads; CPUs sharing
a core_id in separate cells are distinct cores.
The siblings field is a list of the IDs of the CPUs that the given CPU
is a sibling of, i.e. its fellow threads. The list is in cpuset format.
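As an illustration of how a management application might consume this
(not part of the patch; the helper names are made up), the topology can
be walked with the Python standard library, expanding the cpuset string
and grouping threads by (cell, socket_id, core_id):

import xml.etree.ElementTree as ET
from collections import defaultdict

def parse_cpuset(s):
    # Expand a cpuset string such as '0-1' or '8' into a set of CPU IDs.
    ids = set()
    for part in s.split(','):
        if '-' in part:
            lo, hi = part.split('-')
            ids.update(range(int(lo), int(hi) + 1))
        else:
            ids.add(int(part))
    return ids

def threads_by_core(caps_xml):
    # CPUs sharing (cell id, socket_id, core_id) are threads of one core.
    cores = defaultdict(list)
    root = ET.fromstring(caps_xml)
    for cell in root.findall('./host/topology/cells/cell'):
        for cpu in cell.findall('./cpus/cpu'):
            key = (cell.get('id'), cpu.get('socket_id'), cpu.get('core_id'))
            cores[key].append(int(cpu.get('id')))
    return cores

On the example above, CPUs 0 and 1 land under a single key, matching
their siblings='0-1' attribute.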
This patch updates the domain and capability XML parser and formatter to
support more than one "seclabel" element for each domain and device. The
RNG schema and the related tests are also updated by this patch.
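For example, a domain could now carry one <seclabel> element per
security model (an illustrative fragment; the attribute values are
placeholders):
<seclabel type='dynamic' model='selinux' relabel='yes'/>
<seclabel type='dynamic' model='dac' relabel='yes'/>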
Signed-off-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
capability.rng: Guest features can be in any order.
nodedev.rng: Added <driver> element, <capability> phys_function and
virt_functions for PCI devices.
storagepool.rng: Owner or group ID can be -1.
schema tests: New capabilities and nodedev files; changed owner and
group to -1 in pool-dir.xml.
storage_conf: Print uid_t and gid_t as signed to storage pool XML.
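For instance, a pool's permissions element can now carry -1 for both
IDs (an illustrative fragment):
<permissions>
  <owner>-1</owner>
  <group>-1</group>
</permissions>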
The capabilities XML uses the x86-specific terms 'S3', 'S4'
and 'Hybrid-Syspend'. Switch it to use the same terminology
as the API constants and virsh options, e.g. 'suspend_mem',
'suspend_disk' and 'suspend_hybrid'.
* docs/formatcaps.html.in, docs/schemas/capability.rng,
src/conf/capabilities.c: Rename suspend constants
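With the rename in place, a host supporting all three states would be
advertised along these lines (an illustrative fragment):
<power_management>
  <suspend_mem/>
  <suspend_disk/>
  <suspend_hybrid/>
</power_management>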
Some systems support a feature known as 'Hybrid-Suspend', apart from the
usual system-wide sleep states such as Suspend-to-RAM (S3) or Suspend-to-Disk
(S4). Add the functionality to discover this power management feature and
export it in the capabilities XML under the <power_management> tag.
This patch exports KVM Host Power Management capabilities as XML so that
higher-level systems management software can make use of these features
available in the host.
The script "pm-is-supported" (from pm-utils package) is run to discover if
Suspend-to-RAM (S3) or Suspend-to-Disk (S4) is supported by the host.
If either of them are supported, then a new tag "<power_management>" is
introduced in the XML under the <host> tag.
However, if the query for power management features succeeds but the
host does not support any such feature, the XML will contain an empty
<power_management/> tag. If the PM query itself fails, the XML will
not contain any "power_management" tag.
To use this, new APIs could be implemented in libvirt to exploit power
management features such as S3/S4.
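A minimal sketch of the discovery logic described above (illustrative
Python rather than the patch's C code; it assumes pm-utils' --suspend
and --hibernate flags, whose exit status 0 indicates support):

import subprocess

def pm_is_supported(flag):
    # Exit status 0 means the sleep state is supported; if the binary
    # cannot be run at all, report the query itself as failed.
    try:
        rc = subprocess.call(['pm-is-supported', flag],
                             stdout=subprocess.DEVNULL,
                             stderr=subprocess.DEVNULL)
    except OSError:
        return None   # query failed: emit no power_management tag
    return rc == 0

s3 = pm_is_supported('--suspend')    # Suspend-to-RAM (S3)
s4 = pm_is_supported('--hibernate')  # Suspend-to-Disk (S4)
# If both are False, emit an empty <power_management/> element instead.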
Add libvirt support for the MicroBlaze architecture as a QEMU target,
based on the mips/mipsel pattern.
Signed-off-by: John Williams <john.williams@petalogix.com>
By specifying the <vendor> element in CPU requirements a guest can be
restricted to run only on CPUs from a given vendor. The host CPU vendor
is also specified in the capabilities XML.
The vendor is checked when migrating a guest but is not enforced, i.e.,
guests configured without the <vendor> element can be freely migrated.
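An illustrative domain XML fragment (the match attribute and vendor
string are placeholders):
<cpu match='exact'>
  <model>NAME</model>
  <vendor>Intel</vendor>
</cpu>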
Allow for a host UUID in the capabilities XML. Local drivers
will initialize this from the SMBIOS data. If a sanity check
shows the SMBIOS UUID is invalid, allow an override from the
libvirtd.conf configuration file.
* daemon/libvirtd.c, daemon/libvirtd.conf: Support a host_uuid
configuration option
* docs/schemas/capability.rng: Add optional host uuid field
* src/conf/capabilities.c, src/conf/capabilities.h: Include
host UUID in XML
* src/libvirt_private.syms: Export new uuid.h functions
* src/lxc/lxc_conf.c, src/qemu/qemu_driver.c,
src/uml/uml_conf.c: Set host UUID in capabilities
* src/util/uuid.c, src/util/uuid.h: Support for host UUIDs
* src/node_device/node_device_udev.c: Use the host UUID functions
* tests/confdata/libvirtd.conf, tests/confdata/libvirtd.out: Add
new host_uuid config option to test
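An illustrative libvirtd.conf entry (the UUID value is a made-up
placeholder):
host_uuid = "aeb7300d-96fd-42a6-8b61-0f5ec4c2d0cb"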
XML schema for CPU flags
Firstly, the CPU topology and model with optional features have to be
advertised in the host capabilities:
<host>
  <cpu>
    <arch>ARCHITECTURE</arch>
    <features>
      <!-- old-style features are here -->
    </features>
    <model>NAME</model>
    <topology sockets="S" cores="C" threads="T"/>
    <feature name="NAME"/>
  </cpu>
  ...
</host>
Secondly, drivers which support detailed CPU specification have to
advertise it in guest capabilities:
<guest>
  ...
  <features>
    <cpuselection/>
  </features>
</guest>
And finally, the CPU may be configured in the domain XML configuration:
<domain>
  ...
  <cpu match="MATCH">
    <model>NAME</model>
    <topology sockets="S" cores="C" threads="T"/>
    <feature policy="POLICY" name="NAME"/>
  </cpu>
</domain>
Where MATCH can be one of:
- 'minimum' the specified CPU is the minimum requested CPU
- 'exact' disable all additional features provided by the host CPU
- 'strict' fail if the host CPU doesn't exactly match
POLICY can be one of:
- 'force' turn the feature on, even if the host doesn't have it
- 'require' fail if the host doesn't have the feature
- 'optional' match the host
- 'disable' turn the feature off, even if the host has it
- 'forbid' fail if the host has the feature
'force' and 'disable' policies turn the feature on or off regardless of
its availability on the host. 'force' is unlikely to be used, but it's
there for completeness since Xen and VMware allow it.
'require' and 'forbid' policies prevent a guest from being started on a
host which lacks ('require') or has ('forbid') the feature. 'forbid' is
for cases where you disable the feature but a guest may still try to
access it anyway, and you don't want that access to succeed.
'optional' policy sets the feature according to its availability on the
host.
When a guest is booted on a host that has the feature and then migrated to
another host, the policy changes to 'require' as we can't take the feature
away from a running guest.
The default policy for features provided by the host CPU but not
specified in the domain configuration is set using the match attribute
of the cpu tag. If 'minimum' match is requested, additional features
are treated as if they were specified with the 'optional' policy.
'exact' match implies the 'disable' policy, and 'strict' match stands
for the 'forbid' policy.
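The following sketch restates the policy semantics above (illustrative
Python, not libvirt's implementation; the names apply_policy and
DEFAULT_POLICY are hypothetical):

def apply_policy(policy, name, host_features):
    # host_features is the set of feature names the host CPU provides.
    has = name in host_features
    if policy == 'force':
        return True        # on, even if the host lacks the feature
    if policy == 'disable':
        return False       # off, even if the host has the feature
    if policy == 'require':
        if not has:
            raise RuntimeError('host lacks required feature ' + name)
        return True
    if policy == 'forbid':
        if has:
            raise RuntimeError('host has forbidden feature ' + name)
        return False
    if policy == 'optional':
        return has         # simply match the host
    raise ValueError('unknown policy ' + policy)

# Host features absent from the domain configuration get a default
# policy derived from the match attribute:
DEFAULT_POLICY = {'minimum': 'optional', 'exact': 'disable',
                  'strict': 'forbid'}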
* docs/schemas/capability.rng docs/schemas/domain.rng: extend the
RelaxNG schemas to add CPU flags support
e.g. <machine canonical='pc'>pc-0.11</machine>
* src/capabilities.c: output the canonical machine names in the
capabilities output, if available
* docs/schemas/capabilities.rng: add the new attribute
by running this command:
git ls-files -z | xargs -0 perl -pi -0777 -e 's/\n\n+$/\n/'
This is in preparation for a stricter make syntax-check
rule that will detect trailing blank lines.