Commit Graph

56 Commits

Author SHA1 Message Date
Martin Kletzander
439eaf6399 whitespace clean-ups
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
2021-07-15 14:50:48 +02:00
Michal Privoznik
0cc6f8931f capabilities: Expose NUMA interconnects
Links between NUMA nodes can have different latencies and
bandwidths. This info is newly defined in ACPI 6.2 in the
Heterogeneous Memory Attribute Table (HMAT). The Linux kernel
learned how to report these values under sysfs and thus we can
expose them in our capabilities XML. The sysfs interface is
documented in the kernel's Documentation/admin-guide/mm/numaperf.rst.

Long story short, two nodes can be in an initiator-target
relationship. A node can be an initiator if it has a CPU or a
device that's capable of initiating memory transfer. Therefore a
node that has just memory can only be a target. An
initiator-target link can then have any combination of
{bandwidth, latency} x {access, read, write} attributes (6 in
total). However, the standard says access is applicable iff the
read and write values are the same. Therefore, we really have
just four combinations of attributes: bandwidth-read,
bandwidth-write, latency-read, latency-write.

These are exactly the combinations that the kernel reports anyway.

Then, under /sys/devices/system/node/nodeX/accessN/initiators we
find values for those 4 attributes and also symlinks named
"nodeN" which represent initiators to nodeX. For instance:

  /sys/devices/system/node/node1/access1/initiators/node0 -> ../../node0
  /sys/devices/system/node/node1/access1/initiators/read_bandwidth
  /sys/devices/system/node/node1/access1/initiators/read_latency
  /sys/devices/system/node/node1/access1/initiators/write_bandwidth
  /sys/devices/system/node/node1/access1/initiators/write_latency

This means that node0 is an initiator and node1 is a target, and
the values of the interconnect can be read.

In theory, there can be separate links to memory side caches too
(e.g. one link from node X to node Y's main memory, another from
node X to node Y's L1 cache, another one to L2 cache and so on).
But sysfs does not express this relationship just yet.

The "accessN" means either "access0" or "access1". The difference
is that while the former expresses the best interconnect between
two nodes including CPUs and I/O devices (such as GPUs and NICs),
the latter includes only CPUs and thus is what we need.
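
For illustration, the exposed interconnects in the capabilities
XML might look along these lines (a sketch; node IDs and values
are made up):

  <topology>
    ...
    <interconnects>
      <!-- illustrative values only -->
      <latency initiator='0' target='1' type='read' value='20'/>
      <latency initiator='0' target='1' type='write' value='25'/>
      <bandwidth initiator='0' target='1' type='read' value='204800' unit='KiB'/>
      <bandwidth initiator='0' target='1' type='write' value='102400' unit='KiB'/>
    </interconnects>
  </topology>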

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1786309
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
2021-06-15 11:03:25 +02:00
Michal Privoznik
5c359377a0 capabilities: Expose NUMA memory side cache
Memory on a NUMA node can have side caches. Configuring these
for a domain was implemented in v6.6.0-rc1~249 and friends.
However, up until now management applications did not really know
what values to pass because we were not exposing the host's
caches. With a recent enough kernel these are exposed under sysfs
and with a bit of parsing we can extend our capabilities XML. The
sysfs structure is documented in the kernel's
Documentation/admin-guide/mm/numaperf.rst and basically maps in
1:1 fashion to our virNumaCache structure.
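
For illustration, a per-cell cache in the capabilities XML might
look along these lines (a sketch; sizes and attribute values are
made up):

  <cell id='0'>
    ...
    <cache level='1' associativity='direct' policy='writeback'>
      <!-- illustrative values only -->
      <size value='10' unit='KiB'/>
      <line value='8' unit='B'/>
    </cache>
  </cell>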

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
2021-06-15 10:41:22 +02:00
Michal Privoznik
137e765891 schemas: Allow zero <cpu/> for capabilities
It may happen that a NUMA node has no CPUs associated with it. We
allow this for domains since v6.6.0-rc1~250. Let's update our
capabilities schema to match that.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
2021-06-15 10:41:22 +02:00
Michal Privoznik
4b3dc045b9 conf: Deduplicate NUMA distance code
After previous patches we have two structures,
virCapsHostNUMACellDistance and virNumaDistance, which express the
same thing and have the exact same members (modulo their names).
Drop the former in favor of the latter.

This change means that distances with a value of 0 are no longer
printed into the capabilities XML, because the domain XML code
allows partial distance specification and thus treats a value of
0 as unspecified by the user (see virDomainNumaGetNodeDistance(),
which returns the default LOCAL/REMOTE distance for a value of 0).

Also, from ACPI 6.1 specification, section 5.2.17 System Locality
Distance Information Table (SLIT):

  Distance values of 0-9 are reserved and have no meaning.

Thus we shouldn't ever be reporting 0 in either the domain or the
capabilities XML.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
2021-05-24 19:57:45 +02:00
Daniel P. Berrangé
1e260cc449 qemu: report whether a machine type is deprecated in capabilities
QEMU has the ability to mark machine types as deprecated. This should be
exposed to management applications in the capabilities.
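
For reference, the flag shows up on the machine entries in the
guest capabilities, presumably along these lines (the machine
name and maxCpus value here are made up):

  <machine maxCpus='255' deprecated='yes'>pc-i440fx-1.4</machine>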

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
2021-02-03 17:30:52 +00:00
Tim Wiederhake
e7ef77a7ac schema: Move host cpu definition to cputypes.rng
This also inlines the definitions for "cpufeature", "cpuspec",
"featureName" and "pagesHost", as "cpu" was the only user.

Doing so avoids a naming collision when cputypes.rng is included in
other schemas in a later patch.

Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
2020-10-07 09:18:07 +02:00
Tim Wiederhake
0e907b8216 schema: Unify apostrophe and quotation mark usage
Quotation marks were used ~ 7000 times, apostrophes ~ 3000 times.

Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
2020-10-07 09:18:07 +02:00
Daniel P. Berrangé
7b79ee2f78 hostcpu: add support for reporting die_id in NUMA topology
Update the host CPU code to report the die_id in the NUMA topology
capabilities. On systems with multiple dies, this fixes the bug
where CPU cores can't be distinguished:

 <cpus num='12'>
   <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
   <cpu id='1' socket_id='0' core_id='1' siblings='1'/>
   <cpu id='2' socket_id='0' core_id='0' siblings='2'/>
   <cpu id='3' socket_id='0' core_id='1' siblings='3'/>
 </cpus>

Notice how core_id is repeated within the scope of the same socket_id.

It now reports

 <cpus num='12'>
   <cpu id='0' socket_id='0' die_id='0' core_id='0' siblings='0'/>
   <cpu id='1' socket_id='0' die_id='0' core_id='1' siblings='1'/>
   <cpu id='2' socket_id='0' die_id='1' core_id='0' siblings='2'/>
   <cpu id='3' socket_id='0' die_id='1' core_id='1' siblings='3'/>
 </cpus>

So core_id is now unique within a (socket_id, die_id) pair.

Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
2020-01-16 15:11:55 +00:00
Cole Robinson
f854e051b9 Remove phyp driver
The phyp driver was added in 2009 and does not appear to have had any
real feature change since 2011. There's virtually no evidence online
of users actually using it. IMO it's time to kill it.

This was discussed a bit in April 2016:
https://www.redhat.com/archives/libvir-list/2016-April/msg01060.html

Final discussion is here:
https://www.redhat.com/archives/libvir-list/2019-December/msg01162.html

Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
2019-12-20 12:25:42 -05:00
Peter Krempa
2936a7517e schema: capabilities: Add 'hap' feature flag
The libxl driver exposes a 'hap' feature in the capability XML but our
schema didn't cover it.
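
For reference, the feature is an element under the guest
features, along these lines (a sketch; sibling elements elided):

  <guest>
    ...
    <features>
      <hap/> <!-- illustrative placement -->
    </features>
  </guest>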

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
2019-11-13 08:16:04 +01:00
Michal Privoznik
29682196d8 Drop UML driver
The driver has been unmaintained, untested and severely broken
for quite some time now. Since nobody has even reported any issue
with it, let us drop it.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2018-12-17 17:52:46 +01:00
Wang Huaqiang
6af8417415 conf: Introduce RDT monitor host capability
This patch introduces the cache monitor (CMT) for monitoring
cache occupancy and the memory bandwidth monitor (MBM) for
monitoring CPU memory bandwidth.

The host capabilities of the two monitors are also introduced
in this patch.

For CMT, the host capability is shown like:
  <host>
  ...
    <cache>
      <bank id='0' level='3' type='both' size='15' unit='MiB' cpus='0-5'>
        <control granularity='768' min='1536' unit='KiB' type='both' maxAllocs='4'/>
      </bank>
      <monitor level='3' reuseThreshold='270336' maxMonitors='176'>
        <feature name='llc_occupancy'/>
      </monitor>
    </cache>
    ...
  </host>

For MBM, the capability is shown like this:
  <host>
    ...
    <memory_bandwidth>
      <node id='1' cpus='6-11'>
        <control granularity='10' min='10' maxAllocs='4'/>
      </node>
      <monitor maxMonitors='176'>
        <feature name='mbm_total_bytes'/>
        <feature name='mbm_local_bytes'/>
      </monitor>
    </memory_bandwidth>
    ...
  </host>

Signed-off-by: Wang Huaqiang <huaqiang.wang@intel.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
2018-09-20 13:06:02 -04:00
Bing Niu
7995fecc25 conf: Add memory bandwidth allocation capability of host
Add a new XML section to report the host's memory bandwidth
allocation capability. The format is as in the example below:

 <host>
 .....
   <memory_bandwidth>
     <node id='0' cpus='0-19'>
       <control granularity='10' min='10' maxAllocs='8'/>
     </node>
   </memory_bandwidth>
 </host>

granularity   ---- granularity of memory bandwidth, in percent.
min           ---- minimum memory bandwidth allowed, in percent.
maxAllocs     ---- maximum number of memory bandwidth allocation groups supported.

Signed-off-by: Bing Niu <bing.niu@intel.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
2018-08-13 14:19:41 -04:00
Filip Alac
dc34e78e21 capabilities: Extend capabilities with iommu_support
Signed-off-by: Filip Alac <filipalac@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2018-06-05 08:33:13 +02:00
John Ferlan
c1a0601deb schema: Fix capability grammar for pagesElem
https://bugzilla.redhat.com/show_bug.cgi?id=1572491

Commit id '02129b7c0' added a single pagesElem for slightly
different purposes. One usage was an output for host page size
listing and the other for NUMA supported page sizes. For the
former, only the pages unit and size are formatted, while for
the latter the pages unit, size, and availability data is formatted.

virt-xml-validate would fail because it expected something
extra in the host page size output. So split up pagesElem a bit
and create pagesHost and pagesNuma for the differences.

Modify some capabilityschemadata output to include the extra
data, even though the results may not be realistic with respect
to the original incarnation of the data.

Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
2018-05-25 09:36:42 -04:00
John Ferlan
f97c4cc5e1 schema: Add microcode element to capability grammar
https://bugzilla.redhat.com/show_bug.cgi?id=1572491

Commit id 'd2440f3b5' added printing the <microcode> for the
capabilities, but didn't update the capabilities schema.

While at it, update capabilityschemadata for caps-test2
and caps-test3 to output some value for validation.
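
For reference, the element sits under the host <cpu>, along these
lines (the version value here is made up):

  <cpu>
    ...
    <microcode version='25'/> <!-- illustrative value -->
  </cpu>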

Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
2018-05-25 09:36:32 -04:00
John Ferlan
8d84578035 schema: Add vzmigr for host migrate transport capability
Commit id '0eced74f3' added vzmigr as a valid option for
virCapabilitiesAddHostMigrateTransport, but didn't update
the capabilities schema, resulting in possible virt-xml-validate
failures.
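
For reference, migration transports appear in the host
capabilities along these lines (a sketch; sibling elements
elided):

  <host>
    ...
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>vzmigr</uri_transport>
      </uri_transports>
    </migration_features>
  </host>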

Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
2018-05-25 09:33:51 -04:00
John Ferlan
4cfa9309dc schema: Add rdma for host migrate transport capability
https://bugzilla.redhat.com/show_bug.cgi?id=1572491

Commit id 'b3fd95e36' added rdma as a valid option for
virCapabilitiesAddHostMigrateTransport, but didn't update
the capabilities schema, resulting in possible virt-xml-validate
failures.

While at it, update the capabilityschemadata for caps-qemu-kvm

Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
2018-05-25 09:33:44 -04:00
John Ferlan
dcd9db75fe schema,tests: Use vpxmigr for host migrate transport capability
Commit id 'e4938ce2f' changed the esx_driver to use 'vpxmigr'
instead of 'esx' for virCapabilitiesAddHostMigrateTransport, so
update the capabilities to allow virt-xml-validate to pass and
update the test to use the newer name.

Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
2018-05-25 09:33:39 -04:00
John Ferlan
7ed5984386 schema: Remove xenmigr from host migrate transport capability
Commit id '1dac5fbb' removed xenmigr as a capability option
for virCapabilitiesAddHostMigrateTransport, but didn't update
the schema, resulting in possible virt-xml-validate failures.

Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
2018-05-25 09:33:32 -04:00
Michal Privoznik
ff5c5a9bbb rng: Fix formatting
Some elements are offset just one space compared to their parent,
some are misaligned completely, and so on.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2017-09-12 13:41:21 +02:00
Martin Kletzander
cc9f0521cd Report more correct information for cache control
On some platforms the number of bits in the cbm_mask might not be
divisible by 4 (or even by 2), so we need to properly count the
bits.  A similar file, min_cbm_bits, is properly parsed and used,
but if the number is greater than one, we lose the information
about granularity when reporting the data in capabilities.  For
that reason, always report the granularity, and if it is not the
same as the minimum, add that information in there as well.
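
For illustration, the reported <control> element then carries
both values whenever they differ, along these lines (a sketch
following the format shown in the commits below; numbers are
made up):

  <control granularity='768' min='1536' unit='KiB' scope='both' max_allocation='4'/>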

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
2017-06-16 13:09:41 +02:00
Eli Qiao
0ab409ccc4 Expose resource control capabilities for caches
Add cache resource control into capabilities for CAT without CDP:

  <cache>
    <bank id='0' level='3' type='unified' size='15360' unit='KiB' cpus='0-5'>
      <control min='768' unit='KiB' scope='both' max_allocation='4'/>
    </bank>
  </cache>

and with CDP:

  <cache>
    <bank id='0' level='3' type='unified' size='15360' unit='KiB' cpus='0-5'>
      <control min='768' unit='KiB' scope='code' max_allocation='4'/>
      <control min='768' unit='KiB' scope='data' max_allocation='4'/>
    </bank>
  </cache>

Also add new test cases for vircaps2xmltest.

Signed-off-by: Eli Qiao <liyong.qiao@intel.com>
2017-06-05 09:50:51 +02:00
Martin Kletzander
4ad6a73bfc Add host cache information in capabilities
We're only adding info about L3 caches; we can add more
later (just by changing one line), but for now that's more than
enough without overwhelming anyone.

XML snippet of how this should look (also seen as part of the commit):

  <cache>
    <bank id='0' level='3' type='both' size='8192' unit='KiB' cpus='0-7'/>
  </cache>

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
2017-05-09 13:12:40 +02:00
Cole Robinson
066f7c7c3a domain: conf: Drop unused OSTYPE_AIX
The phyp driver stuffed it into a DomainDefPtr during its attachdevice
routine, but the value is never advertised via capabilities so it should
be safe to drop.

Have the phyp driver use OSTYPE_LINUX, which is what it advertises via
capabilities.
2015-04-29 09:42:26 -04:00
Martin Kletzander
f864aac90b schemas: finish virTristate{Bool,Switch} transition
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
2014-09-17 16:10:26 +02:00
Michal Privoznik
02129b7c0e virCaps: expose pages info
There are two places where you'll find info on page sizes. The first
one is under <cpu/> element, where all supported pages sizes are
listed. Then the second one is under each <cell/> element which refers
to concrete NUMA node. At this place, the size of page's pool is
reported. So the capabilities XML looks something like this:

<capabilities>

  <host>
    <uuid>01281cda-f352-cb11-a9db-e905fe22010c</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Westmere</model>
      <vendor>Intel</vendor>
      <topology sockets='1' cores='1' threads='1'/>
      ...
      <pages unit='KiB' size='4'/>
      <pages unit='KiB' size='2048'/>
      <pages unit='KiB' size='1048576'/>
    </cpu>
    ...
    <topology>
      <cells num='4'>
        <cell id='0'>
          <memory unit='KiB'>4054408</memory>
          <pages unit='KiB' size='4'>1013602</pages>
          <pages unit='KiB' size='2048'>3</pages>
          <pages unit='KiB' size='1048576'>1</pages>
          <distances/>
          <cpus num='1'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
          </cpus>
        </cell>
        <cell id='1'>
          <memory unit='KiB'>4071072</memory>
          <pages unit='KiB' size='4'>1017768</pages>
          <pages unit='KiB' size='2048'>3</pages>
          <pages unit='KiB' size='1048576'>1</pages>
          <distances/>
          <cpus num='1'>
            <cpu id='1' socket_id='0' core_id='0' siblings='1'/>
          </cpus>
        </cell>
        ...
      </cells>
    </topology>
    ...
  </host>

  <guest/>

</capabilities>

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2014-06-19 15:10:49 +02:00
Michal Privoznik
8ba0a58f8d virCaps: Expose distance between host NUMA nodes
If a user or management application wants to create a guest,
it may be useful to know the cost of internode latencies
before the guest's resources are pinned. For example:

<capabilities>

  <host>
    ...
    <topology>
      <cells num='2'>
        <cell id='0'>
          <memory unit='KiB'>4004132</memory>
          <distances>
            <sibling id='0' value='10'/>
            <sibling id='1' value='20'/>
          </distances>
          <cpus num='2'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
            <cpu id='2' socket_id='0' core_id='2' siblings='2'/>
          </cpus>
        </cell>
        <cell id='1'>
          <memory unit='KiB'>4030064</memory>
          <distances>
            <sibling id='0' value='20'/>
            <sibling id='1' value='10'/>
          </distances>
          <cpus num='2'>
            <cpu id='1' socket_id='0' core_id='0' siblings='1'/>
            <cpu id='3' socket_id='0' core_id='2' siblings='3'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    ...
  </host>
  ...
</capabilities>

We can see that the distance from node1 to node0 is 20, and within
a node it is 10.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2014-06-04 09:35:55 +02:00
Francesco Romani
85a3eb8a6d qemu: export disk snapshot support in capabilities
This patch adds an element to QEMU's capability XML to
show whether the underlying QEMU binary supports live disk
snapshotting or not.
This allows any client to know ahead of time if the feature
is available.

Without this information available, the only way to check
for snapshot support is to request a snapshot and check for
errors.
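
For reference, the flag presumably appears among the guest
features, along these lines (an assumption; the attributes shown
are made up):

  <features>
    ...
    <disksnapshot default='on' toggle='no'/> <!-- assumed form -->
  </features>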

Signed-off-by: Francesco Romani <fromani@redhat.com>
2014-03-26 13:41:25 +01:00
Giuseppe Scrivano
b51038a4cd capabilities: add baselabel per sec driver/virt type to secmodel
Expand the "secmodel" XML fragment of "host" with a sequence of
baselabels which describe the default security context used by
libvirt with a specific security model and virtualization type:

<secmodel>
  <model>selinux</model>
  <doi>0</doi>
  <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
  <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
</secmodel>
<secmodel>
  <model>dac</model>
  <doi>0</doi>
  <baselabel type='kvm'>107:107</baselabel>
  <baselabel type='qemu'>107:107</baselabel>
</secmodel>

"baselabel" is driver-specific information, e.g. in the DAC security
model, it indicates USER_ID:GROUP_ID.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
2013-10-29 07:06:04 -06:00
Michal Novotny
ff96888991 qemu: Implement CPUs check against machine type's cpu-max
Implement a check that the (maximum) number of vCPUs doesn't
exceed the machine type's cpu-max setting.

On older versions of QEMU the check is disabled.
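
For reference, the machine type's limit is visible in the guest
capabilities via the maxCpus attribute, along these lines (the
machine name and value are made up):

  <machine maxCpus='255'>pc-i440fx-1.5</machine>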

Signed-off-by: Michal Novotny <minovotn@redhat.com>
2013-07-01 14:30:42 +02:00
Dusty Mabe
d3092c60f7 capabilities: add NUMA memory information
'virsh capabilities' will now include a new <memory> element
per <cell> of the topology, as in:

    <topology>
      <cells num='2'>
        <cell id='0'>
          <memory unit='KiB'>12572412</memory>
          <cpus num='12'>
          ...
        </cell>

Signed-off-by: Eric Blake <eblake@redhat.com>
2013-03-08 11:51:00 -07:00
Daniel P. Berrange
ba90ff5672 Update arch names in RNG schema to match virarch.c
When the virarch.c file was introduced to formalize the arch
list, we forgot to update the RNG schema with the new arches.
2013-02-22 10:55:37 +00:00
Peter Krempa
828820e2d3 schemas: Add schemas for more CPU topology information in the caps XML
This patch adds RNG schemas for adding more information in the topology
output of the NUMA section in the capabilities XML.

The added elements are designed to provide more information about the
placement and topology of the processors in the system to management
applications.

A demonstration of supported XML added by this patch:
<capabilities>
  <host>
    <topology>
      <cells num='3'>
        <cell id='0'>
          <cpus num='4'> <!-- this is a node with Hyperthreading -->
            <cpu id='0' socket_id='0' core_id='0' siblings='0-1'/>
            <cpu id='1' socket_id='0' core_id='0' siblings='0-1'/>
            <cpu id='2' socket_id='0' core_id='1' siblings='2-3'/>
            <cpu id='3' socket_id='0' core_id='1' siblings='2-3'/>
          </cpus>
        </cell>
        <cell id='1'>
          <cpus num='4'> <!-- this is a node with modules (Bulldozer) -->
            <cpu id='4' socket_id='0' core_id='2' siblings='4-5'/>
            <cpu id='5' socket_id='0' core_id='3' siblings='4-5'/>
            <cpu id='6' socket_id='0' core_id='4' siblings='6-7'/>
            <cpu id='7' socket_id='0' core_id='5' siblings='6-7'/>
          </cpus>
        </cell>
        <cell id='2'>
          <cpus num='4'> <!-- this is a normal multi-core node -->
            <cpu id='8' socket_id='1' core_id='0' siblings='8'/>
            <cpu id='9' socket_id='1' core_id='1' siblings='9'/>
            <cpu id='10' socket_id='1' core_id='2' siblings='10'/>
            <cpu id='11' socket_id='1' core_id='3' siblings='11'/>
          </cpus>
        </cell>
      </cells>
    </topology>
  </host>
</capabilities>

The socket_id field represents the identification of the physical
socket the CPU is plugged into. This ID may not be identical to
the physical socket ID reported by the kernel.

The core_id identifies a core within a socket. Also, this field
may not accurately represent physical IDs.

The core_id is guaranteed to be unique within a cell and a socket.
There may be duplicates between sockets. Only cores sharing a
core_id within one cell and one socket can be considered threads.
Cores sharing a core_id within separate cells are distinct cores.

The siblings field lists the IDs of the CPUs this CPU is a
sibling with - thus its threads. The list is in the cpuset format.
2013-01-24 10:53:00 +01:00
Osier Yang
3f46ce78fc rng: Have colorful *.rng with editor
Just add the head line to let the editor know it's an XML document.
2013-01-23 23:03:17 +08:00
Marcelo Cerri
e9377dda36 Multiple security drivers in XML data
This patch updates the domain and capability XML parser and formatter to
support more than one "seclabel" element for each domain and device. The
RNG schema and the tests related to this are also updated by this patch.
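
For illustration, a domain could then carry one seclabel per
model, along these lines (a sketch; the labels are made up):

  <seclabel type='dynamic' model='selinux' relabel='yes'/>
  <seclabel type='dynamic' model='dac' relabel='yes'/>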

Signed-off-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
2012-08-20 19:13:33 +02:00
Ján Tomko
37a10129c2 Update xml schemas according to libvirt source
capability.rng: Guest features can be in any order.
nodedev.rng: Added <driver> element, <capability> phys_function and
virt_functions for PCI devices.
storagepool.rng: Owner or group ID can be -1.

schema tests: New capabilities and nodedev files; changed owner and
group to -1 in pool-dir.xml.
storage_conf: Print uid_t and gid_t as signed to storage pool XML.
2012-08-02 14:36:23 -06:00
Peter Krempa
cdab483e92 xml: Clean up schemas to use shared data types instead of local
The schema files contained duplicate data types that can be shared from
the basictypes.rng file.
2012-03-08 15:31:54 +01:00
Daniel P. Berrange
ae5e55289d Fix capabilities XML to use generic terms for suspend targets
The capabilities XML uses the x86-specific terms 'S3', 'S4'
and 'Hybrid-Suspend'. Switch it to use the same terminology
as the API constants and virsh options, e.g. 'suspend_mem',
'suspend_disk' and 'suspend_hybrid'.
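
For reference, after the rename the host element reads along
these lines (a sketch with all three targets present):

  <power_management>
    <suspend_mem/>
    <suspend_disk/>
    <suspend_hybrid/>
  </power_management>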

* docs/formatcaps.html.in, docs/schemas/capability.rng,
  src/conf/capabilities.c: Rename suspend constants
2011-11-30 10:12:29 +00:00
Srivatsa S. Bhat
302743f177 Add 'Hybrid-Suspend' power management discovery for the host
Some systems support a feature known as 'Hybrid-Suspend', apart from the
usual system-wide sleep states such as Suspend-to-RAM (S3) or Suspend-to-Disk
(S4). Add the functionality to discover this power management feature and
export it in the capabilities XML under the <power_management> tag.
2011-11-29 17:29:16 +08:00
Srivatsa S. Bhat
e352b16400 Export KVM Host Power Management capabilities
This patch exports KVM Host Power Management capabilities as XML
so that higher-level systems management software can make use of
the features available on the host.

The script "pm-is-supported" (from pm-utils package) is run to discover if
Suspend-to-RAM (S3) or Suspend-to-Disk (S4) is supported by the host.
If either of them are supported, then a new tag "<power_management>" is
introduced in the XML under the <host> tag.

However, if the query to check for power management features
succeeds but the host does not support any such feature, the XML
will contain an empty <power_management/> tag. In the event that
the PM query itself fails, the XML will not contain any
"power_management" tag.

To use this, new APIs could be implemented in libvirt to exploit power
management features such as S3/S4.
2011-11-22 11:31:22 +08:00
Philipp Hahn
2d9931d20c doc: Add <deviceboot> capability.
Allow /capabilities/guest/features/deviceboot.

Signed-off-by: Philipp Hahn <hahn@univention.de>
2011-11-03 13:41:04 -06:00
John Williams
a1092070d4 microblaze: Add architecture support
Add libvirt support for the MicroBlaze architecture as a QEMU target.  Based on the mips/mipsel pattern.

Signed-off-by: John Williams <john.williams@petalogix.com>
2011-07-07 17:49:21 -06:00
Jiri Denemark
af53714f47 cpu: Add support for CPU vendor
By specifying a <vendor> element in the CPU requirements a guest
can be restricted to run only on CPUs from a given vendor. The
host CPU vendor is also specified in the capabilities XML.

The vendor is checked when migrating a guest but it's not forced,
i.e., guests configured without a <vendor> element can be freely
migrated.
2010-07-07 17:26:00 +02:00
Daniel P. Berrange
60881161ea Expose a host UUID in the capabilities XML
Allow for a host UUID in the capabilities XML. Local drivers
will initialize this from the SMBIOS data. If a sanity check
shows the SMBIOS UUID is invalid, allow an override from the
libvirtd.conf configuration file.
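
For reference, the override in libvirtd.conf is a single option
(the UUID value here is made up):

  # example value only
  host_uuid = "8f99e121-7e6a-4178-a323-88d0a3726ab9"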

* daemon/libvirtd.c, daemon/libvirtd.conf: Support a host_uuid
  configuration option
* docs/schemas/capability.rng: Add optional host uuid field
* src/conf/capabilities.c, src/conf/capabilities.h: Include
  host UUID in XML
* src/libvirt_private.syms: Export new uuid.h functions
* src/lxc/lxc_conf.c, src/qemu/qemu_driver.c,
  src/uml/uml_conf.c: Set host UUID in capabilities
* src/util/uuid.c, src/util/uuid.h: Support for host UUIDs
* src/node_device/node_device_udev.c: Use the host UUID functions
* tests/confdata/libvirtd.conf, tests/confdata/libvirtd.out: Add
  new host_uuid config option to test
2010-05-25 17:09:18 +01:00
Jim Meyering
aa7847d3ad maint: convert leading TABs in *.rng files to equivalent spaces
* docs/schemas/capability.rng: Likewise.
* docs/schemas/network.rng: Likewise.
* docs/schemas/nodedev.rng: Likewise.
* docs/schemas/storagepool.rng: Likewise.
* docs/schemas/storagevol.rng: Likewise.
Use these commands:
t=$'\t'
git ls-files | grep '\.rng$' | xargs grep -lE "^ *$t" \
  | xargs perl -MText::Tabs -ni -le \
    '$m=/^( *\t[ \t]*)(.*)/; print $m ? expand($1) . $2 : $_'
2010-03-01 20:19:20 +01:00
Jiri Denemark
6df8b363f7 XML schema for CPU flags
Firstly, CPU topology and model with optional features have to be
advertised in host capabilities:

    <host>
        <cpu>
            <arch>ARCHITECTURE</arch>
            <features>
                <!-- old-style features are here -->
            </features>
            <model>NAME</model>
            <topology sockets="S" cores="C" threads="T"/>
            <feature name="NAME"/>
        </cpu>
        ...
    </host>

Secondly, drivers which support detailed CPU specification have
to advertise it in guest capabilities:

    <guest>
        ...
        <features>
            <cpuselection/>
        </features>
    </guest>

And finally, the CPU may be configured in the domain XML configuration:

    <domain>
        ...
        <cpu match="MATCH">
            <model>NAME</model>
            <topology sockets="S" cores="C" threads="T"/>
            <feature policy="POLICY" name="NAME"/>
        </cpu>
    </domain>

Where MATCH can be one of:
    - 'minimum'     specified CPU is the minimum requested CPU
    - 'exact'       disable all additional features provided by host CPU
    - 'strict'      fail if host CPU doesn't exactly match

POLICY can be one of:
    - 'force'       turn on the feature, even if host doesn't have it
    - 'require'     fail if host doesn't have the feature
    - 'optional'    match host
    - 'disable'     turn off the feature, even if host has it
    - 'forbid'      fail if host has the feature

'force' and 'disable' policies turn the feature on/off regardless
of its availability on the host. 'force' is unlikely to be used
but it's there for completeness since Xen and VMware allow it.

'require' and 'forbid' policies prevent a guest from being started on a host
which doesn't/does have the feature. 'forbid' is for cases where you disable
the feature but a guest may still try to access it anyway and you don't want
it to succeed.

'optional' policy sets the feature according to its availability on the host.
When a guest is booted on a host that has the feature and then migrated to
another host, the policy changes to 'require' as we can't take the feature
away from a running guest.

The default policy for features provided by the host CPU but not
specified in the domain configuration is set using the match
attribute of the cpu tag. If 'minimum' match is requested,
additional features will be treated as if they were specified
with the 'optional' policy. 'exact' match implies the 'disable'
policy and 'strict' match stands for the 'forbid' policy.

* docs/schemas/capability.rng docs/schemas/domain.rng: extend the
  RelaxNG schemas to add CPU flags support
2009-12-18 14:37:09 +01:00
Mark McLoughlin
22d990f138 Add arm arch to capabilities schema
* docs/schemas/capabilities.rng: add arm and sort arches
2009-09-10 12:25:42 +01:00
Mark McLoughlin
e45b13d248 Update capabilities schema to allow multiple machines per domain
* docs/schemas/capabilities.rng: allow multiple machines per domain
  just like they are allowed for guests
2009-09-10 12:25:42 +01:00