Links between NUMA nodes can have different latencies and bandwidths. This information is newly defined in ACPI 6.2 under the Heterogeneous Memory Attribute Table (HMAT). The Linux kernel learned how to report these values under sysfs, and thus we can expose them in our capabilities XML. The sysfs interface is documented in the kernel's Documentation/admin-guide/mm/numaperf.rst.

Long story short, two nodes can be in an initiator-target relationship. A node can be an initiator if it has a CPU or a device that is capable of initiating memory transfer. Therefore, a node that has just memory can only be a target. An initiator-target link can then have any combination of {bandwidth, latency} x {access, read, write} attributes (6 in total). However, the standard says "access" is applicable if and only if the read and write values are the same. Therefore, we really have just four combinations of attributes: bandwidth-read, bandwidth-write, latency-read and latency-write. This is the combination the kernel reports anyway.

Then, under /sys/devices/system/node/nodeX/accessN/initiators/ we find values for those four attributes and also symlinks named "nodeN" which represent the initiators to nodeX. For instance:

  /sys/devices/system/node/node1/access1/initiators/node0 -> ../../node0
  /sys/devices/system/node/node1/access1/initiators/read_bandwidth
  /sys/devices/system/node/node1/access1/initiators/read_latency
  /sys/devices/system/node/node1/access1/initiators/write_bandwidth
  /sys/devices/system/node/node1/access1/initiators/write_latency

This means that node0 is the initiator, node1 is the target, and the values of the interconnect can be read. In theory, there can be separate links to memory side caches too (e.g. one link from node X to node Y's main memory, another from node X to node Y's L1 cache, another to the L2 cache and so on), but sysfs does not express this relationship just yet.

The "accessN" directory is either "access0" or "access1". The difference is that while the former expresses the best interconnect between two nodes including CPUs and I/O devices (such as GPUs and NICs), the latter includes only CPUs and thus is what we need.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1786309
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
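For illustration only, here is a minimal C sketch of how a program could read those four attributes from sysfs; it is not libvirt's actual implementation. The helper name read_node_attr and the hard-coded choice of node1 with the "access1" (CPU-only) class are assumptions made for the example.

  /* Sketch: read the four HMAT interconnect attributes the kernel
   * exposes for a target node's "access1" (CPU-only) initiators.
   * Names and error handling are illustrative, not libvirt code. */
  #include <stdio.h>

  static int
  read_node_attr(unsigned int target, const char *attr, unsigned long *value)
  {
      char path[256];
      FILE *fp;
      int ret = -1;

      snprintf(path, sizeof(path),
               "/sys/devices/system/node/node%u/access1/initiators/%s",
               target, attr);

      if (!(fp = fopen(path, "r")))
          return -1;

      if (fscanf(fp, "%lu", value) == 1)
          ret = 0;

      fclose(fp);
      return ret;
  }

  int
  main(void)
  {
      const char *attrs[] = { "read_bandwidth", "write_bandwidth",
                              "read_latency", "write_latency" };
      unsigned long value;
      size_t i;

      for (i = 0; i < sizeof(attrs) / sizeof(attrs[0]); i++) {
          if (read_node_attr(1, attrs[i], &value) == 0)
              printf("node1 %s: %lu\n", attrs[i], value);
      }
      return 0;
  }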
Libvirt API for virtualization
Libvirt provides a portable, long term stable C API for managing the virtualization technologies provided by many operating systems. It includes support for QEMU, KVM, Xen, LXC, bhyve, Virtuozzo, VMware vCenter and ESX, VMware Desktop, Hyper-V, VirtualBox and the POWER Hypervisor.
For some of these hypervisors, it provides a stateful management daemon which runs on the virtualization host allowing access to the API both by non-privileged local users and remote users.
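As a small taste of what using the C API looks like, the following is a sketch (not a program taken from the project documentation) that connects to a hypervisor, queries the library version and counts running domains; the qemu:///system URI is only an example and any supported driver URI can be used instead.

  /* Minimal libvirt C API example: open a connection, query the
   * library version and the number of running domains. */
  #include <stdio.h>
  #include <libvirt/libvirt.h>

  int
  main(void)
  {
      virConnectPtr conn;
      unsigned long version;
      int ndomains;

      if (!(conn = virConnectOpen("qemu:///system"))) {
          fprintf(stderr, "Failed to connect to the hypervisor\n");
          return 1;
      }

      if (virConnectGetLibVersion(conn, &version) == 0)
          printf("libvirt version: %lu\n", version);

      if ((ndomains = virConnectNumOfDomains(conn)) >= 0)
          printf("running domains: %d\n", ndomains);

      virConnectClose(conn);
      return 0;
  }

Assuming the libvirt development headers are installed, a program like this can typically be built with gcc example.c $(pkg-config --cflags --libs libvirt).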
Layered packages provide bindings of the libvirt C API into other languages including Python, Perl, PHP, Go, Java, OCaml, as well as mappings into object systems such as GObject, CIM and SNMP.
Further information about the libvirt project can be found on the website:

https://libvirt.org
License
The libvirt C API is distributed under the terms of the GNU Lesser General Public License, version 2.1 (or later). Some parts of the code that are not part of the C library may have the more restrictive GNU General Public License, version 2.0 (or later). See the files COPYING.LESSER and COPYING for full license terms & conditions.
Installation
Instructions on building and installing libvirt can be found on the website:
https://libvirt.org/compiling.html
Contributing
The libvirt project welcomes contributions in many ways. For most components the best way to contribute is to send patches to the primary development mailing list. Further guidance on this can be found on the website:
https://libvirt.org/contribute.html
Contact
The libvirt project has two primary mailing lists:
- libvirt-users@redhat.com (for user discussions)
- libvir-list@redhat.com (for development only)
Further details on contacting the project are available on the website:

https://libvirt.org/contact.html