vmm: device_manager: Tie PCI bus to NUMA node 0

Make sure the unique PCI bus is tied to the default NUMA node 0, and
update the documentation to let users know about this special case.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Author: Sebastien Boeuf
Date:   2021-06-17 11:00:55 +02:00
Parent: acc71c4eda
Commit: 07f3075773

2 files changed, 15 insertions(+), 0 deletions(-)

@@ -451,3 +451,10 @@ _Example_
--numa guest_numa_id=0,memory_zones=mem0:mem2
--numa guest_numa_id=1,memory_zones=mem1
```

### PCI bus

Cloud Hypervisor supports only one PCI bus, which is why it has been tied to
NUMA node 0 by default. It is the user's responsibility to organize the NUMA
nodes correctly, so that the vCPUs and guest RAM which should share a NUMA
node with the PCI bus end up on NUMA node 0.
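
For instance, a minimal sketch of a configuration keeping every vCPU and all
guest RAM on the same node as the PCI bus could look like the following (the
vCPU count, zone name and size are placeholders, and it assumes the main
`--memory` size is set to 0 when relying on user-defined memory zones):

```
--cpus boot=4
--memory size=0
--memory-zone id=mem0,size=4G
--numa guest_numa_id=0,cpus=[0-3],memory_zones=mem0
```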

@@ -3744,6 +3744,14 @@ impl Aml for DeviceManager {
let supp = aml::Name::new("SUPP".into(), &aml::ZERO);
pci_dsdt_inner_data.push(&supp);
// Since Cloud Hypervisor supports only one PCI bus, it can be tied
// to the NUMA node 0. It's up to the user to organize the NUMA nodes
// so that the PCI bus relates to the expected vCPUs and guest RAM.
let proximity_domain = 0u32;
let pxm_return = aml::Return::new(&proximity_domain);
let pxm = aml::Method::new("_PXM".into(), 0, false, vec![&pxm_return]);
pci_dsdt_inner_data.push(&pxm);
let pci_dsm = PciDsmMethod {};
pci_dsdt_inner_data.push(&pci_dsm);
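
As a side note on how this surfaces at runtime: a Linux guest evaluates `_PXM`
on the PCI host bridge and propagates the proximity domain to the devices
behind it, so the effect of this change can be checked from inside the guest
through sysfs (the device address below is only an example):

```
# Inside the guest: devices on the single PCI bus should report NUMA node 0.
cat /sys/bus/pci/devices/0000:00:01.0/numa_node
```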