By introducing the SLIT (System Locality Distance Information Table), we
provide the guest with the distance between each pair of nodes. This lets
the user describe the NUMA topology in detail, so that slower memory
backing the VM can be exposed as being further away from other nodes.
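For illustration, here is a minimal Rust sketch (not the project's actual
code) of how the SLIT locality matrix can be built: entry [i][j] is the
distance from node i to node j, and the ACPI spec defines 10 as the local
distance.

// Sketch only: build the N*N byte matrix that follows the SLIT
// header and the u64 locality count.
fn slit_matrix(distances: &[Vec<u8>]) -> Vec<u8> {
    let n = distances.len();
    let mut matrix = Vec::with_capacity(n * n);
    for i in 0..n {
        for j in 0..n {
            // A node is always at distance 10 from itself.
            matrix.push(if i == j { 10 } else { distances[i][j] });
        }
    }
    matrix
}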
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the NumaConfig, which now provides distance information, we can
internally update the list of NUMA nodes with the exact distances
separating them from the other nodes.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By introducing the 'distances' option, we let the user describe a list of
destination NUMA nodes with their associated distances relative to the
current node (defined through 'id').
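For instance, a hypothetical invocation placing node 0 at distance 15 from
node 1 and at distance 25 from node 2 could look as follows (the exact
destination/distance pairing syntax here is illustrative, not taken from
the patch):

./cloud-hypervisor ... --numa id=0,distances=1@15:2@25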
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Extend the existing NUMA integration test to validate that the CPUs
specified for each NUMA node get propagated to the guest.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the list of CPUs related to each NUMA node, Processor Local
x2APIC Affinity structures are created and included in the SRAT table.
This describes which CPUs are part of each node.
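For reference, the Processor Local x2APIC Affinity structure is defined by
the ACPI specification as a 24-byte entry (type 2); the following Rust
sketch shows that layout, not the project's actual definition:

#[repr(C, packed)]
struct ProcessorLocalX2ApicAffinity {
    r#type: u8,            // 2 (Processor Local x2APIC Affinity)
    length: u8,            // 24 bytes
    _reserved1: u16,
    proximity_domain: u32, // guest NUMA node id
    x2apic_id: u32,        // the CPU's x2APIC ID
    flags: u32,            // bit 0: entry enabled
    clock_domain: u32,
    _reserved2: u32,
}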
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Relying on the list of CPUs defined through the NumaConfig, this patch
will update the internal list of CPUs attached to each NUMA node.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Through this new parameter, we give users the opportunity to specify a
set of CPUs attached to a NUMA node that has been previously created
from the --memory-zone parameter.
This parameter will be extended in the future to describe the distance
between multiple nodes.
For instance, if a user wants to attach CPUs 0, 1, 2 and 6 to a NUMA
node, here are two different ways of doing so:
Either
./cloud-hypervisor ... --numa id=0,cpus=0-2:6
Or
./cloud-hypervisor ... --numa id=0,cpus=0:1:2:6
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This new test validates that the guest OS can find the NUMA nodes defined
by the user through the CLI, and that the right amount of memory is
associated with each node.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The SRAT table (System Resource Affinity Table) is needed to describe
NUMA nodes and how memory ranges and CPUs are attached to them.
For now it simply includes a list of Memory Affinity structures based on
the list of NUMA nodes created by the VMM.
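For reference, the Memory Affinity structure is a 40-byte entry (type 1)
per the ACPI specification; this Rust sketch shows that layout, not the
project's actual definition:

#[repr(C, packed)]
struct MemoryAffinity {
    r#type: u8,            // 1 (Memory Affinity)
    length: u8,            // 40 bytes
    proximity_domain: u32, // guest NUMA node id
    _reserved1: u16,
    base_addr_lo: u32,     // memory range start, low 32 bits
    base_addr_hi: u32,     // memory range start, high 32 bits
    length_lo: u32,        // memory range length, low 32 bits
    length_hi: u32,        // memory range length, high 32 bits
    _reserved2: u32,
    flags: u32,            // bit 0: entry enabled
    _reserved3: u64,
}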
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the 'guest_numa_node' option, we create and store a list of
NUMA nodes in the MemoryManager. The point is to associate a list of
memory regions with each node, so that we can later create the ACPI tables
with the proper memory range information.
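The shape of that internal state could look like the following sketch
(the field types are assumptions for illustration, not the actual code):

use std::collections::BTreeMap;

// Each guest NUMA node remembers the memory ranges assigned to it
// so the SRAT can later be generated with correct addresses.
#[derive(Default)]
struct NumaNode {
    memory_regions: Vec<(u64, u64)>, // (start, size) pairs
}

// Keyed by guest proximity domain (node id).
type NumaNodes = BTreeMap<u32, NumaNode>;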
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
With the introduction of this new option, the user will be able to
describe whether a particular memory zone should belong to a specific NUMA
node from a guest perspective.
For instance, using '--memory-zone size=1G,guest_numa_node=2' would let
the user describe that a memory zone of 1G in the guest should be
exposed as being associated with the NUMA node 2.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Given that ACPI uses u32 as the type for the Proximity Domain, we can
use u32 instead of u64 as the type for the 'host_numa_node' option.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
"struct MemoryConfig" has balloon_size but not in MemoryConfig
of cloud-hypervisor.yaml.
This commit adds it.
Signed-off-by: Hui Zhu <teawater@antfin.com>
Let's narrow down the limitation related to mbind() by allowing shared
mappings backed by a RAM-based file. This leaves the restriction in place
only for mappings backed by a regular file.
With this patch, a host NUMA node can be specified even when using
vhost-user devices.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Relying on the new option 'host_numa_node' from the 'memory-zone'
parameter, the user can now define which NUMA node from the host
should be used to back the current memory zone.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Since memory zones have been introduced, it is now possible for a user
to specify multiple backends for the guest RAM. By adding a new option
'host_numa_node' to the 'memory-zone' parameter, we allow the guest RAM
to be backed by memory that might come from a specific NUMA node on the
host.
The option expects a node identifier, specifying which NUMA node should
be used to allocate the memory associated with a specific memory zone.
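For instance, following the same syntax as the other memory zone options,
one could write:

./cloud-hypervisor ... --memory-zone size=1G,host_numa_node=0

to back a 1G memory zone with memory allocated from host NUMA node 0.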
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The flag 'mergeable' should only apply to the entire guest RAM, which is
why it is removed from MemoryZoneConfig and instead defined as a global
parameter at the MemoryConfig level.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The 'cmdline' parameter should not be required, as it is not needed when
the 'kernel' parameter points to the rust-hypervisor-fw, in which case the
kernel and the associated command line are found on the EFI partition.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Factorize the codepath between simple memory and multiple memory zones.
This simplifies the way regions are memory mapped, as everything relies
on the same codepath. This is performed by creating a memory zone on the
fly for the specific case where --memory is used with a non-zero size.
Internally, the code can then rely on memory zones to create the memory
regions forming the guest memory.
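A minimal sketch of the idea (names assumed, not the actual code):

struct MemoryZoneConfig { size: u64 }

// When --memory carries a non-zero size, synthesize a single zone
// on the fly so the rest of the code only ever deals with zones.
fn effective_zones(size: u64, zones: Vec<MemoryZoneConfig>) -> Vec<MemoryZoneConfig> {
    if size != 0 {
        vec![MemoryZoneConfig { size }]
    } else {
        zones
    }
}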
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
After the introduction of user defined memory zones, we can now remove
the deprecated 'file' option from --memory parameter. This makes this
parameter simpler, letting more advanced users define their own custom
memory zones through the dedicated parameter.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
User defined memory regions can now be snapshotted and restored, therefore
this commit removes the restrictions that were applied by an earlier
commit.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By factorizing a lot of code into create_ram_region(), this commit
simplifies the restore codepath. Additionally, it makes user defined
memory zones compatible with snapshot/restore.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
First, this patch introduces a new function to identify whether a file
descriptor is linked to any hard link on the system. This lets the VMM
know whether the file can be accessed by the user, or whether it will be
destroyed as soon as the VMM releases the file descriptor.
Based on this information, and associated with the knowledge about the
region being MAP_SHARED or not, the VMM can now decide to skip the copy
of the memory region content. If the user has access to the file from
the filesystem, and if the file has been mapped as MAP_SHARED, we can
consider the guest memory region content to be present in this file at
any point in time. That's why in this specific case, there's no need for
performing the copy of the memory region content into a dedicated file.
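A minimal sketch of that check, using plain std APIs rather than the
patch's actual helper:

use std::fs::File;
use std::os::unix::fs::MetadataExt;

// fstat(2) reports the link count: zero means the file only lives
// as long as this descriptor (e.g. an already unlinked temporary
// file), so its content would be lost once the VMM closes it.
fn has_hard_link(file: &File) -> std::io::Result<bool> {
    Ok(file.metadata()?.nlink() > 0)
}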
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Let's not assume that a backing file is going to be the result of a
snapshot for each memory region. These regions might be backed by
a file on the host filesystem (not a temporary file in host RAM), which
means they don't need to be copied and stored into dedicated files.
That's why this commit prepares for further changes by introducing an
optional PathBuf associated with the snapshot of each memory region.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
There will be some cases where the implementation of the snapshot()
function from the Snapshottable trait will require modifying some
internal data; therefore we make this possible by updating the trait
definition to snapshot(&mut self).
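Illustrative trait shape only; the payload and error types below are
placeholders, not the project's definitions:

pub struct Snapshot;
pub struct MigratableError;

pub trait Snapshottable {
    // Taking &mut self lets implementations update internal data
    // while producing the snapshot.
    fn snapshot(&mut self) -> Result<Snapshot, MigratableError>;
}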
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Adding a small test to validate that user defined memory zones work as
expected when using the --memory-zone option.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When the memory size is 0, the user defined memory zones are used as
the way to specify how to back the guest memory.
This is the first step in supporting complex use cases where the user
can define exactly which type of memory from the host should back the
memory from the guest.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In anticipation of the need to map part of a file with the function
create_ram_region(), it is extended to accept a file offset as an
argument.
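A sketch of the idea, assuming the vm-memory crate's FileOffset helper
(the function name here is illustrative):

use std::fs::File;
use vm_memory::{FileOffset, MmapRegion};

// Map `size` bytes starting at byte `offset` within `file`.
// Error handling elided for brevity.
fn map_file_at_offset(file: File, offset: u64, size: usize) -> MmapRegion {
    MmapRegion::from_file(FileOffset::new(file, offset), size).unwrap()
}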
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In case the provided backing file is an actual file and not a directory,
we should not truncate it, as we expect the file to already be the right
size.
This change will be important once we try to map the same file through
multiple memory mappings. We can't let the file be truncated as the
second mapping wouldn't work properly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Introducing a new CLI option --memory-zone letting the user specify
custom memory zones. When this option is present, the --memory size
must be explicitly set to 0.
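For instance, one possible invocation (zone options beyond the size are
omitted here):

./cloud-hypervisor ... --memory size=0 --memory-zone size=1G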
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Otherwise it seems to be able to cause resource conflicts with the
Windows ACPI_HAL driver. The OS might do a better job of assigning
resources to this device without them being requested explicitly. Ports
0xcf8 and 0xcfc are all that is certainly needed for PCI device
enumeration.
Signed-off-by: Anatol Belski <anatol.belski@microsoft.com>
Use of a backing file is deprecated, hence use the `hugepages` field.
Also use the `boot` field for specifying the number of CPUs.
Signed-off-by: Amey Narkhede <ameynarkhede02@gmail.com>
We may need to store hypervisor-specific data in the VM. This support is
needed for the Microsoft Hyper-V implementation. This patch introduces
two new definitions to the Vm trait and implements them for KVM.
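A hypothetical shape of the two new definitions; the method names and
types below are assumptions, not taken from the patch:

// Placeholder types for illustration only.
pub struct VmState;
pub struct Error;

pub trait Vm {
    // Retrieve hypervisor-specific VM state.
    fn state(&self) -> Result<VmState, Error>;
    // Restore hypervisor-specific VM state.
    fn set_state(&self, state: VmState) -> Result<(), Error>;
}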
Signed-off-by: Muminul Islam <muislam@microsoft.com>