Split the block device implementation into code that can be shared
between multiple virtio device implementations.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In order to simplify the transition to VirtioCommon and to avoid needing
to set empty fields, derive Default for VirtioCommon.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Rearrange the code to match other devices, which makes it easier to
prepare for sharing this code with other devices.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Move the if-let for the taps later, which makes the earlier activation
code identical to that of other devices.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Introduce VirtioCommon to help remove functionality and state that is
duplicated between implementations of VirtioDevice. Initially it only
handles feature acknowledgement and testing.
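A minimal sketch of the idea, with illustrative field and method names
(the derive(Default) ties in with the commit above that avoids having to
set empty fields):

    // Sketch only: names follow the commit description, not
    // necessarily the exact upstream code.
    #[derive(Default)]
    pub struct VirtioCommon {
        // Features offered by the device.
        pub avail_features: u64,
        // Features the guest driver has acknowledged.
        pub acked_features: u64,
    }

    impl VirtioCommon {
        // Record a feature page written by the guest, ignoring bits
        // the device never offered.
        pub fn ack_features(&mut self, value: u64) {
            let unrequested = value & !self.avail_features;
            if unrequested != 0 {
                eprintln!("Unknown feature bits acked: {:x}", unrequested);
            }
            self.acked_features |= value & self.avail_features;
        }

        // Check whether a single feature bit has been negotiated.
        pub fn feature_acked(&self, feature: u64) -> bool {
            self.acked_features & (1 << feature) != 0
        }
    }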
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This patch ports many tests to the new methodology, where the guest log
is printed only when the test fails.
Things to finish in follow-up PRs:
1. Special tests not ported yet include: test_reboot,
test_bzimage_reboot, test_serial_null(), test_serial_tty(),
test_serial_file(), test_virtio_console(), test_console_file(),
and test_simple_launch.
2. A few direct calls to 'Command::new(clh_command("cloud-hypervisor"))'
still print the guest console output.
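For illustration, a hypothetical sketch of the print-only-on-failure
pattern (the helper name and signature below are not the actual test
code):

    use std::panic;
    use std::process::Output;

    // Hypothetical helper: run the test body and only dump the
    // captured guest output if the body panicked, then re-raise
    // the panic so the test still fails.
    fn print_output_on_failure<F>(output: &Output, body: F)
    where
        F: FnOnce() + panic::UnwindSafe,
    {
        if let Err(e) = panic::catch_unwind(body) {
            println!("stdout: {}", String::from_utf8_lossy(&output.stdout));
            println!("stderr: {}", String::from_utf8_lossy(&output.stderr));
            panic::resume_unwind(e);
        }
    }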
Signed-off-by: Bo Chen <chen.bo@intel.com>
Writing some new documentation to help users understand how the guest
memory can be described through Cloud-Hypervisor parameters.
Fixes #1659
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By extending the existing NUMA integration test, this commit validates
that the proper distances between NUMA nodes are exposed to the guest.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By introducing the SLIT (System Locality Distance Information Table), we
provide the guest with the distance between each pair of nodes. This
lets the user describe the NUMA topology in detail, so that slower
memory backing the VM can be exposed as being further away from other
nodes.
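For reference, a rough sketch of how the SLIT payload is laid out after
the standard 36-byte ACPI header: a u64 locality count followed by an
N x N byte matrix of distances (the function name and types here are
illustrative, not the actual code):

    // Build the SLIT body: locality count, then an N x N matrix where
    // entry [i][j] is the distance from node i to node j. The distance
    // from a node to itself is 10 by ACPI convention.
    fn slit_body(distances: &[Vec<u8>]) -> Vec<u8> {
        let n = distances.len();
        let mut body = Vec::with_capacity(8 + n * n);
        body.extend_from_slice(&(n as u64).to_le_bytes());
        for (i, row) in distances.iter().enumerate() {
            for (j, d) in row.iter().enumerate() {
                body.push(if i == j { 10 } else { *d });
            }
        }
        body
    }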
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the NumaConfig, which now provides distance information, we can
internally update the list of NUMA nodes with their exact distances to
the other nodes.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By introducing the 'distances' option, we let the user describe a list
of destination NUMA nodes along with their distances relative to the
current node (defined through 'id').
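For instance, a hypothetical invocation could look as follows (the
'destination@distance' pair syntax is an assumption here; the ':' list
separator mirrors the 'cpus' examples further down):

./cloud-hypervisor ... --numa id=0,distances=1@15:2@20

which would place node 0 at distance 15 from node 1 and at distance 20
from node 2.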
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Extend the existing NUMA integration test to validate that specifying
CPUs for each NUMA node gets propagated to the guest.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the list of CPUs related to each NUMA node, Processor Local
x2APIC Affinity structures are created and included in the SRAT table.
This describes which CPUs are part of each node.
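The layout of that structure, as defined by the ACPI specification
(type 2, 24 bytes); the Rust struct and field names below are
illustrative:

    // SRAT Processor Local x2APIC Affinity structure (ACPI spec).
    #[repr(C, packed)]
    struct ProcessorLocalX2ApicAffinity {
        r#type: u8,            // 2 = Processor Local x2APIC Affinity
        length: u8,            // 24 bytes
        _reserved1: u16,
        proximity_domain: u32, // NUMA node this CPU belongs to
        x2apic_id: u32,        // x2APIC ID of the CPU
        flags: u32,            // bit 0: entry enabled
        clock_domain: u32,
        _reserved2: u32,
    }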
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Relying on the list of CPUs defined through the NumaConfig, this patch
updates the internal list of CPUs attached to each NUMA node.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Through this new parameter, we give users the opportunity to specify a
set of CPUs attached to a NUMA node that has been previously created
through the --memory-zone parameter.
This parameter will be extended in the future to describe the distance
between multiple nodes.
For instance, if a user wants to attach CPUs 0, 1, 2 and 6 to a NUMA
node, here are two different ways of doing so:
Either
./cloud-hypervisor ... --numa id=0,cpus=0-2:6
Or
./cloud-hypervisor ... --numa id=0,cpus=0:1:2:6
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This new test validates that the guest OS can find the NUMA nodes
defined by the user through the CLI, and that the right amount of
memory is associated with each node.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The SRAT (System Resource Affinity Table) is needed to describe NUMA
nodes and how memory ranges and CPUs are attached to them.
For now it simply includes a list of Memory Affinity structures based on
the list of NUMA nodes created by the VMM.
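For reference, the Memory Affinity structure layout from the ACPI
specification (type 1, 40 bytes); the Rust struct and field names below
are illustrative:

    // SRAT Memory Affinity structure (ACPI spec).
    #[repr(C, packed)]
    struct MemoryAffinity {
        r#type: u8,            // 1 = Memory Affinity
        length: u8,            // 40 bytes
        proximity_domain: u32, // NUMA node owning this memory range
        _reserved1: u16,
        base_addr_lo: u32,     // low 32 bits of the range base address
        base_addr_hi: u32,     // high 32 bits of the range base address
        length_lo: u32,        // low 32 bits of the range length
        length_hi: u32,        // high 32 bits of the range length
        _reserved2: u32,
        flags: u32,            // bit 0: enabled, bit 1: hot-pluggable
        _reserved3: u64,
    }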
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the 'guest_numa_node' option, we create and store a list of
NUMA nodes in the MemoryManager. The point is to associate a list of
memory regions with each node, so that we can later create the ACPI
tables with the proper memory range information.
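An illustrative shape for this bookkeeping (the types are simplified;
the real code tracks guest memory region references rather than raw
address/size pairs):

    use std::collections::BTreeMap;

    // Each guest NUMA node id maps to the resources attached to it.
    // Later commits fill in the CPUs and distances as well.
    #[derive(Default)]
    struct NumaNode {
        memory_regions: Vec<(u64, u64)>, // (guest address, size) pairs
        cpus: Vec<u8>,                   // vCPU ids attached to the node
        distances: BTreeMap<u32, u8>,    // destination node id -> distance
    }

    type NumaNodes = BTreeMap<u32, NumaNode>;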
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
With the introduction of this new option, the user will be able to
describe whether a particular memory zone should belong to a specific
NUMA node from the guest's perspective.
For instance, using '--memory-zone size=1G,guest_numa_node=2' would let
the user describe that a memory zone of 1G in the guest should be
exposed as being associated with NUMA node 2.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Given that ACPI uses u32 as the type for the Proximity Domain, we can
use u32 instead of u64 as the type for the 'host_numa_node' option.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>