By adding a new parameter 'id' to the virtiomem_resize() function, we
prepare this function to be usable for both global memory resizing and
memory zone resizing.
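A minimal sketch of how the extended function might look (all names and types here are assumptions, not taken from the actual code):

    use std::collections::HashMap;

    struct ResizeHandler; // stand-in for the per-device resize handler
    impl ResizeHandler {
        fn resize(&self, size: u64) -> Result<(), String> {
            println!("resizing to {} bytes", size);
            Ok(())
        }
    }

    struct MemoryManager {
        handlers: HashMap<String, ResizeHandler>,
    }

    impl MemoryManager {
        // 'id' selects which virtio-mem device to resize, covering both
        // global memory resizing and per-zone resizing.
        fn virtiomem_resize(&self, id: &str, size: u64) -> Result<(), String> {
            self.handlers
                .get(id)
                .ok_or_else(|| format!("unknown memory zone '{}'", id))?
                .resize(size)
        }
    }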
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
It's important to return the region covered by virtio-mem the first time
it is inserted as the device manager must update all devices with this
information.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the previous code changes, we can now update the MemoryManager
code to create one virtio-mem region and one resizing handler per memory
zone. This naturally creates one virtio-mem device per memory zone from
the DeviceManager's code, which was previously updated as well.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In anticipation of resizing support for individual memory zones,
this commit introduces a new option 'hotplug_size' to the '--memory-zone'
parameter. This defines the amount of memory that can be added through
each specific memory zone.
Because memory zone resizing is tied to virtio-mem, make sure the user
selects the 'virtio-mem' hotplug method, otherwise return an error.
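An illustrative invocation combining these options (exact syntax assumed, not taken from this commit):

./cloud-hypervisor ... --memory size=0,hotplug_method=virtio-mem --memory-zone id=mem0,size=1G,hotplug_size=1G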
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Both MemoryManager and DeviceManager are updated through this commit to
handle the creation of multiple virtio-mem devices if needed. For now,
only the framework is in place, but the behavior remains the same,
meaning only the memory zone created from '--memory' generates a
virtio-mem region that can be used for resizing.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to anticipate the need for storing memory regions along with
virtio-mem information for each memory zone, we create a new structure
MemoryZone that replaces Vec<Arc<GuestRegionMmap>> in the hash map
MemoryZones.
This makes things more logical, as MemoryZones becomes a collection of
MemoryZone entries keyed by their identifier.
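A minimal sketch of what these types might look like (field names are assumptions, not taken from the actual code):

    use std::collections::HashMap;
    use std::sync::Arc;

    struct GuestRegionMmap; // stand-in for vm-memory's mmap-backed region type
    struct VirtioMemZone;   // stand-in for the per-zone virtio-mem state

    // One entry per user defined memory zone.
    struct MemoryZone {
        regions: Vec<Arc<GuestRegionMmap>>,
        virtio_mem_zone: Option<VirtioMemZone>,
    }

    // Zones keyed by their unique identifier.
    type MemoryZones = HashMap<String, MemoryZone>;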
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Inject CPUID leaves for advertising KVM HyperV support when the
"kvm_hyperv" toggle is enabled. Currently we only enable a selection of
features required to boot.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Currently we don't need to do anything to service these exits but when
the synthetic interrupt controller is active an exit will be triggered
to notify the VMM of details of the synthetic interrupt page.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Some of the io_uring setup happens upon activation of the virtio-blk
device, which is initially triggered through an MMIO VM exit. That's why
the seccomp filters applied to the vCPU threads must allow the io_uring
related syscalls.
This commit ensures the virtio-blk io_uring implementation can be used
with the seccomp filters enabled.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Extract common code for adding devices to the PCI bus into its own
function from the VFIO and VIRTIO code paths.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This removes the dependency of the pci crate on the devices crate which
now only contains the device implementations themselves.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The goal of this commit is to rename the existing NUMA option 'id' to
'guest_numa_id'. This is done without any modification to the way this
option behaves.
The rename is motivated by the observation that every other parameter
with an option called 'id' expects a string to be provided. Because in
this particular case we expect a u32 representing a proximity domain
from the ACPI specification, it's better to give it a more explicit
name.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The way to describe guest NUMA nodes has been updated through previous
commits, letting the user describe the full NUMA topology through the
--numa parameter (or NumaConfig).
That's why we can remove the deprecated and unused 'guest_numa_node'
option.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the previous changes introducing new options for both memory
zones and NUMA configuration, this patch changes the behavior of the
NUMA node definition. Instead of relying on the memory zones to define
the guest NUMA nodes, everything goes through the --numa parameter. This
allows for defining NUMA nodes without associating any particular memory
range with them. And in case one wants to associate one or multiple
memory ranges with a node, the expectation is to describe a list of
memory zones through the --numa parameter, as illustrated below.
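An illustrative invocation associating two memory zones with one guest NUMA node (exact option syntax assumed):

./cloud-hypervisor ... --memory size=0 --memory-zone id=mem0,size=1G id=mem1,size=1G --numa id=0,memory_zones=mem0:mem1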
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This new option provides a new way to describe the memory associated
with a NUMA node. This is the first step before we can remove the
'guest_numa_node' option from the --memory-zone parameter.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that we have an identifier per memory zone, and in order to keep
track of the memory regions associated with the memory zones, we create
and store a map referencing the list of memory regions per memory zone
ID.
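A minimal sketch of the mapping described here (GuestRegionMmap stands in for the vm-memory type):

    use std::collections::HashMap;
    use std::sync::Arc;

    struct GuestRegionMmap; // stand-in for vm_memory::GuestRegionMmap

    // Memory regions tracked per memory zone identifier.
    type MemoryZones = HashMap<String, Vec<Arc<GuestRegionMmap>>>;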
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In anticipation of allowing memory zones to be removed, but also in
anticipation of refactoring the NUMA parameter, we introduce a mandatory
'id' option to the --memory-zone parameter.
This forces the user to provide a unique identifier for each memory zone
so that we can refer to them later.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By introducing the SLIT (System Locality Distance Information Table), we
provide the guest with the distance between each node. This lets the
user describe the NUMA topology in great detail, so that slower memory
backing the VM can be exposed as being further away from other nodes.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the NumaConfig, which now provides distance information, we can
internally update the list of NUMA nodes with the exact distances
separating them from the other nodes.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By introducing the 'distances' option, we let the user describe a list
of destination NUMA nodes along with their distances relative to the
current node (defined through 'id').
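An illustrative invocation (the destination@distance list syntax is an assumption, not taken from this commit):

./cloud-hypervisor ... --numa id=0,distances=1@15:2@25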
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the list of CPUs related to each NUMA node, Processor Local
x2APIC Affinity structures are created and included into the SRAT table.
This describes which CPUs are part of each node.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Relying on the list of CPUs defined through the NumaConfig, this patch
updates the internal list of CPUs attached to each NUMA node.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Through this new parameter, we give users the opportunity to specify a
set of CPUs attached to a NUMA node that has been previously created
from the --memory-zone parameter.
This parameter will be extended in the future to describe the distance
between multiple nodes.
For instance, if a user wants to attach CPUs 0, 1, 2 and 6 to a NUMA
node, here are two different ways of doing so:
Either
./cloud-hypervisor ... --numa id=0,cpus=0-2:6
Or
./cloud-hypervisor ... --numa id=0,cpus=0:1:2:6
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The SRAT table (System Resource Affinity Table) is needed to describe
NUMA nodes and how memory ranges and CPUs are attached to them.
For now, it simply includes a list of Memory Affinity structures based
on the list of NUMA nodes created by the VMM.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the 'guest_numa_node' option, we create and store a list of
NUMA nodes in the MemoryManager. The point is to associate a list of
memory regions with each node, so that we can later create the ACPI
tables with the proper memory range information.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
With the introduction of this new option, the user is able to describe
whether a particular memory zone should belong to a specific NUMA node
from a guest perspective.
For instance, using '--memory-zone size=1G,guest_numa_node=2' lets the
user describe that a memory zone of 1G in the guest should be exposed as
being associated with NUMA node 2.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Given that ACPI uses u32 as the type for the Proximity Domain, we can
use u32 instead of u64 as the type for the 'host_numa_node' option.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
"struct MemoryConfig" has balloon_size but not in MemoryConfig
of cloud-hypervisor.yaml.
This commit adds it.
Signed-off-by: Hui Zhu <teawater@antfin.com>
Let's narrow down the limitation related to mbind() by allowing shared
mappings backed by a file that lives in RAM. This leaves the restriction
in place only for mappings backed by a regular file.
With this patch, a host NUMA node can be specified even when using
vhost-user devices.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Relying on the new option 'host_numa_node' from the 'memory-zone'
parameter, the user can now define which NUMA node from the host
should be used to back the current memory zone.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Since memory zones have been introduced, it is now possible for a user
to specify multiple backends for the guest RAM. By adding a new option
'host_numa_node' to the 'memory-zone' parameter, we allow the guest RAM
to be backed by memory that might come from a specific NUMA node on the
host.
The option expects a node identifier, specifying which NUMA node should
be used to allocate the memory associated with a specific memory zone.
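An illustrative invocation backing a 1G zone with memory from host NUMA node 0 (exact syntax assumed):

./cloud-hypervisor ... --memory size=0 --memory-zone size=1G,host_numa_node=0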
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The 'mergeable' flag should only apply to the entire guest RAM, which is
why it is removed from the MemoryZoneConfig and defined as a global
parameter at the MemoryConfig level.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The 'cmdline' parameter should not be required, as it is not needed when
the 'kernel' parameter points to the rust-hypervisor-fw binary, in which
case the kernel and the associated command line are found on the EFI
partition.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Factorize the codepath between simple memory and multiple memory zones.
This simplifies the way regions are memory mapped, as everything relies
on the same codepath. This is performed by creating a memory zone on the
fly for the specific use case where --memory is used with a non-zero
size. Internally, the code can rely on memory zones to create the memory
regions forming the guest memory.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
After the introduction of user defined memory zones, we can now remove
the deprecated 'file' option from --memory parameter. This makes this
parameter simpler, letting more advanced users define their own custom
memory zones through the dedicated parameter.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
User defined memory regions can now support being snapshotted and
restored, therefore this commit removes the restrictions that were
applied through an earlier commit.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By factorizing a lot of code into create_ram_region(), this commit
simplifies the restore codepath. Additionally, it makes user defined
memory zones compatible with snapshot/restore.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
First, this patch introduces a new function to identify whether a file
descriptor is linked to any hard link on the system. This lets the
VMM know if the file can be accessed by the user, or if the file will
be destroyed as soon as the VMM releases the file descriptor.
Based on this information, combined with the knowledge of whether the
region is mapped MAP_SHARED, the VMM can now decide to skip the copy
of the memory region content. If the user has access to the file from
the filesystem, and if the file has been mapped as MAP_SHARED, we can
consider the guest memory region content to be present in this file at
any point in time. That's why, in this specific case, there's no need
to copy the memory region content into a dedicated file.
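A minimal sketch of how such a check could look (the function name is hypothetical; nlink() comes from std):

    use std::fs::File;
    use std::io;
    use std::os::unix::fs::MetadataExt;

    // Returns true if the file behind this descriptor still has at least
    // one hard link, i.e. it remains reachable from the filesystem after
    // the descriptor is dropped.
    fn fd_has_hard_links(file: &File) -> io::Result<bool> {
        Ok(file.metadata()?.nlink() > 0)
    }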
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Let's not assume that a backing file is going to be the result of
a snapshot for each memory region. These regions might be backed by
a file on the host filesystem (not a temporary file in host RAM), which
means they don't need to be copied and stored into dedicated files.
That's why this commit prepares for further changes by introducing an
optional PathBuf associated with the snapshot of each memory region.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
There will be some cases where the implementation of the snapshot()
function from the Snapshottable trait will require modifying some
internal data, therefore we make this possible by updating the trait
definition with snapshot(&mut self).
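A minimal sketch of the updated trait definition (the Snapshot and error types are simplified stand-ins):

    struct Snapshot;        // stand-in for the real snapshot payload
    struct MigratableError; // stand-in for the real error type

    trait Snapshottable {
        // Taking &mut self lets implementations update internal state
        // while producing the snapshot.
        fn snapshot(&mut self) -> Result<Snapshot, MigratableError>;
    }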
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In case the memory size is 0, this means the user defined memory
zones are used as a way to specify how to back the guest memory.
This is the first step in supporting complex use cases where the user
can define exactly which type of memory from the host should back the
memory from the guest.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In anticipation of the need to map part of a file with the function
create_ram_region(), it is extended to accept a file offset as an
argument.
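A hypothetical sketch of the extended signature (parameter names and types assumed, not taken from the actual code):

    use std::path::Path;

    struct GuestAddress(u64); // stand-in for vm_memory::GuestAddress
    struct GuestRegionMmap;   // stand-in for the mmap-backed region type
    struct Error;             // stand-in for the MemoryManager error type

    // 'file_offset' selects where within the backing file the mapping
    // starts, allowing several regions to share a single file.
    fn create_ram_region(
        backing_file: Option<&Path>,
        file_offset: u64,
        start_addr: GuestAddress,
        size: usize,
    ) -> Result<GuestRegionMmap, Error> {
        // mmap [file_offset, file_offset + size) from the file (or
        // anonymous memory) at 'start_addr' in the guest address space.
        let _ = (backing_file, file_offset, start_addr, size);
        Ok(GuestRegionMmap)
    }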
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In case the provided backing file is an actual file and not a directory,
we should not truncate it, as we expect the file to already be the right
size.
This change will be important once we try to map the same file through
multiple memory mappings. We can't let the file be truncated as the
second mapping wouldn't work properly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Introducing a new CLI option --memory-zone letting the user specify
custom memory zones. When this option is present, the --memory size
must be explicitly set to 0.
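An illustrative invocation defining two custom memory zones (exact syntax assumed):

./cloud-hypervisor ... --memory size=0 --memory-zone size=1G,file=/dev/shm size=1G,file=/dev/hugepages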
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Otherwise it seems to be able to cause resource conflicts with the
Windows ACPI_HAL. The OS might do a better job at assigning resources
to this device without them being requested explicitly. Only 0xcf8 and
0xcfc are certainly needed for the PCI device enumeration.
Signed-off-by: Anatol Belski <anatol.belski@microsoft.com>
Some OSes might check for duplicates and bail out if they can't create a
distinct mapping. According to ACPI 5.0 section 6.1.12, while _UID is
optional, it becomes required when there are multiple devices with the
same _HID.
Signed-off-by: Anatol Belski <ab@php.net>
The brk syscall is not always called, as the system might not need it.
But when the API thread does need it, the thread is terminated because
brk is not part of the authorized list of syscalls.
This should fix some sporadic failures on the CI with the musl build.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Add mprotect to the API thread rules. This prevents the VMM from being
killed when mprotect is used.
Signed-off-by: Jose Carlos Venegas Munoz <jose.carlos.venegas.munoz@intel.com>
This patch adds the seccomp_filter module to the virtio-devices crate,
taking reference code from the vmm crate. It also adds an allowed-list
for the virtio-block worker thread.
Partially fixes: #925
Signed-off-by: Bo Chen <chen.bo@intel.com>
This patch propagates the SeccompAction value from main to the
Vm struct constructor (i.e. Vm::new_from_memory_manager), so that we can
use it to construct the DeviceManager and CpuManager structs for
controlling the behavior of the seccomp filters for vcpu/virtio-device
worker threads.
Signed-off-by: Bo Chen <chen.bo@intel.com>
This patch extends the CLI option '--seccomp' to accept the 'log'
parameter in addition to 'true/false'. It also refactors the
vmm::seccomp_filters module to support both "SeccompAction::Trap" and
"SeccompAction::Log".
Fixes: #1180
Signed-off-by: Bo Chen <chen.bo@intel.com>
This patch replaces the usage of 'SeccompLevel' with 'SeccompAction',
which is the first step to support the 'log' action over system
calls that are not on the allowed list of seccomp filters.
Signed-off-by: Bo Chen <chen.bo@intel.com>
By adding a new io_uring feature gate, we give the user the possibility
to choose whether or not to enable the io_uring improvements.
Since the io_uring feature depends on the availability of recent host
kernels, it's better if we leave it off for now.
As soon as our CI has support for a kernel 5.6 with all the features
needed from io_uring, we'll enable this feature gate permanently.
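Building with the gate enabled might look like this (standard cargo syntax, feature name from this commit):

cargo build --features io_uring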
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In case the host supports io_uring and the specific io_uring options
needed, the VMM will choose the asynchronous version of virtio-blk.
This enables better I/O performance compared to the default
synchronous version.
It is also important to note that the VMM won't be able to use the
asynchronous version if the backend image is in QCOW format.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Cloud Hypervisor allows either the serial or the virtio console to
output to the TTY, but TTY input is pushed to both.
This is not correct. When a Linux guest is configured to spawn TTYs on
both ttyS0 and hvc0, the user effectively issues the same commands twice
in different TTYs.
Fix this by directing input only to the one device that is using the
host side TTY.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This commit fixes a "Bad syscall" error when shutting down the VM
on AArch64 by adding the SYS_unlinkat syscall to the seccomp
whitelist.
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
Ensure that the width of the I/O port is correctly set to 32 bits in the
generic address used for the X_PM_TMR_BLK. Do this by type
parameterising the GenericAddress::io_port_address() function.
TEST=Boot with clocksource=acpi_pm and observe no errors in the dmesg.
Fixes: #1496
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This is a counter exposed via an I/O port that runs at 3.579545 MHz.
Here we use a hardcoded I/O port and expose the details through the FADT
table.
TEST=Boot Linux kernel and see the following in dmesg:
[ 0.506198] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
We store the device passthrough handler, so we should use it through our
internal API and only carry the passed through device configuration.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Shrink the GICDevice trait to contain hypervisor agnostic APIs only,
which are used in generating the FDT.
Move all KVM specific logic into the KvmGICDevice trait.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Make set_gsi_routing take a list of IrqRoutingEntry. The construction of
hypervisor specific structure is left to set_gsi_routing.
Now set_gsi_routes, which is part of the interrupt module, is only
responsible for constructing a list of routing entries.
This further splits hypervisor specific code from hypervisor agnostic
code.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
That function is going to return a handle for passthrough related
operations.
Move create_kvm_device code there.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
SGX expects the EPC region to be reported as "reserved" from the e820
table. This patch adds a new entry to the table if SGX is enabled.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The support for SGX is exposed to the guest through CPUID 0x12. KVM
passes static subleaves 0 and 1 from the host to the guest, without
needing any modification from the VMM itself.
But SGX also relies on dynamic subleaves 2 through N, used for
describing each EPC section. This is not handled by KVM, which means
the VMM is in charge of setting each subleaf starting from index 2
up to index N, depending on the number of EPC sections.
These subleaves 2 through N are not listed as part of the supported
CPUID entries from KVM. But it's important to set them as long as
indexes 0 and 1 are present and indicate that SGX is supported.
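A minimal sketch of how one EPC section could be encoded into a CPUID 0x12 subleaf, following the Intel SDM register layout (the function name is hypothetical):

    // Encode one EPC section (base address and size in bytes) into the
    // EAX/EBX/ECX/EDX values of a CPUID 0x12 subleaf with index >= 2.
    fn encode_epc_subleaf(base: u64, size: u64) -> (u32, u32, u32, u32) {
        let eax = 0x1 | (base & 0xffff_f000) as u32; // type 1 = valid EPC section, base[31:12]
        let ebx = ((base >> 32) & 0xf_ffff) as u32;  // base[51:32]
        let ecx = 0x1 | (size & 0xffff_f000) as u32; // section properties, size[31:12]
        let edx = ((size >> 32) & 0xf_ffff) as u32;  // size[51:32]
        (eax, ebx, ecx, edx)
    }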
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of passing the GuestMemoryMmap directly to the CpuManager upon
its creation, it's better to pass a reference to the MemoryManager. This
way we will be able to know whether an SGX EPC region, along with one or
multiple sections, is present.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The SGX EPC region must be exposed through the ACPI tables so that the
guest can detect its presence. The guest only gets the full range from
ACPI, as the specific EPC sections are directly described through the
CPUID of each vCPU.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the presence of one or multiple SGX EPC sections from the VM
configuration, the MemoryManager will allocate a contiguous block of
guest address space to hold the entire EPC region. Within this EPC
region, each EPC section is memory mapped.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Introducing the new CLI option --sgx-epc along with the OpenAPI
structure SgxEpcConfig, so that a user can now enable one or multiple
SGX Enclave Page Cache sections within a contiguous region from the
guest address space.
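An illustrative invocation defining a single EPC section (exact option syntax assumed):

./cloud-hypervisor ... --sgx-epc size=64M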
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>