Commit Graph

984 Commits

Sebastien Boeuf
1e1a50ef70 vmm: Update memory configuration upon virtio-mem resizing
Building on the preparatory work from the previous commits, this patch
updates the 'hotplugged_size' field of both the MemoryConfig and
MemoryZoneConfig structures when either the whole guest memory or a
single memory zone is resized.

This fixes the lack of support for rebooting a VM with the right amount
of memory plugged in.
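
A minimal sketch of the idea (on_zone_resized is a hypothetical helper;
MemoryZoneConfig is simplified here):

	// Hypothetical sketch: persist the resized amount so that a
	// reboot re-plugs the same amount of memory.
	struct MemoryZoneConfig {
	    size: u64,
	    hotplugged_size: Option<u64>,
	}

	fn on_zone_resized(cfg: &mut MemoryZoneConfig, new_size: u64) {
	    // Recorded here, re-applied when the VM boots again.
	    cfg.hotplugged_size = Some(new_size);
	}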

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Sebastien Boeuf
de2b917f55 vmm: Add hotplugged_size to VirtioMemZone
Add a new field to the VirtioMemZone structure, letting us associate
with each virtio-mem region the amount of memory that should be plugged
in at boot.
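
A rough sketch of the resulting shape, with simplified field types (the
real structure holds the actual region and resize handler objects):

	struct VirtioMemZone {
	    region_start: u64,    // guest address of the virtio-mem region
	    hotplug_size: u64,    // maximum amount that can be plugged
	    hotplugged_size: u64, // amount plugged in at boot
	}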

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Sebastien Boeuf
3faf8605f3 vmm: Group virtio-mem fields under a dedicated structure
This patch simplifies the code, as we now have a single Option for the
VirtioMemZone. It also prepares for storing additional information
related to the virtio-mem region.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Sebastien Boeuf
4e1b78e1ff vmm: Add 'hotplugged_size' to memory parameters
Add the new option 'hotplugged_size' to both the --memory-zone and
--memory parameters so that the user can specify a certain amount of
memory to be plugged in at boot.

This is also part of ensuring the virtio-mem size can be preserved
across a reboot of the VM.
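
For example, booting with part of the hotpluggable memory already
plugged in might look like this (sizes are illustrative):
	./cloud-hypervisor ... --memory size=1G,hotplug_method=virtio-mem,hotplug_size=8G,hotplugged_size=2G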

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Hui Zhu
33a1e37c35 virtio-devices: mem: Allow for an initial size
This commit makes it possible to create a virtio-mem device with some
memory already plugged into it. This is preliminary work for being able
to reboot a VM with the virtio-mem region already resized.

Signed-off-by: Hui Zhu <teawater@antfin.com>
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Sebastien Boeuf
8b5202aa5a vmm: Always add virtio-mem region upon VM creation
Now that the e820 table is created from the 'boot_guest_memory', we can
simplify the memory manager code by adding the virtio-mem regions as
soon as they are created. There's no need to wait for the first hotplug
to insert these regions.

This also anticipates the need for starting a VM with some memory
already plugged into the virtio-mem region.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Sebastien Boeuf
66fc557015 vmm: Store boot guest memory and use it for boot sequence
In order to differentiate the 'boot' memory regions from the virtio-mem
regions, we store what we call 'boot_guest_memory'. This is useful for
providing the correct list of regions to the configure_system()
function, as it expects only the regions that should be exposed through
the e820 table.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Sebastien Boeuf
1798ed8194 vmm: virtio-mem: Enforce alignment and size requirements
The virtio-mem driver generates warnings regarding both the size and
the alignment of the virtio-mem region when they are not based on 128MiB:

The alignment of the physical start address can make some memory
unusable.
The alignment of the physical end address can make some memory
unusable.

For these reasons, this patch enforces virtio-mem regions to be 128MiB
aligned and checks that the size provided by the user is a multiple of
128MiB.
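
A sketch of the kind of validation involved (the helper name is
hypothetical):

	const VIRTIO_MEM_ALIGN: u64 = 128 << 20; // 128MiB

	// Hypothetical helper: reject sizes that are not 128MiB multiples,
	// as the guest driver would otherwise leave some memory unusable.
	fn check_virtio_mem_size(size: u64) -> Result<(), String> {
	    if size % VIRTIO_MEM_ALIGN != 0 {
	        return Err("virtio-mem size must be a multiple of 128MiB".into());
	    }
	    Ok(())
	}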

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Sebastien Boeuf
eb7b923e22 vmm: Create virtio-mem device with appropriate NUMA node
Now that the virtio-mem device accepts a guest NUMA node as a
parameter, we retrieve this information from the list of NUMA nodes.
Based on the memory zone associated with the virtio-mem device, we
obtain the NUMA node identifier, which we provide to the virtio-mem
device.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Sebastien Boeuf
dcedd4cded virtio-devices: virtio-mem: Add NUMA support
Implement support for associating a virtio-mem device with a specific
guest NUMA node, based on the ACPI proximity domain identifier.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Sebastien Boeuf
0658559880 vmm: memory_manager: Rename 'use_zones' with 'user_provided_zones'
This brings more clarity to the meaning of this boolean.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Sebastien Boeuf
775f3346e3 vmm: Rename 'virtiomem' to 'virtio_mem'
For more consistency and better readability, this commit renames all
'virtiomem*' variables to 'virtio_mem*'.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Sebastien Boeuf
015c78411e vmm: Add a 'resize-zone' action to the API actions
Implement a new VM action called 'resize-zone', allowing the user to
resize one specific memory zone at a time. This relies on all the
preliminary work from the previous commits to resize each virtio-mem
device independently of the others.
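
Assuming a memory zone with id 'mem0', resizing it through ch-remote
might look like:
	./ch-remote --api-socket=/tmp/ch.sock resize-zone --id mem0 --size 4G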

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Sebastien Boeuf
141df701dd vmm: memory_manager: Make virtiomem_resize function generic
By adding a new parameter 'id' to the virtiomem_resize() function, we
prepare this function to be usable for both global memory resizing and
memory zone resizing.
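
A minimal sketch of the dispatch this enables, with simplified types
(the handler map and error type are hypothetical):

	use std::collections::HashMap;

	// Hypothetical sketch: one resize handler per virtio-mem device,
	// looked up by the zone identifier passed as 'id'.
	fn virtio_mem_resize(
	    handlers: &HashMap<String, Box<dyn Fn(u64)>>,
	    id: &str,
	    size: u64,
	) -> Result<(), String> {
	    match handlers.get(id) {
	        Some(resize) => {
	            resize(size);
	            Ok(())
	        }
	        None => Err(format!("unknown memory zone: {}", id)),
	    }
	}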

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Sebastien Boeuf
34331d3e72 vmm: memory_manager: Fix virtio-mem resize
It's important to return the region covered by virtio-mem the first
time it is inserted, as the device manager must update all devices with
this information.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Sebastien Boeuf
adc59a6f15 vmm: memory_manager: Create one virtio-mem per memory zone
Based on the previous code changes, we can now update the MemoryManager
code to create one virtio-mem region and resize handler per memory zone.
The DeviceManager code, which has been updated in previous commits, will
then naturally create one virtio-mem device per memory zone.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Sebastien Boeuf
c645a72c17 vmm: Add 'hotplug_size' to memory zones
In anticipation of resizing support for individual memory zones, this
commit introduces a new option 'hotplug_size' to the '--memory-zone'
parameter. This defines the amount of memory that can be added through
each specific memory zone.

Because memory zone resize is tied to virtio-mem, make sure the user
selects the 'virtio-mem' hotplug method, otherwise return an error.
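
A zone allowing up to 4G of hotplug might look like this (sizes are
illustrative):
	./cloud-hypervisor ... --memory size=0,hotplug_method=virtio-mem --memory-zone id=mem0,size=1G,hotplug_size=4G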

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Sebastien Boeuf
30ff7e108f vmm: Prepare code to accept multiple virtio-mem devices
Both the MemoryManager and the DeviceManager are updated through this
commit to handle the creation of multiple virtio-mem devices if needed.
For now, only the framework is in place and the behavior remains the
same: only the memory zone created from '--memory' generates a
virtio-mem region that can be used for resize.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Sebastien Boeuf
b173b6c5b4 vmm: Create a MemoryZone structure
In order to anticipate the need for storing memory regions along with
virtio-mem information for each memory zone, we create a new structure
MemoryZone that replaces Vec<Arc<GuestRegionMmap>> in the hash map
MemoryZones.

This makes things more logical, as MemoryZones becomes a collection of
MemoryZone entries keyed by their identifier.
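
A rough sketch of the new shape (simplified; more fields come in later
commits):

	use std::collections::HashMap;
	use std::sync::Arc;
	use vm_memory::GuestRegionMmap;

	// Each zone now carries its regions, with room for virtio-mem
	// information to be added later.
	struct MemoryZone {
	    regions: Vec<Arc<GuestRegionMmap>>,
	}

	type MemoryZones = HashMap<String, MemoryZone>;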

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 19:20:04 +02:00
Rob Bradford
27c28fa3b0 vmm, arch: Enable KVM HyperV support
Inject CPUID leaves for advertising KVM HyperV support when the
"kvm_hyperv" toggle is enabled. Currently we only enable a selection of
features required to boot.
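
An illustrative sketch of what injecting such leaves can look like (the
leaf number and signature come from the Hyper-V TLFS; the entry type
and the exact set of leaves shown here are simplified assumptions):

	// "Hv#1" interface signature, as a little-endian u32.
	const HYPERV_INTERFACE_SIGNATURE: u32 = 0x3123_7648;

	struct CpuidEntry {
	    function: u32,
	    eax: u32,
	    ebx: u32,
	    ecx: u32,
	    edx: u32,
	}

	fn hyperv_cpuid_leaves() -> Vec<CpuidEntry> {
	    vec![
	        // 0x40000001: Hyper-V interface identification.
	        CpuidEntry {
	            function: 0x4000_0001,
	            eax: HYPERV_INTERFACE_SIGNATURE,
	            ebx: 0,
	            ecx: 0,
	            edx: 0,
	        },
	    ]
	}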

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-09-16 16:08:01 +01:00
Rob Bradford
da642fcf7f hypervisor: Add "HyperV" exit to list of KVM exits
Currently we don't need to do anything to service these exits, but when
the synthetic interrupt controller is active, an exit will be triggered
to notify the VMM of the details of the synthetic interrupt page.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-09-16 16:08:01 +01:00
Rob Bradford
5495ab7415 vmm: Add "kvm_hyperv" toggle to "--cpus"
This turns on the KVM HyperV emulation.
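
For example:
	./cloud-hypervisor ... --cpus boot=1,kvm_hyperv=on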

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-09-16 16:08:01 +01:00
Sebastien Boeuf
b3435d51d9 vmm: cpu: Add missing io_uring syscalls to vCPU threads
Some of the io_uring setup happens upon activation of the virtio-blk
device, which is initially triggered through an MMIO VM exit. That's why
the vCPU threads must allow the io_uring related syscalls.

This commit ensures the virtio-blk io_uring implementation can be used
with the seccomp filters enabled.
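
A sketch of the syscalls involved (the constants come from the libc
crate; the helper is hypothetical):

	// Hypothetical helper: the io_uring syscalls that the vCPU thread
	// allow-list must additionally contain.
	fn vcpu_io_uring_syscalls() -> Vec<i64> {
	    vec![
	        libc::SYS_io_uring_setup,
	        libc::SYS_io_uring_enter,
	        libc::SYS_io_uring_register,
	    ]
	}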

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-16 11:59:47 +02:00
Bo Chen
9682d74763 vmm: seccomp: Add seccomp filters for signal_handler worker thread
This patch covers the last worker thread with dedicated seccomp filters.

Fixes: #925

Signed-off-by: Bo Chen <chen.bo@intel.com>
2020-09-11 07:42:31 +02:00
Bo Chen
2612a6df29 vmm: seccomp: Add seccomp filters for the vcpu worker thread
Partially fixes: #925

Signed-off-by: Bo Chen <chen.bo@intel.com>
2020-09-11 07:42:31 +02:00
Rob Bradford
d793cc4da3 vmm: device_manager: Extract common PCI code
Extract common code for adding devices to the PCI bus into its own
function from the VFIO and VIRTIO code paths.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-09-11 07:33:18 +02:00
Rob Bradford
15025d71b1 devices, vm-device: Move BusDevice and Bus into vm-device
This removes the dependency of the pci crate on the devices crate,
which now only contains the device implementations themselves.
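
A rough sketch of the trait after the move (simplified; the actual
definition lives in the vm-device crate):

	// Devices attached to an MMIO or PIO bus implement this trait.
	pub trait BusDevice: Send {
	    fn read(&mut self, base: u64, offset: u64, data: &mut [u8]);
	    fn write(&mut self, base: u64, offset: u64, data: &[u8]);
	}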

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-09-10 09:35:38 +01:00
dependabot-preview[bot]
f24a12913a build(deps): bump libc from 0.2.76 to 0.2.77
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.76 to 0.2.77.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.76...0.2.77)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-09-10 06:45:09 +00:00
Bo Chen
3c923f0727 virtio-devices: seccomp: Add seccomp filters for virtio_vsock thread
This patch enables the seccomp filters for the virtio_vsock worker
thread.

Partially fixes: #925

Signed-off-by: Bo Chen <chen.bo@intel.com>
2020-09-09 17:04:39 +01:00
Bo Chen
1175fa2bc7 virtio-devices: seccomp: Add seccomp filters for blk_io_uring thread
This patch enables the seccomp filters for the block_io_uring worker
thread.

Partially fixes: #925

Signed-off-by: Bo Chen <chen.bo@intel.com>
2020-09-09 17:04:39 +01:00
Sebastien Boeuf
e15dba2925 vmm: Rename NUMA option 'id' into 'guest_numa_id'
The goal of this commit is to rename the existing NUMA option 'id' to
'guest_numa_id'. This is done without any modification to the way the
option behaves.

The rename is motivated by the observation that all other parameters
with an option called 'id' expect a string to be provided.

Because in this particular case we expect a u32 representing a proximity
domain from the ACPI specification, it's better to give it a more
explicit name.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-07 07:37:14 +02:00
Sebastien Boeuf
1970ee89da main, vmm: Remove guest_numa_node option from memory zones
The way to describe guest NUMA nodes has been updated through previous
commits, letting the user describe the full NUMA topology through the
--numa parameter (or NumaConfig).

That's why we can remove the deprecated and unused 'guest_numa_node'
option.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-07 07:37:14 +02:00
Sebastien Boeuf
f21c04166a vmm: Move NUMA node list creation to Vm structure
Based on the previous changes introducing new options for both memory
zones and NUMA configuration, this patch changes the way guest NUMA
nodes are defined. Instead of relying on the memory zones, everything
now goes through the --numa parameter (or NumaConfig). This allows NUMA
nodes to be defined without associating any particular memory range with
them. And when one or multiple memory ranges should be associated with a
node, the expectation is to describe a list of memory zones through the
--numa parameter.
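
For instance, binding a memory zone to a guest NUMA node might look
like this (note that at the time of this commit the node identifier
option was still called 'id'):
	./cloud-hypervisor ... --memory size=0 --memory-zone id=mem0,size=1G --numa id=0,cpus=0-1,memory_zones=mem0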

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-07 07:37:14 +02:00
Sebastien Boeuf
dc42324351 vmm: Add 'memory_zones' option to NumaConfig
This new option provides another way to describe the memory associated
with a NUMA node. This is the first step toward removing the
'guest_numa_node' option from the --memory-zone parameter.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-07 07:37:14 +02:00
Sebastien Boeuf
5d7215915f vmm: memory_manager: Store a list of memory zones
Now that we have an identifier per memory zone, and in order to keep
track of the memory regions associated with each memory zone, we create
and store a map referencing the list of memory regions per memory zone
ID.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-07 07:37:14 +02:00
Sebastien Boeuf
3ff82b4b65 main, vmm: Add mandatory id to memory zones
In anticipation of allowing memory zones to be removed, but also in
anticipation of refactoring the NUMA parameter, we introduce a mandatory
'id' option to the --memory-zone parameter.

This forces the user to provide a unique identifier for each memory zone
so that we can refer to it.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-07 07:37:14 +02:00
Samuel Ortiz
e5ce6dc43c vmm: cpu: Warn if the guest is trying to access unregistered IO ranges
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
2020-09-04 14:39:58 +02:00
Sebastien Boeuf
c0d0d23932 vmm: acpi: Introduce SLIT for NUMA nodes distances
By introducing the SLIT (System Locality Distance Information Table), we
provide the guest with the distance between each pair of nodes. This
lets the user describe the NUMA topology in detail, so that slower
memory backing the VM can be exposed as being further away from other
nodes.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-01 18:09:01 +02:00
Sebastien Boeuf
9548e7e857 vmm: Update NUMA node distances internally
Based on the NumaConfig, which now provides distance information, we can
internally update the list of NUMA nodes with the exact distances
separating them from the other nodes.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-01 18:09:01 +02:00
Sebastien Boeuf
a5a29134ca vmm: Extend --numa parameter with NUMA node distances
By introducing the 'distances' option, we let the user describe a list
of destination NUMA nodes along with their distances relative to the
current node (defined through 'id').
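
Assuming a destination@distance syntax with ':' as the list separator,
describing two nodes might look like:
	./cloud-hypervisor ... --numa id=0,distances=1@15:2@25 id=1,distances=0@15:2@20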

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-01 18:09:01 +02:00
Sebastien Boeuf
629befdb4a vmm: acpi: Add CPUs to NUMA nodes
Based on the list of CPUs related to each NUMA node, Processor Local
x2APIC Affinity structures are created and included in the SRAT table.

This describes which CPUs are part of each node.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-01 15:25:00 +02:00
Sebastien Boeuf
db28db8567 vmm: Update NUMA nodes based on NumaConfig
Relying on the list of CPUs defined through the NumaConfig, this patch
updates the internal list of CPUs attached to each NUMA node.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-01 15:25:00 +02:00
Sebastien Boeuf
42f963d6f2 main, vmm: Add new --numa parameter
Through this new parameter, we give users the opportunity to specify a
set of CPUs attached to a NUMA node that has been previously created
from the --memory-zone parameter.

This parameter will be extended in the future to describe the distance
between multiple nodes.

For instance, if a user wants to attach CPUs 0, 1, 2 and 6 to a NUMA
node, here are two different ways of doing so:
Either
	./cloud-hypervisor ... --numa id=0,cpus=0-2:6
Or
	./cloud-hypervisor ... --numa id=0,cpus=0:1:2:6

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-01 15:25:00 +02:00
Sebastien Boeuf
65a23c6fc6 vmm: acpi: Create the SRAT table
The SRAT table (System Resource Affinity Table) is needed to describe
NUMA nodes and how memory ranges and CPUs are attached to them.

For now it simply attaches a list of Memory Affinity structures based
on the list of NUMA nodes created by the VMM.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-01 14:11:49 +02:00
Sebastien Boeuf
cf81254a8d vmm: memory_manager: Create a NUMA node list
Based on the 'guest_numa_node' option, we create and store a list of
NUMA nodes in the MemoryManager. The point is to associate a list of
memory regions with each node, so that we can later create the ACPI
tables with the proper memory range information.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-01 14:11:49 +02:00
Sebastien Boeuf
768dbd1fb0 vmm: Add 'guest_numa_node' option to 'memory-zone'
With the introduction of this new option, the user will be able to
describe whether a particular memory zone should belong to a specific
NUMA node from a guest perspective.

For instance, using '--memory-zone size=1G,guest_numa_node=2' would let
the user describe that a memory zone of 1G in the guest should be
exposed as being associated with the NUMA node 2.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-01 14:11:49 +02:00
Sebastien Boeuf
274c001eab vmm: Use u32 instead of u64 for host_numa_node option
Given that ACPI uses u32 as the type for the Proximity Domain, we can
use u32 instead of u64 as the type for the 'host_numa_node' option.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-09-01 13:29:42 +02:00
Michael Zhao
a95b6bbd8b vmm: Add seccomp rules for starting vhost-user-net backend on AArch64
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
2020-08-31 08:19:23 +02:00
Hui Zhu
f7b3581645 cloud-hypervisor.yaml: MemoryConfig: Add balloon_size
"struct MemoryConfig" has balloon_size but not in MemoryConfig
of cloud-hypervisor.yaml.
This commit adds it.

Signed-off-by: Hui Zhu <teawater@antfin.com>
2020-08-28 09:58:39 +02:00
Sebastien Boeuf
a8a9e61c3d vmm: memory_manager: Allow host NUMA for RAM backed files
Let's narrow down the limitation related to mbind() by allowing shared
mappings backed by a RAM-based file. This leaves the restriction in
place only for mappings backed by a regular file.

With this patch, a host NUMA node can be specified even when using
vhost-user devices.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-08-27 08:39:38 -07:00