This patch refines the seccomp filter list for the vCPU thread, as we are
no longer spawning virtio-device threads from the vCPU thread.
Fixes: #2170
Signed-off-by: Bo Chen <chen.bo@intel.com>
This will lead to the triggering of an ACPI button inside the guest in
order to cleanly shut down the guest.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Use the ACPI GED device to trigger a notification of type
POWER_BUTTON_CHANGED which will ultimately lead to the guest being
notified.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Renamed this bitfield as it will also be used for non-hotplug purposes
such as synthesising a power button.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Older libc (like RHEL7's) uses open() rather than openat(). This was
demonstrated through a failure to open /etc/localtime, as used by the
gmtime() libc call triggered from the vCPU thread (CMOS device).
Fixes: #2111
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Sometimes when running under the CI, tests fail due to a barrier not
being released and the guest blocking on an MMIO write. Add further
debugging to try and identify the issue.
See: #2118
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Based on the LKML feedback, the devices under /dev/sgx/* are
not justified. SGX RFC v40 moves the SGX device nodes to /dev/sgx_*
and this is reflected in kvm-sgx (next branch) too.
Update cloud-hypervisor code and documentation to follow this.
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
If the vCPU thread calls log!() the time difference between the call
time and the boot up time is reported. On most environments and
architectures this is covered by a vDSO call rather than a syscall,
however on some platforms it turns into a syscall.
Fixes: #2080
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
With Rust 1.49 using attributes on a function parameter is not allowed.
The recommended workaround is to put it in a new block.
error[E0658]: attributes on expressions are experimental
--> vmm/src/memory_manager.rs:698:17
|
698 | #[cfg(target_arch = "x86_64")]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: see issue #15701 <https://github.com/rust-lang/rust/issues/15701> for more information
error: removing an expression is not supported in this position
--> vmm/src/memory_manager.rs:698:17
|
698 | #[cfg(target_arch = "x86_64")]
|
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add an "fd=" parameter to allow specifying a TAP fd to use. Currently
only one fd for one queue pair is supported.
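A hedged usage sketch following the existing command-line examples in this
log; the fd number and MAC address are placeholders, and the descriptor must
already be open and inherited by the cloud-hypervisor process:
    ./cloud-hypervisor ... --net fd=3,mac=12:34:56:78:90:ab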
Fixes: #2052
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When a device is ready to be activated, signal to the VMM thread via an
EventFd that there is a device to be activated. When the VMM receives a
notification on that EventFd, it notifies the device manager to attempt
to activate any devices that have not been activated yet.
As a side effect the VMM thread will create the virtio device threads.
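A minimal sketch of the signalling pattern using vmm-sys-util's EventFd; the
function and variable names (activate_evt, the device manager call) are
illustrative, not the actual VMM code:

    use vmm_sys_util::eventfd::EventFd;

    // Device side: signal the VMM thread that a device is ready to activate.
    fn notify_activation(activate_evt: &EventFd) -> std::io::Result<()> {
        activate_evt.write(1)
    }

    // VMM side: drain the counter, then ask the device manager to activate
    // any devices that have not been activated yet (call shown as a comment).
    fn on_activate_event(activate_evt: &EventFd) -> std::io::Result<()> {
        activate_evt.read()?;
        // device_manager.lock().unwrap().activate_pending_devices();
        Ok(())
    }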
Fixes: #1863
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This can be used to indicate to the caller that it should wait on the
barrier before returning as there is some asynchronous activity
triggered by the write which requires the KVM exit to block until it's
completed.
This is useful for having the vCPU thread wait for the VMM thread to proceed
to activate the virtio devices.
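A minimal sketch of the handshake with std::sync::Barrier, assuming the
optional barrier is plumbed back to the vCPU exit path; names and structure
are illustrative only:

    use std::sync::{Arc, Barrier};

    // One waiter is the vCPU thread, the other is the VMM thread.
    fn new_activation_barrier() -> Arc<Barrier> {
        Arc::new(Barrier::new(2))
    }

    // vCPU side: block the KVM exit until the VMM thread has caught up.
    fn vcpu_exit_path(barrier: Option<Arc<Barrier>>) {
        if let Some(barrier) = barrier {
            barrier.wait();
        }
    }

    // VMM side: perform the asynchronous work, then release the vCPU thread.
    fn vmm_side(barrier: Arc<Barrier>) {
        // ... activate the virtio device ...
        barrier.wait();
    }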
See #1863
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This is the initial folder structure of the mshv module inside
the hypervisor crate. The aim of this module is to support Microsoft
Hyper-V as a supported Hypervisor.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
Some of the code base and functions are purely KVM specific for now; we
don't have that support in mshv at the moment but we have plans for the
future. We are guarding those with a KVM feature. For example, KVM has
mp_state and CPU clock support, which we don't have for mshv. In order
to keep building, that code is compiled for KVM only.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
When using a PIO write to 0x80, which is a special case, handle that and
then return without going through the resolve.
This removes an extra warning that is reported.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When a total ordering between multiple atomic variables is not required
then use Ordering::Acquire with atomic loads and Ordering::Release with
atomic stores.
This will improve performance as this does not require a memory fence
on x86_64 which Ordering::SeqCst will use.
Add a comment to the code in the vCPU handling code where it operates on
multiple atomics to explain why Ordering::SeqCst is required.
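A minimal sketch of the pattern on a single flag; the variable is
illustrative, not one of the actual atomics touched by this patch:

    use std::sync::atomic::{AtomicBool, Ordering};

    // Acquire on the load pairs with Release on the store, which is enough
    // when no total order across several atomics is needed and avoids the
    // full memory fence SeqCst implies on x86_64.
    static VM_RUNNING: AtomicBool = AtomicBool::new(false);

    fn start() {
        VM_RUNNING.store(true, Ordering::Release);
    }

    fn is_running() -> bool {
        VM_RUNNING.load(Ordering::Acquire)
    }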
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The DeviceNode cannot be fully represented as it embeds a Rust style
enum (i.e. with data) which is instead represented by a simple
associative array.
Fixes: #1167
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The configuration is stored separately from the Vm in the VMM. The failure
to store the config was preventing the VM from shutting down correctly
as Vmm::vm_delete() checks for the presence of the config.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The live migration support added use of this ioctl but it wasn't
included in the permitted list.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This interface is used by the vCPU thread to delegate responsibility for
handling MMIO/PIO operations and to support different approaches than a
VM exit.
During profiling I found that we were spending 13.75% of the boot CPU
usage acquiring access to the object holding the VmmOps via
ArcSwap::load_full()
13.75% 6.02% vcpu0 cloud-hypervisor [.] arc_swap::ArcSwapAny<T,S>::load_full
|
---arc_swap::ArcSwapAny<T,S>::load_full
|
--13.43%--<hypervisor::kvm::KvmVcpu as hypervisor::cpu::Vcpu>::run
std::sys_common::backtrace::__rust_begin_short_backtrace
core::ops::function::FnOnce::call_once{{vtable-shim}}
std::sys::unix::thread::Thread::new::thread_start
However, since the object implementing VmmOps does not need to be mutable
and it is only used from the vCPU side, we can change the ownership to a
simple Arc<> that is passed in when calling create_vcpu().
This completely removes the above CPU usage from subsequent profiles.
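A simplified sketch of the ownership change; the trait and constructor
signatures are stand-ins for the real hypervisor crate types:

    use std::sync::Arc;

    // Simplified stand-in for the callback trait.
    trait VmmOps: Send + Sync {}

    struct Vcpu {
        // Captured once at creation time: no ArcSwap::load_full() per exit.
        vmm_ops: Arc<dyn VmmOps>,
    }

    fn create_vcpu(vmm_ops: Arc<dyn VmmOps>) -> Vcpu {
        Vcpu { vmm_ops }
    }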
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add a config parameter to --disk called "_disable_io_uring" (the
underscore prefix indicating it is not for public consumption). Use this
option to disable io_uring if it would otherwise be used.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Now the VM is paused/resumed by the migration process itself.
0. The guest configuration is sent to the destination
1. Dirty page log tracking is started by start_memory_dirty_log()
2. All guest memory is sent to the destination
3. Up to 5 attempts are made to send the dirty guest memory to the
destination...
4. ...before the VM is paused
5. One last set of dirty pages is sent to the destination
6. The guest is snapshotted and sent to the destination
7. When the migration is completed the destination unpauses the received
VM.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows code running in the VMM to access the VM's MemoryManager's
functionality for managing the dirty log including resetting it but also
generating a table.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Prior to sending the memory, the full state is not needed, only the
configuration. This is sufficient to create the appropriate structures
in the guest and have the memory allocations ready for filling.
Update the protocol documentation to add a separate config step and move
the state to after the memory is transferred. As the VM is created in a
separate step from restoring it, this requires a slightly different
constructor as well as saving the VM object for the subsequent commands.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In order to do this we must extend the MemoryManager API to add the
ability to specify the tracking of the dirty pages when creating the
userspace mappings and also keep track of the userspace mappings that
have been created for RAM regions.
Currently the dirty pages are collected into ranges based on a block
level of 64 pages. The algorithm could be tweaked to create smaller
ranges but for now if any page in the block of 64 is dirty the whole
block is added to the range.
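A rough sketch of the coalescing described above; the constants, types and
function name are illustrative, not the MemoryManager API:

    const BLOCK_PAGES: u64 = 64;
    const PAGE_SIZE: u64 = 4096;

    // Any dirty page marks its whole 64-page block; contiguous dirty blocks
    // are merged into a single (start address, length) range.
    fn dirty_ranges(dirty_pages: &[u64]) -> Vec<(u64, u64)> {
        let mut blocks: Vec<u64> =
            dirty_pages.iter().map(|p| p / BLOCK_PAGES).collect();
        blocks.sort_unstable();
        blocks.dedup();

        let mut ranges: Vec<(u64, u64)> = Vec::new();
        for block in blocks {
            let start = block * BLOCK_PAGES * PAGE_SIZE;
            let len = BLOCK_PAGES * PAGE_SIZE;
            match ranges.last_mut() {
                // Extend the previous range if this block follows it directly.
                Some((s, l)) if *s + *l == start => *l += len,
                _ => ranges.push((start, len)),
            }
        }
        ranges
    }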
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
While the addressable space size reduction of 4k is necessary due to
the Linux bug, the 64k alignment of the addressable space size is
required by Windows. This patch satisfies both.
Signed-off-by: Anatol Belski <anbelski@linux.microsoft.com>
This is tested by:
Source VMM:
target/debug/cloud-hypervisor --kernel ~/src/linux/vmlinux \
--pmem file=~/workloads/focal.raw --cpus boot=1 \
--memory size=2048M \
--cmdline"root=/dev/pmem0p1 console=ttyS0" --serial tty --console off \
--api-socket=/tmp/api1 -v
Destination VMM:
target/debug/cloud-hypervisor --api-socket=/tmp/api2 -v
And the following commands:
target/debug/ch-remote --api-socket=/tmp/api1 pause
target/debug/ch-remote --api-socket=/tmp/api2 receive-migration unix:/tmp/foo &
target/debug/ch-remote --api-socket=/tmp/api1 send-migration unix:/tmp/foo
target/debug/ch-remote --api-socket=/tmp/api2 resume
The VM is then responsive on the destination VMM.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows the code to be reused when creating the VM from a snapshot
when doing VM migration.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add API entry points with stub implementation for sending and receiving
a VM from one VMM to another.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Due to a known limitation in the OpenAPITools/openapi-generator tool,
it's impossible to send Go zero types, like false and 0, to
cloud-hypervisor because `omitempty` is added if a field is not
required.
Set cache_size, dax, num_queues and queue_size as required to remove
`omitempty` from the json tag.
Fixes: #1961
Signed-off-by: Julio Montes <julio.montes@intel.com>
This also removes the need to look up the "exe" symlink for finding
the VMM executable path.
Fixes: #1925
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The logic to handle AArch64 system events used to be: SHUTDOWN and RESET
were both treated as RESET.
Now we handle them differently:
- RESET event will trigger Vmm::vm_reboot(),
- SHUTDOWN event will trigger Vmm::vm_shutdown().
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Now Vcpu::run() returns a boolean value to VcpuManager, indicating
whether the VM is going to reboot (false) or just continue (true).
Moving the handling of hypervisor VCPU run result from Vcpu to
VcpuManager gives us the flexibility to handle more scenarios like
shutting down on AArch64.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Rather than filling the guest memory from a file at the point of the
guest memory region being created, instead fill from the file later. This
simplifies the region creation code but also adds flexibility for
sourcing the guest memory from a source other than an on disk file.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
As a mirror of bdbea19e23, which ensured
that GuestMemoryMmap::read_exact_from() was used to read all of the file
into the region, ensure that all of the guest memory region is written
to disk.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This gives a nicer user experience and this error can now be used as the
source for other errors based off this.
See: #1910
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Any occurrence of a variable containing `ext_region` is replaced with
the less confusing name `saved_region`. The point is to clearly identify
the memory regions that might have been saved during a snapshot, while
the `ext` standing for `external` was pretty unclear.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In the context of saving the memory regions content through snapshot,
using the term "backing file" brings confusion with the actual backing
file that might back the memory mapping.
To avoid such conflicting naming, the 'backing_file' field from the
MemoryRegion structure gets replaced with 'content', as this is
designating the potential file containing the memory region data.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Use GuestRegionMmap::read_exact_from() to ensure that all of the file is
read into the guest. This addresses an issue where
GuestRegionMmap::read_from() was only copying the first 2GiB of the
memory, which led to snapshot-restore failing when the guest RAM
was 2GiB or greater.
This change also propagates any error from the copying upwards.
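A hedged sketch of the fixed call, assuming the vm-memory Bytes API of that
era; the helper name and error type are illustrative:

    use std::fs::File;
    use vm_memory::{Bytes, GuestAddress, GuestMemoryMmap};

    // read_from() may copy fewer bytes than requested; read_exact_from()
    // keeps reading until `size` bytes have landed in guest memory, so
    // regions of 2GiB or more are fully restored.
    fn fill_region(
        guest_memory: &GuestMemoryMmap,
        start: GuestAddress,
        file: &mut File,
        size: usize,
    ) -> Result<(), vm_memory::GuestMemoryError> {
        guest_memory.read_exact_from(start, file, size)
    }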
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When restoring, if a region of RAM is backed by anonymous memory, i.e. from
memfd_create() then copy the contents of the ram from the file that has
been saved to disk.
Previously the code would map the memory from that file into the guest
using a MAP_PRIVATE mapping. This has the effect of
minimising the restore time but provides an issue where the restored VM
does not have the same structure as the snapshotted VM, in particular
memory is backed by files in the restored VM that were anonymously
backed in the original.
This creates two problems:
* The snapshot data is mapped from files for the pages of the guest
which prevents the storage from being reclaimed.
* When snapshotting again the guest memory will not be correctly saved
as it will have looked like it was backed by a file so it will not be
written to disk but as it is a MAP_PRIVATE mapping the changes will
never be written to the disk again. This results in incorrect
behaviour.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The standalone `--balloon` parameter being fully functional at this
point, we can get rid of the balloon options from the --memory
parameter.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that we have a new dedicated way of asking for a balloon through the
CLI and the REST API, we can move all the balloon code to the device
manager. This allows us to simplify the memory manager, which is already
quite complex.
It also simplifies the behavior of the balloon resizing command. Instead
of providing the expected size for the RAM, which is complex when memory
zones are involved, it now expects the balloon size. This is a much more
straightforward behavior as it really resizes the balloon to the desired
size. Additionally to the simplication, the benefit of this approach is
that it does not need to be tied to the memory manager at all.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This introduces a new way of defining the virtio-balloon device. Instead
of going through the --memory parameter, the idea is to consider balloon
as a standalone virtio device.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The snapshot/restore feature is not working because some CPU states are
not properly saved, which means they can't be restored later on.
First thing, we ensure the CPUID is stored so that it can be properly
restored later. The code is simplified and pushed down to the hypervisor
crate.
Second thing, we identify for each vCPU if the Hyper-V SynIC device is
emulated or not. In case it is, that means some specific MSRs will be
set by the guest. These MSRs must be saved in order to properly restore
the VM.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The watchdog device is created through the "--watchdog" parameter. At
most a single watchdog can be created per VM.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Before Virtio-mmio was removed, we passed an optional PCI space address
parameter to AArch64 code for generating FDT. The address is none if the
transport is MMIO.
Now that virtio-pci is the only option, the parameter is mandatory.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Virtio-mmio is removed, so virtio-pci is now the only option for the
virtio transport layer. We use MSI for PCI device interrupts, while
GICv2, the legacy interrupt controller, doesn't support MSI. That makes
GICv2 not very practical for Cloud-hypervisor, so we can remove it.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
When shutting down a VM using VFIO, the following error has been
detected:
vfio-ioctls/src/vfio_device.rs:312 -- Could not delete VFIO group:
KvmSetDeviceAttr(Error(9))
After some investigation, it appears the KVM device file descriptor used
for removing a VFIO group was already closed. This is coming from the
Rust sequence of Drop, from the DeviceManager all the way down to
VfioDevice.
Because the DeviceManager owns passthrough_device, which is effectively
a KVM device file descriptor, when the DeviceManager is dropped, the
passthrough_device follows, with the effect of closing the KVM device
file descriptor. Problem is, VfioDevice has not been dropped yet and it
still needs a valid KVM device file descriptor.
That's why the simple way to fix this issue coming from Rust dropping
all resources is to make Linux accountable for it by duplicating the
file descriptor. This way, even when the passthrough_device is dropped,
the KVM file descriptor is closed, but a duplicated instance is still
valid and owned by the VfioContainer.
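A minimal sketch of the descriptor duplication; the helper name is
illustrative and error handling is simplified:

    use std::os::unix::io::RawFd;

    // Duplicate the KVM device fd so the copy owned by the VfioContainer
    // stays valid even after the original passthrough_device fd is dropped
    // along with the DeviceManager.
    fn duplicate_device_fd(fd: RawFd) -> std::io::Result<RawFd> {
        // SAFETY: dup() only requires a valid open file descriptor.
        let dup_fd = unsafe { libc::dup(fd) };
        if dup_fd < 0 {
            return Err(std::io::Error::last_os_error());
        }
        Ok(dup_fd)
    }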
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We turn on that emulation for Windows. Windows does not have KVM's PV
clock, so calling notify_guest_clock_paused results in an error.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
If the user specified a maximum physical bits value through the
`max_phys_bits` option from `--cpus` parameter, the guest CPUID
will be patched accordingly to ensure the guest will find the
right amount of physical bits.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
If the user provided a maximum physical bits value for the vCPUs, the
memory manager will adapt the guest physical address space accordingly
so that devices are not placed further than the specified value.
It's important to note that if the number exceeds what is available on
the host, the smaller number will be picked.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to let the user choose maximum address space size, this patch
introduces a new option `max_phys_bits` to the `--cpus` parameter.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The 'GuestAddress::unchecked_add' function has undefined behavior when
an overflow occurs. Its alternative 'checked_add' requires us to handle
the overflow explicitly.
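For illustration, a hedged example of the safer call; the helper function is
hypothetical:

    use vm_memory::{Address, GuestAddress};

    // checked_add() returns None on overflow instead of the undefined
    // behaviour of unchecked_add(), forcing the caller to handle that case.
    fn end_address(start: GuestAddress, size: u64) -> Option<GuestAddress> {
        size.checked_sub(1).and_then(|offset| start.checked_add(offset))
    }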
Signed-off-by: Bo Chen <chen.bo@intel.com>
We are now reserving a 256M gap in the guest address space each time
memory is hotplugged with ACPI, which prevents users from hotplugging
memory to the maximum size they requested. We confirm that there is no
need to reserve this gap.
This patch removes the 'reserved gaps'. It also refactors the
'MemoryManager::start_addr' so that it is rounded up to a 128M alignment
when hotplugged memory is allowed with ACPI.
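A small sketch of the 128M round-up; the constant and helper name are
illustrative:

    const ALIGN_128M: u64 = 128 << 20; // 128 MiB, a power of two

    // Round an address up to the next 128MiB boundary (no-op if aligned).
    fn align_up_128m(addr: u64) -> u64 {
        (addr + ALIGN_128M - 1) & !(ALIGN_128M - 1)
    }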
Signed-off-by: Bo Chen <chen.bo@intel.com>
We now try to create a ram region of size 0 when the requested memory
size is the same as the current memory size. It results in an error of
`GuestMemoryRegion(Mmap(Os { code: 22, kind: InvalidInput, message:
"Invalid argument" }))`. This error is not meaningful to users and we
should not report it.
Signed-off-by: Bo Chen <chen.bo@intel.com>
This is a new clippy check introduced in 1.47 which requires the use of
the matches!() macro for simple match blocks that return a boolean.
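An illustrative example of the rewrite the lint asks for; the enum here is
made up for the example:

    enum EpollDispatch { Exit, Api, Stdin }

    fn is_exit(event: &EpollDispatch) -> bool {
        // Before: match event { EpollDispatch::Exit => true, _ => false }
        matches!(event, EpollDispatch::Exit)
    }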
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The OneRegister literally means "one (arbitrary) register". Just call it
"Register" instead. There is no need to inherit KVM's naming scheme in
the hypervisor agnostic code.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
Small patch creating a dedicated `block_io_uring_is_supported()`
function for the non-io_uring case, so that we can simplify the
code in the DeviceManager.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Without the unlink(2) syscall being allowed, Cloud-Hypervisor crashes
when we remove a virtio-vsock device that has been previously added.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Because of the PCI refactoring that happened in the previous commit
d793cc4da3, the ability to fully remove a
PCI device was altered.
The refactoring was correct, but the usage of a generic function to pass
the same reference for BusDevice, PciDevice and Any + Send + Sync
causes the Arc::ptr_eq() function to behave differently than expected,
as it does not match the references later in the code. That means we
were not able to remove the device reference from the MMIO and/or PIO
buses, which was leading to some bus range overlapping error once we
were trying to add a device again to the previous range that should have
been removed.
Fixes: #1802
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The run loop in the hypervisor needs a callback mechanism to access
resources like guest memory, MMIO, PIO etc.
The VmmOps trait is introduced here, which is implemented by the vmm module.
While handling vcpu exits in the run loop, this trait allows the hypervisor
module to access the above mentioned resources via callbacks.
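A simplified stand-in for the callback interface described above; the exact
method set and signatures in the hypervisor crate may differ:

    // Implemented by the vmm module, consumed by the hypervisor run loop.
    trait VmmOps: Send + Sync {
        fn guest_mem_read(&self, gpa: u64, buf: &mut [u8]) -> std::io::Result<usize>;
        fn guest_mem_write(&self, gpa: u64, buf: &[u8]) -> std::io::Result<usize>;
        fn mmio_read(&self, gpa: u64, data: &mut [u8]) -> std::io::Result<()>;
        fn mmio_write(&self, gpa: u64, data: &[u8]) -> std::io::Result<()>;
        fn pio_read(&self, port: u64, data: &mut [u8]) -> std::io::Result<()>;
        fn pio_write(&self, port: u64, data: &[u8]) -> std::io::Result<()>;
    }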
Signed-off-by: Praveen Paladugu <prapal@microsoft.com>
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
A new version of vm-memory was released upstream which resulted in some
components pulling in that new version. Update the version number used
to point to the latest version but continue to use our patched version
due to the fix for #1258
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The PMEM support has an option called "discard_writes" which when true
will prevent changes to the device from hitting the backing file. This
is trying to be the equivalent of "readonly" support of the block
device.
Previously the memory of the device was marked as KVM_READONLY. This
resulted in a trap when the guest attempted to write to it, resulting in a
VM exit (and recently a warning). This has a very detrimental effect on
the performance so instead make "discard_writes" truly CoW by mapping
the memory as `PROT_READ | PROT_WRITE` and using `MAP_PRIVATE` to
establish the CoW mapping.
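A hedged sketch of the mapping change; the helper name and error handling are
illustrative:

    use std::os::unix::io::RawFd;

    // PROT_READ|PROT_WRITE with MAP_PRIVATE: guest writes land in private
    // copy-on-write pages and never reach the backing file, without the
    // per-write VM exits a read-only KVM slot would cause.
    fn map_discard_writes(fd: RawFd, size: usize) -> std::io::Result<*mut libc::c_void> {
        let addr = unsafe {
            libc::mmap(
                std::ptr::null_mut(),
                size,
                libc::PROT_READ | libc::PROT_WRITE,
                libc::MAP_PRIVATE,
                fd,
                0,
            )
        };
        if addr == libc::MAP_FAILED {
            return Err(std::io::Error::last_os_error());
        }
        Ok(addr)
    }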
Fixes: #1795
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The virtio-balloon change to the memory size is asynchronous.
VirtioBalloonConfig.actual of the balloon device shows the current
balloon size. This commit adds memory_actual_size to vm.info to show the
actual memory size.
Signed-off-by: Hui Zhu <teawater@antfin.com>
Write to the exit_evt EventFD which will trigger all the devices and
vCPUs to exit. This is slightly cleaner than just exiting the process as
any temporary files will be removed.
Fixes: #1242
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This patch adds the missing `iommu` and `id` options for
`VmAddDevice` in the openApi yaml to respect the internal data structure
in the code base. Also, setting the `id` explicitly for VFIO device
hotplug is required for VFIO device unplug through openAPI calls.
Signed-off-by: Bo Chen <chen.bo@intel.com>
According to openAPI specification [1], the format for `integer` types
can be only `int32` or `int64`; unsigned and 8-bit integers are not
supported.
This patch replaces `uint64` with `int64`, `uint32` with `int32` and
`uint8` with `int32`.
[1]: https://swagger.io/specification/#data-types
Signed-off-by: Julio Montes <julio.montes@intel.com>
MsiInterruptGroup doesn't need to know the internal field names of
InterruptRoute. Introduce two helper functions to eliminate references
to irq_fd. This is done similarly to the enable and disable helper
functions.
Also drop the pub keyword from InterruptRoute fields. It is not needed
anymore.
No functional change.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
According to openAPI specification[1], the format for `integer` types
can be only `int32` or `int64`, unsigned integers are not supported.
This patch replaces `uint64` with `int64`.
[1]: https://swagger.io/specification/#data-types
Signed-off-by: Julio Montes <julio.montes@intel.com>
There is no point in manually dropping the lock for gsi_msi_routes then
instantly grabbing it again in set_gsi_routes.
Make set_gsi_routes take a reference to the routing hashmap instead.
No functional change intended.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The MTRR feature was missing from the CPUID, which is causing the guest
to ignore the MTRR settings exposed through dedicated MSRs.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Since Cloud-Hypervisor currently supports a single PCI bus, we must
reflect this through the MCFG table, as it advertises the first bus and
the last bus available. In this case both are bus 0.
This patch saves quite some time during guest kernel boot, as it
prevents the guest from checking each bus for available devices.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The states of GIC should be part of the VM states. This commit
enables the AArch64 VM states save/restore by adding save/restore
of GIC states.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
The definition of libc::SYS_ftruncate on AArch64 is different
from that on x86_64. This commit unifies the previously hard-coded
syscall number for AArch64.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
The `KVM_GET_REG_LIST` ioctl is needed to save/restore AArch64 vCPUs.
Therefore we whitelist this ioctl in seccomp.
Also this commit unifies the `SYS_FTRUNCATE` syscall for x86_64
and AArch64.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Similarly as the VM booting process, on AArch64 systems,
the vCPUs should be created before the creation of GIC. This
commit refactors the vCPU save/restore code to achieve the
above-mentioned restoring order.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Calling `KVM_GET_ONE_REG` before `KVM_VCPU_INIT` results in an
error: Exec format error (os error 8). This commit therefore
decouples the vCPU init process from `configure_vcpus`, so that
in the process of restoring the vCPUs, these vCPUs can be
initialized separately before being started.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
The construction of `GICR_TYPER` register will need vCPU states.
Therefore this commit adds methods to extract saved vCPU states
from the cpu manager.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Unlike x86_64, the "interrupt_controller" in the device manager
for AArch64 is only a `Gic` object that implements the
`InterruptController` to provide the interrupt delivery service.
This is not the real GIC device, so we do not need to save
its state. Also, we do not need to insert it into the device_tree.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
The value of GIC register `GICR_TYPER` is needed in restoring
the GIC states. This commit adds a field in the GIC device struct
and a method to construct its value.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
In AArch64 systems, the state of GIC device can only be
retrieved from `KVM_GET_DEVICE_ATTR` ioctl. Therefore to implement
saving/restoring the GIC states, we need to make sure that the
GIC object (either the file descriptor or the device itself) can
be extracted after the VM is started.
This commit refactors the code of GIC creation by adding a new
field `gic_device_entity` in device manager and methods to set/get
this field. The GIC object can therefore be saved in the device
manager after calling `arch::configure_system`.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
This commit adds a function which allows saving RDIST pending
tables to the guest RAM, as well as a unit test case for it.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
This commit adds the unit test cases for getting/setting the GIC
distributor, redistributor and ICC registers.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Adds 3 more unit test cases for AArch64:
* save_restore_core_regs
* save_restore_system_regs
* get_set_mpstate
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
This commit ports code from firecracker and refactors the existing
AArch64 code as preparation for implementing save/restore of AArch64
vCPUs, including:
1. Modification of the `arm64_core_reg` macro to retrieve the index of
an arm64 core register, and implementation of a helper to determine if
a register is a system register.
2. Move of some macros and helpers from the `arch` crate to the
`hypervisor` crate.
3. Addition of related unit tests for the above functions and macros.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Misspellings were identified by https://github.com/marketplace/actions/check-spelling
* Initial corrections suggested by Google Sheets
* Additional corrections by Google Chrome auto-suggest
* Some manual corrections
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
The virtio-mem device uses 'VIRTIO_MEM_F_ACPI_PXM' to add memory to a NUMA
node, which MUST exist, otherwise the memory will be assigned to node id 0,
even if the user specified a different node id.
According to the ACPI spec section on the Memory Affinity Structure, system
hardware supports hot-add memory regions using the 'Hot Pluggable | Enabled'
flags.
Signed-off-by: Jiangbo Wu <jiangbo.wu@intel.com>
Use zone.host_numa_node to create the memory zone, so that the memory
zone can apply a memory policy in accordance with the host NUMA node ID.
Signed-off-by: Jiangbo Wu <jiangbo.wu@intel.com>
If after the creation of the self-spawned backend, the VMM cannot create
the corresponding vhost-user frontend, the VMM must kill the freshly
spawned process in order to ensure the error propagation can happen.
If the child process is still around, the VMM cannot return
the error as it waits on the child to terminate.
This should help us identify when self-spawned failures are caused by a
connection being refused between the VMM and the backend.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When the VMM is terminated by receiving a SIGTERM signal, the signal
handler thread must be able to invoke ioctl(TCGETS) and ioctl(TCSETS)
without error.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on all the preparatory work achieved through previous commits,
this patch updates the 'hotplugged_size' field for both MemoryConfig and
MemoryZoneConfig structures when either the whole memory is resized, or
simply when a memory zone is resized.
This fixes the lack of support for rebooting a VM with the right amount
of memory plugged in.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Adding a new field to VirtioMemZone structure, as it lets us associate
with a particular virtio-mem region the amount of memory that should be
plugged in at boot.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This patch simplifies the code as we have one single Option for the
VirtioMemZone. This also prepares for storing additional information
related to the virtio-mem region.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Add the new option 'hotplugged_size' to both --memory-zone and --memory
parameters so that we can let the user specify a certain amount of
memory being plugged at boot.
This is also part of making sure we can store the virtio-mem size over a
reboot of the VM.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit gives the possibility to create a virtio-mem device with
some memory already plugged into it. This is preliminary work to be
able to reboot a VM with the virtio-mem region being already resized.
Signed-off-by: Hui Zhu <teawater@antfin.com>
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that e820 tables are created from the 'boot_guest_memory', we can
simplify the memory manager code by adding the virtio-mem regions when
they are created. There's no need to wait for the first hotplug to
insert these regions.
This also anticipates the need for starting a VM with some memory
already plugged into the virtio-mem region.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to differentiate the 'boot' memory regions from the virtio-mem
regions, we store what we call 'boot_guest_memory'. This is useful to
provide the adequate list of regions to the configure_system() function
as it expects only the list of regions that should be exposed through
the e820 table.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The virtio-mem driver is generating some warnings regarding both size
and alignment of the virtio-mem region if not based on 128MiB:
The alignment of the physical start address can make some memory
unusable.
The alignment of the physical end address can make some memory
unusable.
For these reasons, the current patch enforces virtio-mem regions to be
128MiB aligned and checks the size provided by the user is a multiple of
128MiB.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that the virtio-mem device accepts a guest NUMA node as parameter, we
retrieve this information from the list of NUMA nodes. Based on the
memory zone associated with the virtio-mem device, we obtain the NUMA
node identifier, which we provide to the virtio-mem device.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Implement support for associating a virtio-mem device with a specific
guest NUMA node, based on the ACPI proximity domain identifier.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
For more consistency and help reading the code better, this commit
renames all 'virtiomem*' variables into 'virtio_mem*'.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Implement a new VM action called 'resize-zone' allowing the user to
resize one specific memory zone at a time. This relies on all the
preliminary work from the previous commits to resize each virtio-mem
device independently from each other.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By adding a new parameter 'id' to the virtiomem_resize() function, we
prepare this function to be usable for both global memory resizing and
memory zone resizing.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
It's important to return the region covered by virtio-mem the first time
it is inserted as the device manager must update all devices with this
information.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the previous code changes, we can now update the MemoryManager
code to create one virtio-mem region and resizing handler per memory
zone. This will naturally create one virtio-mem device per memory zone
from the DeviceManager's code which has been previously updated as well.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In anticipation for resizing support of an individual memory zone,
this commit introduces a new option 'hotplug_size' to '--memory-zone'
parameter. This defines the amount of memory that can be added through
each specific memory zone.
Because memory zone resize is tied to virtio-mem, make sure the user
selects 'virtio-mem' hotplug method, otherwise return an error.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Both MemoryManager and DeviceManager are updated through this commit to
handle the creation of multiple virtio-mem devices if needed. For now,
only the framework is in place, but the behavior remains the same, which
means only the memory zone created from '--memory' generates a
virtio-mem region that can be used for resize.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to anticipate the need for storing memory regions along with
virtio-mem information for each memory zone, we create a new structure
MemoryZone that will replace Vec<Arc<GuestRegionMmap>> in the hash map
MemoryZones.
This makes things more logical as MemoryZones becomes a list of
MemoryZone sorted by their identifier.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Inject CPUID leaves for advertising KVM HyperV support when the
"kvm_hyperv" toggle is enabled. Currently we only enable a selection of
features required to boot.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Currently we don't need to do anything to service these exits but when
the synthetic interrupt controller is active an exit will be triggered
to notify the VMM of details of the synthetic interrupt page.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Some of the io_uring setup happens upon activation of the virtio-blk
device, which is initially triggered through an MMIO VM exit. That's why
the vCPU threads must authorize io_uring related syscalls.
This commit ensures the virtio-blk io_uring implementation can be used
along with the seccomp filters enabled.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Extract common code for adding devices to the PCI bus into its own
function from the VFIO and VIRTIO code paths.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This removes the dependency of the pci crate on the devices crate which
now only contains the device implementations themselves.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The goal of this commit is to rename the existing NUMA option 'id' to
'guest_numa_id'. This is done without any modification to the way this
option behaves.
The reason for the rename is the observation that all other
parameters with an option called 'id' expect a string to be provided.
Because in this particular case we expect a u32 representing a proximity
domain from the ACPI specification, it's better to name it with a more
explicit name.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The way to describe guest NUMA nodes has been updated through previous
commits, letting the user describe the full NUMA topology through the
--numa parameter (or NumaConfig).
That's why we can remove the deprecated and unused 'guest_numa_node'
option.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the previous changes introducing new options for both memory
zones and NUMA configuration, this patch changes the behavior of the
NUMA node definition. Instead of relying on the memory zones to define
the guest NUMA nodes, everything goes through the --numa parameter. This
allows for defining NUMA nodes without associating any particular memory
range to it. And in case one wants to associate one or multiple memory
ranges to it, the expectation is to describe a list of memory zones
through the --numa parameter.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This new option provides a new way to describe the memory associated
with a NUMA node. This is the first step before we can remove the
'guest_numa_node' option from the --memory-zone parameter.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that we have an identifier per memory zone, and in order to keep
track of the memory regions associated with the memory zones, we create
and store a map referencing the list of memory regions per memory zone ID.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In anticipation for allowing memory zones to be removed, but also in
anticipation for refactoring NUMA parameter, we introduce a mandatory
'id' option to the --memory-zone parameter.
This forces the user to provide a unique identifier for each memory zone
so that we can refer to these.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By introducing the SLIT (System Locality Distance Information Table), we
provide the guest with the distance between each node. This lets the
user describe the NUMA topology with a lot of details so that slower
memory backing the VM can be exposed as being further away from other
nodes.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the NumaConfig which now provides distance information, we can
internally update the list of NUMA nodes with the exact distances they
should be located from other nodes.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By introducing 'distances' option, we let the user describe a list of
destination NUMA nodes with their associated distances compared to the
current node (defined through 'id').
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the list of CPUs related to each NUMA node, Processor Local
x2APIC Affinity structures are created and included into the SRAT table.
This describes which CPUs are part of each node.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Relying on the list of CPUs defined through the NumaConfig, this patch
will update the internal list of CPUs attached to each NUMA node.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Through this new parameter, we give users the opportunity to specify a
set of CPUs attached to a NUMA node that has been previously created
from the --memory-zone parameter.
This parameter will be extended in the future to describe the distance
between multiple nodes.
For instance, if a user wants to attach CPUs 0, 1, 2 and 6 to a NUMA
node, here are two different ways of doing so:
Either
./cloud-hypervisor ... --numa id=0,cpus=0-2:6
Or
./cloud-hypervisor ... --numa id=0,cpus=0:1:2:6
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The SRAT table (System Resource Affinity Table) is needed to describe
NUMA nodes and how memory ranges and CPUs are attached to them.
For now it simply attaches a list of Memory Affinity structures based on
the list of NUMA nodes created from the VMM.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the 'guest_numa_node' option, we create and store a list of
NUMA nodes in the MemoryManager. The point being to associate a list of
memory regions to each node, so that we can later create the ACPI tables
with the proper memory range information.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
With the introduction of this new option, the user will be able to
describe if a particular memory zone should belong to a specific NUMA
node from a guest perspective.
For instance, using '--memory-zone size=1G,guest_numa_node=2' would let
the user describe that a memory zone of 1G in the guest should be
exposed as being associated with the NUMA node 2.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Given that ACPI uses u32 as the type for the Proximity Domain, we can
use u32 instead of u64 as the type for 'host_numa_node' option.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
"struct MemoryConfig" has balloon_size but not in MemoryConfig
of cloud-hypervisor.yaml.
This commit adds it.
Signed-off-by: Hui Zhu <teawater@antfin.com>
Let's narrow down the limitation related to mbind() by allowing shared
mappings backed by a file that is itself backed by RAM. This leaves the
restriction in place only for mappings backed by a regular file.
With this patch, host NUMA node can be specified even if using
vhost-user devices.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Relying on the new option 'host_numa_node' from the 'memory-zone'
parameter, the user can now define which NUMA node from the host
should be used to back the current memory zone.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Since memory zones have been introduced, it is now possible for a user
to specify multiple backends for the guest RAM. By adding a new option
'host_numa_node' to the 'memory-zone' parameter, we allow the guest RAM
to be backed by memory that might come from a specific NUMA node on the
host.
The option expects a node identifier, specifying which NUMA node should
be used to allocate the memory associated with a specific memory zone.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The flag 'mergeable' should only apply to the entire guest RAM, which is
why it is removed from the MemoryZoneConfig as it is defined as a global
parameter at the MemoryConfig level.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The 'cmdline' parameter should not be required as it is not needed when
the 'kernel' parameter is the rust-hypervisor-fw, which means the kernel
and the associated command line will be found from the EFI partition.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Factorize the codepath between simple memory and multiple memory zones.
This simplifies the way regions are memory mapped, as everything relies
on the same codepath. This is performed by creating a memory zone on the
fly for the specific use case where --memory is used with size being
different from 0. Internally, the code can rely on memory zones to
create the memory regions forming the guest memory.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
After the introduction of user defined memory zones, we can now remove
the deprecated 'file' option from --memory parameter. This makes this
parameter simpler, letting more advanced users define their own custom
memory zones through the dedicated parameter.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
User defined memory regions can now support being snapshotted and restored,
therefore this commit removes the restrictions that were applied through
earlier commit.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By factorizing a lot of code into create_ram_region(), this commit
achieves the simplification of the restore codepath. Additionally, it
makes user defined memory zones compatible with snapshot/restore.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
First thing, this patch introduces a new function to identify if a file
descriptor is linked to any hard link on the system. This can let the
VMM know if the file can be accessed by the user, or if the file will
be destroyed as soon as the VMM releases the file descriptor.
Based on this information, and associated with the knowledge about the
region being MAP_SHARED or not, the VMM can now decide to skip the copy
of the memory region content. If the user has access to the file from
the filesystem, and if the file has been mapped as MAP_SHARED, we can
consider the guest memory region content to be present in this file at
any point in time. That's why in this specific case, there's no need for
performing the copy of the memory region content into a dedicated file.
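A hedged sketch of the hard-link check; the helper name is illustrative and
the real implementation may differ:

    use std::os::unix::io::RawFd;

    // fstat() the descriptor and look at st_nlink: zero links means the file
    // has no directory entry left (e.g. an unlinked temporary) and will be
    // destroyed as soon as the fd is closed, so its content must be copied.
    fn fd_has_hard_link(fd: RawFd) -> std::io::Result<bool> {
        let mut stat: libc::stat = unsafe { std::mem::zeroed() };
        let ret = unsafe { libc::fstat(fd, &mut stat) };
        if ret < 0 {
            return Err(std::io::Error::last_os_error());
        }
        Ok(stat.st_nlink > 0)
    }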
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Let's not assume that a backing file is going to be the result of a
a snapshot for each memory region. These regions might be backed by
a file on the host filesystem (not a temporary file in host RAM), which
means they don't need to be copied and stored into dedicated files.
That's why this commit prepares for further changes by introducing an
optional PathBuf associated with the snapshot of each memory region.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
There will be some cases where the implementation of the snapshot()
function from the Snapshottable trait will require modifying some
internal data, therefore we make this possible by updating the trait
definition with snapshot(&mut self).
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In case the memory size is 0, this means the user defined memory
zones are used as a way to specify how to back the guest memory.
This is the first step in supporting complex use cases where the user
can define exactly which type of memory from the host should back the
memory from the guest.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In anticipation for the need to map part of a file with the function
create_ram_region(), it is extended to accept a file offset as argument.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In case the provided backing file is an actual file and not a directory,
we should not truncate it, as we expect the file to already be the right
size.
This change will be important once we try to map the same file through
multiple memory mappings. We can't let the file be truncated as the
second mapping wouldn't work properly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Introducing a new CLI option --memory-zone letting the user specify
custom memory zones. When this option is present, the --memory size
must be explicitly set to 0.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
It otherwise seems to be able to cause resource conflicts with the
Windows ACPI_HAL. The OS might do a better job of assigning resources
to this device, without them being requested explicitly. 0xcf8 and
0xcfc are all that is certainly needed for PCI device enumeration.
Signed-off-by: Anatol Belski <anatol.belski@microsoft.com>