This is the initial folder structure of the mshv module inside
the hypervisor crate. The aim of this module is to support Microsoft
Hyper-V as a hypervisor.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
Some parts of the code base and some functions are purely KVM specific
for now; we don't have that support in mshv at the moment, but we plan
to add it in the future. We guard that code behind the KVM feature. For
example, KVM has mp_state and CPU clock support, which we don't have
for mshv. In order to keep that code building, we compile it only for
KVM.
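As a rough illustration of this kind of guarding (the function name and
the body below are illustrative, not the actual hypervisor crate code):

#[cfg(feature = "kvm")]
fn save_clock_and_mp_state() {
    // KVM-only: the guest clock and vCPU mp_state can be queried and
    // saved here.
}

#[cfg(feature = "mshv")]
fn save_clock_and_mp_state() {
    // No equivalent support on mshv yet, so there is nothing to save.
}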
Signed-off-by: Muminul Islam <muislam@microsoft.com>
When handling a PIO write to port 0x80, which is a special case, deal
with it and then return without going through the address resolution.
This removes an extra warning that was reported.
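A minimal sketch of the special case (the names below are made up for
illustration, not the actual handler):

const DEBUG_PORT: u16 = 0x80;

fn handle_pio_write(port: u16, data: &[u8]) {
    if port == DEBUG_PORT {
        // Writes to port 0x80 carry no meaning for the VMM: swallow
        // them here so no bus resolution is attempted and no
        // "unhandled write" warning is logged.
        return;
    }
    resolve_and_dispatch(port, data);
}

fn resolve_and_dispatch(_port: u16, _data: &[u8]) {
    // Normal path: resolve the port on the I/O bus and dispatch the
    // write to the matching device.
}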
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When a total ordering between multiple atomic variables is not
required, use Ordering::Acquire with atomic loads and Ordering::Release
with atomic stores.
This will improve performance as these orderings do not require a
memory fence on x86_64, which Ordering::SeqCst does.
Add a comment to the code in the vCPU handling code where it operates on
multiple atomics to explain why Ordering::SeqCst is required.
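For example (a generic sketch, not the actual vCPU code):

use std::sync::atomic::{AtomicBool, Ordering};

static SHUTDOWN_REQUESTED: AtomicBool = AtomicBool::new(false);

fn request_shutdown() {
    // Release is sufficient: prior writes become visible to the thread
    // that observes this store, without the full fence that SeqCst
    // implies on x86_64.
    SHUTDOWN_REQUESTED.store(true, Ordering::Release);
}

fn should_shut_down() -> bool {
    // Acquire pairs with the Release store above.
    SHUTDOWN_REQUESTED.load(Ordering::Acquire)
}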
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The DeviceNode cannot be fully represented as it embeds a Rust-style
enum (i.e. one carrying data), which is instead represented by a simple
associative array.
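For illustration, the kind of type that cannot survive this round trip
(the types and variants here are made up, not the real DeviceNode):

// A Rust enum whose variants carry data...
enum Resource {
    MmioAddressRange { base: u64, length: u64 },
    LegacyIrq(u32),
}

struct DeviceNode {
    id: String,
    resources: Vec<Resource>,
}

// ...has no direct equivalent in a flat associative array, so the
// serialized form can only approximate it with simple key/value pairs.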
Fixes: #1167
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The configuration is stored separately from the Vm in the VMM. The failure
to store the config was preventing the VM from shutting down correctly
as Vmm::vm_delete() checks for the presence of the config.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The live migration support added use of this ioctl but it wasn't
included in the permitted list.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This interface is used by the vCPU thread to delegate responsibility for
handling MMIO/PIO operations and to support different approaches than a
VM exit.
During profiling I found that we were spending 13.75% of the boot CPU
usage acquiring access to the object holding the VmmOps via
ArcSwap::load_full()
13.75% 6.02% vcpu0 cloud-hypervisor [.] arc_swap::ArcSwapAny<T,S>::load_full
|
---arc_swap::ArcSwapAny<T,S>::load_full
|
--13.43%--<hypervisor::kvm::KvmVcpu as hypervisor::cpu::Vcpu>::run
std::sys_common::backtrace::__rust_begin_short_backtrace
core::ops::function::FnOnce::call_once{{vtable-shim}}
std::sys::unix::thread::Thread::new::thread_start
However, since the object implementing VmmOps does not need to be
mutable and it is only used from the vCPU side, we can change the
ownership to a simple Arc<> that is passed in when calling
create_vcpu().
This completely removes the above CPU usage from subsequent profiles.
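A rough sketch of the ownership change (the trait and signature are
heavily simplified, not the real hypervisor API):

use std::sync::Arc;

// Stand-in for the real VmmOps trait.
trait VmmOps: Send + Sync {
    fn pio_write(&self, port: u64, data: &[u8]);
}

struct Vcpu {
    // A plain Arc is enough: the object is never mutated and only the
    // vCPU side uses it, so there is no ArcSwap::load_full() on every
    // exit.
    vm_ops: Arc<dyn VmmOps>,
}

fn create_vcpu(vm_ops: Arc<dyn VmmOps>) -> Vcpu {
    Vcpu { vm_ops }
}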
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add a config parameter to --disk called "_disable_io_uring" (the
underscore prefix indicating it is not for public consumption). Use
this option to disable io_uring if it would otherwise be used.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Now the VM is paused/resumed by the migration process itself.
0. The guest configuration is sent to the destination
1. Dirty page log tracking is started by start_memory_dirty_log()
2. All guest memory is sent to the destination
3. Up to 5 attempts are made to send the dirty guest memory to the
destination...
4. ...before the VM is paused
5. One last set of dirty pages is sent to the destination
6. The guest is snapshotted and sent to the destination
7. When the migration is complete, the destination unpauses the
received VM.
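A very rough sketch of the sending side of this sequence (every helper
name below is invented; the stubs exist only so the sketch stands on
its own):

struct MigrationError;

fn send_config() -> Result<(), MigrationError> { Ok(()) }
fn start_memory_dirty_log() -> Result<(), MigrationError> { Ok(()) }
fn send_all_memory() -> Result<(), MigrationError> { Ok(()) }
fn send_dirty_memory() -> Result<(), MigrationError> { Ok(()) }
fn pause_vm() -> Result<(), MigrationError> { Ok(()) }
fn send_snapshot() -> Result<(), MigrationError> { Ok(()) }

fn send_migration() -> Result<(), MigrationError> {
    send_config()?;            // 0. guest configuration
    start_memory_dirty_log()?; // 1. start dirty page log tracking
    send_all_memory()?;        // 2. all guest memory
    for _ in 0..5 {            // 3. up to 5 dirty-memory passes...
        send_dirty_memory()?;
    }
    pause_vm()?;               // 4. ...before the VM is paused
    send_dirty_memory()?;      // 5. one last set of dirty pages
    send_snapshot()?;          // 6. snapshot of the VM state
    Ok(())                     // 7. the destination unpauses the VM
}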
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows code running in the VMM to access the VM's MemoryManager's
functionality for managing the dirty log, including resetting it as
well as generating a table of the dirty ranges.
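A hedged sketch of the kind of accessors this exposes; apart from
start_memory_dirty_log(), which the migration commit above refers to,
the names and types below are illustrative only:

struct MemoryRange { gpa: u64, length: u64 }
struct MemoryRangeTable { ranges: Vec<MemoryRange> }

struct MemoryManager;

impl MemoryManager {
    // Begin tracking writes to guest RAM.
    fn start_memory_dirty_log(&mut self) {}

    // Build a table of the ranges dirtied since tracking started or
    // was last reset.
    fn dirty_memory_range_table(&self) -> MemoryRangeTable {
        MemoryRangeTable { ranges: Vec::new() }
    }

    // Clear the log so the next pass only reports new writes.
    fn reset_dirty_log(&mut self) {}
}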
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Prior to sending the memory, the full state is not needed, only the
configuration. This is sufficient to create the appropriate structures
in the guest and have the memory allocations ready for filling.
Update the protocol documentation to add a separate config step and
move the state to after the memory is transferred. As the VM is created
in a separate step from restoring it, this requires a slightly
different constructor as well as saving the VM object for the
subsequent commands.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In order to do this we must extend the MemoryManager API to add the
ability to specify the tracking of the dirty pages when creating the
userspace mappings and also keep track of the userspace mappings that
have been created for RAM regions.
Currently the dirty pages are collected into ranges based on a block
level of 64 pages. The algorithm could be tweaked to create smaller
ranges but for now if any page in the block of 64 is dirty the whole
block is added to the range.
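A self-contained sketch of the coalescing described here, using a
per-page dirty bitmap where each u64 covers a block of 64 pages
(illustrative only, not the actual implementation):

const PAGE_SIZE: u64 = 4096;
const BLOCK_PAGES: u64 = 64;

fn dirty_ranges(bitmap: &[u64], region_start_gpa: u64) -> Vec<(u64, u64)> {
    let mut ranges: Vec<(u64, u64)> = Vec::new();
    for (block, bits) in bitmap.iter().enumerate() {
        if *bits != 0 {
            // If any page in the block of 64 is dirty, the whole block
            // is added to the range.
            let gpa = region_start_gpa
                + block as u64 * BLOCK_PAGES * PAGE_SIZE;
            let len = BLOCK_PAGES * PAGE_SIZE;
            match ranges.last_mut() {
                // Merge with the previous range when contiguous.
                Some((last_gpa, last_len))
                    if *last_gpa + *last_len == gpa =>
                {
                    *last_len += len;
                }
                _ => ranges.push((gpa, len)),
            }
        }
    }
    ranges
}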
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
While the addressable space size reduction of 4k is necessary due to
the Linux bug, the 64k alignment of the addressable space size is
required by Windows. This patch satisfies both.
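The net effect is along these lines (a sketch of the arithmetic only,
under the assumption that both adjustments are applied to the computed
size; the real patch lives in the address layout code):

const SZ_4K: u64 = 0x1000;
const SZ_64K: u64 = 0x1_0000;

// Shrink the addressable space by 4k (Linux bug workaround), then
// round the result down to a 64k boundary (Windows requirement).
fn adjusted_addressable_size(size: u64) -> u64 {
    (size - SZ_4K) & !(SZ_64K - 1)
}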
Signed-off-by: Anatol Belski <anbelski@linux.microsoft.com>
This is tested by:
Source VMM:
target/debug/cloud-hypervisor --kernel ~/src/linux/vmlinux \
--pmem file=~/workloads/focal.raw --cpus boot=1 \
--memory size=2048M \
--cmdline"root=/dev/pmem0p1 console=ttyS0" --serial tty --console off \
--api-socket=/tmp/api1 -v
Destination VMM:
target/debug/cloud-hypervisor --api-socket=/tmp/api2 -v
And the following commands:
target/debug/ch-remote --api-socket=/tmp/api1 pause
target/debug/ch-remote --api-socket=/tmp/api2 receive-migration unix:/tmp/foo &
target/debug/ch-remote --api-socket=/tmp/api1 send-migration unix:/tmp/foo
target/debug/ch-remote --api-socket=/tmp/api2 resume
The VM is then responsive on the destination VMM.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows the code to be reused when creating the VM from a snapshot
when doing VM migration.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add API entry points with stub implementation for sending and receiving
a VM from one VMM to another.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Due to a known limitation in the OpenAPITools/openapi-generator tool,
it's impossible to send Go zero values, like false and 0, to
cloud-hypervisor because `omitempty` is added if a field is not
required.
Set cache_size, dax, num_queues and queue_size as required to remove
`omitempty` from the json tag.
Fixes: #1961
Signed-off-by: Julio Montes <julio.montes@intel.com>
This also removes the need to look up the "exe" symlink for finding
the VMM executable path.
Fixes: #1925
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The logic to handle AArch64 system events was: SHUTDOWN and RESET were
both treated as RESET.
Now we handle them differently:
- RESET event will trigger Vmm::vm_reboot(),
- SHUTDOWN event will trigger Vmm::vm_shutdown().
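A simplified sketch of the new dispatch (the enum and function below
are invented for illustration; the real code matches on the
hypervisor's system event exit):

enum SystemEvent {
    Shutdown,
    Reset,
}

enum VmAction {
    Reboot,   // ends up in something like Vmm::vm_reboot()
    Shutdown, // ends up in something like Vmm::vm_shutdown()
}

fn handle_system_event(event: SystemEvent) -> VmAction {
    match event {
        // RESET reboots the VM.
        SystemEvent::Reset => VmAction::Reboot,
        // SHUTDOWN now powers the VM off instead of rebooting it.
        SystemEvent::Shutdown => VmAction::Shutdown,
    }
}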
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Now Vcpu::run() returns a boolean value to VcpuManager, indicating
whether the VM is going to reboot (false) or just continue (true).
Moving the handling of hypervisor VCPU run result from Vcpu to
VcpuManager gives us the flexibility to handle more scenarios like
shutting down on AArch64.
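Roughly, the loop on the manager side now looks like this (a sketch
with invented names, not the actual VcpuManager code):

// Simplified stand-in: run() returns true to continue running, false
// when the guest requested a reboot or shutdown.
struct Vcpu;

impl Vcpu {
    fn run(&mut self) -> bool {
        true
    }
}

fn vcpu_loop(vcpu: &mut Vcpu) {
    while vcpu.run() {
        // Keep executing guest code.
    }
    // The manager, not the vCPU, decides what happens next: reboot on
    // RESET, shut down on SHUTDOWN (AArch64), and so on.
}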
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Rather than filling the guest memory from a file at the point the
guest memory region is created, fill it from the file later. This
simplifies the region creation code but also adds flexibility for
sourcing the guest memory from a source other than an on-disk file.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
As a mirror of bdbea19e23, which ensured that
GuestMemoryMmap::read_exact_from() was used to read all of the file
into the region, ensure that all of the guest memory region is written
to disk.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This gives a nicer user experience, and this error can now be used as
the source for other errors based on it.
See: #1910
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Any occurrence of a variable containing `ext_region` is replaced with
the less confusing name `saved_region`. The point is to clearly
identify the memory regions that might have been saved during a
snapshot, while the `ext` prefix, standing for `external`, was pretty
unclear.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In the context of saving the memory regions content through snapshot,
using the term "backing file" brings confusion with the actual backing
file that might back the memory mapping.
To avoid such conflicting naming, the 'backing_file' field from the
MemoryRegion structure gets replaced with 'content', as it designates
the potential file containing the memory region data.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Use GuestRegionMmap::read_exact_from() to ensure that all of the file
is read into the guest. This addresses an issue where
GuestRegionMmap::read_from() was only copying the first 2GiB of the
memory, which led to snapshot restore failing when the guest RAM was
2GiB or greater.
This change also propagates any error from the copying upwards.
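The fix boils down to preferring the "exact" variant, roughly as below
(a sketch: only read_from()/read_exact_from() are taken from the
vm-memory Bytes trait, the surrounding function is illustrative):

use std::io::Read;

use vm_memory::{Bytes, GuestAddress};

// Copy `size` bytes from the saved file back into guest memory.
// read_exact_from() keeps reading until the full count has been
// copied, whereas read_from() may stop after a single (possibly short)
// read, which is why restores were truncated at 2GiB.
fn restore_memory<M, F>(
    mem: &M,
    addr: GuestAddress,
    src: &mut F,
    size: usize,
) -> Result<(), M::E>
where
    M: Bytes<GuestAddress>,
    F: Read,
{
    mem.read_exact_from(addr, src, size)
}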
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When restoring, if a region of RAM is backed by anonymous memory, i.e.
from memfd_create(), then copy the contents of the RAM from the file
that has been saved to disk.
Previously the code would map the memory from that file into the guest
using a MAP_PRIVATE mapping. This has the effect of minimising the
restore time but introduces an issue where the restored VM does not
have the same structure as the snapshotted VM; in particular, memory
that was anonymously backed in the original is backed by files in the
restored VM.
This creates two problems:
* The snapshot data is mapped from files for the pages of the guest
which prevents the storage from being reclaimed.
* When snapshotting again the guest memory will not be correctly
saved: it will look like it was backed by a file, so it will not be
written to disk, but as it is a MAP_PRIVATE mapping the changes will
never be written back to that file either. This results in incorrect
behaviour.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The standalone `--balloon` parameter being fully functional at this
point, we can get rid of the balloon options from the --memory
parameter.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that we have a new dedicated way of asking for a balloon through the
CLI and the REST API, we can move all the balloon code to the device
manager. This allows us to simplify the memory manager, which is already
quite complex.
It also simplifies the behavior of the balloon resizing command. Instead
of providing the expected size for the RAM, which is complex when memory
zones are involved, it now expects the balloon size. This is a much more
straightforward behavior as it really resizes the balloon to the desired
size. In addition to the simplification, the benefit of this approach is
that it does not need to be tied to the memory manager at all.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This introduces a new way of defining the virtio-balloon device. Instead
of going through the --memory parameter, the idea is to consider balloon
as a standalone virtio device.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The snapshot/restore feature is not working because some CPU states are
not properly saved, which means they can't be restored later on.
First thing, we ensure the CPUID is stored so that it can be properly
restored later. The code is simplified and pushed down to the hypervisor
crate.
Second thing, we identify for each vCPU if the Hyper-V SynIC device is
emulated or not. In case it is, that means some specific MSRs will be
set by the guest. These MSRs must be saved in order to properly restore
the VM.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The watchdog device is created through the "--watchdog" parameter. At
most a single watchdog can be created per VM.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Before virtio-mmio was removed, we passed an optional PCI space
address parameter to the AArch64 code for generating the FDT. The
address was none if the transport was MMIO.
Now that virtio-pci is the only option, the parameter is mandatory.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Virtio-mmio is removed; virtio-pci is now the only option for the
virtio transport layer. We use MSI for PCI device interrupts, but
GICv2, the legacy interrupt controller, doesn't support MSI. GICv2 is
therefore not very practical for Cloud Hypervisor, so we can remove
it.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>