With Rust 1.49, using an attribute on an expression (here, one passed
as a function parameter) is not allowed. The recommended workaround is
to put the expression in a new block.
error[E0658]: attributes on expressions are experimental
--> vmm/src/memory_manager.rs:698:17
|
698 | #[cfg(target_arch = "x86_64")]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: see issue #15701 <https://github.com/rust-lang/rust/issues/15701> for more information
error: removing an expression is not supported in this position
--> vmm/src/memory_manager.rs:698:17
|
698 | #[cfg(target_arch = "x86_64")]
|
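A minimal sketch of the workaround (illustrative code, not the actual
contents of memory_manager.rs):

    fn main() {
        // Rejected on stable Rust 1.49: `#[cfg]` directly on an expression.
        // let flag = #[cfg(target_arch = "x86_64")] true;

        // Accepted: put the attribute on statements inside a new block.
        let flag = {
            #[cfg(target_arch = "x86_64")]
            let f = true;
            #[cfg(not(target_arch = "x86_64"))]
            let f = false;
            f
        };
        println!("x86_64: {}", flag);
    }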
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add an "fd=" parameter to allow specifying a TAP fd to use. Currently
only one fd for one queue pair is supported.
Fixes: #2052
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When a device is ready to be activated, signal to the VMM thread via an
EventFd that there is a device to be activated. When the VMM receives a
notification on that EventFd, it notifies the device manager to attempt
to activate any devices that have not yet been activated.
As a side effect the VMM thread will create the virtio device threads.
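A minimal sketch of the signalling pattern, assuming vmm_sys_util's
EventFd (the activation call is a hypothetical placeholder, not the
real DeviceManager API):

    use vmm_sys_util::eventfd::{EventFd, EFD_NONBLOCK};

    fn main() -> std::io::Result<()> {
        // Shared EventFd; in the real code it is registered with the VMM's
        // epoll loop rather than read inline like this.
        let activate_evt = EventFd::new(EFD_NONBLOCK)?;

        // Device side: signal that a device is ready to be activated.
        activate_evt.write(1)?;

        // VMM side: consume the notification, then ask the device manager
        // to activate any devices that have not been activated yet.
        activate_evt.read()?;
        // device_manager.activate_pending_devices(); // hypothetical
        Ok(())
    }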
Fixes: #1863
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This can be used to indicate to the caller that it should wait on the
barrier before returning, as there is some asynchronous activity
triggered by the write which requires the KVM exit to block until it
has completed.
This is useful for having the vCPU thread wait for the VMM thread to
proceed with activating the virtio devices.
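A rough sketch of the pattern, using std::sync::Barrier (the handler
signature here is illustrative, not the actual bus API):

    use std::sync::{Arc, Barrier};
    use std::thread;

    // Illustrative: a write handler may hand back a barrier that the vCPU
    // thread must wait on before completing the KVM exit.
    fn handle_write() -> Option<Arc<Barrier>> {
        // Two parties: the vCPU thread and the VMM thread doing the work.
        let barrier = Arc::new(Barrier::new(2));
        let vmm_side = barrier.clone();
        thread::spawn(move || {
            // ... e.g. activate a virtio device on the VMM thread ...
            vmm_side.wait();
        });
        Some(barrier)
    }

    fn main() {
        // vCPU side: block the exit until the asynchronous work completes.
        if let Some(barrier) = handle_write() {
            barrier.wait();
        }
    }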
See #1863
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This is the initial folder structure of the mshv module inside
the hypervisor crate. The aim of this module is to add support for
Microsoft Hyper-V as a hypervisor.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
There are some code paths and functions which are purely KVM specific
for now; we don't have that support in mshv at the moment, but we plan
to add it in the future. We feature-guard such code with KVM. For
example, KVM has mp_state and CPU clock support, which we don't have
for mshv. In order to build that code, we compile it only for KVM.
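A minimal sketch of such a guard, assuming a `kvm` cargo feature (the
function and its body are illustrative):

    // Only compiled when the `kvm` feature is enabled; mshv has no
    // equivalent of mp_state yet, so the code is gated rather than stubbed.
    #[cfg(feature = "kvm")]
    fn save_mp_state() {
        // ... KVM-specific: query the vCPU mp_state via KVM ioctls ...
    }

    fn main() {
        #[cfg(feature = "kvm")]
        save_mp_state();
    }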
Signed-off-by: Muminul Islam <muislam@microsoft.com>
When a PIO write to 0x80 is seen, which is a special case, handle it
and then return without going through the usual address resolution.
This removes an extra warning that was reported.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When a total ordering between multiple atomic variables is not
required, use Ordering::Acquire with atomic loads and Ordering::Release
with atomic stores.
This will improve performance, as these orderings do not require the
memory fence on x86_64 that Ordering::SeqCst does.
Add a comment in the vCPU handling code where it operates on multiple
atomics to explain why Ordering::SeqCst is still required there.
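A small example of the single-variable pattern (the flag is
illustrative):

    use std::sync::atomic::{AtomicBool, Ordering};

    static READY: AtomicBool = AtomicBool::new(false);

    fn main() {
        // Release on the store pairs with Acquire on the load; on x86_64
        // neither needs the full memory fence that Ordering::SeqCst emits.
        READY.store(true, Ordering::Release);
        if READY.load(Ordering::Acquire) {
            // Everything written before the store is now visible here.
        }
    }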
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The DeviceNode cannot be fully represented, as it embeds a Rust-style
enum (i.e. one with data), which is instead represented by a simple
associative array.
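A small illustration of the mismatch, using serde_json with a
hypothetical enum (not the actual one embedded in DeviceNode):

    use serde::Serialize;

    // A Rust-style enum carrying data has no direct scalar representation,
    // so serializers render it as an associative array keyed by variant.
    #[derive(Serialize)]
    enum Resource {
        MmioAddressRange { base: u64, size: u64 },
        LegacyIrq(u32),
    }

    fn main() {
        let r = Resource::MmioAddressRange { base: 0x1000, size: 0x1000 };
        // Prints: {"MmioAddressRange":{"base":4096,"size":4096}}
        println!("{}", serde_json::to_string(&r).unwrap());
        let i = Resource::LegacyIrq(4);
        // Prints: {"LegacyIrq":4}
        println!("{}", serde_json::to_string(&i).unwrap());
    }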
Fixes: #1167
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The configuration is stored separately from the Vm in the VMM. The
failure
to store the config was preventing the VM from shutting down correctly
as Vmm::vm_delete() checks for the presence of the config.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The live migration support added use of this ioctl but it wasn't
included in the permitted list.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This interface is used by the vCPU thread to delegate responsibility for
handling MMIO/PIO operations and to support different approaches than a
VM exit.
During profiling I found that we were spending 13.75% of the boot CPU
usage acquiring access to the object holding the VmmOps via
ArcSwap::load_full():
    13.75%     6.02%  vcpu0  cloud-hypervisor  [.] arc_swap::ArcSwapAny<T,S>::load_full
            |
            ---arc_swap::ArcSwapAny<T,S>::load_full
               |
               --13.43%--<hypervisor::kvm::KvmVcpu as hypervisor::cpu::Vcpu>::run
                         std::sys_common::backtrace::__rust_begin_short_backtrace
                         core::ops::function::FnOnce::call_once{{vtable-shim}}
                         std::sys::unix::thread::Thread::new::thread_start
However, since the object implementing VmmOps does not need to be
mutable and it is only used from the vCPU side, we can change the
ownership to a simple Arc<> that is passed in when calling
create_vcpu().
This completely removes the above CPU usage from subsequent profiles.
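A sketch of the ownership change (trait and signatures heavily
simplified; `VmmOps` here stands in for the real trait):

    use std::sync::Arc;

    // Simplified stand-in: the object is immutable and only used from the
    // vCPU side, so a plain Arc is sufficient.
    trait VmmOps: Send + Sync {
        fn pio_write(&self, port: u64, data: &[u8]);
    }

    struct Ops;
    impl VmmOps for Ops {
        fn pio_write(&self, _port: u64, _data: &[u8]) {}
    }

    struct Vcpu {
        vmm_ops: Arc<dyn VmmOps>,
    }

    // The Arc is handed over once at vCPU creation time instead of being
    // fetched with ArcSwap::load_full() on every VM exit.
    fn create_vcpu(vmm_ops: Arc<dyn VmmOps>) -> Vcpu {
        Vcpu { vmm_ops }
    }

    fn main() {
        let vcpu = create_vcpu(Arc::new(Ops));
        vcpu.vmm_ops.pio_write(0x80, &[0u8]);
    }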
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add a config parameter to --disk called "_disable_io_uring" (the
underscore prefix indicating it is not for public consumption). Use
this option to disable io_uring if it would otherwise be used.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Now the VM is paused/resumed by the migration process itself.
0. The guest configuration is sent to the destination
1. Dirty page log tracking is started by start_memory_dirty_log()
2. All guest memory is sent to the destination
3. Up to 5 attempts are made to send the dirty guest memory to the
destination...
4. ...before the VM is paused
5. One last set of dirty pages is sent to the destination
6. The guest is snapshotted and sent to the destination
7. When the migration is completed, the destination unpauses the
received VM.
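In rough, runnable pseudocode (every helper below is a hypothetical
stub mirroring the steps above; the early break when no pages are
dirty is an assumption):

    // Hypothetical stubs; the real logic lives in the VMM migration code.
    fn send_config() {}
    fn start_memory_dirty_log() {}
    fn send_all_memory() {}
    fn send_dirty_memory() -> usize { 0 } // returns pages sent this pass
    fn pause_vm() {}
    fn snapshot_and_send_state() {}

    fn send_migration() {
        send_config(); // 0. configuration
        start_memory_dirty_log(); // 1. start dirty page tracking
        send_all_memory(); // 2. full copy of guest memory
        for _ in 0..5 { // 3. up to 5 pre-pause dirty passes
            if send_dirty_memory() == 0 {
                break;
            }
        }
        pause_vm(); // 4. pause the source VM
        send_dirty_memory(); // 5. one last set of dirty pages
        snapshot_and_send_state(); // 6. snapshot and device state
        // 7. the destination unpauses the received VM
    }

    fn main() {
        send_migration();
    }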
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows code running in the VMM to access the VM's MemoryManager
functionality for managing the dirty log, including resetting it as
well as generating a table.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Prior to sending the memory, the full state is not needed, only the
configuration. This is sufficient to create the appropriate structures
in the guest and have the memory allocations ready for filling.
Update the protocol documentation to add a separate config step and
move the state to after the memory is transferred. As the VM is created
in a separate step from restoring it, this requires a slightly
different constructor, as well as saving the VM object for the
subsequent commands.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In order to do this, we must extend the MemoryManager API to add the
ability to specify dirty page tracking when creating the userspace
mappings, and also to keep track of the userspace mappings that have
been created for RAM regions.
Currently the dirty pages are collected into ranges based on a block
size of 64 pages. The algorithm could be tweaked to create smaller
ranges, but for now, if any page in a block of 64 is dirty, the whole
block is added to the range.
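A sketch of the coalescing scheme (the function is illustrative; the
real code walks the dirty bitmap returned by the hypervisor):

    const PAGE_SIZE: u64 = 4096;
    const BLOCK_PAGES: u64 = 64; // one u64 word of the dirty bitmap

    // Turn a dirty bitmap (one u64 per 64 pages) into (start, length) byte
    // ranges: if any page in a 64-page block is dirty, the whole block is
    // included, and adjacent dirty blocks are merged into one range.
    fn dirty_ranges(bitmap: &[u64]) -> Vec<(u64, u64)> {
        let block_bytes = BLOCK_PAGES * PAGE_SIZE;
        let mut ranges: Vec<(u64, u64)> = Vec::new();
        for (i, word) in bitmap.iter().enumerate() {
            if *word != 0 {
                let start = i as u64 * block_bytes;
                match ranges.last_mut() {
                    // Merge with the previous block if contiguous.
                    Some((s, l)) if *s + *l == start => *l += block_bytes,
                    _ => ranges.push((start, block_bytes)),
                }
            }
        }
        ranges
    }

    fn main() {
        // Blocks 0 and 1 dirty, block 2 clean, block 3 dirty.
        let bitmap = [1u64, 0x8000, 0, 2];
        assert_eq!(dirty_ranges(&bitmap), vec![(0, 524288), (786432, 262144)]);
    }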
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
While the addressable space size reduction of 4k is necessary due to
the Linux bug, the 64k alignment of the addressable space size is
required by Windows. This patch satisfies both.
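A sketch of one way to satisfy both constraints (the
align-down-after-reduction order is an assumption, and the starting
value is made up):

    fn main() {
        // Shrink the addressable space size by 4 KiB (Linux bug workaround),
        // then align it down to 64 KiB as required by Windows; the result
        // satisfies both constraints.
        let size: u64 = 0x1_0000_1234;
        let reduced = size - 0x1000;
        let aligned = reduced & !0xFFFF;
        assert_eq!(aligned % 0x10000, 0);
        assert!(aligned <= size - 0x1000);
        println!("{:#x}", aligned);
    }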
Signed-off-by: Anatol Belski <anbelski@linux.microsoft.com>
This is tested by:
Source VMM:
target/debug/cloud-hypervisor --kernel ~/src/linux/vmlinux \
--pmem file=~/workloads/focal.raw --cpus boot=1 \
--memory size=2048M \
--cmdline"root=/dev/pmem0p1 console=ttyS0" --serial tty --console off \
--api-socket=/tmp/api1 -v
Destination VMM:
target/debug/cloud-hypervisor --api-socket=/tmp/api2 -v
And the following commands:
target/debug/ch-remote --api-socket=/tmp/api1 pause
target/debug/ch-remote --api-socket=/tmp/api2 receive-migration unix:/tmp/foo &
target/debug/ch-remote --api-socket=/tmp/api1 send-migration unix:/tmp/foo
target/debug/ch-remote --api-socket=/tmp/api2 resume
The VM is then responsive on the destination VMM.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows the code to be reused when creating the VM from a snapshot
when doing VM migration.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add API entry points with stub implementation for sending and receiving
a VM from one VMM to another.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Due to a known limitation in the OpenAPITools/openapi-generator tool,
it's impossible to send Go zero types, like false and 0, to
cloud-hypervisor, because `omitempty` is added if a field is not
required.
Set cache_size, dax, num_queues and queue_size as required to remove
`omitempty` from the JSON tag.
Fixes: #1961
Signed-off-by: Julio Montes <julio.montes@intel.com>
This also removes the need to look up the "exe" symlink for finding
the VMM executable path.
Fixes: #1925
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The logic to handle AArch64 system events was: SHUTDOWN and RESET were
both treated as RESET.
Now we handle them differently:
- RESET event will trigger Vmm::vm_reboot(),
- SHUTDOWN event will trigger Vmm::vm_shutdown().
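A sketch of the new dispatch (the event enum is illustrative,
mirroring the KVM system event kinds):

    // Illustrative stand-in for the hypervisor's system event type.
    enum SystemEvent {
        Shutdown,
        Reset,
    }

    fn handle_system_event(event: SystemEvent) {
        match event {
            // Previously both variants fell through to a reboot.
            SystemEvent::Reset => { /* Vmm::vm_reboot() */ }
            SystemEvent::Shutdown => { /* Vmm::vm_shutdown() */ }
        }
    }

    fn main() {
        handle_system_event(SystemEvent::Reset);
        handle_system_event(SystemEvent::Shutdown);
    }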
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Now Vcpu::run() returns a boolean value to VcpuManager, indicating
whether the VM is going to reboot (false) or just continue (true).
Moving the handling of hypervisor VCPU run result from Vcpu to
VcpuManager gives us the flexibility to handle more scenarios like
shutting down on AArch64.
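A minimal sketch of the contract (signatures heavily simplified):

    // Stand-in for Vcpu::run(): false means the VM is going to reboot,
    // true means execution should simply continue.
    fn vcpu_run(got_reset_exit: bool) -> bool {
        !got_reset_exit
    }

    fn main() {
        // VcpuManager side: the manager, not the vCPU, decides what to do
        // with the result, which leaves room for e.g. AArch64 shutdown.
        if !vcpu_run(true) {
            // ... trigger Vmm::vm_reboot() ...
        }
    }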
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Rather than filling the guest memory from a file at the point the guest
memory region is created, fill it from the file later. This simplifies
the region creation code and also adds flexibility for sourcing the
guest memory from a source other than an on-disk file.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
As a mirror of bdbea19e23, which ensured
that GuestMemoryMmap::read_exact_from() was used to read all of the
file into the region, ensure that all of the guest memory region is
written to disk.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This gives a nicer user experience, and this error can now be used as
the source for other errors based on it.
See: #1910
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Any occurrence of a variable containing `ext_region` is replaced with
the less confusing name `saved_region`. The point is to clearly identify
the memory regions that might have been saved during a snapshot, while
the `ext` standing for `external` was pretty unclear.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>