This triggers an ACPI power button event inside the guest in order to
cleanly shut down the guest.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Use the ACPI GED device to trigger a notification of type
POWER_BUTTON_CHANGED, which will ultimately lead to the guest being
notified.
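A minimal sketch of the idea, with the bit position and names assumed
rather than taken from the actual GED implementation:

    use std::sync::atomic::{AtomicU8, Ordering};

    // Assumed bit for the power-button event in the GED status field.
    const POWER_BUTTON_CHANGED: u8 = 1 << 2;

    // Set the status bit that the guest's AML handler reads, then assert
    // the GED interrupt so the guest services the event.
    fn notify_power_button(status: &AtomicU8, trigger_irq: impl Fn()) {
        status.fetch_or(POWER_BUTTON_CHANGED, Ordering::SeqCst);
        trigger_irq();
    }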
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Rename this bitfield as it will also be used for non-hotplug purposes,
such as synthesising a power button event.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Older libc versions (such as on RHEL7) use open() rather than openat().
This was demonstrated through a failure to open /etc/localtime from the
gmtime() libc call triggered from the vCPU thread (CMOS device).
Fixes: #2111
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Sometimes when running under CI, tests fail due to a barrier not being
released, and the guest blocks on an MMIO write. Add further debugging
to try to identify the issue.
See: #2118
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Based on the LKML feedback, the devices under /dev/sgx/* are
not justified. SGX RFC v40 moves the SGX device nodes to /dev/sgx_*
and this is reflected in kvm-sgx (next branch) too.
Update cloud-hypervisor code and documentation to follow this.
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
If the vCPU thread calls log!(), the time difference between the call
time and the boot-up time is reported. In most environments and
architectures this is covered by a vDSO call rather than a syscall;
however, on some platforms it turns into a syscall.
Fixes: #2080
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
With Rust 1.49, using an attribute on a function parameter is not
allowed. The recommended workaround is to put the parameter in a new
block.
error[E0658]: attributes on expressions are experimental
   --> vmm/src/memory_manager.rs:698:17
    |
698 |                 #[cfg(target_arch = "x86_64")]
    |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
    = note: see issue #15701 <https://github.com/rust-lang/rust/issues/15701> for more information

error: removing an expression is not supported in this position
   --> vmm/src/memory_manager.rs:698:17
    |
698 |                 #[cfg(target_arch = "x86_64")]
    |
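A minimal, self-contained illustration of that workaround (not the
actual memory_manager.rs change):

    fn main() {
        // Rejected on stable: an attribute directly on an expression in
        // argument position, e.g. f(#[cfg(target_arch = "x86_64")] x).
        // Wrapping the value in a block moves the attribute onto a
        // statement, which is stable.
        let value = {
            #[cfg(target_arch = "x86_64")]
            let v = "x86_64";
            #[cfg(not(target_arch = "x86_64"))]
            let v = "other";
            v
        };
        println!("{}", value);
    }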
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add an "fd=" parameter to allow specifying a TAP fd to use. Currently
only one fd for one queue pair is supported.
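A hypothetical invocation, assuming the parameter hangs off --net and
that fd 3 is an already-open TAP device inherited by the process:

    cloud-hypervisor ... --net fd=3,mac=12:34:56:78:90:ab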
Fixes: #2052
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When a device is ready to be activated, signal the VMM thread via an
EventFd. When the VMM receives a notification on that EventFd, it tells
the device manager to attempt to activate any devices that have not yet
been activated.
As a side effect, the VMM thread will create the virtio device threads.
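A minimal sketch of the signalling pattern using vmm-sys-util's EventFd
(function and variable names are illustrative, not the actual VMM code):

    use vmm_sys_util::eventfd::EventFd;

    // Device side: a device has become ready, so wake the VMM event loop.
    fn signal_activation(activate_evt: &EventFd) -> std::io::Result<()> {
        activate_evt.write(1)
    }

    // VMM side: epoll reported the fd readable, so drain the counter and
    // ask the device manager to activate anything still pending (which
    // also spawns the virtio device threads).
    fn handle_activation(activate_evt: &EventFd) -> std::io::Result<()> {
        let _ = activate_evt.read()?;
        // device_manager.activate_pending_devices(); // hypothetical call
        Ok(())
    }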
Fixes: #1863
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This can be used to indicate to the caller that it should wait on the
barrier before returning, as there is some asynchronous activity
triggered by the write which requires the KVM exit to block until it
has completed.
This is useful for having the vCPU thread wait for the VMM thread to
proceed with activating the virtio devices.
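A sketch of the calling pattern on the vCPU side, assuming the bus
write now returns an optional barrier:

    use std::sync::{Arc, Barrier};

    fn complete_mmio_write(write: impl FnOnce() -> Option<Arc<Barrier>>) {
        if let Some(barrier) = write() {
            // Block the KVM exit until the VMM thread has finished the
            // asynchronous work (e.g. device activation) and also waits.
            barrier.wait();
        }
    }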
See #1863
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This is the initial folder structure of the mshv module inside the
hypervisor crate. The aim of this module is to add Microsoft Hyper-V
as a supported hypervisor.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
Some parts of the code base and some functions are purely KVM specific
for now; we don't have that support in mshv at the moment, but we plan
to add it in the future. We are feature-guarding these with KVM. For
example, KVM has mp_state and CPU clock support, which we don't have
for mshv. To keep the build working, that code is compiled only for
KVM.
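A minimal sketch of the gating pattern, assuming a "kvm" Cargo feature
(function names are placeholders):

    // KVM-only capability, compiled out of mshv builds.
    #[cfg(feature = "kvm")]
    fn save_cpu_clock() { /* uses KVM's clock support */ }

    fn snapshot_cpu() {
        #[cfg(feature = "kvm")]
        save_cpu_clock();
        // common, hypervisor-agnostic work continues here
    }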
Signed-off-by: Muminul Islam <muislam@microsoft.com>
A PIO write to 0x80 is a special case: handle it and then return
without going through the resolve.
This removes an extra warning that would otherwise be reported.
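A sketch of the early return (surrounding code simplified):

    // Port 0x80 is the POST/debug port, commonly written purely for I/O
    // delay; no device sits behind it, so skip bus resolution entirely.
    fn pio_write(port: u16, _data: &[u8]) {
        if port == 0x80 {
            return; // silently discard instead of warning
        }
        // ... resolve the device on the PIO bus and forward the write ...
    }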
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When a total ordering between multiple atomic variables is not
required, use Ordering::Acquire with atomic loads and Ordering::Release
with atomic stores.
This will improve performance, as these do not require a memory fence
on x86_64, which Ordering::SeqCst does.
Add a comment to the vCPU handling code where it operates on multiple
atomics to explain why Ordering::SeqCst is required there.
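For instance, a release store pairs with an acquire load without the
full fence a SeqCst store needs on x86_64:

    use std::sync::atomic::{AtomicBool, Ordering};

    static READY: AtomicBool = AtomicBool::new(false);

    // Writer: Release makes all earlier writes visible to a thread that
    // observes the flag with Acquire.
    fn publish() {
        READY.store(true, Ordering::Release);
    }

    // Reader: Acquire pairs with the Release store; on x86_64 neither
    // side compiles to a fence, unlike SeqCst.
    fn is_ready() -> bool {
        READY.load(Ordering::Acquire)
    }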
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The DeviceNode cannot be fully represented, as it embeds a Rust-style
enum (i.e. one with data), which is instead represented by a simple
associative array.
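For illustration, the kind of data-carrying enum meant here (variant
names hypothetical, not the actual DeviceNode contents):

    enum Resource {
        MmioAddressRange { base: u64, size: u64 },
        LegacyIrq(u32),
    }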
Fixes: #1167
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The configuration is stored separately from the Vm in the VMM. The
failure to store the config was preventing the VM from shutting down
correctly, as Vmm::vm_delete() checks for the presence of the config.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The live migration support added use of this ioctl, but it wasn't
included in the permitted list.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This interface is used by the vCPU thread to delegate responsibility
for handling MMIO/PIO operations and to support different approaches
to handling them than a VM exit.
During profiling I found that we were spending 13.75% of the boot CPU
usage acquiring access to the object holding the VmmOps via
ArcSwap::load_full():
 13.75%  6.02%  vcpu0  cloud-hypervisor  [.] arc_swap::ArcSwapAny<T,S>::load_full
        |
        ---arc_swap::ArcSwapAny<T,S>::load_full
           |
           --13.43%--<hypervisor::kvm::KvmVcpu as hypervisor::cpu::Vcpu>::run
                     std::sys_common::backtrace::__rust_begin_short_backtrace
                     core::ops::function::FnOnce::call_once{{vtable-shim}}
                     std::sys::unix::thread::Thread::new::thread_start
However, since the object implementing VmmOps does not need to be
mutable and is only used from the vCPU side, we can change the
ownership to a simple Arc<> that is passed in when calling
create_vcpu().
This completely removes the above CPU usage from subsequent profiles.
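A sketch of the resulting ownership model (signatures simplified; the
trait stands in for the real VmmOps):

    use std::sync::Arc;

    trait VmmOps: Send + Sync {}

    // The immutable VmmOps implementation is handed over once at vCPU
    // creation, so the run loop pays a plain Arc deref per exit instead
    // of an ArcSwap::load_full().
    fn create_vcpu(_id: u8, _vmm_ops: Arc<dyn VmmOps>) {
        // store the Arc inside the vCPU; cloning is one refcount bump
    }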
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add a config parameter to --disk called "_disable_io_uring" (the
underscore prefix indicating that it is not for public consumption).
Use this option to disable io_uring if it would otherwise be used.
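A hypothetical invocation (the value syntax is assumed to follow the
usual on/off convention):

    cloud-hypervisor ... --disk path=os.img,_disable_io_uring=on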
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Now the VM is paused/resumed by the migration process itself; the steps
below are sketched in code after the list.
0. The guest configuration is sent to the destination
1. Dirty page log tracking is started by start_memory_dirty_log()
2. All guest memory is sent to the destination
3. Up to 5 attempts are made to send the dirty guest memory to the
destination...
4. ...before the VM is paused
5. One last set of dirty pages is sent to the destination
6. The guest is snapshotted and sent to the destination
7. When the migration is complete, the destination unpauses the
received VM.
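A condensed sketch of the send side (all types and helpers are
illustrative stubs, not the actual VMM API):

    struct Vm;
    struct Dest;

    impl Vm {
        fn start_dirty_log(&mut self) {}
        fn pause(&mut self) {}
    }

    impl Dest {
        fn send_config(&mut self, _vm: &Vm) {}
        fn send_all_memory(&mut self, _vm: &Vm) {}
        fn send_dirty_memory(&mut self, _vm: &Vm) {}
        fn send_snapshot(&mut self, _vm: &Vm) {}
    }

    fn migrate(vm: &mut Vm, dest: &mut Dest) {
        dest.send_config(vm);           // 0. configuration
        vm.start_dirty_log();           // 1. begin dirty page tracking
        dest.send_all_memory(vm);       // 2. full copy of guest RAM
        for _ in 0..5 {
            dest.send_dirty_memory(vm); // 3. iterative dirty passes
        }
        vm.pause();                     // 4. stop the guest
        dest.send_dirty_memory(vm);     // 5. final dirty pass
        dest.send_snapshot(vm);         // 6. device state
        // 7. the destination unpauses the received VM
    }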
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows code running in the VMM to access the VM's MemoryManager
functionality for managing the dirty log, including resetting it as
well as generating a table.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Prior to sending the memory, the full state is not needed, only the
configuration. This is sufficient to create the appropriate structures
in the guest and have the memory allocations ready for filling.
Update the protocol documentation to add a separate config step and
move the state to after the memory is transferred. As the VM is created
in a separate step from restoring it, this requires a slightly
different constructor, as well as saving the VM object for the
subsequent commands.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In order to do this, we must extend the MemoryManager API to add the
ability to specify dirty-page tracking when creating the userspace
mappings, and also to keep track of the userspace mappings that have
been created for RAM regions.
Currently the dirty pages are collected into ranges based on a block
size of 64 pages. The algorithm could be tweaked to create smaller
ranges, but for now, if any page in a block of 64 is dirty, the whole
block is added to the range.
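A sketch of that coalescing (names hypothetical): each u64 word of the
bitmap covers a block of 64 pages, any set bit promotes the whole
block, and adjacent blocks are merged:

    fn dirty_ranges(bitmap: &[u64], region_base: u64) -> Vec<(u64, u64)> {
        const PAGE_SIZE: u64 = 4096;
        const BLOCK_BYTES: u64 = 64 * PAGE_SIZE; // one bitmap word
        let mut ranges: Vec<(u64, u64)> = Vec::new(); // (GPA, length)
        for (i, word) in bitmap.iter().enumerate() {
            if *word == 0 {
                continue; // no dirty page anywhere in this block of 64
            }
            let start = region_base + i as u64 * BLOCK_BYTES;
            // Merge with the previous range when contiguous.
            if let Some(last) = ranges.last_mut() {
                if last.0 + last.1 == start {
                    last.1 += BLOCK_BYTES;
                    continue;
                }
            }
            ranges.push((start, BLOCK_BYTES));
        }
        ranges
    }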
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
While the addressable space size reduction of 4k is necessary due to
the Linux bug, the 64k alignment of the addressable space size is
required by Windows. This patch satisfies both.
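One way to satisfy both constraints at once (helper name hypothetical):

    // Shave 4 KiB off for the Linux bug, then round down to a 64 KiB
    // boundary for Windows; the result meets both requirements.
    fn addressable_size(raw: u64) -> u64 {
        const FOUR_K: u64 = 4 << 10;
        const SIXTY_FOUR_K: u64 = 64 << 10;
        (raw - FOUR_K) & !(SIXTY_FOUR_K - 1)
    }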
Signed-off-by: Anatol Belski <anbelski@linux.microsoft.com>