The emulator gets a CPU state from a CpuStateManager instance, emulates
the passed instruction stream, and returns the modified CPU state.
The emulator is a skeleton for now since it comes with an empty
instruction mnemonic map.
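For illustration, the intended flow looks roughly like the sketch below;
everything except the CpuStateManager name is an assumed stand-in, not
the actual API:

    // Illustrative sketch only: all names except CpuStateManager are
    // assumptions, not the real cloud-hypervisor types.
    #[derive(Debug)]
    struct EmulationError(String);

    // Hypothetical provider of per-vCPU state for the emulator.
    trait CpuStateManager {
        type State: Clone;
        fn cpu_state(&self, cpu_id: usize) -> Result<Self::State, EmulationError>;
    }

    struct Emulator<M: CpuStateManager> {
        state_manager: M,
    }

    impl<M: CpuStateManager> Emulator<M> {
        // Fetch the current CPU state, walk the instruction stream through
        // the (currently empty) mnemonic map, and return the modified state.
        fn emulate(&self, cpu_id: usize, insn_stream: &[u8]) -> Result<M::State, EmulationError> {
            let state = self.state_manager.cpu_state(cpu_id)?;
            // Decoding and dispatching would happen here; with an empty
            // mnemonic map the state is returned unmodified.
            let _ = insn_stream;
            Ok(state)
        }
    }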
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
This also adds an InstructionMap helper structure to map x86 mnemonic
codes to instruction handlers.
Any instruction emulation implementation should then boil down to
implementing InstructionHandler for each supported mnemonic.
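A minimal sketch of how the two pieces fit together is shown below; the
exact signatures are assumptions (the real handlers will need more
context, such as platform access):

    use std::collections::HashMap;

    // Sketch only: everything except the InstructionHandler and
    // InstructionMap names is assumed.
    #[derive(Debug)]
    struct EmulationError(String);

    struct CpuState; // Stand-in for the emulator's CPU state type.

    // One handler per supported mnemonic: adding an instruction boils down
    // to implementing this trait and registering the handler in the map.
    trait InstructionHandler {
        fn emulate(&self, state: &mut CpuState) -> Result<(), EmulationError>;
    }

    // Maps an x86 mnemonic to its instruction handler.
    struct InstructionMap {
        handlers: HashMap<&'static str, Box<dyn InstructionHandler>>,
    }

    impl InstructionMap {
        fn new() -> Self {
            InstructionMap {
                handlers: HashMap::new(),
            }
        }

        fn add(&mut self, mnemonic: &'static str, handler: Box<dyn InstructionHandler>) {
            self.handlers.insert(mnemonic, handler);
        }

        fn handler(&self, mnemonic: &str) -> Option<&dyn InstructionHandler> {
            self.handlers.get(mnemonic).map(|h| h.as_ref())
        }
    }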
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
How minimal the state can be is defined by the number of emulated
instructions. Carrying all GPRs, all CRs, segment registers and table registers should
cover quite a few instructions.
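For illustration, such a state could look like the sketch below; the
field layout is an assumption, not the actual definition:

    // Assumed layout, for illustration only.
    #[derive(Clone, Copy, Default)]
    struct SegmentRegister {
        base: u64,
        limit: u32,
        selector: u16,
        flags: u16, // type, present, DPL, etc.
    }

    #[derive(Clone, Copy, Default)]
    struct TableRegister {
        base: u64,
        limit: u16,
    }

    #[derive(Clone, Default)]
    struct EmulatorCpuState {
        // General purpose registers (RAX..R15), plus RIP and RFLAGS.
        gprs: [u64; 16],
        rip: u64,
        rflags: u64,
        // Control registers (CR0..CR4, CR8).
        crs: [u64; 6],
        // Segment registers: CS, DS, ES, FS, GS, SS.
        segments: [SegmentRegister; 6],
        // Descriptor table registers: GDTR and IDTR.
        gdt: TableRegister,
        idt: TableRegister,
    }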
Co-developed-by: Wei Liu <liuwe@microsoft.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
For efficiently emulating x86 instructions, we need to build and pass a
CPU state copy/reference to instruction emulation handlers. Those handlers
will typically modify the CPU state and let the caller commit those
changes back through the PlatformEmulator trait's set_cpu_state() method.
Hypervisors typically have internal CPU state structures that map back
to the corresponding kernel APIs. By implementing the CpuState trait,
instruction emulators will be able to work directly on CPU state
instances that are consumable by the underlying hypervisor and its
kernel APIs.
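A rough sketch of what such a trait might look like; the method set is
an assumption, only the CpuState trait name comes from this change:

    // Sketch only: the method names are assumptions.
    #[derive(Debug)]
    struct CpuStateError(String);

    #[derive(Clone, Copy)]
    enum Register {
        Rax,
        Rbx,
        Rcx,
        // ... remaining GPRs, segment and control registers.
    }

    // Hypervisor-agnostic view of a vCPU that instruction handlers mutate.
    // A hypervisor implements this directly on its own state type (e.g.
    // something wrapping the kernel's register structures), so the modified
    // state can be committed back without any translation layer.
    trait CpuState: Clone {
        fn read_reg(&self, reg: Register) -> Result<u64, CpuStateError>;
        fn write_reg(&mut self, reg: Register, value: u64) -> Result<(), CpuStateError>;
        fn ip(&self) -> u64;
        fn set_ip(&mut self, ip: u64);
    }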
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
In order to emulate instructions, we need a way to access some of the
guest's resources. The PlatformEmulator interface provides guest
memory and CPU state access to emulator implementations.
Typically, a hypervisor will implement PlatformEmulator for
architecture-specific instruction emulators to build their frameworks on
top of.
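As a sketch, the interface could be along these lines; the method set
shown here is an assumption rather than the actual trait definition:

    // Sketch only: assumed method set for illustration.
    #[derive(Debug)]
    struct PlatformError(String);

    trait PlatformEmulator {
        type CpuState: Clone;

        // Guest memory access, e.g. for fetching instruction bytes and for
        // emulating loads and stores.
        fn read_memory(&self, gva: u64, data: &mut [u8]) -> Result<(), PlatformError>;
        fn write_memory(&mut self, gva: u64, data: &[u8]) -> Result<(), PlatformError>;

        // CPU state access: handlers work on a copy, and the caller commits
        // the modified copy back through set_cpu_state().
        fn cpu_state(&self, cpu_id: usize) -> Result<Self::CpuState, PlatformError>;
        fn set_cpu_state(&self, cpu_id: usize, state: Self::CpuState) -> Result<(), PlatformError>;
    }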
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
We will need the GDT API for the hypervisor's x86 instruction
emulator implementation, and it's better if the arch crate depends on
the hypervisor one rather than the other way around.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The configuration is stored separately from the Vm in the VMM. The failure
to store the config was preventing the VM from shutting down correctly,
as Vmm::vm_delete() checks for the presence of the config.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Instead, return the None to the caller to handle. This removes the
source of a potential panic.
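The pattern, in general terms (illustrative code, not the actual change):

    use std::collections::HashMap;

    // Illustrative only: look the entry up and let the caller decide what a
    // missing value means, rather than unwrapping and risking a panic.
    fn lookup(table: &HashMap<u32, String>, id: u32) -> Option<&String> {
        // Equivalent code previously did table.get(&id).unwrap().
        table.get(&id)
    }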
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The live migration support added use of this ioctl but it wasn't
included in the permitted list.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This interface is used by the vCPU thread to delegate responsibility for
handling MMIO/PIO operations and to support approaches other than a VM
exit.
During profiling I found that we were spending 13.75% of the boot CPU
usage acquiring access to the object holding the VmmOps via
ArcSwap::load_full():
    13.75%     6.02%  vcpu0  cloud-hypervisor  [.] arc_swap::ArcSwapAny<T,S>::load_full
            |
            ---arc_swap::ArcSwapAny<T,S>::load_full
               |
                --13.43%--<hypervisor::kvm::KvmVcpu as hypervisor::cpu::Vcpu>::run
                          std::sys_common::backtrace::__rust_begin_short_backtrace
                          core::ops::function::FnOnce::call_once{{vtable-shim}}
                          std::sys::unix::thread::Thread::new::thread_start
However, since the object implementing VmmOps does not need to be mutable
and is only used from the vCPU side, we can change the ownership to a
simple Arc<> that is passed in when calling create_vcpu().
This completely removes the above CPU usage from subsequent profiles.
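Roughly, the ownership now looks like the sketch below; the trait
contents and signatures are assumptions:

    use std::sync::Arc;

    // Sketch only: assumed trait contents and signatures.
    trait VmmOps: Send + Sync {
        fn pio_read(&self, port: u64, data: &mut [u8]);
        fn mmio_write(&self, gpa: u64, data: &[u8]);
    }

    struct Vcpu {
        // Previously an ArcSwap was load_full()'d on every VM exit; now the
        // Arc is handed over once when the vCPU is created and used directly.
        vmm_ops: Arc<dyn VmmOps>,
    }

    fn create_vcpu(vmm_ops: Arc<dyn VmmOps>) -> Vcpu {
        Vcpu { vmm_ops }
    }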
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
As we switched to focal for this test, we no longer get any output during
boot unless serial is used rather than virtio-console.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
There have been a lot of flakes around tests such as
test_virtio_fs_hotplug_dax_on_w_vhost_user_fs_daemon() or
test_virtio_fs_hotplug_dax_on(), which all try to hotplug memory.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add a config parameter to --disk called "_disable_io_uring" (the
underscore prefix indicating it is not for public consumption). Use this
option to disable io_uring if it would otherwise be used.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
With the removal of vhost-user self-spawning support, we should migrate
the tests to use the binaries so that we can remove the functionality
from the cloud-hypervisor binary itself.
See: #1925
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Now the VM is paused/resumed by the migration process itself (a rough
sketch of the sender-side sequence follows the list):
0. The guest configuration is sent to the destination
1. Dirty page log tracking is started by start_memory_dirty_log()
2. All guest memory is sent to the destination
3. Up to 5 attempts are made to send the dirty guest memory to the
destination...
4. ...before the VM is paused
5. One last set of dirty pages is sent to the destination
6. The guest is snapshotted and sent to the destination
7. When the migration is completed the destination unpauses the received
VM.
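A rough sketch of the sender side, with hypothetical helper traits
standing in for the real Vmm/Vm API:

    // Hypothetical stand-ins for illustration; not the actual API.
    #[derive(Debug)]
    struct MigrationError(String);

    trait MigrationSource {
        fn config(&self) -> Vec<u8>;
        fn start_dirty_log(&mut self) -> Result<(), MigrationError>;
        fn full_memory(&self) -> Vec<u8>;
        fn dirty_memory(&mut self) -> Result<Vec<u8>, MigrationError>;
        fn pause(&mut self) -> Result<(), MigrationError>;
        fn snapshot(&mut self) -> Result<Vec<u8>, MigrationError>;
    }

    trait MigrationSink {
        fn send_config(&mut self, config: &[u8]) -> Result<(), MigrationError>;
        fn send_memory(&mut self, memory: &[u8]) -> Result<(), MigrationError>;
        fn send_state(&mut self, state: &[u8]) -> Result<(), MigrationError>;
        fn complete(&mut self) -> Result<(), MigrationError>;
    }

    fn send_migration(
        src: &mut dyn MigrationSource,
        dst: &mut dyn MigrationSink,
    ) -> Result<(), MigrationError> {
        dst.send_config(&src.config())?;        // 0. guest configuration
        src.start_dirty_log()?;                 // 1. start dirty page log tracking
        dst.send_memory(&src.full_memory())?;   // 2. all guest memory
        for _ in 0..5 {
            // 3. up to 5 passes over the dirty pages (the real code can
            //    stop earlier, hence "up to")...
            dst.send_memory(&src.dirty_memory()?)?;
        }
        src.pause()?;                           // 4. ...before the VM is paused
        dst.send_memory(&src.dirty_memory()?)?; // 5. one last set of dirty pages
        dst.send_state(&src.snapshot()?)?;      // 6. guest snapshot
        dst.complete()                          // 7. destination resumes the VM
    }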
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows code running in the VMM to access the VM's MemoryManager
functionality for managing the dirty log, including resetting it and
generating a table of dirty ranges.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Prior to sending the memory, the full state is not needed, only the
configuration. This is sufficient to create the appropriate structures
in the guest and have the memory allocations ready for filling.
Update the protocol documentation to add a separate config step and move
the state to after the memory is transferred. As the VM is created in a
separate step from restoring it, this requires a slightly different
constructor as well as saving the VM object for the subsequent commands.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In order to do this we must extend the MemoryManager API to add the
ability to specify dirty page tracking when creating the userspace
mappings, and also to keep track of the userspace mappings that have
been created for RAM regions.
Currently the dirty pages are collected into ranges based on a block
size of 64 pages. The algorithm could be tweaked to create smaller
ranges, but for now, if any page in the block of 64 is dirty, the whole
block is added to the range.
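A simplified sketch of that range construction (the types and the
merging of adjacent blocks are assumptions about the shape of the code,
not the actual implementation):

    const PAGE_SIZE: u64 = 4096;

    // Assumed representation of a dirty range, for illustration.
    #[derive(Debug)]
    struct MemoryRange {
        gpa: u64,    // guest physical address of the first page in the range
        length: u64, // length in bytes
    }

    // `bitmap` is the dirty log for one RAM region: one bit per page, so one
    // u64 word covers a block of 64 pages. If any page in a block is dirty
    // the whole 64-page block is added, merging contiguous blocks.
    fn dirty_ranges(region_start: u64, bitmap: &[u64]) -> Vec<MemoryRange> {
        let block_len = 64 * PAGE_SIZE;
        let mut ranges: Vec<MemoryRange> = Vec::new();

        for (i, word) in bitmap.iter().enumerate() {
            if *word == 0 {
                continue;
            }
            let gpa = region_start + i as u64 * block_len;
            let extend =
                matches!(ranges.last(), Some(last) if last.gpa + last.length == gpa);
            if extend {
                // Contiguous with the previous block: grow that range.
                ranges.last_mut().unwrap().length += block_len;
            } else {
                ranges.push(MemoryRange { gpa, length: block_len });
            }
        }
        ranges
    }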
Signed-off-by: Rob Bradford <robert.bradford@intel.com>