SGX expects the EPC region to be reported as "reserved" in the e820
table. This patch adds a new entry to the table if SGX is enabled.
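For illustration, a minimal sketch of what such an entry could look
like, using simplified stand-ins rather than the actual arch crate types
(E820_RESERVED here is the standard e820 "reserved" type value):

    /// Simplified stand-in for an e820 map entry.
    #[derive(Clone, Copy)]
    struct E820Entry {
        addr: u64,
        size: u64,
        entry_type: u32,
    }

    /// e820 type value for "reserved" memory.
    const E820_RESERVED: u32 = 2;

    /// Append the SGX EPC region as a reserved range when SGX is enabled.
    fn add_epc_e820_entry(table: &mut Vec<E820Entry>, epc: Option<(u64, u64)>) {
        if let Some((start, size)) = epc {
            table.push(E820Entry {
                addr: start,
                size,
                entry_type: E820_RESERVED,
            });
        }
    }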
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The support for SGX is exposed to the guest through CPUID 0x12. KVM
passes static subleaves 0 and 1 from the host to the guest, without
needing any modification from the VMM itself.
But SGX also relies on dynamic subleaves 2 through N, used for
describing each EPC section. This is not handled by KVM, which means
the VMM is in charge of setting each subleaf starting from index 2
up to index N, depending on the number of EPC sections.
These subleaves 2 through N are not listed as part of the supported
CPUID entries from KVM, but they must still be set as long as subleaves
0 and 1 are present and indicate that SGX is supported.
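A rough sketch of how those subleaves can be derived from the EPC
sections, using a simplified CpuidEntry stand-in and the leaf 0x12
register encoding documented in the Intel SDM:

    /// Simplified stand-in for a CPUID entry (function, index, registers).
    #[derive(Default, Clone, Copy)]
    struct CpuidEntry {
        function: u32,
        index: u32,
        eax: u32,
        ebx: u32,
        ecx: u32,
        edx: u32,
    }

    /// Fill CPUID 0x12 subleaves 2..N, one per EPC section (base, size).
    fn epc_cpuid_subleaves(sections: &[(u64, u64)]) -> Vec<CpuidEntry> {
        sections
            .iter()
            .enumerate()
            .map(|(i, &(base, size))| CpuidEntry {
                function: 0x12,
                index: 2 + i as u32,
                // EAX[3:0] = 1: valid EPC section, EAX[31:12] = base[31:12]
                eax: 0x1 | ((base & 0xffff_f000) as u32),
                // EBX[19:0] = base[51:32]
                ebx: ((base >> 32) & 0xf_ffff) as u32,
                // ECX[3:0] = 1: confidentiality/integrity protected section,
                // ECX[31:12] = size[31:12]
                ecx: 0x1 | ((size & 0xffff_f000) as u32),
                // EDX[19:0] = size[51:32]
                edx: ((size >> 32) & 0xf_ffff) as u32,
            })
            .collect()
    }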
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of passing the GuestMemoryMmap directly to the CpuManager upon
its creation, it's better to pass a reference to the MemoryManager. This
way we will be able to know whether an SGX EPC region, along with one or
multiple sections, is present.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The SGX EPC region must be exposed through the ACPI tables so that the
guest can detect its presence. The guest only gets the full range from
ACPI, as the specific EPC sections are directly described through the
CPUID of each vCPU.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the presence of one or multiple SGX EPC sections from the VM
configuration, the MemoryManager will allocate a contiguous block of
guest address space to hold the entire EPC region. Within this EPC
region, each EPC section is memory mapped.
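A simplified sketch of the placement logic (names and the 4KiB
alignment are illustrative, not the actual MemoryManager code): sections
are laid out back to back within the contiguous region.

    /// Illustrative EPC section placement within a contiguous region.
    struct EpcSection {
        gpa: u64,
        size: u64,
    }

    /// Lay out the requested section sizes back to back from `region_start`,
    /// aligning each section (4KiB here, purely as an example).
    fn layout_epc_region(region_start: u64, sizes: &[u64]) -> (Vec<EpcSection>, u64) {
        const ALIGN: u64 = 0x1000;
        let mut offset = 0;
        let mut sections = Vec::new();
        for &size in sizes {
            let aligned = (size + ALIGN - 1) & !(ALIGN - 1);
            sections.push(EpcSection {
                gpa: region_start + offset,
                size: aligned,
            });
            offset += aligned;
        }
        // Total EPC region size covering every section.
        (sections, offset)
    }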
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Introducing the new CLI option --sgx-epc along with the OpenAPI
structure SgxEpcConfig, so that a user can now enable one or multiple
SGX Enclave Page Cache sections within a contiguous region of the
guest address space.
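As a hedged sketch, assuming a comma separated "size=" syntax and a
simplified structure (the real SgxEpcConfig and parsing code may
differ), the option could be parsed like this:

    /// Illustrative equivalent of the SgxEpcConfig structure.
    #[derive(Debug)]
    struct SgxEpcConfig {
        size: u64,
    }

    /// Parse a "size=<bytes>" style option string into a config entry.
    fn parse_sgx_epc(arg: &str) -> Result<SgxEpcConfig, String> {
        for option in arg.split(',') {
            if let Some(value) = option.strip_prefix("size=") {
                let size = value
                    .parse::<u64>()
                    .map_err(|e| format!("invalid size: {}", e))?;
                return Ok(SgxEpcConfig { size });
            }
        }
        Err("missing mandatory size parameter".to_string())
    }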
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In this commit we save the BDF of a PCI device and set it as "devid"
in the GSI routing entry, because this field is mandatory for GICv3-ITS.
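For reference, the devid is simply the 16-bit BDF of the device, packed
the same way as a PCI requester ID; an illustrative helper:

    /// Build the "devid" value from a PCI bus/device/function triple,
    /// i.e. the 16-bit BDF: bus[15:8] | device[7:3] | function[2:0].
    fn pci_bdf_to_devid(bus: u8, device: u8, function: u8) -> u32 {
        ((bus as u32) << 8) | (((device as u32) & 0x1f) << 3) | ((function as u32) & 0x7)
    }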
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Move the definition of RawFile from the virtio-devices crate into the
qcow crate. All the code that consumes RawFile already depends on the
qcow crate for image file type detection, so this change removes the
need for the qcow crate to depend on the very large virtio-devices
crate.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
It gets bubbled all the way up from the hypervisor crate to the top-level
Cargo.toml.
Cloud Hypervisor can't function without KVM at this point, so make it
a default feature.
Fix all scripts that use --no-default-features.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This commit stores the balloon size in MemoryConfig.
After a reboot, virtio-balloon can use this size to inflate back to
the size it had before the reboot.
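Conceptually (the field name below is only an assumption for
illustration), the value just travels with the memory configuration so
it can be replayed on reboot:

    /// Illustrative slice of the memory configuration carrying the
    /// last known balloon size across a reboot.
    #[derive(Clone, Copy, Default)]
    struct MemoryConfig {
        size: u64,
        balloon_size: u64,
    }

    /// On reboot, hand the saved size back to the balloon device so it
    /// can inflate to where it was before.
    fn balloon_target_after_reboot(config: &MemoryConfig) -> u64 {
        config.balloon_size
    }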
Signed-off-by: Hui Zhu <teawater@antfin.com>
Remove the vmm dependency from vhost_user_block and vhost_user_net,
where it only existed to use config::OptionParser. By moving the
OptionParser to its own crate at the top level, we can remove the very
heavy dependency that these vhost-user backends had.
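For context, the parser deals with comma separated key=value strings; a
heavily simplified stand-in (not the real option_parser API) would look
like:

    use std::collections::HashMap;

    /// Heavily simplified stand-in for an option parser handling
    /// "key1=value1,key2=value2" style strings.
    fn parse_options(input: &str) -> HashMap<String, String> {
        input
            .split(',')
            .filter_map(|pair| {
                let mut it = pair.splitn(2, '=');
                match (it.next(), it.next()) {
                    (Some(k), Some(v)) => Some((k.to_string(), v.to_string())),
                    _ => None,
                }
            })
            .collect()
    }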
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Refactored the construction of KVM IOCTL rules for Seccomp.
Separating the rules by architecture can reduce the risk of bugs and
attacks.
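The shape of the split, sketched with placeholder ioctl lists behind
cfg gates (the real rules are built with the project's seccomp filter
types, not raw numbers):

    /// Illustrative split of allowed KVM ioctl request numbers by
    /// architecture; placeholders only, no real ioctl values.
    fn allowed_kvm_ioctls() -> Vec<u64> {
        // Requests needed on every architecture.
        let mut common: Vec<u64> = vec![/* KVM_RUN, KVM_GET_API_VERSION, ... */];

        // x86_64 only requests.
        #[cfg(target_arch = "x86_64")]
        common.extend_from_slice(&[/* KVM_GET_MSRS, KVM_SET_CPUID2, ... */]);

        // aarch64 only requests.
        #[cfg(target_arch = "aarch64")]
        common.extend_from_slice(&[/* KVM_ARM_VCPU_INIT, ... */]);

        common
    }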
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
In order to move the hypervisor specific parts out of the VM exit
handling path, we're defining a generic, hypervisor agnostic VM exit enum.
This is what the hypervisor's Vcpu run() call should return when the VM
exit cannot be completely handled through the hypervisor specific bits.
For KVM based hypervisors, this means directly forwarding the IO related
exits back to the VMM itself. For other hypervisors that e.g. rely on the
VMM to decode and emulate instructions, this means the decoding itself
would happen in the hypervisor crate exclusively, and the rest of the VM
exit handling would be handled through the VMM device model implementation.
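A hedged sketch of what such a hypervisor agnostic exit type could look
like (variant names are illustrative, not the exact enum):

    /// Illustrative hypervisor agnostic VM exit reasons handed back to
    /// the VMM when the hypervisor crate cannot fully handle the exit.
    enum VmExit<'a> {
        /// Port I/O read into the provided buffer.
        IoIn(u16, &'a mut [u8]),
        /// Port I/O write of the provided data.
        IoOut(u16, &'a [u8]),
        /// MMIO read at a guest physical address.
        MmioRead(u64, &'a mut [u8]),
        /// MMIO write at a guest physical address.
        MmioWrite(u64, &'a [u8]),
        /// The guest requested a reset.
        Reset,
        /// Nothing for the VMM to do; keep running the vCPU.
        Ignore,
    }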
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Fix test_vm unit test by using the new abstraction and dropping some
dead code.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The code is purely for maintaining an internal counter. It is not really
tied to KVM.
No functional change.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The _fd suffix is KVM specific. But since it now points to a hypervisor
agnostic hypervisor::Vm implementation, we should just rename it to vm.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The _fd suffix is KVM specific. But since it now points to a hypervisor
agnostic hypervisor::Vm implementation, we should just rename it to vm.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The fd naming is quite KVM specific. Since we're now using the
hypervisor crate abstractions, we can rename those into something more
readable and meaningful.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The fd naming is quite KVM specific. Since we're now using the
hypervisor crate abstractions, we can rename those into something more
readable and meaningful.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The fd naming is quite KVM specific. Since we're now using the
hypervisor crate abstractions, we can rename those into something more
readable and meaningful, e.g. vcpu or vm.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Split the generic virtio code (queues and device type) from the
VirtioDevice trait, transport and device implementations.
This also simplifies the feature handling in vhost_user_backend as the
vm-virtio crate no longer has any features.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
On x86 architecture, we need to save a list of MSRs as part of the vCPU
state. By providing the full list of MSRs supported by KVM, this patch
fixes the remaining snapshot/restore issues, as the vCPU is restored
with all its previous states.
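Conceptually (types below are simplified stand-ins, not the kvm-ioctls
API), the full supported MSR index list drives what gets captured into
the vCPU state:

    /// Illustrative vCPU MSR state: (index, value) pairs for every MSR
    /// reported as supported by the hypervisor.
    struct MsrState {
        msrs: Vec<(u32, u64)>,
    }

    /// Save every supported MSR so the restore path can rebuild the
    /// exact same vCPU state.
    fn save_msrs(
        supported_indices: &[u32],
        read_msr: impl Fn(u32) -> u64,
    ) -> MsrState {
        MsrState {
            msrs: supported_indices
                .iter()
                .map(|&index| (index, read_msr(index)))
                .collect(),
        }
    }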
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Some vCPU states such as MP_STATE can be modified while retrieving
other states. For this reason, it's important to follow a specific
order that will ensure a state won't be modified after it has been
saved. Comments about ordering requirements have been copied over
from Firecracker commit 57f4c7ca14a31c5536f188cacb669d2cad32b9ca.
This patch also sets the previously saved VCPU_EVENTS, as this was
missing from the restore codepath.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The logic can be shared among hypervisor implementations.
The 'static bound is used so that we don't need to deal with an extra
lifetime parameter everywhere. It should be okay because we know the
entry type E doesn't contain any reference.
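A self-contained sketch of the constraint (names are illustrative):
requiring E: 'static keeps the lifetime parameter out of every type
that stores entries.

    use std::collections::HashMap;
    use std::sync::{Arc, Mutex};

    /// Illustrative routing-entry container shared across threads.
    /// The `'static` bound keeps a lifetime parameter out of every
    /// struct that embeds it; entry types hold no references anyway.
    struct RoutingTable<E: Clone + Send + 'static> {
        entries: Arc<Mutex<HashMap<u32, E>>>,
    }

    impl<E: Clone + Send + 'static> RoutingTable<E> {
        fn new() -> Self {
            RoutingTable {
                entries: Arc::new(Mutex::new(HashMap::new())),
            }
        }
    }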
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This trait contains a function which produces an interrupt routing entry.
Implement that trait for KvmRoutingEntry and rewrite the update
function.
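In spirit (trait and method names are illustrative, not the exact ones),
the trait lets generic code build a hypervisor specific routing entry
from a generic MSI description:

    /// Illustrative MSI message used to build a routing entry.
    struct MsiMessage {
        addr: u64,
        data: u32,
    }

    /// Illustrative trait: each hypervisor's entry type knows how to
    /// build itself from a GSI plus an MSI message.
    trait RoutingEntryExt: Sized {
        fn make_entry(gsi: u32, msg: &MsiMessage) -> Self;
    }

    /// A KVM flavoured entry would implement it with KVM specific fields.
    struct KvmRoutingEntry {
        gsi: u32,
        addr: u64,
        data: u32,
    }

    impl RoutingEntryExt for KvmRoutingEntry {
        fn make_entry(gsi: u32, msg: &MsiMessage) -> Self {
            KvmRoutingEntry {
                gsi,
                addr: msg.addr,
                data: msg.data,
            }
        }
    }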
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The observation is that only the route entry is hypervisor dependent.
Keep a definition of KvmMsiInterruptManager to avoid too much code
churn.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The observation is that only the route field is hypervisor specific.
Provide a new function in blanket implementation. Also redefine
KvmRoutingEntry with RoutingEntry to avoid code churn.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The observation is that the GSI hashmap remains untouched before getting
passed into the MSI interrupt manager. We can create that hashmap
directly in the interrupt manager's new function.
This drops one import from the interrupt module.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This is a hash table mapping strings to hash tables of u64s. In JSON
these hash tables are represented as object types.
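A minimal sketch, assuming serde_json is available and using an
illustrative device id, of how such a nested map serializes to nested
JSON objects:

    use std::collections::HashMap;

    fn main() {
        let mut counters: HashMap<String, HashMap<String, u64>> = HashMap::new();
        counters
            .entry("_net0".to_string())
            .or_default()
            .insert("rx_bytes".to_string(), 1024);

        // Each level of the map becomes a JSON object:
        // {"_net0":{"rx_bytes":1024}}
        println!("{}", serde_json::to_string(&counters).unwrap());
    }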
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Because we don't want the guest to miss any event triggered by the
emulation of devices, it is important to resume all vCPUs before we can
resume the DeviceManager with all its associated devices.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We need consistency between pause/resume and snapshot/restore
operations. The symmetrical behavior of pausing/snapshotting
and restoring/resuming has been introduced recently, and we must
now ensure that no matter if we're using pause/resume or
snapshot/restore features, the resulting VM should be running in
the exact same way.
That's why the vCPU state is now stored upon VM pausing, with the
snapshot operation being a simple serialization of the previously saved
state. In the same way, the vCPU state is now restored upon VM resuming,
with the restore operation being a simple deserialization of that same
saved state.
It's interesting to note that this patch ensures time consistency from a
guest perspective, no matter which clocksource is being used. From a
previous patch, the KVM clock was saved/restored upon VM pause/resume.
We now have the same behavior for the TSC, as the TSC of each vCPU is
saved/restored upon VM pause/resume too.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of calling the resume() function from the CpuManager, which
involves more than what is needed from the shutdown codepath, and
potentially ends up with a deadlock, we replace it with a subset.
The full resume operation is reserved for a VM that has been paused.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>