We store the device passthrough handler, so we should use it through our
internal API and only carry the passed-through device configuration.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Shrink the GICDevice trait to contain only hypervisor-agnostic APIs, which
are used when generating the FDT.
Move all KVM-specific logic into the KvmGICDevice trait.
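A minimal sketch of the intended split, assuming illustrative method names
rather than the actual trait definitions:

    // Hypervisor-agnostic view of the GIC: just enough to generate the FDT.
    pub trait GICDevice {
        /// Register ranges (distributor, redistributors, ...) exposed in the FDT.
        fn device_properties(&self) -> &[u64];
        /// Compatibility string used for the GIC FDT node.
        fn fdt_compatibility(&self) -> &str;
    }

    // All KVM-specific construction and setup hides behind a separate trait.
    pub trait KvmGICDevice: GICDevice {
        /// Create and initialize the in-kernel GIC through the KVM device API.
        fn init_device(&self) -> std::io::Result<()>;
    }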
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Make set_gsi_routing take a list of IrqRoutingEntry. The construction of
the hypervisor-specific structure is left to set_gsi_routing.
Now set_gsi_routes, which is part of the interrupt module, is only
responsible for constructing the list of routing entries.
This further splits hypervisor-specific code from hypervisor-agnostic
code.
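A rough sketch of the resulting shape (the types and signatures here are
assumptions for illustration):

    // Hypervisor-agnostic routing entry handed over by the interrupt module.
    pub struct IrqRoutingEntry {
        pub gsi: u32,
        pub msi_addr: u64,
        pub msi_data: u32,
    }

    pub trait Vm {
        // The hypervisor implementation converts the generic entries into its
        // own table format (e.g. kvm_irq_routing for KVM) internally.
        fn set_gsi_routing(&self, entries: &[IrqRoutingEntry]) -> std::io::Result<()>;
    }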
Signed-off-by: Wei Liu <liuwe@microsoft.com>
That function is going to return a handle for passthrough-related
operations.
Move the create_kvm_device code there.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
SGX expects the EPC region to be reported as "reserved" in the e820
table. This patch adds a new entry to the table if SGX is enabled.
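A minimal sketch of the idea, with a hypothetical helper and entry type
standing in for the actual boot-params code:

    // Mark the EPC region as reserved in the e820 map so the guest kernel
    // keeps it out of normal RAM. The types and constant below are illustrative.
    const E820_RESERVED: u32 = 2;

    struct E820Entry {
        addr: u64,
        size: u64,
        kind: u32,
    }

    fn add_epc_e820_entry(table: &mut Vec<E820Entry>, epc_start: u64, epc_size: u64) {
        table.push(E820Entry {
            addr: epc_start,
            size: epc_size,
            kind: E820_RESERVED,
        });
    }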
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Support for SGX is exposed to the guest through CPUID leaf 0x12. KVM
passes static subleaves 0 and 1 from the host to the guest, without
needing any modification from the VMM itself.
But SGX also relies on dynamic subleaves 2 through N, which describe
each EPC section. KVM does not handle these, which means the VMM is in
charge of setting each subleaf from index 2 up to index N, depending on
the number of EPC sections.
Subleaves 2 through N are not listed as part of the CPUID entries
supported by KVM, but it is important to set them as long as indexes 0
and 1 are present and indicate that SGX is supported.
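A hedged sketch of what filling those sub-leaves looks like; the CpuidEntry
and EpcSection types are illustrative, and the bit layout follows the SDM
description of the leaf 0x12 EPC enumeration sub-leaves (worth
double-checking against the spec):

    struct CpuidEntry {
        function: u32,
        index: u32,
        eax: u32,
        ebx: u32,
        ecx: u32,
        edx: u32,
    }

    struct EpcSection {
        start: u64, // guest physical address, 4K aligned
        size: u64,  // bytes, 4K aligned
    }

    fn epc_subleaves(sections: &[EpcSection]) -> Vec<CpuidEntry> {
        sections
            .iter()
            .enumerate()
            .map(|(i, s)| CpuidEntry {
                function: 0x12,
                index: (i + 2) as u32,                     // sub-leaves start at 2
                eax: 0x1 | (s.start & 0xffff_f000) as u32, // valid section + base[31:12]
                ebx: ((s.start >> 32) & 0xf_ffff) as u32,  // base[51:32]
                ecx: 0x1 | (s.size & 0xffff_f000) as u32,  // section properties + size[31:12]
                edx: ((s.size >> 32) & 0xf_ffff) as u32,   // size[51:32]
            })
            .collect()
    }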
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of passing the GuestMemoryMmap directly to the CpuManager upon
its creation, it's better to pass a reference to the MemoryManager. This
way we can know whether an SGX EPC region, along with one or multiple
sections, is present.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The SGX EPC region must be exposed through the ACPI tables so that the
guest can detect its presence. The guest only gets the full range from
ACPI, as the specific EPC sections are directly described through the
CPUID of each vCPU.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the presence of one or multiple SGX EPC sections in the VM
configuration, the MemoryManager will allocate a contiguous block of
guest address space to hold the entire EPC region. Within this EPC
region, each EPC section is memory mapped.
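An illustrative sketch of the layout logic: sections are placed back to
back inside one contiguous region of guest address space (names are
assumptions, not the MemoryManager's actual fields):

    struct EpcSection {
        gpa: u64,
        size: u64,
    }

    // Returns the total region size and the per-section placement. Each
    // returned section is then backed by its own memory mapping, while the
    // whole (start, total) range is what gets advertised to the guest.
    fn layout_epc_region(region_start: u64, section_sizes: &[u64]) -> (u64, Vec<EpcSection>) {
        let mut offset = 0u64;
        let sections = section_sizes
            .iter()
            .map(|&size| {
                let section = EpcSection { gpa: region_start + offset, size };
                offset += size;
                section
            })
            .collect();
        (offset, sections)
    }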
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Introduce the new CLI option --sgx-epc, along with the OpenAPI
structure SgxEpcConfig, so that a user can now enable one or multiple
SGX Enclave Page Cache (EPC) sections within a contiguous region of the
guest address space.
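The rough shape of the new structure, with the field set being an
assumption based on this description rather than the exact OpenAPI
definition:

    #[derive(Clone, Debug)]
    pub struct SgxEpcConfig {
        /// Size of this EPC section, in bytes.
        pub size: u64,
    }

    /// Zero or more sections, laid out contiguously in the guest address space.
    pub type SgxEpcSections = Vec<SgxEpcConfig>;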
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In this commit we save the BDF of a PCI device and set it as the "devid"
field in the GSI routing entry, because this field is mandatory for
GICv3-ITS.
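A sketch of the idea, using an illustrative MSI route type (the real entry
is the hypervisor's own routing structure):

    pub struct MsiRoute {
        pub gsi: u32,
        pub address_lo: u32,
        pub address_hi: u32,
        pub data: u32,
        /// Mandatory for GICv3-ITS: identifies the requesting PCI device (BDF).
        pub devid: u32,
    }

    pub fn msi_route_for(gsi: u32, bdf: u32, msi_addr: u64, msi_data: u32) -> MsiRoute {
        MsiRoute {
            gsi,
            address_lo: msi_addr as u32,
            address_hi: (msi_addr >> 32) as u32,
            data: msi_data,
            devid: bdf,
        }
    }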
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Move the definition of RawFile from the virtio-devices crate into the
qcow crate. All the code that consumes RawFile also already depends on
the qcow crate for image file type detection, so this change removes the
need for the qcow crate to depend on the very large virtio-devices
crate.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This commit stores the balloon size in MemoryConfig.
After a reboot, virtio-balloon can use this size to inflate back to
the size it had before the reboot.
Signed-off-by: Hui Zhu <teawater@antfin.com>
Remove the vmm dependency from vhost_user_block and vhost_user_net,
where it existed only to use config::OptionParser. By moving
OptionParser to its own crate at the top level, we can remove the very
heavy dependency that these vhost-user backends had.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Refactor the construction of KVM ioctl rules for seccomp.
Separating the rules by architecture reduces the risk of bugs and
attacks.
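The arrangement, sketched with illustrative function names: each
architecture only allows the KVM ioctls it actually issues.

    #[cfg(target_arch = "x86_64")]
    fn kvm_ioctl_allowlist() -> Vec<u64> {
        vec![/* x86_64-only ioctls, e.g. KVM_GET_MSRS / KVM_SET_MSRS */]
    }

    #[cfg(target_arch = "aarch64")]
    fn kvm_ioctl_allowlist() -> Vec<u64> {
        vec![/* aarch64-only ioctls, e.g. KVM_ARM_VCPU_INIT */]
    }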
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
In order to carve the hypervisor-specific parts out of the VM exit
handling path, we're defining a generic, hypervisor-agnostic VM exit enum.
This is what the hypervisor's Vcpu run() call should return when the VM
exit cannot be completely handled through the hypervisor-specific bits.
For KVM-based hypervisors, this means directly forwarding the IO-related
exits back to the VMM itself. For other hypervisors that e.g. rely on the
VMM to decode and emulate instructions, this means the decoding itself
would happen exclusively in the hypervisor crate, and the rest of the VM
exit handling would go through the VMM device model implementation.
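A hedged sketch of what such an enum can look like; variant names here are
illustrative rather than the crate's actual definition:

    pub enum VmExit<'a> {
        /// Port IO the VMM device model must handle.
        IoOut(u16, &'a [u8]),
        IoIn(u16, &'a mut [u8]),
        /// MMIO the VMM device model must handle.
        MmioRead(u64, &'a mut [u8]),
        MmioWrite(u64, &'a [u8]),
        /// Fully handled inside the hypervisor crate, nothing left to do.
        Ignore,
        /// The guest requested a reset.
        Reset,
    }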
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Fix test_vm unit test by using the new abstraction and dropping some
dead code.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The code is purely for maintaining an internal counter. It is not really
tied to KVM.
No functional change.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The _fd suffix is KVM-specific. But since it now points to a hypervisor-
agnostic hypervisor::Vm implementation, we should just rename it to vm.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The _fd suffix is KVM-specific. But since it now points to a hypervisor-
agnostic hypervisor::Vm implementation, we should just rename it to vm.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The fd naming is quite KVM-specific. Since we're now using the
hypervisor crate abstractions, we can rename those to something more
readable and meaningful.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The fd naming is quite KVM-specific. Since we're now using the
hypervisor crate abstractions, we can rename those to something more
readable and meaningful.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The fd naming is quite KVM-specific. Since we're now using the
hypervisor crate abstractions, we can rename those to something more
readable and meaningful, like vcpu or vm.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Split the generic virtio code (queues and device type) from the
VirtioDevice trait, transport and device implementations.
This also simplifies the feature handling in vhost_user_backend, as the
vm-virtio crate no longer has any features.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
On the x86 architecture, we need to save a list of MSRs as part of the
vCPU state. By providing the full list of MSRs supported by KVM, this
patch fixes the remaining snapshot/restore issues, as the vCPU is
restored with all of its previous state.
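Conceptually (the helper names below are assumptions), the snapshot path
now iterates over every MSR the hypervisor reports as supported instead of
a hand-maintained subset:

    struct MsrEntry {
        index: u32,
        data: u64,
    }

    fn save_msrs(supported: &[u32], read_msr: impl Fn(u32) -> u64) -> Vec<MsrEntry> {
        supported
            .iter()
            .map(|&index| MsrEntry { index, data: read_msr(index) })
            .collect()
    }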
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Some vCPU states such as MP_STATE can be modified while retrieving
other states. For this reason, it's important to follow a specific
order that ensures a state won't be modified after it has been
saved. Comments about the ordering requirements have been copied over
from Firecracker commit 57f4c7ca14a31c5536f188cacb669d2cad32b9ca.
This patch also sets the previously saved VCPU_EVENTS, as this was
missing from the restore codepath.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The logic can be shared among hypervisor implementations.
The 'static bound is used so that we don't need to deal with an extra
lifetime parameter everywhere. This should be fine because we know the
entry type E doesn't contain any references.
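A minimal sketch of why the 'static bound keeps the signatures simple (the
type name is illustrative):

    use std::collections::HashMap;
    use std::sync::Mutex;

    // E is plain data (no references), so requiring 'static costs nothing and
    // spares every embedding type from carrying an extra lifetime parameter.
    pub struct MsiInterruptGroup<E: Send + Sync + 'static> {
        routes: Mutex<HashMap<u32, E>>,
    }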
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This trait contains a function which produces an interrupt routing entry.
Implement that trait for KvmRoutingEntry and rewrite the update
function.
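Roughly (names and signature are assumptions): each hypervisor knows how to
turn a generic MSI description into its own routing entry type.

    pub struct MsiMessage {
        pub addr_lo: u32,
        pub addr_hi: u32,
        pub data: u32,
    }

    pub trait RoutingEntryExt: Sized {
        /// Build a hypervisor-specific routing entry for the given GSI.
        fn make_entry(gsi: u32, msg: &MsiMessage) -> Self;
    }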
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The observation is that only the route entry is hypervisor-dependent.
Keep a definition of KvmMsiInterruptManager to avoid too much code
churn.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The observation is that only the route field is hypervisor-specific.
Provide a new function in the blanket implementation. Also redefine
KvmRoutingEntry in terms of RoutingEntry to avoid code churn.
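A sketch of the split (illustrative, not the exact code): only the route
payload depends on the hypervisor, so the old KvmRoutingEntry can be
expressed in terms of the generic wrapper.

    pub struct RoutingEntry<E> {
        route: E,     // hypervisor-specific payload, e.g. a KVM irq routing entry
        masked: bool, // generic bookkeeping shared by every hypervisor
    }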
Signed-off-by: Wei Liu <liuwe@microsoft.com>