Load the sections backed by the file into their required addresses in
memory and populate the HOB with details of the memory. Using the HOB
address, initialize the TDX state in the vCPUs and finalize the TDX
configuration.
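
A minimal sketch of that flow, with every name assumed (not the actual
loader or hypervisor API):

    struct TdvfSection {
        address: u64, // guest physical address the section must land at
        size: u64,
    }
    struct Vcpu;

    #[derive(Debug)]
    struct TdxError;

    impl Vcpu {
        fn tdx_init(&self, _hob_address: u64) -> Result<(), TdxError> {
            Ok(())
        }
    }

    fn load_section(_s: &TdvfSection) -> Result<(), TdxError> { Ok(()) }
    fn populate_hob(_hob: u64, _s: &[TdvfSection]) -> Result<(), TdxError> { Ok(()) }
    fn tdx_finalize() -> Result<(), TdxError> { Ok(()) }

    fn boot_tdx(
        sections: &[TdvfSection],
        vcpus: &[Vcpu],
        hob_address: u64,
    ) -> Result<(), TdxError> {
        // 1. Copy the file-backed sections to their required addresses.
        for s in sections {
            load_section(s)?;
        }
        // 2. Describe the guest memory in the HOB.
        populate_hob(hob_address, sections)?;
        // 3. Point the TDX state of each vCPU at the HOB.
        for vcpu in vcpus {
            vcpu.tdx_init(hob_address)?;
        }
        // 4. Seal the TDX configuration.
        tdx_finalize()
    }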
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add an API to the hypervisor interface and implement it for KVM to allow
issuing the special TDX KVM ioctls on the VM and vCPU FDs.
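
A rough sketch of the shape such an API could take; the method names and
signatures here are assumptions, not the actual hypervisor crate
interface:

    #[derive(Debug)]
    pub struct TdxError;

    /// VM-level entry points, backed by TDX ioctls on the KVM VM FD.
    pub trait VmTdx {
        fn tdx_init(&self, max_vcpus: u32) -> Result<(), TdxError>;
        fn tdx_finalize(&self) -> Result<(), TdxError>;
    }

    /// vCPU-level entry points, backed by TDX ioctls on the KVM vCPU FD.
    pub trait VcpuTdx {
        fn tdx_init(&self, hob_address: u64) -> Result<(), TdxError>;
    }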
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When booting with TDX, no kernel is supplied as the TDVF is responsible
for loading the OS. The requirement to have a kernel is still currently
enforced at the validation entry point; this change merely switches
function prototypes and stored state to Option<> to support its absence.
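
Illustrative only (names assumed): the stored kernel path simply becomes
optional.

    use std::path::PathBuf;

    struct KernelConfig {
        // None when the TDVF is responsible for loading the OS.
        path: Option<PathBuf>,
    }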
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add support for extracting the sections from a TDVF file, which can then
be used to load the TDVF and TD HOB data into their appropriate
locations.
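
A sketch of what an extracted section descriptor could look like; the
field and variant names are assumptions:

    #[derive(Debug)]
    enum TdvfSectionType {
        Bfv,     // boot firmware volume
        Cfv,     // configuration firmware volume
        TdHob,   // where the TD HOB data goes
        TempMem, // scratch memory
    }

    #[derive(Debug)]
    struct TdvfSection {
        data_offset: u32,      // offset of the section data within the file
        raw_data_size: u32,    // size of the file-backed data
        memory_address: u64,   // guest physical address to load/reserve at
        memory_data_size: u64, // size of the region in guest memory
        section_type: TdvfSectionType,
    }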
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add the skeleton of the "tdx" feature with a module ready inside the
arch crate to store implementation details.
TEST=cargo build --features="tdx"
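
The wiring this skeleton implies looks roughly like the following (the
exact module path inside the arch crate is an assumption); with a
`tdx = []` entry under [features] in the crate's Cargo.toml, the module
only builds when the feature is enabled:

    #[cfg(feature = "tdx")]
    pub mod tdx; // implementation details live here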
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
These need to be updated together as kvm-ioctls depends upon a strictly
newer version of kvm-bindings, which requires a rebase of the CH fork.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When a VM is created with a pty device, on reboot the pty fd (sub only)
remains associated with the VMM only through the epoll event loop. The
fd being polled will have been closed because the VM itself dropped the
pty files (with the fd number potentially being reused for a different
item, making things quite confusing), and new pty fds are opened but
never polled for input.
This change creates a structure to encapsulate the information about the
pty fd (main File, sub File and the path to the sub File). On reboot, a
copy of the console and serial pty structs is passed down to the new Vm
instance, which uses them instead of creating new pty devices.
This resolves the underlying issue from #2316.
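
For illustration, a minimal sketch of such a structure, with the field
names assumed:

    use std::fs::File;
    use std::path::PathBuf;

    struct PtyPair {
        main: File,    // end polled by the VMM epoll loop
        sub: File,     // guest-facing end
        path: PathBuf, // filesystem path to the sub end
    }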
Signed-off-by: William Douglas <william.r.douglas@gmail.com>
Now that virtio-mem devices can update VFIO mappings through dedicated
handlers, let's provide them from the DeviceManager.
It is important to note that these handlers should be provided either to
the virtio-mem devices or to the unique virtio-iommu device; the two
options are mutually exclusive.
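
A minimal sketch of that mutually exclusive wiring, using hypothetical
stand-in types:

    use std::sync::Arc;

    struct VirtioIommu { handlers: Vec<Arc<dyn Fn(u64, u64) + Send + Sync>> }
    struct VirtioMem { handlers: Vec<Arc<dyn Fn(u64, u64) + Send + Sync>> }

    // Give the handler to the virtio-iommu device when one exists,
    // otherwise to every virtio-mem device -- never to both.
    fn provide_dma_handler(
        iommu: Option<&mut VirtioIommu>,
        mem_devices: &mut [VirtioMem],
        handler: Arc<dyn Fn(u64, u64) + Send + Sync>,
    ) {
        if let Some(iommu) = iommu {
            iommu.handlers.push(handler);
        } else {
            for m in mem_devices.iter_mut() {
                m.handlers.push(handler.clone());
            }
        }
    }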
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Create two functions for registering/unregistering DMA mapping handlers,
each handler being associated with a VFIO device.
Whenever the plugged_size is modified (meaning a change was triggered by
the virtio-mem driver in the guest), the virtio-mem backend is
responsible for updating the DMA mappings related to every VFIO device
through the handlers previously provided.
It's also important to update the mappings when a handler is registered
or unregistered, as we don't want to miss plugged memory that was added
before the VFIO device was added to the VM.
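
A rough sketch of what those two functions could look like; every name
here is assumed, not the actual implementation:

    use std::collections::BTreeMap;
    use std::sync::Arc;

    // Each handler knows how to update the DMA mappings of one VFIO device.
    trait ExternalDmaMapping: Send + Sync {
        fn map(&self, gpa: u64, size: u64) -> std::io::Result<()>;
        fn unmap(&self, gpa: u64, size: u64) -> std::io::Result<()>;
    }

    struct MemEpollHandler {
        handlers: BTreeMap<u32, Arc<dyn ExternalDmaMapping>>, // by device id
        plugged_ranges: Vec<(u64, u64)>,                      // (gpa, size)
    }

    impl MemEpollHandler {
        // Map the already-plugged ranges right away so memory added before
        // the VFIO device joined the VM is not missed.
        fn add_dma_mapping_handler(
            &mut self,
            id: u32,
            handler: Arc<dyn ExternalDmaMapping>,
        ) -> std::io::Result<()> {
            for &(gpa, size) in &self.plugged_ranges {
                handler.map(gpa, size)?;
            }
            self.handlers.insert(id, handler);
            Ok(())
        }

        // Symmetrically, unmap the plugged ranges when the handler goes away.
        fn remove_dma_mapping_handler(&mut self, id: u32) -> std::io::Result<()> {
            if let Some(handler) = self.handlers.remove(&id) {
                for &(gpa, size) in &self.plugged_ranges {
                    handler.unmap(gpa, size)?;
                }
            }
            Ok(())
        }
    }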
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of letting the VfioPciDevice decide how and when to perform the
DMA mapping/unmapping, we move this decision to the DeviceManager.
The point is to let the DeviceManager choose which guest memory regions
should be mapped or not. In particular, we don't want the virtio-mem
region to be mapped/unmapped here, as that is the virtio-mem device's
responsibility.
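
An illustrative sketch of the filtering idea, with assumed names:

    struct Range { start: u64, size: u64 }

    // Skip the virtio-mem region: the virtio-mem device maps/unmaps that
    // range itself as memory gets plugged or unplugged.
    fn should_dma_map(region: &Range, virtio_mem_region: Option<&Range>) -> bool {
        match virtio_mem_region {
            Some(r) => region.start != r.start,
            None => true,
        }
    }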
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When memory is resized through ACPI, a new region is added to the guest
memory. This region must also be added to the corresponding memory zone
in order to keep everything in sync.
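
Illustrative only, with hypothetical types standing in for the real
ones:

    use std::collections::HashMap;

    struct Region { gpa: u64, size: u64 }
    struct MemoryZone { regions: Vec<Region> }

    // When an ACPI resize adds a region to the guest memory, append it to
    // the owning zone as well so the two views stay consistent.
    fn add_region(zones: &mut HashMap<String, MemoryZone>, zone_id: &str, region: Region) {
        if let Some(zone) = zones.get_mut(zone_id) {
            zone.regions.push(region);
        }
    }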
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Replace "--monitor-fd" with "--event-monitor", which takes either
"fd=<int>" or "path=<path>"; the latter can point to e.g. a named pipe
and allows more flexibility.
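
For example (the path and fd values are illustrative):

    --event-monitor path=/tmp/ch-events
    --event-monitor fd=3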
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Since the SGX server is down for maintenance, all builds are waiting on
the node agent to answer, causing all PRs to be blocked.
Let's temporarily disable the SGX CI until the server is back up.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
So far we've only needed to emulate one instruction. There is no need
to use a HashMap when a simple tuple for the initial mapping will do.
We can bring back the HashMap once more sophisticated use cases surface.
No functional change.
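
An illustrative sketch of the simplification (not the actual emulator
code):

    type Handler = fn(&[u8]);

    fn handle_mov(_bytes: &[u8]) {}

    // Before: a HashMap<&str, Handler> holding just this one entry.
    // After: a plain tuple carries the initial mapping.
    const INITIAL_MAPPING: (&str, Handler) = ("mov", handle_mov);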
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This avoids code complexity down the line when we get around to
implementing Windows support.
No functional change.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
Moving to the latest version of the rust-vmm/vhost crate, before it gets
published on crates.io.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In particular, update for the vmm-sys-util upgrade and all the other
dependent packages. This requires an updated forked version of
kvm-bindings (due to the updated vfio-ioctls) but allows the removal of
our forked version of kvm-ioctls.
The changes to the API from kvm-ioctls and vmm-sys-util required some
other minor changes to the code.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The vhost crate from rust-vmm is now ready, which is why we switch from
the Cloud Hypervisor fork to the upstream crate.
At the same time, we rename the crate from vhost_rs to vhost.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>