As MSHV also implements set/get_clock data, this patch removes the
KVM feature guard and makes the code x86_64 only, for both KVM and
MSHV.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
The contents of this crate may change and cause conflicts; re-exporting
the contents is unnecessary.
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
The members of {Io, Mmio}{Read, Write} are unused, as exits of those
types are instead handled through the VmOps interface. Removing them is
also a prerequisite for upcoming changes to the mutability of the
VcpuFd::run() method.
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
When using multiple PCI segments, the 32-bit and 64-bit MMIO
apertures are split equally between the segments. Add an option
to configure a per-segment 'weight'. For example, a PCI segment with a
`mmio32_aperture_weight` of 2 will be allocated twice as much
32-bit MMIO space as a normal PCI segment.
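For illustration only (not the actual allocator code), the weighted
split amounts to the following; the function and values are made up:

    // Sketch of the weighted aperture split; names are illustrative.
    fn aperture_share(total: u64, weights: &[u64], segment: usize) -> u64 {
        let per_weight = total / weights.iter().sum::<u64>();
        per_weight * weights[segment]
    }

    fn main() {
        // Two segments, the second with mmio32_aperture_weight=2: it gets
        // twice as much of a 3 MiB aperture as the default-weight segment.
        assert_eq!(aperture_share(3 << 20, &[1, 2], 0), 1 << 20);
        assert_eq!(aperture_share(3 << 20, &[1, 2], 1), 2 << 20);
    }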
Signed-off-by: Thomas Barrett <tbarrett@crusoeenergy.com>
Assigning KVM_IRQFD (when unmasking a guest IRQ) after
KVM_SET_GSI_ROUTING avoids a kernel panic on AMD systems running a
kernel that is not patched with commit a80ced6ea514 (KVM: SVM: fix
panic on out-of-bounds guest IRQ).
Meanwhile, it is required to deassign KVM_IRQFD (when masking a guest
IRQ) before KVM_SET_GSI_ROUTING (see #3827).
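A minimal sketch of the resulting ordering; the stub functions below
merely stand in for the real KVM_IRQFD and KVM_SET_GSI_ROUTING ioctls:

    fn set_gsi_routing(gsi: u32) { println!("KVM_SET_GSI_ROUTING (gsi {gsi})"); }
    fn assign_irqfd(gsi: u32) { println!("KVM_IRQFD assign (gsi {gsi})"); }
    fn deassign_irqfd(gsi: u32) { println!("KVM_IRQFD deassign (gsi {gsi})"); }

    // Unmask: update the routing table first, then assign the irqfd.
    fn unmask(gsi: u32) { set_gsi_routing(gsi); assign_irqfd(gsi); }

    // Mask: deassign the irqfd first, then update the routing table (#3827).
    fn mask(gsi: u32) { deassign_irqfd(gsi); set_gsi_routing(gsi); }

    fn main() { unmask(4); mask(4); }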
Fixes: #6353
Signed-off-by: Yi Wang <foxywang@tencent.com>
Signed-off-by: Bo Chen <chen.bo@intel.com>
Add infrastructure to look up the host address for MMIO regions on
external DMA mapping requests. This specifically resolves VFIO
passthrough for virtio-iommu, allowing nested virtualization to pass
external devices through.
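A hypothetical shape of that lookup (the MmioRegion fields and the
host_addr() helper are illustrative, not the actual code): fall back to
the passthrough device's MMIO regions when the address is not backed by
guest RAM.

    struct MmioRegion { start: u64, length: u64, host_addr: u64 }

    fn host_addr(gpa: u64, ram: impl Fn(u64) -> Option<u64>,
                 mmio: &[MmioRegion]) -> Option<u64> {
        ram(gpa).or_else(|| {
            mmio.iter()
                .find(|r| gpa >= r.start && gpa < r.start + r.length)
                .map(|r| r.host_addr + (gpa - r.start))
        })
    }

    fn main() {
        let regions = [MmioRegion {
            start: 0xe000_0000, length: 0x1000, host_addr: 0x7f00_0000_0000,
        }];
        assert_eq!(host_addr(0xe000_0010, |_| None, &regions),
                   Some(0x7f00_0000_0010));
    }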
Fixes: #6110
Signed-off-by: Andrew Carp <acarp@crusoeenergy.com>
VfioUserDmaMapping is already in the pci crate; this moves
VfioDmaMapping there to match. This is a necessary change to allow the
VfioDmaMapping trait to have access to MmioRegion memory without
creating a circular dependency: it needs access to MMIO regions to map
external devices over MMIO (a follow-up commit).
Signed-off-by: Andrew Carp <acarp@crusoeenergy.com>
The memory region that is associated with the hotpluggable part of
a virtio-mem zone isn't backed by the file specified in the
MemoryZoneConfig. The file is used only for the fixed part of the
zone. When restoring a snapshot with virtio-mem, however, the backing
file is used for all of its regions. This results in the following
error:
VmRestore(MemoryManager(GuestMemoryRegion(MappingPastEof)))
This patch sets backing_file only for the fixed part of a virtio-mem
zone.
Fixes: #6337
Signed-off-by: Alexandru Matei <alexandru.matei@uipath.com>
Add IOCTL number for generic hypercall ioctl (MSHV_ROOT_HVCALL).
Update IOCTL numbers for set/get vp state.
Signed-off-by: Nuno Das Neves <nudasnev@microsoft.com>
The 'NetConfig' may contain FDs which can't be serialized correctly, as
FDs can only be donated from another process via a Unix domain socket
with `SCM_RIGHTS`. To avoid accidental use of the serialized FD values,
this patch explicitly sets the 'NetConfig' FDs as invalid for
(de)serialization.
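A minimal sketch of the idea, assuming serde with the derive feature;
the NetConfigSketch struct and serialize_fds helper are illustrative
names, not the actual NetConfig definition:

    use serde::{Deserialize, Serialize, Serializer};

    // Serialize any captured FDs as -1 so a deserialized config can never
    // mistake a stale number for a live descriptor.
    fn serialize_fds<S: Serializer>(
        fds: &Option<Vec<i32>>,
        s: S,
    ) -> Result<S::Ok, S::Error> {
        fds.as_ref().map(|v| vec![-1; v.len()]).serialize(s)
    }

    #[derive(Serialize, Deserialize)]
    struct NetConfigSketch {
        tap: Option<String>,
        #[serde(serialize_with = "serialize_fds", default)]
        fds: Option<Vec<i32>>,
    }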
See: #6286
Signed-off-by: Bo Chen <chen.bo@intel.com>
warning: assigning the result of `Clone::clone()` may be inefficient
--> vmm/src/device_manager.rs:4188:17
|
4188 | id = child_id.clone();
| ^^^^^^^^^^^^^^^^^^^^^ help: use `clone_from()`: `id.clone_from(child_id)`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#assigning_clones
= note: `#[warn(clippy::assigning_clones)]` on by default
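The suggested clone_from() reuses the existing allocation of `id`
instead of building a new String and dropping the old one, e.g. (with
made-up values):

    fn main() {
        let mut id = String::from("_pci_segment0");
        let child_id = String::from("_disk0");
        id.clone_from(&child_id); // instead of: id = child_id.clone();
        assert_eq!(id, "_disk0");
    }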
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
A deadlock can happen in the destination VM of a live upgrade or
migration due to waiting on paused device worker threads. For example,
when a serialization error happens after the `DeviceManager` struct is
restored (where all virtio device worker threads are spawned but in a
paused/parked state), a deadlock will happen in `DeviceManager::drop()`,
as it blocks waiting for the worker threads to join.
This patch ensures that we wake up all device (mostly virtio) worker
threads before blocking on their join.
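A minimal sketch of the deadlock shape and the fix using plain std: a
parked worker never observes the exit request unless it is unparked
(woken) before join() is called.

    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Arc;
    use std::thread;

    fn main() {
        let exit = Arc::new(AtomicBool::new(false));
        let worker_exit = exit.clone();
        let worker = thread::spawn(move || {
            while !worker_exit.load(Ordering::Acquire) {
                thread::park(); // paused/parked, as right after a restore
            }
        });

        exit.store(true, Ordering::Release);
        worker.thread().unpark(); // without this, join() can block forever
        worker.join().unwrap();
    }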
Signed-off-by: Bo Chen <chen.bo@intel.com>
Currently we only support injecting an NMI for KVM guests, but we can
do the same for MSHV guests as well to have feature parity.
Signed-off-by: Jinank Jain <jinankjain@microsoft.com>
This commit ensures that the HttpApi thread flushes all the responses
before the application shuts down. Without this step, in the case of a
VmmShutdown request the application might terminate before the thread
sends a response.
Fixes: #6247
Signed-off-by: Alexandru Matei <alexandru.matei@uipath.com>
On platforms where PCIe P2P is supported, inject a PCI capability into
the NVIDIA GPU to indicate support.
Signed-off-by: Thomas Barrett <tbarrett@crusoeenergy.com>
Host data is passed to the hypervisor, and the firmware then includes
that data in the attestation report. The data might include any key or
secret that the SEV-SNP guest might need later.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
The host data is provided at launch and passed to the hypervisor
during the completion of the isolated import. The hypervisor provides
this host data during guest launch, and the firmware includes the
value in all attestation reports for the guest.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
With the nightly toolchain (2024-02-18), cargo check flags up
redundant imports, e.g. those that are already pulled in by the prelude
or by an earlier import.
Remove those redundant imports.
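For example, an import like the following is now flagged because
TryFrom has been part of the prelude since the 2021 edition:

    use std::convert::TryFrom; // flagged as redundant on that nightly

    fn main() {
        let x = u8::try_from(42u32).unwrap();
        println!("{x}");
    }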
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
When a guest running on a terminal reboots, the sigwinch_listener
subprocess exits and a new one is started. The parent never wait()s
for its children, so the old subprocess remains as a zombie. With further
reboots, more and more zombies build up.
As there are no other children for which we want the exit status,
the easiest fix is to take advantage of the implicit reaping specified
by POSIX when we set the disposition of SIGCHLD to SIG_IGN.
For this to work, we also need to set the correct default exit signal
of SIGCHLD when using clone3() with CLONE_CLEAR_SIGHAND. Unlike the fallback
fork() path, clone_args::default() initialises the exit signal to zero,
which results in a child with non-standard reaping behaviour.
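A minimal sketch of the POSIX mechanism being relied on, using the
libc crate:

    fn main() {
        // With SIGCHLD set to SIG_IGN, the kernel reaps exited children
        // automatically, so no zombies accumulate across guest reboots.
        unsafe {
            libc::signal(libc::SIGCHLD, libc::SIG_IGN);
        }
        // Children must still exit with the standard SIGCHLD exit signal,
        // hence also setting the clone_args exit signal to SIGCHLD on the
        // clone3() path.
    }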
Signed-off-by: Chris Webb <chris@arachsys.com>
Allow cloud-hypervisor to directly boot the bzImage kernel format using
the regular 32-bit entry point. This can share the memory and vCPU
setup with the regular PVH boot code, but requires the setup of the
'zero page'.
Signed-off-by: Stefan Nuernberger <stefan.nuernberger@cyberus-technology.de>
In the case of an SEV-SNP guest, devices use a swiotlb to gain access
to guest memory for DMA. For that, F_IOMMU/F_ACCESS_PLATFORM must be
exposed in the feature set of virtio devices.
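For reference, VIRTIO_F_ACCESS_PLATFORM is feature bit 33 in the
virtio specification; the gist of the change is advertising it whenever
SEV-SNP is enabled (sketch only, not the actual device code):

    const VIRTIO_F_ACCESS_PLATFORM: u64 = 33;

    fn main() {
        let snp_enabled = true; // stand-in for the real platform check
        let mut avail_features: u64 = 0;
        if snp_enabled {
            // Tell the guest that device DMA must go through the
            // platform's translation, i.e. the swiotlb bounce buffers.
            avail_features |= 1 << VIRTIO_F_ACCESS_PLATFORM;
        }
        assert_ne!(avail_features & (1 << VIRTIO_F_ACCESS_PLATFORM), 0);
    }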
Signed-off-by: Jinank Jain <jinankjain@microsoft.com>
Signed-off-by: Muminul Islam <muislam@microsoft.com>
The 'generate_ram_ranges' function currently hardcodes the assumption
that there are only 2 E820 RAM entries. This is not flexible enough to
handle vendor-specific memory holes. Returning a Vec is also more
convenient for users of this function.
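A hypothetical sketch of the more flexible shape (the function and
types are illustrative, not the real generate_ram_ranges): carve RAM
ranges out of guest memory around an arbitrary list of holes and return
them as a Vec.

    // `holes` must be sorted and non-overlapping; each entry is [start, end).
    fn ram_ranges(mem_end: u64, holes: &[(u64, u64)]) -> Vec<(u64, u64)> {
        let mut ranges = Vec::new();
        let mut start = 0;
        for &(hole_start, hole_end) in holes {
            if hole_start > start {
                ranges.push((start, hole_start));
            }
            start = hole_end;
        }
        if start < mem_end {
            ranges.push((start, mem_end));
        }
        ranges
    }

    fn main() {
        // One hole below 4 GiB in an 8 GiB guest yields the classic two
        // entries; extra vendor-specific holes simply add more entries.
        assert_eq!(
            ram_ranges(8 << 30, &[(3 << 30, 4 << 30)]),
            vec![(0, 3 << 30), (4 << 30, 8 << 30)]
        );
    }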
Signed-off-by: Thomas Barrett <tbarrett@crusoeenergy.com>
The ACPI tables are generated inside the IGVM file in the case of an
SEV-SNP guest, so we don't need to generate them inside Cloud
Hypervisor.
Signed-off-by: Jinank Jain <jinankjain@microsoft.com>
Signed-off-by: Muminul Islam <muislam@microsoft.com>
We occasionally saw cloud-hypervisor crash due to seccomp violations. The
coredumps showed the HTTP API thread crashing after it attempted to call
sched_yield(). The call came from the Rust stdlib's mpmc module, which
calls sched_yield() if several attempts to busy-wait for a condition to
be fulfilled fall short.
Since the system call is harmless and it comes from the stdlib, I opted to allow
all threads to call it.
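A sketch of what allowing the syscall looks like with the rust-vmm
seccompiler crate; the exact API details below are recalled from its
documentation and should be treated as an assumption:

    use seccompiler::{BpfProgram, SeccompAction, SeccompFilter, SeccompRule};
    use std::collections::BTreeMap;
    use std::convert::TryInto;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // An empty rule list means the syscall is allowed unconditionally.
        let mut rules: BTreeMap<i64, Vec<SeccompRule>> = BTreeMap::new();
        rules.insert(libc::SYS_sched_yield, vec![]);

        let filter = SeccompFilter::new(
            rules,
            SeccompAction::Trap,  // action on any other syscall
            SeccompAction::Allow, // action on a match
            std::env::consts::ARCH.try_into()?,
        )?;
        let bpf: BpfProgram = filter.try_into()?;
        seccompiler::apply_filter(&bpf)?;
        Ok(())
    }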
Signed-off-by: Peteris Rudzusiks <rye@stripe.com>
Currently the only way to set the affinity for virtio block threads is
to boot the VM, search for the TID of each of the virtio block threads,
then set the affinity manually. This commit adds an option to pin virtio
block queues to specific host CPUs (similar to pinning vCPUs to host
CPUs). A queue_affinity option has been added to the disk flag in
the CLI to specify a mapping of queue indices to host CPUs.
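Under the hood, pinning a queue's worker thread comes down to
sched_setaffinity(2); a minimal sketch via the libc crate (the helper
name and CPU numbers are illustrative):

    use std::mem;

    // Pin the calling thread to the given host CPUs.
    fn pin_to_host_cpus(cpus: &[usize]) {
        unsafe {
            let mut set: libc::cpu_set_t = mem::zeroed();
            for &cpu in cpus {
                libc::CPU_SET(cpu, &mut set);
            }
            // pid 0 means "the calling thread".
            libc::sched_setaffinity(0, mem::size_of::<libc::cpu_set_t>(), &set);
        }
    }

    fn main() {
        // e.g. run the queue-0 worker on host CPUs 2 and 3.
        pin_to_host_cpus(&[2, 3]);
    }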
Signed-off-by: acarp <acarp@crusoeenergy.com>
The traditional way to configure vCPUs doesn't work for SEV-SNP guests.
All the vCPU configuration for an SEV-SNP guest is provided via the
VMSA.
Signed-off-by: Jinank Jain <jinankjain@microsoft.com>
Signed-off-by: Muminul Islam <muislam@microsoft.com>