While adding console devices, DeviceManager will now use the FDs in
console_info instead of creating them.
To reduce the size of this commit, I marked some variables as unused
with a '_' prefix. All those variables are cleaned up in the next commit.
Signed-off-by: Praveen K Paladugu <prapal@linux.microsoft.com>
Use the pre_create_console_devices method to create console device FDs
and populate them into console_info in the Vmm object.
Signed-off-by: Praveen K Paladugu <prapal@linux.microsoft.com>
With this change, all the information needed to manage console devices
is now available within the Vmm object.
Signed-off-by: Praveen K Paladugu <prapal@linux.microsoft.com>
Introduce ConsoleInfo struct. This struct will be used to store FDs of
console devices created in pre_create_console_devices and passed to
vm_boot.
Move the set_raw_mode and create_pty methods to console_devices.rs to
consolidate console management methods into a single module.
Lastly, copy the logic to create and configure console devices into
the pre_create_console_devices method.
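A minimal sketch of the shape such a struct could take (the field names
here are assumptions, not the actual definition): one optional owned FD
per console-capable device, created up front and handed to vm_boot.

use std::os::fd::OwnedFd;

pub struct ConsoleInfo {
    // FD backing the virtio-console device, if one is configured.
    pub console_main_fd: Option<OwnedFd>,
    // FD backing the serial device, if one is configured.
    pub serial_main_fd: Option<OwnedFd>,
}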
Signed-off-by: Praveen K Paladugu <prapal@linux.microsoft.com>
Misspellings were identified by:
https://github.com/marketplace/actions/check-spelling
* Initial corrections based on forbidden patterns from the action
* Additional corrections by Google Chrome auto-suggest
* Some manual corrections
* Adding markdown bullets to readme credits section
Signed-off-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
Currently, by default, each core is allocated its own socket; basically,
it is an 'n sockets, 1 core, 1 thread per core' kind of structure, as
witnessed from within the guest.
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 8
NUMA node(s): 1
This is not a good default topology because resources are distributed
across multiple sockets. For example, a Linux guest with a multi-socket
configuration will have to calibrate the TSC per socket, due to which it
might observe a longer boot time than usual.
A better default topology would be 1 socket, n cores, 1 thread per core,
which ensures better resource locality.
After this change the topology would be:
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
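A minimal sketch of the new default (struct and helper names are
hypothetical, for illustration only):

struct CpuTopology {
    sockets: u8,
    cores_per_socket: u8,
    threads_per_core: u8,
}

// One socket with n cores and one thread per core, instead of the old
// n-socket default.
fn default_topology(vcpus: u8) -> CpuTopology {
    CpuTopology {
        sockets: 1,
        cores_per_socket: vcpus,
        threads_per_core: 1,
    }
}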
Fixes: #6497
Signed-off-by: Jinank Jain <jinankjain@microsoft.com>
The original code gave an owned fd to UnixListener. That caused the same
fd to be wrapped in two owned files.
When the files were dropped, the same fd would be closed more than once.
A newly introduced check in Rust's stdlib caught that error.
A newly cloned fd should be given to UnixListener instead.
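A minimal sketch of the fixed pattern (the helper name is hypothetical):
dup() the fd via try_clone_to_owned() so the listener owns its own copy,
and dropping it never closes the caller's fd.

use std::os::fd::{AsFd, OwnedFd};
use std::os::unix::net::UnixListener;

fn listener_from(fd: &OwnedFd) -> std::io::Result<UnixListener> {
    // try_clone_to_owned() duplicates the fd, so the listener and the
    // caller each close their own copy exactly once.
    let cloned = fd.as_fd().try_clone_to_owned()?;
    Ok(UnixListener::from(cloned))
}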
Fixes: #6485
Signed-off-by: Wei Liu <liuwe@microsoft.com>
For MSHV we always create a frozen partition, so we
resume the VM during boot. Also, during pause and resume
VM events we call hypervisor-specific APIs.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
Consume FDs passed via SCM_RIGHTS to the VmRestore API and assign them
appropriately to RestoredNetConfig's fds field.
Signed-off-by: Purna Pavan Chandra <paekkaladevi@linux.microsoft.com>
'NetConfig' FDs, when explicitly passed via SCM_RIGHTS during VM
creation, are marked as invalid during snapshot. See: #6332.
So, Restore should support input for the new net FDs. This patch adds a
new field 'net_fds' to 'RestoreConfig'. The FDs passed using this new
field are placed into the 'fds' field of 'NetConfig' appropriately.
The 'validate()' function ensures all net devices from 'VmConfig' backed
by FDs have a corresponding 'RestoredNetConfig' with a matching 'id' and
the expected number of FDs.
The unit tests provide different inputs to the parse and validate
functions to make sure parsing and error handling work as expected.
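A sketch of the kind of check 'validate()' performs, under assumed,
simplified shapes of the two structs (the real definitions differ):

struct NetConfig { id: String, fds: Option<Vec<i32>> }
struct RestoredNetConfig { id: String, fds: Option<Vec<i32>> }

fn validate(nets: &[NetConfig], restored: &[RestoredNetConfig]) -> Result<(), String> {
    // Every FD-backed net device must have a restored entry with a
    // matching id and the same number of FDs.
    for net in nets.iter().filter(|n| n.fds.is_some()) {
        let expected = net.fds.as_ref().unwrap().len();
        match restored.iter().find(|r| r.id == net.id) {
            Some(r) if r.fds.as_ref().map_or(0, |f| f.len()) == expected => (),
            Some(_) => return Err(format!("FD count mismatch for '{}'", net.id)),
            None => return Err(format!("no restored FDs for '{}'", net.id)),
        }
    }
    Ok(())
}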
Fixes: #6286
Signed-off-by: Purna Pavan Chandra <paekkaladevi@linux.microsoft.com>
Co-authored-by: Bo Chen <chen.bo@intel.com>
The compiler is now able to warn if an invalid cfg attribute (e.g. a
feature that is not available) is used.
See https://blog.rust-lang.org/2024/05/06/check-cfg.html for more
details.
Add build.rs files in the crates that use #[cfg(fuzzing)] to add fuzzing
to the list of valid cfg attributes.
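A sketch of what each such build.rs needs to contain to register
`fuzzing` as a known cfg:

// build.rs
fn main() {
    // Tell rustc's check-cfg pass that #[cfg(fuzzing)] is expected.
    println!("cargo:rustc-check-cfg=cfg(fuzzing)");
}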
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
Code in this crate is conditional on this feature, so it is necessary to
expose it as a new feature and use that feature as a dependency when the
feature is enabled on the vmm crate.
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
The "dhat-heap" feature needs to be enabled inside the vmm crate as a
depenency from the top-level as there is build time check for that
feature inside the vmm crate.
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
Update the vhost-user-backend crate version used along with related
crates (vhost and virtio-queue). This requires minor changes to the
types used for the memory in the backends with the use of the
BitmapMmapRegion type for the Bitmap implementation.
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
This is to resolve inconsistencies in our OpenAPI specification,
as default values do not make sense for required fields.
Reported-by: James O. D. Hunt <james.o.hunt@intel.com>
Signed-off-by: Bo Chen <chen.bo@intel.com>
As MSHV also implements set/get_clock data, this patch
removes the KVM feature guard and makes the support x86_64-only,
for both KVM and MSHV.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
The contents of this crate may change and cause conflicts - re-exporting
the contents is unnecessary.
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
The members for {Io, Mmio}{Read, Write} are unused, as exits of those
types are instead handled through the VmOps interface. Removing these is
also a prerequisite due to changes in the mutability of the
VcpuFd::run() method.
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
When using multiple PCI segments, the 32-bit and 64-bit mmio
apertures are split equally between each segment. Add an option
to configure the 'weight'. For example, a PCI segment with a
`mmio32_aperture_weight` of 2 will be allocated twice as much
32-bit mmio space as a normal PCI segment.
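A sketch (with a hypothetical helper) of the weighted split: a 4 GiB
aperture with segment weights [1, 2, 1] yields shares of 1 GiB, 2 GiB
and 1 GiB.

fn segment_apertures(total: u64, weights: &[u32]) -> Vec<u64> {
    // Each segment receives a share proportional to its weight.
    let total_weight: u64 = weights.iter().map(|&w| u64::from(w)).sum();
    weights
        .iter()
        .map(|&w| total * u64::from(w) / total_weight)
        .collect()
}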
Signed-off-by: Thomas Barrett <tbarrett@crusoeenergy.com>
Assigning KVM_IRQFD (when unmasking a guest IRQ) after
KVM_SET_GSI_ROUTING can avoid a kernel panic on guests that are not
patched with commit a80ced6ea514 ("KVM: SVM: fix panic on out-of-bounds
guest IRQ") on AMD systems.
Meanwhile, it is required to deassign KVM_IRQFD (when masking a guest
IRQ) before KVM_SET_GSI_ROUTING (see #3827).
Fixes: #6353
Signed-off-by: Yi Wang <foxywang@tencent.com>
Signed-off-by: Bo Chen <chen.bo@intel.com>
Add infrastructure to look up the host address for mmio regions on
external DMA mapping requests. This specifically resolves vfio
passthrough for virtio-iommu, allowing nested virtualization to pass
external devices through.
Fixes: #6110
Signed-off-by: Andrew Carp <acarp@crusoeenergy.com>
VfioUserDmaMapping is already in the pci crate, this moves
VfioDmaMapping to match the behavior. This is a necessary change to
allow the VfioDmaMapping trait to have access to MmioRegion memory
without creating a circular dependency. The VfioDmaMapping trait
needs to have access to mmio regions to map external devices over
mmio (a follow-up commit).
Signed-off-by: Andrew Carp <acarp@crusoeenergy.com>
The memory region that is associated with the hotpluggable part of
a virtio-mem zone isn't backed by the file specified in the
MemoryZoneConfig. The file is used only for the fixed part of the
zone. When you try to restore a snapshot with virtio-mem, the
backing file is used for all its regions. This results in the
following error:
VmRestore(MemoryManager(GuestMemoryRegion(MappingPastEof)))
This patch sets backing_file only for the fixed part of a virtio-mem
zone.
Fixes: #6337
Signed-off-by: Alexandru Matei <alexandru.matei@uipath.com>
Add IOCTL number for generic hypercall ioctl (MSHV_ROOT_HVCALL).
Update IOCTL numbers for set/get vp state.
Signed-off-by: Nuno Das Neves <nudasnev@microsoft.com>
The 'NetConfig' may contain FDs which can't be serialized correctly, as
FDs can only be donated from another process via a Unix domain socket
with `SCM_RIGHTS`. To avoid misuse of the serialized FDs, this patch
explicitly sets 'NetConfig' FDs as invalid for (de)serialization.
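One possible shape of the change (an assumed approach using serde's
serialize_with attribute; the real code may differ): serialize any FD
list as -1 placeholders, so a restored config can never mistake stale FD
numbers for live ones.

use serde::{Deserialize, Serialize, Serializer};

fn serialize_fds<S: Serializer>(fds: &Option<Vec<i32>>, s: S) -> Result<S::Ok, S::Error> {
    // Replace every FD with -1: the count survives, the values do not.
    fds.as_ref().map(|v| vec![-1i32; v.len()]).serialize(s)
}

#[derive(Serialize, Deserialize)]
struct NetConfig {
    #[serde(serialize_with = "serialize_fds")]
    fds: Option<Vec<i32>>,
}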
See: #6286
Signed-off-by: Bo Chen <chen.bo@intel.com>
warning: assigning the result of `Clone::clone()` may be inefficient
--> vmm/src/device_manager.rs:4188:17
|
4188 | id = child_id.clone();
| ^^^^^^^^^^^^^^^^^^^^^ help: use `clone_from()`: `id.clone_from(child_id)`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#assigning_clones
= note: `#[warn(clippy::assigning_clones)]` on by default
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
A deadlock can happen in the destination VM of a live upgrade or
migration due to waiting on paused device worker threads. For example,
when a serialization error happens after the `DeviceManager` struct is
restored (where all virtio device worker threads are spawned but in a
paused/parked state), a deadlock will happen in
`DeviceManager::drop()`, as it blocks waiting for the worker threads to
join.
This patch ensures that we wake up all device (mostly virtio) worker
threads before we block for them to join.
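A minimal sketch of the shape of the fix: unpark a parked worker before
joining it, so the join cannot wait forever on a thread that is still
parked.

use std::thread;

fn main() {
    // The worker parks until it is explicitly woken.
    let worker = thread::spawn(|| thread::park());
    // Waking first guarantees the join below can complete; unpark()
    // before park() leaves a token, so the ordering is race-free.
    worker.thread().unpark();
    worker.join().unwrap();
}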
Signed-off-by: Bo Chen <chen.bo@intel.com>
Currently, we only support injecting NMI for KVM guests but we can do
the same for MSHV guests as well to have feature parity.
Signed-off-by: Jinank Jain <jinankjain@microsoft.com>
This commit ensures that the HttpApi thread flushes all the responses
before the application shuts down. Without this step, in case of a
VmmShutdown request the application might terminate before the
thread sends a response.
Fixes: #6247
Signed-off-by: Alexandru Matei <alexandru.matei@uipath.com>
On platforms where PCIe P2P is supported, inject a PCI capability into
the NVIDIA GPU to indicate support.
Signed-off-by: Thomas Barrett <tbarrett@crusoeenergy.com>
Host data is passed to the hypervisor, and the firmware then includes
the data in the attestation report. The data might include any key or
secret that the SEV-SNP guest might need later.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
The host data is provided at launch, and is passed
to the hypervisor during the completion of the
isolated import. The firmware includes this value
in all attestation reports for the guest.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
With the nightly toolchain (2024-02-18), cargo check will flag
redundant imports, e.g. because they are already pulled in by the
prelude or by an earlier import.
Remove those redundant imports.
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
When a guest running on a terminal reboots, the sigwinch_listener
subprocess exits and a new one restarts. The parent never wait()s
for children, so the old subprocess remains as a zombie. With further
reboots, more and more zombies build up.
As there are no other children for which we want the exit status,
the easiest fix is to take advantage of the implicit reaping specified
by POSIX when we set the disposition of SIGCHLD to SIG_IGN.
For this to work, we also need to set the correct default exit signal
of SIGCHLD when using clone3() with CLONE_CLEAR_SIGHAND. Unlike the fallback
fork() path, clone_args::default() initialises the exit signal to zero,
which results in a child with non-standard reaping behaviour.
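A sketch of the implicit-reaping part (using the libc crate): with
SIGCHLD's disposition set to SIG_IGN, POSIX lets the kernel reap exited
children automatically, so no zombies accumulate.

fn ignore_sigchld() {
    // SAFETY: changing a signal disposition is process-global; do it
    // before spawning any children whose status we would need.
    unsafe {
        libc::signal(libc::SIGCHLD, libc::SIG_IGN);
    }
}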
Signed-off-by: Chris Webb <chris@arachsys.com>
Allow cloud-hypervisor to direct-boot the bzImage kernel format using
the regular 32-bit entry point. This can share the memory and vcpu
setup with the regular PVH boot code, but requires the setup of the
'zero page'.
Signed-off-by: Stefan Nuernberger <stefan.nuernberger@cyberus-technology.de>
In the case of SEV-SNP guests, devices use a sw-iotlb to gain access to
guest memory for DMA. For that, F_IOMMU/F_ACCESS_PLATFORM must be exposed in
the feature set of virtio devices.
Signed-off-by: Jinank Jain <jinankjain@microsoft.com>
Signed-off-by: Muminul Islam <muislam@microsoft.com>
The 'generate_ram_ranges' function currently hardcodes the assumption
that there are only 2 E820 RAM entries. This is not flexible enough to
handle vendor-specific memory holes. Returning a Vec is also more
convenient for users of this function.
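A sketch of the more flexible shape (simplified signature, assuming a
sorted list of holes): carve RAM ranges around arbitrary holes and
return them as a Vec.

fn generate_ram_ranges(start: u64, end: u64, holes: &[(u64, u64)]) -> Vec<(u64, u64)> {
    let mut ranges = Vec::new();
    let mut cursor = start;
    for &(hole_start, hole_end) in holes {
        // Emit the RAM range preceding this hole, if any.
        if hole_start > cursor {
            ranges.push((cursor, hole_start));
        }
        cursor = cursor.max(hole_end);
    }
    // Emit the tail after the last hole.
    if cursor < end {
        ranges.push((cursor, end));
    }
    ranges
}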
Signed-off-by: Thomas Barrett <tbarrett@crusoeenergy.com>
Since the ACPI tables are generated inside the IGVM file in the case of
a SEV-SNP guest, we don't need to generate them inside the cloud
hypervisor.
Signed-off-by: Jinank Jain <jinankjain@microsoft.com>
Signed-off-by: Muminul Islam <muislam@microsoft.com>
We occasionally saw cloud-hypervisor crash due to seccomp violations. The
coredumps showed the HTTP API thread crashing after it attempted to call
sched_yield(). The call came from the Rust stdlib's mpmc module, which calls
sched_yield() if several attempts to busy-wait for a condition fall
short.
Since the system call is harmless and it comes from the stdlib, I opted to
allow all threads to call it.
Signed-off-by: Peteris Rudzusiks <rye@stripe.com>
Currently the only way to set the affinity for virtio block threads is
to boot the VM, search for the tid of each of the virtio block threads,
then set the affinity manually. This commit adds an option to pin virtio
block queues to specific host cpus (similar to pinning vcpus to host
cpus). A queue_affinity option has been added to the disk flag in
the CLI to specify a mapping of queue indices to host cpus.
Signed-off-by: acarp <acarp@crusoeenergy.com>
The traditional way of configuring vCPUs doesn't work for SEV-SNP
guests. All vCPU configuration for a SEV-SNP guest is provided via the
VMSA.
Signed-off-by: Jinank Jain <jinankjain@microsoft.com>
Signed-off-by: Muminul Islam <muislam@microsoft.com>
On guests with large amounts of memory, using the `prefault` option can
lead to a very long boot time. This commit implements the strategy
taken by QEMU to prefault memory in parallel using multiple threads,
decreasing the time to allocate memory for large guests by
an order of magnitude or more.
For example, this commit reduces the time to allocate memory for a
guest configured with 704 GiB of memory on 1 NUMA node using 1 GiB
hugepages from 81.44134669s to just 6.865287881s.
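A sketch of the parallel strategy (an assumed approach, following the
QEMU technique the commit refers to): split the mapping into one chunk
per thread and touch one byte per page so pages fault in concurrently.

use std::thread;

fn prefault_parallel(mem: &mut [u8], page_size: usize, num_threads: usize) {
    let chunk = mem.len().div_ceil(num_threads).max(1);
    thread::scope(|s| {
        for part in mem.chunks_mut(chunk) {
            s.spawn(move || {
                for off in (0..part.len()).step_by(page_size) {
                    // A volatile write forces the kernel to back the page.
                    unsafe { std::ptr::write_volatile(&mut part[off], 0) };
                }
            });
        }
    });
}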
Signed-off-by: Sean Banko <sbanko@crusoeenergy.com>
Beta clippy fix:
warning: initializer for `thread_local` value can be made `const`
--> vmm/src/sigwinch_listener.rs:27:40
|
27 | static TX: RefCell<Option<File>> = RefCell::new(None);
| ^^^^^^^^^^^^^^^^^^ help: replace with: `const { RefCell::new(None) }`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#thread_local_initializer_can_be_made_const
= note: `#[warn(clippy::thread_local_initializer_can_be_made_const)]` on by default
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
Beta clippy fix:
warning: this call to `as_ref.map(...)` does nothing
--> vmm/src/device_manager.rs:1234:9
|
1234 | self.console_resize_pipe.as_ref().map(Arc::clone)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: try: `self.console_resize_pipe.clone()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#useless_asref
= note: `#[warn(clippy::useless_asref)]` on by default
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
When we import a page, the page either contains some data or is empty;
empty does not mean there is no data, it rather means the page is full
of zeros. We can skip writing the data, as the guest memory of the page
is already zeroed.
A page could also be partially filled, with the rest of its content
being zero. Our IGVM generation tool only fills in data here if there is
some non-zero data; the rest is padding. We only write the data without
the padding and check whether we have finished writing the buffer
content. It is still a full page, though, so we update the variable with
the length of the full page.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
This commit adds the debug-console (or debugcon) device to CHV. It is a
very simple device on I/O port 0xe9 supported by QEMU and BOCHS. It is
meant for printing information as easy as possible, without any
necessary configuration from the guest at all.
It is primarily interesting to OS/kernel and firmware developers as they
can produce output as soon as the guest starts without any configuration
of a serial device or similar. Furthermore, a kernel hacker might use
this device for information of type B whereas information of type A are
printed to the serial device.
This device is not used by default by Linux, Windows, or any other
"real" OS, but only by toy kernels and during firmware development.
In the CLI, it can be configured similarly to --console or --serial with
the --debug-console parameter.
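From the guest's point of view, printing is a single port write. A
bare-metal sketch (kernel/firmware context assumed, ring-0 required):

#[cfg(target_arch = "x86_64")]
fn debugcon_putc(byte: u8) {
    // SAFETY: raw port I/O; only valid in ring 0 of a guest whose VMM
    // implements the 0xe9 debug console.
    unsafe {
        core::arch::asm!("out dx, al", in("dx") 0xe9u16, in("al") byte);
    }
}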
Signed-off-by: Philipp Schuster <philipp.schuster@cyberus-technology.de>
This patch bumps the following crates, including `kvm-bindings@0.7.0`*,
`kvm-ioctls@0.16.0`**, `linux-loader@0.11.0`, `versionize@0.2.0`,
`versionize_derive@0.1.6`***, `vhost@0.10.0`,
`vhost-user-backend@0.13.1`, `virtio-queue@0.11.0`, `vm-memory@0.14.0`,
`vmm-sys-util@0.12.1`, and the latest of `vfio-bindings`, `vfio-ioctls`,
`mshv-bindings`,`mshv-ioctls`, and `vfio-user`.
* A fork of the `kvm-bindings` crate is being used to support
serialization of various structs for migration [1]. Also, code changes
are made to accommodate the updated `struct xsave` from the Linux
kernel. Note: these changes related to `struct xsave` break
live-upgrade.
** The new `kvm-ioctls` crate introduced breaking changes for
the `get/set_one_reg` API on `aarch64` [2], so code changes are made to
the new APIs.
*** A fork of the `versionize_derive` crate is being used to support
versionize on packed structs [3].
[1] https://github.com/cloud-hypervisor/kvm-bindings/tree/ch-v0.7.0
[2] https://github.com/rust-vmm/kvm-ioctls/pull/223
[3] https://github.com/cloud-hypervisor/versionize_derive/tree/ch-0.1.6
Fixes: #6072
Signed-off-by: Bo Chen <chen.bo@intel.com>
Set the SEV control register so we know where to
start running. This register configures the
SEV feature control state on a virtual processor.
Signed-off-by: Jinank Jain <jinankjain@microsoft.com>
Signed-off-by: Muminul Islam <muislam@microsoft.com>
These Default implementations either don't produce valid configs, are
no longer used outside of tests, or both.
For the tests, we can define our own local "default" values that make
the most sense for the tests, without worrying about what's
a (somewhat) sensible "global" default value.
Signed-off-by: Alyssa Ross <hi@alyssa.is>
Bumping the anyhow crate from 1.0.75 to 1.0.79 caused seccomp
failures in the integration tests. The newly added backtrace support
relies on readlink and many other syscalls.
The issue was noticed with the test_api_http_pause_resume test, where
the second VM PAUSE or VM RESUME prints an error and causes a panic.
The panic message, printed from a thread which is not allowed to write
output, triggered the issue.
So we implement the Display trait for the HttpError and ApiError enums
to avoid adding many syscalls to the seccomp filter section.
Signed-off-by: Ravi kumar Veeramally <ravikumar.veeramally@intel.com>
On hosts with >256 cpus, setting the cpu affinity to a host cpu index
>255 will return an error because the type of `host_cpu` is `u8`.
This commit changes the type of `host_cpu` to `usize` to remove this
limitation.
Signed-off-by: Sean Banko <sbanko@crusoeenergy.com>
Uses of the old ApiRequest enum conflated two different concerns:
identifying an API request endpoint, and storing data for an API
request. This led to ApiRequest values being passed around with junk
data just to communicate a request type, which forced all API request
body types to implement Default, which in some cases doesn't make any
sense — what's the "default" path for a vhost-user socket? The
nonsensical Default values have led to tests relying on being able to
use nonsensical data, which is an impediment to adding better
validation for these types.
Rather than having API request types be represented by an enum, which
has to carry associated body data everywhere it's used, it makes more
sense to represent API request types as trait objects. These can have
an associated type for the type of the request body, and this makes it
possible to pass API request types and data around as siblings in a
type-safe way without forcing them into a single value even where it
doesn't make sense. Trait objects also give us dynamic dispatch,
which lets us get rid of several large match blocks.
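A sketch of this shape (names are hypothetical and heavily simplified
from the actual change): each endpoint is a type whose trait carries an
associated body type, and a complete request is erased into one
dispatchable closure.

struct Vmm;

impl Vmm {
    fn pause(&mut self) { /* ... */ }
}

trait ApiAction {
    type Body;
    fn handle(vmm: &mut Vmm, body: Self::Body);
}

struct VmPause;

impl ApiAction for VmPause {
    type Body = ();
    fn handle(vmm: &mut Vmm, _body: ()) {
        vmm.pause();
    }
}

// Action and body travel together, type-checked at construction.
type Request = Box<dyn FnOnce(&mut Vmm) + Send>;

fn make_request<A: ApiAction + 'static>(body: A::Body) -> Request
where
    A::Body: Send + 'static,
{
    Box::new(move |vmm| A::handle(vmm, body))
}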
To keep it possible to fuzz the HTTP API, all the Vmm methods called
by the HTTP API are pulled out into a trait, so the fuzzer can provide
its own stub implementation of the VMM.
Signed-off-by: Alyssa Ross <hi@alyssa.is>
The VIRTIO specification[1] says:
> The upper 32 bits of the CID are reserved and zeroed.
We should therefore not allow the user to supply a VSOCK CID with
those bits set. To accomplish this, limit the public API of the
virtio-vsock device to only accept 32-bit CIDs, while still using
64-bit CIDs internally since that's how virtio-vsock works.
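A sketch of the boundary (simplified device struct): the public
constructor accepts only a u32 CID and widens it internally, so the
reserved upper 32 bits can never be set by a user.

struct Vsock {
    cid: u64, // internal representation, as virtio-vsock uses 64 bits
}

impl Vsock {
    fn new(cid: u32) -> Self {
        Vsock { cid: u64::from(cid) }
    }
}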
[1]: https://docs.oasis-open.org/virtio/virtio/v1.2/csd01/virtio-v1.2-csd01.html#x1-4400004
Signed-off-by: Alyssa Ross <hi@alyssa.is>
I accidentally ran a VM with CID 2 (VMADDR_CID_HOST), and very strange
and difficult-to-debug behavior ensued. I don't think a virtio-vsock
device should be allowed to have any of the special CIDs
(VMADDR_CID_ANY, VMADDR_CID_HYPERVISOR, VMADDR_CID_LOCAL, VMADDR_CID_HOST).
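A sketch of such a guard (the CID values are the standard vsock ABI
constants): reject the special addresses at validation time.

fn validate_guest_cid(cid: u32) -> Result<(), String> {
    const VMADDR_CID_HYPERVISOR: u32 = 0;
    const VMADDR_CID_LOCAL: u32 = 1;
    const VMADDR_CID_HOST: u32 = 2;
    const VMADDR_CID_ANY: u32 = u32::MAX;
    match cid {
        VMADDR_CID_HYPERVISOR | VMADDR_CID_LOCAL | VMADDR_CID_HOST | VMADDR_CID_ANY => {
            Err(format!("{cid} is a reserved vsock CID"))
        }
        _ => Ok(()),
    }
}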
Signed-off-by: Alyssa Ross <hi@alyssa.is>
Complete the isolated import, telling the
Microsoft hypervisor that the import is done so that
MSHV can issue the SNP_LAUNCH_FINISH command.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
Import all the isolated pages after parsing of the
IGVM file is done. The hypervisor adds those
pages for PSP measurement (part of the hashing).
Signed-off-by: Muminul Islam <muislam@microsoft.com>
Add a 'rate_limit_groups' field to VmConfig that defines a set of
named RateLimiterGroups.
When the 'rate_limit_group' field of DiskConfig is defined, all
virtio-blk queues will be rate-limited by a shared RateLimiterGroup.
The lifecycle of all RateLimiterGroups is tied to the Vm.
A RateLimiterGroup may exist even if no Disks are configured to use
the RateLimiterGroup. Disks may be hot-added or hot-removed from the
RateLimiterGroup.
When the 'rate_limiter' field of DiskConfig is defined, we construct
an anonymous RateLimiterGroup whose lifecycle is tied to the Disk.
This is primarily done for API backwards compatibility. Importantly,
the behavior is not the same! This implementation rate-limits the
aggregate bandwidth / iops of an individual disk rather than the
bandwidth / iops of an individual queue of a disk.
When neither the 'rate_limit_group' nor the 'rate_limiter' field of
DiskConfig is defined, the disk is not rate-limited.
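A sketch of the sharing (types simplified): every queue attached to a
named group holds an Arc to the same bucket, so the limit applies to
the group's aggregate I/O rather than to each queue separately.

use std::sync::{Arc, Mutex};

struct RateLimiterGroup {
    tokens: Mutex<u64>, // shared bandwidth/iops budget
}

impl RateLimiterGroup {
    fn try_consume(&self, n: u64) -> bool {
        let mut tokens = self.tokens.lock().unwrap();
        if *tokens >= n {
            *tokens -= n;
            true
        } else {
            false
        }
    }
}

// Each disk queue clones the Arc instead of owning a private limiter.
fn attach_queue(group: &Arc<RateLimiterGroup>) -> Arc<RateLimiterGroup> {
    Arc::clone(group)
}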
Signed-off-by: Thomas Barrett <tbarrett@crusoeenergy.com>