Properly detach a device from a domain if that device is already
attached to another domain on an attach request (following section
5.13.6.3.2 of the virtio-iommu spec). This resolves a reboot failure
under nested virtualization.
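A minimal sketch of the intended attach handling (names and types are
hypothetical, not the actual code):

    use std::collections::BTreeMap;

    // endpoint id -> domain id it is currently attached to
    fn attach(endpoints: &mut BTreeMap<u32, u32>, endpoint: u32, domain: u32) {
        // Per section 5.13.6.3.2 of the virtio-iommu spec, an endpoint that
        // is already attached to another domain must be detached from it
        // before the new attachment takes effect.
        if let Some(&old_domain) = endpoints.get(&endpoint) {
            if old_domain != domain {
                endpoints.remove(&endpoint);
                // ...tear down any per-domain state for this endpoint here...
            }
        }
        endpoints.insert(endpoint, domain);
    }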
Signed-off-by: Andrew Carp <acarp@crusoeenergy.com>
Ensure that any mappings already established in a domain are properly
replayed for a newly attached endpoint on that endpoint's attach
request. This is done by searching for all previous mappings in the
domain and then issuing map requests for the newly attached endpoint.
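A minimal sketch of the replay logic (names are hypothetical):

    use std::collections::BTreeMap;

    struct Mapping {
        size: u64,
        phys_start: u64,
    }

    // On an attach request, replay every mapping already present in the
    // domain (keyed by virt_start) for the newly attached endpoint, so it
    // observes the same virtual -> physical layout as its peers.
    fn replay_mappings<F: FnMut(u64, &Mapping)>(
        domain_mappings: &BTreeMap<u64, Mapping>,
        mut map_for_endpoint: F,
    ) {
        for (virt_start, mapping) in domain_mappings {
            map_for_endpoint(*virt_start, mapping);
        }
    }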
Signed-off-by: Andrew Carp <acarp@crusoeenergy.com>
When restoring a VM, the VirtioPciCfgCapInfo struct is not properly
initialized. All fields are 0, including the offset where the
capability starts. Hence, when you read a PCI configuration register
in the range [0..length(VirtioPciCfgCap)] you get the value 0 instead of
the actual register contents.
Linux rescans the whole PCI bus when adding a new device. It reads the
vendor_id and device_id values for every device. Because these are
stored at offset 0 in PCI configuration space, their value is 0 for
existing devices. As such, Linux considers that the devices have been
unplugged and removes them from the system.
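A minimal sketch of how the capability offset could be recovered on
restore by walking the PCI capability list (constants are per the PCI
and virtio specs; the helper name is hypothetical):

    // Find the offset of the virtio PCI cfg capability in config space.
    fn find_virtio_cfg_cap_offset(config_space: &[u8]) -> Option<usize> {
        const PCI_CAPABILITY_LIST: usize = 0x34;
        const PCI_CAP_ID_VNDR: u8 = 0x09; // vendor-specific capability
        const VIRTIO_PCI_CAP_PCI_CFG: u8 = 5; // cfg_type of the cfg access cap

        let mut cap_ptr = config_space[PCI_CAPABILITY_LIST] as usize;
        while cap_ptr != 0 {
            // struct virtio_pci_cap: cap_vndr, cap_next, cap_len, cfg_type, ...
            if config_space[cap_ptr] == PCI_CAP_ID_VNDR
                && config_space[cap_ptr + 3] == VIRTIO_PCI_CAP_PCI_CFG
            {
                return Some(cap_ptr);
            }
            cap_ptr = config_space[cap_ptr + 1] as usize;
        }
        None
    }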
Fixes: #6265
Signed-off-by: Alexandru Matei <alexandru.matei@uipath.com>
According to the virtio iommu spec (section 5.13.6.6), all mappings
within the entire range from virt_start to virt_end in an unmap
request must be removed. This change adds this functionality,
iterating through all mappings that fall within an unmap request
for that domain and removing them.
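A minimal sketch of the range removal, with mappings keyed by
virt_start (names are hypothetical):

    use std::collections::BTreeMap;

    struct Mapping {
        size: u64,
    }

    // Remove every mapping that falls within [virt_start, virt_end], per
    // section 5.13.6.6 of the virtio-iommu spec.
    fn unmap(mappings: &mut BTreeMap<u64, Mapping>, virt_start: u64, virt_end: u64) {
        mappings.retain(|&start, mapping| {
            let end = start + mapping.size - 1;
            !(start >= virt_start && end <= virt_end)
        });
    }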
Signed-off-by: Andrew Carp <acarp@crusoeenergy.com>
warning: assigning the result of `Clone::clone()` may be inefficient
--> virtio-devices/src/transport/pci_device.rs:1073:9
|
1073 | self.bar_regions = bars.clone();
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: use `clone_from()`: `self.bar_regions.clone_from(&bars)`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#assigning_clones
warning: `devices` (lib) generated 1 warning (run `cargo clippy --fix --lib -p devices` to apply 1 suggestion)
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
With the nightly toolchain (2024-02-18) cargo check will flag up
redundant imports, either because they are pulled in by the prelude or
because they duplicate an earlier import.
Remove those redundant imports.
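For example (an illustrative case, not necessarily one of the actual
removals), an import like:

    use std::convert::TryInto; // redundant: TryInto is in the 2021 prelude

is now reported by the nightly toolchain and can simply be deleted.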
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
For SevSnp guests, IO events are handled by the GHCB protocol. When we
receive such a notification, we have to signal the device via its
eventfd.
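A minimal sketch of the notification path (the handler shape is
hypothetical; `EventFd` is from `vmm-sys-util`):

    use vmm_sys_util::eventfd::EventFd;

    // Called when the GHCB protocol reports a guest IO event targeting a
    // registered doorbell: signal the device's ioeventfd so the queue
    // handler thread wakes up, as a KVM ioeventfd write would.
    fn notify_io_event(ioeventfd: &EventFd) -> std::io::Result<()> {
        ioeventfd.write(1)
    }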
Signed-off-by: Muminul Islam <muislam@microsoft.com>
Currently the only way to set the affinity for virtio block threads is
to boot the VM, search for the tid of each of the virtio block threads,
and then set the affinity manually. This commit adds an option to pin
virtio block queues to specific host CPUs (similar to pinning vcpus to
host CPUs). A queue_affinity option has been added to the disk flag in
the CLI to specify a mapping of queue indices to host CPUs.
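A hypothetical invocation pinning queue 0 to host CPU 1 and queue 1 to
host CPU 3 (the exact syntax is an assumption here, mirroring the
existing vCPU affinity option):

    --disk path=disk.img,num_queues=2,queue_affinity=[0@[1],1@[3]]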
Signed-off-by: acarp <acarp@crusoeenergy.com>
This patch bumps the following crates, including `kvm-bindings@0.7.0`*,
`kvm-ioctls@0.16.0`**, `linux-loader@0.11.0`, `versionize@0.2.0`,
`versionize_derive@0.1.6`***, `vhost@0.10.0`,
`vhost-user-backend@0.13.1`, `virtio-queue@0.11.0`, `vm-memory@0.14.0`,
`vmm-sys-util@0.12.1`, and the latest of `vfio-bindings`, `vfio-ioctls`,
`mshv-bindings`, `mshv-ioctls`, and `vfio-user`.
* A fork of the `kvm-bindings` crate is being used to support
serialization of various structs for migration [1]. Also, code changes
are made to accommodate the updated `struct xsave` from the Linux
kernel. Note: these changes related to `struct xsave` break
live-upgrade.
** The new `kvm-ioctls` crate introduced breaking changes for
the `get/set_one_reg` API on `aarch64` [2], so code changes are made to
the new APIs (see the sketch after these notes).
*** A fork of the `versionize_derive` crate is being used to support
versionize on packed structs [3].
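To make the `get/set_one_reg` change concrete, a minimal sketch of the
migration (register id and vCPU setup elided; the old call shape is an
approximation, not a quote of the removed code):

    // Before (kvm-ioctls < 0.16): the register value was returned directly.
    let value = vcpu.get_one_reg(reg_id)?;

    // After (kvm-ioctls 0.16): the caller passes a byte buffer sized for
    // the register and decodes the value from it.
    let mut bytes = [0_u8; 8];
    vcpu.get_one_reg(reg_id, &mut bytes)?;
    let value = u64::from_le_bytes(bytes);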
[1] https://github.com/cloud-hypervisor/kvm-bindings/tree/ch-v0.7.0
[2] https://github.com/rust-vmm/kvm-ioctls/pull/223
[3] https://github.com/cloud-hypervisor/versionize_derive/tree/ch-0.1.6
Fixes: #6072
Signed-off-by: Bo Chen <chen.bo@intel.com>
The VIRTIO specification[1] says:
> The upper 32 bits of the CID are reserved and zeroed.
We should therefore not allow the user to supply a VSOCK CID with
those bits set. To accomplish this, limit the public API of the
virtio-vsock device to only accept 32-bit CIDs, while still using
64-bit CIDs internally since that's how virtio-vsock works.
[1]: https://docs.oasis-open.org/virtio/virtio/v1.2/csd01/virtio-v1.2-csd01.html#x1-4400004
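A minimal sketch of the API boundary (names are hypothetical):

    pub struct Vsock {
        // Stored as 64 bits internally, since that is what the
        // virtio-vsock device config space carries.
        cid: u64,
    }

    impl Vsock {
        // The public API only accepts 32-bit CIDs, because the upper
        // 32 bits of the CID are reserved and zeroed by the spec.
        pub fn new(cid: u32) -> Self {
            Vsock { cid: cid.into() }
        }
    }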
Signed-off-by: Alyssa Ross <hi@alyssa.is>
The socket is nonblocking, so it's not guaranteed that it will be
possible to read the whole connect command in a single iteration of
the event loop. To reproduce:
(echo -n 'CONNECT '; sleep 1; echo 1234; cat) | socat STDIO UNIX-CONNECT:vsock.sock
This would produce the error:
cloud-hypervisor: 5.509209s: <_vsock4> INFO:virtio-devices/src/vsock/unix/muxer.rs:446 -- vsock: error adding local-init connection: UnixRead(Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" })
To fix this, if we only get a partial command, we need to save it for
future iterations of the event loop, and only proceed once we've read
a complete command.
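A minimal sketch of the buffering (names are hypothetical; the real
muxer parses more than this):

    #[derive(Default)]
    struct PartialCommand {
        buf: Vec<u8>,
    }

    impl PartialCommand {
        // Accumulate nonblocking reads; return the command only once a
        // full newline-terminated line has arrived, keeping any
        // trailing bytes for the next command.
        fn feed(&mut self, bytes: &[u8]) -> Option<String> {
            self.buf.extend_from_slice(bytes);
            let pos = self.buf.iter().position(|&b| b == b'\n')?;
            let cmd = String::from_utf8_lossy(&self.buf[..pos]).into_owned();
            self.buf.drain(..=pos);
            Some(cmd)
        }
    }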
Signed-off-by: Alyssa Ross <hi@alyssa.is>
Add a 'rate_limit_groups' field to VmConfig that defines a set of
named RateLimiterGroups.
When the 'rate_limit_group' field of DiskConfig is defined, all
virtio-blk queues will be rate-limited by a shared RateLimiterGroup.
The lifecycle of all RateLimiterGroups is tied to the Vm.
A RateLimiterGroup may exist even if no Disks are configured to use
the RateLimiterGroup. Disks may be hot-added or hot-removed from the
RateLimiterGroup.
When the 'rate_limiter' field of DiskConfig is defined, we construct
an anonymous RateLimiterGroup whose lifecycle is tied to the Disk.
This is primarily done for API backwards compatibility. Importantly,
the behavior is not the same! This implementation rate-limits the
aggregate bandwidth / iops of an individual disk rather than the
bandwidth / iops of an individual queue of a disk.
When neither the 'rate_limit_group' nor the 'rate_limiter' field of
DiskConfig is defined, the disk is not rate-limited.
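A hypothetical invocation (option and parameter names are assumptions
based on the description above):

    --rate-limit-group id=shared,bw_size=10485760,bw_refill_time=100 \
    --disk path=a.img,rate_limit_group=shared path=b.img,rate_limit_group=shared

Both disks then draw from the same token bucket, so their aggregate
bandwidth is limited rather than each disk's individually.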
Signed-off-by: Thomas Barrett <tbarrett@crusoeenergy.com>
error: use of a fallible conversion when an infallible one could be used
Error: --> virtio-devices/src/vhost_user/vu_common_ctrl.rs:206:51
|
206 | let actual_size: usize = queue.size().try_into().unwrap();
| ^^^^^^^^^^^^^^^^^^^ help: use: `into()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_fallible_conversions
= note: `-D clippy::unnecessary-fallible-conversions` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(clippy::unnecessary_fallible_conversions)]`
error: could not compile `virtio-devices` (lib) due to previous error
Error: warning: build failed, waiting for other jobs to finish...
error: could not compile `virtio-devices` (lib test) due to previous error
Error: The process '/home/runner/.cargo/bin/cargo' failed with exit code 101
Signed-off-by: Bo Chen <chen.bo@intel.com>
seccompiler v0.4.0 started to use the `seccomp` syscall instead of the
`prctl` syscall. Also, threads for virtio-devices should not need any of
these syscalls anyway.
Signed-off-by: Bo Chen <chen.bo@intel.com>
The cumulative average formula [1] requires the use of signed integers
for the intermediate calculations, while the calculated result (e.g. the
cumulative average) is always positive. This patch reflects the above
requirements in our code.
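For reference, the incremental update used for the cumulative average is:

    CA(n+1) = CA(n) + (x(n+1) - CA(n)) / (n + 1)

The delta term x(n+1) - CA(n) can be negative even though every input
and the resulting average are positive, which is why the intermediate
arithmetic must be signed.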
[1] https://en.wikipedia.org/wiki/Moving_average#Cumulative_average
Fixes: #5745
Signed-off-by: Bo Chen <chen.bo@intel.com>
There is a "LATENCY_SCALE" factor used when calculating the cumulative
average latency, so it should also be applied to the latency of the
first op.
See: #5712
Signed-off-by: Bo Chen <chen.bo@intel.com>
Update to the latest vm-memory and all the crates that also depend upon
it.
Fix some deprecation warnings.
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
Logically until we have handled the first operation the latency is
infinite; this logic was applied to the minimum latency originally but
this patch extends that logic to the maximum and average latency.
To prevent the initial average latency being skewed by the inclusion of
infinity, the average value is initially seeded with the first measured
latency.
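A minimal sketch of the seeding (field names are hypothetical):

    #[derive(Default)]
    struct LatencyStats {
        ops: u64,
        min: u64,
        max: u64,
        avg: u64,
    }

    impl LatencyStats {
        fn record(&mut self, latency: u64) {
            if self.ops == 0 {
                // Until the first operation completes, min/max/avg are
                // logically infinite; seed all three with the first
                // measurement so the average is not skewed.
                self.min = latency;
                self.max = latency;
                self.avg = latency;
            } else {
                self.min = self.min.min(latency);
                self.max = self.max.max(latency);
                // Cumulative average update; the delta may be negative,
                // hence the signed intermediate arithmetic.
                let delta = latency as i64 - self.avg as i64;
                self.avg = (self.avg as i64 + delta / (self.ops as i64 + 1)) as u64;
            }
            self.ops += 1;
        }
    }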
Fixes: #5704
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
Since kernel v6.3 the vsock packet is not split over two descriptors
and is instead included in a single one.
This change is based on the discovery and fix identified by Stefano
Garzarella for the vm-virtio vsock implementation and adapted for our
very different codebase.
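A minimal sketch of handling both layouts (`virtio_vsock_hdr` is 44
bytes per the virtio spec; the function name is hypothetical, and the
caller must ensure the descriptor holds at least the header):

    const VSOCK_HDR_LEN: usize = 44;

    // Split the first descriptor's bytes into header and optional inline
    // payload. On kernels >= 6.3 the payload follows the header in the
    // same descriptor; on older kernels it arrives in a second one.
    fn split_first_desc(first_desc: &[u8]) -> (&[u8], Option<&[u8]>) {
        let (hdr, rest) = first_desc.split_at(VSOCK_HDR_LEN);
        if rest.is_empty() {
            (hdr, None)
        } else {
            (hdr, Some(rest))
        }
    }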
Fixes: #5691
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
warning: this argument is a mutable reference, but not used mutably
--> virtio-devices/src/transport/pci_common_config.rs:100:17
|
100 | queues: &mut [Queue],
| ^^^^^^^^^^^^ help: consider changing to: `&[Queue]`
|
= warning: changing this function will impact semver compatibility
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_pass_by_ref_mut
= note: `#[warn(clippy::needless_pass_by_ref_mut)]` on by default
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
Similar to balloon inflation, memory allocation is also constrained to
align with the page size. Therefore, memory is allocated in units of the
host page size, one page at a time, until all host pages covered by the
memory range requested by the guest are handled. If the requested size
is smaller than the page size, an entire page is still allocated,
because smaller allocations are not possible given the page size
limitation.
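A minimal sketch of the rounding (the helper name is hypothetical):

    // Number of whole host pages needed to cover a guest request; a
    // request smaller than one host page still consumes a full page.
    fn pages_to_allocate(request_bytes: u64, host_page_size: u64) -> u64 {
        (request_bytes + host_page_size - 1) / host_page_size
    }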
Fixes: cloud-hypervisor#5369
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
Currently, virtio-balloon can't work well with page sizes other than 4k.
The virtio-balloon always works in units of 4kiB (BALLOON_PAGE_SIZE), but
we can only actually discard memory in units of the host page size.
We get some idea from [1] to solve this issue.
What has been done in this commit:
For balloon inflation:
A bitmap is employed to track the memory range to be released at 4k
granularity. Once it accumulates to one host page size, the
corresponding page is released and the bitmap is cleared to handle the
next record. This process continues until the entire memory range has
been handled. Memory will only be released when a consecutive set of
balloon request entries from the same host page adds up to the full
host page size. If a balloon request entry from a different host page
is encountered, the bitmap and the base host page address are reset.
Consequently, memory is released in units of the host page size,
ensuring efficient memory management. That is to say, if the length of
the memory range to be released is smaller than the page size, or if
the guest scatters requests (each smaller than the page size) across
different host pages, no memory will be released.
[1] https://patchwork.kernel.org/project/qemu-devel/patch/20190214043916.22128-6-david@gibson.dropbear.id.au/
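A minimal sketch of the bitmap accumulation (names are hypothetical; a
real implementation would need a wider bitmap for very large host
pages):

    const BALLOON_PAGE_SIZE: u64 = 4096;

    struct InflateTracker {
        host_page_size: u64,
        base: Option<u64>, // host page currently being accumulated
        bitmap: u64,       // one bit per 4k balloon page in that host page
    }

    impl InflateTracker {
        // Record one 4k balloon request; return the host page base
        // address once every 4k chunk of that host page was ballooned.
        fn record(&mut self, gpa: u64) -> Option<u64> {
            let page_base = gpa & !(self.host_page_size - 1);
            if self.base != Some(page_base) {
                // Request from a different host page: reset the tracking.
                self.base = Some(page_base);
                self.bitmap = 0;
            }
            self.bitmap |= 1 << ((gpa - page_base) / BALLOON_PAGE_SIZE);
            let bits = self.host_page_size / BALLOON_PAGE_SIZE;
            let full = if bits >= 64 { u64::MAX } else { (1u64 << bits) - 1 };
            if self.bitmap == full {
                self.bitmap = 0;
                self.base = None;
                Some(page_base)
            } else {
                None
            }
        }
    }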
Fixes: cloud-hypervisor#5369
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>