The VIRTIO specification[1] says:
> The upper 32 bits of the CID are reserved and zeroed.
We should therefore not allow the user to supply a VSOCK CID with
those bits set. To accomplish this, limit the public API of the
virtio-vsock device to only accept 32-bit CIDs, while still using
64-bit CIDs internally since that's how virtio-vsock works.
[1]: https://docs.oasis-open.org/virtio/virtio/v1.2/csd01/virtio-v1.2-csd01.html#x1-4400004
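A minimal sketch of the resulting shape (names are illustrative, not the actual API):

pub struct Vsock {
    cid: u64, // virtio-vsock carries the CID as 64-bit internally
}

impl Vsock {
    // The public constructor only accepts a u32, so the reserved
    // upper 32 bits of the CID can never be set by the user.
    pub fn new(cid: u32) -> Self {
        Vsock { cid: u64::from(cid) }
    }
}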
Signed-off-by: Alyssa Ross <hi@alyssa.is>
The socket is nonblocking, so it's not guaranteed that it will be
possible to read the whole connect command in a single iteration of
the event loop. To reproduce:
(echo -n 'CONNECT '; sleep 1; echo 1234; cat) | socat STDIO UNIX-CONNECT:vsock.sock
This would produce the error:
cloud-hypervisor: 5.509209s: <_vsock4> INFO:virtio-devices/src/vsock/unix/muxer.rs:446 -- vsock: error adding local-init connection: UnixRead(Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" })
To fix this, if we only get a partial command, we need to save it for
future iterations of the event loop, and only proceed once we've read
a complete command.
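A sketch of the buffering approach, assuming a nonblocking UnixStream and illustrative names:

use std::io::{ErrorKind, Read};
use std::os::unix::net::UnixStream;

struct LocalInit {
    stream: UnixStream, // nonblocking
    command: Vec<u8>,   // partial command saved between iterations
}

impl LocalInit {
    // Returns Ok(Some(command)) once a full line has been read,
    // or Ok(None) if we must wait for more data.
    fn read_command(&mut self) -> std::io::Result<Option<String>> {
        let mut byte = [0u8; 1];
        loop {
            match self.stream.read(&mut byte) {
                Ok(0) => return Err(ErrorKind::UnexpectedEof.into()),
                Ok(_) if byte[0] == b'\n' => {
                    let cmd = String::from_utf8_lossy(&self.command).into_owned();
                    self.command.clear();
                    return Ok(Some(cmd));
                }
                Ok(_) => self.command.push(byte[0]),
                // Partial command: keep what we have for the next
                // iteration of the event loop.
                Err(e) if e.kind() == ErrorKind::WouldBlock => return Ok(None),
                Err(e) => return Err(e),
            }
        }
    }
}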
Signed-off-by: Alyssa Ross <hi@alyssa.is>
Add a 'rate_limit_groups' field to VmConfig that defines a set of
named RateLimiterGroups.
When the 'rate_limit_group' field of DiskConfig is defined, all
virtio-blk queues will be rate-limited by a shared RateLimiterGroup.
The lifecycle of all RateLimiterGroups is tied to the Vm.
A RateLimiterGroup may exist even if no Disks are configured to use
the RateLimiterGroup. Disks may be hot-added or hot-removed from the
RateLimiterGroup.
When the 'rate_limiter' field of DiskConfig is defined, we construct
an anonymous RateLimiterGroup whose lifecycle is tied to the Disk.
This is primarily done for API backwards compatibility. Importantly,
the behavior is not the same! This implementation rate-limits the
aggregate bandwidth / IOPS of an individual disk rather than the
bandwidth / IOPS of each individual queue of a disk.
When neither the 'rate_limit_group' nor the 'rate_limiter' field of
DiskConfig is defined, the Disk is not rate-limited.
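A sketch of how the configuration pieces relate (field types beyond those named above are illustrative):

struct RateLimiterConfig; // placeholder for the pre-existing per-disk config

struct RateLimiterGroupConfig {
    id: String,
    // token-bucket parameters for bandwidth / IOPS ...
}

struct VmConfig {
    // Named groups; lifecycle tied to the Vm.
    rate_limit_groups: Option<Vec<RateLimiterGroupConfig>>,
}

struct DiskConfig {
    // References a named group shared by all queues of the disk.
    rate_limit_group: Option<String>,
    // Legacy: builds an anonymous per-disk group instead.
    rate_limiter: Option<RateLimiterConfig>,
}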
Signed-off-by: Thomas Barrett <tbarrett@crusoeenergy.com>
error: use of a fallible conversion when an infallible one could be used
Error: --> virtio-devices/src/vhost_user/vu_common_ctrl.rs:206:51
|
206 | let actual_size: usize = queue.size().try_into().unwrap();
| ^^^^^^^^^^^^^^^^^^^ help: use: `into()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_fallible_conversions
= note: `-D clippy::unnecessary-fallible-conversions` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(clippy::unnecessary_fallible_conversions)]`
error: could not compile `virtio-devices` (lib) due to previous error
Error: warning: build failed, waiting for other jobs to finish...
error: could not compile `virtio-devices` (lib test) due to previous error
Error: The process '/home/runner/.cargo/bin/cargo' failed with exit code 101
Signed-off-by: Bo Chen <chen.bo@intel.com>
seccompiler v0.4.0 started to use the `seccomp` syscall instead of the
`prctl` syscall. Also, threads for virtio-devices should not need either
of these syscalls anyway.
Signed-off-by: Bo Chen <chen.bo@intel.com>
The cumulative average formula [1] requires the use of signed integers
for proper calculation, even though the calculated result (i.e. the
cumulative average) is always positive. This patch reflects the above
requirements in our code.
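For reference, the update is CMA_next = CMA + (sample - CMA) / (n + 1); a sketch of why signedness matters:

// (sample - avg) can be negative even though the resulting average
// is always positive, so the arithmetic must be signed.
fn update_cumulative_average(avg: i64, sample: i64, n: i64) -> i64 {
    avg + (sample - avg) / (n + 1)
}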
[1] https://en.wikipedia.org/wiki/Moving_average#Cumulative_average

Fixes: #5745
Signed-off-by: Bo Chen <chen.bo@intel.com>
A "LATENCY_SCALE" factor is used when calculating the cumulative average
latency, so it should also be applied to the latency of the first op.
See: #5712
Signed-off-by: Bo Chen <chen.bo@intel.com>
Logically, until we have handled the first operation, the latency is
infinite; this logic was originally applied to the minimum latency, but
this patch extends it to the maximum and average latency.
To prevent the initial average latency from being skewed by the inclusion
of infinity, the average value is initially seeded with the first measured
latency.
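A sketch of the seeding logic with illustrative names:

struct LatencyStats {
    min: u64,
    max: u64,
    avg: u64,
    count: u64,
}

impl LatencyStats {
    fn record(&mut self, latency: u64) {
        if self.count == 0 {
            // First op: seed all three with the measured value so the
            // average is not skewed by an initial "infinity".
            self.min = latency;
            self.max = latency;
            self.avg = latency;
        } else {
            self.min = self.min.min(latency);
            self.max = self.max.max(latency);
            // cumulative-average update elided
        }
        self.count += 1;
    }
}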
Fixes: #5704
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
Since kernel v6.3 the vsock packet is not split over two descriptors
and is instead included in a single one.
This change is based on the discovery and fix identified by Stefano
Garzarella for the vm-virtio vsock implementation and adapted for our
very different codebase.
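A sketch of handling both layouts (the virtio-vsock packet header is 44 bytes; names are illustrative):

const HDR_LEN: u32 = 44;

enum DataLocation {
    SameDescriptor { offset: u32, len: u32 },
    NextDescriptor,
}

fn locate_data(desc_len: u32) -> DataLocation {
    if desc_len > HDR_LEN {
        // Kernel >= v6.3: header and data share one descriptor,
        // with the data starting right after the header.
        DataLocation::SameDescriptor {
            offset: HDR_LEN,
            len: desc_len - HDR_LEN,
        }
    } else {
        // Older kernels: the data, if any, is in the next descriptor.
        DataLocation::NextDescriptor
    }
}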
Fixes: #5691
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
warning: this argument is a mutable reference, but not used mutably
--> virtio-devices/src/transport/pci_common_config.rs:100:17
|
100 | queues: &mut [Queue],
| ^^^^^^^^^^^^ help: consider changing to: `&[Queue]`
|
= warning: changing this function will impact semver compatibility
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_pass_by_ref_mut
= note: `#[warn(clippy::needless_pass_by_ref_mut)]` on by default
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
Similar to balloon inflation, memory allocation is also constrained to
align with the page size. Therefore, memory is allocated in units of the
host page size, one page at a time, until all host pages covered by the
memory range requested by the guest are managed. If the requested size is
smaller than the page size, an entire page will still be allocated,
because smaller allocations are not possible due to the page size
limitation.
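A sketch of the rounding:

// Even a request smaller than one host page consumes a full page,
// since smaller allocations are not possible.
fn pages_to_allocate(requested_size: u64, host_page_size: u64) -> u64 {
    (requested_size + host_page_size - 1) / host_page_size
}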
Fixes: cloud-hypervisor#5369
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
Currently, virtio-balloon can't work well with page sizes other than 4k.
The virtio-balloon device always works in units of 4 KiB (BALLOON_PAGE_SIZE),
but we can only actually discard memory in units of the host page size.
We borrowed an idea from [1] to solve this issue.
What has been done in this commit:
For balloon inflation:
A bitmap is employed to track the memory range to be released at 4k
granularity. Once it accumulates to one host page size, the corresponding
page is released and the bitmap is cleared to handle the next record.
This process continues until the whole memory range has been handled.
Memory is only released when a consecutive set of balloon request entries
from the same host page adds up to a full host page. If a balloon request
entry from a different host page is encountered, the bitmap and the base
host page address are reset. Consequently, memory is released in units of
the host page size, ensuring efficient memory management. That is to say,
if the length of the memory range to be released is smaller than the page
size, or if the guest scatters requests, each smaller than the page size,
across different host pages, no memory will be released.
[1] https://patchwork.kernel.org/project/qemu-devel/patch/20190214043916.22128-6-david@gibson.dropbear.id.au/
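A sketch of the bitmap idea with illustrative names (assumes a power-of-two host page size of at most 256 KiB, so the bitmap fits in a u64):

const BALLOON_PAGE_SIZE: u64 = 4 << 10; // 4 KiB

struct HostPageTracker {
    host_page_size: u64,
    base: u64,   // host-page-aligned address currently being tracked
    bitmap: u64, // one bit per 4 KiB balloon page within the host page
}

impl HostPageTracker {
    // Records one 4 KiB balloon request; returns the host page base
    // once the whole host page has been covered and can be discarded.
    fn record(&mut self, gpa: u64) -> Option<u64> {
        let page_base = gpa & !(self.host_page_size - 1);
        if page_base != self.base {
            // Entry from a different host page: reset the tracking.
            self.base = page_base;
            self.bitmap = 0;
        }
        self.bitmap |= 1 << ((gpa - page_base) / BALLOON_PAGE_SIZE);
        let entries = self.host_page_size / BALLOON_PAGE_SIZE; // <= 64
        let full = u64::MAX >> (64 - entries);
        if self.bitmap == full {
            self.bitmap = 0;
            Some(page_base) // whole host page covered: discard it
        } else {
            None
        }
    }
}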
Fixes: cloud-hypervisor#5369
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
This commit merges the crates `qcow`, `vhdx` and `block_util` into the
crate `block`, allowing `qcow` to use functions from `block_util`
without introducing a circular crate dependency.
This commit is based on crosvm implementation:
f2eecc4152
Signed-off-by: Yu Li <liyu.yukiteru@bytedance.com>
warning: usage of `Arc<T>` where `T` is not `Send` or `Sync`
--> virtio-devices/src/vsock/device.rs:376:22
|
376 | backend: Arc::new(RwLock::new(backend)),
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= help: consider using `Rc<T>` instead or wrapping `T` in a std::sync type like `Mutex<T>`
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#arc_with_non_send_sync
= note: `#[warn(clippy::arc_with_non_send_sync)]` on by default
The vsock backend may be shared between threads, so the type `B` in
`Vsock` should be bounded by `VsockBackend + Sync`.
Considering that `api_receiver` and `gdb_receiver` are only used in vmm
threads, the `Arc` can be replaced by `Rc`.
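A sketch of the resulting bounds (the trait shown is a placeholder):

use std::sync::{Arc, RwLock};

// With B: Send + Sync, Arc<RwLock<B>> is itself Send + Sync, which
// is exactly what the lint is checking for.
pub trait VsockBackend: Send {}

pub struct Vsock<B: VsockBackend + Sync> {
    backend: Arc<RwLock<B>>,
}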
Signed-off-by: Yu Li <liyu.yukiteru@bytedance.com>
warning: useless use of `vec!`
--> test_infra/src/lib.rs:111:30
|
111 | let mut events = vec![epoll::Event::new(epoll::Events::empty(), 0); 1];
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: you can use an array directly: `[epoll::Event::new(epoll::Events::empty(), 0); 1]`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#useless_vec
= note: `#[warn(clippy::useless_vec)]` on by default
Signed-off-by: Yu Li <liyu.yukiteru@bytedance.com>
warning: casting raw pointers to the same type and constness is unnecessary (`*const protocol::MemoryRange` -> `*const protocol::MemoryRange`)
--> vm-migration/src/protocol.rs:280:17
|
280 | self.data.as_ptr() as *const MemoryRange as *const u8,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: try: `self.data.as_ptr()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_cast
= note: `#[warn(clippy::unnecessary_cast)]` on by default
Signed-off-by: Yu Li <liyu.yukiteru@bytedance.com>
Remove "enum_variant_names" clippy. Enumeration variant names should
specify their variant, not repeat the enumeration name.
Signed-off-by: Ravi kumar Veeramally <ravikumar.veeramally@intel.com>
SerialBuffer uses VecDeque::extend, which calls realloc, with a
maximum buffer size of 1 MiB. Starting at allocation sizes of
128 KiB, musl's mallocng allocator will use mremap for the allocation.
Since this was not permitted by the seccomp rules, heavy write load
could crash cloud-hypervisor with a seccomp failure. (Encountered
using virtio-console, but I don't see any reason it wouldn't happen
for the legacy serial device too.)
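A sketch of allowing mremap in such a filter, assuming a seccompiler-style (syscall, rules) allowlist in which an empty rule vector allows the syscall unconditionally (the function name is illustrative):

use seccompiler::SeccompRule;

fn serial_thread_rules() -> Vec<(i64, Vec<SeccompRule>)> {
    vec![
        // ... previously allowed syscalls ...
        // Needed by musl's mallocng for reallocations >= 128 KiB.
        (libc::SYS_mremap, vec![]),
    ]
}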
Signed-off-by: Alyssa Ross <hi@alyssa.is>
Don't import via glob to avoid (unused) objects colliding in the
namespace. This fixes a beta clippy issue.
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
Cloud Hypervisor's vhost-user implementation will reconnect if it gets
disconnected from the backend. That means connections happen inside
the vhost-user seccomp sandbox, so all syscalls used in reconnecting
have to be allowed in that sandbox.
clock_nanosleep is used by glibc, and nanosleep is used by musl.
Signed-off-by: Alyssa Ross <hi@alyssa.is>
Doc comments are Markdown, and can include HTML tags. Anything in
angle brackets will therefore be inserted as an HTML tag into
rustdoc's output. If that's not intentional, the left angle bracket
needs to be escaped.
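A hypothetical example of the escaping:

// Without the backslashes, rustdoc would parse `<tap_name>` as an
// HTML tag and swallow it from the rendered output.
/// Sets the TAP interface, e.g. tap=\<tap_name\>.
pub fn set_tap(_name: &str) {}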
I haven't fixed the doc comments in src/main.rs, because argh doesn't
understand the escaping, so the backslashes would show up in the
--help output. I've opened https://github.com/google/argh/issues/159
about that.
Signed-off-by: Alyssa Ross <hi@alyssa.is>
These need to be //! comments, because they apply to the module as a
whole, not to whatever directly follows the comment. Using ///
comments here resulted in documentation being attached to the wrong
thing, or not rendered at all.
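For example:

//! Documents the enclosing module; must come before any items.

/// Documents the item that follows, here `Thing`.
pub struct Thing;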
I've also checked the Markdown formatting of these comments as
rendered by rustdoc, and fixed it where appropriate.
Signed-off-by: Alyssa Ross <hi@alyssa.is>
This change is important for doing proper resource cleanup. We settled
on this repetitive approach as VirtioCommon can't implement Drop without
major changes to the corresponding code. Also, devices such as Net can't
easily use the epoll_threads abstraction from VirtioCommon, as they have
multiple threads with different semantics.
Signed-off-by: Philipp Schuster <philipp.schuster@cyberus-technology.de>
Add new configuration for offloading features, including
Checksum/TSO/UFO, and set these offloading features as
enabled by default.
Fixes: #4792.
Signed-off-by: Yong He <alexyonghe@tencent.com>
Add new latency counters for the virtio-block device, including the
minimum latency, maximum latency, and average latency for block
reads and writes.
The average latency is calculated based on cumulative average.
Signed-off-by: Yong He <alexyonghe@tencent.com>
Rather than aggregating the completions into an intermediate vector,
adjust the API to provide one completion item at a time.
With DHAT this shows the number of heap allocations has decreased.
Before:
dhat: Total: 623,852 bytes in 8,157 blocks
After:
dhat: Total: 380,444 bytes in 3,469 blocks
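A sketch of the shape of the change, with illustrative signatures:

// Completions are handed out one (user_data, result) pair at a time
// instead of being collected into an intermediate Vec.
trait Completions {
    fn next_completed_request(&mut self) -> Option<(u64, i32)>;
}

fn drain(io: &mut dyn Completions) {
    while let Some((user_data, result)) = io.next_completed_request() {
        // handle a single completion; no per-poll Vec allocation
        let _ = (user_data, result);
    }
}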
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
During analysis of the asynchronous block I/O handling it was observed
that the majority of the time the completion events occur in the same
order as the submissions. Further, the maximum number of inflight requests
during boot is much lower than the size of the queue.
Through the use of a double-ended queue (VecDeque) with a reasonable
pre-allocated capacity we can have O(1), allocation-free addition of
items to the list of inflight requests and mostly O(1) matching of
completed requests to submissions.
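A sketch of the matching logic with illustrative names:

use std::collections::VecDeque;

struct Inflight {
    queue: VecDeque<u64>, // user_data tokens, in submission order
}

impl Inflight {
    fn submit(&mut self, user_data: u64) {
        self.queue.push_back(user_data); // O(1), no allocation once warm
    }

    fn complete(&mut self, user_data: u64) -> bool {
        if self.queue.front() == Some(&user_data) {
            // Common case: completion in submission order.
            self.queue.pop_front();
            true
        } else if let Some(i) = self.queue.iter().position(|&d| d == user_data) {
            // Rare out-of-order completion.
            self.queue.remove(i);
            true
        } else {
            false
        }
    }
}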
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
There is duplicated code for handling queue events in handle_event();
refactor it out into a new helper function.
Signed-off-by: Hao Xu <howeyxu@tencent.com>
The information about the identifier related to a Snapshot is only
relevant from the BTreeMap perspective, which is why we can get rid of
the duplicated identifier in every Snapshot structure.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
There's no reason to carry a HashMap of SnapshotDataSection per
Snapshot. And given we now provide at most one SnapshotDataSection per
Snapshot, there's no need to keep the id part of the SnapshotDataSection
structure.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Following the new restore design, it is not appropriate to set every
virtio device thread into a paused state after it has been started.
This is why we remove the line of code pausing the devices only after
they've been restored, and replace it with a small patch in every virtio
device implementation. When a virtio device is created as part of a
restored VM, the associated "paused" boolean is set to true. This
ensures the corresponding thread will be parked directly when it is
started, preventing the thread from being in a different state than
the one it was in on the source VM at the time of the snapshot.
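A sketch of the parking behavior with illustrative names:

use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

// When the device was created as part of a restore, `paused` starts
// out true, so the worker parks immediately instead of running.
fn spawn_worker(paused: Arc<AtomicBool>) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        while paused.load(Ordering::SeqCst) {
            thread::park(); // resumed later by an explicit unpark
        }
        // ... device event loop ...
    })
}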
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>