MSHV's SEV-SNP implementation calls ioeventfds whenever there is an
event.
This change removes the need for frequent allocation and deallocation of
a vector, while at the same time making sure other call sites are
unaffected.
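A minimal sketch of the pattern, with hypothetical names rather than the
actual MSHV types: hand out the matching ioeventfds as an iterator so the
hot path never builds a temporary vector, while call sites that really
want a Vec can still collect one.
```
// Illustrative only: the names and types are hypothetical, not the MSHV code.
struct EventFd; // stand-in for the real eventfd wrapper

struct IoeventfdTable {
    // (guest address, eventfd) registrations; layout is illustrative.
    entries: Vec<(u64, EventFd)>,
}

impl IoeventfdTable {
    // Before: a Vec<&EventFd> was built (and dropped) for every event.
    // Returning an iterator avoids the allocation on this hot path, and
    // call sites that really need a Vec can still `.collect()` one.
    fn matches(&self, addr: u64) -> impl Iterator<Item = &EventFd> {
        self.entries
            .iter()
            .filter(move |(a, _)| *a == addr)
            .map(|(_, fd)| fd)
    }
}
```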
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This avoids an ambiguity in the format arguments:
error: ambiguous reference to positional arguments by number in a tuple variant; change this to a named argument
  --> block/src/qcow/mod.rs:48:48
   |
48 |     #[error("File larger than max of {}: {0}", MAX_QCOW_FILE_SIZE)]
   |                                                ^^^^^^^^^^^^^^^^^^
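A sketch of the kind of fix the compiler asks for: name the trailing
format argument so that {0} can only refer to the variant's field. Only
MAX_QCOW_FILE_SIZE comes from the real code; the enum name, variant and
constant value below are made up.
```
use thiserror::Error;

// Placeholder value; the real constant lives in block/src/qcow/mod.rs.
const MAX_QCOW_FILE_SIZE: u64 = 1 << 44;

#[derive(Error, Debug)]
enum QcowError {
    // Naming the trailing argument removes the ambiguity, so `{0}` can
    // only mean the tuple variant's first field.
    #[error("File larger than max of {max}: {0}", max = MAX_QCOW_FILE_SIZE)]
    FileTooBig(u64),
}
```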
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
This will become useful when we build the fuzzing target for the
instruction emulator, because there is no need to pull in the rest of
the hypervisor crate in that situation.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The fastfmt feature and VEX support use techniques that appear to leak
memory in the eyes of LLVM's AddressSanitizer.
While at it, disable decoding of a bunch of instruction sets we never
intend to support.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The size was set to one because without VIRTIO_BLK_F_SEG_MAX, the guest
only used one data descriptor per request.
The value 32 is empirically derived from booting a guest. This value
eliminates all SmallVec allocations observable by DHAT.
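Concretely the change is just a larger inline capacity for the SmallVec
that holds the data descriptors, roughly as below; the element type is
illustrative.
```
use smallvec::SmallVec;

// With an inline capacity of 1, any request with more than one data
// descriptor spilled to the heap. An inline capacity of 32 keeps every
// request observed while booting a guest on the stack. `(u64, u32)`
// stands in for the real (guest address, length) element type.
type DataDescriptors = SmallVec<[(u64, u32); 32]>;

fn collect_descriptors(lens: &[u32]) -> DataDescriptors {
    // No heap allocation as long as a request has at most 32 segments.
    lens.iter()
        .enumerate()
        .map(|(i, len)| (0x1000 * i as u64, *len)) // dummy guest addresses
        .collect()
}
```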
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This allows the guest to put more than one segment in each request,
which can improve the throughput of the system.
Introduce a new check to make sure the queue size configured by the user
is large enough to hold at least one segment.
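A sketch of what such a check looks like; the constant, function name and
error wording are placeholders, not the actual cloud-hypervisor code.
```
// A virtio-blk request needs a header descriptor, at least one data
// segment and a status descriptor, so anything smaller than three
// entries cannot hold even a single-segment request. The names below
// are made up for illustration.
const MIN_BLK_QUEUE_SIZE: u16 = 3;

fn validate_queue_size(queue_size: u16) -> Result<(), String> {
    if queue_size < MIN_BLK_QUEUE_SIZE {
        return Err(format!(
            "queue size {queue_size} cannot hold a request with one segment (minimum {MIN_BLK_QUEUE_SIZE})"
        ));
    }
    Ok(())
}
```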
Signed-off-by: Wei Liu <liuwe@microsoft.com>
When the main fuzzer function returns (), it is equivalent to
returning Corpus::Keep.
In some of the return paths, we want to reject the input so that
libfuzzer won't spend more time mutating it.
This should make fuzzing more efficient. No functional change intended.
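A sketch of what this looks like with libfuzzer-sys; the input-parsing
step is a hypothetical stand-in for the real target's setup.
```
#![no_main]

use libfuzzer_sys::{fuzz_target, Corpus};

// Hypothetical stand-in for the validation the real target performs.
fn parse_input(data: &[u8]) -> Option<&[u8]> {
    (!data.is_empty()).then_some(data)
}

fuzz_target!(|data: &[u8]| -> Corpus {
    // Returning () kept every input, i.e. Corpus::Keep. Explicitly
    // rejecting inputs that fail setup stops libfuzzer from wasting
    // time mutating them.
    let Some(_input) = parse_input(data) else {
        return Corpus::Reject;
    };
    // ... exercise the code under test here ...
    Corpus::Keep
});
```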
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The checksum field in the original buffer should be zeroed.
The code was zeroing a temporary buffer. That's wrong.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The original code was buggy. It always attempted to update the header,
even when the file was opened as read-only. That led to an error.
The specification states that the headers should be updated when the
first user-visible write happens. We can just drop the incorrect code.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The ASYNC flag forces requests to go to worker threads. Worker threads
are expensive. Let the kernel decide what to do.
With this change, I no longer see an excessive number of io_uring worker
threads.
Quote from the manual for io_uring_sqe_set_flags(3):
```
IOSQE_ASYNC
Normal operation for io_uring is to try and issue an sqe
as non-blocking first, and if that fails, execute it in an
async manner. To support more efficient overlapped
operation of requests that the application knows/assumes
will always (or most of the time) block, the application
can ask for an sqe to be issued async from the start. Note
that this flag immediately causes the SQE to be offloaded
to an async helper thread with no initial non-blocking
attempt. This may be less efficient and should not be
used liberally or without understanding the performance
and efficiency tradeoffs.
```
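With the Rust io-uring crate this amounts to not setting
squeue::Flags::ASYNC when building the entry, roughly as below; the fd,
buffer, offset and user_data are placeholders.
```
use io_uring::{opcode, squeue, types};

// Build a read SQE; fd, buffer and offset are placeholders.
fn build_read_sqe(fd: i32, buf: &mut [u8], offset: u64) -> squeue::Entry {
    opcode::Read::new(types::Fd(fd), buf.as_mut_ptr(), buf.len() as u32)
        .offset(offset)
        .build()
        // Previously the entry also carried .flags(squeue::Flags::ASYNC),
        // which skips the non-blocking attempt and goes straight to a
        // worker thread. Without it, the kernel tries non-blocking first.
        .user_data(0x42)
}
```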
Signed-off-by: Wei Liu <liuwe@microsoft.com>
Instead of silently ignoring the error, return an error to the callers.
In practice this should never happen, because the submission queue size
(ring depth) is the same as the virtio queue size, and the virtio queue
won't push more requests than there are submission queue entries.
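A sketch of the difference with the io-uring crate's submission queue;
the error mapping is illustrative.
```
use io_uring::{squeue, IoUring};
use std::io;

fn queue_entry(ring: &mut IoUring, entry: &squeue::Entry) -> io::Result<()> {
    // Before: a failed push was silently dropped. Surfacing the error
    // costs nothing, and with the submission ring sized to the virtio
    // queue it should never trigger anyway.
    unsafe { ring.submission().push(entry) }
        .map_err(|e| io::Error::new(io::ErrorKind::Other, e))
}
```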
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The original code relied on the default `read_vectored` or
`write_vectored` implementations from the standard library.
The default implementations of those functions only use the first
non-empty buffer. That's not correct when there is more than one buffer.
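A sketch of a vectored read that fills every buffer, which is what the
fix has to do one way or another; the reader is generic rather than the
actual disk type.
```
use std::io::{self, IoSliceMut, Read};

// Fill every buffer in order, instead of stopping after the first
// non-empty one like the default `read_vectored` does.
fn read_vectored_all<R: Read>(
    reader: &mut R,
    bufs: &mut [IoSliceMut<'_>],
) -> io::Result<usize> {
    let mut total = 0;
    for buf in bufs.iter_mut() {
        let mut filled = 0;
        while filled < buf.len() {
            match reader.read(&mut buf[filled..])? {
                0 => return Ok(total + filled), // EOF
                n => filled += n,
            }
        }
        total += filled;
    }
    Ok(total)
}
```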
Fixes: #6876
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This system is erroring out on jobs due to insufficient memory - reduce
parallelism to allow CI jobs to complete.
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>