Reads from the random file may only be partial, e.g., if the random file is an ordinary text
file. When that happens, the device needs to signal to the driver that only part of the buffer
has been overwritten.
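A minimal sketch of how the device can report the written length, assuming a
virtio-queue style API; random_file and the error handling are illustrative,
not the actual Cloud Hypervisor code:

    // Read into each descriptor and report only the bytes actually
    // written, so the driver never consumes stale buffer contents.
    let head_index = desc_chain.head_index();
    let mut len = 0u32;
    while let Some(desc) = desc_chain.next() {
        // read_from() may return fewer bytes than desc.len(), e.g. when
        // the "random" file is an ordinary text file near EOF.
        let bytes = mem.read_from(desc.addr(), &mut random_file, desc.len() as usize)?;
        len += bytes as u32;
        if (bytes as u32) < desc.len() {
            break; // partial read: stop and report what we got
        }
    }
    // add_used() carries `len` back to the driver as the written length.
    queue.add_used(desc_chain.memory(), head_index, len)?;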
Signed-off-by: Markus Napierkowski <markus.napierkowski@cyberus-technology.de>
Remove the use of 'unwrap()' that assumed the guest address for the request
status is always valid, avoiding a virtio-block thread panic on malformed
descriptors from the guest.
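A hedged sketch of the shape of the fix; Error::InvalidRequest and the offset
computation are illustrative:

    // Before: panicked if the guest supplied a bogus status address.
    // let status_addr = mem.checked_offset(desc.addr(), offset).unwrap();

    // After: propagate an error instead of panicking on guest-controlled
    // input, so the device thread keeps running.
    let status_addr = mem
        .checked_offset(desc.addr(), offset)
        .ok_or(Error::InvalidRequest)?;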
Signed-off-by: Bo Chen <chen.bo@intel.com>
Some dependencies in the .toml files are not tracking the latest released
versions, so update all dependencies to their latest versions.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Now that we rely on pop_descriptor_chain() rather than iter() to iterate
over a queue, there's no longer a borrow on the queue itself, meaning we can
invoke add_used() directly from the iteration loop. This simplifies the
processing of the queues for each virtio device, and brings a possible
performance improvement given we don't have to iterate twice over the
list of descriptors to invoke add_used().
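The resulting single-pass pattern looks roughly like this (process_request is
a hypothetical helper):

    // One pass: pop a chain, process it, mark it used immediately.
    // Nothing borrows the queue across the add_used() call anymore.
    while let Some(mut desc_chain) = queue.pop_descriptor_chain(mem.memory()) {
        let head_index = desc_chain.head_index();
        let len = process_request(&mut desc_chain)?;
        queue.add_used(desc_chain.memory(), head_index, len)?;
    }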
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Using pop_descriptor_chain() is much more appropriate than iter() since
it recreates the iterator every time, avoiding the queue being borrowed
and allowing the virtio-net implementation to match all the other ones.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The new virtio-queue version introduced some breaking changes which need
to be addressed so that Cloud Hypervisor can still work with this
version.
The most important change is the removal of the guest memory handle from
the Queue, meaning the caller now has to provide a guest memory handle to
multiple methods of the QueueT trait.
One interesting aspect is that QueueT has been widely extended to
provide every getter and setter we need to access and update the Queue
structure without having direct access to its internal fields.
This patch ports all the virtio and vhost-user devices to this new crate
definition. It also updates both the vhost-user-block and vhost-user-net
backends based on the updated vhost-user-backend crate, as well as the
fuzz directory.
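For instance, a completion path that used to rely on the Queue's own memory
handle now receives one from the caller; a sketch against the updated QueueT
trait (the function name is an assumption):

    use virtio_queue::{Queue, QueueT};
    use vm_memory::GuestMemory;

    // The Queue no longer holds a guest memory handle, so the caller
    // provides one for every method touching the rings.
    fn complete_request<M: GuestMemory>(
        queue: &mut Queue,
        mem: &M,
        head_index: u16,
        len: u32,
    ) -> Result<bool, virtio_queue::Error> {
        queue.add_used(mem, head_index, len)?;
        // needs_notification() also takes the memory handle now.
        queue.needs_notification(mem)
    }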
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Whenever a virtio reset happens, the vhost-user backend should be
notified that the vrings should be stopped. This is performed by calling
GET_VRING_BASE on the appropriate queue indexes.
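Roughly, with the rust-vmm vhost crate (a sketch, error handling simplified):

    use vhost::vhost_user::Master;
    use vhost::VhostBackend;

    // Per the vhost-user protocol, GET_VRING_BASE stops the ring and
    // returns its base (the last available index) for that queue.
    fn stop_queues(vu: &mut Master, queue_indexes: &[usize]) -> vhost::Result<()> {
        for &index in queue_indexes {
            let _base = vu.get_vring_base(index)?;
        }
        Ok(())
    }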
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Rather than relying on the number of queues to enable or disable the
queues that have been activated, we rely on the actual queue indexes
provided through the tuple containing the queue index, the Queue and the
EventFd. By storing the list of indexes, we simplify the code and also
make it more accurate in case some queues aren't activated.
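In practice the activated queues travel as index-carrying tuples, so enabling
them is just a walk over the stored indexes (an illustrative sketch, not the
exact Cloud Hypervisor code):

    use vhost::vhost_user::{Master, VhostUserMaster};
    use virtio_queue::Queue;
    use vmm_sys_util::eventfd::EventFd;

    // Each entry keeps its absolute queue index, so we enable exactly
    // the vrings that were activated, even if some indexes were skipped.
    fn enable_activated(
        vu: &mut Master,
        queues: &[(usize, Queue, EventFd)],
    ) -> vhost::Result<()> {
        for (index, _queue, _evt) in queues {
            vu.set_vring_enable(*index, true)?;
        }
        Ok(())
    }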
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of passing separately a list of Queues and the equivalent list
of EventFds, we consolidate these two through a tuple along with the
queue index.
The queue index can be very useful when looking for the actual index
related to the queue, no matter if other queues have been enabled or
not.
It's also convenient to have the EventFd associated with the Queue so
that we don't have to carry two lists with the same number of items.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When preparing the activator, we must provide the correct queue index to
clone the right EventFd associated with the queue.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
It's not mandatory for the virtio-fs driver to enable all virtqueues
provided by the backend since all it needs is one request queue to work
correctly. Therefore we lower the minimum number of enabled queues to 1.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
And along with virtio-queue, we must also bump vhost-user-backend from
0.3.0 to 0.5.0 (since it relies on virtio-queue 0.4.0).
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The vhost-user backend was always provided with the maximum queue size,
but this is incorrect. Instead, it must be informed of the actual queue
size that has been negotiated with the virtio driver running in the
guest. This ensures proper functioning of vhost-user-block with the Rust
Hypervisor Firmware, which uses a hardcoded queue size of 16.
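The shape of the fix, sketched against the vhost and virtio-queue crates
(the function name is an assumption):

    use vhost::VhostBackend;
    use virtio_queue::{Queue, QueueT};

    // Pass the size negotiated by the guest driver (queue.size()), not
    // the device maximum (queue.max_size()).
    fn configure_vring_size(
        vu: &mut vhost::vhost_user::Master,
        index: usize,
        queue: &Queue,
    ) -> vhost::Result<()> {
        vu.set_vring_num(index, queue.size())
    }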
Partially fixes #4285
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The latest vhost-user specification describes VHOST_USER_RESET_OWNER
command as deprecated with the following explanation:
This is no longer used. Used to be sent to request disabling all
rings, but some back-ends interpreted it to also discard connection
state (this interpretation would lead to bugs). It is recommended that
back-ends either ignore this message, or use it to disable all rings.
Also, it's been observed that when using either the Rust Hypervisor
Firmware or the EDK2 OVMF firmware with SPDK (using the block device as
the boot disk), the virtio reset that happens when the firmware no longer
needs to access the block device caused a failure by triggering the
VHOST_USER_RESET_OWNER command.
For all these reasons, this patch simplifies the virtio reset
implementation by disabling the virtqueues and no longer sending
VHOST_USER_RESET_OWNER.
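A sketch of the simplified reset path (assumed vhost crate API):

    use vhost::vhost_user::{Master, VhostUserMaster};

    // Disable every ring on virtio reset instead of sending the
    // deprecated VHOST_USER_RESET_OWNER message.
    fn reset_backend(vu: &mut Master, num_queues: usize) -> vhost::Result<()> {
        for queue_index in 0..num_queues {
            vu.set_vring_enable(queue_index, false)?;
        }
        Ok(())
    }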
Partially fixes #4285
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This check is new in the beta version of clippy and exists to avoid
potential deadlocks by highlighting when the expression tested in an if
or for loop is something that holds a lock. In many cases we would need
significant refactoring to be able to pass this check, so disable it in
the affected crates.
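An illustrative example of the pattern the check flags (not taken from the
Cloud Hypervisor tree):

    use std::sync::Mutex;

    fn flagged(state: &Mutex<Vec<u32>>) {
        // The MutexGuard created in the scrutinee lives for the whole
        // body, so locking `state` again inside would deadlock.
        if let Some(first) = state.lock().unwrap().first() {
            println!("{first}");
        }
    }

    fn refactored(state: &Mutex<Vec<u32>>) {
        // Bind the value first so the guard is dropped before the body.
        let first = state.lock().unwrap().first().copied();
        if let Some(first) = first {
            println!("{first}");
        }
    }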
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
warning: accessing first element with `data.get(0)`
--> virtio-devices/src/transport/pci_device.rs:1055:34
|
1055 | if let Some(v) = data.get(0) {
| ^^^^^^^^^^^ help: try: `data.first()`
|
= note: `#[warn(clippy::get_first)]` on by default
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#get_first
Signed-off-by: Rob Bradford <robert.bradford@intel.com>