Remove the use of 'unwrap()' that assumes the guest address for request
status is always valid, which avoids a virtio-block thread panic on
malformed descriptors from the guest.
Signed-off-by: Bo Chen <chen.bo@intel.com>
Now that we rely on pop_descriptor_chain() rather than iter() to iterate
over a queue, there's no more borrow on the queue itself, meaning we can
invoke add_used() directly from the iteration loop. This simplifies the
processing of the queues for each virtio device, and brings a possible
performance improvement given we don't have to iterate twice over the
list of descriptors to invoke add_used().
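For illustration only, here is a minimal sketch of the resulting
pattern, using hypothetical simplified types (the real QueueT methods
also take a guest memory handle and return Results):

    struct Chain { head_index: u16, len: u32 }
    struct Queue { chains: Vec<Chain>, used: Vec<(u16, u32)> }

    impl Queue {
        fn pop_descriptor_chain(&mut self) -> Option<Chain> {
            self.chains.pop()
        }
        fn add_used(&mut self, head_index: u16, len: u32) {
            self.used.push((head_index, len));
        }
    }

    fn process(queue: &mut Queue) {
        // No iterator holds a borrow on the queue, so add_used() can be
        // invoked directly from within the loop.
        while let Some(chain) = queue.pop_descriptor_chain() {
            let written = chain.len; // ... process the descriptor chain ...
            queue.add_used(chain.head_index, written);
        }
    }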
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Using pop_descriptor_chain() is much more appropriate than iter() since
it recreates the iterator every time, which avoids borrowing the queue
and allows the virtio-net implementation to match all the other ones.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The new virtio-queue version introduced some breaking changes which need
to be addressed so that Cloud Hypervisor can still work with this
version.
The most important change is the removal of the guest memory handle
from the Queue, meaning the caller has to provide the guest memory
handle for multiple methods from the QueueT trait.
One interesting aspect is that QueueT has been widely extended to
provide every getter and setter we need to access and update the Queue
structure without having direct access to its internal fields.
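To give an idea of what this means for callers, a hypothetical sketch
with simplified signatures (not the exact QueueT API):

    // Minimal stand-in trait for this sketch (not the real QueueT).
    trait QueueLike<M> {
        fn pop_descriptor_chain(&mut self, mem: &M) -> Option<u16>;
        fn add_used(&mut self, mem: &M, head_index: u16, len: u32);
    }

    // The guest memory handle is now an explicit argument on every call
    // instead of being stored inside the Queue.
    fn process<M, Q: QueueLike<M>>(queue: &mut Q, mem: &M) {
        while let Some(head_index) = queue.pop_descriptor_chain(mem) {
            // ... handle the descriptor chain ...
            queue.add_used(mem, head_index, 0);
        }
    }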
This patch ports all the virtio and vhost-user devices to this new
crate definition. It also updates the vhost-user-block and
vhost-user-net backends based on the updated vhost-user-backend crate,
as well as the fuzz directory.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of passing separately a list of Queues and the equivalent list
of EventFds, we consolidate these two through a tuple along with the
queue index.
The queue index can be very useful when looking for the actual index
of a given queue, regardless of whether other queues have been enabled
or not.
It's also convenient to have the EventFd associated with the Queue so
that we don't have to carry two lists with the same amount of items.
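A sketch of the consolidated shape, with a placeholder Queue type (the
real one comes from virtio-queue) and EventFd from vmm-sys-util; the
index type is illustrative:

    use std::os::unix::io::AsRawFd;

    use vmm_sys_util::eventfd::EventFd;

    struct Queue; // placeholder for the virtio-queue type

    // One entry per activated queue: (queue index, queue, queue EventFd).
    type QueueEntry = (usize, Queue, EventFd);

    fn activate(queues: Vec<QueueEntry>) {
        for (index, _queue, event) in queues {
            // `index` is the actual queue index, even when some other
            // queues were not enabled by the driver.
            println!("queue {} signalled through fd {}", index, event.as_raw_fd());
        }
    }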
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Moving the whole codebase to rely on the AccessPlatform definition from
vm-virtio so that we can fully remove it from the virtio-queue crate.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Since we're trying to move away from the translation happening in the
virtio-queue crate, the device itself now performs the address
translation when needed.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of relying on the virtio-queue crate to store the information
about the MSI-X vectors for each queue, we handle this directly from the
PCI transport layer.
This is the first step in getting closer to the upstream version of
virtio-queue so that we can eventually move fully to the upstream
version.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Whenever the backing file of our virtio-block device is opened with
O_DIRECT, the buffer address and size are required to be aligned to
the sector size.
We know virtio-block requests are sector aligned in terms of size, but
we must still check whether the buffer address is. In case it's not, we
create an intermediate buffer that will be passed through the system
call. In case of a write operation, the content of the non-aligned
buffer must be copied beforehand, and in case of a read operation, the
content of the aligned buffer must be copied to the non-aligned one
after the operation has been completed.
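The logic boils down to something like the following sketch (helper
names and the 512-byte sector size are illustrative, not the actual
implementation):

    const SECTOR_SIZE: u64 = 512;

    // O_DIRECT requires the buffer address (and the request size, which
    // virtio-block already guarantees) to be sector aligned.
    fn needs_intermediate_buffer(guest_addr: u64) -> bool {
        guest_addr % SECTOR_SIZE != 0
    }

    fn submit_write(guest_addr: u64, data: &[u8]) {
        if needs_intermediate_buffer(guest_addr) {
            // Copy `data` into a sector-aligned intermediate buffer here
            // and hand that buffer to the syscall. For a read, the copy
            // goes the other way, after the operation has completed.
        }
        // ... issue the actual I/O ...
        let _ = data;
    }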
Fixes: #3587
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
If the disk is backed by a block device on the host, a non-default
topology will be available, and that topology can be advertised by
virtio-block.
Fixes: #3262
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Relying on the vm-virtio/virtio-queue crate from rust-vmm which has been
copied inside the Cloud Hypervisor tree, the entire codebase is moved to
the new definition of a Queue and other related structures.
The reason for this move is to follow upstream until we reach
agreement on the patches we need on top of it to make it work properly
with Cloud Hypervisor.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Introduce a common solution for spawning the virtio threads, which
will make it easier to add the panic handling.
During this effort I discovered that there were no seccomp filters
registered for either the vhost-user-net thread or the vhost-user-block
thread. This change also incorporates basic seccomp filters for those as
part of the refactoring.
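A rough sketch of what such a spawn helper can look like, assuming the
seccompiler crate's apply_filter() and a BpfProgram filter (names and
error handling are simplified):

    use std::io;
    use std::thread::{self, JoinHandle};

    use seccompiler::{apply_filter, BpfProgram};

    // Hypothetical helper: every virtio thread goes through the same
    // entry point, so naming, seccomp filtering (and later, panic
    // handling) are applied consistently.
    fn spawn_virtio_thread<F>(
        name: &str,
        seccomp_filter: BpfProgram,
        f: F,
    ) -> io::Result<JoinHandle<()>>
    where
        F: FnOnce() + Send + 'static,
    {
        thread::Builder::new().name(name.to_string()).spawn(move || {
            if !seccomp_filter.is_empty() {
                if let Err(e) = apply_filter(&seccomp_filter) {
                    eprintln!("Error applying seccomp filter: {:?}", e);
                    return;
                }
            }
            f()
        })
    }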
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
We are relying on applying empty 'seccomp' filters to support the
'--seccomp false' option, which will be treated as an error with the
updated 'seccompiler' crate. This patch fixes this issue by explicitly
checking whether the 'seccomp' filter is empty before applying the
filter.
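In effect, the guard amounts to the following sketch (apply_filter()
comes from seccompiler; error handling is simplified):

    use seccompiler::{apply_filter, BpfProgram};

    fn apply_seccomp(filter: &BpfProgram) {
        // An empty filter corresponds to '--seccomp false': nothing to
        // apply.
        if !filter.is_empty() {
            if let Err(e) = apply_filter(filter) {
                eprintln!("Error applying seccomp filter: {:?}", e);
            }
        }
    }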
Signed-off-by: Bo Chen <chen.bo@intel.com>
As the first step to complete live-migration with tracking of dirty
pages written by the VMM, this commit updates the dependent vm-memory
crate to the upstream version with the dirty-page-tracking capability.
Most
changes are due to the updated `GuestMemoryMmap`, `GuestRegionMmap`, and
`MmapRegion` structs which are taking an additional generic type
parameter to specify what 'bitmap backend' is used.
The above changes should be transparent to the rest of the code base,
e.g. all unit/integration tests should pass without additional changes.
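Concretely, the bitmap backend shows up as an extra type parameter; a
sketch of pinning it once with type aliases, assuming vm-memory is
built with the AtomicBitmap backend enabled:

    // A single place in the code base decides which bitmap backend is
    // used for dirty-page tracking; the rest keeps using the aliases.
    use vm_memory::bitmap::AtomicBitmap;

    pub type GuestMemoryMmap = vm_memory::GuestMemoryMmap<AtomicBitmap>;
    pub type GuestRegionMmap = vm_memory::GuestRegionMmap<AtomicBitmap>;
    pub type MmapRegion = vm_memory::MmapRegion<AtomicBitmap>;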
Signed-off-by: Bo Chen <chen.bo@intel.com>
Add a helper to VirtioCommon which returns duplicates of the EventFds
for the kill and pause events.
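A sketch of what such a helper can look like, using
EventFd::try_clone() from vmm-sys-util (field and method names here are
illustrative):

    use vmm_sys_util::eventfd::EventFd;

    struct VirtioCommon {
        kill_evt: Option<EventFd>,
        pause_evt: Option<EventFd>,
    }

    impl VirtioCommon {
        // Return cloned handles so callers don't take ownership of the
        // original EventFds (sketch: real code would handle None/errors).
        fn dup_eventfds(&self) -> (EventFd, EventFd) {
            (
                self.kill_evt.as_ref().unwrap().try_clone().unwrap(),
                self.pause_evt.as_ref().unwrap().try_clone().unwrap(),
            )
        }
    }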
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In order to support using Versionize for state structures, it is
necessary to use simpler, primitive data types in the state definitions
used for snapshot/restore.
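For example, a hypothetical state struct restricted to primitive fields
(sketch only; the exact derive imports depend on the versionize crate
version):

    use versionize::{VersionMap, Versionize, VersionizeResult};
    use versionize_derive::Versionize;

    // Hypothetical state struct: sticking to primitive fields keeps the
    // Versionize derive straightforward.
    #[derive(Versionize)]
    pub struct DeviceState {
        pub avail_features: u64,
        pub acked_features: u64,
        pub enabled: bool,
    }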
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Simplify snapshot & restore code by using generics for the helper
functions that take or produce a Serialize/Deserialize struct.
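A sketch of the generic shape (serde_json is used here purely to
illustrate the byte encoding):

    use serde::{de::DeserializeOwned, Serialize};

    // Generic helpers: any state struct implementing Serialize /
    // Deserialize can be snapshotted and restored the same way.
    fn snapshot_state<T: Serialize>(state: &T) -> serde_json::Result<Vec<u8>> {
        serde_json::to_vec(state)
    }

    fn restore_state<T: DeserializeOwned>(bytes: &[u8]) -> serde_json::Result<T> {
        serde_json::from_slice(bytes)
    }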
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
To support I/O throttling on virtio-net devices, we need to use the
'rate_limiter' module from the 'net_utils' crate. Given the
'virtio-devices' crate has a dependency on 'net_utils', we will need
to move the 'rate_limiter' module out of the 'virtio-devices' crate to
avoid a circular dependency issue. Considering the 'rate_limiter' is
not virtio specific and could be reused for non-virtio devices, we move
it to its own crate.
Signed-off-by: Bo Chen <chen.bo@intel.com>
error: name `TYPE_UNKNOWN` contains a capitalized acronym
--> vm-virtio/src/lib.rs:48:5
|
48 | TYPE_UNKNOWN = 0xFF,
| ^^^^^^^^^^^^ help: consider making the acronym lowercase, except the initial letter: `Type_Unknown`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#upper_case_acronyms
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
If the function can never return an error, this is now a clippy failure:
error: this function's return value is unnecessarily wrapped by `Result`
--> virtio-devices/src/watchdog.rs:215:5
|
215 | / fn set_state(&mut self, state: &WatchdogState) -> io::Result<()> {
216 | | self.common.avail_features = state.avail_features;
217 | | self.common.acked_features = state.acked_features;
218 | | // When restoring enable the watchdog if it was previously enabled. We reset the timer
... |
223 | | Ok(())
224 | | }
| |_____^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_wraps
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Now that BlockIoUring is the only implementation of virtio-block,
handling both synchronous and asynchronous backends based on the
AsyncIo trait, we can rename it to Block.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that both synchronous and asynchronous backends rely on the
asynchronous version of virtio-block (namely BlockIoUring), we can
get rid of the synchronous version (namely Block).
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Even though the driver can provide fewer queues than those advertised,
for some device types there is a minimum number that is required for
operation.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Don't assume that the number of queues provided matches the number of
queues offered. The virtio spec allows the driver to program fewer
queues.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Rather than having to give and return the ioeventfds used for a
device, clone them each time. This will make it simpler when we start handling
the driver enabling fewer queues than advertised by the device.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In order to make the thread naming more useful, derive thread names
from the device id (which can be supplied by the user) and a
device-specific suffix that has details of the individual queue (or
queue pair).
e.g.
rob@artemis:~$ pstree -p -c -l -t `pidof cloud-hypervisor`
cloud-hyperviso(27501)─┬─{_console}(27525)
├─{_disk0_q0}(27529)
├─{_disk0_q1}(27532)
├─{_net1_ctrl}(27533)
├─{_net1_qp0}(27534)
├─{_net1_qp1}(27535)
├─{_rng}(27526)
├─{http-server}(27504)
├─{seccomp_signal_}(27502)
├─{signal_handler}(27523)
├─{vcpu0}(27520)
├─{vcpu1}(27522)
└─{vmm}(27503)
Fixes: #2077
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When a total ordering between multiple atomic variables is not
required, use Ordering::Acquire with atomic loads and Ordering::Release
with atomic stores.
This will improve performance as these orderings do not require a
memory fence on x86_64, which Ordering::SeqCst does.
Add a comment in the vCPU handling code where it operates on
multiple atomics to explain why Ordering::SeqCst is required.
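A standalone illustration of the pattern (not code taken from the
tree):

    use std::sync::atomic::{AtomicBool, Ordering};

    static PAUSED: AtomicBool = AtomicBool::new(false);

    fn pause() {
        // Release pairs with the Acquire load below; on x86_64 neither
        // emits a full memory fence, unlike SeqCst stores.
        PAUSED.store(true, Ordering::Release);
    }

    fn is_paused() -> bool {
        PAUSED.load(Ordering::Acquire)
    }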
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The Windows virtio block driver puts multiple data descriptors between
the header and the status footer. To handle this, when parsing,
iterate over the descriptor chain until the end is reached,
accumulating the address and length pairs in a vector. For execution,
iterate over the vector and make sequential reads from the disk for
each data descriptor.
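A simplified sketch of the parse/execute split (hypothetical types; the
real code walks virtio descriptors and reads from the disk image):

    // Hypothetical view of one descriptor while walking the chain.
    struct Desc { addr: u64, len: u32, next: Option<usize> }

    // Parse: walk the chain from the head, accumulating (address,
    // length) pairs for every data descriptor until the end is reached.
    fn parse(table: &[Desc], head: usize) -> Vec<(u64, u32)> {
        let mut pairs = Vec::new();
        let mut cur = Some(head);
        while let Some(i) = cur {
            pairs.push((table[i].addr, table[i].len));
            cur = table[i].next;
        }
        pairs
    }

    // Execute: one sequential read from the disk per pair.
    fn execute(pairs: &[(u64, u32)]) {
        for (addr, len) in pairs {
            // read `len` bytes from the disk into guest memory at `addr`
            // (elided in this sketch)
            let _ = (addr, len);
        }
    }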
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Split the block device implementation into code that can be used in common
between multiple different virtio device implementations.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In order to simplify the transition to VirtioCommon and to avoid needing
to set empty fields, derive Default for VirtioCommon.
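That is, something along these lines (field names are illustrative):

    // Devices only set the fields they care about and take the
    // defaults for everything else.
    #[derive(Default)]
    pub struct VirtioCommon {
        pub avail_features: u64,
        pub acked_features: u64,
        pub device_type: u32,
    }

    fn new_common() -> VirtioCommon {
        VirtioCommon {
            device_type: 2, // e.g. virtio-block
            ..Default::default()
        }
    }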
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Rearrange the code to match other devices which makes it easier to prep
for sharing this between other devices.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
There will be some cases where the implementation of the snapshot()
function from the Snapshottable trait will require modifying some
internal data, therefore we make this possible by updating the trait
definition to use snapshot(&mut self).
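A sketch of the trait change (simplified; the real Snapshottable trait
has more methods and a richer return type):

    struct Snapshot; // placeholder for the real snapshot data

    trait Snapshottable {
        // Previously `fn snapshot(&self)`; taking `&mut self` lets an
        // implementation update internal data while snapshotting.
        fn snapshot(&mut self) -> Snapshot;
    }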
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Using the Rust Barrier mechanism, this patch forces each virtio device
to acknowledge it has been correctly paused before going further.
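A standalone illustration of the Barrier pattern (not the actual device
code):

    use std::sync::{Arc, Barrier};
    use std::thread;

    fn main() {
        // One party for the device thread, one for the thread
        // requesting the pause.
        let barrier = Arc::new(Barrier::new(2));
        let b = barrier.clone();

        let device = thread::spawn(move || {
            // ... stop processing the queues ...
            // Acknowledge the pause before the VMM goes any further.
            b.wait();
        });

        barrier.wait(); // returns only once the device has acknowledged
        device.join().unwrap();
    }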
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of passing only the event type through the handle_event()
callback, we make the trait slightly more generic by providing the
epoll event to each virtio device implementation.
This is particularly useful for vsock as it will need the event set.
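Schematically, the callback goes from receiving just the event type to
also receiving the epoll event (hypothetical simplified signatures):

    // Stand-in for the epoll event delivered by the epoll loop; the
    // real type carries the ready event set (EPOLLIN, EPOLLOUT, ...).
    struct EpollEvent { events: u32, data: u64 }

    trait EpollHandler {
        // Before: fn handle_event(&mut self, device_event: u16);
        // After: the epoll event is available too, e.g. for vsock which
        // needs to inspect the event set.
        fn handle_event(&mut self, device_event: u16, event: &EpollEvent);
    }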
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Currently any messages generated by the worker thread are not shown
anywhere as the thread is never join()ed on. Instead, output the error
immediately.
For now, only cover the subset where the work to port to EpollHandler
clashed with the seccomp filtering for virtio devices.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>