Cloud Hypervisor advertises the PACKED_RING feature to both the guest and
the backend, but it actually processes various ring variables, such as
last_avail_index, according to the split ring logic. This eventually causes
errors like the following (SPDK as an example):
vhost.c: 516:vhost_vq_packed_ring_enqueue: *ERROR*: descriptor has been used before
vhost_blk.c: 596:process_blk_task: *ERROR*: ====== Task 0x200113784640 req_idx 0 failed ======
vhost.c: 629:vhost_vring_desc_payload_to_iov: *ERROR*: gpa_to_vva((nil)) == NULL
Signed-off-by: Arafatms <arafatms@outlook.com>
If writing to the TAP returns EAGAIN then listen for the TAP to be
writable. When the TAP becomes writable attempt to process the TX queue
again.
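A minimal sketch of the idea, assuming a libc-based epoll loop and a TAP fd
already registered with epoll (names are illustrative, not the exact
implementation):

    use std::io::{self, Write};
    use std::os::unix::io::RawFd;

    fn send_or_wait(tap: &mut impl Write, epoll_fd: RawFd, tap_fd: RawFd, frame: &[u8]) -> io::Result<()> {
        match tap.write(frame) {
            Ok(_) => Ok(()),
            // EAGAIN: stop transmitting and wait for write readiness.
            Err(e) if e.kind() == io::ErrorKind::WouldBlock => {
                let mut event = libc::epoll_event {
                    events: libc::EPOLLOUT as u32,
                    u64: tap_fd as u64,
                };
                // Re-arm the already registered TAP fd for EPOLLOUT.
                let ret = unsafe { libc::epoll_ctl(epoll_fd, libc::EPOLL_CTL_MOD, tap_fd, &mut event) };
                if ret < 0 {
                    return Err(io::Error::last_os_error());
                }
                // The TX queue is processed again when EPOLLOUT fires.
                Ok(())
            }
            Err(e) => Err(e),
        }
    }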
Fixes: #2807
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This dependency bump needed some manual handling since the API changed
quite a lot, with some RawFd parameters being replaced by either File or
the AsRawFd trait.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Issue from the beta version of clippy:
Error: --> vm-virtio/src/queue.rs:700:59
|
700 | if let Some(used_event) = self.get_used_event(&mem) {
| ^^^^ help: change this to: `mem`
|
= note: `-D clippy::needless-borrow` implied by `-D warnings`
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow
Signed-off-by: Bo Chen <chen.bo@intel.com>
Now that the vhost crate allows the caller to set the header flags, we can
set NEED_REPLY whenever the REPLY_ACK protocol feature is supported by
both ends.
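For reference, a minimal sketch of the flag handling, with constants taken
from the vhost-user spec (the helper name is illustrative):

    // Bit 3 of the protocol feature set (vhost-user spec).
    const VHOST_USER_PROTOCOL_F_REPLY_ACK: u64 = 3;
    // Bit 3 of the message header flags field requests a reply.
    const VHOST_USER_NEED_REPLY: u32 = 1 << 3;

    fn header_flags(base: u32, acked_protocol_features: u64) -> u32 {
        if acked_protocol_features & (1 << VHOST_USER_PROTOCOL_F_REPLY_ACK) != 0 {
            base | VHOST_USER_NEED_REPLY
        } else {
            base
        }
    }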
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
For vhost-user devices, memory should be shared between CLH and the
vhost-user backend. However, madvise DONTNEED doesn't work in this case,
since the pages of a shared mapping are repopulated from the backing file
rather than freed. So, let's use fallocate PUNCH_HOLE to discard those
memory regions instead.
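A minimal sketch of the discard path, assuming the region is backed by a
file or memfd (the helper name is illustrative):

    use std::io;
    use std::os::unix::io::RawFd;

    // Punch a hole in the backing file so the pages are freed for
    // every process mapping them, including the vhost-user backend.
    fn discard_region(fd: RawFd, offset: libc::off_t, len: libc::off_t) -> io::Result<()> {
        let ret = unsafe {
            libc::fallocate(
                fd,
                libc::FALLOC_FL_PUNCH_HOLE | libc::FALLOC_FL_KEEP_SIZE,
                offset,
                len,
            )
        };
        if ret < 0 {
            return Err(io::Error::last_os_error());
        }
        Ok(())
    }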
Signed-off-by: Li Hangjing <lihangjing@bytedance.com>
Sometimes we need the balloon to deflate automatically to give memory
back to the guest, especially for some low priority guest processes
under memory pressure. Enable deflate_on_oom to support this.
Usage: --balloon "size=0,deflate_on_oom=on"
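The feature bit comes from the VIRTIO spec; a minimal sketch of how it
gets advertised (the helper name is illustrative):

    // VIRTIO spec: the device may deflate the balloon when the guest
    // is under memory pressure, instead of the OOM killer firing.
    const VIRTIO_BALLOON_F_DEFLATE_ON_OOM: u64 = 2;

    fn balloon_features(deflate_on_oom: bool) -> u64 {
        let mut features = 0u64;
        if deflate_on_oom {
            features |= 1 << VIRTIO_BALLOON_F_DEFLATE_ON_OOM;
        }
        features
    }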
Signed-off-by: Fei Li <lifei.shirley@bytedance.com>
Since using the VIRTIO configuration to expose the virtual IOMMU
topology has been deprecated, the virtio-iommu implementation must be
updated.
In order to follow the latest patchset that is about to be merged in the
upstream Linux kernel, it must rely on ACPI, and in particular the newly
introduced VIOT table to expose the information about the list of PCI
devices attached to the virtual IOMMU.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Since the reconnection thread took on the responsibility to handle
backend initiated requests as well, the variable naming should reflect
this by avoiding the "reconnect" prefix.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The vhost-user INFLIGHT_SHMFD protocol feature supports inflight I/O
tracking; this commit implements the vhost-user device (master) support
for the feature. As of this commit, the specific vhost-user devices (blk,
fs, and net) have not yet enabled this feature.
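A minimal sketch of the negotiation gate, using the protocol feature bit
from the vhost-user spec (the helper name is illustrative): the master
only exchanges the inflight region, via GET_INFLIGHT_FD and
SET_INFLIGHT_FD, when this bit was acked by both ends.

    // Bit 12 of the vhost-user protocol features (vhost-user spec).
    const VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD: u64 = 12;

    fn inflight_supported(acked_protocol_features: u64) -> bool {
        acked_protocol_features & (1 << VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD) != 0
    }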
Signed-off-by: Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
Add support for reconnecting the backend request handler after a
disconnection or crash.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Since the slave request handler is common to all vhost-user devices, just
like the reconnection, it makes sense to handle the requests from the
backend through the same thread.
The reconnection thread now handles both a reconnection as well as any
request coming from the backend.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit enables socket reconnection for vhost-user-fs backends. Note
that, as of this commit:
- Re-establishing the slave communication channel is not supported, so
socket reconnection does not work for virtiofsd with DAX enabled.
- Inflight I/O tracking and restoring is not supported. Therefore, only
virtio-fs daemons that are not processing inflight requests can work
normally after reconnection.
- To make a restarted virtiofsd work normally after reconnection, the
internal status of virtiofsd should also be recovered. This is not the
responsibility of cloud-hypervisor. If the virtio-fs daemon does not
support saving and restoring its internal status, then a re-mount in the
guest should be performed after socket reconnection.
Signed-off-by: Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
This commit enables socket reconnection for vhost-user-blk backends.
Note that, as of this commit, inflight I/O tracking and restoring is not
supported. Therefore, only vhost-user-blk backends that are not processing
inflight requests can work normally after reconnection.
Signed-off-by: Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
We should try to read the last avail index from the vring memory area. This
is necessary when handling vhost-user socket reconnection.
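A minimal sketch of what that read looks like, assuming a split virtqueue
where the used ring starts with two le16 fields (flags, idx); once no
requests are in flight, the used idx matches the index to restore. Not the
exact implementation:

    use vm_memory::{Bytes, GuestAddress, GuestMemoryMmap};

    // Recover the index straight from guest memory, since the
    // in-process copy was lost together with the old connection.
    fn read_last_used_idx(mem: &GuestMemoryMmap, used_ring: GuestAddress) -> u16 {
        // Used ring layout: le16 flags, le16 idx, then the entries.
        mem.read_obj::<u16>(used_ring.unchecked_add(2))
            .map(u16::from_le)
            .unwrap_or(0)
    }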
Signed-off-by: Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
Function "GuestMemory::with_regions(_mut)" were mainly temporary methods
to access the regions in `GuestMemory` as the lack of iterator-based
access, and hence they are deprecated in the upstream vm-memory crate [1].
[1] https://github.com/rust-vmm/vm-memory/issues/133
Signed-off-by: Bo Chen <chen.bo@intel.com>
As the first step to completing live-migration with tracking of dirty
pages written by the VMM, this commit patches the dependent vm-memory
crate to the upstream version with the dirty-page-tracking capability.
Most changes are due to the updated `GuestMemoryMmap`, `GuestRegionMmap`,
and `MmapRegion` structs, which take an additional generic type parameter
specifying which 'bitmap backend' is used.
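A minimal sketch of the new type parameter, assuming vm-memory's
backend-bitmap feature providing `AtomicBitmap`:

    use vm_memory::bitmap::AtomicBitmap;

    // Writes through these types now mark pages in an atomic bitmap,
    // which the VMM can later drain to find the dirty pages.
    type GuestMemoryMmap = vm_memory::GuestMemoryMmap<AtomicBitmap>;
    type GuestRegionMmap = vm_memory::GuestRegionMmap<AtomicBitmap>;
    type MmapRegion = vm_memory::MmapRegion<AtomicBitmap>;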
The above changes should be transparent to the rest of the code base,
e.g. all unit/integration tests should pass without additional changes.
Signed-off-by: Bo Chen <chen.bo@intel.com>
Add a helper to VirtioCommon which returns duplicates of the EventFds
for the kill and pause events.
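A minimal sketch of the shape of such a helper (field and method names are
illustrative):

    use vmm_sys_util::eventfd::EventFd;

    struct VirtioCommon {
        kill_evt: Option<EventFd>,
        pause_evt: Option<EventFd>,
    }

    impl VirtioCommon {
        // Hand out duplicates so an epoll handler can own its own copies.
        fn dup_eventfds(&self) -> (EventFd, EventFd) {
            (
                self.kill_evt.as_ref().unwrap().try_clone().unwrap(),
                self.pause_evt.as_ref().unwrap().try_clone().unwrap(),
            )
        }
    }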
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The seccomp filter of the control queue thread was missing the
rt_sigprocmask syscall, which was causing a crash when the VM was shut
down.
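A minimal sketch of the kind of fix, assuming an allow-list style seccomp
filter (the helper name and the rest of the list are illustrative):

    fn ctl_queue_thread_syscalls() -> Vec<i64> {
        vec![
            libc::SYS_epoll_wait,
            libc::SYS_read,
            libc::SYS_write,
            // Previously missing: glibc issues it during thread teardown,
            // so its absence crashed the thread on VM shutdown.
            libc::SYS_rt_sigprocmask,
        ]
    }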
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When in client mode, the VMM will retry connecting to the backend for a
minute before giving up.
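A minimal sketch of that retry loop (the interval is illustrative; the
timeout is the one-minute limit mentioned above):

    use std::io;
    use std::os::unix::net::UnixStream;
    use std::thread::sleep;
    use std::time::{Duration, Instant};

    fn connect_with_retry(path: &str) -> io::Result<UnixStream> {
        let deadline = Instant::now() + Duration::from_secs(60);
        loop {
            match UnixStream::connect(path) {
                Ok(stream) => return Ok(stream),
                // Give up once the deadline has passed.
                Err(e) if Instant::now() >= deadline => return Err(e),
                Err(_) => sleep(Duration::from_millis(100)),
            }
        }
    }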
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The reconnection code is moved to the vhost-user module as it is a
common place to be shared across all vhost-user devices.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit implements the reconnection feature for vhost-user-net in
case the connection with the backend is shut down.
The guest has no knowledge about what happens when a disconnection
occurs, which simplifies the code to handle the reconnection. The
reconnection happens with the backend, and the VMM goes through the
vhost-user setup once again.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We thought we could move the control queue to the backend as it seemed to
make good sense. Unfortunately, doing so was a wrong design decision as it
broke compatibility with the OVS-DPDK backend.
This is why this commit moves the control queue back to the VMM side,
meaning an additional thread is being run for handling the communication
with the guest.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Many of the VIRTIO reserved features should be supported (or not) by the
vhost-user backend itself. That means these features should be exposed as
available on the VMM side, so that they don't get lost during the
negotiation.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The VIRTIO features should not be set before they are acked by the
guest. This code was only present to overcome a vhost crate limitation
that expected the VIRTIO features to be set before we could fetch and
set the protocol features.
The vhost crate has been recently fixed by removing the limitation,
therefore there's no need for this workaround in the Cloud Hypervisor
codebase anymore.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Everything that was shared in the net_util.rs file has now been moved to
the net_util crate. The only remaining bit was used solely by the
virtio-net implementation, which is why this commit moves this code to
virtio-net, and since there's nothing left in net_util.rs, it can be
removed.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Since the net_util crate contains the common code needed for processing
the control queue, let's use it and remove the duplicate from inside the
virtio-devices crate.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Moving helpers to the net_util crate since we don't want virtio-net
common code to be split between two places. The net_util crate should be
the only place to host virtio-net common code.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Factorize the virtio features and vhost-user protocol features
negotiation through a common function that blk, fs and net
implementations can directly rely on.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Make sure the virtio features are set upon device activation. At the
time the device is activated, we know the guest has acknowledged the
features, which means it's safe to set them back to the backend.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The virtio features are negotiated and set at the time the device is
created, hence there's no need to set the features again while going
through the vhost-user setup that is performed upon queue activation.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that the control queue is correctly handled by the backend, there's
no need to handle it as well from the VMM.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>