Commit Graph

6 Commits

Sebastien Boeuf
a423bf13ad virtio: Port codebase to the latest virtio-queue version
The new virtio-queue version introduced some breaking changes which need
to be addressed so that Cloud Hypervisor can still work with this
version.

The most important change is the removal of the guest memory handle from
the Queue, meaning the caller now has to provide the guest memory handle
to several methods of the QueueT trait.

One interesting aspect is that QueueT has been extended significantly to
provide every getter and setter we need to access and update the Queue
structure without direct access to its internal fields.

This patch ports all the virtio and vhost-user devices to this new crate
definition. It also updates the vhost-user-block and vhost-user-net
backends based on the updated vhost-user-backend crate, as well as the
fuzz directory.
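
For illustration, here is a minimal, self-contained sketch of the new
pattern. The trait and types below are hypothetical stand-ins, not the
actual virtio-queue definitions; the point is only that operations
touching guest RAM now receive the memory handle from the caller, and
that queue state is reachable through getters and setters.

```rust
// Hypothetical stand-ins, not the real virtio-queue API.

/// Stand-in for a guest memory handle (e.g. vm-memory's GuestMemoryMmap).
struct GuestMem;

/// Hypothetical equivalent of the QueueT trait after the change.
trait QueueT {
    // Getters/setters expose queue state without direct field access.
    fn ready(&self) -> bool;
    fn set_ready(&mut self, ready: bool);

    // Operations touching guest RAM take the memory handle explicitly,
    // since the queue no longer stores one.
    fn pop_descriptor_chain(&mut self, mem: &GuestMem) -> Option<u16>;
    fn add_used(&mut self, mem: &GuestMem, head_index: u16, len: u32);
}

/// A device handler passes its own memory handle on every call instead of
/// relying on a handle cloned into the queue at activation time.
fn process_queue<Q: QueueT>(queue: &mut Q, mem: &GuestMem) {
    if !queue.ready() {
        return;
    }
    while let Some(head_index) = queue.pop_descriptor_chain(mem) {
        // ... parse and handle the descriptor chain here ...
        queue.add_used(mem, head_index, 0);
    }
}
```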

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-07-29 17:41:32 +01:00
Sebastien Boeuf
3f62a172b2 virtio-devices: Pass a list of tuples for virtqueues
Instead of passing a list of Queues and the equivalent list of EventFds
separately, we consolidate the two into a single list of tuples that also
carries the queue index.

The queue index is very useful for retrieving the actual index of a given
queue, regardless of whether other queues have been enabled or not.

It's also convenient to have the EventFd associated with the Queue so
that we don't have to carry two lists with the same number of items.
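
A hedged sketch of the consolidation, with simplified signatures (Queue
and EventFd below are stand-ins for virtio_queue::Queue and
vmm_sys_util::eventfd::EventFd, and the real activation path takes more
parameters):

```rust
struct Queue;
struct EventFd;

// Before: the queue index is implicit and the two lists must stay in sync.
fn activate_before(_queues: Vec<Queue>, _queue_evts: Vec<EventFd>) {}

// After: a single list of tuples; each entry keeps the virtqueue's original
// index, which stays correct even if some other queues were left disabled.
fn activate_after(queues: Vec<(usize, Queue, EventFd)>) {
    for (index, _queue, _evt) in queues {
        println!("activating virtqueue {index}");
    }
}
```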

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-07-21 14:28:41 +02:00
Sebastien Boeuf
590361aeb6 virtio-devices: Translate queue addresses back to GVA if needed
Whenever a virtio device is placed behind a vIOMMU, we have some code in
pci_common_config.rs to translate the queue addresses (descriptor table,
available ring and used ring) from GVA to GPA, so that they can be used
correctly.

But in the case of vDPA, we also need to provide the queue addresses to
the vhost backend. And since the vhost backend deals with consistent
IOVAs, all addresses being provided should be GVAs if the device is
placed behind a vIOMMU. For that reason, we translate the queue addresses
back from GPA to GVA when necessary, and only for the addresses provided
to the vhost backend.
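
A rough sketch of the idea, using a hypothetical, simplified translation
trait rather than Cloud Hypervisor's actual access-platform code; it only
illustrates that the backward translation is applied exclusively to the
addresses handed to the vhost backend, and only when a vIOMMU is present:

```rust
/// Hypothetical stand-in for the vIOMMU translation interface.
trait AddressTranslator {
    fn gpa_to_gva(&self, addr: u64) -> u64;
}

struct QueueAddresses {
    desc_table: u64,
    avail_ring: u64,
    used_ring: u64,
}

// The VMM keeps using GPAs internally; only the copy handed to the vhost
// backend is translated back to GVAs, and only behind a vIOMMU.
fn addresses_for_vhost_backend(
    gpa: QueueAddresses,
    viommu: Option<&dyn AddressTranslator>,
) -> QueueAddresses {
    match viommu {
        Some(t) => QueueAddresses {
            desc_table: t.gpa_to_gva(gpa.desc_table),
            avail_ring: t.gpa_to_gva(gpa.avail_ring),
            used_ring: t.gpa_to_gva(gpa.used_ring),
        },
        None => gpa,
    }
}
```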

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-04-05 00:09:52 +02:00
Sebastien Boeuf
b3d7ad0b91 virtio-devices: vdpa: Add extra debug!() logs
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-04-05 00:09:52 +02:00
Sebastien Boeuf
0685cd8aae virtio-devices: vdpa: Remove get_iova_range() workaround
Now that we rely on vhost v0.4.0, which contains the fix for
get_iova_range(), we don't need the workaround anymore, and we can
actually call into the dedicated function.
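
As a hedged sketch, assuming the vhost crate exposes get_iova_range() on
its vDPA handle roughly as below (the trait path and the first/last
fields on the returned range are assumptions, not verified against
v0.4.0):

```rust
// Assumed API: vhost::vdpa::VhostVdpa and its get_iova_range() method.
use vhost::vdpa::VhostVdpa;

fn log_iova_range<T: VhostVdpa>(handle: &T) {
    match handle.get_iova_range() {
        // The range reported by the backend replaces the old hardcoded
        // workaround values.
        Ok(range) => println!("vDPA IOVA range: {:#x}..={:#x}", range.first, range.last),
        Err(e) => eprintln!("get_iova_range() failed: {e}"),
    }
}
```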

Fixes #3861

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-03-25 17:37:08 +00:00
Sebastien Boeuf
be7c389120 virtio-devices: Add Vdpa device
vDPA is a kernel framework introduced fairly recently in order to handle
devices complying with the virtio specification on their datapath, while
the control path is vendor specific. For the datapath, that means the
virtqueues are handled through DMA directly between the hardware and the
guest, while the control path goes through the vDPA framework,
eventually exposed through a vhost-vdpa device.

vDPA, like VFIO, aims at achieving bare-metal performance for devices
that are passed into a VM. But unlike VFIO, it provides a simpler and
better framework for achieving migration. Because the DMA accesses
between the device and the guest go through virtio queues, migration can
be achieved much more easily, and doesn't require each device driver to
implement migration support. In the VFIO case, each vendor is expected
to provide an implementation of the VFIO migration framework, which makes
things harder as it must be done for each and every device.

To summarize, the point is to support migration for hardware devices
while still achieving bare-metal performance.
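
For a purely conceptual sketch of the control-path/datapath split (the
device node path below is just a typical example, and the real setup
goes through the rust-vmm vhost crate rather than raw file I/O):

```rust
use std::fs::OpenOptions;

fn main() -> std::io::Result<()> {
    // Control path: a vendor-agnostic vhost-vdpa character device exposed
    // by the kernel vDPA framework.
    let _control = OpenOptions::new()
        .read(true)
        .write(true)
        .open("/dev/vhost-vdpa-0")?;

    // From here the VMM would negotiate features and configure virtqueues
    // through vhost ioctls; the datapath itself never goes through the VMM,
    // since the hardware DMAs directly to and from guest memory.
    Ok(())
}
```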

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-03-18 12:28:40 +01:00