If the host supports io_uring and the specific io_uring options
needed, the VMM will choose the asynchronous version of virtio-blk.
This enables better I/O performance compared to the default
synchronous version.
It is also important to note that the VMM won't be able to use the
asynchronous version if the backend image is in QCOW format.
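A minimal sketch of that selection rule, with hypothetical names (the
real code paths differ): the asynchronous implementation is only picked
when the host check passes and the disk image is raw, presumably because
QCOW images need a translation layer the io_uring path bypasses.

```rust
#[derive(PartialEq)]
enum ImageType {
    Raw,
    Qcow2,
}

// Hypothetical helper illustrating the rule described above: io_uring
// support on the host is necessary but not sufficient, since QCOW
// images cannot go down the asynchronous path.
fn use_async_virtio_blk(image_type: ImageType, io_uring_supported: bool) -> bool {
    io_uring_supported && image_type == ImageType::Raw
}
```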
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This introduces a new version of the virtio-blk device. The default
virtio-blk provides synchronous processing of the queues, while this
new version relies on io_uring from the host kernel to provide
asynchronous processing of the queues.
This new asynchronous version provides a huge performance improvement
compared to the default synchronous version.
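As an illustration, here is a minimal sketch of what asynchronous queue
processing looks like, assuming a recent tokio-rs `io-uring` crate
(names, signatures, and error handling are simplified and not taken from
the actual device code): a read is queued on the submission ring instead
of being performed with a blocking preadv(), and the guest is notified
later, once the matching completion is reaped.

```rust
use io_uring::{opcode, types, IoUring};
use std::os::unix::io::RawFd;

// Queue one read described by a virtio-blk request. Safety: `buf` must
// stay valid until the completion carrying `tag` has been reaped.
unsafe fn queue_read(
    ring: &mut IoUring,
    image_fd: RawFd,
    buf: *mut u8,
    len: u32,
    offset: u64,
    tag: u64, // e.g. the head index of the descriptor chain
) -> std::io::Result<()> {
    let sqe = opcode::Read::new(types::Fd(image_fd), buf, len)
        .offset(offset)
        .build()
        .user_data(tag);
    ring.submission().push(&sqe).expect("submission queue full");
    ring.submit()?;
    Ok(())
}

// Reap completions; the tag tells the device which descriptor chain to
// mark as used in the virtqueue.
fn complete_reads(ring: &mut IoUring) {
    for cqe in ring.completion() {
        let _tag = cqe.user_data();
        let _result = cqe.result(); // bytes transferred, or -errno
        // ... write the request status and return the chain to the guest
    }
}
```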
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Creates a dedicated function relying on the io_uring crate to execute
io_uring-specific requests.
Also creates a function for checking io_uring support on the host.
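A sketch of what the support check can look like with the tokio-rs
`io-uring` crate (a plausible shape, not the exact code): create a small
ring, register a probe, and verify that the opcodes the asynchronous
device needs are all reported as supported.

```rust
use io_uring::{opcode, IoUring, Probe};

fn is_io_uring_supported() -> bool {
    // Creating a ring fails outright on kernels without io_uring.
    let ring = match IoUring::new(8) {
        Ok(ring) => ring,
        Err(_) => return false,
    };

    // Registering a probe fails on kernels too old to report opcodes.
    let mut probe = Probe::new();
    if ring.submitter().register_probe(&mut probe).is_err() {
        return false;
    }

    // The opcodes listed here are illustrative; the device needs at
    // least reads, writes and flushes to service a virtio-blk queue.
    probe.is_supported(opcode::Read::CODE)
        && probe.is_supported(opcode::Write::CODE)
        && probe.is_supported(opcode::Fsync::CODE)
}
```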
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Move to the latest vhost version, as it contains the updated list of
vhost-user protocol features along with the updated list of virtio-fs
slave commands.
This will allow Cloud-Hypervisor to work with the latest virtiofsd
binary.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The support for SGX is exposed to the guest through CPUID 0x12. KVM
passes static subleaves 0 and 1 from the host to the guest, without
needing any modification from the VMM itself.
But SGX also relies on dynamic subleaves 2 through N, used for
describing each EPC section. This is not handled by KVM, which means
the VMM is in charge of setting each subleaf starting from index 2
up to index N, depending on the number of EPC sections.
These subleaves 2 through N are not listed as part of the supported
CPUID entries from KVM, but it's important to set them as long as
indexes 0 and 1 are present and indicate that SGX is supported.
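A sketch of how those subleaves can be populated, using hypothetical
EpcSection and CpuidEntry types (the field encoding follows the SDM
layout for leaf 0x12: subleaf type in EAX[3:0], the section base address
split across EAX/EBX, and the section size split across ECX/EDX):

```rust
// Hypothetical types for illustration only.
struct EpcSection {
    start: u64, // guest physical base address
    size: u64,
}

struct CpuidEntry {
    function: u32,
    index: u32,
    eax: u32,
    ebx: u32,
    ecx: u32,
    edx: u32,
}

fn sgx_epc_subleaves(sections: &[EpcSection]) -> Vec<CpuidEntry> {
    sections
        .iter()
        .enumerate()
        .map(|(i, s)| CpuidEntry {
            function: 0x12,
            // EPC sections are described starting at subleaf 2.
            index: (i as u32) + 2,
            // EAX[3:0] = 1 marks a valid EPC subleaf; EAX[31:12] and
            // EBX[19:0] carry the section base address.
            eax: 0x1 | ((s.start & 0xffff_f000) as u32),
            ebx: ((s.start >> 32) & 0xf_ffff) as u32,
            // ECX[3:0] = 1 describes the section's properties;
            // ECX[31:12] and EDX[19:0] carry the section size.
            ecx: 0x1 | ((s.size & 0xffff_f000) as u32),
            edx: ((s.size >> 32) & 0xf_ffff) as u32,
        })
        .collect()
}
```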
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Extract the code that is used by vhost_user_block from the
virtio-devices crate to remove the dependencies on unneeded
functionality such as the virtio transports.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Move the definition of RawFile from the virtio-devices crate into the
qcow crate. All the code that consumes RawFile already depends on the
qcow crate for image file type detection, so this change removes the
need for the qcow crate to depend on the very large virtio-devices
crate.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Move NetQueuePair and the related NetCounters into the net_util crate.
This means that the vhost_user_net crate no longer depends on
virtio-devices, and so does not depend on the pci, qcow or other
similar crates. This significantly simplifies the build chain for this
backend.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
By moving the code for opening the two RX and TX queues into a shared
location, we are starting to remove the requirement for the
vhost-user-net backend to depend on the virtio-devices crate, which
itself depends on many other crates that are not necessary for the
backend to function.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
By moving the code for opening the TAP device into a shared location,
we are starting to remove the requirement for the vhost-user-net
backend to depend on the virtio-devices crate, which itself depends on
many other crates that are not necessary for the backend to function.
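For reference, a self-contained sketch of what such a TAP-opening
helper does, assuming the libc crate (constants from <linux/if_tun.h>;
the real helper also handles multi-queue setups and errors more
carefully):

```rust
use std::fs::{File, OpenOptions};
use std::os::unix::io::AsRawFd;

const TUNSETIFF: libc::c_ulong = 0x4004_54ca; // _IOW('T', 202, int)
const IFF_TAP: libc::c_short = 0x0002;
const IFF_NO_PI: libc::c_short = 0x1000;
const IFF_VNET_HDR: libc::c_short = 0x4000;

#[repr(C)]
struct IfReq {
    ifr_name: [u8; libc::IFNAMSIZ],
    ifr_flags: libc::c_short,
    _pad: [u8; 22], // pad to the kernel's 40-byte struct ifreq
}

fn open_tap(name: &str) -> std::io::Result<File> {
    let tap = OpenOptions::new()
        .read(true)
        .write(true)
        .open("/dev/net/tun")?;

    let mut ifr = IfReq {
        ifr_name: [0u8; libc::IFNAMSIZ],
        // TAP mode, no packet-info header, with a virtio-net header.
        ifr_flags: IFF_TAP | IFF_NO_PI | IFF_VNET_HDR,
        _pad: [0u8; 22],
    };
    // The name must leave room for the trailing NUL byte.
    ifr.ifr_name[..name.len()].copy_from_slice(name.as_bytes());

    if unsafe { libc::ioctl(tap.as_raw_fd(), TUNSETIFF, &ifr) } < 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(tap)
}
```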
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Remove the vmm dependency from vhost_user_block and vhost_user_net,
where it existed only to use config::OptionParser. By moving the
OptionParser to its own crate at the top level we can remove the very
heavy dependency that these vhost-user backends had.
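OptionParser handles the comma-separated `key=value` strings used by
the command line options; a standalone sketch of that shape of parsing
(not the actual API) looks like:

```rust
use std::collections::HashMap;

// Parse a string such as "path=/tmp/disk.raw,num_queues=4" into a map.
// The real OptionParser also knows which keys are valid and converts
// values to typed parameters; this only shows the tokenizing step.
fn parse_options(input: &str) -> Result<HashMap<String, String>, String> {
    input
        .split(',')
        .filter(|token| !token.is_empty())
        .map(|token| match token.split_once('=') {
            Some((key, value)) => Ok((key.to_string(), value.to_string())),
            None => Err(format!("malformed option: {}", token)),
        })
        .collect()
}
```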
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
There are several dependencies that need updating, so update them
manually rather than relying on dependabot. This will reduce the load
on the CI.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The binary is still built in the same location but the source code and
the dependencies for it come from the vhost_user_block crate itself.
The binary will be built with:
`cargo build --all --bin vhost_user_block` or just `cargo build --all`
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The binary is still built in the same location but the source code and
the dependencies for it come from the vhost_user_net crate itself.
The binary will be built with:
`cargo build --all --bin vhost_user_net` or just `cargo build --all`
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
With the vhost_user_fs binary moved to its own crate, the dependencies
in the top level can be trimmed significantly.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The binary is still built in the same location but the source code and
the dependencies for it come from the vhost_user_fs crate itself.
The binary will be built with:
`cargo build --all --bin vhost_user_fs` or just `cargo build --all`
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Split the generic virtio code (queues and device type) from the
VirtioDevice trait, transport and device implementations.
This also simplifies the feature handling in vhost_user_backend, as
the vm-virtio crate no longer has any features.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>