The main idea behind this commit is to remove all the complexity
associated with TX/RX handling for virtio-net. By using writev() and
readv() syscalls, we can get rid of intermediate buffers for both
queues.
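As a rough illustration (a minimal sketch with made-up names, not the
actual code), the RX path can scatter a frame straight into the guest
buffers of a descriptor chain:

    use std::fs::File;
    use std::io::{IoSliceMut, Read};

    // Minimal sketch: read one frame from the TAP file directly into the
    // guest buffers of an RX descriptor chain. read_vectored() maps to the
    // readv() syscall on Linux, so no intermediate buffer is needed.
    fn rx_single_frame(tap: &mut File, chain: &mut [IoSliceMut]) -> std::io::Result<usize> {
        tap.read_vectored(chain)
    }

The TX path does the mirror operation with write_vectored()/writev().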
The TAP registration has been simplified as well. The RX queue is only
processed when some data is ready to be read from the TAP interface. The
event signalling that the RX queue has received more descriptors only
serves to register the TAP file if it is not already registered.
With all these simplifications, the code is not only more readable but
also more performant. We can see an improvement of 10% for a
single-queue device.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This helper can open a TAP device and configure the interface on it. If
the device needs to be opened multiple times for MQ then it also handles
that correctly.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Even if the vhost-user-net device did implement all currently-defined
features, it would be very short-sighted to use ::all(), because if a
new feature was defined later, the device would start claiming to
implement it even though it didn't.
More practically, claiming to implement all features breaks using QEMU
with the cloud-hypervisor vhost-user-net backend, because QEMU will
negotiate VHOST_USER_PROTOCOL_F_SLAVE_REQ, and then break when the
communication channel isn't actually set up.
I wasn't sure exactly which features the backend should claim to
implement, though. Definitely MQ, and I'm fairly certain none of the
features I've omitted are implemented. But I'm not sure about
REPLY_ACK. As far as I can tell it should be implemented entirely by
the vhost crate, with no cooperation required from the vhost-user-net
backend itself, so there should be no reason not to let a frontend use
it if it wants to. But despite this, neither vhost-user-fs nor
vhost-user-blk claims to implement it.
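A minimal sketch of the idea (the import path is an assumption and may
differ between vhost crate versions):

    // Sketch only: enumerate the protocol features explicitly instead of
    // claiming all of them with VhostUserProtocolFeatures::all().
    use vhost::vhost_user::message::VhostUserProtocolFeatures;

    fn protocol_features() -> VhostUserProtocolFeatures {
        // MQ is implemented; REPLY_ACK could be added here once it is
        // confirmed to be handled entirely by the vhost crate.
        VhostUserProtocolFeatures::MQ
    }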
Signed-off-by: Alyssa Ross <hi@alyssa.is>
EpollHelper allows the removal of much duplicated event loop handling
code; the device-specific event handling is instead delegated to an
implementation of EpollHelperHandler.
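Conceptually (a simplified sketch, not the actual trait definition from
the virtio-devices crate):

    // Simplified sketch: the shared helper runs the epoll loop and hands
    // each ready event to the device-specific handler.
    trait EpollHelperHandler {
        // Return true to ask the helper's event loop to exit.
        fn handle_event(&mut self, device_event: u16) -> bool;
    }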
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Move NetQueuePair and the related NetCounters into the net_util crate.
This means that the vhost_user_net crate now no longer depends on
virtio-devices and so does not depend on the pci, qcow or other similar
crates. This significantly simplifies the build chain for this backend.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
By moving the code for opening the two RX and TX queues into a shared
location we are starting to remove the requirement for the
vhost-user-net backend to depend on the virtio-devices crate, which
itself depends on many other crates that are not necessary for the
backend to function.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
By moving the code for opening the TAP device into a shared location we
are starting to remove the requirement for the vhost-user-net backend to
depend on the virtio-devices crate, which itself depends on many other
crates that are not necessary for the backend to function.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Remove the vmm dependency from vhost_user_block and vhost_user_net,
where it existed only to use config::OptionParser. By moving the OptionParser
to its own crate at the top-level we can remove the very heavy
dependency that these vhost-user backends had.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The binary is still built in the same location but the source code and
the dependencies for it come from the vhost_user_net crate itself.
The binary will be built with:
`cargo build --all --bin vhost_user_net` or just `cargo build --all`
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Split the generic virtio code (queues and device type) from the
VirtioDevice trait, transport and device implementations.
This also simplifies the feature handling in vhost_user_backend as the
vm-virtio crate no longer has any features.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This removes the need to use CAP_NET_ADMIN privileges and instead the
host MAC address is either provided by the user or alternatively it is
retrieved from the kernel.
TEST=Run cloud-hypervisor without CAP_NET_ADMIN permission and a
preconfigured tap device:
sudo ip tuntap add name tap0 mode tap
sudo ifconfig tap0 192.168.249.1 netmask 255.255.255.0 up
cargo clean
cargo build
target/debug/cloud-hypervisor --serial tty --console off --kernel ~/src/rust-hypervisor-firmware/target/target/release/hypervisor-fw --disk path=~/workloads/clear-33190-kvm.img --net tap=tap0
VM was also rebooted to check that works correctly.
Fixes: #1274
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The general handling of VIRTIO_RING_F_EVENT_IDX is in the
vhost_user_backend functionality and the net specific handling is in the
NetQueuePair from virtio-net.
As such, enabling it for the vhost-user-net backend is just a case of
adding the feature.
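For illustration (the constant value is from the virtio spec; the
function is hypothetical):

    // Illustrative sketch: the backend-side change boils down to
    // advertising the EVENT_IDX bit in its available virtio features.
    const VIRTIO_RING_F_EVENT_IDX: u64 = 29;

    fn avail_features() -> u64 {
        // ... other virtio-net feature bits elided ...
        1 << VIRTIO_RING_F_EVENT_IDX
    }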
Fixes: #789
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The logic for handling the networking queues can now be shared between
the version running in vhost-user-net and vm-virtio.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
According to the virtio spec the guest should always be interrupted when
"used" descriptors are returned from the device to the driver. However
this was not the case for the TX queue in either the virtio-net
implementation or the vhost-user-net implementation.
This would have meant that the guest could end up with a reduced TX
throughput as it would not know that the packets had been dispatched via
the VMM.
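A highly simplified sketch of the fix (the helper name is hypothetical;
the real code signals through the device's interrupt mechanism):

    // Hypothetical sketch: once transmitted descriptors have been added to
    // the used ring, the guest must be interrupted so it knows they
    // completed.
    fn complete_tx<F: Fn()>(used_count: usize, signal_used_queue: F) {
        if used_count > 0 {
            signal_used_queue();
        }
    }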
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add a new "host_mac" parameter to "--net" and "--net-backend" and use
this to set the MAC address on the tap interface. If no address is given
one is randomly assigned and is stored in the config.
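For example (values are illustrative):
`--net tap=tap0,mac=12:34:56:78:90:ab,host_mac=aa:bb:cc:dd:ee:ff`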
Support for vhost-user-net self spawning was also included.
Fixes: #1177
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Changes in the vhost crate require VhostUserDaemon users to create and
provide a vhost::Listener in advance. This allows us to adopt
sandboxing strategies in the future, by being able to create the UNIX
socket before switching to a restricted namespace.
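A rough sketch of the resulting flow (constructor and method signatures
here are assumptions, not copied from the crate):

    // Rough sketch: the UNIX socket is created first, so a restricted
    // namespace can be entered before the daemon starts serving it.
    let listener = Listener::new(&socket_path, true).unwrap();
    // ... apply sandboxing / switch namespaces here ...
    let mut daemon = VhostUserDaemon::new("vhost-user-net".to_string(), backend).unwrap();
    daemon.start(listener).unwrap();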
Also update the reference to the vhost crate in Cargo.lock to point to the
latest commit from the dragonball branch.
Signed-off-by: Sergio Lopez <slp@redhat.com>
Switch to using the recently added OptionParser in the code that parses
the network backend.
Whilst doing this also update the net-backend syntax to use "sock"
rather than "socket".
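For example (values are illustrative):
`--net-backend "sock=/tmp/vunet.sock,num_queues=2,queue_size=256"`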
Fixes: #1092
Partially fixes: #1091
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Rather than repeat syntax for the vhost-user-net backend in multiple
places store it in one place and reference it from the required places.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
By implementing queues_per_thread(), this patch fills the last missing
bit to enable multithreaded multiqueue support for the vhost-user-net
backend implementation.
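A sketch of what such an implementation can look like (simplified and
written as a free function; the real method lives on the backend):

    // Simplified sketch: one bitmap of queue indices per worker thread, so
    // thread i handles the RX/TX pair (queues 2*i and 2*i+1).
    fn queues_per_thread(num_queue_pairs: usize) -> Vec<u64> {
        (0..num_queue_pairs).map(|i| 0b11 << (i * 2)).collect()
    }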
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By adding a "thread_id" parameter to handle_event(), the backend crate
can now indicate to the backend implementation which thread triggered
the processing of some events.
This is applied to the vhost-user-net backend and allows the code to be
simplified considerably, since each thread is identical.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By adding the "thread_index" parameter to the function exit_event() from
the VhostUserBackend trait, the backend crate now has the ability to ask
the backend implementation about the exit event related to a specific
thread.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to prepare for the support of multithreaded multiqueues, the
structure VhostUserNetThread is simplified to hold only one RX queue,
one TX queue, and one TAP interface.
Following this change, VhostUserNetBackend now holds a list of threads
instead of going through each thread to handle multiqueues.
These changes neatly decouple the backend abstraction from each thread.
This allows for a lot of simplification since we now know
all threads are identical, hence the handling of events becomes very
straightforward.
One important point is that each thread can be locked when in use,
without causing any contention with other threads since the backend
doesn't need to be locked anymore.
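Schematically (names and fields simplified):

    // Schematic sketch: the backend holds one lockable struct per thread,
    // so worker threads never contend on a single backend-wide lock.
    struct VhostUserNetThread { /* one RX queue, one TX queue, one TAP fd */ }

    struct VhostUserNetBackend {
        threads: Vec<std::sync::Mutex<VhostUserNetThread>>,
    }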
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that multiple worker threads can be run from the backend crate, it
is important that each backend implementation can access every worker
thread.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By changing the mutability of this function, after adapting all
backends, we should be able to implement multithreaded multiqueue
support without the backend lock becoming a bottleneck.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The blk, net and fs backends have been updated to avoid the requirement
of having handle_event(&mut self). This will allow the backend crate to
avoid taking a write lock onto the backend object, which will remove the
potential contention point when multiple threads handle multiple
queues.
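Schematically, the change is in the receiver (trait heavily simplified,
not the real vhost_user_backend definition):

    // Simplified sketch: handle_event() takes &self, so the crate only
    // needs a read lock on the backend while its worker threads run.
    trait VhostUserBackend {
        fn handle_event(&self, device_event: u16, thread_id: usize) -> std::io::Result<bool>;
    }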
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Add helpers to Vring and VhostUserSlaveReqHandler for EVENT_IDX, so
consumers of this crate can make use of this feature.
Signed-off-by: Sergio Lopez <slp@redhat.com>
Implement the exit_event() method on the VhostUserBackend trait. It is
necessary to specify a custom exit event id in this case as the loop is
also used for handling activity on the tap file descriptors.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Split the basic launching functionality into its own function in the
newly added vhost_user_net crate.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Extract the majority of the code that provides the vhost-user-net
backend into its own crate and port the binary to use it.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>