When a total ordering between multiple atomic variables is not
required, use Ordering::Acquire with atomic loads and Ordering::Release
with atomic stores.

This improves performance, as Acquire loads and Release stores do not
require a memory fence on x86_64, whereas Ordering::SeqCst does.
Add a comment in the vCPU handling code where it operates on multiple
atomics to explain why Ordering::SeqCst is required there.
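For illustration, the intended pattern is roughly the following (a
minimal sketch with a hypothetical `READY` flag, not the actual vCPU
code):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static READY: AtomicBool = AtomicBool::new(false);

// Writer: Release guarantees that writes made before this store are
// visible to a reader that observes `true` with an Acquire load.
fn publish() {
    READY.store(true, Ordering::Release);
}

// Reader: Acquire pairs with the Release store above. On x86_64 both
// compile to plain loads/stores; a SeqCst store would emit a full
// fence (or an XCHG), which is only needed when a single total order
// across multiple atomics must be observed.
fn check() -> bool {
    READY.load(Ordering::Acquire)
}
```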
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
A new version of vm-memory was released upstream, which resulted in
some components pulling in that new version. Update the version number
to point to the latest release, but continue to use our patched version
due to the fix for #1258.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Extract the code used by vhost_user_block from the virtio-devices
crate to remove dependencies on unneeded functionality such as the
virtio transports.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Move the definition of RawFile from the virtio-devices crate into the
qcow crate. All the code that consumes RawFile already depends on the
qcow crate for image file type detection, so this change removes the
need for the qcow crate to depend on the very large virtio-devices
crate.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Remove the vmm dependency from vhost_user_block and vhost_user_net,
where it existed only to use config::OptionParser. By moving
OptionParser into its own top-level crate we can remove the very heavy
dependency that these vhost-user backends had.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The binary is still built in the same location, but its source code
and dependencies now come from the vhost_user_block crate itself.

The binary can be built with:
`cargo build --all --bin vhost_user_block` or just `cargo build --all`
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Split the generic virtio code (queues and device type) from the
VirtioDevice trait, transport and device implementations.

This also simplifies the feature handling in vhost_user_backend, as
the vm-virtio crate no longer has any features.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Correctly implement the virtio specification by setting the writeback
field on the request based on the algorithm in the spec.
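A sketch of the decision the spec describes, assuming the device
tracks the negotiated features and the config-space writeback field
(the helper name and surrounding structure are hypothetical; the
feature bits are the ones defined by the virtio spec):

```rust
// Feature bits from the virtio specification.
const VIRTIO_BLK_F_FLUSH: u64 = 1 << 9;
const VIRTIO_BLK_F_CONFIG_WCE: u64 = 1 << 11;

// Hypothetical helper: decide the effective cache mode for a request.
// `acked_features` are the features negotiated with the driver and
// `config_writeback` is the current config-space field value.
fn effective_writeback(acked_features: u64, config_writeback: u8) -> bool {
    if acked_features & VIRTIO_BLK_F_CONFIG_WCE != 0 {
        // The driver controls the cache mode through the config field.
        config_writeback == 1
    } else {
        // Without WCE, writeback is only safe if the driver can flush.
        acked_features & VIRTIO_BLK_F_FLUSH != 0
    }
}
```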
TEST=Boot hypervisor-firmware with CH in verbose mode. See info-level
messages saying the cache mode is writethrough while in the firmware
(which supports neither flush nor WCE). Once in the Linux kernel, see
messages that the mode is writeback.
Fixes: #1216
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Move the method used to decide whether the guest should be signalled
into the Queue implementation in vm-virtio. This removes duplicated
code between vhost_user_backend and the vm-virtio block
implementation.
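The shared logic is essentially the virtio event-index check; a
simplified sketch (not the exact Queue method signature):

```rust
// Classic vring_need_event() check: with VIRTIO_RING_F_EVENT_IDX
// negotiated, only signal the guest when the new used index has
// passed the driver-supplied `used_event` threshold. All arithmetic
// wraps, matching the 16-bit ring indices.
fn needs_notification(used_event: u16, new_used_idx: u16, old_used_idx: u16) -> bool {
    new_used_idx.wrapping_sub(used_event).wrapping_sub(1)
        < new_used_idx.wrapping_sub(old_used_idx)
}
```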
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
As the parsing code is reused, the flush feature is already
implemented and ready to be used.
Fixes: #1197
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Changes in the vhost crate require VhostUserDaemon users to create and
provide a vhost::Listener in advance. This allows us to adopt
sandboxing strategies in the future, by being able to create the UNIX
socket before switching to a restricted namespace.
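A minimal sketch of the new flow, assuming the Listener::new(path,
unlink) constructor from the vhost crate (the socket path and the
daemon variable are illustrative):

```rust
use vhost::vhost_user::Listener;

fn main() {
    // Create the UNIX socket up front (`true` asks Listener to unlink
    // any stale socket), while the process still has its full
    // filesystem view.
    let listener = Listener::new("/tmp/vub.sock", true).unwrap();

    // A sandboxing step (e.g. entering a restricted namespace) could
    // happen here, before the daemon starts serving.

    // Finally, hand the pre-created listener to the daemon, e.g.:
    // daemon.start(listener).unwrap();
    let _ = listener;
}
```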
Also update the reference to the vhost crate in Cargo.lock to point to
the latest commit from the dragonball branch.
Signed-off-by: Sergio Lopez <slp@redhat.com>
Switch to using the recently added OptionParser in the code that
parses the block backend parameters.
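For illustration, parsing a `path=...,num_queues=...` style backend
string looks roughly like this (a sketch assuming the OptionParser
add/parse/get/convert methods; the import path depends on where
OptionParser lives at this point in the tree):

```rust
use option_parser::OptionParser;

// Hypothetical helper: parse "path=/tmp/disk.img,num_queues=2" into
// the path and the number of queues (defaulting to 1).
fn parse_block_backend(s: &str) -> Option<(String, usize)> {
    let mut parser = OptionParser::new();
    parser.add("path").add("num_queues");
    parser.parse(s).ok()?;

    let path = parser.get("path")?;
    let num_queues = parser.convert::<usize>("num_queues").ok()?.unwrap_or(1);
    Some((path, num_queues))
}
```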
Fixes: #1092
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Rather than repeating the syntax for the vhost-user-block backend in
multiple places, store it in one place and reference it from where it
is required.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
After all the previous refactoring patches, we can finally create
multiple threads under the same backend. This is combined directly
with multiqueue support, so that one thread is created per queue.
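A hypothetical sketch of the threading model (names and the shared
disk type are illustrative, not the backend's actual code):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// One worker thread per virtqueue, all sharing the same disk.
fn spawn_queue_threads(
    num_queues: usize,
    disk: Arc<Mutex<std::fs::File>>,
) -> Vec<thread::JoinHandle<()>> {
    (0..num_queues)
        .map(|queue_index| {
            let disk = disk.clone();
            thread::spawn(move || {
                // In the real backend each thread would run an epoll
                // loop for its queue, popping descriptors and
                // executing requests against `disk`.
                let _ = (queue_index, &disk);
            })
        })
        .collect()
}
```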
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Anticipating the follow-up patches that will run multiple threads for
the same backend, we need the initialization of the disk to happen in
the high-level structure VhostUserBlkBackend.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The DiskFile will need to be shared across multiple threads when
running multiple queues across those threads, which is why it needs to
be put inside an Arc. The Mutex is needed because execute() expects a
mutable object implementing Read + Write + Seek. Unfortunately, this
creates a contention point, as the object needs to be locked from each
thread, reducing the performance gain we would otherwise get from
multiple threads. An immutable object would solve this problem; it
will be addressed later through follow-up patches.
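A sketch of the resulting shape and of where the contention sits
(hypothetical helper, not the backend's execute() itself):

```rust
use std::fs::File;
use std::io::{Read, Seek, SeekFrom};
use std::sync::{Arc, Mutex};

// execute() needs `&mut (impl Read + Write + Seek)`, so the file
// sits behind a Mutex, wrapped in an Arc for sharing across threads.
type DiskFile = Arc<Mutex<File>>;

fn handle_read(disk: &DiskFile, offset: u64, buf: &mut [u8]) -> std::io::Result<()> {
    // Every queue thread serializes here: this lock is the
    // contention point described above.
    let mut f = disk.lock().unwrap();
    f.seek(SeekFrom::Start(offset))?;
    f.read_exact(buf)
}
```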
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
There is no need to retrieve the VringWorker since we don't need to
register any extra file descriptors with the epoll loop.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>