This check is new in the beta version of clippy and exists to avoid
potential deadlocks by highlighting when the test in an if or for loop
is something that holds a lock. In many cases we would need to make
significant refactorings to be able to pass this check, so disable it
in the affected crates.
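Disabling it amounts to a crate-level allow; a minimal sketch, where
the exact lint name is an assumption (this commit doesn't name it):

// Placed at the top of each affected crate's lib.rs (lint name assumed).
#![allow(clippy::significant_drop_in_scrutinee)]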
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Instead of relying on the local version of vhost-user-backend, this
patch allows the block backend implementation to rely on the upstream
version of the crate from rust-vmm.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Since we're trying to move away from having the translation happen in
the virtio-queue crate, the device itself now performs the address
translation when needed.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Relying on the vm-virtio/virtio-queue crate from rust-vmm, which has
been copied inside the Cloud Hypervisor tree, the entire codebase is
moved to the new definition of a Queue and other related structures.
The reason for this move is to follow upstream until we reach
agreement on the patches we need on top of it to make it work properly
with Cloud Hypervisor.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
These crates are written to produce log messages using the error!
macro, but their logs didn't actually go anywhere, which made it very
difficult to debug when they weren't working.
I've used env_logger here because it's the same log implementation
that the hypervisor crate uses.
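For reference, the wiring is a one-liner in each binary; a minimal
sketch:

fn main() {
    // Route log::error! and friends to stderr, filtered by RUST_LOG.
    env_logger::init();
    log::error!("this message is now visible");
}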
Signed-off-by: Alyssa Ross <hi@alyssa.is>
I don't think the min_values(1) was doing anything, since
takes_value(true) was already set, and after this change I still get
an error if I try to supply the argument with no value.
But the unwrap below would fail if the argument wasn't supplied at
all. By setting required(true) we get a nice message from clap
instead.
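A sketch of the resulting argument definition, in clap 2.x style with
an illustrative argument name:

use clap::{App, Arg};

fn main() {
    let app = App::new("backend").arg(
        Arg::with_name("backend")
            .long("backend")
            .takes_value(true)
            // required(true) lets clap report a missing argument nicely
            // instead of the later unwrap panicking.
            .required(true),
    );
    let _matches = app.get_matches();
}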
Signed-off-by: Alyssa Ross <hi@alyssa.is>
Issue from the beta version of clippy:
Error:
   --> vm-virtio/src/queue.rs:700:59
    |
700 |             if let Some(used_event) = self.get_used_event(&mem) {
    |                                                           ^^^^ help: change this to: `mem`
    |
    = note: `-D clippy::needless-borrow` implied by `-D warnings`
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow
Signed-off-by: Bo Chen <chen.bo@intel.com>
As the first step toward completing live-migration with tracking of
dirty pages written by the VMM, this commit patches the dependent
vm-memory crate to
the upstream version with the dirty-page-tracking capability. Most
changes are due to the updated `GuestMemoryMmap`, `GuestRegionMmap`, and
`MmapRegion` structs, which now take an additional generic type
parameter specifying which 'bitmap backend' is used.
The above changes should be transparent to the rest of the code base,
e.g. all unit/integration tests should pass without additional changes.
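Concretely, the rest of the code base can pin the backend through type
aliases; a sketch, where the choice of AtomicBitmap is an assumption
(`()` would disable tracking entirely):

use vm_memory::bitmap::AtomicBitmap;

// Dirty pages are tracked by an atomic bitmap attached to each region.
pub type GuestMemoryMmap = vm_memory::GuestMemoryMmap<AtomicBitmap>;
pub type GuestRegionMmap = vm_memory::GuestRegionMmap<AtomicBitmap>;
pub type MmapRegion = vm_memory::MmapRegion<AtomicBitmap>;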
Signed-off-by: Bo Chen <chen.bo@intel.com>
Extend the current list of available virtio features in order to make it
work correctly with the VMM side.
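Feature bits are advertised as a mask; an illustrative sketch only,
with bit values from the virtio-blk spec (the actual set added here is
in the diff):

// Bit positions per the virtio specification.
const VIRTIO_BLK_F_SEG_MAX: u64 = 2;
const VIRTIO_BLK_F_BLK_SIZE: u64 = 6;

fn avail_features() -> u64 {
    1 << VIRTIO_BLK_F_SEG_MAX | 1 << VIRTIO_BLK_F_BLK_SIZE
}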
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that all crates use edition = "2018", the majority of the "extern
crate" statements can be removed. Only those importing macros need to
remain.
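What remains looks like this; a representative sketch:

// Edition 2018 resolves crate paths without `extern crate`; only the
// macro-importing declarations stay behind.
#[macro_use]
extern crate log;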
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The point is to clearly identify that we're running the backend as a
server. This anticipates the addition of a new function for running
the backend as a client.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Fixes the current codebase so that cargo clippy can be run with the
beta toolchain without any error.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
error: redundant slicing of the whole range
   --> vhost_user_block/src/lib.rs:396:31
    |
396 |         right.copy_from_slice(&data[..]);
    |                               ^^^^^^^^^ help: use the original slice instead: `data`
    |
    = note: `-D clippy::redundant-slicing` implied by `-D warnings`
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#redundant_slicing
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In order to support GET_MAX_MEM_SLOTS, ADD_MEM_REG and REM_MEM_REG, the
protocol feature CONFIGURE_MEM_SLOTS must be enabled.
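A sketch of advertising it, using the flag from the vhost crate (the
surrounding function is illustrative):

use vhost::vhost_user::message::VhostUserProtocolFeatures;

fn protocol_features() -> VhostUserProtocolFeatures {
    // Gates GET_MAX_MEM_SLOTS, ADD_MEM_REG and REM_MEM_REG.
    VhostUserProtocolFeatures::CONFIGURE_MEM_SLOTS
}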
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The vhost crate from rust-vmm is now ready, which is why we switch
from the Cloud Hypervisor fork to the upstream crate.
At the same time, we rename the crate from vhost_rs to vhost.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When a total ordering between multiple atomic variables is not required
then use Ordering::Acquire with atomic loads and Ordering::Release with
atomic stores.
This improves performance because, unlike Ordering::SeqCst, these
orderings do not require a memory fence on x86_64.
Add a comment in the vCPU handling code where it operates on multiple
atomics to explain why Ordering::SeqCst is required there.
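The acquire/release pattern, as a minimal sketch:

use std::sync::atomic::{AtomicBool, Ordering};

static READY: AtomicBool = AtomicBool::new(false);

// Release on the store pairs with Acquire on the load: the loading
// thread observes everything written before the store. On x86_64
// neither needs a fence, while a SeqCst store compiles to a locked
// instruction.
fn signal_ready() {
    READY.store(true, Ordering::Release);
}

fn is_ready() -> bool {
    READY.load(Ordering::Acquire)
}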
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Extract the code that is used by vhost_user_block from the
virtio-devices crate to remove the dependencies on functionality it
does not require, such as the virtio transports.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Move the definition of RawFile from the virtio-devices crate into the
qcow crate. All the code that consumes RawFile already depends on the
qcow crate for image file type detection, so this change removes the
need for the qcow crate to depend on the very large virtio-devices
crate.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Remove the vmm dependency from vhost_user_block and vhost_user_net,
where it existed only to use config::OptionParser. By moving the OptionParser
to its own crate at the top-level we can remove the very heavy
dependency that these vhost-user backends had.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The binary is still built in the same location but the source code and
the dependencies for it come from the vhost_user_block crate itself.
The binary will be built with:
`cargo build --all --bin vhost_user_block` or just `cargo build --all`
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Split the generic virtio code (queues and device type) from the
VirtioDevice trait, transport and device implementations.
This also simplifies the feature handling in vhost_user_backend, as
the vm-virtio crate no longer has any features.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Correctly implement the virtio specification by setting the writeback
field on the request based on the algorithm in the spec.
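A sketch of that algorithm, with bit values from the virtio-blk spec:

const VIRTIO_BLK_F_FLUSH: u64 = 9;
const VIRTIO_BLK_F_CONFIG_WCE: u64 = 11;

fn writeback(acked_features: u64, config_writeback: u8) -> bool {
    if acked_features & (1 << VIRTIO_BLK_F_CONFIG_WCE) != 0 {
        // The driver selects the cache mode via the config space field.
        config_writeback == 1
    } else {
        // Without CONFIG_WCE, writeback is implied by FLUSH support.
        acked_features & (1 << VIRTIO_BLK_F_FLUSH) != 0
    }
}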
TEST=Boot with hypervisor-firmware with CH in verbose mode. See
info-level messages saying the cache mode is writethrough while in
firmware (no support for flush or WCE). Once in the Linux kernel, see
messages that the mode is writeback.
Fixes: #1216
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Move the method that is used to decide whether the guest should be
signalled into the Queue implementation from vm-virtio. This removes
duplicated code between vhost_user_backend and the vm-virtio block
implementation.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
As the parsing code is reused, the flush feature is already
implemented and ready to be used.
Fixes: #1197
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Changes in the vhost crate require VhostUserDaemon users to create and
provide a vhost::Listener in advance. This allows us to adopt
sandboxing strategies in the future, by being able to create the UNIX
socket before switching to a restricted namespace.
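A sketch of the new calling convention (names, paths and error
handling are illustrative):

use vhost::vhost_user::Listener;
use vhost_user_backend::VhostUserDaemon;

// The socket can be created early, before entering a restricted
// namespace, and handed over to the daemon afterwards. `backend` is
// the shared backend object.
let listener = Listener::new("/path/to/socket", true).unwrap();
let mut daemon = VhostUserDaemon::new("backend".to_string(), backend).unwrap();
daemon.start(listener).unwrap();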
Also update the reference to the vhost crate in Cargo.lock to point to
latest commit from the dragonball branch.
Signed-off-by: Sergio Lopez <slp@redhat.com>
Switch to using the recently added OptionParser in the code that parses
the block backend.
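A sketch of the parser usage (option names and input are illustrative):

let mut parser = OptionParser::new();
parser.add("path").add("num_queues");
// `backend_options` is the raw "key=value,key=value" string.
parser.parse(backend_options).unwrap();
let path = parser.get("path").unwrap();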
Fixes: #1092
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Rather than repeating the syntax for the vhost-user-block backend in
multiple places, store it in one place and reference it from the
required places.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
After all the previous refactoring patches, we can finally create
multiple threads under the same backend. This is directly combined
with multiqueue support so that one thread is created per queue.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Anticipating the follow-up patches that will run multiple threads for
the same backend, we need the initialization of the disk to happen in
the high-level structure VhostUserBlkBackend.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The DiskFile will need to be shared across multiple threads when
running multiple queues across these threads. That's why it needs to
be put inside an Arc. The reason for the Mutex is that execute()
expects a mutable object implementing Read + Write + Seek.
Unfortunately, this creates a contention point, as the object needs to
be locked from each thread, reducing the performance gain we will get
with multiple threads. An object that does not require mutable access
would solve this problem, and this will be addressed later through
follow-up patches.
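The resulting sharing pattern, as a toy sketch:

use std::fs::File;
use std::sync::{Arc, Mutex};

fn main() -> std::io::Result<()> {
    let disk_file = File::open("disk.img")?;
    // Shared across queue threads; every request must take the lock,
    // which is the contention point described above.
    let disk_image = Arc::new(Mutex::new(disk_file));
    let for_thread = disk_image.clone();
    let _locked = for_thread.lock().unwrap();
    Ok(())
}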
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
There is no need to retrieve the VringWorker since we don't need to
register any extra file descriptors with the epoll loop.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By adding a "thread_id" parameter to handle_event(), the backend crate
can now indicate to the backend implementation which thread triggered
the processing of some events.
This is applied to the vhost-user-net backend and allows for greatly
simplifying the code, since each thread is identical.
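Roughly the resulting shape of the trait method; the exact signature
is an assumption based on this description (Vring and the result type
come from the backend crate):

fn handle_event(
    &self,
    device_event: u16,
    evset: epoll::Events,
    vrings: &[Arc<RwLock<Vring>>],
    thread_id: usize,
) -> VhostUserBackendResult<bool>;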
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By adding the "thread_index" parameter to the function exit_event() from
the VhostUserBackend trait, the backend crate now has the ability to ask
the backend implementation about the exit event related to a specific
thread.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that multiple worker threads can be run from the backend crate, it
is important that each backend implementation can access every worker
thread.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By changing the mutability of this function, and after adapting all
the backends, we should be able to implement multiple threads with
multiqueue support without hitting a bottleneck on the backend
locking.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The blk, net and fs backends have been updated to avoid the
requirement of having handle_event(&mut self). This will allow the
backend crate to avoid taking a write lock on the backend object,
which will remove the potential contention point when multiple threads
are handling multiple queues.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This change, combined with the compiler hint to inline get_used_event,
shortens the window between the memory read and the actual check by
calling get_used_event from needs_notification.
Without it, when putting enough pressure on the vring, it's possible
that a notification is wrongly omitted, causing the queue to stall.
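A toy sketch of the pattern (names follow the commit, but this is not
the actual vm-virtio code):

use std::num::Wrapping;
use std::sync::atomic::{AtomicU16, Ordering};

// With the inline hint the load folds into the caller, so used_event
// is read immediately before it is compared.
#[inline(always)]
fn get_used_event(slot: &AtomicU16) -> Wrapping<u16> {
    Wrapping(slot.load(Ordering::Acquire))
}

fn needs_notification(slot: &AtomicU16, new: Wrapping<u16>, old: Wrapping<u16>) -> bool {
    let used_event = get_used_event(slot);
    // Standard EVENT_IDX check from the virtio spec.
    new - used_event - Wrapping(1) < new - old
}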
Signed-off-by: Sergio Lopez <slp@redhat.com>
Now that vhost_user_backend and vm-virtio do support EVENT_IDX, use it
in vhost_user_block to reduce the number of notifications sent between
the driver and the device.
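Advertising it is a one-bit change; a sketch, with the bit position
from the virtio spec:

const VIRTIO_RING_F_EVENT_IDX: u64 = 29;

fn avail_features(base: u64) -> u64 {
    base | 1 << VIRTIO_RING_F_EVENT_IDX
}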
This is especially useful when using active polling on the virtqueue,
which will be implemented by a future patch.
This is a snapshot of kvm_stat while generating ~60K IOPS with fio on
the guest without EVENT_IDX:
Event                  Total  %Total  CurAvg/s
kvm_entry             393454    20.3     62494
kvm_exit              393446    20.3     62494
kvm_apic_accept_irq   378146    19.5     60268
kvm_msi_set_irq       369720    19.0     58881
kvm_fast_mmio         370497    19.1     58817
kvm_hv_timer_state     10197     0.5      1715
kvm_msr                 8770     0.5      1443
kvm_wait_lapic_expire   7018     0.4      1118
kvm_apic                2768     0.1       538
kvm_pv_tlb_flush        2028     0.1       360
kvm_vcpu_wakeup         1453     0.1       278
kvm_apic_ipi            1384     0.1       269
kvm_fpu                 1148     0.1       164
kvm_pio                  574     0.0        82
kvm_userspace_exit       574     0.0        82
kvm_halt_poll_ns          24     0.0         3
And this is the snapshot while doing the same thing with EVENT_IDX:
Event                  Total  %Total  CurAvg/s
kvm_entry              35506    26.0      3873
kvm_exit               35499    26.0      3873
kvm_hv_timer_state     14740    10.8      1672
kvm_apic_accept_irq    13017     9.5      1438
kvm_msr                12845     9.4      1421
kvm_wait_lapic_expire  10422     7.6      1118
kvm_apic                3788     2.8       502
kvm_pv_tlb_flush        2708     2.0       340
kvm_vcpu_wakeup         1992     1.5       258
kvm_apic_ipi            1894     1.4       251
kvm_fpu                 1476     1.1       164
kvm_pio                  738     0.5        82
kvm_userspace_exit       738     0.5        82
kvm_msi_set_irq          701     0.5        69
kvm_fast_mmio            238     0.2         4
kvm_halt_poll_ns          50     0.0         1
kvm_ple_window_update     28     0.0         0
kvm_page_fault             4     0.0         0
It can be clearly appreciated how the number of VM exits per second,
especially the ones related to notifications (kvm_fast_mmio and
kvm_msi_set_irq), is drastically lower.
Signed-off-by: Sergio Lopez <slp@redhat.com>