Commit Graph

193 Commits

Rob Bradford
b69f6d4f6c vhost_user_net, vhost_user_block, option_parser: Remove vmm dependency
Remove the vmm dependency from vhost_user_block and vhost_user_net, where
it existed only to provide config::OptionParser. By moving the OptionParser
to its own crate at the top level, we can remove the very heavy dependency
that these vhost-user backends had.
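
For illustration, a hedged sketch of the resulting Cargo.toml change for one of
the backends; the crate name option_parser comes from the commit title, while
the relative path is assumed:

```toml
[dependencies]
# Depend on the small top-level option_parser crate instead of pulling in
# the whole vmm crate (path is illustrative).
option_parser = { path = "../option_parser" }
```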

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-07-06 18:33:29 +01:00
Rob Bradford
72802b34cd vhost_user_block: Move binary into vhost_user_block crate
The binary is still built in the same location but the source code and
the dependencies for it come from the vhost_user_block crate itself.

The binary will be built with:

`cargo build --all --bin vhost_user_block` or just `cargo build --all`

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-07-06 10:56:10 +02:00
Rob Bradford
2a6eb31d5b vm-virtio, virtio-devices: Split device implementation from virt queues
Split the generic virtio code (queues and device type) from the
VirtioDevice trait, transport and device implementations.

This also simplifies the feature handling in vhost_user_backend as the
vm-virtio crate no longer has any features.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-07-02 17:09:28 +01:00
dependabot-preview[bot]
aac87196d6 build(deps): bump vm-memory from 0.2.0 to 0.2.1
Bumps [vm-memory](https://github.com/rust-vmm/vm-memory) from 0.2.0 to 0.2.1.
- [Release notes](https://github.com/rust-vmm/vm-memory/releases)
- [Changelog](https://github.com/rust-vmm/vm-memory/blob/v0.2.1/CHANGELOG.md)
- [Commits](https://github.com/rust-vmm/vm-memory/compare/v0.2.0...v0.2.1)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-05-28 17:06:48 +01:00
Rob Bradford
c31ad72ee9 build: Address issues found by 1.43.0 clippy
These are mostly due to use of "bare use" statements and unnecessary vector
creation.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-05-27 19:32:12 +02:00
dependabot-preview[bot]
a4bb96d45c build(deps): bump libc from 0.2.70 to 0.2.71
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.70 to 0.2.71.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.70...0.2.71)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-05-27 09:02:13 +02:00
Rob Bradford
dc66eee8f0 vhost_user_block: Ensure backing file consistency
Correctly implement the virtio specification by setting the writeback
field on the request based on the algorithm in the spec.
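
A minimal sketch of that cache-mode decision, using the feature bit values from
the virtio-blk spec; the helper name and signature are illustrative, not the
actual backend code:

```rust
// Illustrative helper (not the exact backend code); feature bit values
// are taken from the virtio-blk specification.
const VIRTIO_BLK_F_FLUSH: u64 = 1 << 9;
const VIRTIO_BLK_F_CONFIG_WCE: u64 = 1 << 11;

fn writeback(acked_features: u64, config_wce: u8) -> bool {
    if acked_features & VIRTIO_BLK_F_CONFIG_WCE != 0 {
        // The driver controls the cache mode through the config space.
        config_wce == 1
    } else {
        // Without CONFIG_WCE, writeback is only used if FLUSH was negotiated;
        // otherwise the device must behave as writethrough.
        acked_features & VIRTIO_BLK_F_FLUSH != 0
    }
}

fn main() {
    // Firmware that negotiates neither bit gets writethrough behaviour.
    assert!(!writeback(0, 0));
    // A kernel negotiating FLUSH gets writeback.
    assert!(writeback(VIRTIO_BLK_F_FLUSH, 0));
}
```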

TEST=Boot with hypervisor-firmware with CH in verbose mode. See info
level messages saying the cache mode is writethrough while in the firmware
(which has no support for flush or WCE). Once in the Linux kernel, see
messages that the mode is writeback.

Fixes: #1216

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-05-21 08:40:43 +02:00
Rob Bradford
9d88ba7afb vhost_user_block: Use VirtioBlockConfig from vm-virtio
Use the same definition of the struct as vm-virtio.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-05-21 08:40:43 +02:00
Rob Bradford
a813b57f59 vm-virtio, vhost_user_{fs,block,backend}: Move EVENT_IDX handling
Move the method that is used to decide whether the guest should be
signalled into the Queue implementation from vm-virtio. This removes
duplicated code between vhost_user_backend and the vm-virtio block
implementation.
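
For reference, the EVENT_IDX check that such a method boils down to, per the
virtio spec; the function name and signature here are illustrative rather than
the exact vm-virtio API:

```rust
// Illustrative form of the virtio EVENT_IDX check ("used_event" /
// "avail_event" in the spec); not the exact vm-virtio signature.
fn vring_need_event(event_idx: u16, new_idx: u16, old_idx: u16) -> bool {
    // Signal only if the index has moved past the point the other side asked
    // to be notified at, with 16-bit wrap-around handled.
    new_idx.wrapping_sub(event_idx).wrapping_sub(1) < new_idx.wrapping_sub(old_idx)
}

fn main() {
    // The used index moved 4 -> 6, past event_idx = 5: notify.
    assert!(vring_need_event(5, 6, 4));
    // The used index moved 4 -> 5, not yet past event_idx = 5: no notification.
    assert!(!vring_need_event(5, 5, 4));
}
```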

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-05-20 12:56:25 +02:00
Rob Bradford
e6fd6d6360 vhost_user_block: Implement VIRTIO_BLK_F_FLUSH
As the parsing code is reused, the flush feature is already implemented
and ready to be used.

Fixes: #1197

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-05-18 13:11:00 +01:00
Sergio Lopez
c4bf383fd7 vhost_user_*: Create a vhost::Listener in advance
Changes in the vhost crate require VhostUserDaemon users to create and
provide a vhost::Listener in advance. This allows us to adopt
sandboxing strategies in the future, by being able to create the UNIX
socket before switching to a restricted namespace.
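
A hedged sketch of the resulting call pattern, assuming the vhost crate's
Listener::new(path, unlink) constructor and the daemon's start(listener) entry
point; the import path and socket path are illustrative:

```rust
// Assumed import path; the crate is referred to as vhost_rs elsewhere in
// this history.
use vhost_rs::vhost_user::Listener;

fn main() {
    // Create the UNIX socket up front, while we still have the privileges
    // and filesystem view needed to do so...
    let listener = Listener::new("/tmp/vhost-user-blk.sock", true).unwrap();

    // ... optionally enter a restricted namespace/sandbox here ...

    // ... and only then hand the pre-created listener to the daemon:
    // daemon.start(listener).unwrap();
    let _ = listener;
}
```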

Also update the reference to the vhost crate in Cargo.lock to point to
the latest commit from the dragonball branch.

Signed-off-by: Sergio Lopez <slp@redhat.com>
2020-05-14 17:16:23 +02:00
dependabot-preview[bot]
2991fd2a48 build(deps): bump libc from 0.2.69 to 0.2.70
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.69 to 0.2.70.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.69...0.2.70)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-05-12 20:26:43 +02:00
Rob Bradford
b57eeb9628 vhost_user_block: Add "queue_size" to --block-backend
This allows the configuration of the queue size like other backends.
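
A hypothetical invocation showing the new parameter (parameter names mirror
--disk; the binary name, paths and values are examples only):

```
vhost_user_block --block-backend path=/var/lib/images/disk.raw,socket=/tmp/vub.sock,num_queues=2,queue_size=256
```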

Fixes: #1131

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-05-11 09:40:40 +02:00
Rob Bradford
5016fcf8d5 vhost_user_block: Use config::OptionParser to simplify block backend parsing
Switch to using the recently added OptionParser in the code that parses
the block backend.
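
A rough sketch of what such parsing looks like, assuming an OptionParser API
with add/parse/get/convert methods; the function, error strings and parameter
set here are illustrative:

```rust
// Assumes the option_parser crate described at the top of this history.
use option_parser::OptionParser;

fn parse_block_backend(backend: &str) -> Result<(String, String, usize), String> {
    let mut parser = OptionParser::new();
    parser.add("path").add("socket").add("num_queues");
    parser
        .parse(backend)
        .map_err(|_| "failed to parse --block-backend".to_string())?;

    let path = parser.get("path").ok_or_else(|| "path is required".to_string())?;
    let socket = parser
        .get("socket")
        .ok_or_else(|| "socket is required".to_string())?;
    let num_queues = parser
        .convert::<usize>("num_queues")
        .map_err(|_| "invalid num_queues".to_string())?
        .unwrap_or(1);

    Ok((path, socket, num_queues))
}
```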

Fixes: #1092

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-05-11 09:40:40 +02:00
Rob Bradford
f3f398eb44 vhost_user_block: Consolidate the vhost-user-block backend syntax
Rather than repeat the syntax for the vhost-user-block backend in multiple
places, store it in one place and reference it from the required places.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-05-11 09:40:40 +02:00
Rob Bradford
d5bfa2dfc8 vmm, vhost_user_block: Make parameter names match --disk
Make the --block-backend parameters match the --disk parameters.

Fixes: #898

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-04-30 15:20:55 +02:00
Sebastien Boeuf
a517be4eac vhost_user_blk: Add multithreaded multiqueue support
After all the previous refactoring patches, we can finally create
multiple threads under the same backend. This is directly combined with
multiqueue so that one thread is created per queue.
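
A rough sketch of the one-thread-per-queue model (not the actual backend code;
the thread names and the per-queue processing function are placeholders):

```rust
use std::thread;

// Spawn one worker thread per virtqueue.
fn spawn_queue_workers(num_queues: usize) -> Vec<thread::JoinHandle<()>> {
    (0..num_queues)
        .map(|queue_index| {
            thread::Builder::new()
                .name(format!("vu_blk_q{}", queue_index))
                .spawn(move || {
                    // Each worker polls and processes only its own virtqueue.
                    process_queue(queue_index);
                })
                .expect("failed to spawn queue worker")
        })
        .collect()
}

fn process_queue(_queue_index: usize) {
    // Placeholder for the per-queue event loop.
}

fn main() {
    for handle in spawn_queue_workers(2) {
        handle.join().unwrap();
    }
}
```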

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-04-17 12:52:28 +02:00
Sebastien Boeuf
13c8283fbe vhost_user_blk: Make everything private when possible
Doing some cleanup as most of the code does not need to be public.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-04-17 12:52:28 +02:00
Sebastien Boeuf
a31f5f8106 vhost_user_blk: Move disk initialization to VhostUserBlkBackend
In anticipation of the follow-up patches that will run multiple threads for
the same backend, we need the disk initialization to happen in the
high-level structure VhostUserBlkBackend.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-04-17 12:52:28 +02:00
Sebastien Boeuf
e78e34b36a vhost_user_blk: Make DiskFile sharable across threads
The DiskFile will need to be shared across multiple threads when running
multiple queues across these threads. That's why it needs to be put
inside an Arc. The reason for the Mutex is that execute() expects a
mutable object implementing Read + Write + Seek. Unfortunately, this
creates a contention point as the object needs to be locked from each
thread, reducing the performance gain we will get with multiple threads.

Switching to an immutable object would solve this problem, and it will
be addressed later through follow-up patches.
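
A sketch of the sharing pattern being described; DiskFile here is a stand-in
trait for the backend's image type, and the worker/request shape is
illustrative:

```rust
use std::fs::File;
use std::io::{Read, Seek, SeekFrom, Write};
use std::sync::{Arc, Mutex};

// Stand-in for the backend's disk abstraction.
trait DiskFile: Read + Write + Seek + Send {}
impl<T: Read + Write + Seek + Send> DiskFile for T {}

struct QueueWorker {
    disk_image: Arc<Mutex<dyn DiskFile>>,
}

impl QueueWorker {
    fn execute(&self, offset: u64, buf: &mut [u8]) -> std::io::Result<usize> {
        // Every request takes the lock: this is the contention point
        // mentioned above once several queue threads run in parallel.
        let mut image = self.disk_image.lock().unwrap();
        image.seek(SeekFrom::Start(offset))?;
        image.read(buf)
    }
}

fn main() -> std::io::Result<()> {
    let image: Arc<Mutex<dyn DiskFile>> = Arc::new(Mutex::new(File::open("/dev/zero")?));
    let worker = QueueWorker {
        disk_image: Arc::clone(&image),
    };
    let mut buf = [0u8; 512];
    worker.execute(0, &mut buf)?;
    Ok(())
}
```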

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-04-17 12:52:28 +02:00
Sebastien Boeuf
808586ece7 vhost_user_blk: Simplify the code by removing VringWorker
There is no need to retrieve the VringWorker since we don't need to
register any extra file descriptors with the epoll loop.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-04-17 12:52:28 +02:00
Sebastien Boeuf
1a0a2c0182 vhost_user_backend: Provide the thread ID to handle_event()
By adding a "thread_id" parameter to handle_event(), the backend crate
can now indicate to the backend implementation which thread triggered
the processing of some events.

This is applied to the vhost-user-net backend and greatly simplifies the
code since each thread is identical.
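
A deliberately simplified sketch of the per-thread trait shape this and the
neighbouring entries converge on; the real VhostUserBackend trait takes more
parameters (event set, vrings, ...) and richer result types, so treat this as a
hypothetical outline only:

```rust
use vmm_sys_util::eventfd::EventFd;

pub trait Backend: Send + Sync {
    // Immutable receiver plus the index of the worker thread that saw the
    // event, so one backend object can serve several queue threads.
    fn handle_event(&self, device_event: u16, thread_id: usize) -> std::io::Result<bool>;

    // One optional exit EventFd per worker thread.
    fn exit_event(&self, thread_index: usize) -> Option<EventFd>;
}
```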

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-04-14 14:11:41 +02:00
Sebastien Boeuf
cfffb7edb0 vhost_user_backend: Allow for one exit_event per thread
By adding the "thread_index" parameter to the function exit_event() from
the VhostUserBackend trait, the backend crate now has the ability to ask
the backend implementation about the exit event related to a specific
thread.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-04-14 14:11:41 +02:00
Sebastien Boeuf
cd2b03f6ed vhost_user_backend: Return a list of vring workers
Now that multiple worker threads can be run from the backend crate, it
is important that each backend implementation can access every worker
thread.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-04-14 14:11:41 +02:00
Sebastien Boeuf
40e4dc6339 vhost_user_backend: Change handle_event as immutable
By changing the mutability of this function, after adapting all
backends, we should be able to implement multithreading with multiqueue
support without hitting a bottleneck on backend locking.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-04-14 14:11:41 +02:00
Sebastien Boeuf
8f434df1fb vhost_user: Adapt backends to let handle_event be immutable
The blk, net and fs backends have all been updated to avoid the
requirement of having handle_event(&mut self). This will allow the backend
crate to avoid taking a write lock on the backend object, which removes
the potential contention point when multiple threads are handling
multiple queues.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-04-14 14:11:41 +02:00
dependabot-preview[bot]
886c0f9093 build(deps): bump libc from 0.2.68 to 0.2.69
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.68 to 0.2.69.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.68...0.2.69)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-04-14 09:27:04 +01:00
Rob Bradford
8acc15a63c build: Bump vm-memory and linux-loader dependencies
linux-loader depends on vm-memory so must be updated at the same time.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-03-23 14:27:41 +00:00
dependabot-preview[bot]
51f51ea17d build(deps): bump libc from 0.2.67 to 0.2.68
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.67 to 0.2.68.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.67...0.2.68)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-03-17 21:36:38 +00:00
Sergio Lopez
3957d1ee27 vhost_user_backend: call get_used_event from needs_notification
This change, combined with the compiler hint to inline get_used_event,
shortens the window between the memory read and the actual check by
calling get_used_event from needs_notification.

Without it, when putting enough pressure on the vring, it's possible
that a notification is wrongly omitted, causing the queue to stall.
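
A sketch of the idea (the names here are not the exact vm-virtio API):
used_event is read inside needs_notification itself, right before the
comparison, and the getter is marked inline so nothing widens the window:

```rust
use std::sync::atomic::{AtomicU16, Ordering};

#[inline(always)]
fn get_used_event(used_event: &AtomicU16) -> u16 {
    used_event.load(Ordering::Acquire)
}

fn needs_notification(used_event: &AtomicU16, new_idx: u16, old_idx: u16) -> bool {
    // Same wrap-around comparison as the EVENT_IDX check shown earlier,
    // but with used_event read as late as possible.
    let event = get_used_event(used_event);
    new_idx.wrapping_sub(event).wrapping_sub(1) < new_idx.wrapping_sub(old_idx)
}

fn main() {
    let used_event = AtomicU16::new(5);
    assert!(needs_notification(&used_event, 6, 4));
}
```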

Signed-off-by: Sergio Lopez <slp@redhat.com>
2020-03-12 14:34:21 +00:00
Rob Bradford
f0a3e7c4a1 build: Bump linux-loader and vm-memory dependencies
linux-loader now uses the released vm-memory so we must move to that
version at the same time.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-03-05 11:01:30 +01:00
Eryu Guan
5200bf3c59 Cargo: switch vhost_rs to external crate
As the cloud-hypervisor/vhost crate (dragonball branch) is ready to be
used, switch vhost_rs from the internal crate to the external one.
Signed-off-by: Eryu Guan <eguan@linux.alibaba.com>
2020-03-03 13:14:45 +00:00
dependabot-preview[bot]
f190cb05b5 build(deps): bump libc from 0.2.66 to 0.2.67
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.66 to 0.2.67.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.66...0.2.67)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-02-21 08:03:30 +00:00
Sergio Lopez
5c06b7f862 vhost_user_block: Implement optional static polling
Actively polling the virtqueue significantly reduces the latency of
each I/O operation, at the expense of using more CPU time. This
feature is especially useful when using low-latency devices (SSD,
NVMe) as the backend.

This change implements static polling: when a request arrives after the
queue has been idle, vhost_user_block will keep checking the virtqueue
for new requests until POLL_QUEUE_US (50us) has passed without finding one.

POLL_QUEUE_US is defined to be 50us, based on the current latency of
enterprise SSDs (< 30us) and the overhead of the emulation.

This feature is enabled by default and can be disabled by using the
"poll_queue" parameter of "--block-backend".

This is a test using null_blk as a backend for the image, with the
following parameters:

 - null_blk gb=20 nr_devices=1 irqmode=2 completion_nsec=0 no_sched=1

With "poll_queue=false":

fio --ioengine=sync --bs=4k --rw randread --name randread --direct=1
--filename=/dev/vdb --time_based --runtime=10

randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.14
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=169MiB/s][r=43.2k IOPS][eta 00m:00s]
randread: (groupid=0, jobs=1): err= 0: pid=433: Tue Feb 18 11:12:59 2020
  read: IOPS=43.2k, BW=169MiB/s (177MB/s)(1688MiB/10001msec)
    clat (usec): min=17, max=836, avg=21.64, stdev= 3.81
     lat (usec): min=17, max=836, avg=21.77, stdev= 3.81
    clat percentiles (nsec):
     |  1.00th=[19328],  5.00th=[19840], 10.00th=[20352], 20.00th=[21120],
     | 30.00th=[21376], 40.00th=[21376], 50.00th=[21376], 60.00th=[21632],
     | 70.00th=[21632], 80.00th=[21888], 90.00th=[22144], 95.00th=[22912],
     | 99.00th=[28544], 99.50th=[30336], 99.90th=[39168], 99.95th=[42752],
     | 99.99th=[71168]
   bw (  KiB/s): min=168440, max=188496, per=100.00%, avg=172912.00, stdev=3975.63, samples=19
   iops        : min=42110, max=47124, avg=43228.00, stdev=993.91, samples=19
  lat (usec)   : 20=5.90%, 50=94.08%, 100=0.02%, 250=0.01%, 500=0.01%
  lat (usec)   : 750=0.01%, 1000=0.01%
  cpu          : usr=10.35%, sys=25.82%, ctx=432417, majf=0, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=432220,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=1688MiB (1770MB), run=10001-10001msec

Disk stats (read/write):
  vdb: ios=427867/0, merge=0/0, ticks=7346/0, in_queue=0, util=99.04%

With "poll_queue=true" (default):

fio --ioengine=sync --bs=4k --rw randread --name randread --direct=1
--filename=/dev/vdb --time_based --runtime=10

randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.14
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=260MiB/s][r=66.7k IOPS][eta 00m:00s]
randread: (groupid=0, jobs=1): err= 0: pid=422: Tue Feb 18 11:14:47 2020
  read: IOPS=68.5k, BW=267MiB/s (280MB/s)(2674MiB/10001msec)
    clat (usec): min=10, max=966, avg=13.60, stdev= 3.49
     lat (usec): min=10, max=966, avg=13.70, stdev= 3.50
    clat percentiles (nsec):
     |  1.00th=[11200],  5.00th=[11968], 10.00th=[11968], 20.00th=[12224],
     | 30.00th=[12992], 40.00th=[13504], 50.00th=[13760], 60.00th=[13888],
     | 70.00th=[14016], 80.00th=[14144], 90.00th=[14272], 95.00th=[14656],
     | 99.00th=[20352], 99.50th=[23936], 99.90th=[35072], 99.95th=[36096],
     | 99.99th=[47872]
   bw (  KiB/s): min=265456, max=296456, per=100.00%, avg=274229.05, stdev=13048.14, samples=19
   iops        : min=66364, max=74114, avg=68557.26, stdev=3262.03, samples=19
  lat (usec)   : 20=98.84%, 50=1.15%, 100=0.01%, 250=0.01%, 500=0.01%
  lat (usec)   : 750=0.01%, 1000=0.01%
  cpu          : usr=8.24%, sys=21.15%, ctx=684669, majf=0, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=684611,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=267MiB/s (280MB/s), 267MiB/s-267MiB/s (280MB/s-280MB/s), io=2674MiB (2804MB), run=10001-10001msec

Disk stats (read/write):
  vdb: ios=677855/0, merge=0/0, ticks=7026/0, in_queue=0, util=99.04%

Signed-off-by: Sergio Lopez <slp@redhat.com>
2020-02-19 17:13:47 +00:00
Sergio Lopez
0e4e27ea9d vhost_user_block: Make use of the EVENT_IDX feature
Now that vhost_user_backend and vm-virtio do support EVENT_IDX, use it
in vhost_user_block to reduce the number of notifications sent between
the driver and the device.

This is especially useful when using active polling on the virtqueue,
which will be implemented by a future patch.

This is a snapshot of kvm_stat while generating ~60K IOPS with fio on
the guest without EVENT_IDX:

 Event                                         Total %Total CurAvg/s
 kvm_entry                                    393454   20.3    62494
 kvm_exit                                     393446   20.3    62494
 kvm_apic_accept_irq                          378146   19.5    60268
 kvm_msi_set_irq                              369720   19.0    58881
 kvm_fast_mmio                                370497   19.1    58817
 kvm_hv_timer_state                            10197    0.5     1715
 kvm_msr                                        8770    0.5     1443
 kvm_wait_lapic_expire                          7018    0.4     1118
 kvm_apic                                       2768    0.1      538
 kvm_pv_tlb_flush                               2028    0.1      360
 kvm_vcpu_wakeup                                1453    0.1      278
 kvm_apic_ipi                                   1384    0.1      269
 kvm_fpu                                        1148    0.1      164
 kvm_pio                                         574    0.0       82
 kvm_userspace_exit                              574    0.0       82
 kvm_halt_poll_ns                                 24    0.0        3

And this is the snapshot while doing the same thing with EVENT_IDX:

 Event                                         Total %Total CurAvg/s
 kvm_entry                                     35506   26.0     3873
 kvm_exit                                      35499   26.0     3873
 kvm_hv_timer_state                            14740   10.8     1672
 kvm_apic_accept_irq                           13017    9.5     1438
 kvm_msr                                       12845    9.4     1421
 kvm_wait_lapic_expire                         10422    7.6     1118
 kvm_apic                                       3788    2.8      502
 kvm_pv_tlb_flush                               2708    2.0      340
 kvm_vcpu_wakeup                                1992    1.5      258
 kvm_apic_ipi                                   1894    1.4      251
 kvm_fpu                                        1476    1.1      164
 kvm_pio                                         738    0.5       82
 kvm_userspace_exit                              738    0.5       82
 kvm_msi_set_irq                                 701    0.5       69
 kvm_fast_mmio                                   238    0.2        4
 kvm_halt_poll_ns                                 50    0.0        1
 kvm_ple_window_update                            28    0.0        0
 kvm_page_fault                                    4    0.0        0

It can be clearly seen how the number of VM exits per second,
especially the ones related to notifications (kvm_fast_mmio and
kvm_msi_set_irq), is drastically lower.

Signed-off-by: Sergio Lopez <slp@redhat.com>
2020-02-19 17:13:47 +00:00
Sergio Lopez
1ef6996207 vhost_user_backend: Add helpers for EVENT_IDX
Add helpers to Vring and VhostUserSlaveReqHandler for EVENT_IDX, so
consumers of this crate can make use of this feature.

Signed-off-by: Sergio Lopez <slp@redhat.com>
2020-02-19 17:13:47 +00:00
Rob Bradford
c33c38bd96 vhost_user_block: Port to new exit event strategy
Implement the newly added exit_event() method on the VhostUserBackend
trait to allow the backend to provide an EventFD for triggering an exit
of the worker thread.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-02-11 15:21:07 +01:00
Rob Bradford
7ca691fc2f vhost_user_block: Implement and use worker shutdown
Add a kill EventFD and use this to shut down the worker thread when the
backend terminates.
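
A sketch of that shutdown pattern, assuming vmm-sys-util's EventFd; the worker
structure and method names are illustrative, not the backend's real code:

```rust
use vmm_sys_util::eventfd::{EventFd, EFD_NONBLOCK};

struct Worker {
    kill_evt: EventFd,
}

impl Worker {
    fn shutdown(&self) {
        // Writing any value wakes the worker's epoll loop so it can exit.
        let _ = self.kill_evt.write(1);
    }
}

fn main() -> std::io::Result<()> {
    let kill_evt = EventFd::new(EFD_NONBLOCK)?;
    let worker = Worker {
        kill_evt: kill_evt.try_clone()?,
    };
    worker.shutdown();
    Ok(())
}
```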

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-02-10 09:16:49 +01:00
Rob Bradford
710394b872 vhost_user_block: Forward the error from unexpected event
This is consistent with vhost-user-net and vhost-user-fs.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-02-10 09:16:49 +01:00
Rob Bradford
4f4c3d3ebe vhost_user_block: Make Error behave like net and fs versions
Make the Error struct behave more like the net and fs equivalents.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-02-10 09:16:49 +01:00
Yang Zhong
99da1dff90 vhost-user-blk: Add MQ support in backend
Add the num_queues parameter to the vhost-user-blk backend, which
enables MQ support in the backend.

This patch enables MQ support from handle_event, and the
vhost-user-backend crate will allow multiple threads to call
handle_event to handle read/write operations.

Signed-off-by: Yang Zhong <yang.zhong@intel.com>
2020-02-03 09:49:27 +01:00
Rob Bradford
7f73eebbdb vhost_user_block: Split launching backend into its own function
Split the basic launching functionality into its own function in the
newly added vhost_user_block crate.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-01-23 10:30:06 +00:00
Rob Bradford
1dd2451895 vhost_user_block: Refactor vhost_user_block backend code into a new crate
Extract the majority of the code that provides the vhost-user-block
backend into its own crate and port the binary to use it.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-01-23 10:30:06 +00:00