Check the Rust formatting on the CI agent rather than just
reformatting the code.
Also fix a formatting error that slipped in whilst the cargo fmt check
was not working correctly.
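For reference, formatting can be verified without rewriting the tree
by using cargo's check mode, which exits non-zero if any file would be
reformatted:

cargo fmt -- --check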
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
A previous version of this change attempted to avoid panicking by not
using .expect() when handling an error from writing to the log file.
Unfortunately the eprintln!() macro that was used to replace the
.expect() also panics if stderr cannot be used. Instead, swallow the
error completely: if writing to the log fails at logging time, it is
almost certain that any message about the log failure would not be
seen either.
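A minimal sketch of the pattern (the real logger code differs in the
details):

    use std::io::Write;

    fn log_message(log_file: &mut std::fs::File, msg: &str) {
        // Deliberately discard the result: if this write fails, a
        // message about the failure would not be seen either.
        let _ = writeln!(log_file, "{}", msg);
    }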
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The vhost-user-blk binary will be removed, as the next release will
integrate this block backend into the cloud-hypervisor binary. Since
the block backend code has gained num_queues command line support, we
need to update the multiple queues help info for this block backend
in cloud-hypervisor.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
We measure the memory overhead that the VMM process adds to the guest VM
and compare it with a maximum acceptable limit. The test is run against
a simple VM, running 1 vCPU and 512MB of RAM. Although this is by no
means a comprehensive VMM overhead measurement, it will allow us to
detect if and when a PR makes our code cross an arbitrary memory
overhead threshold.
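A rough sketch of how the overhead can be sampled (the helper name is
ours; VmRSS is standard procfs and is reported in kiB):

    fn process_rss_kib(pid: u32) -> Option<u64> {
        let status =
            std::fs::read_to_string(format!("/proc/{}/status", pid)).ok()?;
        status
            .lines()
            .find(|line| line.starts_with("VmRSS:"))
            .and_then(|line| line.split_whitespace().nth(1))
            .and_then(|value| value.parse().ok())
    }

The overhead is then the VMM process RSS minus the RAM assigned to
the guest.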
Fixes: #64
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
As this can happen while the VMM is running, we should be very
careful not to panic, as that can lead to a thread used by the VM
disappearing.
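The general pattern (a generic sketch, not the actual VMM code) is to
hand the error back to the caller instead of unwrapping it:

    use std::io;

    // Returning a Result lets the caller decide how to recover,
    // instead of .expect() tearing down a thread the VM depends on.
    fn read_guest_config(path: &str) -> io::Result<String> {
        std::fs::read_to_string(path)
    }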
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add the num_queues parameter for the vhost-user-blk backend, which
enables MQ support in the backend.
This patch enables MQ support from handle_event, and the
vhost-user-backend crate will run multiple threads calling
handle_event to handle read/write operations.
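For illustration, the backend might then be launched along these
lines (exact option syntax per the binary's help):

./target/debug/vhost_user_blk --backend image=test.raw,sock=/tmp/vhostblk,num_queues=4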
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Add a socket and vhost_user parameter to this option so that the same
configuration option can be used for both virtio-block and
vhost-user-block. For now it is necessary to specify both vhost_user
and socket parameters as auto activation is not yet implemented. The wce
parameter for supporting "Write Cache Enabling" is also added to the
disk configuration.
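For example (parameter values are illustrative):

--disk path=disk.img,vhost_user=true,socket=/tmp/vhostblk,wce=true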
The original command line parameter is still supported for now and will
be removed in a future release.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add a socket and vhost_user parameter to this option so that the same
configuration option can be used for both virtio-net and vhost-user-net.
For now it is necessary to specify both vhost_user and socket parameters
as auto activation is not yet implemented. The original command line
parameter is still supported for now.
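For example (parameter values are illustrative):

--net mac=12:34:56:78:90:ab,vhost_user=true,socket=/tmp/vhostnet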
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In order to reduce the number of times VMs are started through the
integration tests, this commit consolidates very similar tests related
to virtio-blk into a single one.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Add a new integration test to verify that multiqueue is correctly
supported and that we can find the expected number of queues in the
guest.
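One way to count the queues from inside the guest is through the
blk-mq sysfs hierarchy, e.g.:

ls /sys/block/vda/mq | wc -l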
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The number of queues and the size of each queue were not configurable.
In anticipation of adding multiqueue support, this commit introduces
some new parameters to let the user decide about the number of queues
and the queue size.
Note that the default values for each of these parameters are identical
to the default values used for vhost-user-blk, that is 1 for the number
of queues and 128 for the queue size.
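With the new parameters, those defaults correspond to (disk path
illustrative):

--disk path=disk.img,num_queues=1,queue_size=128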
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Add explicit exclusions between --net-backend and --block-backend
(and vice versa), and also with --kernel, as this is the one "VM
boot" option that is never optional.
Ideally we would make the backend arguments conflict with the
"vm-group" argument group; however this does not work, as the group
includes some arguments that have a default value set, and clap
therefore thinks those arguments are always provided. Conflicting
with --kernel is thus a reasonable compromise.
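A sketch of the clap wiring this amounts to (argument definitions
abbreviated):

    use clap::Arg;

    let net_backend = Arg::with_name("net-backend")
        .long("net-backend")
        .conflicts_with_all(&["block-backend", "kernel"]);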
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Split the basic launching functionality into its own function in the
newly added vhost_user_block crate.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Split the basic launching functionality into its own function in the
newly added vhost_user_net crate.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Extract the majority of the code that provides the vhost-user-block
backend into its own crate and port the binary to use it.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Extract the majority of the code that provides the vhost-user-net
backend into its own crate and port the binary to use it.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Defer generating the VM params until after the logging has been
enabled. This will make it possible to reuse the logging for vhost
backends.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Previous commit added 'direct' support for virtio-blk to be able to
open files backing disk images with O_DIRECT, so add an integration
test for this new feature.
Signed-off-by: Sergio Lopez <slp@redhat.com>
Previous commit added 'readonly' support for virtio-blk to be able to
configure read-only devices, so add an integration test for this new
feature.
Signed-off-by: Sergio Lopez <slp@redhat.com>
This test is a variant of test_boot_vhost_user_blk(), named
test_boot_vhost_user_blk_direct(), that instantiates the
vhost-user-blk daemon with 'direct=true' to exercise the recently
introduced support for opening files with O_DIRECT.
Signed-off-by: Sergio Lopez <slp@redhat.com>
Add missing WCE (write cache enable) property support. This is not
only an enhancement, but also a fix for a bug.
Right now, when vhost_user_blk uses a qcow2 image, it doesn't write
the QCOW2 metadata until the guest explicitly requests a flush. In
practice, this is equivalent to writeback semantics.
Without WCE, the guest assumes write-through for the virtio_blk
device, and doesn't send those flush requests. By adding support for
WCE, and enabling it by default, we ensure the guest does send said
requests.
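At the virtio level, this amounts to advertising the relevant feature
bits (bit positions per the virtio specification; surrounding code
omitted):

    // Bit positions from the virtio specification.
    const VIRTIO_BLK_F_FLUSH: u64 = 9; // flush command supported
    const VIRTIO_BLK_F_CONFIG_WCE: u64 = 11; // write cache is controllable

    fn blk_avail_features() -> u64 {
        (1 << VIRTIO_BLK_F_FLUSH) | (1 << VIRTIO_BLK_F_CONFIG_WCE)
    }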
Supporting "WCE = false" would require updating our qcow2
implementation to ensure that, when required, it honors the write
through semantics by not deferring the updates to QCOW2 metadata.
Signed-off-by: Sergio Lopez <slp@redhat.com>
Add support for opening the disk images with O_DIRECT. This allows
bypassing the host's file system cache, which is useful to avoid
polluting its cache and for better data integrity.
This mode of operation can be enabled by adding the "direct=<bool>"
parameter to the "backend" argument:
./target/debug/vhost_user_blk --backend image=test.raw,sock=/tmp/vhostblk,direct=true
The "direct" parameter defaults to "false", to preserve the original
behavior.
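On the Rust side, the flag is passed when opening the image (a sketch
assuming the libc crate):

    use std::fs::{File, OpenOptions};
    use std::io;
    use std::os::unix::fs::OpenOptionsExt;

    fn open_direct(path: &str) -> io::Result<File> {
        OpenOptions::new()
            .read(true)
            .write(true)
            // Bypass the host page cache.
            .custom_flags(libc::O_DIRECT)
            .open(path)
    }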
Signed-off-by: Sergio Lopez <slp@redhat.com>
Use RawFile as backend instead of File. This allows us to abstract
the access to the actual image with a specialized layer, so we have a
place where we can deal with the low-level peculiarities.
Signed-off-by: Sergio Lopez <slp@redhat.com>
Doing I/O on an image opened with O_DIRECT requires adhering to
certain alignment restrictions; the following elements must be
aligned:
- Address of the source/destination memory buffer.
- File offset.
- Length of the data to be read/written.
The actual alignment value depends on various elements, and according
to open(2) "(...) there is currently no filesystem-independent
interface for an application to discover these restrictions (...)".
To discover such value, we iterate through a list of alignments
(currently, 512 and 4096) calling pread() with each one and checking
if the operation succeeded.
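A condensed sketch of the probing (error handling trimmed; the real
code lives in RawFile):

    use std::os::unix::io::RawFd;

    fn discover_alignment(fd: RawFd) -> Option<usize> {
        for &align in &[512usize, 4096] {
            // Try an aligned pread() with an aligned buffer; O_DIRECT
            // rejects requests that violate the alignment rules.
            let layout = std::alloc::Layout::from_size_align(align, align).ok()?;
            let buf = unsafe { std::alloc::alloc(layout) };
            if buf.is_null() {
                return None;
            }
            let ret = unsafe { libc::pread(fd, buf as *mut libc::c_void, align, 0) };
            unsafe { std::alloc::dealloc(buf, layout) };
            if ret >= 0 {
                return Some(align);
            }
        }
        None
    }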
We also extend RawFile so it can be used as a backend for QcowFile,
so the latter can be easily adapted to support O_DIRECT too.
Signed-off-by: Sergio Lopez <slp@redhat.com>
Update the queue number to 4 to verify that the vhost-user-net device
and backend work well with multiple queues.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
There are two new options, num_queues and queue_size, defined for
virtio-net. Add them to test_valid_vm_config_net, which is used to
validate that both the CLI and the OpenAPI generate the same
configuration.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
Update the common part in net_util.rs under vm-virtio to add MQ
support, and meanwhile enable MQ for the virtio-net device, the
vhost-user-net device and the vhost-user-net backend. Multiple
threads will be created, with each thread responsible for handling
one queue pair, as sketched below. To get the best performance, the
number of vCPUs should match the number of queue pairs defined for
the net device, due to the CPU affinity.
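A sketch of the threading model (names hypothetical):

    use std::thread;

    // One thread drives the RX/TX queues of a single queue pair.
    fn spawn_queue_pair_threads(num_queue_pairs: usize) -> Vec<thread::JoinHandle<()>> {
        (0..num_queue_pairs)
            .map(|pair| {
                thread::Builder::new()
                    .name(format!("virtio-net-q{}", pair))
                    .spawn(move || {
                        // The epoll loop for queue pair `pair` goes here.
                        let _ = pair;
                    })
                    .expect("failed to spawn queue thread")
            })
            .collect()
    }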
Multiple thread support is not yet added for the vhost-user-net
backend; it will be added in the future.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
Add num_queues and queue_size for the virtio-net device to make them
configurable, and add the associated options to the command line.
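For example (values illustrative):

--net tap=,mac=12:34:56:78:90:ab,num_queues=4,queue_size=256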
Update cloud-hypervisor.yaml with the new options for NetConfig.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
Since the common parts have been moved into net_util.rs under
vm-virtio, refactor the virtio-net device, vhost-user-net device and
backend code to shrink the code size and improve readability at the
same time.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
This specifies how much address space should be reserved for
hotplugging of RAM. This space is reserved by moving the start of the
device area up by the desired amount.
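In other words (a sketch; names hypothetical):

    // The device area now starts above the space reserved for
    // hotpluggable RAM, instead of directly above guest RAM.
    fn device_area_start(ram_end: u64, hotplug_size: u64) -> u64 {
        ram_end + hotplug_size
    }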
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
With the latest version of virtiofsd, some changes have been made
to the socket path parameter. This needs to be updated in the
cloud-hypervisor codebase, so that our CI keeps running correctly.
Fixes: #611
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The cache= option of the virtiofsd backend controls how the guest
page cache is used. For most use cases, we want to bypass the guest
page cache to reduce the guest memory footprint, which is why
"cache=none" should be the default. "cache=always" remains available
when the user wants to make specific use of the guest page cache, but
it should not be the default anymore.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
With the latest kernel, the virtio-fs mount command has been
simplified, and this needs to be applied everywhere in our tests and
documentation.
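The simplified form now looks like (tag and mount point are
examples):

mount -t virtiofs myfs /mnt/virtiofs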
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Because we don't always reach the expected footprint improvements
with KSM, let's revise the numbers. Reducing the expectations and
increasing the number of pages to scan should stabilize the CI.
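For reference, the scan rate is controlled through the standard KSM
sysfs knob (the value here is illustrative):

echo 2000 > /sys/kernel/mm/ksm/pages_to_scan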
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>