Remove some in/out parameters and instead access that state as members
of the DeviceManager through the &mut self parameter. This prepares the
way to more easily store state on the DeviceManager.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Remove some in/out parameters and instead access that state as members
of the DeviceManager through the &mut self parameter. A follow-up commit
will change the callee functions that create the devices themselves.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Modify these functions to take an &mut self and become methods on
DeviceManager. This allows the removal of some in/out parameters and
paves the way for further refactoring and simplification.
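A minimal sketch of the shape of this change (parameter and type names
are illustrative, not the exact ones in the tree):

    // Before: a free function threading shared state through
    // in/out parameters.
    fn add_console_device(
        allocator: &mut SystemAllocator,
        virtio_devices: &mut Vec<VirtioDeviceArc>,
    ) -> DeviceManagerResult<()> {
        // ...create the device, push it onto virtio_devices...
        Ok(())
    }

    // After: a method reaching the same state through &mut self.
    impl DeviceManager {
        fn add_console_device(&mut self) -> DeviceManagerResult<()> {
            // self.allocator and self.virtio_devices are now fields, so
            // state naturally accumulates on the DeviceManager.
            Ok(())
        }
    }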
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The MemoryManager should only be included on the I/O bus when doing ACPI
builds as that is the only time it will be interrogated.
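A sketch of the guard, assuming registration happens on the I/O bus held
by the DeviceManager (the port base, length and error variant are
placeholders):

    #[cfg(feature = "acpi")]
    self.io_bus
        .insert(memory_manager.clone(), 0xa00, 0x18) // placeholder range
        .map_err(DeviceManagerError::BusError)?;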
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Currently the MemoryManager is only used on the ACPI code paths after
the DeviceManager has been created. This will change in a future commit
as part of the refactoring, so for now always include it, but give it an
underscore prefix to indicate it may not always be used.
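An illustration of the naming convention (the field type is
hypothetical):

    pub struct DeviceManager {
        // Underscore prefix: only read on ACPI builds today, which keeps
        // non-ACPI builds from warning about an unused field.
        _memory_manager: Arc<Mutex<MemoryManager>>,
    }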
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add a num_queues parameter for the vhost-user-blk backend, enabling
multiqueue (MQ) support in the backend.
This patch enables MQ support through handle_event, and the
vhost-user-backend crate allows multiple threads to call handle_event to
handle read/write operations.
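A sketch of the backend configuration this adds (struct and field names
are illustrative):

    struct VhostUserBlkBackendConfig {
        image: String,
        sock: String,
        // New: the number of virtqueues to expose. The vhost-user-backend
        // crate can then run one worker per queue, each calling
        // handle_event() for its own queue's requests.
        num_queues: usize,
    }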
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
The previous code only supported one queue, and we need MQ support in
the vhost-user-blk device. With this patch it works against SPDK with an
MQ setting.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
We don't need to force the cargo-audit install; we can check whether
it's already available and install it only if it's not.
Also, since we now have workspaces properly set up, we can call directly
into cargo fmt and avoid the find magic incantation.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The script is a development tool that runs all commands in a dedicated
container. This allows for containerized, isolated and reproducible
builds and CI runs.
The script supports the following commands:
* build: Build Cloud Hypervisor binaries (debug and release)
* build-container: Build the container used by the script
* tests: Run unit, cargo and integration tests
$ ./scripts/dev_cli.sh help
Cloud Hypervisor dev_cli.sh
Usage: dev_cli.sh <command> [<command args>]
Available commands:
build [--debug|--release] [-- [<cargo args>]]
Build the Cloud Hypervisor binaries.
--debug Build the debug binaries. This is the default.
--release Build the release binaries.
tests [--unit|--cargo|--integration|--all]
Run the Cloud Hypervisor tests.
--unit Run the unit tests.
--cargo Run the cargo tests.
--integration Run the integration tests.
--all Run all tests.
build-container [--dev]
Build the Cloud Hypervisor container.
--dev Build dev container. This is the default.
help
Display this help message.
Fixes: #682
Fixes: #684
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Because the new set of virtio-iommu patches only allows the topology to
be described through virtio configuration, this patch updates the kernel
branch and the kernel configuration our CI relies on.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that devices attached to the virtual IOMMU are described through
virtio configuration, there is no need for the DeviceManager to store
the list of IDs for all these devices. Instead, things are handled
locally when PCI devices are being added.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of relying on the ACPI tables to describe the devices attached
to the virtual IOMMU, let's use the virtio topology, as the ACPI support
is getting deprecated.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the structures previously introduced, this commit fully
implements the new topology feature. It allows the devices attached to
the virtual IOMMU to be described, which is why a new function
attach_devices() has been introduced. It gives the virtual IOMMU device
the full list of devices which must be attached to it, letting the
device share this information through its virtio configuration.
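A sketch of the new entry point (signature simplified; the endpoint
representation is illustrative):

    impl Iommu {
        // Hand the virtual IOMMU the list of endpoints attached to it so
        // it can publish that topology through its virtio configuration.
        pub fn attach_devices(&mut self, endpoints: Vec<u32>) {
            self.topology.extend(endpoints);
        }
    }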
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The virtio-iommu device defines a new virtio feature allowing the
topology to be discovered fully through virtio configuration. By
topology, we mean the description of the devices attached to the virtual
IOMMU. This is currently managed through ACPI with the IORT and VIOT
tables, but this is another way of describing it.
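Sketched as a feature bit advertised by the device (the bit position
shown is a placeholder, not the value from the spec draft):

    // Topology described through virtio configuration.
    const VIRTIO_IOMMU_F_TOPOLOGY: u64 = 4; // assumed bit position

    // Advertised alongside the device's other feature bits.
    avail_features |= 1u64 << VIRTIO_IOMMU_F_TOPOLOGY;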
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The virtio capability VIRTIO_PCI_CAP_PCI_CFG is exposed through the
device's PCI config space the same way other virtio-pci capabilities
are exposed.
The key difference is that this specific capability is designed as a
way for the guest to access virtio capabilities without mapping the PCI
BARs. This is very rarely used, but it can be useful when it is too
early for the guest to map the BARs.
Note that, per the virtio specification, this capability MUST be
implemented.
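A Rust rendering of the layout the specification defines for this
capability:

    // Generic virtio PCI capability header (struct virtio_pci_cap).
    #[repr(C, packed)]
    struct VirtioPciCap {
        cap_vndr: u8,     // PCI capability ID (vendor specific)
        cap_next: u8,     // next capability pointer
        cap_len: u8,      // capability length
        cfg_type: u8,     // VIRTIO_PCI_CAP_PCI_CFG for this one
        bar: u8,          // which BAR the window points into
        padding: [u8; 3],
        offset: u32,      // offset within the BAR
        length: u32,      // length of the window
    }

    // The PCI configuration access capability (struct virtio_pci_cfg_cap):
    // writing cap.bar/offset/length selects a region, and pci_cfg_data is
    // the window through which it is read or written, all without
    // mapping the BAR itself.
    #[repr(C, packed)]
    struct VirtioPciCfgCap {
        cap: VirtioPciCap,
        pci_cfg_data: [u8; 4],
    }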
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to anticipate the need to support more features related to the
access of a device's PCI config space, this commit changes the self
reference in the function read_config_register() to be mutable.
This also brings some more flexibility for any implementation of the
PciDevice trait.
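The shape of the trait change, as a sketch:

    pub trait PciDevice {
        // Before:
        // fn read_config_register(&self, reg_idx: usize) -> u32;

        // After: &mut self, so implementations may update internal state
        // on a read (e.g. a capability selector such as the one
        // VIRTIO_PCI_CAP_PCI_CFG requires).
        fn read_config_register(&mut self, reg_idx: usize) -> u32;
    }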
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Add a socket and vhost_user parameter to this option so that the same
configuration option can be used for both virtio-block and
vhost-user-block. For now it is necessary to specify both vhost_user
and socket parameters, as auto activation is not yet implemented. The
wce parameter, for "Write Cache Enable" support, is also added to the
disk configuration.
The original command line parameter is still supported for now and will
be removed in a future release.
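A sketch of the extended structure (types are illustrative):

    pub struct DiskConfig {
        pub path: PathBuf,
        pub vhost_user: bool,             // new: use vhost-user-block
        pub vhost_socket: Option<String>, // new: backend socket path
        pub wce: bool,                    // new: Write Cache Enable
    }

which could be driven by a hypothetical invocation such as:

    --disk path=disk.img,vhost_user=true,socket=/tmp/vhost-blk.sock,wce=true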
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add a socket and vhost_user parameter to this option so that the same
configuration option can be used for both virtio-net and vhost-user-net.
For now it is necessary to specify both vhost_user and socket
parameters, as auto activation is not yet implemented. The original
command line
parameter is still supported for now.
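A sketch of the extended structure (types are illustrative):

    pub struct NetConfig {
        pub tap: Option<String>,
        pub mac: MacAddr,
        pub vhost_user: bool,             // new: use vhost-user-net
        pub vhost_socket: Option<String>, // new: backend socket path
    }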
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This commit introduces a clear definition of the virtio-fs
configuration structure, allowing the vhost-user-fs device to rely on
it.
This makes the code more readable for developers.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit reuses the clear definition of the virtio-blk
configuration structure, allowing both vhost-user-blk and
virtio-blk devices to rely on it.
This makes the code more readable for developers.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit introduces a clear definition of the virtio-net
configuration structure, allowing both vhost-user-net and
virtio-net devices to rely on it.
This makes the code more readable for developers.
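An abridged Rust rendering of that structure, following the virtio
specification layout (remaining fields elided):

    #[repr(C, packed)]
    pub struct VirtioNetConfig {
        pub mac: [u8; 6],
        pub status: u16,
        pub max_virtqueue_pairs: u16,
        pub mtu: u16,
        // ...remaining fields per the specification
    }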
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to reduce the number of times VMs are started through the
integration tests, this commit consolidates very similar tests related
to virtio-blk into a single one.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Add a new integration test to verify that multiqueue works correctly
and that we can find the expected number of queues in the guest.
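A sketch of the guest-side check (the helper and device node are
illustrative):

    // The guest exposes one mq/<n> directory per queue; count them and
    // compare against the number of queues the device was configured with.
    let out = guest.ssh_command("ls -1 /sys/block/vda/mq | wc -l");
    assert_eq!(out.trim().parse::<u32>().unwrap(), num_queues);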
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>