In order to anticipate the need for both msi.rs and msix.rs to rely
on some KVM utilities and the InterruptRoute structure to handle the
update of the KVM GSI routes, this commit adds these utilities
directly to the pci crate. So far, they were exclusively used by the
vfio crate, which is why they were located there.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
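For illustration, a minimal sketch of the shape such an InterruptRoute
could take, assuming the vmm-sys-util and libc crates; this is not
necessarily the tree's exact definition:

    use vmm_sys_util::eventfd::EventFd;

    // Illustrative shape only: a GSI paired with the eventfd that KVM
    // signals to inject the interrupt. Registering the route with KVM
    // would go through kvm-ioctls' VmFd::register_irqfd(&irq_fd, gsi).
    pub struct InterruptRoute {
        pub gsi: u32,
        pub irq_fd: EventFd,
    }

    impl InterruptRoute {
        pub fn new(gsi: u32) -> std::io::Result<Self> {
            Ok(InterruptRoute {
                gsi,
                irq_fd: EventFd::new(libc::EFD_NONBLOCK)?,
            })
        }
    }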
Because we will need to share the same list of GSI routes across
multiple PCI devices (virtio-pci, VFIO), this commit moves the
creation of such a list to a higher-level location in the code.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
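As an illustration, the shared list could look roughly like the
following sketch, assuming the kvm-bindings crate; the concrete type
in the tree may differ:

    use std::collections::HashMap;
    use std::sync::{Arc, Mutex};
    use kvm_bindings::kvm_irq_routing_entry;

    // One entry per GSI; cloning the Arc hands the same table to each
    // PCI device (virtio-pci, VFIO) so route updates are visible to all.
    type GsiMsiRoutes = Arc<Mutex<HashMap<u32, kvm_irq_routing_entry>>>;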
This test is a variant of test_boot_vhost_user_blk(), named
test_boot_vhost_user_blk_direct(), that instantiates the
vhost-user-blk daemon with 'direct=true', to test this recently
introduced feature for opening files with O_DIRECT.
Signed-off-by: Sergio Lopez <slp@redhat.com>
Add missing WCE (write-cache enable) property support. This is not
only an enhancement, but also a fix for a bug.
Right now, when vhost_user_blk uses a qcow2 image, it doesn't write
the QCOW2 metadata until the guest explicitly requests a flush. In
practice, this is equivalent to write-back semantics.
Without WCE, the guest assumes write-through for the virtio_blk
device and doesn't send those flush requests. By adding support for
WCE, and enabling it by default, we ensure the guest does send said
requests.
Supporting "WCE = false" would require updating our qcow2
implementation to ensure that, when required, it honors write-through
semantics by not deferring the updates to the QCOW2 metadata.
Signed-off-by: Sergio Lopez <slp@redhat.com>
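A minimal sketch of how the feature bit could be advertised, using
the feature bit values from the virtio spec; the helper name is
illustrative:

    // Feature bits from the virtio spec.
    const VIRTIO_BLK_F_FLUSH: u64 = 9;       // cache flush command support
    const VIRTIO_BLK_F_CONFIG_WCE: u64 = 11; // writeback field in config space

    // Hypothetical helper: advertise WCE (on by default) alongside flush.
    fn avail_features(wce: bool) -> u64 {
        let mut features = 1 << VIRTIO_BLK_F_FLUSH;
        if wce {
            features |= 1 << VIRTIO_BLK_F_CONFIG_WCE;
        }
        features
    }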
Add support for opening the disk images with O_DIRECT. This allows
bypassing the host's file system cache, which is useful for avoiding
cache pollution and for better data integrity.
This mode of operation can be enabled by adding the "direct=<bool>"
parameter to the "backend" argument:
./target/debug/vhost_user_blk --backend image=test.raw,sock=/tmp/vhostblk,direct=true
The "direct" parameter defaults to "false", to preserve the original
behavior.
Signed-off-by: Sergio Lopez <slp@redhat.com>
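A sketch of how such an open can be implemented on Linux, assuming
the libc crate; the function name is illustrative:

    use std::fs::{File, OpenOptions};
    use std::os::unix::fs::OpenOptionsExt;

    // Open a disk image, optionally bypassing the host page cache.
    fn open_disk(path: &str, direct: bool) -> std::io::Result<File> {
        let mut options = OpenOptions::new();
        options.read(true).write(true);
        if direct {
            // O_DIRECT imposes alignment requirements on buffers,
            // offsets and lengths (see open(2)).
            options.custom_flags(libc::O_DIRECT);
        }
        options.open(path)
    }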
Use RawFile as backend instead of File. This allows us to abstract
the access to the actual image with a specialized layer, so we have a
place where we can deal with the low-level peculiarities.
Signed-off-by: Sergio Lopez <slp@redhat.com>
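A sketch of the idea, with illustrative names; the real wrapper would
also implement Write and Seek by delegation:

    use std::fs::File;
    use std::io::{Read, Result};

    // Wrap the image file so low-level details (e.g. O_DIRECT alignment)
    // are handled in one place instead of at every call site.
    pub struct RawFile {
        file: File,
        alignment: Option<usize>, // Some(..) when opened with O_DIRECT
    }

    impl Read for RawFile {
        fn read(&mut self, buf: &mut [u8]) -> Result<usize> {
            // Alignment fix-ups for O_DIRECT would be applied here.
            self.file.read(buf)
        }
    }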
Doing I/O on an image opened with O_DIRECT requires adhering to
certain restrictions, namely that the following elements be aligned:
- Address of the source/destination memory buffer.
- File offset.
- Length of the data to be read/written.
The actual alignment value depends on various elements, and according
to open(2) "(...) there is currently no filesystem-independent
interface for an application to discover these restrictions (...)".
To discover this value, we iterate through a list of alignments
(currently, 512 and 4096), calling pread() with each one and checking
whether the operation succeeded.
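A sketch of that probing loop, assuming the libc crate; names are
illustrative:

    use std::os::unix::io::RawFd;

    // Probe which alignment the filesystem accepts for O_DIRECT I/O
    // by attempting a small pread() with each candidate.
    fn discover_alignment(fd: RawFd) -> Option<usize> {
        for &align in &[512usize, 4096] {
            let layout = std::alloc::Layout::from_size_align(align, align).ok()?;
            let buf = unsafe { std::alloc::alloc(layout) };
            if buf.is_null() {
                return None;
            }
            let ret = unsafe { libc::pread(fd, buf as *mut libc::c_void, align, 0) };
            unsafe { std::alloc::dealloc(buf, layout) };
            if ret >= 0 {
                // pread() succeeded: buffer, offset and length were aligned.
                return Some(align);
            }
        }
        None
    }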
We also extend RawFile so it can be used as a backend for QcowFile,
so the latter can be easily adapted to support O_DIRECT too.
Signed-off-by: Sergio Lopez <slp@redhat.com>
Set the queue number to 4 to verify that the vhost-user-net device
and backend work well with multiple queues.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
There are two new options, num_queues and queue_size, defined for
virtio-net. Add them to test_valid_vm_config_net, which is used to
validate that both the CLI and the OpenAPI generate the same
configuration.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
Update the common parts in net_util.rs under vm-virtio to add
multiqueue (MQ) support, and enable MQ for the virtio-net device, the
vhost-user-net device and the vhost-user-net backend. Multiple
threads will be created, with each thread responsible for handling
one queue pair. To get the best performance, the number of vCPUs
should match the number of queue pairs defined for the net device,
due to CPU affinity.
Multi-threading support is not yet added to the vhost-user-net
backend; it will be added in the future.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
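A sketch of the threading model; names are illustrative:

    use std::thread;

    // Spawn one handler thread per queue pair; each thread runs its
    // own epoll loop over that pair's rx/tx queue events and tap fd.
    fn spawn_queue_handlers(num_queue_pairs: usize) -> Vec<thread::JoinHandle<()>> {
        (0..num_queue_pairs)
            .map(|pair| {
                thread::Builder::new()
                    .name(format!("virtio-net-q{}", pair))
                    .spawn(move || {
                        // Per-pair epoll loop would run here.
                    })
                    .unwrap()
            })
            .collect()
    }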
Add num_queues and queue_size for the virtio-net device to make them
configurable, and add the associated options to the command line.
Update cloud-hypervisor.yaml with the new options for NetConfig.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
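The extended config could look roughly like this sketch; the default
values shown are illustrative:

    // Sketch of NetConfig with the two new fields.
    pub struct NetConfig {
        pub tap: Option<String>,
        pub mac: [u8; 6],
        pub num_queues: usize, // e.g. defaults to 2 (one rx/tx pair)
        pub queue_size: u16,   // e.g. defaults to 256 descriptors
    }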
Add support to allow VMMs to open the same tap device multiple times,
creating multiple file descriptors in the process.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
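On Linux this relies on multi-queue tap: each additional open of
/dev/net/tun followed by TUNSETIFF on the same interface name, with
IFF_MULTI_QUEUE set, yields another independent fd. A sketch assuming
the libc crate; not the crate's actual implementation:

    use std::fs::{File, OpenOptions};
    use std::os::unix::io::AsRawFd;

    const TUNSETIFF: libc::c_ulong = 0x4004_54ca; // _IOW('T', 202, int)
    const IFF_TAP: libc::c_short = 0x0002;
    const IFF_NO_PI: libc::c_short = 0x1000;
    const IFF_MULTI_QUEUE: libc::c_short = 0x0100;

    #[repr(C)]
    struct IfReq {
        ifr_name: [u8; 16],
        ifr_flags: libc::c_short,
        _pad: [u8; 22],
    }

    // Each call returns an fd bound to one queue of the tap `name`
    // (which must be at most 15 bytes, per IFNAMSIZ).
    fn open_tap_queue(name: &str) -> std::io::Result<File> {
        let file = OpenOptions::new().read(true).write(true).open("/dev/net/tun")?;
        let mut req = IfReq {
            ifr_name: [0; 16],
            ifr_flags: IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE,
            _pad: [0; 22],
        };
        req.ifr_name[..name.len()].copy_from_slice(name.as_bytes());
        if unsafe { libc::ioctl(file.as_raw_fd(), TUNSETIFF, &req) } < 0 {
            return Err(std::io::Error::last_os_error());
        }
        Ok(file)
    }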
The current guest kernel checks the online CPU count. In principle,
if the online CPU count is not smaller than the number of queue pairs
the VMM reported, net packets can be put to and taken from all the
virtqueues; otherwise, only a number of queue pairs matching the
online CPU count will carry packets. The guest kernel sends a command
through the control queue to tell the VMM the actual number of queue
pairs it can currently use. Add MQ processing to the control queue
handling to get the queue pair number; the VMM verifies that it is in
a valid range, and nothing more.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
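A sketch of that validation, using the constants defined by the
virtio spec; the function name is illustrative:

    // Control-queue MQ constants from the virtio spec.
    const VIRTIO_NET_CTRL_MQ: u8 = 4;
    const VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET: u8 = 0;
    const VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN: u16 = 1;
    const VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX: u16 = 0x8000;

    // The VMM only checks that the requested number of queue pairs is
    // in the valid range; nothing else is done with the value here.
    fn handle_mq(class: u8, cmd: u8, queue_pairs: u16) -> Result<(), ()> {
        if class != VIRTIO_NET_CTRL_MQ || cmd != VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET {
            return Err(());
        }
        if !(VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN..=VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX)
            .contains(&queue_pairs)
        {
            return Err(());
        }
        Ok(())
    }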
When the VIRTIO_NET_F_CTRL_VQ feature is negotiated, a control queue
exists alongside the Tx/Rx virtqueues, and an epoll handler should be
started to monitor and handle the control queue events.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
As the virtio 1.1 spec states, the driver uses the control queue to
send commands to manipulate various features of the device, such as
VIRTIO_NET_F_MQ, which is required for multiqueue support. This adds
the control queue handling process.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
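Per the spec, each control buffer starts with a two-byte header and
ends with a one-byte ack written back by the device; a sketch:

    // Layout of the control command header (virtio 1.1, struct
    // virtio_net_ctrl_hdr): class selects the feature, cmd the operation.
    #[repr(C, packed)]
    struct CtrlHdr {
        class: u8, // e.g. VIRTIO_NET_CTRL_MQ
        cmd: u8,   // e.g. VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET
    }

    // Ack values the device writes into the last descriptor.
    const VIRTIO_NET_OK: u8 = 0;
    const VIRTIO_NET_ERR: u8 = 1;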
Since the common parts have been moved into net_util.rs under
vm-virtio, refactor the code for the virtio-net device, the
vhost-user-net device and the backend to shrink the code size and
improve readability.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
There is some common logic shared among the virtio-net device, the
vhost-user-net device and the vhost-user-net backend; abstract those
parts into net_util.rs to improve code maintainability and
readability.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
According to the virtio spec, for used buffer notifications, if the
MSI-X capability is enabled and the queue's MSI-X vector is
VIRTIO_MSI_NO_VECTOR (0xffff), the device must not deliver an
interrupt for that virtqueue.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
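A sketch of the check; the signalling callback is illustrative:

    const VIRTIO_MSI_NO_VECTOR: u16 = 0xffff;

    // Suppress the used-buffer notification when the queue's MSI-X
    // vector is VIRTIO_MSI_NO_VECTOR, as the spec mandates.
    fn maybe_signal_used_queue<F: Fn(u16)>(msix_vector: u16, signal: F) {
        if msix_vector == VIRTIO_MSI_NO_VECTOR {
            return;
        }
        signal(msix_vector);
    }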
Use independent bits for storing whether a CPU or memory device has
changed when reporting changes via the ACPI GED interrupt. This
prevents a later notification squashing an earlier one and ensures
that hotplugging both CPU and memory at the same time succeeds.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
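A sketch of the independent bits; the values are illustrative:

    // One bit per device class, so notifications cannot squash each other.
    const CPU_DEVICES_CHANGED: u8 = 1 << 0;
    const MEMORY_DEVICES_CHANGED: u8 = 1 << 1;

    struct GedState {
        pending: u8,
    }

    impl GedState {
        fn notify_cpu(&mut self) { self.pending |= CPU_DEVICES_CHANGED; }
        fn notify_memory(&mut self) { self.pending |= MEMORY_DEVICES_CHANGED; }
    }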
If a new amount of RAM is requested in the VmResize command, try to
hotplug it if it is an increase (MemoryManager::resize() silently
ignores decreases).
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
If there is a GED interrupt and the field indicates that the memory
device has changed, trigger a scan of the memory devices.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Generate and expose the DSDT table entries required to support memory
hotplug. The AML methods call into the MemoryManager via I/O ports
exposed as fields.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Expose the details of hotplug RAM slots via an I/O port. This will be
consumed by the ACPI DSDT tables to report the hotplug memory details to
the guest.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
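The register layout in the sketch below is purely illustrative; the
real code defines its own offsets:

    // The AML first writes a slot selector, then reads that slot's
    // details back through the same port range.
    struct MemorySlot {
        base: u64,
        length: u64,
        inserting: bool,
    }

    struct HotplugPort {
        selected: usize,
        slots: Vec<MemorySlot>,
    }

    impl HotplugPort {
        fn write_selector(&mut self, slot: usize) {
            self.selected = slot;
        }

        fn read_status(&self) -> u8 {
            self.slots[self.selected].inserting as u8
        }
    }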
Add a "resize()" method on MemoryManager which will create a new memory
allocation based on the difference between the desired RAM amount and
the amount already in use. After allocating the added RAM using the same
backing method as the boot RAM store the details in a vector and update
the KVM map and create a new GuestMemoryMmap and replace all the users.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
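A condensed sketch of the flow; the types and names are illustrative:

    use std::sync::Arc;

    struct Region { size: u64 } // stand-in for a mmap-backed region

    struct MemoryManager {
        current_ram: u64,
        regions: Vec<Arc<Region>>, // persisted so memory can be rebuilt
    }

    impl MemoryManager {
        fn resize(&mut self, desired_ram: u64) {
            if desired_ram <= self.current_ram {
                return; // decreases are silently ignored
            }
            // Allocate only the difference, with the same backing as
            // the boot RAM.
            let delta = desired_ram - self.current_ram;
            self.regions.push(Arc::new(Region { size: delta }));
            // The real code would add a KVM memory slot for the new
            // region, rebuild GuestMemoryMmap from `regions` and
            // replace all its users.
            self.current_ram = desired_ram;
        }
    }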
For now the new memory size is only used after a reboot but support for
hotplugging memory will be added in a later commit.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When the value is read from the I/O port via the ACPI AML functions
to determine what has been triggered, the notification value is
reset, preventing a second read from exposing the value. If we need
to support multiple types of GED notification (such as memory
hotplug), then we should avoid reading the value multiple times.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
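In other words, the port behaves as read-to-clear; a sketch:

    // Read-to-clear: the first AML read consumes the pending bits, so
    // the value must be read once and each bit tested from that read.
    fn read_notification(pending: &mut u8) -> u8 {
        let value = *pending;
        *pending = 0;
        value
    }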
These fields allow you to expose a component part of an existing
buffer, such as a resource template, as a new field. This is required
to expose the memory CRS details.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This specifies how much address space should be reserved for
hotplugging of RAM. This space is reserved by moving the start of the
device area up by the desired amount.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In order to be able to support resizing either vCPUs or memory or
both, make the fields in the resize command optional.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
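A sketch of the command payload with optional fields, assuming serde;
the names are illustrative:

    use serde::Deserialize;

    // Either field may be omitted; None means "leave unchanged".
    #[derive(Deserialize)]
    struct VmResizeData {
        desired_vcpus: Option<u8>,
        desired_ram: Option<u64>,
    }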
Make the GuestMemoryMmap from a Vec<Arc<GuestRegionMmap>>. By using
this method, we can persist a set of regions in the MemoryManager and
then extend this set with a newly created region. Ultimately, that
will allow the hotplug of memory.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
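This matches vm-memory's from_arc_regions() constructor; a sketch:

    use std::sync::Arc;
    use vm_memory::{GuestMemoryMmap, GuestRegionMmap};

    // Rebuild the guest memory view from the persisted region set,
    // e.g. after pushing a newly hotplugged region onto the vector.
    fn rebuild_guest_memory(regions: Vec<Arc<GuestRegionMmap>>) -> GuestMemoryMmap {
        GuestMemoryMmap::from_arc_regions(regions)
            .expect("failed to rebuild guest memory")
    }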
If neither PCI nor MMIO is built in, we should not bother creating
any virtio devices at all.
When building a minimal VMM made of a kernel with an initramfs and a
serial console, the RNG virtio device is still created even though there
is no way it can ever get probed.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
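A sketch of the gating, assuming feature names along the lines of
pci_support/mmio_support; the names are illustrative:

    // Only build the virtio device creation path when at least one
    // transport is compiled in; otherwise nothing could ever probe them.
    #[cfg(any(feature = "pci_support", feature = "mmio_support"))]
    fn create_virtio_devices() {
        // Device creation (rng, block, net, ...) would live here.
    }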