Add socket and vhost_user parameters to this option so that the same
configuration option can be used for both virtio-block and
vhost-user-block. For now it is necessary to specify both the vhost_user
and socket parameters, as auto activation is not yet implemented. The wce
parameter for supporting "Write Cache Enabling" is also added to the
disk configuration.
The original command line parameter is still supported for now and will
be removed in a future release.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
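A minimal sketch of what the extended disk configuration could look like; the field names and the CLI spelling in the comment are assumptions for illustration, not the exact cloud-hypervisor definitions.

```rust
// Hypothetical shape of the extended disk configuration.
#[derive(Debug, Default)]
struct DiskConfig {
    path: String,                 // backing image for virtio-block
    vhost_user: bool,             // route the disk through vhost-user-block
    vhost_socket: Option<String>, // vhost-user socket, explicit until auto activation exists
    wce: bool,                    // "Write Cache Enabling" advertised to the guest
}

fn main() {
    // Roughly what an option such as
    //   --disk path=disk.img,vhost_user=true,socket=/tmp/vhost.sock,wce=true
    // would parse into.
    let disk = DiskConfig {
        path: "disk.img".to_string(),
        vhost_user: true,
        vhost_socket: Some("/tmp/vhost.sock".to_string()),
        wce: true,
    };
    println!("{:?}", disk);
}
```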
Add socket and vhost_user parameters to this option so that the same
configuration option can be used for both virtio-net and vhost-user-net.
For now it is necessary to specify both the vhost_user and socket
parameters, as auto activation is not yet implemented. The original
command line parameter is still supported for now.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
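A minimal sketch of how a single option can drive both back-ends, using a hypothetical NetConfig and dispatch function; names and fields are assumptions, not the real code.

```rust
// Hypothetical network configuration and back-end selection.
struct NetConfig {
    tap: Option<String>,
    mac: String,
    vhost_user: bool,
    vhost_socket: Option<String>,
}

fn create_net_device(cfg: &NetConfig) {
    if cfg.vhost_user {
        // Until auto activation exists, the socket must be given explicitly.
        let socket = cfg
            .vhost_socket
            .as_ref()
            .expect("vhost_user=true requires socket=<path>");
        println!("creating vhost-user-net on {}", socket);
    } else {
        println!("creating virtio-net on tap {:?} (mac {})", cfg.tap, cfg.mac);
    }
}

fn main() {
    create_net_device(&NetConfig {
        tap: None,
        mac: "12:34:56:78:90:ab".to_string(),
        vhost_user: true,
        vhost_socket: Some("/tmp/vhost-net.sock".to_string()),
    });
}
```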
This commit introduces a clear definition of the virtio-fs
configuration structure, allowing the vhost-user-fs device to
rely on it.
This makes the code more readable for developers.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit reuses the clear definition of the virtio-blk
configuration structure, allowing both vhost-user-blk and
virtio-blk devices to rely on it.
This makes the code more readable for developers.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit introduces a clear definition of the virtio-net
configuration structure, allowing both vhost-user-net and
virtio-net devices to rely on it.
This makes the code more readable for developers.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to reduce the number of times VMs are started by the
integration tests, this commit consolidates several very similar
virtio-blk tests into a single one.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Add a new integration test to verify that multiqueue is correctly
supported and that the expected number of queues can be found in the
guest.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
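A minimal sketch of the kind of check such a test can perform inside the guest, assuming the disk shows up as vda and the kernel exposes one directory per hardware queue under /sys/block/<dev>/mq.

```rust
use std::fs;

// Count the hardware queues the guest kernel exposes for a block device;
// /sys/block/<dev>/mq holds one sub-directory per queue.
fn queue_count(dev: &str) -> std::io::Result<usize> {
    Ok(fs::read_dir(format!("/sys/block/{}/mq", dev))?
        .filter_map(Result::ok)
        .filter(|entry| entry.path().is_dir())
        .count())
}

fn main() -> std::io::Result<()> {
    // In the integration test this runs in the guest (e.g. over SSH) and the
    // result is compared against the num_queues value passed on the host side.
    println!("queues: {}", queue_count("vda")?);
    Ok(())
}
```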
This commit improves the existing virtio-blk implementation, allowing
for better I/O performance. The cost for the end user is to accept
allocating more vCPUs to the virtual machine, so that multiple I/O
threads can run in parallel.
One thing to note: the number of vCPUs must be greater than or equal to
the number of queues dedicated to the virtio-blk device.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
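A minimal sketch of the one-I/O-thread-per-queue idea; the Queue type and the processing function are stand-ins for the real virtio handling code.

```rust
use std::thread;

// Stand-in for a virtio queue; the real type carries descriptors and events.
struct Queue {
    index: usize,
}

fn process(queue: Queue) {
    // The real code runs an epoll loop servicing this queue's descriptors.
    println!("servicing queue {} on its own thread", queue.index);
}

fn main() {
    // Expected to be less than or equal to the number of vCPUs, so that the
    // I/O threads can actually run in parallel.
    let num_queues = 4;
    let handles: Vec<_> = (0..num_queues)
        .map(|index| thread::spawn(move || process(Queue { index })))
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
}
```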
The number of queues and the size of each queue were not configurable.
In anticipation of adding multiqueue support, this commit introduces
new parameters to let the user choose the number of queues and the
queue size.
Note that the default values for each of these parameters are identical
to the default values used for vhost-user-blk, that is 1 for the number
of queues and 128 for the queue size.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
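A minimal sketch of defaults matching the values described above; the struct and field names are illustrative only.

```rust
// Illustrative queue settings with the defaults mentioned above:
// 1 queue of 128 descriptors, matching vhost-user-blk.
#[derive(Debug)]
struct QueueSettings {
    num_queues: usize,
    queue_size: u16,
}

impl Default for QueueSettings {
    fn default() -> Self {
        QueueSettings {
            num_queues: 1,
            queue_size: 128,
        }
    }
}

fn main() {
    // e.g. --disk path=disk.img with no queue parameters given.
    println!("{:?}", QueueSettings::default());
}
```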
The trait-bound and non-trait-bound virtio devices can use the same
inner implementation.
The virtio pausable trait definition can also be factorized.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Now that we have factorized the common virtio pausable implementation,
it's cleaner to have a dedicated macro for control queue devices rather
than overload the macro prototype.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
By adding an internal layer of abstraction (the hidden VirtioPausable
trait), we can factorize the virtio common code.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
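A minimal sketch of the factorization pattern with hypothetical types: the shared pause/resume code sits behind an internal trait and the public behaviour is derived from it (here via a blanket implementation, where the real code relies on macros).

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Public trait every device exposes.
trait Pausable {
    fn pause(&self);
    fn resume(&self);
}

// Internal trait exposing only what the shared implementation needs.
trait VirtioPausable {
    fn paused(&self) -> &AtomicBool;
}

// One common implementation for every device exposing its paused flag.
impl<T: VirtioPausable> Pausable for T {
    fn pause(&self) {
        self.paused().store(true, Ordering::SeqCst);
    }
    fn resume(&self) {
        self.paused().store(false, Ordering::SeqCst);
    }
}

struct Block {
    paused: AtomicBool,
}

impl VirtioPausable for Block {
    fn paused(&self) -> &AtomicBool {
        &self.paused
    }
}

fn main() {
    let blk = Block { paused: AtomicBool::new(false) };
    blk.pause();
    println!("paused: {}", blk.paused.load(Ordering::SeqCst));
}
```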
Now that we have unified epoll_thread to potentially hold a vector of
threads, it makes sense to rename the field to its plural form.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Although only the block and net virtio devices can actually be
multi-threaded (for now), handling them as special cases makes the code more
complex.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Devices like virtio-pmem and virtio-fs require some dedicated memory
region to be mapped. The memory mapping done by the DeviceManager is
replaced with MmapRegion from the vm-memory crate.
The unmap happens automatically when the MmapRegion is dropped, which
occurs when the DeviceManager itself gets dropped.
Fixes #240
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
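A minimal sketch of relying on the mapping being released on drop, assuming the vm-memory crate with its mmap backend and its MmapRegion::new/size/as_ptr API; this is an illustration, not the DeviceManager code.

```rust
// Assumes vm-memory with the "backend-mmap" feature enabled in Cargo.toml.
use vm_memory::MmapRegion;

fn main() {
    // 1 MiB anonymous mapping; size and error handling are illustrative.
    let region = MmapRegion::new(1 << 20).expect("mmap failed");
    println!("mapped {} bytes at {:p}", region.size(), region.as_ptr());
    // No explicit munmap: the mapping is released when `region` is dropped,
    // i.e. when the structure owning it goes out of scope.
}
```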
When the CI runs in a brand new VM, there is no problem with the
validity of the images as they have just been downloaded from the Azure
bucket.
When a user runs the CI locally while debugging, they might provision
the images with cloud-init at some point, and later try to run the CI
based on these same images. The CI might then fail at random because the
provisioning step will not run again, as it has already happened.
This patch makes the CI fail early and show an error message notifying
the user that the images are no longer valid, based on their sha1sum.
Fixes #112
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
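A minimal sketch of the early check, assuming a sha1sum binary on the host; the image path and the expected checksum are placeholders.

```rust
use std::process::Command;

fn main() {
    // Placeholder path and checksum; the CI script keeps the known-good value.
    let image = "/path/to/cloud-image.raw";
    let expected = "0000000000000000000000000000000000000000";

    let output = Command::new("sha1sum")
        .arg(image)
        .output()
        .expect("failed to run sha1sum");
    let actual = String::from_utf8_lossy(&output.stdout);

    if !actual.starts_with(expected) {
        eprintln!("{} has been modified, please restore a pristine copy", image);
        std::process::exit(1);
    }
}
```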
Move the GED device's reporting of the device type to scan into an MMIO
region rather than an I/O port.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Rather than have the MemoryManager device sit on the I/O bus, allocate
space for MMIO and add it to the MMIO bus.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The way to get the BAR size is by writing all 1's to the BAR register in
the PCI config space. The mechanism was in place but the parameters were
swapped. The data buffer was provided with the actual offset, while the
offset was provided with the actual all 1's dword. We were effectively
trying to write the real offset at the offset 0xffffffff, which was
failing and resulting in the size being wrong.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
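A self-contained sketch of the standard BAR sizing handshake described above; the ConfigSpace type is a toy stand-in for the real PCI configuration space code.

```rust
// Toy configuration space emulating a device with a 1 MiB memory BAR.
struct ConfigSpace {
    bar: u32,
}

impl ConfigSpace {
    fn read(&self, _offset: u64) -> u32 {
        self.bar
    }
    // Note the parameter order: register offset first, data second. Swapping
    // them (writing the offset as data at offset 0xffff_ffff) is the bug
    // described above.
    fn write(&mut self, _offset: u64, data: u32) {
        if data == 0xffff_ffff {
            // A real device latches all 1's and exposes its size mask instead.
            self.bar = !(0x10_0000u32 - 1);
        } else {
            self.bar = data;
        }
    }
}

fn main() {
    let mut cfg = ConfigSpace { bar: 0 };
    let bar_offset = 0x10u64; // BAR0
    let saved = cfg.read(bar_offset);
    cfg.write(bar_offset, 0xffff_ffff); // all 1's as the *data*, at the BAR offset
    let mask = cfg.read(bar_offset) & 0xffff_fff0; // drop the low flag bits
    cfg.write(bar_offset, saved); // restore the original value
    let size = (!mask).wrapping_add(1);
    println!("BAR size: {:#x} bytes", size); // prints 0x100000 (1 MiB)
}
```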
Update the kernel build instructions to use the configuration and branch
that we test and develop against.
Fixes: #521
Signed-off-by: Rob Bradford <robert.bradford@intel.com>