By making the registration functions immutable, this patch prevents
self-borrowing issues with the RwLock on self.mem.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Following the refactoring that allows multiple threads to access the
same instance of the guest memory, this patch goes one step further by
wrapping that instance in a RwLock. This anticipates the future need to
modify the content of the guest memory at runtime.
The reasons for adding regions to an existing guest memory could be:
- Add virtio-pmem and virtio-fs regions after the guest memory was
created.
- Support future hotplug of devices, memory, or anything that would
require more memory at runtime.
Because the lock will be taken as read-only most of the time, using
RwLock instead of Mutex is the right approach.
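A minimal sketch of the resulting pattern (type and method names are
placeholders, not the actual cloud-hypervisor code):

    use std::sync::{Arc, RwLock};

    struct GuestMemoryMmap; // stand-in for the real type

    struct Vmm {
        // Shared, read-mostly handle to the guest memory.
        mem: Arc<RwLock<GuestMemoryMmap>>,
    }

    impl Vmm {
        fn read_guest_memory(&self) {
            // Most accesses only need the read lock, so any number
            // of threads can read the guest memory concurrently.
            let _mem = self.mem.read().unwrap();
            // ... translate guest addresses, copy data in/out ...
        }

        fn add_region(&self) {
            // Adding a region (virtio-pmem, virtio-fs, hotplug) is
            // the rare case that needs the exclusive write lock.
            let _mem = self.mem.write().unwrap();
            // ... extend the memory map with the new region ...
        }
    }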
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The VMM guest memory was cloned (copied) everywhere the code needed to
take ownership of it. In order to clean up the code, and in anticipation
of future support for modifying this guest memory instance at runtime,
it is important that every part of the code share the same instance.
Because VirtioDevice implementations need to access it from different
threads, Arc is the right type for sharing it.
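A minimal sketch of the sharing pattern (types simplified; not the
actual code):

    use std::sync::Arc;
    use std::thread;

    struct GuestMemoryMmap; // stand-in for the real type

    fn main() {
        let mem = Arc::new(GuestMemoryMmap);

        // Each device thread receives a cheap Arc clone (a reference
        // count bump), not a copy of the guest memory itself.
        let device_mem = mem.clone();
        thread::spawn(move || {
            let _mem = device_mem;
            // ... the device accesses the shared guest memory ...
        })
        .join()
        .unwrap();
    }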
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When there are no available descriptors in the queue (observed when the
network interface hasn't been brought up by the kernel), stop waiting
for notifications that the TAP fd should be read from.
This avoids a situation where the TAP device has data available and
wakes up the virtio-net thread, only for that thread to be unable to
read the data because it has nowhere to put it.
When descriptors become available in the queue again, we resume waiting
for the epoll event on the TAP fd.
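A sketch of the pause/resume logic, written against the epoll crate
(identifiers are hypothetical, not the actual virtio-net code):

    use std::os::unix::io::RawFd;

    // Hypothetical token identifying the TAP fd in the epoll set.
    const TAP_EVENT: u64 = 1;

    // With no RX descriptors available there is nowhere to put the
    // data, so stop listening for TAP readability.
    fn unregister_tap(epoll_fd: RawFd, tap_fd: RawFd) -> std::io::Result<()> {
        epoll::ctl(
            epoll_fd,
            epoll::ControlOptions::EPOLL_CTL_DEL,
            tap_fd,
            epoll::Event::new(epoll::Events::empty(), 0),
        )
    }

    // Once the guest makes descriptors available again, resume
    // listening for the epoll event on the TAP fd.
    fn register_tap(epoll_fd: RawFd, tap_fd: RawFd) -> std::io::Result<()> {
        epoll::ctl(
            epoll_fd,
            epoll::ControlOptions::EPOLL_CTL_ADD,
            tap_fd,
            epoll::Event::new(epoll::Events::EPOLLIN, TAP_EVENT),
        )
    }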
This bug manifested itself as 100% CPU usage by the cloud-hypervisor
binary prior to the guest network interface being brought up. The
solution was inspired by the Firecracker virtio-net code.
Fixes: #208
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The virtiofsd daemon takes a bit of time to create and listen on the
socket. By adding a 10s timeout, we make sure the vhost-user socket has
been properly created before the VMM tries to connect to it.
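A sketch of the kind of wait loop this implies (hypothetical helper,
not the actual test code), called with something like
Duration::from_secs(10):

    use std::path::Path;
    use std::thread::sleep;
    use std::time::{Duration, Instant};

    // Poll for the vhost-user socket to show up, giving virtiofsd
    // up to `timeout` to create and listen on it.
    fn wait_for_socket(path: &Path, timeout: Duration) -> bool {
        let start = Instant::now();
        while start.elapsed() < timeout {
            if path.exists() {
                return true;
            }
            sleep(Duration::from_millis(100));
        }
        false
    }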
Also, the daemon needs the cap_dac_override capability to access the
debugfs filesystem.
Last thing, both the virtio-fs and virtio-pmem tests were slightly
different from the others since they were not explicitly killing the
cloud-hypervisor and virtiofsd processes once the test was done.
Fixes #182
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Rather than setting filesystem permissions on the /dev/kvm device, use
the kvm group added by installing QEMU for running the unit tests.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Adjust to reflect that it's QEMU being built here in preparation for
subsequent PRs that also want to build QEMU.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When running the script from an interactive environment, there are
always some files inside the git directory that rm prompts to delete,
so pass "-f" to avoid that.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
panic!()ing after a panic has already been recovered by the credibility
test system (i.e. after an aver! has failed) results in an abort, which
triggers SIGILL.
Adjust the SSH-based commands to return a Result<..., Error>, which we
then either propagate through the test block or, if the function is
evaluated directly in an aver! macro call, consume with
.unwrap_or_default() (or .unwrap_or() in the case where the default
would be wrong).
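A sketch of the two consumption patterns (helper name and commands are
illustrative only):

    use std::io::Error;

    // Hypothetical SSH helper returning a Result instead of panicking.
    fn ssh_command(command: &str) -> Result<String, Error> {
        let _ = command;
        // ... run the command over SSH and collect its stdout ...
        Ok(String::new())
    }

    fn test_body() -> Result<(), Error> {
        // Either propagate a failure through the test block with `?`...
        let _cpus = ssh_command("grep -c processor /proc/cpuinfo")?;

        // ...or consume it inline where an aver! call evaluates it:
        //     aver_eq!(tb, ssh_command("...").unwrap_or_default(), expected);
        // using .unwrap_or(...) instead wherever the default value
        // would make the check spuriously pass.
        Ok(())
    }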
See #182
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When virtio-fs is tested through the integration tests, there is one
specific test where DAX and the cache region are disabled. In this
case, the virtiofsd daemon should be run with the correct option,
cache=none instead of cache=always.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Latest clippy version complains about our existing code for the
following reasons:
- trait objects without an explicit `dyn` are deprecated
- `...` range patterns are deprecated
- lint `clippy::const_static_lifetime` has been renamed to
`clippy::redundant_static_lifetimes`
- unnecessary `unsafe` block
- unneeded return statement
All these issues have been fixed through this patch, and rustfmt has
been run to clean up potential formatting errors due to those changes.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We record a timestamp at VM creation time, and log the elapsed time
between that instant and each debug I/O port event.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The 0x80 I/O port is typically used for BIOS debugging and testing on
bare metal x86 platforms.
We use that port and its 16 dedicated debug codes to time and track the
guest boot process.
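A sketch of how such a handler could log the codes (simplified; names
are placeholders):

    use std::time::Instant;

    const DEBUG_IOPORT: u16 = 0x80;

    struct DebugPort {
        // Timestamp recorded when the VM was created.
        vm_creation: Instant,
    }

    impl DebugPort {
        // Invoked on guest I/O port writes.
        fn io_out(&self, port: u16, data: &[u8]) {
            if port == DEBUG_IOPORT && !data.is_empty() {
                // Log the debug code together with the time elapsed
                // since VM creation to build a boot timeline.
                println!(
                    "[{:?}] guest debug code {:#04x}",
                    self.vm_creation.elapsed(),
                    data[0]
                );
            }
        }
    }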
Fixes #63
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The linux-loader crate has been updated with a regenerated bootparams.rs
which has changed the API slightly. Update to the latest linux-loader
and adapt the code to reflect the changes:
* e820_map is renamed to e820_table (and all similar variables updated)
* e820entry is renamed to boot_e820_entry
* The E820 type constants are no longer included
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
With the introduction of new kernel configuration related to DAX
support, the tests no longer work as they did before. The image passed
through virtio-pmem needs to be in proper raw format, otherwise the
virtio-pmem driver cannot complete its probing.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The existing integration tests are extended to cover both the dax=on
and dax=off use cases.
In order to support DAX, the kernel configuration needs to be updated to
include CONFIG_FS_DAX and CONFIG_ZONE_DEVICE.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This patch enables the vhost-user protocol features that let the slave
initiate requests towards the master (VMM). It also takes care of
receiving requests from the slave and taking the appropriate action
based on the request type.
The flow now works as follows:
- The VMM creates a region of memory that is made available to the
guest by exposing it through the virtio-fs PCI BAR 2.
- The virtio-fs device is created by the VMM, exposing protocol feature
bits to virtiofsd, letting it know that it can send requests to the VMM
through a dedicated socket.
- When the guest driver asks to read or write a file, virtiofsd sends a
request to the VMM, asking for a file descriptor to be mapped into the
shared memory region at a specific offset (see the sketch after this
list).
- The guest can directly read/write the file at the offset of the
memory region.
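A rough sketch of how the VMM side might honor such a map request
(hypothetical request shape and helper; the real code and message
layout differ):

    use std::os::unix::io::RawFd;

    // Hypothetical "map" request: map `len` bytes of the file behind
    // `fd`, starting at `file_offset`, at `cache_offset` bytes into
    // the shared cache region.
    fn handle_map_request(
        cache_host_addr: *mut libc::c_void,
        cache_size: u64,
        fd: RawFd,
        file_offset: u64,
        cache_offset: u64,
        len: u64,
    ) -> std::io::Result<()> {
        if cache_offset.checked_add(len).map_or(true, |end| end > cache_size) {
            return Err(std::io::Error::from_raw_os_error(libc::EINVAL));
        }
        // MAP_FIXED replaces the pages at that offset of the cache
        // region with a mapping of the requested file range.
        let ret = unsafe {
            libc::mmap(
                (cache_host_addr as u64 + cache_offset) as *mut libc::c_void,
                len as usize,
                libc::PROT_READ | libc::PROT_WRITE,
                libc::MAP_SHARED | libc::MAP_FIXED,
                fd,
                file_offset as i64,
            )
        };
        if ret == libc::MAP_FAILED {
            return Err(std::io::Error::last_os_error());
        }
        Ok(())
    }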
This implementation is more performant than the one relying exclusively
on the virtqueues. With the virtqueues, the content of the file needs
to be copied into the queues every time the guest asks to access it.
With the shared memory region, the virtqueues become the control plane,
where the libfuse commands are sent to virtiofsd. The data plane is the
memory region itself, which needs no extra copy of the file content.
The only penalty is that the first time a file is accessed, it needs to
be mapped into the VMM's virtual address space.
Another interesting case where this solution will not perform as well
as expected is when a file is larger than the region itself. The file
then needs to be mapped in several times, and worse, it needs to be
remapped every time it is accessed.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When the cache_size parameter of the virtio-fs device is not empty, the
VMM creates a dedicated memory region where the shared files will be
memory-mapped by the virtio-fs device.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to support the more performant version of virtio-fs, that is,
the one relying on a shared memory region between host and guest, we
introduce two new parameters to the --fs device.
The "dax" parameter allows the user to choose whether to use the shared
memory region with virtio-fs. By default, this parameter is "on".
The "cache_size" parameter allows the user to specify the amount of
memory that should be shared between host and guest. By default, the
value of this parameter is 8GiB, as advised by the virtio-fs
maintainers.
Note that dax=off and cache_size are incompatible.
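A sketch of the parameter handling this describes (struct and parsing
simplified; identifiers hypothetical):

    // Hypothetical, simplified view of the two new --fs parameters.
    struct FsConfig {
        dax: bool,       // defaults to true ("on")
        cache_size: u64, // defaults to 8GiB when DAX is on
    }

    fn parse_fs_params(dax: &str, cache_size: Option<u64>) -> Result<FsConfig, String> {
        let dax = match dax {
            "on" | "" => true, // "on" is the default
            "off" => false,
            _ => return Err("dax must be 'on' or 'off'".to_string()),
        };

        // dax=off and cache_size are incompatible: without DAX there
        // is no shared memory region to size.
        if !dax && cache_size.is_some() {
            return Err("cache_size requires dax=on".to_string());
        }

        Ok(FsConfig {
            dax,
            cache_size: cache_size.unwrap_or(8 << 30), // 8GiB default
        })
    }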
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Shared memory regions will not be present for most virtio devices,
which is one reason we prefer to dedicate a BAR to them.
Another reason is that the shared memory regions, if there are several,
can be allocated all at once as one contiguous range, which can then be
used as its own BAR. It would be more complicated to allocate BAR 0,
which holds the regular information about the virtio-pci device, along
with the shared memory regions.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This patch cleans up the VirtioDevice trait. Since some functions are
PCI-specific, and since they are not even used, it makes sense to
remove them from the trait definition.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the SharedMemoryConfig capability newly added to the virtio
specification, and given that it is not tied to the type of transport
(PCI or MMIO), we can add to the VirtioDevice trait a new method that
provides the shared memory regions associated with the device.
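A sketch of what such a trait method could look like (names
hypothetical; the actual method and types may differ):

    // Hypothetical description of one shared memory region: the id
    // carried by the capability, plus its offset and length.
    pub struct SharedMemoryRegion {
        pub id: u8,
        pub offset: u64,
        pub len: u64,
    }

    pub trait VirtioDevice {
        // Transport-agnostic: both the PCI and MMIO transports can
        // query the device for its shared memory regions.
        fn get_shm_regions(&self) -> Option<Vec<SharedMemoryRegion>> {
            None // most devices have no shared memory regions
        }
    }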
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the latest version of the virtio specification, the structure
virtio_pci_cap has been updated and a new structure virtio_pci_cap64 has
been introduced.
virtio_pci_cap now includes an "id" field that does not modify the
existing structure size, since a 3-byte reserved field was already
there. The id is used in the context of shared memory regions, which
need to be identified since there can be more than one capability of
this kind.
virtio_pci_cap64 is a new structure that includes virtio_pci_cap and
extends it to allow 64-bit offsets and 64-bit region lengths. This is
used for the shared memory regions capability, as we might need to
describe regions of 4GiB or more, possibly placed at an offset of 4GiB
or more in the associated BAR.
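In Rust, the layout described by the specification looks roughly like
this (a sketch following the spec's field names, not the crate's exact
definition):

    // Layout per the virtio specification. The former 3-byte
    // reserved field now carries `id` plus 2 bytes of padding, so
    // the overall size of virtio_pci_cap is unchanged.
    #[repr(C, packed)]
    struct VirtioPciCap {
        cap_vndr: u8,     // generic PCI field: PCI_CAP_ID_VNDR
        cap_next: u8,     // generic PCI field: next capability
        cap_len: u8,      // length of this capability structure
        cfg_type: u8,     // which virtio structure is described
        bar: u8,          // which BAR holds the structure
        id: u8,           // identifies a shared memory region
        padding: [u8; 2],
        offset: u32,      // little-endian offset within the BAR
        length: u32,      // little-endian length of the structure
    }

    // 64-bit extension used for shared memory regions, which may
    // be 4GiB or larger and live at offsets of 4GiB or more.
    #[repr(C, packed)]
    struct VirtioPciCap64 {
        cap: VirtioPciCap,
        offset_hi: u32, // high 32 bits of the region offset
        length_hi: u32, // high 32 bits of the region length
    }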
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The length of the PCI capability as calculated by the guest was not
accurate, since it did not include the implicit 2-byte offset. The
reason for this offset is that the structure itself does not contain
the capability ID (1 byte) and the next-capability pointer (1 byte),
but the structure exposed through the PCI config space does include
those bytes.
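Concretely, the advertised length must account for those two bytes,
along these lines (hypothetical struct, not the actual one):

    use std::mem::size_of;

    // Hypothetical capability body, i.e. the structure without the
    // generic capability ID and next-capability pointer bytes.
    #[repr(C, packed)]
    struct CapBody {
        cap_len: u8,
        cfg_type: u8,
        bar: u8,
        padding: [u8; 2],
        offset: u32,
        length: u32,
    }

    // The capability length seen by the guest must cover the two
    // bytes PCI config space prepends, even though the struct itself
    // lacks them.
    fn cap_len() -> u8 {
        (size_of::<CapBody>() + 2) as u8
    }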
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Whether the communication comes from the master or the slave, the
receiver should always reply with an ack when the message header
specifies that a reply is expected.
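A sketch of the check this implies (header layout per the vhost-user
specification; helper names hypothetical):

    // Per the vhost-user specification, bit 3 of the header flags
    // (VHOST_USER_NEED_REPLY) signals that a reply is expected.
    const VHOST_USER_NEED_REPLY: u32 = 0x8;

    #[repr(C)]
    struct VhostUserHeader {
        request: u32,
        flags: u32,
        size: u32,
    }

    // Checked after processing any message, whether it was
    // initiated by the master or by the slave.
    fn needs_ack(hdr: &VhostUserHeader) -> bool {
        hdr.flags & VHOST_USER_NEED_REPLY != 0
    }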
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Because the way to mount a virtio-fs filesystem changed with the newest
kernel, we need to update the mount command in our integration tests.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>