In order to allow virtio-pci devices to use MSI-X messages instead
of legacy pin-based interrupts, this patch implements MSI-X support
for cloud-hypervisor. The VMM code and virtio-pci bits have been
modified based on the "msix" module previously added to the pci
crate.
Fixes #12
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to have access to the newly added signal_msi() function
from the kvm-ioctls crate, this commit updates the kvm-ioctls
dependency to the latest version.
Because set_user_memory_region() has been switched to "unsafe", we
also need to handle this small change directly in our
cloud-hypervisor code.
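As a purely illustrative sketch of the updated API surface (the
slot number, guest address and MSI payload below are placeholders,
not values taken from the cloud-hypervisor code):

    use kvm_bindings::{kvm_msi, kvm_userspace_memory_region};
    use kvm_ioctls::Kvm;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let kvm = Kvm::new()?;
        let vm = kvm.create_vm()?;
        // An in-kernel irqchip is needed before MSIs can be
        // delivered (x86_64).
        vm.create_irq_chip()?;

        // Back a guest memory slot with an anonymous mapping.
        let mem_size = 0x1000_usize;
        let host_addr = unsafe {
            libc::mmap(
                std::ptr::null_mut(),
                mem_size,
                libc::PROT_READ | libc::PROT_WRITE,
                libc::MAP_ANONYMOUS | libc::MAP_SHARED,
                -1,
                0,
            )
        };
        assert_ne!(host_addr, libc::MAP_FAILED);

        // set_user_memory_region() is now unsafe: we must guarantee
        // the mapping stays valid for as long as the slot is in use.
        let region = kvm_userspace_memory_region {
            slot: 0,
            flags: 0,
            guest_phys_addr: 0x1000,
            memory_size: mem_size as u64,
            userspace_addr: host_addr as u64,
        };
        unsafe { vm.set_user_memory_region(region)? };

        // signal_msi() injects the MSI described by a kvm_msi
        // struct, which is what an MSI-X table entry maps to.
        let msi = kvm_msi {
            address_lo: 0xfee0_0000,
            address_hi: 0,
            data: 0x20,
            ..Default::default()
        };
        vm.signal_msi(msi)?;
        Ok(())
    }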
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to support MSI-X, this commit adds a new module called
"msix" to the pci crate. This module brings all the necessary
pieces to let any PCI device implement MSI-X support.
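As context, an MSI-X capable device exposes a table of 16-byte
entries, each carrying a message address, message data and a
per-vector mask bit, plus a global enable/mask in the message
control word. A minimal sketch of the structures such a module
revolves around (names are illustrative, not the exact ones from
the msix module):

    // Layout follows the PCI MSI-X specification.
    #[derive(Clone, Copy, Default)]
    struct MsixTableEntry {
        msg_addr_lo: u32, // low 32 bits of the message address
        msg_addr_hi: u32, // high 32 bits of the message address
        msg_data: u32,    // payload written to that address
        vector_ctl: u32,  // bit 0 is the per-vector mask
    }

    struct MsixConfig {
        table_entries: Vec<MsixTableEntry>,
        enabled: bool, // MSI-X enable bit from message control
        masked: bool,  // function mask bit from message control
    }

    impl MsixConfig {
        fn new(vectors: u16) -> Self {
            MsixConfig {
                table_entries: vec![
                    MsixTableEntry::default();
                    vectors as usize
                ],
                enabled: false,
                masked: false,
            }
        }
    }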
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Because we cannot always assume the irq fd will be the way to send
an IRQ to the guest, we cannot assume that every virtio device
implementation should expect an EventFd to trigger an IRQ.
This commit organizes the code related to virtio devices so that it
now expects a Rust closure instead of a known EventFd. This lets the
caller decide what should be done whenever a device needs to trigger
an interrupt to the guest.
The closure will allow other types of interrupt mechanisms, such
as MSI, to be implemented. From the device perspective, it does not
matter whether the interrupt is pin-based or an MSI: the device
simply calls into the provided callback, passing the appropriate
Queue as a reference. This design keeps the device model generic.
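As a sketch of the shape this can take (the alias name and the
Queue stand-in are illustrative, not the exact definitions from
the vm-virtio crate):

    use std::io;
    use std::sync::Arc;

    // Stand-in for the real Queue type from the vm-virtio crate.
    struct Queue;

    // The device only holds this callback; the caller decides
    // whether invoking it writes to an irqfd, issues an MSI, or
    // does something else entirely.
    type InterruptCb =
        Arc<dyn Fn(&Queue) -> io::Result<()> + Send + Sync>;

    struct Device {
        interrupt_cb: InterruptCb,
    }

    impl Device {
        // After using a descriptor chain, the device calls the
        // closure with the relevant queue; the actual interrupt
        // mechanism remains opaque to it.
        fn signal_used_queue(&self, queue: &Queue) -> io::Result<()> {
            (self.interrupt_cb)(queue)
        }
    }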
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Add the BSD and Apache licenses.
Make all crosvm references point to the BSD license.
Add the right copyrights and identifier to our VMM code.
Add Intel copyright to the vm-virtio and pci crates.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Now that virtio-blk device support has been introduced in the
previous commit, the VMM needs to rely on this new device to boot
from disk images instead of an initrd built into the kernel.
In order to achieve proper virtio-blk support, this commit handles
a few things:
- Register an ioeventfd for each virtqueue. This is important so
  that the VMM is notified by the virtio driver whenever something
  has been written to the queue (see the sketch after this list).
- Fix the retrieval of 64-bit BAR addresses. This is needed to
  provide the right address, which needs to be registered as the
  notification address for the virtio driver.
- Fix the write_bar and read_bar functions. They both assumed they
  were provided with an address, from which they tried to find the
  associated offset. In reality, the offset is directly provided by
  the Bus layer.
- Register a new virtio-blk device as a virtio-pci device from the
  vm.rs code. When the VM is started, it expects a block device to
  be created and used as the VM rootfs.
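As an illustration of the first point (the notification address
below is a placeholder for the one derived from the device's
64-bit BAR, not taken from the cloud-hypervisor code):

    use kvm_ioctls::{IoEventAddress, Kvm, NoDatamatch};
    use vmm_sys_util::eventfd::EventFd;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let kvm = Kvm::new()?;
        let vm = kvm.create_vm()?;

        // Placeholder: the real address is the 64-bit BAR base
        // plus the per-queue notify offset.
        let notify_addr: u64 = 0x1_0000_0000;
        let queue_evt = EventFd::new(libc::EFD_NONBLOCK)?;

        // KVM now signals queue_evt whenever the guest driver
        // writes to this MMIO address, without exiting to the VMM.
        vm.register_ioevent(
            &queue_evt,
            &IoEventAddress::Mmio(notify_addr),
            NoDatamatch,
        )?;
        Ok(())
    }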
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This crate is based on the crosvm devices/src/pci implementation
from commit 107edb3e. We introduced a few changes:
- This is a standalone crate. The device crate does not carry any
  PCI-specific bits.
- Simplified PCI root configuration. We only carry a pointer to a
PciConfiguration, not a wrapper around it.
- Simplified BAR allocation API. All BARs from the PciDevice
  instance must be generated at once through the
  PciDevice::allocate_bars() method, as sketched after this list.
- The PCI BARs are added to the MMIO bus from the
  PciRoot::add_device() method.
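To illustrate the last two points, the API has roughly this shape
(signatures are simplified sketches, not copied from the crate):

    struct GuestAddress(u64);
    struct SystemAllocator; // placeholder for the VMM allocator
    struct PciAllocationError;

    trait PciDevice {
        // All BARs are produced in a single call instead of being
        // queried one by one; PciRoot::add_device() then registers
        // the returned (base, size) ranges on the MMIO bus.
        fn allocate_bars(
            &mut self,
            allocator: &mut SystemAllocator,
        ) -> Result<Vec<(GuestAddress, u64)>, PciAllocationError>;
    }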
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>