This starts the main, single VMM thread, which:
1. Creates the VMM instance
2. Starts the VMM control loop
3. Handles VMM control loop exits in order to process resets and shutdowns
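A minimal sketch of that flow (type and method names here are illustrative, not the actual Cloud Hypervisor API):

    use std::thread;

    struct Vmm; // stand-in for the real VMM state

    enum ExitBehaviour {
        Shutdown,
        Reset,
    }

    impl Vmm {
        fn new() -> Self {
            Vmm
        }

        // Runs until the guest requests a reset or a shutdown.
        fn control_loop(&mut self) -> ExitBehaviour {
            ExitBehaviour::Shutdown
        }
    }

    fn main() {
        let vmm_thread = thread::spawn(move || {
            let mut vmm = Vmm::new();
            loop {
                match vmm.control_loop() {
                    // On reset, re-enter the control loop with fresh VM state.
                    ExitBehaviour::Reset => continue,
                    // On shutdown, leave the loop and let the thread exit.
                    ExitBehaviour::Shutdown => break,
                }
            }
        });
        vmm_thread.join().unwrap();
    }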
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Unlike the Vmm structure we removed with commit bdfd1a3f, this new one
is really meant to represent the VM monitoring/management object.
For that, we implement a control loop that will replace the one that's
currently embedded within the Vm structure itself.
This will allow us to decouple the VM lifecycle management from the VM
object itself, by having a constantly running VMM control loop.
Besides the VM-specific events (exit, reset, and stdin for now), the VMM
control loop also handles all the Cloud Hypervisor IPC requests.
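As a hedged sketch of that dispatch (the variants below mirror the events named above; the real EpollDispatch enum may differ):

    enum EpollDispatch {
        Exit,
        Reset,
        Stdin,
        Api,
    }

    fn run_control_loop(next_event: impl Fn() -> EpollDispatch) {
        loop {
            match next_event() {
                // The guest shut down: leave the loop.
                EpollDispatch::Exit => break,
                // The guest requested a reboot: rebuild the VM and keep going.
                EpollDispatch::Reset => { /* recreate the Vm */ }
                // Forward console input to the guest.
                EpollDispatch::Stdin => { /* vm.handle_stdin() */ }
                // Service a Cloud Hypervisor IPC request.
                EpollDispatch::Api => { /* process an ApiRequest */ }
            }
        }
    }

    fn main() {
        run_control_loop(|| EpollDispatch::Exit);
    }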
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The VMM thread and control loop will be the sole consumer of the
EpollContext and EpollDispatch API, so let's move it to lib.rs.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Cloud Hypervisor IPC is a simple, mpsc-based protocol for threads to
send commands to the future VMM thread. This patch adds the API
definition for that IPC, which will be used both by the main thread,
e.g. to start a new VM based on the CLI arguments, and by the future
HTTP server to relay external requests received from a local Unix
domain socket.
We are moving it to its own "api" module because this is where the
external API (HTTP-based) will also be implemented.
The VMM thread will listen for IPC requests on an mpsc receiver,
process them, and send responses back through another mpsc channel.
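A self-contained sketch of that request/response shape (ApiRequest and ApiResponse are illustrative names, not necessarily the ones defined in the api module):

    use std::sync::mpsc::{channel, Sender};

    enum ApiRequest {
        // Carries a Sender so the VMM thread can reply on a dedicated channel.
        VmStart(Sender<ApiResponse>),
    }

    enum ApiResponse {
        VmStarted,
        Error(String),
    }

    fn main() {
        // The channel the VMM thread listens on.
        let (api_sender, api_receiver) = channel();

        // A client (the main thread or the future HTTP server) sends a
        // request together with its own response channel.
        let (response_sender, response_receiver) = channel();
        api_sender
            .send(ApiRequest::VmStart(response_sender))
            .unwrap();

        // The VMM thread side: receive, process, reply.
        match api_receiver.recv().unwrap() {
            ApiRequest::VmStart(reply) => {
                reply.send(ApiResponse::VmStarted).unwrap();
            }
        }

        assert!(matches!(
            response_receiver.recv().unwrap(),
            ApiResponse::VmStarted
        ));
    }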
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
As we're going to move the control loop to the VMM thread, the exit and
reset EventFds are no longer going to be owned by the VM.
We pass copies of them when creating the Vm instead.
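Sketched below, assuming the EventFd type from vmm-sys-util: the VMM control loop keeps polling the originals while the Vm receives duplicated handles.

    use vmm_sys_util::eventfd::EventFd;

    struct Vm {
        exit_evt: EventFd,
        reset_evt: EventFd,
    }

    impl Vm {
        fn new(exit_evt: EventFd, reset_evt: EventFd) -> Self {
            Vm { exit_evt, reset_evt }
        }
    }

    fn main() -> std::io::Result<()> {
        let exit_evt = EventFd::new(0)?;
        let reset_evt = EventFd::new(0)?;

        // try_clone() duplicates the underlying file descriptor, so the
        // originals stay with the VMM control loop.
        let _vm = Vm::new(exit_evt.try_clone()?, reset_evt.try_clone()?);
        Ok(())
    }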
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
In order to handle the VM STDIN stream from a separate VMM thread
without having to export the DeviceManager, we simply add a console
handling method to the Vm structure.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
In order to transfer the control loop to a separate VMM thread, we want
to shrink the VM control loop to a bare minimum.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Once passed to the VM creation routine, a VmConfig structure is
immutable. We can simply carry an Arc of it instead of a reference.
This also allows us to remove any lifetime bound from our VM.
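A minimal sketch of the ownership change (field names illustrative):

    use std::sync::Arc;

    struct VmConfig {
        cpus: u8,
    }

    struct Vm {
        // Was a &'a VmConfig, which forced a Vm<'a> lifetime parameter.
        config: Arc<VmConfig>,
    }

    fn main() {
        let config = Arc::new(VmConfig { cpus: 4 });
        let vm = Vm {
            config: Arc::clone(&config),
        };
        assert_eq!(vm.config.cpus, 4);
    }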
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The Vmm structure is just a placeholder for the KVM instance. We can
create it directly from the VM creation routine instead.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
We can integrate the kernel loading into the VM start method.
The VM start flow is then: Vm::new() -> vm.start(), which feels more
natural.
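Illustrative shape of that flow (the real constructors take configuration and many more parameters):

    struct Vm;

    impl Vm {
        fn new() -> Result<Self, ()> {
            Ok(Vm)
        }

        fn start(&mut self) -> Result<(), ()> {
            // Kernel loading is folded into start() by this change.
            self.load_kernel()
        }

        fn load_kernel(&self) -> Result<(), ()> {
            Ok(())
        }
    }

    fn main() -> Result<(), ()> {
        let mut vm = Vm::new()?;
        vm.start()
    }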
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Convert Path to PathBuf and remove the associated lifetime.
Now we can remove the VmConfig associated lifetime.
Fixes: #298
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Now that vhost-user has been fixed regarding the size of the virtqueues,
booting a VM with the firmware from a vhost-user-blk backend actually
works. That's why this commit updates the previously introduced
integration test to make it use the firmware instead of direct kernel
boot.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The vhost-user implementation was always passing the maximum size
supported by the virtqueues to the backend, but this is obviously wrong
as it must pass the size being set by the driver running in the guest.
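A hedged sketch of the fix; the Queue type and actual_size() helper are modeled on the virtio queue implementation but simplified here:

    use std::cmp::min;

    struct Queue {
        max_size: u16, // what the device advertises
        size: u16,     // what the guest driver wrote
    }

    impl Queue {
        // Clamp to the maximum in case the driver wrote a bogus value.
        fn actual_size(&self) -> u16 {
            min(self.size, self.max_size)
        }
    }

    fn vring_num(queue: &Queue) -> u16 {
        // Before: queue.max_size was passed to the backend -- wrong.
        // After: pass the size negotiated by the guest driver.
        queue.actual_size()
    }

    fn main() {
        let queue = Queue { max_size: 1024, size: 256 };
        assert_eq!(vring_num(&queue), 256);
    }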
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The 32-bit or 64-bit type for the memory BAR was not set correctly. This
patch ensures the right type is applied to the BAR.
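For reference, a sketch of the encoding involved: in a PCI memory BAR, bit 0 selects memory vs. I/O and bits [2:1] hold the type, 0b00 for 32-bit and 0b10 for 64-bit.

    // PCI memory BAR type field, bits [2:1].
    const BAR_MEM_TYPE_32BIT: u32 = 0b00 << 1;
    const BAR_MEM_TYPE_64BIT: u32 = 0b10 << 1;

    fn encode_mem_bar(base: u32, is_64bit: bool) -> u32 {
        let ty = if is_64bit {
            BAR_MEM_TYPE_64BIT
        } else {
            BAR_MEM_TYPE_32BIT
        };
        // The low 4 bits of a memory BAR are flags, not address bits.
        (base & 0xffff_fff0) | ty
    }

    fn main() {
        assert_eq!(encode_mem_bar(0xc000_0000, true), 0xc000_0004);
    }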
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
If we expect the vhost-user-blk device to be used for booting a VM
along with the firmware, then we need the device to support being reset.
In the vhost-user context, this means the backend needs to be informed
that the vrings are disabled and stopped, and the owner needs to be
reset too.
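A sketch of that sequence against a hypothetical master trait (the real calls go through the vhost crate's API):

    trait VhostUserMaster {
        fn set_vring_enable(&mut self, index: usize, enabled: bool) -> Result<(), ()>;
        fn get_vring_base(&mut self, index: usize) -> Result<u16, ()>;
        fn reset_owner(&mut self) -> Result<(), ()>;
    }

    fn reset_device<M: VhostUserMaster>(master: &mut M, num_queues: usize) -> Result<(), ()> {
        for index in 0..num_queues {
            // Tell the backend the ring is disabled...
            master.set_vring_enable(index, false)?;
            // ...and stopped: per the spec, GET_VRING_BASE stops the ring.
            master.get_vring_base(index)?;
        }
        // Finally, reset the owner so the device can be set up again cleanly.
        master.reset_owner()
    }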
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit extends the existing integration test related to
vhost-user-blk by validating that the block image contains one file
"foo" containing "bar".
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Currently we need to use the backend device from QEMU to test the
vhost-user-blk device. Once the Rust backend is ready, we will replace it.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
In vhost-user-blk, only the WCE value can be set back to the device
from the guest kernel, e.g.:
echo "write through" > /sys/block/vda/cache_type
So write_config() will only forward the WCE value from the guest kernel
to the vhost-user side.
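A sketch of that behaviour (the WCE offset below is hypothetical; the real one comes from the virtio-blk config space layout):

    // Hypothetical offset of the guest-writable WCE byte.
    const WCE_OFFSET: u64 = 0;

    struct VirtioBlkConfig {
        writeback: u8, // WCE: 1 = "write back", 0 = "write through"
    }

    fn write_config(config: &mut VirtioBlkConfig, offset: u64, data: &[u8]) {
        // Only the WCE byte is guest-writable; ignore everything else.
        if offset == WCE_OFFSET && data.len() == 1 {
            config.writeback = data[0];
            // A real device would now push the updated value to the
            // vhost-user backend.
        }
    }

    fn main() {
        let mut config = VirtioBlkConfig { writeback: 1 };
        write_config(&mut config, WCE_OFFSET, &[0]); // "write through"
        assert_eq!(config.writeback, 0);
    }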
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Since the config space in vhost-user-blk mostly comes from the backend
device, this change retrieves the config space info from the backend
through the vhost-user protocol.
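Sketched against a hypothetical master trait (the real signatures belong to the vhost-user crate):

    trait VhostUserMaster {
        // Asks the backend for `size` bytes of config space at `offset`.
        fn get_config(&mut self, offset: u32, size: u32) -> Result<Vec<u8>, ()>;
    }

    // Serve guest config reads from the backend instead of a local cache.
    fn read_config<M: VhostUserMaster>(master: &mut M, offset: u64, data: &mut [u8]) {
        if let Ok(config) = master.get_config(offset as u32, data.len() as u32) {
            data.copy_from_slice(&config);
        }
    }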
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Since vhost-user-blk uses the same error definitions as vhost-user-net,
those errors need to be moved to a common location.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
vhost-user-blk has better performance than virtio-blk, so we need to
add vhost-user-blk support, with SPDK as the backend, to Rust-based VMMs.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Add to the vhost_rs crate the INFLIGHT_SHMFD protocol feature that was
missing from the list. The feature was only recently introduced in the
vhost-user specification, which is why it was missing.
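The addition boils down to one more protocol feature bit; per the vhost-user specification, INFLIGHT_SHMFD is bit 12 (sketch; the vhost_rs definition may be an enum variant instead):

    // VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD, per the vhost-user spec.
    const VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD: u64 = 1 << 12;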
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Following the vhost-user specification that can be found at
https://github.com/qemu/qemu/blob/master/docs/interop/vhost-user.rst#virtio-device-config-space
this patch fixes the GET_CONFIG command.
This command is very specific as it needs to provide a body and a
payload corresponding to the content of the configuration structure as
an array of bytes. As a reply, it receives the body and payload provided
by the slave.
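A sketch of that wire shape (struct and helper names are illustrative; the crate's actual types may differ):

    // Body carried by GET_CONFIG, per the vhost-user specification:
    // offset and size within the device config space, plus flags.
    #[repr(C)]
    struct VhostUserConfig {
        offset: u32,
        size: u32,
        flags: u32,
    }

    fn build_get_config(offset: u32, size: u32) -> (VhostUserConfig, Vec<u8>) {
        let body = VhostUserConfig { offset, size, flags: 0 };
        // The payload is sized to the config region being read; the slave
        // echoes the body back and fills the payload in its reply.
        let payload = vec![0u8; size as usize];
        (body, payload)
    }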
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This reverts commit 3e99098bf3 because we
realized there was no need for those extra functions to be introduced,
as the existing ones from the vhost crate are sufficient for the vhost-user
configuration use case.
Now that virtio-bindings is part of the rust-vmm project, we want to
rely on that crate instead of the local one we had so far.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that we have expanded our address space, we should add automated
testing to exercise VMs that use a large amount of RAM.
As the hypervisor backs guest memory with an anonymous mmap(), and a
guest will not touch all of it during a typical test boot, it should be
possible to test large-RAM VMs even if their size exceeds the RAM of
the host machine.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The last byte was missing from the E820 RAM area. This was because the
function calculated the size as the last address in the range minus the
first address, which is one byte short for an inclusive range. This
produced incorrect E820 tables like this:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000001ffffffe] usable
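The arithmetic being fixed, as a sketch:

    // The size of an inclusive range is last - first + 1; the old code
    // dropped the final byte by computing last - first.
    fn e820_size(first_addr: u64, last_addr: u64) -> u64 {
        last_addr - first_addr + 1
    }

    fn main() {
        // The second entry above should cover 0x100000 up to and including
        // 0x1fffffff.
        assert_eq!(e820_size(0x100000, 0x1fffffff), 0x1ff00000);
    }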
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Probe for the size of the host physical address range and use that to
establish the address range for the VM. This removes the limitation on
the size of the VM RAM and gives more space for the devices.
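A sketch of the probe on x86_64, using the CPUID leaf that reports the physical address width (the hypervisor's actual code may differ):

    #[cfg(target_arch = "x86_64")]
    fn host_phys_bits() -> u8 {
        // CPUID leaf 0x8000_0008: bits 7:0 of EAX hold the number of
        // physical address bits.
        let leaf = unsafe { std::arch::x86_64::__cpuid(0x8000_0008) };
        (leaf.eax & 0xff) as u8
    }

    #[cfg(target_arch = "x86_64")]
    fn main() {
        // The guest address range can then span the full 1 << bits bytes.
        println!("host physical address bits: {}", host_phys_bits());
    }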
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Currently all devices and guest memory share the same 64GiB allocation,
with guest memory working upwards and devices working downwards. This
creates issues if you want either a VM with a large amount of memory or
devices with a large allocation (e.g. virtio-pmem).
As it is possible for the hypervisor to place devices anywhere in its
address range it is required for simplistic users like the firmware to
set up an identity page table mapping across the full range. Currently
the hypervisor sets up an identity mapping of 1GiB which the firmware
extends to 64GiB to match the current address space size of the
hypervisor.
A simpler solution is to place the device needed for booting with the
firmware (virtio-block) inside the 32-bit memory hole. This allows the
firmware to easily access the block device and paves the way for
increasing the address space beyond the current 64GiB limit.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
After the 32-bit gap the memory is shared between the devices and the
RAM. Ensure that the ACPI tables correctly indicate where the RAM ends
and the device area starts by patching the precompiled tables. With an
8GiB guest, we now get the following valid output from the PCI bus
probing:
[ 0.317757] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
[ 0.319035] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
[ 0.320215] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
[ 0.321431] pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
[ 0.322613] pci_bus 0000:00: resource 8 [mem 0x240000000-0xfffffffff window]
Signed-off-by: Rob Bradford <robert.bradford@intel.com>