The virtio specification defines that a device can be reset, which was
not supported by this virtio-console implementation. Reset support is
needed to allow unbinding this device from the guest driver and
rebinding it to the vfio-pci driver.
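For context, a minimal sketch of what such a reset hook can look like
on a virtio device trait; the names (VirtioDevice, activate, reset) are
illustrative, not the exact cloud-hypervisor trait:

    pub trait VirtioDevice {
        // Activate the device once queues and interrupts are set up.
        fn activate(&mut self);

        // Return the device to its pre-activation state, so the guest
        // can unbind the virtio driver and rebind e.g. vfio-pci.
        fn reset(&mut self) {
            // Default: devices that do not support reset do nothing.
        }
    }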
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The virtio specification defines that a device can be reset, which was
not supported by this virtio-vsock implementation. Reset support is
needed to allow unbinding this device from the guest driver and
rebinding it to the vfio-pci driver.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The virtio specification defines that a device can be reset, which was
not supported by this virtio-pmem implementation. Reset support is
needed to allow unbinding this device from the guest driver and
rebinding it to the vfio-pci driver.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The virtio specification defines that a device can be reset, which was
not supported by this virtio-rng implementation. Reset support is
needed to allow unbinding this device from the guest driver and
rebinding it to the vfio-pci driver.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The virtio specification defines that a device can be reset, which was
not supported by this virtio-net implementation. Reset support is
needed to allow unbinding this device from the guest driver and
rebinding it to the vfio-pci driver.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
As of commit 2b94334a, Firecracker includes all the changes we need.
We can now switch to using it instead of carrying a copy.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
We used to have error definitions spread across vmm, vm, api,
and http.
We now have a cleaner separation: All API routines only return an
ApiResult. All VM operations, including the VMM wrappers, return a
VmResult. This makes it easier to carry errors up to the HTTP caller.
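A minimal sketch of that separation, with illustrative variant names
(the real enums carry many more cases):

    #[derive(Debug)]
    pub enum VmError {
        KernelLoad(std::io::Error),
    }
    pub type VmResult<T> = Result<T, VmError>;

    #[derive(Debug)]
    pub enum ApiError {
        VmOperation(VmError),
    }
    pub type ApiResult<T> = Result<T, ApiError>;

    // VM errors convert into API errors, so they bubble up to HTTP.
    impl From<VmError> for ApiError {
        fn from(e: VmError) -> Self {
            ApiError::VmOperation(e)
        }
    }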
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
In order to support further use cases where a VM configuration could be
modified through the HTTP API, we only store the passed VM config when
asked to create a VM. The actual creation will happen when booting that
config for the first time.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Use the more idiomatic "if let Ok(foo) = result" construct for:
105 | if try_numeric.is_ok() {
| ------------------- the check is happening here
106 | self.content_length = try_numeric.unwrap();
| ^^^^^^^^^^^^^^^^^^^^
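The shape of the change, with the surrounding type assumed:

    struct Request {
        content_length: u32,
    }

    impl Request {
        fn set_content_length(&mut self, raw: &str) {
            let try_numeric = raw.trim().parse::<u32>();
            // Before:
            //   if try_numeric.is_ok() {
            //       self.content_length = try_numeric.unwrap();
            //   }
            // After: one pattern match, no unwrap().
            if let Ok(len) = try_numeric {
                self.content_length = len;
            }
        }
    }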
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
We use the serde crate to serialize and deserialize the VmConfig
structure. This structure will be passed from the HTTP API caller as a
JSON payload and we need to deserialize it into a VmConfig.
For convenient use of the HTTP API, we also provide Default trait
implementations for some of the VmConfig fields (vCPUs, memory, etc.).
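A sketch of the idea, with assumed field names, so that a payload like
{"kernel": "/path/to/vmlinux"} deserializes with every other field
defaulted:

    use serde::{Deserialize, Serialize};

    #[derive(Serialize, Deserialize)]
    struct CpusConfig {
        cpu_count: u8,
    }

    impl Default for CpusConfig {
        fn default() -> Self {
            CpusConfig { cpu_count: 1 }
        }
    }

    #[derive(Serialize, Deserialize)]
    struct VmConfig {
        kernel: Option<String>,
        #[serde(default)]
        cpus: CpusConfig,
    }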
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The linux_loader crate's Cmdline struct is not serializable.
Instead of forcing the upstream crate to carry a serde dependency, we
simply use a String for the passed command line and build the actual
Cmdline when we need it (in vm::new()).
Also, the cmdline offset is not a configuration knob, so we remove it.
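A sketch, assuming the linux-loader Cmdline API of the time
(Cmdline::new taking a capacity, insert_str returning a Result):

    use linux_loader::cmdline::Cmdline;

    const CMDLINE_CAPACITY: usize = 4096; // illustrative size

    fn build_cmdline(raw: &str) -> Cmdline {
        let mut cmdline = Cmdline::new(CMDLINE_CAPACITY);
        cmdline.insert_str(raw).expect("command line too long");
        cmdline
    }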
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
They point to a vm_virtio structure (VhostUserConfig) and in order to
make the whole config serializable (through the serde crate for
example), we'd have to add a serde dependency to the vm_virtio crate.
Instead we use a local, serializable structure and convert it to
VhostUserConfig from the DeviceManager code.
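Sketched out, with assumed fields (the vm_virtio type is stubbed so the
example stands alone):

    use serde::{Deserialize, Serialize};

    mod vm_virtio {
        // Stand-in for the real vm_virtio::VhostUserConfig.
        pub struct VhostUserConfig {
            pub sock: String,
            pub num_queues: usize,
            pub queue_size: u16,
        }
    }

    // Local, serializable mirror kept in the VM config.
    #[derive(Serialize, Deserialize)]
    pub struct VhostUserNetConfig {
        pub sock: String,
        pub num_queues: usize,
        pub queue_size: u16,
    }

    impl From<&VhostUserNetConfig> for vm_virtio::VhostUserConfig {
        fn from(c: &VhostUserNetConfig) -> Self {
            vm_virtio::VhostUserConfig {
                sock: c.sock.clone(),
                num_queues: c.num_queues,
                queue_size: c.queue_size,
            }
        }
    }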
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The kernel path was the only mandatory command line option.
With the addition of the --api-socket option, we can run without a
kernel path and get it later through the API.
Since we can end up with VM configurations that are no longer valid by
default, we need to provide a validation check for them. For now, if the
kernel path is not defined, the VM configuration is invalid.
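The check itself is simple; a sketch with assumed names:

    struct VmConfig {
        kernel: Option<String>,
    }

    impl VmConfig {
        // A config without a kernel path cannot boot (yet).
        fn valid(&self) -> bool {
            self.kernel.is_some()
        }
    }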
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The API server will unconditionally run through a UNIX domain socket
whose default path is /run/user/<uid>/cloud-hypervisor.<pid>.
The --api-socket command line option allows overriding that default
with a custom socket path.
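Building the default path could look like this (a sketch assuming the
libc crate; the helper name is made up):

    fn default_api_socket_path() -> String {
        // getuid() never fails; unsafe is only needed for the FFI call.
        let uid = unsafe { libc::getuid() };
        format!("/run/user/{}/cloud-hypervisor.{}", uid, std::process::id())
    }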
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
With the API server socket option, we will be able to support a model
where the user can start cloud-hypervisor with no options or an
alternative API server socket path. In this case, we don't want to try
to start a new guest VM, and for that we need to know if the user has
set any VM configuration at all. Grouping all VM configuration specific
options together is one way to be able to know about it.
If the user has not set any VM configuration, we only start the API
server. If they have set anything, we will verify that the overall
configuration is valid and implicitly convert it into a request to the
API server.
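A sketch of the resulting start-up logic (names assumed):

    struct VmConfig { kernel: Option<String> }

    impl VmConfig {
        fn valid(&self) -> bool { self.kernel.is_some() }
    }

    fn start_api_server() { /* serve the HTTP API on the UNIX socket */ }
    fn request_vm_creation(_config: VmConfig) { /* implicit create request */ }

    fn run(vm_config: Option<VmConfig>) {
        match vm_config {
            // No VM options given: only start the API server.
            None => start_api_server(),
            // VM options given: validate, then convert them into an
            // implicit request to the API server.
            Some(config) if config.valid() => {
                start_api_server();
                request_vm_creation(config);
            }
            Some(_) => eprintln!("invalid VM configuration"),
        }
    }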
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The Cloud Hypervisor HTTP server runs a synchronous, multi-threaded
loop that receives HTTP requests and tries to call the corresponding
endpoint handlers for the request URIs.
An endpoint handler will parse the HTTP request and potentially
translate it into an IPC request. The handler holds a notifier and an
mpsc Sender for respectively notifying and sending the IPC payload to
the VMM API server. The handler then waits for an API server response
and translates it back into an HTTP response.
The HTTP server is responsible for sending the response back to the
caller.
The HTTP server uses a static routes hash table that maps URIs to
endpoint handlers.
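In outline (assumed names; the real server is built on the micro_http
types, simplified here so the sketch stands alone):

    use std::collections::HashMap;
    use std::sync::mpsc::Sender;

    struct Request { body: Vec<u8> }
    struct Response { status: u16, body: Vec<u8> }

    enum ApiRequest { VmCreate(Vec<u8>), VmBoot }

    trait EndpointHandler: Send + Sync {
        // Parse the HTTP request, notify and send the IPC payload to
        // the VMM API server, wait for its answer, build the response.
        fn handle_request(&self, req: &Request, api_sender: Sender<ApiRequest>)
            -> Response;
    }

    // Static URI -> handler routing table, filled once at start-up.
    fn routes() -> HashMap<&'static str, Box<dyn EndpointHandler>> {
        HashMap::new()
    }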
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Based on Firecracker commit 58edf03b.
We're going to use the micro_http crate to serve the cloud-hypervisor
HTTP API.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The cloud-hypervisor API uses HTTP as a transport and is accessible
through a local UNIX socket.
The API root path is /api/v1 and is a collection of RPC-style methods.
All methods are static, unlike typical REST APIs. Variables (e.g. device
IDs) are passed through the request body.
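As an illustration, creating a VM boils down to a plain HTTP request
over the socket (socket path and JSON fields here are illustrative):

    use std::io::Write;
    use std::os::unix::net::UnixStream;

    fn main() -> std::io::Result<()> {
        let mut sock = UnixStream::connect("/run/user/1000/cloud-hypervisor.1234")?;
        let body = r#"{"kernel": "/path/to/vmlinux"}"#;
        let request = format!(
            "PUT /api/v1/vm.create HTTP/1.1\r\nHost: localhost\r\nContent-Length: {}\r\n\r\n{}",
            body.len(),
            body
        );
        sock.write_all(request.as_bytes())
    }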
Fixes: #244
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Based off of crosvm revision b5237bbcf074eb30cf368a138c0835081e747d71,
add a CMOS device. This helps environments that can't use kvmclock to
get the current time (e.g. Windows and EFI).
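The interface being emulated is the classic index/data port pair; a
greatly simplified sketch (real RTC registers are BCD-encoded unless
configured otherwise):

    const INDEX_PORT: u16 = 0x70; // register select, bit 7 gates NMI
    const DATA_PORT: u16 = 0x71;  // selected register's data

    struct Cmos { index: u8 }

    impl Cmos {
        fn io_write(&mut self, port: u16, data: u8) {
            if port == INDEX_PORT {
                self.index = data & 0x7f;
            }
        }

        fn io_read(&self, port: u16) -> u8 {
            if port != DATA_PORT {
                return 0;
            }
            let secs = std::time::SystemTime::now()
                .duration_since(std::time::UNIX_EPOCH)
                .unwrap()
                .as_secs();
            match self.index {
                0x00 => (secs % 60) as u8, // RTC seconds register
                _ => 0,
            }
        }
    }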
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
We no longer build vubridge, but we still end up cloning qemu and
building virtiofs and the block backend every time.
Fixes: #312
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
An integration test relying on the new vhost-user-net backend now
replaces the previous test using the QEMU test backend. This allows
us to avoid building the QEMU backend, and we now really exercise the
vhost-user-net implementation as it is used for the ssh communication
in this test.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Create a vhost-user-net backend with a Tap interface, to offload network
transactions from cloud-hypervisor. The goal is to provide flexibility
about which backend is in use, but also more security, as it allows
users to isolate the backend with different security profiles since it
runs as a dedicated process on the host.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
While implementing the vhost-user-net backend with a Tap interface,
enabling the tx vring kept failing, since a check in
slave_req_handler.rs requires acked_protocol_features to be set up as a
prerequisite, which is filled by the set_protocol_features call.
Add this call to the vhost-user-net device implementation to address
the issue.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
Remove the workspace section from vhost_user_backend/Cargo.toml to have
vhost-user-backend compiled as part of cloud-hypervisor. Add it to the
workspace in the top-level Cargo.toml so vhost-user-net can consume it.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
The PCI Express Firmware specification says that the region may
be included in the E820 table (but it must always be in the ACPI
tables).
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The PCI Express Firmware spec says that the region to be used for PCI
MMCONFIG should be reserved as part of the motherboard's resources in
the ACPI tables.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Refactor the PCI data structures to move device ownership to a PciBus
struct. This PciBus struct can then be used by both a PciConfigIo and
PciConfigMmio in order to expose the configuration space via both IO
port and also via MMIO for PCI MMCONFIG.
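A sketch of the resulting ownership (names from the commit; the device
trait is simplified):

    use std::sync::{Arc, Mutex};

    trait PciDevice: Send {
        fn read_config_register(&self, reg_idx: usize) -> u32;
    }

    // The bus now owns the devices...
    struct PciBus {
        devices: Vec<Box<dyn PciDevice>>,
    }

    // ...and both configuration-space frontends share the same bus.
    struct PciConfigIo {
        bus: Arc<Mutex<PciBus>>, // 0xcf8/0xcfc port I/O accesses
    }

    struct PciConfigMmio {
        bus: Arc<Mutex<PciBus>>, // memory-mapped MMCONFIG accesses
    }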
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The PCI MMCONFIG area must be below 4GiB and must not be part of the
device space. Shrink the device area and put the PCI MMCONFIG region
above it.
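For scale: ECAM gives each PCI function 4KiB of configuration space, so
one bus takes 32 devices x 8 functions x 4KiB = 1MiB, and a full
256-bus segment needs 256MiB of the area below 4GiB (the region
reserved here can be smaller if it is sized for fewer buses).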
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Patch the table with the currently used constants. This will be relevant
when we want to adjust the size of the PCI device area to accommodate the
PCI MMCONFIG region.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This commit fixes all the remaining issues that were found as part of
the integration with vhost-user-net.
It fixes the way to notify that a vring is used, by using the proper
EventFd.
It removes the process_queue() function from the trait, since the
complexity it was introducing was leading to deadlocks with mutexes.
It moves the functions for registering and unregistering custom backend
events from the VringEpollHandler to the VringWorker. This allows for a
lot of simplification and solves a deadlock issue.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The original logic does not cause any problem as long as the offset is
zero, which is currently the case. However, if the offset is not zero,
converting a VMM address into a backend process address must take the
offset into account.
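The fix amounts to the following translation (field names assumed,
following the vhost-user memory region layout):

    struct Region {
        guest_phys_addr: u64, // region start in VMM/guest space
        mmap_addr: u64,       // where the backend mmap()ed the fd
        mmap_offset: u64,     // region offset inside that mapping
    }

    fn vmm_to_backend(r: &Region, vmm_addr: u64) -> u64 {
        // Without the mmap_offset term this only works for offset 0.
        r.mmap_addr + r.mmap_offset + (vmm_addr - r.guest_phys_addr)
    }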
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>