Move the release of the managed memory region from the DeviceManager to
the vhost-user-fs device. This ensures that the memory is freed when the
device is unplugged, since unplugging drops the device and runs its Drop
implementation.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Move the release of the managed memory region from the DeviceManager to
the virtio-pmem device. This ensures that the memory is freed when the
device is unplugged, since unplugging drops the device and runs its Drop
implementation.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
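A minimal sketch of the pattern behind these two commits, using the
virtio-pmem case (the field names and the allocator method are
assumptions, not the actual implementation): the device now owns its
managed region, so unplugging, which drops the device, releases the
memory.

    use std::sync::{Arc, Mutex};

    // Hedged sketch: the device, not the DeviceManager, owns the managed
    // region; GuestAddress and SystemAllocator stand in for the project's
    // own types.
    struct VirtioPmem {
        mapping_addr: GuestAddress, // start of the managed region
        mapping_size: u64,          // size of the managed region
        allocator: Arc<Mutex<SystemAllocator>>,
    }

    impl Drop for VirtioPmem {
        fn drop(&mut self) {
            // Hand the region back to the allocator when the device goes away.
            self.allocator
                .lock()
                .unwrap()
                .free_mmio_addresses(self.mapping_addr, self.mapping_size);
        }
    }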
By implementing queues_per_thread(), this patch fills in the last
missing piece needed to enable multithreaded multiqueue support in the
vhost-user-net backend implementation.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
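As a hedged sketch of what this can look like in the net backend (the
num_queue_pairs field is an assumption for illustration), each worker
thread claims one RX/TX queue pair through a bitmap of queue indexes:

    impl VhostUserBackend for VhostUserNetBackend {
        fn queues_per_thread(&self) -> Vec<u64> {
            // One entry per thread; bits 2*i and 2*i + 1 select the
            // RX/TX pair owned by thread i.
            (0..self.num_queue_pairs)
                .map(|i| 0b11u64 << (i * 2))
                .collect()
        }
        // ... remaining trait methods elided ...
    }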
By adding a "thread_id" parameter to handle_event(), the backend crate
can now indicate to the backend implementation which thread triggered
the processing of some events.
This is applied to the vhost-user-net backend and allows the code to be
simplified significantly, since each thread is identical.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
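A hedged sketch of the resulting simplification (the types come from the
backend crate, and the exact signature shown here is an assumption): the
thread_id indexes straight into the per-thread state, so there is no
need to search for which queues fired.

    fn handle_event(
        &self,
        device_event: u16,
        evset: epoll::Events,
        vrings: &[Arc<RwLock<Vring>>],
        thread_id: usize,
    ) -> VhostUserBackendResult<bool> {
        // Every thread is identical: pick the state for the thread that
        // triggered the event and let it process its own queue pair.
        let mut thread = self.threads[thread_id].lock().unwrap();
        thread.handle_event(device_event, evset, vrings)
    }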
By adding a "thread_index" parameter to the exit_event() function of
the VhostUserBackend trait, the backend crate can now ask the backend
implementation for the exit event associated with a specific thread.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to prepare for the support of multithreaded multiqueues, the
structure VhostUserNetThread is simplified to hold only one RX queue,
one TX queue, and one TAP interface.
Following this change, VhostUserNetBackend now holds a list of threads
instead of going through each thread to handle multiqueues.
These changes neatly decouple the backend from each thread. This allows
for a lot of simplification: since all threads are identical, the
handling of events becomes very straightforward.
One important point is that each thread can be locked when in use,
without causing any contention with other threads since the backend
doesn't need to be locked anymore.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
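An illustrative shape for the simplified types (field names are
assumptions based on the description above):

    struct VhostUserNetThread {
        kill_evt: EventFd, // per-thread exit event
        tap: Tap,          // one TAP interface per thread
        rx: RxVirtio,      // the single RX queue owned by this thread
        tx: TxVirtio,      // the single TX queue owned by this thread
    }

    struct VhostUserNetBackend {
        // One lock per thread: locking a thread while it processes its
        // queue pair does not contend with any other thread, and the
        // backend itself no longer needs a lock.
        threads: Vec<Mutex<VhostUserNetThread>>,
    }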
Now that multiple worker threads can be run from the backend crate, it
is important that each backend implementation can access every worker
thread.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to support multiple queues running on multiple threads and
increase I/O performance, this commit introduces a new
queues_per_thread() function in the VhostUserBackend trait.
This gives each backend implementation the opportunity to define which
queues belong to which thread.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
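A hedged sketch of the trait addition (the default implementation shown
is an assumption): each u64 is a bitmap of the queue indexes serviced by
one worker thread.

    pub trait VhostUserBackend {
        // One entry per worker thread; bit n set means queue n is
        // handled by that thread.
        fn queues_per_thread(&self) -> Vec<u64> {
            // Assumed default: a single thread handling every queue.
            vec![0xffff_ffff]
        }
        // ... other trait methods elided ...
    }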
By changing this function to take &self instead of &mut self, and after
adapting all backends, we should be able to implement multithreaded
multiqueue support without hitting a bottleneck on backend locking.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The blk, net and fs backends have all been updated to avoid requiring
handle_event(&mut self). This allows the backend crate to avoid taking a
write lock on the backend object, removing a potential contention point
when multiple threads handle multiple queues.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
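An illustrative fragment of the worker loop in the backend crate (the
surrounding names are assumptions): with handle_event(&self), a shared
read lock is enough, so worker threads no longer serialize on the
backend.

    // Before: self.backend.write().unwrap().handle_event(...)
    // After: a read lock lets all worker threads run concurrently.
    self.backend
        .read()
        .unwrap()
        .handle_event(device_event, evset, &self.vrings, self.thread_id)?;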
While testing self-spawned vhost-user backends, it appeared that the
backend was aborting due to a missing system call in the seccomp
filters. mremap() was the culprit, and this patch simply adds it to the
whitelist.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This should make it a pointer in the generated Go code so that it will
be omitted and thus not populated with an unhelpful default value.
Fixes: #1015
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This ensures that the field is filled with None when it is not specified
as part of the deserialisation step.
Fixes: #1015
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
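A minimal illustration of the behaviour (the struct and field names are
hypothetical): making the field an Option lets serde fill it with None
when the key is absent from the input.

    use serde::Deserialize;

    #[derive(Deserialize)]
    struct DiskConfig {
        path: String,
        iommu: Option<bool>, // absent from the request body -> None
    }

    fn main() {
        let cfg: DiskConfig =
            serde_json::from_str(r#"{ "path": "/tmp/disk.img" }"#).unwrap();
        assert!(cfg.iommu.is_none());
    }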
We can now allow guests that specify an initramfs to boot
using the PVH boot protocol.
Signed-off-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
Fill in and write to guest memory the boot module structure needed to
allow a guest using the PVH boot protocol to load an initramfs image.
Signed-off-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
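For reference, a hedged sketch of the module entry involved: the layout
mirrors hvm_modlist_entry from the PVH start_info ABI, while the Rust
names here are illustrative.

    // PVH boot module entry, written to guest memory and referenced from
    // the start_info structure via modlist_paddr / nr_modules.
    #[repr(C)]
    #[derive(Copy, Clone, Default)]
    struct HvmModlistEntry {
        paddr: u64,         // guest-physical address of the initramfs image
        size: u64,          // size of the image in bytes
        cmdline_paddr: u64, // optional module command line (0 when unused)
        reserved: u64,
    }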
This allows the validation of this requirement for both command line
booted VMs and those booted via the API.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
As VmConfig::parse() also does validation work, it only makes sense to
parse the VM options on the VM boot path.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Replace the existing VmConfig::valid() check with a call into
.validate() as part of earlier config setup or boot API checks.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When performing an API boot validate the configuration. For now only
some very basic validation is performed but in subsequent commits
the validation will be extended.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The configuration comes from a variety of places (commandline, REST API
and restore) however some validation was only happening on the command
line parsing path. Therefore introduce a new ability to validate the
configuration before proceeding so that this can be used for commandline
and API boots.
For now move just the console and serial output mode validation under
the new validation API.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
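A sketch of the shared entry point under these assumptions (the error
variant and field names are illustrative):

    impl VmConfig {
        pub fn validate(&self) -> Result<(), ValidationError> {
            // Serial and console output cannot both be attached to the TTY.
            if self.console.mode == ConsoleOutputMode::Tty
                && self.serial.mode == ConsoleOutputMode::Tty
            {
                return Err(ValidationError::DoubleTtyMode);
            }
            Ok(())
        }
    }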
Making sure the OpenAPI definition is up to date with the newly added
structure and parameters supporting VM restoration.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that the restore path uses RestoreConfig structure, we add a new
parameter called "prefault" to it. This will give the user the ability
to populate the pages corresponding to the mapped regions backed by the
snapshotted memory files.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The goal here is to move the restore parameters into a dedicated
structure that can be reused across the entire codebase, making the
addition or removal of a parameter easier.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
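An illustrative shape for the structure (the field names are
assumptions; prefault is the parameter introduced by the commit above):

    use std::path::PathBuf;

    pub struct RestoreConfig {
        // Location of the snapshot: vm.json plus the memory region files.
        pub source_url: PathBuf,
        // Whether to populate the restored mappings up front.
        pub prefault: bool,
    }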
When CoW can be used, the VM restoration time is reduced, but the pages
are not populated. This can lead to some slowness from the guest when
accessing these pages.
Depending on the use case, we might prefer a slower boot time in
exchange for better performance from the guest at runtime. The way to
achieve this is to prefault the pages, using the MAP_POPULATE flag along
with CoW.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
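A minimal sketch of the flag combination (the prefault variable is
illustrative and would come from the restore parameters):

    // Copy-on-write mapping of the snapshot region file; MAP_POPULATE
    // prefaults the pages so the guest does not take the hit at runtime.
    let flags = libc::MAP_NORESERVE
        | libc::MAP_PRIVATE
        | if prefault { libc::MAP_POPULATE } else { 0 };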
This patch extends the previous behavior on the restore codepath.
Instead of copying the memory regions content from the snapshot files
into the new memory regions, the VMM will use the snapshot region files
as the backing files behind each mapped region. This is done in order to
reduce the time for the VM to be restored.
When the source VM was initially started with a backing file, its
memory was mapped with the MAP_SHARED flag. In that case, we cannot use
the CoW trick to speed up the VM restore path, and we simply fall back
to copying the memory regions content.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Whenever a MemoryManager is restored from a snapshot, the memory regions
associated with it might need to directly back the mapped memory for
increased performance. If that's the case, a list of external regions
is provided and the MemoryManager should simply ignore what's coming
from the MemoryConfig.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that we can choose specific mmap flags for the guest RAM, we create
a new parameter "copy_on_write" meaning that the memory mappings backed
by a file should be performed with MAP_PRIVATE instead of MAP_SHARED.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to anticipate the need for special mmap flags when memory
mapping the guest RAM, we need to switch from the from_file() wrapper
to the build() wrapper.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
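A hedged sketch of the call site, assuming the vm-memory
MmapRegion::build() signature of that era (file, offset, size and
copy_on_write are illustrative variables):

    use vm_memory::mmap::MmapRegion;
    use vm_memory::FileOffset;

    // build() exposes prot and flags directly, unlike from_file(), so
    // the caller can choose MAP_PRIVATE (CoW) or MAP_SHARED per region.
    let flags = libc::MAP_NORESERVE
        | if copy_on_write { libc::MAP_PRIVATE } else { libc::MAP_SHARED };
    let region = MmapRegion::build(
        Some(FileOffset::new(file, offset)),
        size,
        libc::PROT_READ | libc::PROT_WRITE,
        flags,
    )?;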
A few KVM ioctls were missing in order to perform both snapshot and
restore while keeping seccomp enabled.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This connects the dots, making the request from the user reach
the actual implementation for restoring the VM.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This connects the dots, making the request from the user reach
the actual implementation for snapshotting the VM.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The MemoryManager is somewhat of a special case, as its restore()
function was not implemented as part of the Snapshottable trait.
Instead, because restoring memory regions relies both on vm.json and on
every memory region snapshot file, the memory manager is restored at
creation time. This makes the restore path slightly different from
CpuManager, Vcpu, DeviceManager and Vm, but achieves the correct
restoration of the MemoryManager along with its memory regions filled
with the correct content.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
This only implements the send() function, in order to store the full
Vm state into a file.
This needs to be extended for live migration by adding more transport
methods, and the recv() function must also be implemented.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
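A hedged sketch of that file-backed path (the names and error handling
are illustrative, and it assumes the snapshot is serializable with
serde):

    use std::fs::File;
    use std::io::{Error, ErrorKind, Write};
    use std::path::Path;

    fn send_to_file(snapshot: &Snapshot, destination_url: &str) -> std::io::Result<()> {
        // Serialize the aggregated Vm snapshot and store it as vm.json.
        let json = serde_json::to_vec(snapshot)
            .map_err(|e| Error::new(ErrorKind::Other, e))?;
        File::create(Path::new(destination_url).join("vm.json"))?
            .write_all(&json)
    }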
By aggregating snapshots from the CpuManager, the MemoryManager and the
DeviceManager, Vm implements the snapshot() function from the
Snapshottable trait.
And by restoring snapshots from the CpuManager, the MemoryManager and
the DeviceManager, Vm implements the restore() function from the
Snapshottable trait.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
This implements the send() function of the Transportable trait, so that
the guest memory regions can be saved into one file per region.
This will need to be extended for live migration, as it will require
other transport methods and the recv() function will need to be
implemented too.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
In order to snapshot the content of the guest RAM, the MemoryManager
must implement the Snapshottable trait.
Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
Implement the Snapshottable trait for Vcpu, and then implement it for
CpuManager. Note that CpuManager goes through the Snapshottable
implementation of every Vcpu in order to implement the trait for
itself.
Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
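A hedged sketch of the aggregation (the trait methods, Snapshot helpers
and vcpus field are all assumptions for illustration):

    impl Snapshottable for CpuManager {
        fn id(&self) -> String {
            "cpu-manager".to_string()
        }

        fn snapshot(&self) -> Result<Snapshot, MigratableError> {
            // The CpuManager snapshot is just the collection of the
            // snapshots taken from every vCPU it manages.
            let mut snapshot = Snapshot::new(&self.id());
            for vcpu in &self.vcpus {
                snapshot.add_snapshot(vcpu.lock().unwrap().snapshot()?);
            }
            Ok(snapshot)
        }
    }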
Anticipating the need for a slightly different function for restoring
vCPUs, this patch factors out most of the vCPU creation code so that it
can be reused for migration purposes.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
These two new helpers will be useful for capturing a vCPU's state and
restoring it at a later time.
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
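A hedged sketch of what such helpers can look like on top of kvm-ioctls,
assuming a Vcpu wrapper holding its kvm_ioctls::VcpuFd in a field named
fd; the VcpuState struct and the exact field set are assumptions, while
the getter/setter calls exist on VcpuFd.

    use kvm_bindings::{kvm_fpu, kvm_lapic_state, kvm_regs, kvm_sregs};

    // Illustrative alias; kvm-ioctls reports errors via errno.
    type Result<T> = std::result::Result<T, vmm_sys_util::errno::Error>;

    struct VcpuState {
        regs: kvm_regs,
        sregs: kvm_sregs,
        fpu: kvm_fpu,
        lapic: kvm_lapic_state,
    }

    impl Vcpu {
        // Capture the current KVM state of this vCPU.
        fn kvm_state(&self) -> Result<VcpuState> {
            Ok(VcpuState {
                regs: self.fd.get_regs()?,
                sregs: self.fd.get_sregs()?,
                fpu: self.fd.get_fpu()?,
                lapic: self.fd.get_lapic()?,
            })
        }

        // Restore a previously captured state into this vCPU.
        fn set_kvm_state(&self, state: &VcpuState) -> Result<()> {
            self.fd.set_regs(&state.regs)?;
            self.fd.set_sregs(&state.sregs)?;
            self.fd.set_fpu(&state.fpu)?;
            self.fd.set_lapic(&state.lapic)?;
            Ok(())
        }
    }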