Now Vcpu::run() returns a boolean value to VcpuManager, indicating
whether the VM is going to reboot (false) or just continue (true).
Moving the handling of the hypervisor vCPU run result from Vcpu to
VcpuManager gives us the flexibility to handle more scenarios, such as
shutting down on AArch64.
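A rough sketch of how the manager side can consume that value (the Vcpu
stub and error type below are illustrative, not the real ones):

    // Illustrative stand-in for the real Vcpu type.
    struct Vcpu;

    impl Vcpu {
        fn run(&mut self) -> Result<bool, std::io::Error> {
            // The real code enters the hypervisor here; pretend the guest rebooted.
            Ok(false)
        }
    }

    // VcpuManager-side loop: keep running while run() returns Ok(true),
    // break out on Ok(false) (reboot) or on an error.
    fn vcpu_loop(vcpu: &mut Vcpu) {
        loop {
            match vcpu.run() {
                Ok(true) => continue,
                Ok(false) => break,
                Err(e) => {
                    eprintln!("vCPU run error: {}", e);
                    break;
                }
            }
        }
    }
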
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Set the test case test_snapshot_restore to X86-only, instead of excluding
it from the test command line.
The command line option was added because we used to support migration
with Virtio-MMIO, but not Virtio-PCI.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Rather than filling the guest memory from a file at the point the guest
memory region is created, fill it from the file later. This simplifies
the region creation code and also adds flexibility for sourcing the
guest memory from somewhere other than an on-disk file.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Tests not ported include 1) the ones that start guest VMs without
network (e.g. test_net_hotplug, test_initramfs), and 2) test_vfio, which
involves an L2 guest. Also, some tests that use the bionic guest image
are given an extended timeout (120s) for 'wait_vm_boot'.
Signed-off-by: Bo Chen <chen.bo@intel.com>
Instead of waiting blindly with a fixed amount of sleep time, we can
use the `wait-timeout` crate to explicitly wait for VM shutdown (with a
timeout). This can reduce the execution time of some tests
substantially. Also, this patch increases the `shutdown` timeout for
'test_reboot', which should fix the recent sporadic failures on this
test.
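For reference, a minimal sketch of how `wait-timeout` can be used for
this (the helper name and timeout value are illustrative):

    use std::process::Child;
    use std::time::Duration;
    use wait_timeout::ChildExt;

    // Returns true if the VM process exited cleanly within the timeout.
    fn wait_for_shutdown(child: &mut Child, timeout_secs: u64) -> bool {
        match child.wait_timeout(Duration::from_secs(timeout_secs)) {
            Ok(Some(status)) => status.success(), // exited in time
            Ok(None) => false,                    // still running after the timeout
            Err(_) => false,                      // wait() itself failed
        }
    }
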
Signed-off-by: Bo Chen <chen.bo@intel.com>
Address a build failure caused by activity on the development virtio-fs
branch by using the stable fork.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Instead of blindly waiting for 20-40s for the guest VM to boot, this
patch waits for a notification from the guest VM explicitly by using a
simple TcpListener on the host and a custom systemd service in the
guest.
This patch also ports a few tests to use this new mechanism, while more
tests are to be ported.
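The host side of that handshake can be as simple as the sketch below
(the port and function name are illustrative, and the real helper also
enforces a timeout):

    use std::net::TcpListener;

    // Block until the guest's systemd service connects, which signals
    // that the VM has finished booting.
    fn wait_vm_boot(port: u16) -> std::io::Result<()> {
        let listener = TcpListener::bind(("0.0.0.0", port))?;
        let (_stream, peer) = listener.accept()?;
        println!("guest booted, notification received from {}", peer);
        Ok(())
    }
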
Signed-off-by: Bo Chen <chen.bo@intel.com>
As a mirror of bdbea19e23, which ensured
that GuestMemoryMmap::read_exact_from() was used to read all of the file
into the region, ensure that all of the guest memory region is written to disk.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Use a Result<> type with an error to simplify the code in start_vmm().
This will also make it easier to add cleanup functionality.
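A rough sketch of the resulting shape (the error variant and setup step
are made up for illustration):

    use std::os::unix::net::UnixListener;

    // Illustrative error type; the real one carries more variants.
    #[derive(Debug)]
    enum StartVmmError {
        CreateApiSocket(std::io::Error),
    }

    fn start_vmm() -> Result<(), StartVmmError> {
        // With a dedicated error type, each step becomes a one-liner with `?`,
        // and the caller has a single place for reporting and cleanup.
        let _api_socket = UnixListener::bind("/tmp/ch-api.sock")
            .map_err(StartVmmError::CreateApiSocket)?;
        Ok(())
    }
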
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This gives a nicer user experience, and this error can now be used as
the source for other errors.
See: #1910
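As a generic illustration of what "used as the source for other errors"
means (the error names below are hypothetical, not the actual types):

    use std::error::Error;
    use std::fmt;

    #[derive(Debug)]
    struct LowLevelError(String);

    impl fmt::Display for LowLevelError {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            write!(f, "low-level failure: {}", self.0)
        }
    }

    impl Error for LowLevelError {}

    #[derive(Debug)]
    struct HighLevelError {
        source: LowLevelError,
    }

    impl fmt::Display for HighLevelError {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            write!(f, "operation failed")
        }
    }

    impl Error for HighLevelError {
        // Exposing the lower-level error as source() lets callers (and the
        // user-facing error message) walk the whole error chain.
        fn source(&self) -> Option<&(dyn Error + 'static)> {
            Some(&self.source)
        }
    }
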
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Now that our CI is running with a kernel that is new enough to
support io_uring, we can turn this feature on by default.
See: #1561
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Any occurrence of a variable containing `ext_region` is replaced with
the less confusing name `saved_region`. The point is to clearly identify
the memory regions that might have been saved during a snapshot, whereas
`ext`, standing for `external`, was pretty unclear.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In the context of saving the memory regions' content through a snapshot,
using the term "backing file" brings confusion with the actual backing
file that might back the memory mapping.
To avoid such conflicting naming, the 'backing_file' field from the
MemoryRegion structure gets replaced with 'content', as this designates
the potential file containing the memory region data.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Use GuestRegionMmap::read_exact_from() to ensure that all of the file is
read into the guest. This addresses an issue where
GuestRegionMmap::read_from() was only copying the first 2GiB of the
memory, which led to snapshot-restore failing when the guest RAM was
2GiB or greater.
This change also propagates any error from the copying upwards.
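The distinction mirrors Read::read() vs Read::read_exact() in std; a
hedged sketch of the fixed call (the helper name is illustrative):

    use std::fs::File;
    use vm_memory::{Bytes, GuestMemoryRegion, GuestRegionMmap, MemoryRegionAddress};

    fn fill_region_from_file(
        region: &GuestRegionMmap,
        file: &mut File,
    ) -> Result<(), vm_memory::GuestMemoryError> {
        let len = region.len() as usize;
        // read_exact_from() keeps reading until `len` bytes have been copied,
        // whereas read_from() may stop after a single short read.
        region.read_exact_from(MemoryRegionAddress(0), file, len)
    }
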
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Split out the HTTP request handling code from ch-remote into a new
crate that can be used in other places where talking to the API server
over HTTP is necessary.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Use the trait for Read/Write rather than specifying the concrete type.
This allows for the functionality to be used for different socket types.
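A minimal sketch of the idea (the function name is illustrative):

    use std::io::{Read, Write};

    // Taking `T: Read + Write` instead of a concrete stream type lets the
    // same request helper work over any socket type.
    fn send_request<T: Read + Write>(socket: &mut T, request: &[u8]) -> std::io::Result<Vec<u8>> {
        socket.write_all(request)?;
        let mut response = Vec::new();
        socket.read_to_end(&mut response)?;
        Ok(response)
    }
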
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When restoring, if a region of RAM is backed by anonymous memory, i.e.
from memfd_create(), then copy the contents of the RAM from the file
that has been saved to disk.
Previously the code would map the memory from that file into the guest
using a MAP_PRIVATE mapping. This has the effect of
minimising the restore time but introduces an issue where the restored VM
does not have the same structure as the snapshotted VM; in particular,
memory that was anonymously backed in the original is backed by files
in the restored VM.
This creates two problems:
* The snapshot data is mapped from files for the pages of the guest,
which prevents the storage from being reclaimed.
* When snapshotting again, the guest memory will not be correctly saved:
since it looks as if it was backed by a file it will not be written to
disk, but as it is a MAP_PRIVATE mapping the changes will never be
written back to that file either. This results in incorrect behaviour.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Now that virtio-balloon is no longer declared as part of the --memory
parameter, the integration tests are updated to keep the correct
behavior.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
With the standalone `--balloon` parameter fully functional at this
point, we can get rid of the balloon options from the --memory
parameter.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that we have a new dedicated way of asking for a balloon through the
CLI and the REST API, we can move all the balloon code to the device
manager. This allows us to simplify the memory manager, which is already
quite complex.
It also simplifies the behavior of the balloon resizing command. Instead
of providing the expected size for the RAM, which is complex when memory
zones are involved, it now expects the balloon size. This is a much more
straightforward behavior as it really resizes the balloon to the desired
size. In addition to the simplification, the benefit of this approach is
that it does not need to be tied to the memory manager at all.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This introduces a new way of defining the virtio-balloon device. Instead
of going through the --memory parameter, the idea is to consider the balloon
as a standalone virtio device.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The snapshot/restore feature is not working because some CPU states are
not properly saved, which means they can't be restored later on.
First, we ensure the CPUID is stored so that it can be properly
restored later. The code is simplified and pushed down to the hypervisor
crate.
Second, we identify for each vCPU whether the Hyper-V SynIC device is
emulated or not. If it is, that means some specific MSRs will be
set by the guest. These MSRs must be saved in order to properly restore
the VM.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The watchdog device is created through the "--watchdog" parameter. At
most a single watchdog can be created per VM.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This device operates a single virtq. When the driver offers a descriptor
to the device it is interpreted as a "ping" to indicate that the guest
is alive. A periodic timer fires, and if there has not been a "ping"
from the guest when the timer fires, the device will reset the VM.
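A rough sketch of that policy, not the actual device code (names and
types below are illustrative):

    use std::time::{Duration, Instant};

    struct WatchdogState {
        last_ping: Instant,
        timeout: Duration,
    }

    impl WatchdogState {
        fn on_queue_event(&mut self) {
            // The driver placed a descriptor on the virtq: the guest is alive.
            self.last_ping = Instant::now();
        }

        fn on_timer_expiry(&self) -> bool {
            // Returns true when the VM should be reset.
            self.last_ping.elapsed() > self.timeout
        }
    }
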
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Before Virtio-MMIO was removed, we passed an optional PCI space address
parameter to the AArch64 code for generating the FDT. The address was
None if the transport was MMIO.
Now that Virtio-PCI is the only option, the parameter is mandatory.
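Roughly, the signature changes as sketched below (types and names are
simplified and illustrative, not the actual FDT code):

    // Before: the MMIO transport passed None, PCI passed Some(address).
    fn create_fdt_old(_pci_space_address: Option<(u64, u64)>) { /* ... */ }

    // After: Virtio-PCI is the only transport, so the address is always present.
    fn create_fdt_new(_pci_space_address: (u64, u64)) { /* ... */ }
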
Signed-off-by: Michael Zhao <michael.zhao@arm.com>