Commit Graph

1034 Commits

Author SHA1 Message Date
Rob Bradford
a8643dc523 vm-device, vmm: Wait for barrier if one is returned
Wait for the barrier if one is provided by the result of the MMIO or
PIO write.
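
A minimal sketch of the caller side, using illustrative stand-in types
rather than the actual vm-device API:

    use std::sync::{Arc, Barrier};

    // Stand-in for the bus: a write may hand back a barrier.
    struct Bus;
    impl Bus {
        fn write(&self, _addr: u64, _data: &[u8]) -> Option<Arc<Barrier>> {
            None
        }
    }

    fn handle_pio_write(bus: &Bus, addr: u64, data: &[u8]) {
        if let Some(barrier) = bus.write(addr, data) {
            // Block the KVM exit until the asynchronous work triggered
            // by this write has completed.
            barrier.wait();
        }
    }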

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-12-17 11:23:53 +00:00
Rob Bradford
1fc6d50f3e misc: Make Bus::write() return an Option<Arc<Barrier>>
This can be used to indicate to the caller that it should wait on the
barrier before returning, as there is some asynchronous activity
triggered by the write which requires the KVM exit to block until it's
completed.

This is useful for having the vCPU thread wait for the VMM thread to
proceed with activating the virtio devices.
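
A sketch of the device side that would return such a barrier; all names
here are illustrative, not the actual implementation:

    use std::sync::{Arc, Barrier};
    use std::thread;

    // On a write that kicks off work in another thread (e.g. device
    // activation), return a two-party barrier for the caller to wait on.
    fn write_triggering_activation() -> Option<Arc<Barrier>> {
        let barrier = Arc::new(Barrier::new(2));
        let vmm_side = barrier.clone();
        thread::spawn(move || {
            // ... activate the virtio device here ...
            vmm_side.wait(); // release the vCPU thread
        });
        Some(barrier)
    }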

See #1863

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-12-17 11:23:53 +00:00
Muminul Islam
f4af668d76 hypervisor, vmm: Implement MsiInterruptOps for mshv
Co-Developed-by: Wei Liu <liuwe@microsoft.com>
Signed-off-by: Wei Liu <liuwe@microsoft.com>
Signed-off-by: Muminul Islam <muislam@microsoft.com>
2020-12-09 14:55:20 +01:00
Muminul Islam
9ce6c3b75c hypervisor, vmm: Feature guard KVM specific code
Some parts of the code base and some functions are purely KVM specific
for now; we don't have support for them in mshv at the moment, but we
plan to add it in the future. We are feature-guarding them with KVM.
For example, KVM has mp_state and CPU clock support, which we don't
have for mshv. In order to keep those paths building, we make them
compile only for KVM.
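
A minimal sketch of the pattern, assuming "kvm" and "mshv" cargo
features (the function names are illustrative):

    // Compiled only when the "kvm" feature is enabled.
    #[cfg(feature = "kvm")]
    fn save_mp_state() {
        // KVM-specific implementation lives here.
    }

    // The mshv build gets a stub until support lands.
    #[cfg(feature = "mshv")]
    fn save_mp_state() {
        unimplemented!("mp_state is not supported on mshv yet");
    }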

Signed-off-by: Muminul Islam <muislam@microsoft.com>
2020-12-09 14:55:20 +01:00
Rob Bradford
9e6a023f7f vmm: Don't continue to lookup debug I/O port
When using a PIO write to 0x80, which is a special case, handle it and
then return without going through the resolve.

This removes an extra warning that is reported.
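
A sketch of the control flow (the surrounding code is hypothetical):

    const DEBUG_IOPORT: u16 = 0x80;

    fn pio_write(port: u16, data: &[u8]) {
        if port == DEBUG_IOPORT {
            // Special case: log and return without resolving the port
            // on the IO bus, avoiding the spurious warning.
            log_debug_ioport(data);
            return;
        }
        // ... normal bus resolution and write ...
    }

    fn log_debug_ioport(_data: &[u8]) {}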

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-12-03 07:25:42 -08:00
Rob Bradford
ffaab46934 misc: Use a more relaxed memory model when possible
When a total ordering between multiple atomic variables is not required,
use Ordering::Acquire with atomic loads and Ordering::Release with
atomic stores.

This will improve performance, as these orderings do not require the
memory fence on x86_64 that Ordering::SeqCst incurs.

Add a comment to the code in the vCPU handling code where it operates on
multiple atomics to explain why Ordering::SeqCst is required.
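
For illustration, the Acquire/Release pairing looks like this (the
variable name is made up):

    use std::sync::atomic::{AtomicBool, Ordering};

    static VCPU_RUNNING: AtomicBool = AtomicBool::new(false);

    // A Release store pairs with an Acquire load: the loader sees all
    // writes made before the store. On x86_64 the Release store
    // compiles to a plain mov, avoiding the fence a SeqCst store needs.
    fn set_running() {
        VCPU_RUNNING.store(true, Ordering::Release);
    }

    fn is_running() -> bool {
        VCPU_RUNNING.load(Ordering::Acquire)
    }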

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-12-02 19:04:30 +01:00
Rob Bradford
280d4fb245 vmm: Include device tree in vm.info API
The DeviceNode cannot be fully represented as it embeds a Rust-style
enum (i.e. one carrying data), which is instead represented by a simple
associative array.
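
Illustrative shape only (not the actual DeviceNode definition): with
serde's default externally tagged representation, a data-carrying enum
serializes to a one-entry map, i.e. a simple associative array.

    use serde::Serialize;

    #[derive(Serialize)]
    enum Resource {
        // Serializes as {"Mmio":{"base":...}} in JSON.
        Mmio { base: u64 },
        PioAddressRange { base: u16, size: u16 },
    }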

Fixes: #1167

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-12-01 16:44:25 +01:00
Rob Bradford
593a958fe5 pci, vmm: Include VFIO devices in device tree
Fixes: #1687

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-12-01 10:49:04 +01:00
Rob Bradford
b2608ca285 vmm: cpu: Fix clippy issues inside test
Found by:  cargo clippy --all-features --all --tests

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-11-26 09:32:46 +01:00
Rob Bradford
6f63cd00eb vmm: device_manager: Remove ActivatedBackend struct
This was used for vhost-user self spawning but no longer has any users.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-11-25 20:47:43 +01:00
Rob Bradford
b5b97f7b05 vmm: When receiving a migration store the config
The configuration is stored separately from the Vm in the VMM. The
failure to store the config was preventing the VM from shutting down
correctly, as Vmm::vm_delete() checks for the presence of the config.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-11-25 01:27:26 +01:00
Rob Bradford
df6b52924f vmm: Unlink created socket after source connects
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-11-25 01:27:26 +01:00
Rob Bradford
1ab1341775 vmm: seccomp_filters: Add KVM_GET_DIRTY_LOG to permitted calls
The live migration support added use of this ioctl but it wasn't
included in the permitted list.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-11-25 01:27:26 +01:00
Samuel Ortiz
fadeb98c67 cargo: Bulk update
Includes updates for ssh2, cc, syn, tinyvec, backtrace, micro-http and libssh2.

Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
2020-11-23 12:25:31 +01:00
Rob Bradford
0fec326582 hypervisor, vmm: Remove shared ownership of VmmOps
This interface is used by the vCPU thread to delegate responsibility for
handling MMIO/PIO operations and to support different approaches than a
VM exit.

During profiling I found that we were spending 13.75% of the boot CPU
usage acquiring access to the object holding the VmmOps via
ArcSwap::load_full():

    13.75%     6.02%  vcpu0            cloud-hypervisor    [.] arc_swap::ArcSwapAny<T,S>::load_full
            |
            ---arc_swap::ArcSwapAny<T,S>::load_full
               |
                --13.43%--<hypervisor::kvm::KvmVcpu as hypervisor::cpu::Vcpu>::run
                          std::sys_common::backtrace::__rust_begin_short_backtrace
                          core::ops::function::FnOnce::call_once{{vtable-shim}}
                          std::sys::unix::thread::Thread::new::thread_start

However, since the object implementing VmmOps does not need to be
mutable and is only used from the vCPU side, we can change the ownership
to a simple Arc<> that is passed in when calling create_vcpu().

This completely removes the above CPU usage from subsequent profiles.
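
A sketch of the resulting ownership model (the types are stand-ins):

    use std::sync::Arc;

    trait VmmOps: Send + Sync {}

    struct Vcpu {
        // Shared, immutable access: one refcount bump at creation,
        // no ArcSwap::load_full() on every VM exit.
        vmm_ops: Arc<dyn VmmOps>,
    }

    fn create_vcpu(vmm_ops: Arc<dyn VmmOps>) -> Vcpu {
        Vcpu { vmm_ops }
    }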

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-11-19 00:16:02 +01:00
Rob Bradford
c622278030 config, device_manager: Add support for disabling io_uring for testing
Add a config parameter to --disk called "_disable_io_uring" (the
underscore prefix indicating it is not for public consumption). Use
this option to disable io_uring if it would otherwise be used.
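
Usage would look something like this (the exact boolean value syntax is
an assumption):

target/debug/cloud-hypervisor --disk path=focal.raw,_disable_io_uring=on ...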

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-11-18 11:47:54 +01:00
Rob Bradford
3ac9b6c404 vmm: Implement live migration
Now the VM is paused/resumed by the migration process itself.

0. The guest configuration is sent to the destination
1. Dirty page log tracking is started by start_memory_dirty_log()
2. All guest memory is sent to the destination
3. Up to 5 attempts are made to send the dirty guest memory to the
   destination...
4. ...before the VM is paused
5. One last set of dirty pages is sent to the destination
6. The guest is snapshotted and sent to the destination
7. When the migration is completed the destination unpauses the received
   VM.
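
A rough sketch of the send side; every type and helper below is a
hypothetical stand-in for the real implementation:

    struct Vm;
    impl Vm {
        fn start_dirty_log(&mut self) {}
        fn pause(&mut self) {}
    }
    fn send_config(_vm: &Vm) {}
    fn send_all_memory(_vm: &Vm) {}
    fn send_dirty_memory(_vm: &Vm) {}
    fn send_snapshot(_vm: &Vm) {}

    fn send_migration(vm: &mut Vm) {
        send_config(vm);        // 0: guest configuration
        vm.start_dirty_log();   // 1: start dirty page log tracking
        send_all_memory(vm);    // 2: full copy of guest memory
        for _ in 0..5 {         // 3: up to 5 dirty-memory passes...
            send_dirty_memory(vm);
        }
        vm.pause();             // 4: ...before pausing the VM
        send_dirty_memory(vm);  // 5: one last set of dirty pages
        send_snapshot(vm);      // 6: snapshot of the guest
        // 7: the destination unpauses the received VM
    }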

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-11-17 16:57:11 +00:00
Rob Bradford
b34703d29f vmm: vm: Add dirty log related passthrough methods
This allows code running in the VMM to access the VM's MemoryManager
functionality for managing the dirty log, including resetting it and
generating a range table.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-11-17 16:57:11 +00:00
Rob Bradford
cf6763dfdb vmm: migration: Add missing response check
A read and check of the response was missing when sending the memory
to the destination.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-11-17 16:57:11 +00:00
Rob Bradford
11a69450ba vm-migration, vmm: Send configuration in separate step
Prior to sending the memory, the full state is not needed, only the
configuration. This is sufficient to create the appropriate structures
in the guest and have the memory allocations ready for filling.

Update the protocol documentation to add a separate config step and move
the state to after the memory is transferred. As the VM is created in a
separate step from restoring it, this requires a slightly different
constructor as well as saving the VM object for the subsequent commands.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-11-17 16:57:11 +00:00
Rob Bradford
c62e409827 memory_manager: Generate a MemoryRangeTable for dirty ranges
In order to do this we must extend the MemoryManager API to add the
ability to specify the tracking of the dirty pages when creating the
userspace mappings and also keep track of the userspace mappings that
have been created for RAM regions.

Currently the dirty pages are collected into ranges based on a block
size of 64 pages. The algorithm could be tweaked to create smaller
ranges, but for now, if any page in the block of 64 is dirty, the whole
block is added to the range.
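
A sketch of that coalescing, assuming 4KiB pages and one u64 bitmap
word per 64-page block:

    const PAGE_SIZE: u64 = 4096;
    const BLOCK_PAGES: u64 = 64;

    // Returns (start, length) ranges in bytes; if any bit in a 64-page
    // block is set, the whole block is treated as dirty.
    fn dirty_ranges(bitmap: &[u64]) -> Vec<(u64, u64)> {
        let mut ranges: Vec<(u64, u64)> = Vec::new();
        for (i, word) in bitmap.iter().enumerate() {
            if *word != 0 {
                let start = i as u64 * BLOCK_PAGES * PAGE_SIZE;
                let len = BLOCK_PAGES * PAGE_SIZE;
                match ranges.last_mut() {
                    // Merge contiguous blocks into a single range.
                    Some((s, l)) if *s + *l == start => *l += len,
                    _ => ranges.push((start, len)),
                }
            }
        }
        ranges
    }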

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-11-17 16:57:11 +00:00
Rob Bradford
8baa244ec1 hypervisor: Add control for dirty page logging
When creating a userspace mapping provide a control for enabling the
logging of dirty pages.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-11-17 16:57:11 +00:00
Anatol Belski
b399287430 memory_manager: Make addressable space size 64k aligned
While the addressable space size reduction of 4k is necessary due to
the Linux bug, the 64k alignment of the addressable space size is
required by Windows. This patch satisfies both.
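
One way to satisfy both constraints (the actual computation in the
patch may differ):

    // Shrink by 4 KiB for the Linux bug, then align down to 64 KiB
    // for Windows; the result honours both requirements.
    fn addressable_size(size: u64) -> u64 {
        (size - (4 << 10)) & !((64 << 10) - 1)
    }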

Signed-off-by: Anatol Belski <anbelski@linux.microsoft.com>
2020-11-16 16:39:11 +00:00
Rob Bradford
ca60adda70 vmm: Add support for sending and receiving migration if VM is paused
This is tested by:

Source VMM:

target/debug/cloud-hypervisor --kernel ~/src/linux/vmlinux \
--pmem file=~/workloads/focal.raw --cpus boot=1 \
--memory size=2048M \
--cmdline"root=/dev/pmem0p1 console=ttyS0" --serial tty --console off \
--api-socket=/tmp/api1 -v

Destination VMM:

target/debug/cloud-hypervisor --api-socket=/tmp/api2 -v

And the following commands:

target/debug/ch-remote --api-socket=/tmp/api1 pause
target/debug/ch-remote --api-socket=/tmp/api2 receive-migration unix:/tmp/foo &
target/debug/ch-remote --api-socket=/tmp/api1 send-migration unix:/tmp/foo
target/debug/ch-remote --api-socket=/tmp/api2 resume

The VM is then responsive on the destination VMM.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-11-11 11:07:24 +01:00
Rob Bradford
dfe2dadb3e vmm: memory_manager: Make the snapshot source directory an Option
This allows the code to be reused when creating the VM from a snapshot
when doing VM migration.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-11-11 11:07:24 +01:00
Rob Bradford
7ac764518c vmm: api: Implement API support for migration
Add API entry points with stub implementation for sending and receiving
a VM from one VMM to another.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-11-11 11:07:24 +01:00
Julio Montes
270631922d vmm: openapi: remove omitempty json tag
Due to a known limitation in the OpenAPITools/openapi-generator tool,
it's impossible to send Go zero values, like false and 0, to
cloud-hypervisor, because `omitempty` is added to the json tag if a
field is not required.
Set cache_size, dax, num_queues and queue_size as required to remove
`omitempty` from the json tag.

fixes #1961

Signed-off-by: Julio Montes <julio.montes@intel.com>
2020-11-10 19:09:17 +01:00
Rob Bradford
7b77f1ef90 vmm: Remove self-spawning functionality for vhost-user-{net,block}
This also removes the need to look up the "exe" symlink for finding
the VMM executable path.

Fixes: #1925

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-11-09 00:16:15 +01:00
Rob Bradford
0005d11e32 vmm: config: Require a socket when using vhost-user
With self-spawning being removed, both parameters are now required.

Fixes: #1925

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-11-09 00:16:15 +01:00
Michael Zhao
093a581ee1 vmm: Implement VM rebooting on AArch64
The previous logic for handling AArch64 system events treated both
SHUTDOWN and RESET as RESET.

Now we handle them differently:
- RESET event will trigger Vmm::vm_reboot(),
- SHUTDOWN event will trigger Vmm::vm_shutdown().
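
The new mapping boils down to something like this (illustrative names,
not the exact match arms):

    enum SystemEvent { Reset, Shutdown }

    fn handle_system_event(event: SystemEvent) {
        match event {
            SystemEvent::Reset => vm_reboot(),
            SystemEvent::Shutdown => vm_shutdown(), // was: treated as RESET
        }
    }

    fn vm_reboot() {}
    fn vm_shutdown() {}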

Signed-off-by: Michael Zhao <michael.zhao@arm.com>
2020-10-30 17:14:44 +00:00
Michael Zhao
69394c9c35 vmm: Handle hypervisor VCPU run result from Vcpu to VcpuManager
Now Vcpu::run() returns a boolean value to VcpuManager, indicating
whether the VM is going to reboot (false) or just continue (true).
Moving the handling of hypervisor VCPU run result from Vcpu to
VcpuManager gives us the flexibility to handle more scenarios like
shutting down on AArch64.
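
A sketch of the manager-side loop consuming that boolean (the types are
hypothetical):

    struct Vcpu;
    impl Vcpu {
        // true: continue running; false: the VM is going to reboot.
        fn run(&mut self) -> Result<bool, ()> {
            Ok(false) // stub: pretend a reset was requested
        }
    }

    fn vcpu_loop(vcpu: &mut Vcpu) {
        while let Ok(true) = vcpu.run() {
            // keep executing the guest
        }
        // false (or error): fall through so the manager can decide
        // whether to reboot or shut down.
    }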

Signed-off-by: Michael Zhao <michael.zhao@arm.com>
2020-10-30 17:14:44 +00:00
Rob Bradford
cb88ceeae8 vmm: memory_manager: Move the restoration of guest memory later
Rather than filling the guest memory from a file at the point the guest
memory region is created, fill it from the file later. This simplifies
the region creation code and also adds flexibility for sourcing the
guest memory from somewhere other than an on-disk file.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-10-30 12:31:47 +01:00
Rob Bradford
21db6f53c8 vmm: memory_manager: Write all guest region to disk
As a mirror of bdbea19e23, which ensured that
GuestRegionMmap::read_exact_from() was used to read the whole file into
the region, ensure that the whole guest memory region is written to
disk.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-10-27 12:11:31 -07:00
Rob Bradford
dfd21cbfc5 vmm: Use thiserror/anyhow for vmm::Error
This gives a nicer user experience and this error can now be used as the
source for other errors based off this.
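
For illustration, a thiserror-based error with a source attached (the
variants are made up, not the actual vmm::Error):

    use thiserror::Error;

    #[derive(Error, Debug)]
    pub enum Error {
        #[error("Cannot bind to the API socket: {0}")]
        Bind(#[source] std::io::Error),
        #[error("Cannot create the EventFd: {0}")]
        EventFd(#[source] std::io::Error),
    }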

See: #1910

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-10-27 13:27:23 +00:00
Sebastien Boeuf
7e127df415 vmm: memory_manager: Replace 'ext_region' by 'saved_region'
Any occurrence of a variable containing `ext_region` is replaced with
the less confusing name `saved_region`. The point is to clearly identify
the memory regions that might have been saved during a snapshot, while
the `ext`, standing for `external`, was pretty unclear.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-10-23 21:59:52 +02:00
Sebastien Boeuf
c0e8e5b53f vmm: memory_manager: Replace 'backing_file' variable names
In the context of saving the memory regions content through snapshot,
using the term "backing file" brings confusion with the actual backing
file that might back the memory mapping.

To avoid such conflicting naming, the 'backing_file' field from the
MemoryRegion structure gets replaced with 'content', as this is
designating the potential file containing the memory region data.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-10-23 21:59:52 +02:00
Rob Bradford
bdbea19e23 vmm: memory_manager: Completely fill guest ram from snapshot
Use GuestRegionMmap::read_exact_from() to ensure that all of the file is
read into the guest. This addresses an issue where
GuestRegionMmap::read_from() was only copying the first 2GiB of the
memory, and so led to snapshot-restore failing when the guest RAM was
2GiB or greater.

This change also propagates any error from the copying upwards.
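
The distinction mirrors std::io, shown here with std names for
illustration: a single read() may return fewer bytes than requested,
while read_exact() loops until the buffer is filled or errors out.

    use std::io::{Read, Result};

    fn fill_region<R: Read>(src: &mut R, region: &mut [u8]) -> Result<()> {
        // Errors (UnexpectedEof) instead of silently truncating.
        src.read_exact(region)
    }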

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-10-23 17:56:19 +01:00
Rob Bradford
a60b437f89 vmm: memory_manager: Always copy anonymous RAM regions from disk
When restoring if a region of RAM is backed by anonymous memory i.e from
memfd_create() then copy the contents of the ram from the file that has
been saved to disk.

Previously the code would map the memory from that file into the guest
using a MAP_PRIVATE mapping. This has the effect of minimising the
restore time but introduces an issue where the restored VM does not
have the same structure as the snapshotted VM: in particular, memory
that was anonymously backed in the original is backed by files in the
restored VM.

This creates two problems:

* The snapshot data is mapped from files for the pages of the guest
  which prevents the storage from being reclaimed.
* When snapshotting again, the guest memory will not be correctly
  saved: it looks like it is backed by a file, so it will not be
  written to disk, yet as it is a MAP_PRIVATE mapping the changes will
  never be written to that file either. This results in incorrect
  behaviour.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-10-23 12:34:32 +02:00
Sebastien Boeuf
f4e391922f vmm: Remove balloon options from --memory parameter
The standalone `--balloon` parameter being fully functional at this
point, we can get rid of the balloon options from the --memory
parameter.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-10-22 16:33:16 +02:00
Sebastien Boeuf
3594685279 vmm: Move balloon code from MemoryManager to DeviceManager
Now that we have a new dedicated way of asking for a balloon through the
CLI and the REST API, we can move all the balloon code to the device
manager. This allows us to simplify the memory manager, which is already
quite complex.

It also simplifies the behavior of the balloon resizing command. Instead
of providing the expected size for the RAM, which is complex when memory
zones are involved, it now expects the balloon size. This is a much more
straightforward behavior as it really resizes the balloon to the desired
size. In addition to the simplification, the benefit of this approach is
that it does not need to be tied to the memory manager at all.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-10-22 16:33:16 +02:00
Sebastien Boeuf
1d479e5e08 vmm: Introduce new --balloon parameter
This introduces a new way of defining the virtio-balloon device. Instead
of going through the --memory parameter, the idea is to consider balloon
as a standalone virtio device.
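
Hypothetical invocation (the exact size syntax is assumed to match the
other memory options):

target/debug/cloud-hypervisor ... --balloon size=1G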

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-10-22 16:33:16 +02:00
Sebastien Boeuf
28e12e9f3a vmm, hypervisor: Fix snapshot/restore for Windows guest
The snapshot/restore feature is not working because some CPU states are
not properly saved, which means they can't be restored later on.

First thing, we ensure the CPUID is stored so that it can be properly
restored later. The code is simplified and pushed down to the hypervisor
crate.

Second thing, we identify for each vCPU if the Hyper-V SynIC device is
emulated or not. In case it is, that means some specific MSRs will be
set by the guest. These MSRs must be saved in order to properly restore
the VM.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-10-21 19:11:03 +01:00
Rob Bradford
885ee9567b vmm: Add support for creating virtio-watchdog
The watchdog device is created through the "--watchdog" parameter. At
most a single watchdog can be created per VM.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
2020-10-21 16:02:39 +01:00
Michael Zhao
0b0596ef30 arch: Simplify PCI space address handling in AArch64 FDT
Before Virtio-mmio was removed, we passed an optional PCI space address
parameter to AArch64 code for generating the FDT; the address was none
if the transport was MMIO.
Now that Virtio-PCI is the only option, the parameter is mandatory.

Signed-off-by: Michael Zhao <michael.zhao@arm.com>
2020-10-21 12:20:30 +01:00
Michael Zhao
2f2e10ea35 arch: Remove GICv2
Virtio-mmio is removed; now virtio-pci is the only option for the
virtio transport layer. We use MSI for PCI device interrupts, but
GICv2, the legacy interrupt controller, doesn't support MSI. That makes
GICv2 impractical for Cloud Hypervisor, so we can remove it.

Signed-off-by: Michael Zhao <michael.zhao@arm.com>
2020-10-19 14:58:48 +01:00
Sebastien Boeuf
af3c6c34c3 vmm: Remove mmio and pci differentiation
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-10-19 14:58:48 +01:00
Sebastien Boeuf
7cbd47a71a vmm: Prevent KVM device fd from being unusable from VfioContainer
When shutting down a VM using VFIO, the following error has been
detected:

vfio-ioctls/src/vfio_device.rs:312 -- Could not delete VFIO group:
KvmSetDeviceAttr(Error(9))

After some investigation, it appears the KVM device file descriptor used
for removing a VFIO group was already closed. This is coming from the
Rust sequence of Drop, from the DeviceManager all the way down to
VfioDevice.

Because the DeviceManager owns passthrough_device, which is effectively
a KVM device file descriptor, when the DeviceManager is dropped, the
passthrough_device follows, with the effect of closing the KVM device
file descriptor. Problem is, VfioDevice has not been dropped yet and it
still needs a valid KVM device file descriptor.

That's why the simple way to fix this issue, which comes from Rust
dropping all resources, is to make Linux accountable for it by
duplicating the file descriptor. This way, even when the
passthrough_device is dropped and the KVM device file descriptor
closed, a duplicated instance is still valid and owned by the
VfioContainer.
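
The pattern is essentially this sketch (the names are illustrative):

    use std::fs::File;

    // try_clone() duplicates the descriptor (dup under the hood), so
    // the clone owned by the VfioContainer stays valid even after the
    // passthrough device is dropped and its fd closed.
    fn keep_fd_alive(device: &File) -> std::io::Result<File> {
        device.try_clone()
    }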

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-10-15 19:15:28 +02:00
Wei Liu
d667ed0c70 vmm: don't call notify_guest_clock_paused when Hyper-V emulation is on
We turn on that emulation for Windows. Windows does not have KVM's PV
clock, so calling notify_guest_clock_paused results in an error.

Signed-off-by: Wei Liu <liuwe@microsoft.com>
2020-10-15 19:14:25 +02:00
Sebastien Boeuf
1b9890b807 vmm: cpu: Set CPU physical bits based on user input
If the user specified a maximum physical bits value through the
`max_phys_bits` option of the `--cpus` parameter, the guest CPUID
will be patched accordingly to ensure the guest sees the right
number of physical bits.
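
For reference, guests read the physical address width from CPUID leaf
0x8000_0008, EAX bits [7:0]; patching amounts to rewriting that byte
(the helper name is made up):

    fn patch_phys_bits(eax: &mut u32, phys_bits: u8) {
        *eax = (*eax & !0xff) | u32::from(phys_bits);
    }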

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-10-13 18:58:36 +02:00
Sebastien Boeuf
aec88e20d7 vmm: memory_manager: Rely on physical bits for address space size
If the user provided a maximum physical bits value for the vCPUs, the
memory manager will adapt the guest physical address space accordingly
so that devices are not placed beyond the addressable range implied by
the specified value.

It's important to note that if the number exceeds what is available on
the host, the smaller number will be picked.
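
A sketch of that clamping (the function name is made up):

    // Never advertise more physical bits than the host supports; the
    // end of the guest address space is then (1 << bits) - 1.
    fn effective_phys_bits(host: u8, requested: Option<u8>) -> u8 {
        requested.map_or(host, |r| r.min(host))
    }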

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2020-10-13 18:58:36 +02:00