During migration, the VM object is created by new_from_migration with a
NULL KVM clock, so vm.set_clock() will not be called during VM resume.
If the guest uses kvm-clock, its ticks will be stopped after migration.
As the clock was already saved to the snapshot, add a method to restore
it before the VM resumes during migration. After that, the guest's
kvm-clock works correctly.
Signed-off-by: Ren Lei <ren.lei4@zte.com.cn>
Connecting to a restored VM using kvm-clock takes a long time, as the
clock is NOT restored immediately after the VM resumes from a snapshot.
This is because 9ce6c3b incorrectly removed vm_snapshot.clock and
always passed None to new_from_memory_manager, which results in
kvm_set_clock() never being called during restore from snapshot.
Fixes: 9ce6c3b
Signed-off-by: Ren Lei <ren.lei4@zte.com.cn>
Now that the control queue is correctly handled by the backend, there's
no need to handle it as well from the VMM.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Before this change, the FDT was loaded at the end of RAM, so its
address was not fixed, while UEFI (currently edk2) requires fixed
addresses to find the FDT and the RSDP.
Now the FDT is moved to the beginning of RAM, which is a fixed address.
The RSDP is written 2 MiB after the FDT, also at a fixed address.
The kernel comes 2 MiB after the RSDP.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
These messages occur predominantly during the boot process but will
also occur during events such as hotplug.
They cover all the significant steps of the boot and can be helpful for
diagnosing performance and functionality issues during boot.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Separate the population of the memory and the HOB from the TDX
initialisation of the memory so that the latter can happen after the CPU
is initialised.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Now that all crates use edition = "2018", the majority of the "extern
crate" statements can be removed. Only those for importing macros need
to remain.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add support for an OVS vhost-user backend connecting as the vhost-user
client. This patch introduces a new option to our `--net` parameter,
called 'server', in order to ask the VMM to run as the server for the
vhost-user socket.
Fixes: #1745
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Create a temporary copy of the config, add the new device to it and
validate that. This needs to be done separately from adding it to the
real config to avoid race conditions that might result in config changes
being overwritten.
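A minimal sketch of the clone-validate-then-commit pattern, using made-up
types (the real VmConfig and its validation are more involved):
```rust
// Minimal sketch with made-up types; the real VmConfig::validate() is richer.
#[derive(Clone, Default)]
struct VmConfig {
    disks: Vec<String>,
}

impl VmConfig {
    fn validate(&self) -> Result<(), String> {
        if self.disks.len() > 32 {
            return Err("too many disks".to_string());
        }
        Ok(())
    }
}

fn add_disk(config: &mut VmConfig, disk: String) -> Result<(), String> {
    // Validate against a temporary copy first...
    let mut candidate = config.clone();
    candidate.disks.push(disk);
    candidate.validate()?;
    // ...and only commit the change once validation has passed.
    *config = candidate;
    Ok(())
}

fn main() {
    let mut config = VmConfig::default();
    add_disk(&mut config, "disk0".to_string()).unwrap();
    assert_eq!(config.disks.len(), 1);
}
```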
Fixes: #2564
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
To handle the fact that devices are stored in an Option<Vec<T>> and to
reduce duplicated code, use a generic function to add the devices to the
struct.
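A minimal sketch of such a helper, with a stand-in element type (the real
config fields differ):
```rust
// Generic helper: push into an Option<Vec<T>>, creating the Vec on demand.
fn add_to_config<T>(items: &mut Option<Vec<T>>, item: T) {
    if let Some(items) = items {
        items.push(item);
    } else {
        *items = Some(vec![item]);
    }
}

fn main() {
    // Stand-in for e.g. the disks/net/fs fields of the VM config.
    let mut disks: Option<Vec<String>> = None;
    add_to_config(&mut disks, "disk0".to_string());
    add_to_config(&mut disks, "disk1".to_string());
    assert_eq!(disks.unwrap().len(), 2);
}
```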
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The latest kvm-sgx code has renamed sgx_virt_epc device node
to sgx_vepc. Update cloud-hypervisor code and documentation to
follow this.
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
Because the http thread no longer needs to create the api socket,
remove the socket, bind and listen syscalls from the seccomp filter.
Signed-off-by: William Douglas <william.douglas@intel.com>
Instead of using the http server's method that creates the fd itself
(causing the http thread to need to support the socket, bind and listen
syscalls), create the socket fd in the vmm thread and use the http
server's new method that supports passing in this fd for the api
socket.
Signed-off-by: William Douglas <william.douglas@intel.com>
To avoid race issues where the api-socket may not be created by the
time a cloud-hypervisor caller is ready to look for it, enable the
caller to pass the api-socket fd directly.
Avoid breaking current callers by allowing the --api-socket path to be
passed as it is now, in addition to through the path argument.
Signed-off-by: William Douglas <william.r.douglas@gmail.com>
In order to support using Versionize for state structures it is
necessary to use simpler, primitive data types in the state definitions
used for snapshot restore.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Windows guests read this field upon PCI device ejection. Let's make sure
we don't return an error as this is a valid access. We simply return an
empty u32 since the ejection is done right away upon the write access,
which means there's no pending ejection that needs to be reported to the
guest.
Here is the error that was shown during PCI device removal:
ERROR:vmm/src/device_manager.rs:3960 -- Accessing unknown location at
base 0x7ffffee000, offset 0x8
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of tracking at a block level of 64 pages, we now collect
dirty pages one by one. This improves the efficiency of dirty memory
tracking during live migration.
Signed-off-by: Bo Chen <chen.bo@intel.com>
By disabling this KVM feature, we prevent the guest from using the APF
(Asynchronous Page Fault) mechanism. The kernel has recently switched to
using interrupts to notify about a page being ready, but for some
reason, this is causing unexpected behavior with Cloud Hypervisor, as
it will make the vcpu thread spin at 100%.
While the issue is being investigated, it's better to disable this KVM
feature to prevent 100% CPU usage in some cases.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The original code had a generic type E. It was later replaced by a
concrete type. The code should have been simplified when the replacement
happened.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The MCRS method returns a 64-bit memory range descriptor. The
calculation is supposed to be done as follows:
max = min + len - 1
However, every operand is represented not as a QWORD but as a
combination of two DWORDs for the high and low parts. Until now, the
calculation was done this way (see also the inline comments):
max.lo = min.lo + len.lo //this may overflow, need to carry over to high
max.hi = min.hi + len.hi
max.hi = max.hi - 1 // subtraction needs to happen on the low part
The calculation has now been corrected as follows:
max.lo = min.lo + len.lo
max.hi = min.hi + len.hi + (max.lo < min.lo) // check for overflow
max.lo = max.lo - 1 // subtract from low part
The relevant part from the generated ASL for the MCRS method:
```
Method (MCRS, 1, Serialized)
{
    Acquire (MLCK, 0xFFFF)
    \_SB.MHPC.MSEL = Arg0
    Name (MR64, ResourceTemplate ()
    {
        QWordMemory (ResourceProducer, PosDecode, MinFixed, MaxFixed, Cacheable, ReadWrite,
            0x0000000000000000, // Granularity
            0x0000000000000000, // Range Minimum
            0xFFFFFFFFFFFFFFFE, // Range Maximum
            0x0000000000000000, // Translation Offset
            0xFFFFFFFFFFFFFFFF, // Length
            ,, _Y00, AddressRangeMemory, TypeStatic)
    })
    CreateQWordField (MR64, \_SB.MHPC.MCRS._Y00._MIN, MINL) // _MIN: Minimum Base Address
    CreateDWordField (MR64, 0x12, MINH)
    CreateQWordField (MR64, \_SB.MHPC.MCRS._Y00._MAX, MAXL) // _MAX: Maximum Base Address
    CreateDWordField (MR64, 0x1A, MAXH)
    CreateQWordField (MR64, \_SB.MHPC.MCRS._Y00._LEN, LENL) // _LEN: Length
    CreateDWordField (MR64, 0x2A, LENH)
    MINL = \_SB.MHPC.MHBL
    MINH = \_SB.MHPC.MHBH
    LENL = \_SB.MHPC.MHLL
    LENH = \_SB.MHPC.MHLH
    MAXL = (MINL + LENL) /* \_SB_.MHPC.MCRS.LENL */
    MAXH = (MINH + LENH) /* \_SB_.MHPC.MCRS.LENH */
    If ((MAXL < MINL))
    {
        MAXH += One /* \_SB_.MHPC.MCRS.MAXH */
    }
    MAXL -= One
    Release (MLCK)
    Return (MR64) /* \_SB_.MHPC.MCRS.MR64 */
}
```
Fixes: #1800
Signed-off-by: Anatol Belski <anbelski@linux.microsoft.com>
Fix the current codebase so that cargo clippy can be run with
the beta toolchain without any error.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
There are two parts:
- Unconditionally zero the output area. The length of the incoming
vector has been seen to range from 1 to 4 bytes, even though just the
first byte might need to be handled. This also ensures that any possibly
unhandled offset will return a zeroed result to the caller. The former
implementation used an I/O port, which seems to behave differently from
MMIO and wouldn't require explicit output zeroing.
- An access with zero offset still takes place and needs to be handled.
Fixes: #2437
Signed-off-by: Anatol Belski <anbelski@linux.microsoft.com>
The following is from the Hyper-V specification v6.0b.
Cpuid leaf 0x40000003 EDX:
Bit 3: Support for physical CPU dynamic partitioning events is
available.
When Windows determines that it is running under a hypervisor, it will
require this cpuid bit to be set to support dynamic CPU operations.
Cpuid leaf 0x40000004 EAX:
Bit 5: Recommend using relaxed timing for this partition. If
used, the VM should disable any watchdog timeouts that
rely on the timely delivery of external interrupts.
This bit was determined to be required after seeing guest BSODs
when the CPU hotplug bit is enabled. Race conditions seem to arise after
a hotplug operation, when a system watchdog has expired.
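A hedged sketch of the bit manipulation described above, on plain u32
register values (the constant names are made up for illustration):
```rust
// Illustrative constants only; not taken from the actual patch.
const HV_CPU_DYNAMIC_PARTITIONING: u32 = 1 << 3; // leaf 0x40000003, EDX bit 3
const HV_RECOMMEND_RELAXED_TIMING: u32 = 1 << 5; // leaf 0x40000004, EAX bit 5

fn patch_hyperv_leaf(leaf: u32, eax: &mut u32, edx: &mut u32) {
    match leaf {
        0x4000_0003 => *edx |= HV_CPU_DYNAMIC_PARTITIONING,
        0x4000_0004 => *eax |= HV_RECOMMEND_RELAXED_TIMING,
        _ => {}
    }
}

fn main() {
    // In reality each leaf has its own register set; the demo reuses one pair.
    let mut eax = 0u32;
    let mut edx = 0u32;
    patch_hyperv_leaf(0x4000_0003, &mut eax, &mut edx);
    patch_hyperv_leaf(0x4000_0004, &mut eax, &mut edx);
    assert_eq!(edx & HV_CPU_DYNAMIC_PARTITIONING, HV_CPU_DYNAMIC_PARTITIONING);
    assert_eq!(eax & HV_RECOMMEND_RELAXED_TIMING, HV_RECOMMEND_RELAXED_TIMING);
}
```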
Closes: #1799
Signed-off-by: Anatol Belski <anbelski@linux.microsoft.com>
error: name `GPIOInterruptDisabled` contains a capitalized acronym
Error: --> devices/src/legacy/gpio_pl061.rs:46:5
|
46 | GPIOInterruptDisabled,
| ^^^^^^^^^^^^^^^^^^^^^ help: consider making the acronym lowercase, except the initial letter: `GpioInterruptDisabled`
|
= note: `-D clippy::upper-case-acronyms` implied by `-D warnings`
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#upper_case_acronyms
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
error: name `FinalizeTDX` contains a capitalized acronym
--> vmm/src/vm.rs:274:5
|
274 | FinalizeTDX(hypervisor::HypervisorVmError),
| ^^^^^^^^^^^ help: consider making the acronym lowercase, except the initial letter: `FinalizeTdx`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#upper_case_acronyms
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
warning: name `AcpiPMTimerDevice` contains a capitalized acronym
--> devices/src/acpi.rs:175:12
|
175 | pub struct AcpiPMTimerDevice {
| ^^^^^^^^^^^^^^^^^ help: consider making the acronym lowercase, except the initial letter: `AcpiPmTimerDevice`
|
= note: `#[warn(clippy::upper_case_acronyms)]` on by default
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#upper_case_acronyms
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
warning: name `IORegion` contains a capitalized acronym
--> pci/src/configuration.rs:320:5
|
320 | IORegion = 0x01,
| ^^^^^^^^ help: consider making the acronym lowercase, except the initial letter (notice the capitalization): `IoRegion`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#upper_case_acronyms
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
warning: name `LocalAPIC` contains a capitalized acronym
--> vmm/src/cpu.rs:197:8
|
197 | struct LocalAPIC {
| ^^^^^^^^^ help: consider making the acronym lowercase, except the initial letter: `LocalApic`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#upper_case_acronyms
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
error: name `TYPE_UNKNOWN` contains a capitalized acronym
--> vm-virtio/src/lib.rs:48:5
|
48 | TYPE_UNKNOWN = 0xFF,
| ^^^^^^^^^^^^ help: consider making the acronym lowercase, except the initial letter: `Type_Unknown`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#upper_case_acronyms
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
error: name `SDT` contains a capitalized acronym
--> acpi_tables/src/sdt.rs:27:12
|
27 | pub struct SDT {
| ^^^ help: consider making the acronym lowercase, except the initial letter: `Sdt`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#upper_case_acronyms
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Drop the generic type E and use IrqRoutingEntry directly. This allows
dropping a bunch of trait bounds from the code.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
Their make_entry functions look the same now. Extract the logic to a
common function.
No functional change.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
There's no need to duplicate the code creating the passthrough_device
since we can factorize it into a function used in both cases (cold
plugged and hot plugged VFIO devices).
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Extend and use the existing DeviceTree to retrieve useful information
related to PCI devices. This removes the duplication with the
pci_devices field, which was internal to the DeviceManager.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Make the code a bit clearer by changing the naming of the structure
holding the list of IRQs reserved for PCI devices. It is also modified
into an array of 32 entries since we know this is the number of PCI
slots that is supported.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We define a new enum in order to classify a PCI device as either virtio
or VFIO. This is a cleaner approach than using the Any trait and
downcasting it to find the object back.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Introduce a tuple holding the information needed by both pci_id_list
and pci_devices, and change pci_devices to be a BTreeMap of this new
tuple.
Now that pci_devices holds the information from pci_id_list,
pci_id_list is no longer needed.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In anticipation of further factorization, pci_id_list is now a hashmap
mapping each PCI b/d/f to its device name.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of relying on a PCI specific device list, we use the DeviceTree
as a reference to determine if a device name is already in use or not.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Only create the API server if we have a valid API server path. For
now this has no functional change, as there is a default API server path
in the clap handling, but it prepares for making this optional.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This commit switches the default serial device from the 16550 to the
Arm dedicated UART controller, the PL011. With this, `ttyAMA0` can be
enabled.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
On AArch64, the interrupt controller (GIC) is emulated by KVM. The VMM
needs to set IRQ routing for devices, including legacy ones.
Before this commit, IRQ routing was only set for MSI; legacy routing
entries of type KVM_IRQ_ROUTING_IRQCHIP were missing. That is why legacy
devices (like the serial device ttyS0) did not work.
The setting of x86 IRQ routing entries is not impacted.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Extend the existing url_to_path() to take the URL string and then use
that to simplify the snapshot/restore code paths.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Relies on the preliminary work allowing virtio devices to be updated
with a single memory at a time instead of updating the entire memory at
once.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The MMIO structure contains the length rather than the maximum address
so it is necessary to subtract the starting address from the end address
to calculate the length.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Load the sections backed by the file into their required addresses in
memory and populate the HOB with details of the memory. Using the HOB
address initialize the TDX state in the vCPUs and finalize the TDX
configuration.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add API to the hypervisor interface and implement for KVM to allow the
special TDX KVM ioctls on the VM and vCPU FDs.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When booting with TDX no kernel is supplied as the TDVF is responsible
for loading the OS. The requirement to have the kernel is still
currently enforced at the validation entry point; this change merely
changes function prototypes and stored state to use Option<> to support
this.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add the skeleton of the "tdx" feature with a module ready inside the
arch crate to store implementation details.
TEST=cargo build --features="tdx"
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When a VM is created with a pty device, on reboot the pty fd (sub
only) will only be associated with the VMM through the epoll event
loop. The fd being polled will have been closed due to the VM itself
dropping the pty files (with the fd index potentially being reused for a
different item, making things quite confusing), and new pty fds will be
opened but not polled for input.
This change creates a structure to encapsulate the information about
the pty fd (main File, sub File and the path to the sub File). On
reboot, a copy of the console and serial pty structs is then passed
down to the new Vm instance which will be used instead of creating a
new pty device.
This resolves the underlying issue from #2316.
Signed-off-by: William Douglas <william.r.douglas@gmail.com>
Now that virtio-mem devices can update VFIO mappings through dedicated
handlers, let's provide them from the DeviceManager.
It is important to note that these handlers should be provided either
to the virtio-mem devices or to the unique virtio-iommu device; the two
options are mutually exclusive.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of letting the VfioPciDevice decide how and when to
perform the DMA mapping/unmapping, we move this responsibility to the
DeviceManager.
The point is to let the DeviceManager choose which guest memory regions
should be mapped or not. In particular, we don't want the virtio-mem
region to be mapped/unmapped, as it is the virtio-mem device's
responsibility to do so.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When memory is resized through ACPI, a new region is added to the guest
memory. This region must also be added to the corresponding memory zone
in order to keep everything in sync.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In particular update for the vmm-sys-util upgrade and all the other
dependent packages. This requires an updated forked version of
kvm-bindings (due to updated vfio-ioctls) but allowed the removal of our
forked version of kvm-ioctls.
The changes to the API from kvm-ioctls and vmm-sys-util required some
other minor changes to the code.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This commit moves both pci and vmm code from the internal vfio-ioctls
crate to the upstream one from the rust-vmm project.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that ExternalDmaMapping is defined in vm-device, let's use it from
there.
This commit also defines the function get_host_address_range() to move
away from the vfio-ioctls dependency.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The help information displayed for our `--disk` option is incorrect and
incomplete, e.g. missing the `direct` and `poll_queue` fields.
Signed-off-by: Bo Chen <chen.bo@intel.com>
The main idea behind this commit is to remove all the complexity
associated with TX/RX handling for virtio-net. By using the writev() and
readv() syscalls, we can get rid of intermediate buffers for both
queues.
The complexity regarding the TAP registration has been simplified as
well. The RX queue is only processed when some data is ready to be
read from the TAP. The event related to the RX queue getting more
descriptors only serves the purpose of registering the TAP file if it's
not registered already.
With all these simplifications, the code is not only more readable but
also more performant. We can see an improvement of 10% for a
single-queue device.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This function can then be used by the TDX code to allocate the memory at
specific locations required for the TDVF to run from.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Update for clippy in Rust 1.50.0:
error: Unnecessary nested match
--> vmm/src/vm.rs:419:17
|
419 | / if let vm_device::BusError::MissingAddressRange = e {
420 | | warn!("Guest MMIO write to unregistered address 0x{:x}", gpa);
421 | | }
| |_________________^
|
= note: `-D clippy::collapsible-match` implied by `-D warnings`
help: The outer pattern can be modified to include the inner pattern.
--> vmm/src/vm.rs:418:17
|
418 | Err(e) => {
| ^ Replace this binding
419 | if let vm_device::BusError::MissingAddressRange = e {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ with this pattern
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#collapsible_match
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
If the function can never return an error this is now a clippy failure:
error: this function's return value is unnecessarily wrapped by `Result`
--> virtio-devices/src/watchdog.rs:215:5
|
215 | / fn set_state(&mut self, state: &WatchdogState) -> io::Result<()> {
216 | | self.common.avail_features = state.avail_features;
217 | | self.common.acked_features = state.acked_features;
218 | | // When restoring enable the watchdog if it was previously enabled. We reset the timer
... |
223 | | Ok(())
224 | | }
| |_____^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_wraps
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Depending on the host OS, the code for looking up the time for the CMOS
may require extra syscalls to be permitted for the vCPU thread.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
With all the preliminary work done in the previous commits, we can
update the VFIO implementation to support INTx along with MSI and MSI-X.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Here we are adding the PCI routing table, commonly called _PRT, to the
ACPI DSDT. For simplification reasons, we chose not to implement PCI
links as this involves dynamic decisions from the guest OS, which
results in lots of complexity both from an AML perspective and from a
device manager perspective.
That's why the _PRT creates a static list of 32 entries, each assigned
with the IRQ number previously reserved by the device manager.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to support INTx for PCI devices, each PCI device must be
assigned an IRQ. This is preliminary work to reserve 8 IRQs which will
be shared across the 32 PCI devices.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In anticipation of accessing the legacy interrupt manager from the
function creating a VFIO PCI device, we store it as part of the
DeviceManager, to make it available for all methods.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The DeviceManager already has a hold onto the MSI interrupt manager,
therefore there's no need to pass it through every function. Instead,
let's simplify the code by using the attribute from DeviceManager's
instance.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Both GIC and IOAPIC must implement a new method notifier() in order to
provide the caller with an EventFd corresponding to the IRQ it refers
to.
This is needed in anticipation of supporting INTx with VFIO PCI
devices.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In anticipation of supporting the notifier function for the legacy
interrupt source group, we need this function to return an EventFd
instead of a reference to this same EventFd.
The reason is we can't return a reference when there's an Arc<Mutex<>>
involved in the call chain.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Swap the last two parameters of guest_mem_{read,write} to be consistent
with other read / write functions.
Use more descriptive parameter names.
No functional change.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This reflects that it generates CPUID state used across all vCPUs.
Further ensure that errors from this function get correctly propagated.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Move the code for populating the CPUID with KVM HyperV emulation details from
the per-vCPU CPUID handling code to the shared CPUID handling code.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Move the code for populating the CPUID with details of the CPU
identification from the per-vCPU CPUID handling code to the shared CPUID
handling code.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Move the code for populating the CPUID with details of the maximum
address space from the per-vCPU CPUID handling code to the shared CPUID
handling code.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add the ability for cloud-hypervisor to create, manage and monitor a
pty for serial and/or console I/O from a user. The reasoning for
having cloud-hypervisor create the ptys is so that clients, libvirt
for example, could exit and later re-open the pty without causing I/O
issues. If the clients were responsible for creating the pty, when
they exit the main pty fd would close and cause cloud-hypervisor to
get I/O errors on writes.
Ideally the main and subordinate pty fds would be kept in the main
vmm's Vm structure. However, because the device manager owns parsing
the configuration for the serial and console devices, the information
is instead stored in new fields under the DeviceManager structure
directly.
From there hooking up the main fd is intended to look as close to
handling stdin and stdout on the tty as possible (there is some future
work ahead for perhaps moving support for the pty into the
vmm_sys_utils crate).
The main fd is used for reading user input and writing to output of
the Vm device. The subordinate fd is used to set up raw mode and it is
kept open in order to avoid I/O errors when clients open and close the
pty device.
The ability to handle multiple inputs as part of this change is
intentional. The current code allows serial and console ptys to be
created and both be used as input. There was an implementation gap
though with the queue_input_bytes needing to be modified so the pty
handlers for serial and console could access the methods on the serial
and console structures directly. Without this change only a single
input source could be processed as the console would switch based on
its input type (this is still valid for tty and isn't otherwise
modified).
Signed-off-by: William Douglas <william.r.douglas@gmail.com>
Use the newly added hugepages_size option if provided by the user to
pick a huge page size when creating the memfd region. If none is
specified, use the system default.
Sadly different huge page sizes cannot be tested by an integration test
as creating a pool of a non-default size cannot be done at runtime (it
requires the kernel to be booted with certain parameters.)
TEST=Manually tested with a kernel booted with both 1GiB and 2MiB huge
pages (hugepagesz=1G hugepages=1 hugepagesz=2M hugepages=512)
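A hedged sketch of the idea (not the actual MemoryManager code), using
memfd_create(2) with the huge page size encoded in the flags; the constant
values come from linux/memfd.h and the helper name is made up:
```rust
// Hedged sketch only; constants are the values from linux/memfd.h and the
// helper name is made up for illustration.
use std::io;

const MFD_HUGETLB: libc::c_uint = 0x0004;
const MFD_HUGE_SHIFT: libc::c_uint = 26;

fn create_hugepage_memfd(size: usize, hugepage_size: Option<u64>) -> io::Result<libc::c_int> {
    let mut flags = MFD_HUGETLB;
    if let Some(hp_size) = hugepage_size {
        // The requested huge page size is encoded as log2(size) in the
        // upper bits of the flags; without it the system default is used.
        flags |= (hp_size.trailing_zeros() as libc::c_uint) << MFD_HUGE_SHIFT;
    }
    let fd = unsafe {
        libc::memfd_create(b"guest_ram\0".as_ptr() as *const libc::c_char, flags)
    };
    if fd < 0 {
        return Err(io::Error::last_os_error());
    }
    // Set the size of the region backing the guest RAM.
    if unsafe { libc::ftruncate(fd, size as libc::off_t) } < 0 {
        return Err(io::Error::last_os_error());
    }
    Ok(fd)
}

fn main() -> io::Result<()> {
    // 128 MiB of guest RAM backed by 2 MiB huge pages (needs a hugepage pool).
    let _fd = create_hugepage_memfd(128 << 20, Some(2 << 20))?;
    Ok(())
}
```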
Fixes: #2230
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows the user to use an alternative huge page size otherwise the
default size will be used.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This commit introduces new information to the VirtioMemZone structure
in order to know if the memory zone is backed by hugepages.
Based on this new information, the virtio-mem device is now able to
determine if madvise(MADV_DONTNEED) should be performed or not. The
madvise documentation specifies that the MADV_DONTNEED advice will fail
if the memory range has been allocated with hugepages.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Signed-off-by: Hui Zhu <teawater@antfin.com>
By introducing a ResizeSender object, we avoid having a Resize clone
with a different content than the original Resize object.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Relying on the simplified version of the synchronous support for RAW
disk files, the new fixed_vhd_sync module in the block_util crate
introduces the synchronous support for fixed VHD disk files.
With this patch, the fixed VHD support is complete as it is implemented
in both synchronous and asynchronous versions.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By using preadv and pwritev directly, we can simply use a RawFd instead
of a File, and we don't need to use the more complex implementation from
the qcow crate.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit adds the asynchronous support for fixed VHD disk files.
It introduces FixedVhd as a new ImageType, moving the image type
detection to the block_util crate (instead of qcow crate).
It creates a new vhd module in the block_util crate in order to handle
VHD footer, following the VHD specification.
It creates a new fixed_vhd_async module in the block_util crate to
implement the asynchronous version of fixed VHD disk file. It relies on
io_uring.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The _EJx built-in methods should not return a value.
dsdt.dsl 813: Return (CEJ0 (0x00))
Warning 3104 - ^ Reserved method should not return a value (_EJ0)
dsdt.dsl 813: Return (CEJ0 (0x00))
Error 6080 - ^ Called method returns no value
Fixes: #2216
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The mutex timeout should be 0xffff rather than 0xfff to disable the
timeout feature.
dsdt.dsl 745: Acquire (\_SB.PRES.CPLK, 0x0FFF)
Warning 3130 - ^ Result is not used, possible operator timeout will be missed
dsdt.dsl 767: Acquire (\_SB.PRES.CPLK, 0x0FFF)
Warning 3130 - ^ Result is not used, possible operator timeout will be missed
dsdt.dsl 775: Acquire (\_SB.PRES.CPLK, 0x0FFF)
Warning 3130 - ^ Result is not used, possible operator timeout will be missed
Fixes: #2216
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This patch enables multi-queue support for creating virtio-net devices by
accepting multiple TAP fds, e.g. '--net fds=3:7'.
Fixes: #2164
Signed-off-by: Bo Chen <chen.bo@intel.com>
Building with 1.51 nightly produces the following warning:
warning: unnecessary trailing semicolon
--> vmm/src/device_manager.rs:396:6
|
396 | };
| ^ help: remove this semicolon
|
= note: `#[warn(redundant_semicolons)]` on by default
warning: 1 warning emitted
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This skeleton commit brings in support for compiling aarch64 with the
"acpi" feature, in preparation for the ACPI enabling. It builds on the
work to move the ACPI hotplug devices from I/O ports to MMIO and
conditionalises any code that is x86_64 only (i.e. because it uses an
I/O port.)
Filling in the aarch64 specific details in tables such as the MADT is
out of scope.
See: #2178
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
It might be useful debugging information for the user to know what kind
of disk file implementation is in use.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that BlockIoUring is the only implementation of virtio-block,
handling both synchronous and asynchronous backends based on the
AsyncIo trait, we can rename it to Block.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the synchronous QCOW file implementation present in the qcow
crate, we created a new qcow_sync module in block_util that ports this
synchronous implementation to the AsyncIo trait.
The point is to reuse virtio-blk asynchronous implementation for both
synchronous and asynchronous backends.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the synchronous RAW file implementation present in the qcow
crate, we created a new raw_sync module in block_util that ports this
synchronous implementation to the AsyncIo trait.
The point is to reuse virtio-blk asynchronous implementation for both
synchronous and asynchronous backends.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the new DiskFile and AsyncIo traits, the implementation of
asynchronous block support does not have to be tied to io_uring anymore.
Instead, the only thing the virtio-blk implementation knows is that it
is using an asynchronous implementation of the underlying disk file.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Currently the GED control is at a fixed I/O port address; instead use
an MMIO address that has been chosen by the allocator.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This patch refines the seccomp filter list for the vCPU thread, as we
are no longer spawning virtio-device threads from the vCPU thread.
Fixes: #2170
Signed-off-by: Bo Chen <chen.bo@intel.com>
This will lead to the triggering of an ACPI button inside the guest in
order to cleanly shutdown the guest.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Use the ACPI GED device to trigger a notification of type
POWER_BUTTON_CHANGED which will ultimately lead to the guest being
notified.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Renamed this bitfield as it will also be used for non-hotplug purposes
such as synthesising a power button.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Older libc (like RHEL7) uses open() rather than openat(). This was
demonstrated through a failure to open /etc/localtime as used by the
gmtime() libc call triggered from the vCPU thread (CMOS device).
Fixes: #2111
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Sometimes when running under the CI tests fail due to a barrier not
being released and the guest blocks on an MMIO write. Add further
debugging to try and identify the issue.
See: #2118
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Based on the LKML feedback, the devices under /dev/sgx/* are
not justified. SGX RFC v40 moves the SGX device nodes to /dev/sgx_*
and this is reflected in kvm-sgx (next branch) too.
Update cloud-hypervisor code and documentation to follow this.
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
If the vCPU thread calls log!() the time difference between the call
time and the boot up time is reported. On most environments and
architectures this is covered by a vDSO call rather than a syscall. However
on some platforms this turns into a syscall.
Fixes: #2080
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
With Rust 1.49 using attributes on a function parameter is not allowed.
The recommended workaround is to put it in a new block.
error[E0658]: attributes on expressions are experimental
--> vmm/src/memory_manager.rs:698:17
|
698 | #[cfg(target_arch = "x86_64")]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: see issue #15701 <https://github.com/rust-lang/rust/issues/15701> for more information
error: removing an expression is not supported in this position
--> vmm/src/memory_manager.rs:698:17
|
698 | #[cfg(target_arch = "x86_64")]
|
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add an "fd=" parameter to allow specifying a TAP fd to use. Currently
only one fd for one queue pair is supported.
Fixes: #2052
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When a device is ready to be activated, signal the VMM thread via an
EventFd that there is a device to be activated. When the VMM receives a
notification on this EventFd, it notifies the device manager to attempt
to activate any devices that have not been activated yet.
As a side effect the VMM thread will create the virtio device threads.
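A hedged sketch of the signalling pattern, assuming vmm-sys-util's EventFd;
the surrounding structure is simplified for illustration:
```rust
// Hedged sketch; real code wires this through the virtio transport and the
// VMM epoll loop.
use vmm_sys_util::eventfd::EventFd;

fn main() -> std::io::Result<()> {
    // Created by the VMM and shared with the virtio-pci transport.
    let activate_evt = EventFd::new(libc::EFD_NONBLOCK)?;
    let device_side = activate_evt.try_clone()?;

    // Device side: the guest driver finished negotiation, the device is
    // ready to be activated, so poke the VMM thread.
    device_side.write(1)?;

    // VMM thread: consume the notification, then ask the DeviceManager to
    // activate any devices that have not been activated yet.
    let _ = activate_evt.read()?;
    // device_manager.activate_pending_devices();  // hypothetical call
    Ok(())
}
```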
Fixes: #1863
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This can be used to indicate to the caller that it should wait on the
barrier before returning, as there is some asynchronous activity
triggered by the write which requires the KVM exit to block until it's
completed.
This is useful for having the vCPU thread wait for the VMM thread to
proceed with activating the virtio devices.
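A hedged sketch of the pattern using only std types: the bus write returns an
optional barrier that the vCPU thread waits on before completing the exit
(names are illustrative):
```rust
use std::sync::{Arc, Barrier};
use std::thread;

fn mmio_write_triggering_activation() -> Option<Arc<Barrier>> {
    // The write kicked off asynchronous device activation in the VMM
    // thread, so hand back a barrier the vCPU thread must wait on.
    let barrier = Arc::new(Barrier::new(2));
    let vmm_side = barrier.clone();
    thread::spawn(move || {
        // ... VMM thread activates the virtio device here ...
        vmm_side.wait();
    });
    Some(barrier)
}

fn main() {
    // vCPU thread: block the KVM exit until the activation has completed.
    if let Some(barrier) = mmio_write_triggering_activation() {
        barrier.wait();
    }
}
```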
See #1863
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This is the initial folder structure of the mshv module inside
the hypervisor crate. The aim of this module is to support Microsoft
Hyper-V as an additional hypervisor.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
Some parts of the code base and some functions are purely KVM specific
for now. We don't have that support in mshv at the moment but plan to
add it in the future, so we guard that code behind the KVM feature. For
example, KVM has mp_state and CPU clock support, which we don't have for
mshv. In order to keep building, that code is compiled only for KVM.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
When a PIO write to 0x80 occurs, which is a special case, handle it and
then return without going through the resolve.
This removes an extra warning that was being reported.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When a total ordering between multiple atomic variables is not required
then use Ordering::Acquire with atomic loads and Ordering::Release with
atomic stores.
This will improve performance as it does not require a memory fence
on x86_64, which Ordering::SeqCst would use.
Add a comment to the code in the vCPU handling code where it operates on
multiple atomics to explain why Ordering::SeqCst is required.
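A minimal, self-contained illustration of the orderings discussed above:
```rust
use std::hint;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let ready = Arc::new(AtomicBool::new(false));
    let reader = ready.clone();

    let writer = thread::spawn(move || {
        // Release: everything written before this store becomes visible to
        // a thread that observes `true` with an Acquire load.
        ready.store(true, Ordering::Release);
    });

    // Acquire pairs with the Release store above. No full fence (SeqCst) is
    // needed because only this single atomic is involved.
    while !reader.load(Ordering::Acquire) {
        hint::spin_loop();
    }
    writer.join().unwrap();
}
```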
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The DeviceNode cannot be fully represented as it embeds a Rust-style
enum (i.e. one with data), which is instead represented by a simple
associative array.
Fixes: #1167
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The configuration is stored separately from the Vm in the VMM. The failure
to store the config was preventing the VM from shutting down correctly
as Vmm::vm_delete() checks for the presence of the config.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The live migration support added use of this ioctl but it wasn't
included in the permitted list.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This interface is used by the vCPU thread to delegate responsibility for
handling MMIO/PIO operations and to support different approaches than a
VM exit.
During profiling I found that we were spending 13.75% of the boot CPU
usage acquiring access to the object holding the VmmOps via
ArcSwap::load_full()
13.75% 6.02% vcpu0 cloud-hypervisor [.] arc_swap::ArcSwapAny<T,S>::load_full
|
---arc_swap::ArcSwapAny<T,S>::load_full
|
--13.43%--<hypervisor::kvm::KvmVcpu as hypervisor::cpu::Vcpu>::run
std::sys_common::backtrace::__rust_begin_short_backtrace
core::ops::function::FnOnce::call_once{{vtable-shim}}
std::sys::unix::thread::Thread::new::thread_start
However since the object implementing VmmOps does not need to be mutable
and it is only used from the vCPU side we can change the ownership to
being a simple Arc<> that is passed in when calling create_vcpu().
This completely removes the above CPU usage from subsequent profiles.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add a config parameter to --disk called "_disable_io_uring" (the
underscore prefix indicating it is not for public consumption.) Use this
option to disable io_uring if it would otherwise be used.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Now the VM is paused/resumed by the migration process itself.
0. The guest configuration is sent to the destination
1. Dirty page log tracking is started by start_memory_dirty_log()
2. All guest memory is sent to the destination
3. Up to 5 attempts are made to send the dirty guest memory to the
destination...
4. ...before the VM is paused
5. One last set of dirty pages is sent to the destination
6. The guest is snapshotted and sent to the destination
7. When the migration is completed the destination unpauses the received
VM.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows code running in the VMM to access the VM's MemoryManager's
functionality for managing the dirty log, including resetting it as well
as generating a table.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Prior to sending the memory, the full state is not needed, only the
configuration. This is sufficient to create the appropriate structures
in the guest and have the memory allocations ready for filling.
Update the protocol documentation to add a separate config step and move
the state to after the memory is transferred. As the VM is created in a
separate step from restoring it, this requires a slightly different
constructor as well as saving the VM object for the subsequent commands.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In order to do this we must extend the MemoryManager API to add the
ability to specify the tracking of the dirty pages when creating the
userspace mappings and also keep track of the userspace mappings that
have been created for RAM regions.
Currently the dirty pages are collected into ranges based on a block
level of 64 pages. The algorithm could be tweaked to create smaller
ranges but for now if any page in the block of 64 is dirty the whole
block is added to the range.
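A self-contained sketch of the range building described above: any dirty bit
inside a 64-page block marks the whole block dirty, and adjacent dirty blocks
are merged into (first_page, page_count) ranges:
```rust
// Each u64 word covers a 64-page block: any set bit dirties the whole block.
fn dirty_ranges(bitmap: &[u64]) -> Vec<(u64, u64)> {
    let mut ranges: Vec<(u64, u64)> = Vec::new();
    for (i, word) in bitmap.iter().enumerate() {
        if *word == 0 {
            continue;
        }
        let start = i as u64 * 64;
        match ranges.last_mut() {
            // Extend the previous range if this block is contiguous with it.
            Some((last_start, count)) if *last_start + *count == start => *count += 64,
            _ => ranges.push((start, 64)),
        }
    }
    ranges
}

fn main() {
    // Blocks 0 and 1 dirty, block 2 clean, block 3 dirty.
    let bitmap = [0b1, u64::MAX, 0, 0b100];
    assert_eq!(dirty_ranges(&bitmap), vec![(0, 128), (192, 64)]);
}
```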
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
While the addressable space size reduction of 4k is necessary due to
the Linux bug, the 64k alignment of the addressable space size is
required by Windows. This patch satisfies both.
Signed-off-by: Anatol Belski <anbelski@linux.microsoft.com>
This is tested by:
Source VMM:
target/debug/cloud-hypervisor --kernel ~/src/linux/vmlinux \
--pmem file=~/workloads/focal.raw --cpus boot=1 \
--memory size=2048M \
--cmdline"root=/dev/pmem0p1 console=ttyS0" --serial tty --console off \
--api-socket=/tmp/api1 -v
Destination VMM:
target/debug/cloud-hypervisor --api-socket=/tmp/api2 -v
And the following commands:
target/debug/ch-remote --api-socket=/tmp/api1 pause
target/debug/ch-remote --api-socket=/tmp/api2 receive-migration unix:/tmp/foo &
target/debug/ch-remote --api-socket=/tmp/api1 send-migration unix:/tmp/foo
target/debug/ch-remote --api-socket=/tmp/api2 resume
The VM is then responsive on the destination VMM.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows the code to be reused when creating the VM from a snapshot
when doing VM migration.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add API entry points with stub implementation for sending and receiving
a VM from one VMM to another.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Due to a known limitation in the OpenAPITools/openapi-generator tool,
it's impossible to send Go zero values, like false and 0, to
cloud-hypervisor because `omitempty` is added if a field is not
required.
Set cache_size, dax, num_queues and queue_size as required to remove
`omitempty` from the json tag.
Fixes: #1961
Signed-off-by: Julio Montes <julio.montes@intel.com>
This also removes the need to look up the "exe" symlink for finding
the VMM executable path.
Fixes: #1925
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The logic to handle AArch64 system events was: SHUTDOWN and RESET were
both treated as RESET.
Now we handle them differently:
- RESET event will trigger Vmm::vm_reboot(),
- SHUTDOWN event will trigger Vmm::vm_shutdown().
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Now Vcpu::run() returns a boolean value to VcpuManager, indicating
whether the VM is going to reboot (false) or just continue (true).
Moving the handling of hypervisor VCPU run result from Vcpu to
VcpuManager gives us the flexibility to handle more scenarios like
shutting down on AArch64.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Rather than filling the guest memory from a file at the point when the
guest memory region is created, fill it from the file later. This
simplifies the region creation code and also adds flexibility for
sourcing the guest memory from a source other than an on-disk file.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
As a mirror of bdbea19e23, which ensured
that GuestMemoryMmap::read_exact_from() was used to read the whole file
into the region, ensure that all of the guest memory region is written
to disk.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This gives a nicer user experience and this error can now be used as the
source for other errors based off this.
See: #1910
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Any occurrence of a variable containing `ext_region` is replaced with
the less confusing name `saved_region`. The point is to clearly identify
the memory regions that might have been saved during a snapshot, while
the `ext` standing for `external` was pretty unclear.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In the context of saving the memory regions content through snapshot,
using the term "backing file" brings confusion with the actual backing
file that might back the memory mapping.
To avoid such conflicting naming, the 'backing_file' field from the
MemoryRegion structure gets replaced with 'content', as it designates
the potential file containing the memory region data.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Use GuestRegionMmap::read_exact_from() to ensure that all of the file is
read into the guest. This addresses an issue where
GuestRegionMmap::read_from() was only copying the first 2GiB of the
memory and so led to snapshot-restore failing when the guest RAM was
2GiB or greater.
This change also propagates any error from the copying upwards.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When restoring, if a region of RAM is backed by anonymous memory, i.e.
from memfd_create(), then copy the contents of the RAM from the file
that has been saved to disk.
Previously the code would map the memory from that file into the guest
using a MAP_PRIVATE mapping. This has the effect of minimising the
restore time but introduces an issue where the restored VM does not have
the same structure as the snapshotted VM; in particular, memory is
backed by files in the restored VM that were anonymously backed in the
original.
This creates two problems:
* The snapshot data is mapped from files for the pages of the guest
which prevents the storage from being reclaimed.
* When snapshotting again the guest memory will not be correctly saved:
it will look like it is backed by a file, so it will not be written to
disk, but as the mapping is MAP_PRIVATE the changes will never reach
that file either. This results in incorrect behaviour.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The standalone `--balloon` parameter being fully functional at this
point, we can get rid of the balloon options from the --memory
parameter.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that we have a new dedicated way of asking for a balloon through the
CLI and the REST API, we can move all the balloon code to the device
manager. This allows us to simplify the memory manager, which is already
quite complex.
It also simplifies the behavior of the balloon resizing command. Instead
of providing the expected size for the RAM, which is complex when memory
zones are involved, it now expects the balloon size. This is a much more
straightforward behavior as it really resizes the balloon to the desired
size. In addition to the simplification, the benefit of this approach is
that it does not need to be tied to the memory manager at all.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This introduces a new way of defining the virtio-balloon device. Instead
of going through the --memory parameter, the idea is to consider balloon
as a standalone virtio device.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The snapshot/restore feature is not working because some CPU states are
not properly saved, which means they can't be restored later on.
First, we ensure the CPUID is stored so that it can be properly
restored later. The code is simplified and pushed down to the hypervisor
crate.
Second, we identify for each vCPU whether the Hyper-V SynIC device is
emulated or not. If it is, that means some specific MSRs will be
set by the guest. These MSRs must be saved in order to properly restore
the VM.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The watchdog device is created through the "--watchdog" parameter. At
most a single watchdog can be created per VM.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Before virtio-mmio was removed, we passed an optional PCI space address
parameter to the AArch64 code for generating the FDT. The address was
None if the transport was MMIO.
Now that virtio-pci is the only option, the parameter is mandatory.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Virtio-mmio is removed, so virtio-pci is now the only option for the
virtio transport layer. We use MSI for PCI device interrupts, while
GICv2, the legacy interrupt controller, doesn't support MSI. GICv2 is
therefore not very practical for Cloud Hypervisor, so we can remove it.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
When shutting down a VM using VFIO, the following error has been
detected:
vfio-ioctls/src/vfio_device.rs:312 -- Could not delete VFIO group:
KvmSetDeviceAttr(Error(9))
After some investigation, it appears the KVM device file descriptor used
for removing a VFIO group was already closed. This is coming from the
Rust sequence of Drop, from the DeviceManager all the way down to
VfioDevice.
Because the DeviceManager owns passthrough_device, which is effectively
a KVM device file descriptor, when the DeviceManager is dropped, the
passthrough_device follows, with the effect of closing the KVM device
file descriptor. Problem is, VfioDevice has not been dropped yet and it
still needs a valid KVM device file descriptor.
That's why the simple way to fix this issue coming from Rust dropping
all resources is to make Linux accountable for it by duplicating the
file descriptor. This way, even when the passthrough_device is dropped,
the KVM file descriptor is closed, but a duplicated instance is still
valid and owned by the VfioContainer.
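A hedged illustration of the fix with std and libc only: duplicate a file
descriptor so that a copy stays valid even after the original owner drops
(and closes) it:
```rust
use std::fs::File;
use std::os::unix::io::{AsRawFd, FromRawFd};

fn main() -> std::io::Result<()> {
    let original = File::open("/dev/null")?; // stand-in for the KVM device fd
    // dup(2) gives the VfioContainer its own descriptor for the same file.
    let dup_fd = unsafe { libc::dup(original.as_raw_fd()) };
    assert!(dup_fd >= 0);
    let owned_copy = unsafe { File::from_raw_fd(dup_fd) };

    drop(original); // the DeviceManager drops the passthrough_device...
    // ...but the duplicated descriptor remains valid and usable.
    assert!(owned_copy.as_raw_fd() >= 0);
    Ok(())
}
```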
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We turn on that emulation for Windows. Windows does not have KVM's PV
clock, so calling notify_guest_clock_paused results in an error.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
If the user specified a maximum physical bits value through the
`max_phys_bits` option of the `--cpus` parameter, the guest CPUID
will be patched accordingly to ensure the guest sees the
right number of physical bits.
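A hedged sketch of the patching (not the actual cpu manager code): the number
of physical address bits lives in bits 7:0 of EAX in CPUID leaf 0x80000008:
```rust
// Helper name and structure are illustrative.
fn patch_phys_bits(leaf: u32, eax: &mut u32, max_phys_bits: u8) {
    if leaf == 0x8000_0008 {
        let host_phys_bits = (*eax & 0xff) as u8;
        // Never advertise more bits than the host actually supports.
        let phys_bits = max_phys_bits.min(host_phys_bits);
        *eax = (*eax & !0xff) | u32::from(phys_bits);
    }
}

fn main() {
    let mut eax = 0x3028; // a host reporting 40 physical bits (0x28)
    patch_phys_bits(0x8000_0008, &mut eax, 36);
    assert_eq!(eax & 0xff, 36);
}
```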
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
If the user provided a maximum physical bits value for the vCPUs, the
memory manager will adapt the guest physical address space accordingly
so that devices are not placed further than the specified value.
It's important to note that if the number exceeds what is available on
the host, the smaller number will be picked.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to let the user choose maximum address space size, this patch
introduces a new option `max_phys_bits` to the `--cpus` parameter.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The 'GuestAddress::unchecked_add' function has undefined behavior when
an overflow occurs. Its alternative 'checked_add' requires us to handle
the overflow explicitly.
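A small illustration of the difference using plain u64 arithmetic (the
GuestAddress API follows the same checked/unchecked split):
```rust
fn place_after(base: u64, size: u64) -> Option<u64> {
    // checked_add returns None on overflow, forcing the caller to handle
    // the error case instead of silently computing a wrapped address.
    base.checked_add(size)
}

fn main() {
    assert_eq!(place_after(0x1_0000, 0x1000), Some(0x1_1000));
    assert_eq!(place_after(u64::MAX, 1), None);
}
```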
Signed-off-by: Bo Chen <chen.bo@intel.com>
We are now reserving a 256M gap in the guest address space each time
when hotplugging memory with ACPI, which prevents users from hotplugging
memory to the maximum size they requested. We confirm that there is no
need to reserve this gap.
This patch removes the 'reserved gaps'. It also refactors
'MemoryManager::start_addr' so that it rounds up to a 128M alignment
when memory hotplug is allowed with ACPI.
Signed-off-by: Bo Chen <chen.bo@intel.com>
We currently try to create a RAM region of size 0 when the requested
memory size is the same as the current memory size. It results in an
error of
`GuestMemoryRegion(Mmap(Os { code: 22, kind: InvalidInput, message:
"Invalid argument" }))`. This error is not meaningful to users and we
should not report it.
Signed-off-by: Bo Chen <chen.bo@intel.com>
This is a new clippy check introduced in 1.47 which requires the use of
the matches!() macro for simple match blocks that return a boolean.
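A tiny before/after example of the change this clippy check (the
match_like_matches_macro lint, if memory serves) asks for:
```rust
enum DiskType {
    Raw,
    Qcow2,
}

// Before: a simple match block returning a boolean...
fn is_raw_before(t: &DiskType) -> bool {
    match t {
        DiskType::Raw => true,
        _ => false,
    }
}

// ...after: the matches!() macro that the lint suggests.
fn is_raw_after(t: &DiskType) -> bool {
    matches!(t, DiskType::Raw)
}

fn main() {
    assert!(is_raw_before(&DiskType::Raw) && is_raw_after(&DiskType::Raw));
    assert!(!is_raw_after(&DiskType::Qcow2));
}
```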
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The OneRegister literally means "one (arbitrary) register". Just call it
"Register" instead. There is no need to inherit KVM's naming scheme in
the hypervisor agnostic code.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
Small patch creating a dedicated `block_io_uring_is_supported()`
function for the non-io_uring case, so that we can simplify the
code in the DeviceManager.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Without the unlink(2) syscall being allowed, Cloud-Hypervisor crashes
when we remove a virtio-vsock device that has been previously added.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Because of the PCI refactoring that happened in the previous commit
d793cc4da3, the ability to fully remove a
PCI device was altered.
The refactoring was correct, but the usage of a generic function to pass
the same reference for both BusDevice, PciDevice and Any + Send + Sync
causes the Arc::ptr_eq() function to behave differently than expected,
as it does not match the references later in the code. That means we
were not able to remove the device reference from the MMIO and/or PIO
buses, which was leading to some bus range overlapping error once we
were trying to add a device again to the previous range that should have
been removed.
Fixes: #1802
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The run loop in the hypervisor crate needs a callback mechanism to
access resources like guest memory, MMIO, PIO, etc.
The VmmOps trait is introduced here, and is implemented by the vmm
module. While handling vCPU exits in the run loop, this trait allows the
hypervisor module to access the above-mentioned resources via callbacks.
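A hedged sketch of the callback interface idea; the method names are
illustrative, not necessarily the exact trait as merged:
```rust
// Method names are illustrative; error handling is reduced to io::Error.
use std::io;

pub trait VmmOps: Send + Sync {
    fn guest_mem_read(&self, gpa: u64, buf: &mut [u8]) -> io::Result<usize>;
    fn guest_mem_write(&self, gpa: u64, buf: &[u8]) -> io::Result<usize>;
    fn mmio_read(&self, gpa: u64, data: &mut [u8]) -> io::Result<()>;
    fn mmio_write(&self, gpa: u64, data: &[u8]) -> io::Result<()>;
    fn pio_read(&self, port: u64, data: &mut [u8]) -> io::Result<()>;
    fn pio_write(&self, port: u64, data: &[u8]) -> io::Result<()>;
}

// The vCPU run loop in the hypervisor crate would then dispatch exits
// through these callbacks, e.g. calling mmio_write() on an MMIO write exit.
```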
Signed-off-by: Praveen Paladugu <prapal@microsoft.com>
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
A new version of vm-memory was released upstream which resulted in some
components pulling in that new version. Update the version number used
to point to the latest version but continue to use our patched version
due to the fix for #1258
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The PMEM support has an option called "discard_writes" which when true
will prevent changes to the device from hitting the backing file. This
is trying to be the equivalent of "readonly" support of the block
device.
Previously the memory of the device was marked as KVM_READONLY. This
resulted in a trap when the guest attempted to write to it, resulting in
a VM exit (and recently a warning). This has a very detrimental effect on
the performance so instead make "discard_writes" truly CoW by mapping
the memory as `PROT_READ | PROT_WRITE` and using `MAP_PRIVATE` to
establish the CoW mapping.
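A hedged sketch of such a CoW mapping using raw libc mmap: PROT_READ |
PROT_WRITE with MAP_PRIVATE keeps guest writes in anonymous copy-on-write
pages and never writes them back to the backing file:
```rust
use std::fs::File;
use std::os::unix::io::AsRawFd;
use std::ptr;

fn main() -> std::io::Result<()> {
    // Stand-in for the pmem backing file.
    let file = File::open("/proc/self/exe")?;
    let len = 4096;
    let addr = unsafe {
        libc::mmap(
            ptr::null_mut(),
            len,
            libc::PROT_READ | libc::PROT_WRITE,
            // MAP_PRIVATE: guest writes land in copy-on-write pages and are
            // never written back to the file, i.e. "discard_writes".
            libc::MAP_PRIVATE,
            file.as_raw_fd(),
            0,
        )
    };
    assert_ne!(addr, libc::MAP_FAILED);
    unsafe { libc::munmap(addr, len) };
    Ok(())
}
```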
Fixes: #1795
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The virtio-balloon memory size change is asynchronous.
VirtioBalloonConfig.actual of the balloon device shows the current
balloon size.
This commit adds memory_actual_size to vm.info to show the actual memory
size.
Signed-off-by: Hui Zhu <teawater@antfin.com>
Write to the exit_evt EventFD which will trigger all the devices and
vCPUs to exit. This is slightly cleaner than just exiting the process as
any temporary files will be removed.
Fixes: #1242
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This patch adds the missing `iommu` and `id` options for
`VmAddDevice` in the openApi yaml to respect the internal data structure
in the code base. Also, setting the `id` explicitly for VFIO device
hotplug is required for VFIO device unplug through openAPI calls.
Signed-off-by: Bo Chen <chen.bo@intel.com>
According to openAPI specification [1], the format for `integer` types
can be only `int32` or `int64`; unsigned and 8-bit integers are not
supported.
This patch replaces `uint64` with `int64`, `uint32` with `int32` and
`uint8` with `int32`.
[1]: https://swagger.io/specification/#data-types
Signed-off-by: Julio Montes <julio.montes@intel.com>
MsiInterruptGroup doesn't need to know the internal field names of
InterruptRoute. Introduce two helper functions to eliminate references
to irq_fd. This is done similarly to the enable and disable helper
functions.
Also drop the pub keyword from InterruptRoute fields. It is not needed
anymore.
No functional change.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
According to openAPI specification[1], the format for `integer` types
can be only `int32` or `int64`, unsigned integers are not supported.
This patch replaces `uint64` with `int64`.
[1]: https://swagger.io/specification/#data-types
Signed-off-by: Julio Montes <julio.montes@intel.com>
There is no point in manually dropping the lock for gsi_msi_routes then
instantly grabbing it again in set_gsi_routes.
Make set_gsi_routes take a reference to the routing hashmap instead.
No functional change intended.
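A minimal illustration of the refactoring, with made-up types standing in for
the GSI routing table:
```rust
use std::collections::HashMap;
use std::sync::Mutex;

type GsiRoutes = HashMap<u32, String>; // stand-in for the real routing entries

// After the change: the callee takes a reference, so the caller keeps holding
// the lock instead of dropping the guard only for the callee to lock again.
fn set_gsi_routes(routes: &GsiRoutes) {
    println!("programming {} routes", routes.len());
}

fn main() {
    let gsi_msi_routes = Mutex::new(GsiRoutes::new());
    let mut routes = gsi_msi_routes.lock().unwrap();
    routes.insert(5, "some device MSI".to_string());
    set_gsi_routes(&routes);
}
```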
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The MTRR feature was missing from the CPUID, which is causing the guest
to ignore the MTRR settings exposed through dedicated MSRs.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Since Cloud Hypervisor currently supports a single PCI bus, we must
reflect this through the MCFG table, as it advertises the first and
last bus available. In this case both are bus 0.
This patch saves quite some time during guest kernel boot, as it
prevents the guest from checking each bus for available devices.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The state of the GIC should be part of the VM state. This commit
enables the AArch64 VM state save/restore by adding save/restore
of the GIC state.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
The definition of libc::SYS_ftruncate on AArch64 is different
from that on x86_64. This commit unifies the previously hard-coded
syscall number for AArch64.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
`KVM_GET_REG_LIST` ioctl is needed in save/restore AArch64 vCPU.
Therefore we whitelist this ioctl in seccomp.
Also this commit unifies the `SYS_FTRUNCATE` syscall for x86_64
and AArch64.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Similarly to the VM booting process, on AArch64 systems
the vCPUs should be created before the creation of the GIC. This
commit refactors the vCPU save/restore code to achieve the
above-mentioned restore order.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Calling `KVM_GET_ONE_REG` before `KVM_VCPU_INIT` results in an
error: Exec format error (os error 8). This commit therefore
decouples the vCPU init process from `configure_vcpus`, so that
in the process of restoring the vCPUs, they can be initialized
separately before being started.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
The construction of `GICR_TYPER` register will need vCPU states.
Therefore this commit adds methods to extract saved vCPU states
from the cpu manager.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Unlike x86_64, the "interrupt_controller" in the device manager
for AArch64 is only a `Gic` object that implements the
`InterruptController` trait to provide the interrupt delivery service.
This is not the real GIC device, so we do not need to save
its state. Also, we do not need to insert it into the device_tree.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
The value of GIC register `GICR_TYPER` is needed in restoring
the GIC states. This commit adds a field in the GIC device struct
and a method to construct its value.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
On AArch64 systems, the state of the GIC device can only be
retrieved through the `KVM_GET_DEVICE_ATTR` ioctl. Therefore, to implement
saving/restoring the GIC state, we need to make sure that the
GIC object (either the file descriptor or the device itself) can
be extracted after the VM is started.
This commit refactors the GIC creation code by adding a new
field `gic_device_entity` to the device manager and methods to set/get
this field. The GIC object can therefore be saved in the device
manager after calling `arch::configure_system`.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
This commit adds a function that saves the RDIST pending
tables to the guest RAM, as well as a unit test case for it.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
This commit adds the unit test cases for getting/setting the GIC
distributor, redistributor and ICC registers.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Adds 3 more unit test cases for AArch64:
* save_restore_core_regs
* save_restore_system_regs
* get_set_mpstate
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
This commit ports code from Firecracker and refactors the existing
AArch64 code as preparation for implementing save/restore of
AArch64 vCPUs, including:
1. Modification of the `arm64_core_reg` macro to retrieve the index of
an arm64 core register, and implementation of a helper to determine if
a register is a system register.
2. Moving some macros and helpers from the `arch` crate to the `hypervisor`
crate.
3. Adding related unit tests for the above functions and macros.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Misspellings were identified by https://github.com/marketplace/actions/check-spelling
* Initial corrections suggested by Google Sheets
* Additional corrections by Google Chrome auto-suggest
* Some manual corrections
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
The virtio-mem device uses 'VIRTIO_MEM_F_ACPI_PXM' to add memory to a NUMA
node, which MUST exist; otherwise the memory will be assigned to node id 0,
even if the user specified a different node id.
According to the ACPI spec's Memory Affinity Structure, system hardware
supports hot-adding a memory region using the 'Hot Pluggable | Enabled' flags.
Signed-off-by: Jiangbo Wu <jiangbo.wu@intel.com>
Use zone.host_numa_node to create the memory zone, so that the memory zone
can apply a memory policy in accordance with the host NUMA node ID.
Signed-off-by: Jiangbo Wu <jiangbo.wu@intel.com>
If after the creation of the self-spawned backend, the VMM cannot create
the corresponding vhost-user frontend, the VMM must kill the freshly
spawned process in order to ensure the error propagation can happen.
If the child process were still around, the VMM could not return
the error, as it waits on the child to terminate.
This should help us identify when self-spawned failures are caused by a
connection being refused between the VMM and the backend.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When the VMM is terminated by receiving a SIGTERM signal, the signal
handler thread must be able to invoke ioctl(TCGETS) and ioctl(TCSETS)
without error.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on all the preparatory work achieved through previous commits,
this patch updates the 'hotplugged_size' field for both MemoryConfig and
MemoryZoneConfig structures when either the whole memory is resized, or
simply when a memory zone is resized.
This fixes the lack of support for rebooting a VM with the right amount
of memory plugged in.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Adding a new field to VirtioMemZone structure, as it lets us associate
with a particular virtio-mem region the amount of memory that should be
plugged in at boot.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This patch simplifies the code as we have one single Option for the
VirtioMemZone. This also prepares for storing additional information
related to the virtio-mem region.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Add the new option 'hotplugged_size' to both --memory-zone and --memory
parameters so that we can let the user specify a certain amount of
memory being plugged at boot.
This is also part of making sure we can store the virtio-mem size over a
reboot of the VM.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit gives the possibility to create a virtio-mem device with
some memory already plugged into it. This is preliminary work to be
able to reboot a VM with the virtio-mem region being already resized.
Signed-off-by: Hui Zhu <teawater@antfin.com>
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that e820 tables are created from the 'boot_guest_memory', we can
simplify the memory manager code by adding the virtio-mem regions when
they are created. There's no need to wait for the first hotplug to
insert these regions.
This also anticipates the need for starting a VM with some memory
already plugged into the virtio-mem region.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to differentiate the 'boot' memory regions from the virtio-mem
regions, we store what we call 'boot_guest_memory'. This is useful to
provide the adequate list of regions to the configure_system() function
as it expects only the list of regions that should be exposed through
the e820 table.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The virtio-mem driver is generating some warnings regarding both size
and alignment of the virtio-mem region if not based on 128MiB:
The alignment of the physical start address can make some memory
unusable.
The alignment of the physical end address can make some memory
unusable.
For these reasons, the current patch enforces virtio-mem regions to be
128MiB aligned and checks the size provided by the user is a multiple of
128MiB.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
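A minimal sketch of such a validation, with the constant and error type chosen for illustration only:
    const VIRTIO_MEM_ALIGN: u64 = 128 << 20; // 128 MiB
    // Reject virtio-mem region sizes that are not multiples of 128 MiB.
    fn check_virtio_mem_size(size: u64) -> Result<(), String> {
        if size % VIRTIO_MEM_ALIGN != 0 {
            return Err(format!("virtio-mem size {} is not 128 MiB aligned", size));
        }
        Ok(())
    }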
Now that the virtio-mem device accepts a guest NUMA node as a parameter, we
retrieve this information from the list of NUMA nodes. Based on the
memory zone associated with the virtio-mem device, we obtain the NUMA
node identifier, which we provide to the virtio-mem device.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Implement support for associating a virtio-mem device with a specific
guest NUMA node, based on the ACPI proximity domain identifier.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
For more consistency, and to help read the code better, this commit
renames all 'virtiomem*' variables to 'virtio_mem*'.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Implement a new VM action called 'resize-zone' allowing the user to
resize one specific memory zone at a time. This relies on all the
preliminary work from the previous commits to resize each virtio-mem
device independently from the others.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By adding a new parameter 'id' to the virtiomem_resize() function, we
prepare this function to be usable for both global memory resizing and
memory zone resizing.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
It's important to return the region covered by virtio-mem the first time
it is inserted as the device manager must update all devices with this
information.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the previous code changes, we can now update the MemoryManager
code to create one virtio-mem region and resizing handler per memory
zone. This will naturally create one virtio-mem device per memory zone
from the DeviceManager's code which has been previously updated as well.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In anticipation for resizing support of an individual memory zone,
this commit introduces a new option 'hotplug_size' to '--memory-zone'
parameter. This defines the amount of memory that can be added through
each specific memory zone.
Because memory zone resize is tied to virtio-mem, make sure the user
selects 'virtio-mem' hotplug method, otherwise return an error.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Both MemoryManager and DeviceManager are updated through this commit to
handle the creation of multiple virtio-mem devices if needed. For now,
only the framework is in place, but the behavior remains the same, which
means only the memory zone created from '--memory' generates a
virtio-mem region that can be used for resize.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to anticipate the need for storing memory regions along with
virtio-mem information for each memory zone, we create a new structure
MemoryZone that will replace Vec<Arc<GuestRegionMmap>> in the hash map
MemoryZones.
This makes things more logical, as MemoryZones becomes a list of
MemoryZone entries sorted by their identifier.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Inject CPUID leaves for advertising KVM HyperV support when the
"kvm_hyperv" toggle is enabled. Currently we only enable a selection of
features required to boot.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Currently we don't need to do anything to service these exits but when
the synthetic interrupt controller is active an exit will be triggered
to notify the VMM of details of the synthetic interrupt page.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Some of the io_uring setup happens upon activation of the virtio-blk
device, which is initially triggered through an MMIO VM exit. That's why
the vCPU threads must authorize io_uring related syscalls.
This commit ensures the virtio-blk io_uring implementation can be used
along with the seccomp filters enabled.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Extract common code for adding devices to the PCI bus into its own
function from the VFIO and VIRTIO code paths.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This removes the dependency of the pci crate on the devices crate which
now only contains the device implementations themselves.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The goal of this commit is to rename the existing NUMA option 'id' with
'guest_numa_id'. This is done without any modification to the way this
option behaves.
The reason for the rename is the observation that all other
parameters with an option called 'id' expect a string to be provided.
Because in this particular case we expect a u32 representing a proximity
domain from the ACPI specification, it's better to name it with a more
explicit name.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The way to describe guest NUMA nodes has been updated through previous
commits, letting the user describe the full NUMA topology through the
--numa parameter (or NumaConfig).
That's why we can remove the deprecated and unused 'guest_numa_node'
option.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the previous changes introducing new options for both memory
zones and NUMA configuration, this patch changes the behavior of the
NUMA node definition. Instead of relying on the memory zones to define
the guest NUMA nodes, everything goes through the --numa parameter. This
allows for defining NUMA nodes without associating any particular memory
range to it. In case one wants to associate one or multiple memory
ranges with it, the expectation is to describe a list of memory zones
through the --numa parameter.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This new option provides a new way to describe the memory associated
with a NUMA node. This is the first step before we can remove the
'guest_numa_node' option from the --memory-zone parameter.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that we have an identifier per memory zone, and in order to keep
track of the memory regions associated with the memory zones, we create
and store a map referencing list of memory regions per memory zone ID.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In anticipation for allowing memory zones to be removed, but also in
anticipation for refactoring NUMA parameter, we introduce a mandatory
'id' option to the --memory-zone parameter.
This forces the user to provide a unique identifier for each memory zone
so that we can refer to these.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By introducing the SLIT (System Locality Distance Information Table), we
provide the guest with the distance between each node. This lets the
user describe the NUMA topology with a lot of details so that slower
memory backing the VM can be exposed as being further away from other
nodes.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the NumaConfig which now provides distance information, we can
internally update the list of NUMA nodes with the exact distances they
should be located from other nodes.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By introducing 'distances' option, we let the user describe a list of
destination NUMA nodes with their associated distances compared to the
current node (defined through 'id').
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the list of CPUs related to each NUMA node, Processor Local
x2APIC Affinity structures are created and included into the SRAT table.
This describes which CPUs are part of each node.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Relying on the list of CPUs defined through the NumaConfig, this patch
will update the internal list of CPUs attached to each NUMA node.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Through this new parameter, we give users the opportunity to specify a
set of CPUs attached to a NUMA node that has been previously created
from the --memory-zone parameter.
This parameter will be extended in the future to describe the distance
between multiple nodes.
For instance, if a user wants to attach CPUs 0, 1, 2 and 6 to a NUMA
node, here are two different ways of doing so:
Either
./cloud-hypervisor ... --numa id=0,cpus=0-2:6
Or
./cloud-hypervisor ... --numa id=0,cpus=0:1:2:6
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
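A hedged sketch of how such a list could be expanded into individual CPU indices; this is not the exact parser used by the VMM, just an illustration of the accepted syntax:
    // Expands "0-2:6" or "0:1:2:6" into [0, 1, 2, 6].
    fn parse_cpu_list(s: &str) -> Result<Vec<u8>, String> {
        let mut cpus = Vec::new();
        for item in s.split(':') {
            match item.split_once('-') {
                Some((first, last)) => {
                    let first: u8 = first.parse().map_err(|_| format!("invalid cpu '{}'", item))?;
                    let last: u8 = last.parse().map_err(|_| format!("invalid cpu '{}'", item))?;
                    cpus.extend(first..=last);
                }
                None => cpus.push(item.parse().map_err(|_| format!("invalid cpu '{}'", item))?),
            }
        }
        Ok(cpus)
    }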
The SRAT table (System Resource Affinity Table) is needed to describe
NUMA nodes and how memory ranges and CPUs are attached to them.
For now it simply attaches a list of Memory Affinity structures based on
the list of NUMA nodes created from the VMM.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
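For reference, the Memory Affinity structure follows the fixed 40-byte layout from the ACPI specification; the sketch below is illustrative and not the VMM's actual type definition:
    // SRAT Memory Affinity structure (type 1), per the ACPI specification.
    #[repr(packed)]
    #[derive(Default)]
    struct MemoryAffinity {
        type_: u8,             // 1 = Memory Affinity
        length: u8,            // 40 bytes
        proximity_domain: u32, // guest NUMA node this range belongs to
        _reserved1: u16,
        base_addr_lo: u32,
        base_addr_hi: u32,
        length_lo: u32,
        length_hi: u32,
        _reserved2: u32,
        flags: u32,            // bit 0: Enabled, bit 1: Hot Pluggable
        _reserved3: u64,
    }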
Based on the 'guest_numa_node' option, we create and store a list of
NUMA nodes in the MemoryManager. The point being to associate a list of
memory regions to each node, so that we can later create the ACPI tables
with the proper memory range information.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
With the introduction of this new option, the user will be able to
describe if a particular memory zone should belong to a specific NUMA
node from a guest perspective.
For instance, using '--memory-zone size=1G,guest_numa_node=2' would let
the user describe that a memory zone of 1G in the guest should be
exposed as being associated with the NUMA node 2.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Given that ACPI uses u32 as the type for the Proximity Domain, we can
use u32 instead of u64 as the type for 'host_numa_node' option.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
"struct MemoryConfig" has balloon_size but not in MemoryConfig
of cloud-hypervisor.yaml.
This commit adds it.
Signed-off-by: Hui Zhu <teawater@antfin.com>
Let's narrow down the limitation related to mbind() by allowing shared
mappings backed by a file backed by RAM. This leaves the restriction in
place only for mappings backed by a regular file.
With this patch, host NUMA node can be specified even if using
vhost-user devices.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Relying on the new option 'host_numa_node' from the 'memory-zone'
parameter, the user can now define which NUMA node from the host
should be used to back the current memory zone.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Since memory zones have been introduced, it is now possible for a user
to specify multiple backends for the guest RAM. By adding a new option
'host_numa_node' to the 'memory-zone' parameter, we allow the guest RAM
to be backed by memory that might come from a specific NUMA node on the
host.
The option expects a node identifier, specifying which NUMA node should
be used to allocate the memory associated with a specific memory zone.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The flag 'mergeable' should only apply to the entire guest RAM, which is
why it is removed from the MemoryZoneConfig as it is defined as a global
parameter at the MemoryConfig level.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The 'cmdline' parameter should not be required as it is not needed when
the 'kernel' parameter is the rust-hypervisor-fw, which means the kernel
and the associated command line will be found from the EFI partition.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Factorize the codepath between simple memory and multiple memory zones.
This simplifies the way regions are memory mapped, as everything relies
on the same codepath. This is performed by creating a memory zone on the
fly for the specific use case where --memory is used with size being
different from 0. Internally, the code can rely on memory zones to
create the memory regions forming the guest memory.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
After the introduction of user defined memory zones, we can now remove
the deprecated 'file' option from --memory parameter. This makes this
parameter simpler, letting more advanced users define their own custom
memory zones through the dedicated parameter.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
User defined memory regions can now support being snapshotted and restored,
therefore this commit removes the restrictions that were applied through
an earlier commit.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By factorizing a lot of code into create_ram_region(), this commit
achieves the simplification of the restore codepath. Additionally, it
makes user defined memory zones compatible with snapshot/restore.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
First, this patch introduces a new function to identify whether a file
descriptor is linked to any hard link on the system. This can let the
VMM know if the file can be accessed by the user, or if the file will
be destroyed as soon as the VMM releases the file descriptor.
Based on this information, and associated with the knowledge about the
region being MAP_SHARED or not, the VMM can now decide to skip the copy
of the memory region content. If the user has access to the file from
the filesystem, and if the file has been mapped as MAP_SHARED, we can
consider the guest memory region content to be present in this file at
any point in time. That's why in this specific case, there's no need for
performing the copy of the memory region content into a dedicated file.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
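One way to perform such a check is through the file's link count; the sketch below is an assumption about the approach, not necessarily the exact helper that was added:
    use std::fs::File;
    use std::os::unix::fs::MetadataExt;
    // A regular file whose link count is zero has been unlinked from every
    // directory, so it disappears as soon as the last fd to it is closed.
    fn has_hard_link(file: &File) -> std::io::Result<bool> {
        Ok(file.metadata()?.nlink() > 0)
    }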
Let's not assume that a backing file is going to be the result from
a snapshot for each memory region. These regions might be backed by
a file on the host filesystem (not a temporary file in host RAM), which
means they don't need to be copied and stored into dedicated files.
That's why this commit prepares for further changes by introducing an
optional PathBuf associated with the snapshot of each memory region.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
There will be some cases where the implementation of the snapshot()
function from the Snapshottable trait will require modifying some
internal data, therefore we make this possible by updating the trait
definition with snapshot(&mut self).
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In case the memory size is 0, this means the user defined memory
zones are used as a way to specify how to back the guest memory.
This is the first step in supporting complex use cases where the user
can define exactly which type of memory from the host should back the
memory from the guest.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In anticipation for the need to map part of a file with the function
create_ram_region(), it is extended to accept a file offset as argument.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In case the provided backing file is an actual file and not a directory,
we should not truncate it, as we expect the file to already be the right
size.
This change will be important once we try to map the same file through
multiple memory mappings. We can't let the file be truncated as the
second mapping wouldn't work properly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Introducing a new CLI option --memory-zone letting the user specify
custom memory zones. When this option is present, the --memory size
must be explicitly set to 0.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
It otherwise seems to be able to cause resource conflicts with
Windows ACPI_HAL. The OS might do a better job at assigning resources
to this device without them being requested explicitly. 0xcf8 and
0xcfc are the only ports that are certainly needed for PCI device enumeration.
Signed-off-by: Anatol Belski <anatol.belski@microsoft.com>
Some OSes might check for duplicates and bail out if they can't create a
distinct mapping. According to ACPI 5.0 section 6.1.12, while _UID is
optional, it becomes required when there are multiple devices with the
same _HID.
Signed-off-by: Anatol Belski <ab@php.net>
The brk syscall is not always called as the system might not need it.
But when it's needed from the API thread, this causes the thread to
terminate as it is not part of the authorized list of syscalls.
This should fix some sporadic failures on the CI with the musl build.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Add mprotect to the API thread rules. This prevents the VMM from being
killed when it is used.
Signed-off-by: Jose Carlos Venegas Munoz <jose.carlos.venegas.munoz@intel.com>
This patch adds the seccomp_filter module to the virtio-devices crate
by taking reference code from the vmm crate. This patch also adds an
allowed-list for the virtio-block worker thread.
Partially fixes: #925
Signed-off-by: Bo Chen <chen.bo@intel.com>
This patch propagates the SeccompAction value from main to the
Vm struct constructor (i.e. Vm::new_from_memory_manager), so that we can
use it to construct the DeviceManager and CpuManager struct for
controlling the behavior of the seccomp filters for vcpu/virtio-device
worker threads.
Signed-off-by: Bo Chen <chen.bo@intel.com>
This patch extends the CLI option '--seccomp' to accept the 'log'
parameter in addition to 'true/false'. It also refactors the
vmm::seccomp_filters module to support both "SeccompAction::Trap" and
"SeccompAction::Log".
Fixes: #1180
Signed-off-by: Bo Chen <chen.bo@intel.com>
This patch replaces the usage of 'SeccompLevel' with 'SeccompAction',
which is the first step to support the 'log' action over system
calls that are not on the allowed list of seccomp filters.
Signed-off-by: Bo Chen <chen.bo@intel.com>
By adding a new io_uring feature gate, we give the user the possibility
to choose whether to enable the io_uring improvements or not.
Since the io_uring feature depends on the availability of recent host
kernels, it's better to leave it off for now.
As soon as our CI will have support for a kernel 5.6 with all the
features needed from io_uring, we'll enable this feature gate
permanently.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In case the host supports io_uring and the specific io_uring options
needed, the VMM will choose the asynchronous version of virtio-blk.
This will enable better I/O performance compared to the default
synchronous version.
It is also important to note that the VMM won't be able to use the
asynchronous version if the backend image is in QCOW format.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Cloud Hypervisor allows either the serial or the virtio console to output to
the TTY, but TTY input is pushed to both.
This is not correct. When a Linux guest is configured to spawn TTYs on
both ttyS0 and hvc0, the user effectively issues the same commands twice
in different TTYs.
Fix this by directing input only to the one device that is using the host
side TTY.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This commit fixes an "Bad syscall" error when shutting down the VM
on AArch64 by adding the SYS_unlinkat syscall to the seccomp
whitelist.
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
Ensure that the width of the I/O port is correctly set to 32 bits in the
generic address used for the X_PM_TMR_BLK. Do this by type
parameterising the GenericAddress::io_port_address() function.
TEST=Boot with clocksource=acpi_pm and observe no errors in the dmesg.
Fixes: #1496
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This is a counter exposed via an I/O port that runs at 3.579545 MHz. Here
we use a hardcoded I/O port and expose the details through the FADT table.
TEST=Boot Linux kernel and see the following in dmesg:
[ 0.506198] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
We store the device passthrough handler, so we should use it through our
internal API and only carry the passed through device configuration.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Shrink GICDevice trait to contain hypervisor agnostic API's only, which
are used in generating FDT.
Move all KVM specific logic into KvmGICDevice trait.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Make set_gsi_routing take a list of IrqRoutingEntry. The construction of
hypervisor specific structure is left to set_gsi_routing.
Now set_gsi_routes, which is part of the interrupt module, is only
responsible for constructing a list of routing entries.
This further splits hypervisor specific code from hypervisor agnostic
code.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
That function is going to return a handle for passthrough related
operations.
Move create_kvm_device code there.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
SGX expects the EPC region to be reported as "reserved" from the e820
table. This patch adds a new entry to the table if SGX is enabled.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The support for SGX is exposed to the guest through CPUID 0x12. KVM
passes static subleaves 0 and 1 from the host to the guest, without
needing any modification from the VMM itself.
But SGX also relies on dynamic subleaves 2 through N, used for
describing each EPC section. This is not handled by KVM, which means
the VMM is in charge of setting each subleaf starting from index 2
up to index N, depending on the number of EPC sections.
These subleaves 2 through N are not listed as part of the supported
CPUID entries from KVM. But it's important to set them as long as index
0 and 1 are present and indicate that SGX is supported.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
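Conceptually, the VMM appends one CPUID 0x12 subleaf per EPC section starting at index 2. The sketch below uses kvm_bindings entries for illustration; the exact register bit layout follows the Intel SDM and should be treated as an assumption to verify:
    use kvm_bindings::kvm_cpuid_entry2;
    // epc_sections holds (base, size) pairs for each EPC section.
    fn add_epc_subleaves(cpuid: &mut Vec<kvm_cpuid_entry2>, epc_sections: &[(u64, u64)]) {
        for (i, (base, size)) in epc_sections.iter().copied().enumerate() {
            let mut entry = kvm_cpuid_entry2::default();
            entry.function = 0x12;
            entry.index = 2 + i as u32;
            // EAX[3:0] = 1 (EPC section), EAX[31:12]/EBX = section base.
            entry.eax = 0x1 | (base as u32 & 0xffff_f000);
            entry.ebx = (base >> 32) as u32;
            // ECX[3:0] = section properties, ECX[31:12]/EDX = section size.
            entry.ecx = 0x1 | (size as u32 & 0xffff_f000);
            entry.edx = (size >> 32) as u32;
            cpuid.push(entry);
        }
    }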
Instead of passing the GuestMemoryMmap directly to the CpuManager upon
its creation, it's better to pass a reference to the MemoryManager. This
way we will be able to know if an SGX EPC region, along with one or
multiple sections, is present.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The SGX EPC region must be exposed through the ACPI tables so that the
guest can detect its presence. The guest only gets the full range from
ACPI, as the specific EPC sections are directly described through the
CPUID of each vCPU.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the presence of one or multiple SGX EPC sections from the VM
configuration, the MemoryManager will allocate a contiguous block of
guest address space to hold the entire EPC region. Within this EPC
region, each EPC section is memory mapped.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Introducing the new CLI option --sgx-epc along with the OpenAPI
structure SgxEpcConfig, so that a user can now enable one or multiple
SGX Enclave Page Cache sections within a contiguous region from the
guest address space.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In this commit we save the BDF of a PCI device and set it as the "devid"
in the GSI routing entry, because this field is mandatory for GICv3-ITS.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Move the definition of RawFile from virtio-devices crate into qcow
crate. All the code that consumes RawFile also already depends on the
qcow crate for image file type detection so this change breaks the
need for the qcow crate to depend on the very large virtio-devices
crate.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
It gets bubbled all the way up from the hypervisor crate to the top-level
Cargo.toml.
Cloud Hypervisor can't function without KVM at this point, so make it
a default feature.
Fix all scripts that use --no-default-features.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This commit stores the balloon size in MemoryConfig.
After reboot, virtio-balloon can use this size to inflate back to
the size before reboot.
Signed-off-by: Hui Zhu <teawater@antfin.com>
Remove the vmm dependency from vhost_user_block and vhost_user_net where
it was existing to use config::OptionParser. By moving the OptionParser
to its own crate at the top-level we can remove the very heavy
dependency that these vhost-user backends had.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Refactored the construction of KVM IOCTL rules for Seccomp.
Separating the rules by architecture can reduce the risk of bugs and
attacks.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
In order to move the hypervisor specific parts of the VM exit handling
path, we're defining a generic, hypervisor agnostic VM exit enum.
This is what the hypervisor's Vcpu run() call should return when the VM
exit can not be completely handled through the hypervisor specific bits.
For KVM based hypervisors, this means directly forwarding the IO related
exits back to the VMM itself. For other hypervisors that e.g. rely on the
VMM to decode and emulate instructions, this means the decoding itself
would happen in the hypervisor crate exclusively, and the rest of the VM
exit handling would be handled through the VMM device model implementation.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
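A hedged sketch of what such a hypervisor-agnostic exit type could look like; the variant set is illustrative rather than the exact enum introduced here:
    // VM exit reasons handed back to the VMM when the hypervisor crate cannot
    // fully handle the exit itself (e.g. port or MMIO accesses).
    pub enum VmExit<'a> {
        IoIn(u16, &'a mut [u8]),
        IoOut(u16, &'a [u8]),
        MmioRead(u64, &'a mut [u8]),
        MmioWrite(u64, &'a [u8]),
        Reset,
        Shutdown,
        Ignore,
    }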
Fix test_vm unit test by using the new abstraction and dropping some
dead code.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The code is purely for maintaining an internal counter. It is not really
tied to KVM.
No functional change.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The _fd suffix is KVM specific. But since it now points to a hypervisor
agnostic hypervisor::Vm implementation, we should just rename it vm.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The _fd suffix is KVM specific. But since it now points to a hypervisor
agnostic hypervisor::Vm implementation, we should just rename it vm.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The fd naming is quite KVM specific. Since we're now using the
hypervisor crate abstractions, we can rename those into something more
readable and meaningful.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The fd naming is quite KVM specific. Since we're now using the
hypervisor crate abstractions, we can rename those into something more
readable and meaningful.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The fd naming is quite KVM specific. Since we're now using the
hypervisor crate abstractions, we can rename those into something more
readable and meaningful. Like e.g. vcpu or vm.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Split the generic virtio code (queues and device type) from the
VirtioDevice trait, transport and device implementations.
This also simplifies the feature handling in vhost_user_backend, as the
vm-virtio crate no longer has any features.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
On x86 architecture, we need to save a list of MSRs as part of the vCPU
state. By providing the full list of MSRs supported by KVM, this patch
fixes the remaining snapshot/restore issues, as the vCPU is restored
with all its previous states.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Some vCPU states such as MP_STATE can be modified while retrieving
other states. For this reason, it's important to follow a specific
order that will ensure a state won't be modified after it has been
saved. Comments about ordering requirements have been copied over
from Firecracker commit 57f4c7ca14a31c5536f188cacb669d2cad32b9ca.
This patch also sets the previously saved VCPU_EVENTS, as this was
missing from the restore codepath.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The logic can be shared among hypervisor implementations.
The 'static bound is used such that we don't need to deal with extra
lifetime parameter everywhere. It should be okay because we know the
entry type E doesn't contain any reference.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
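The shape being described is roughly a container generic over the entry type with a 'static bound; a small sketch under that assumption, with illustrative names:
    use std::collections::HashMap;
    use std::sync::{Arc, Mutex};
    // Generic over a hypervisor-specific routing entry type E. The 'static
    // bound only states that E owns its data (no borrowed references), so the
    // table can be stored and shared without extra lifetime parameters.
    pub struct RoutingTable<E: Send + 'static> {
        entries: Arc<Mutex<HashMap<u32, E>>>,
    }
    impl<E: Send + 'static> RoutingTable<E> {
        pub fn new() -> Self {
            RoutingTable { entries: Arc::new(Mutex::new(HashMap::new())) }
        }
    }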
This trait contains a function which produces an interrupt routing entry.
Implement that trait for KvmRoutingEntry and rewrite the update
function.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The observation is that only the route entry is hypervisor dependent.
Keep a definition of KvmMsiInterruptManager to avoid too much code
churn.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The observation is that only the route field is hypervisor specific.
Provide a new function in blanket implementation. Also redefine
KvmRoutingEntry with RoutingEntry to avoid code churn.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The observation is that the GSI hashmap remains untouched before getting
passed into the MSI interrupt manager. We can create that hashmap
directly in the interrupt manager's new function.
This drops one import from the interrupt module.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This is a hash table of string to hash tables of u64s. In JSON these
hash tables are object types.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Because we don't want the guest to miss any event triggered by the
emulation of devices, it is important to resume all vCPUs before we can
resume the DeviceManager with all its associated devices.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We need consistency between pause/resume and snapshot/restore
operations. The symmetrical behavior of pausing/snapshotting
and restoring/resuming has been introduced recently, and we must
now ensure that no matter if we're using pause/resume or
snapshot/restore features, the resulting VM should be running in
the exact same way.
That's why the vCPU state is now stored upon VM pausing. The snapshot
operation being a simple serialization of the previously saved state.
The same way, the vCPU state is now restored upon VM resuming. The
restore operation being a simple deserialization of the previously
restored state.
It's interesting to note that this patch ensures time consistency from a
guest perspective, no matter which clocksource is being used. From a
previous patch, the KVM clock was saved/restored upon VM pause/resume.
We now have the same behavior for TSC, as the TSC from the vCPUs are
saved/restored upon VM pause/resume too.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of calling the resume() function from the CpuManager, which
involves more than what is needed from the shutdown codepath, and
potentially ends up with a deadlock, we replace it with a subset.
The full resume operation is reserved for a VM that has been paused.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We want each Vcpu to store the vCPU state upon VM pausing. This is the
reason why we need to explicitly implement the Pausable trait for the
Vcpu structure.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When set_user_memory_region was moved to the hypervisor crate, it was turned
into a safe function that wrapped around an unsafe call. All but one
call site had the safety statements removed, but the safety statement was
not moved inside the wrapper function.
Add the safety statement back to help reasoning in the future. Also
remove the one last instance where the safety statement is not needed.
No functional change.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
That removes one more KVM-ism from the VMM crate.
Note that there is more KVM specific code in those files to be split
out, but we're not at that stage yet.
No functional change.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
Collate the virtio device counters in the DeviceManager for each device that
exposes any, and expose them through the recently added HTTP API.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The counters are a hash of device name to hash of counter name to u64
value. Currently the API is only implemented with a stub that returns an
empty set of counters.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The structure is tightly coupled with KVM. It uses KVM specific
structures and calls. Add Kvm prefix to it.
Microsoft hypervisor will implement its own interrupt group(s) later.
No functional change intended.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
Now that the VMM uses KVM_KVMCLOCK_CTRL from the KVM API, it must be
added to the seccomp filters list.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Through the newly added API notify_guest_clock_paused(), this patch
improves the vCPU pause operation by letting the guest know that each
vCPU is being paused. This is important to avoid the guest's soft lockup
detection, which could trigger because the VM has been paused for more
than 20 seconds.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
It's important that on restore path, the CpuManager's vCPU gets filled
with each new vCPU that is being created. In order to cover both boot
and restore paths, the list is being filled from the common function
create_vcpu().
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that the VMM uses both KVM_GET_CLOCK and KVM_SET_CLOCK from the KVM
API, they must be added to the seccomp filters list.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to maintain correct time when doing pause/resume and
snapshot/restore operations, this patch stores the clock value
on pause and restores it on resume. Because snapshot/restore
expects a VM to be paused before the snapshot and paused after
the restore, this covers the migration use case too.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When a request is made to increase the number of vCPUs in the VM attempt
to reuse any previously removed (and hence inactive) vCPUs before
creating new ones.
This ensures that the APIC ID is not reused for a different KVM vCPU
(which is not allowed) and that the APIC IDs are also sequential.
The two key changes to support this are:
* Clearing the "kill" bit on the old vCPU state so that it does not
immediately exit upon thread recreation.
* Using the length of the vcpus vector (the number of allocated vcpus)
rather than the number of active vCPUs (.present_vcpus()) to determine
how many should be created.
This change also introduced some new info!() debugging on the vCPU
creation/removal path to aid further development in the future.
TEST=Expanded test_cpu_hotplug test.
Fixes: #1338
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
After the vCPU has been ejected and the thread shutdown it is useful to
clear the "kill" flag so that if the vCPU is reused it does not
immediately exit upon thread recreation.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
These messages are intended to be useful to support debugging related to
vCPU hotplug/unplug issues.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The same way the VM and the vCPUs are restored in a paused state, all
devices associated with the device manager must be restored in the same
paused state.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Because we need to pause the VM before it is snapshotted, it should be
restored in a paused state to keep the sequence symmetrical. That's the
reason why the state machine regarding the valid VM's state transition
needed to be updated accordingly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
To follow a symmetrical model, and avoid potential race conditions, it's
important to restore a previously snapshotted VM in a "paused" state.
The snapshot operation is only valid if the VM has been previously
paused.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When the hypervisor crate was introduced, a few places that handled
errors were commented out in favor of unwrap, but that's bad practice.
Restore proper error handling in those places in this patch.
We cannot use from_raw_os_error anymore because it is wrapped deep under
hypervisor crate. Create new custom errors instead.
Fixes: e4dee57e81 ("arch, pci, vmm: Initial switch to the hypervisor crate")
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This commit fixes some warnings introduced in the previous
hypervisor crate PR. It removes some unused variables from the arch/aarch64
module.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
Start moving the vmm, arch and pci crates to being hypervisor agnostic
by using the hypervisor trait and abstractions. This is not a complete
switch and there are still some remaining KVM dependencies.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
There are two CPUID leaves for handling CPU topology, 0xb and 0x1f. The
difference between the two is that the 0x1f leaf (Extended Topology
Leaf) supports exposing multiple die packages.
Fixes: #1284
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The extended topology leaf (0x1f) also needs to have the APIC ID (which
is the KVM CPU ID) set. This mirrors the APIC ID set on the 0xb topology
leaf.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Rather than saving the individual parts into the CpuManager save the
full struct as it now also contains the topology data.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows the user to optionally specify the desired CPU topology. All
parts of the topology must be specified and the product of all parts
must match the maximum vCPUs.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Its test case calls remove unconditionally. Instead of making the test
code call remove conditionally, removing the pci_support dependency
simplifies things -- that function is just a wrapper around HashMap's
remove function anyway.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
Now that PCI device hotplug returns a response, the OpenAPI definition
must reflect it, describing what is expected to be received.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to provide a more comprehensive b/d/f to the user, the
serialization of PciDeviceInfo is implemented manually to control the
formatting.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This patch completes the series by connecting the dots between the HTTP
frontend and the device manager backend.
Any request to hotplug a VFIO, disk, fs, pmem, net, or vsock device will
now return a response including the device name and the place of the
device in the PCI topology.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Pass from the device manager to the calling code the information about
the PCI device that has just been hotplugged.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to provide the device name and PCI b/d/f associated with a
freshly hotplugged device, the hotplugging functions from the device
manager return a new structure called PciDeviceInfo.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
X86 and AArch64 work in different ways to shut down a VM:
X86 exits the VMM event loop through an ACPI device;
AArch64 needs to exit from the CPU loop on a SystemEvent.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Screened out the IO bus because it is not for AArch64.
Enabled Serial, RTC and Virtio devices with the MMIO transport option.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Implemented GSI allocator and system allocator for AArch64.
Renamed some layout definitions to align more code between architectures.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
In order to workaround a Linux bug that happens when we place devices at
the end of the physical address space on recent hardware (52 bits limit)
we reduce the MMIO address space by one 4k page. This way, nothing gets
allocated in the last 4k of the address space, which is negligible given
the amount of space in the address space.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The action of "vm.delete" should not report errors on non-booted
VMs. This patch also revised the "docs/api.md" to reflect the right
'Prerequisites' of different API actions, e.g. on "vm.delete" and
"vm.boot".
Fixes: #1110
Signed-off-by: Bo Chen <chen.bo@intel.com>
This removes the need to use CAP_NET_ADMIN privileges and instead the
host MAC address is either provided by the user or alternatively it is
retrieved from the kernel.
TEST=Run cloud-hypervisor without CAP_NET_ADMIN permission and a
preconfigured tap device:
sudo ip tuntap add name tap0 mode tap
sudo ifconfig tap0 192.168.249.1 netmask 255.255.255.0 up
cargo clean
cargo build
target/debug/cloud-hypervisor --serial tty --console off --kernel ~/src/rust-hypervisor-firmware/target/target/release/hypervisor-fw --disk path=~/workloads/clear-33190-kvm.img --net tap=tap0
VM was also rebooted to check that works correctly.
Fixes: #1274
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows an existing TAP interface to be used without needing
CAP_NET_ADMIN permissions on the Cloud Hypervisor binary as the ioctl to
bring up the interface is avoided.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
There is a much stronger PCI dependency from vfio_pci.rs than a VFIO one
from pci/src/vfio.rs. It seems more natural to have the PCI specific
VFIO implementation in the PCI crate rather than the other way around.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Now the flow of both architectures is aligned to:
1. load kernel
2. create vCPUs
3. configure system
4. start vCPUs
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Between X86 and AArch64, there is some difference in booting a VM:
- X86_64 can set up the IOAPIC before creating any vCPU.
- AArch64 has to create the vCPUs before creating the GIC.
The old process is:
1. load_kernel()
   load kernel binary
   configure system
2. activate_vcpus()
   create & start vCPUs
So we need to separate "activate_vcpus" into "create_vcpus" and
"activate_vcpus" (to start vCPUs only), and set up the GIC and create the
FDT between the 2 steps.
The new procedure is:
1. load_kernel()
   load kernel binary
   (X86_64) configure system
2. create vCPUs
3. (AArch64) set up the GIC
4. (AArch64) configure system
5. start vCPUs
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Instead of responding with only headers and an error code, we now return
complete error responses to HTTP requests with errors (e.g. undefined
endpoints and InternalServerError).
Fixes: #472
Signed-off-by: Bo Chen <chen.bo@intel.com>
When doing self spawning, the child will attempt to set the umask() again. Let
it through the seccomp rules so long as it is the safe mask again.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This commit only implements the InterruptController crate on AArch64.
The device specific part for GIC is to be added.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
IOAPIC, a X86 specific interrupt controller, is referenced by device
manager and CPU manager. To work with more architectures, a common
type for all architectures is needed.
This commit introduces trait InterruptController to provide architecture
agnostic functions. Device manager and CPU manager can use it without
caring what the underlying device is.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
This is a preparing commit to build and test CH on AArch64. All building
issues were fixed, but no functionality was introduced.
For X86, the logic of code was not changed at all.
For ARM, the architecture specific part is still empty. And we applied
some tricks to workaround lint warnings. But such code will be replaced
later by other commits with real functionality.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
This config option provided very little value and instead we now enable
this feature (which then lets the guest control the cache mode)
unconditionally.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Explicit call to 'close()' is required on file descriptors allocated
from 'epoll::create()', which is missing for the 'EpollContext' and
'VringWorker'. This patch enforces to close the file descriptors by
reusing the Drop trait of the 'File' struct.
Signed-off-by: Bo Chen <chen.bo@intel.com>
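A minimal sketch of the pattern, assuming a raw fd returned by epoll::create(); the field and type names are illustrative:
    use std::fs::File;
    use std::os::unix::io::{FromRawFd, RawFd};
    struct EpollContext {
        // Keeping the fd wrapped in a File means Drop closes it automatically,
        // so no explicit close() call is needed on the epoll descriptor.
        _epoll_file: File,
    }
    impl EpollContext {
        fn new(epoll_fd: RawFd) -> Self {
            // Safety: we take ownership of a valid fd returned by epoll::create().
            EpollContext { _epoll_file: unsafe { File::from_raw_fd(epoll_fd) } }
        }
    }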
Add a new "host_mac" parameter to "--net" and "--net-backend" and use
this to set the MAC address on the tap interface. If no address is given
one is randomly assigned and is stored in the config.
Support for vhost-user-net self spawning was also included.
Fixes: #1177
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The clock_nanosleep system call needs to be whitelisted since the commit
12e00c0f45 introduced the use of a sleep()
function. Without this patch, we can see an error when the VM is paused
or killed.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
As with the actions that don't take data, such as "pause" or "resume", use a
common handler implementation to remove duplicated code for handling
simple endpoints like the hotplug ones.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Many of the API requests take a similar form with a single data item
(i.e. a config for a device hotplug). Expand the VmAction enum to handle
those actions, with a single function to dispatch those API events.
For now port the existing helper functions to use this new API. In the
future the HTTP layer can create the VmAction directly avoiding the
extra layer of indirection.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Rather than saving a function pointer and using that, save the
underlying action instead. This is useful for two reasons:
1. We can ensure that we generate HttpErrors in the same way as the
other endpoints where API error variant should be determined by the
request being made not the underlying error.
2. It can be extended to handle other generic actions where the function
prototype differs slightly.
As a result of this refactoring it was found that the "vm.delete" endpoint
was not connected so address that issue.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Extend the EndpointHandler trait to include automatic support for
handling PUT requests. This will allow the removal of lots of duplicated
code in the following commit from the API handling code.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
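A hedged sketch of what such a default method could look like on the handler trait; the names and error type are assumptions, not the exact API:
    pub enum HttpError {
        BadRequest,
        NotImplemented,
    }
    pub trait EndpointHandler {
        // Endpoints that accept PUT override this; everything else inherits a
        // "not implemented" default and needs no boilerplate.
        fn handle_put(&self, _body: Option<&[u8]>) -> Result<(), HttpError> {
            Err(HttpError::NotImplemented)
        }
    }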
The ch branch has been rebased to incorporate the latest upstream code
requiring a small change to the unit tests.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
By passing a reference of the DeviceTree to the AddressManager, we can
now update the DeviceTree whenever a PCI BAR is reprogrammed. This is
mandatory to maintain the correct resources information related to each
virtio-pci device, which will ensure correct information will be stored
upon VM snapshot.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We want to be able to share the same DeviceTree across multiple threads,
particularly to handle the use case where PCI BAR reprogramming might
need to update the tree while a new device is being added to it from
another thread.
That's why this patch moves the DeviceTree instance into an Arc<Mutex<>>
so that we can later share a reference of the same mutable tree with the
AddressManager responsible for handling PCI BAR reprogramming.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By using the vector of resources provided by the DeviceNode, the device
manager can store the information related to PCI BARs from a virtio-pci
device. Based on this, and upon VM restoration, the device manager can
restore the BARs in the expected location in the guest address space.
One thing to note is that we only need to provide the VirtioPciDevice
with the configuration BAR (BAR 0) since the SharedMemory BAR info comes
from the virtio device directly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the new field "pci_bdf", a virtio-pci device can be restored at
the same place on the PCI bus it was located before the VM snapshot.
This ensures consistent placement on the PCI bus, based on the stored
information related to each device.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We need a way to store the information about where a PCI device was
placed on the PCI bus before the VM was snapshotted. The way to do this
is by adding an extra field to the DeviceNode structure.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Use the ACPI feature to control whether to build the mptable. This is
necessary as the mptable and ACPI RSDP table can easily overwrite each
other, leading to a boot failure.
TEST=Compile with default features and see that --cpus boot=48 now
works, try with --no-default-features --features "pci" and observe the
--cpus boot=48 also continues to work.
Fixes: #1132
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Switch to using the recently added OptionParser in the code that parses
the block backend.
Fixes: #1092
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Switch to using the recently added OptionParser in the code that parses
the network backend.
Whilst doing this also update the net-backend syntax to use "sock"
rather than socket.
Fixes: #1092
Partially fixes: #1091
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
To avoid a race condition where the signal might "miss" the KVM_RUN
ioctl(), instead repeatedly try sending a signal until the vCPU run is
interrupted (as indicated by a new per-vCPU atomic being set).
It is also important to clear this atomic when coming out of a paused
state.
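A minimal sketch of the pattern, with a hypothetical per-vCPU flag name:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Hypothetical per-vCPU flag: set by the vCPU loop (or its signal handler)
// once KVM_RUN has actually been interrupted, and cleared again when the
// vCPU comes out of a paused state.
struct Vcpu {
    vcpu_run_interrupted: Arc<AtomicBool>,
}

// Keep signalling until the vCPU acknowledges the interruption, so a signal
// delivered just before the KVM_RUN ioctl() is entered cannot be lost.
fn signal_vcpu(vcpu: &Vcpu, thread: libc::pthread_t, sig: libc::c_int) {
    while !vcpu.vcpu_run_interrupted.load(Ordering::SeqCst) {
        unsafe {
            libc::pthread_kill(thread, sig);
        }
        std::thread::yield_now();
    }
}
```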
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
To ensure that the DeviceManager threads (such as those used for virtio
devices) are cleaned up it is necessary to unpark them so that they get
cleanly terminated as part of the shutdown.
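A minimal sketch, assuming the DeviceManager keeps the JoinHandles of its worker threads:

```rust
use std::thread::JoinHandle;

// Sketch only: "threads" stands in for whatever JoinHandles the
// DeviceManager keeps for its virtio device worker threads.
fn shutdown_threads(threads: Vec<JoinHandle<()>>) {
    for handle in threads {
        // Wake the thread up if it is parked so it can observe the shutdown
        // request, then wait for it to exit.
        handle.thread().unpark();
        let _ = handle.join();
    }
}
```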
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
After setting the kill signal flag for the vCPU thread release the pause
flag and unpark the threads. This ensures that the vCPU thread will
wake up and check the kill signal flag if the VM is paused.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Rather than immediately entering the vCPU run() code check if the kill
signal is set. This allows paused VMs to be shutdown.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This module will be dedicated to DeviceNode and DeviceTree definitions
along with some dedicated unit tests.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This iterator will let the VMM enumerate the resources associated
with the DeviceManager, allowing for introspection.
Moreover, by implementing a double ended iterator, we can get the
hierarchy from the leaves to the root of the tree, which is very
helpful in the context of restoring the devices in the right order.
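One simple way to get such an iterator is to collect a breadth-first walk into a Vec, whose IntoIter is double ended; this sketch uses simplified node and tree types, not the actual VMM structures:

```rust
use std::collections::{HashMap, VecDeque};

// Simplified node and tree types for illustration.
struct DeviceNode {
    id: String,
    children: Vec<String>,
}

struct DeviceTree(HashMap<String, DeviceNode>);

impl DeviceTree {
    // Walk the tree breadth-first from the given roots and return the visited
    // ids. Vec's IntoIter is a DoubleEndedIterator, so calling .rev() on the
    // result yields the nodes from the leaves back up to the roots, the order
    // needed when restoring devices.
    fn breadth_first_ids(&self, roots: &[String]) -> std::vec::IntoIter<String> {
        let mut order = Vec::new();
        let mut queue: VecDeque<&String> = roots.iter().collect();
        while let Some(id) = queue.pop_front() {
            if let Some(node) = self.0.get(id) {
                order.push(node.id.clone());
                queue.extend(node.children.iter());
            }
        }
        order.into_iter()
    }
}
```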
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that the device tree fully replaced the need for a dedicated list of
migratable devices, this commit cleans up the codebase by removing it
from the DeviceManager.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit switches from migratable_devices to device_tree in order to
restore devices exclusively based on the device tree.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit adds an extra field to the DeviceNode so that the structure
can hold a Migratable device. The long-term plan is to remove the
dedicated table of migratable devices and instead rely only on the
device tree.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to hide the complexity of the device tree stored in
the DeviceManager, we introduce a new DeviceTree structure.
For now, this structure is a simple passthrough of a HashMap, but it can
be extended to handle some DeviceTree specific operations.
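A sketch of such a passthrough wrapper, with a simplified node type:

```rust
use std::collections::HashMap;

// Simplified node type for the sketch.
struct DeviceNode {
    id: String,
    children: Vec<String>,
}

// A thin wrapper around a HashMap keyed by device id: for now it simply
// forwards the few operations the DeviceManager needs, and tree-specific
// helpers (dependency walks, restore ordering, ...) can be added here later.
struct DeviceTree(HashMap<String, DeviceNode>);

impl DeviceTree {
    fn new() -> Self {
        DeviceTree(HashMap::new())
    }
    fn insert(&mut self, id: String, node: DeviceNode) -> Option<DeviceNode> {
        self.0.insert(id, node)
    }
    fn get(&self, id: &str) -> Option<&DeviceNode> {
        self.0.get(id)
    }
    fn remove(&mut self, id: &str) -> Option<DeviceNode> {
        self.0.remove(id)
    }
}
```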
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This device has a dedicated memory region in the guest address space,
which means in case of snapshot/restore, it must be restored in the
exact same location it was during the snapshot.
It is through the resources that we can describe the location of this
extra memory region, allowing the device to be restored correctly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This device has a dedicated memory region in the guest address space,
which means in case of snapshot/restore, it must be restored in the
exact same location it was during the snapshot.
It is through the resources that we can describe the location of this
extra memory region, allowing the device to be restored correctly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the device tree, retrieve the resources associated with a
virtio-mmio device to restore it at the right location in the guest address
space. Also, the IRQ number is correctly restored.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of splitting the MMIO allocation and the device creation into
separate functions for virtio-mmio devices, it is easier to move
everything into the same function as we'll be able to gather resources
in the same place for the same device.
These resources will be stored in the device tree in a follow up patch.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In case the VM is created from scratch, the devices should be created
after the DeviceManager has been created. But this should not affect the
restore codepath, as in this case the devices should be created as part
of the restore() function.
It's necessary to perform this differentiation as the restore must go
through the following steps:
- Create the DeviceManager
- Restore the DeviceManager with the right state
- Create the devices based on the restored DeviceManager's device tree
- Restore each device based on the restored DeviceManager's device tree
That's why this patch leverages the recent split of the DeviceManager's
creation to achieve what's needed.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit performs the split of the DeviceManager's creation into two
separate functions by moving anything related to device's creation after
the DeviceManager structure has been initialized.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the device tree, we now ensure the restore can be done in the
right order, as it will respect the dependencies between nodes.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The DeviceManager itself must be snapshotted in order to store the
information regarding the devices associated with it, which effectively
means we need to store the device tree.
The mechanics to snapshot and restore the DeviceManagerState are added
to the existing snapshot() and restore() implementations.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The DeviceManager now creates a tree of devices in order to store the
resources associated with each device, but also to track dependencies
between devices.
This is a key part for proper introspection, but also to support
snapshot and restore correctly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
It's not possible to call UnixListener::bind() on an existing file, so
unlink the created socket when shutting down the Vsock device.
This will allow the VM to be rebooted with a vsock device.
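A minimal sketch of the fix, with an illustrative VsockBackend type standing in for the real device:

```rust
use std::fs;
use std::io;
use std::os::unix::net::UnixListener;
use std::path::{Path, PathBuf};

// Sketch only: remember the socket path and unlink it on shutdown, so
// UnixListener::bind() succeeds again when the VM reboots.
struct VsockBackend {
    socket_path: PathBuf,
    listener: UnixListener,
}

impl VsockBackend {
    fn new<P: AsRef<Path>>(path: P) -> io::Result<Self> {
        Ok(VsockBackend {
            socket_path: path.as_ref().to_path_buf(),
            listener: UnixListener::bind(&path)?,
        })
    }

    fn shutdown(&self) {
        // Ignore the error if the file is already gone.
        let _ = fs::remove_file(&self.socket_path);
    }
}
```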
Fixes: #1083
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Generate an error during validation if an attempt is made to place a
device behind an IOMMU or using a VFIO device when not using PCI.
Fixes: #751
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Let's put an underscore "_" in front of each device name to identify
when it has been set internally.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The virtio-console was not added to the list of Migratable devices,
which is fixed by this patch.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
It is based on the name of the virtio device attached to this
transport layer.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
It is based on the name of the virtio device attached to this
transport layer.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Because we know we will need every virtio device to be identified with a
unique id, we can simplify the code by making the identifier mandatory.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>