In this way, we have all functions related to generating default values of
vm-config structs in the same location.
Signed-off-by: Bo Chen <chen.bo@intel.com>
These have been replaced by members of PayloadConfig and should be
removed in v28.0 (mentioned in the v26.0 release notes).
Fixes: #4737
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This is consistent when considering that some structs have a
`#[derive(Default)]` so it makes sense for the default implementations
to be in the same location.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Place the data structures that are required for constructing a VmConfig
into their own module, separate from the logic that exists to support them.
This is useful as a consumer of the API can now clearly see what data
structures make up the API for creating VMs.
This has no functional change; I made no attempt to clean up the
ordering (it is as in the original file) nor do any other clean up.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Bumps [clap](https://github.com/clap-rs/clap) from 3.2.22 to 4.0.9.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](clap-rs/clap@v3.2.22...v4.0.9)
---
updated-dependencies:
- dependency-name: clap
dependency-type: direct:production
update-type: version-update:semver-major
...
Moving to major version 4 introduced some breaking changes which had
to be handled manually.
Fixes: #4709
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This option is needed for the openapi consumer (e.g. Kata Containers) to
load firmware (e.g. td-shim) for booting.
Signed-off-by: Bo Chen <chen.bo@intel.com>
This simplifies the CI process and is also logical given the existing
functionality under "guest_debug" (dumping guest memory).
Fixes: #4679
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Adding support for the user to set the MTU for the vhost-user-net
backend, which allows the integration test to be extended to cover the
MTU parameter.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Adjust MTU logic such that:
1. Apply an MTU to the TAP interface if the user supplies it
2. Always query the TAP interface for the MTU and expose that.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This simplifies the build and checks with very little overhead. The
fwdebug device is an I/O port device on 0x402 that can be used by edk2
as a very simple character device.
See: #4679
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add tracing of the VM boot sequence from the point at which the request
to create a VM is received to the hand-off to the vCPU threads running.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add a new feature "tracing" that enables tracing functionality via the
"tracer" crate (sadly features and crates cannot share the same name).
Setup: tracer::start()
The main functionality is a tracer::trace_scope!() macro that will add
trace points for the duration of the scope. Tracing events are per
thread.
Finish: tracer::end() will write the trace file (pretty printed
JSON) to a file in the current directory.
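For illustration, usage would look roughly like the sketch below (a
minimal sketch based on the description above; the exact tracer
signatures and the scope name are assumptions):
"""
fn boot_vm() {
    tracer::start();
    {
        // Hypothetical scope name; trace points are recorded for the
        // duration of this scope, per thread.
        tracer::trace_scope!("vm_boot");
        // ... create the VM, devices, vCPUs ...
    }
    // Writes the pretty printed JSON trace to a file in the current directory.
    tracer::end();
}
"""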
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add a new "mtu" parameter to the NetConfig structure and therefore to
the --net option. This allows Cloud Hypervisor's users to define the
Maximum Transmission Unit (MTU) they want to use for the network
interface that they create.
In detail, there are two main aspects. On the one hand, the TAP
interface is created with the proper MTU if it is provided. And on the
other hand the guest is made aware of the MTU through the VIRTIO
configuration. That means the MTU is properly set on both the TAP on the
host and the network interface in the guest.
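As a minimal sketch of those two aspects (the types below are
illustrative stand-ins, not the actual NetConfig/Tap code):
"""
const VIRTIO_NET_F_MTU: u64 = 1 << 3; // VIRTIO feature bit advertising the mtu config field

struct Tap { mtu: u16 }             // stand-in for the host TAP interface
struct VirtioNetConfig { mtu: u16 } // stand-in for the virtio-net config space

fn apply_mtu(user_mtu: Option<u16>, tap: &mut Tap, cfg: &mut VirtioNetConfig, features: &mut u64) {
    if let Some(mtu) = user_mtu {
        tap.mtu = mtu;             // host side: set the MTU on the TAP interface
    }
    cfg.mtu = tap.mtu;             // guest side: expose the MTU via the VIRTIO config
    *features |= VIRTIO_NET_F_MTU; // tell the guest the mtu field is valid
}
"""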
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
There's no need to delegate the resize operation to the virtio-mem
thread. This can come directly from the vmm thread which will use the
Mem object to update the VIRTIO configuration and trigger the interrupt
for the guest to be notified.
In order to achieve what's described above, the VirtioMemZone structure
now has a handle onto the Mem object directly. This avoids the need for
intermediate Resize and ResizeSender structures.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Given the AMX x86 feature has been made available since kernel v5.17,
and given we don't have any test validating this feature, there's no
need to keep it behind a Rust feature gate.
Fixes: #3996
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Multiple rust-vmm crates must be updated at once given the vm-memory one
has been updated and they all rely on vm-memory.
- vm-memory from 0.8.0 to 0.9.0
- vhost from 0.4.0 to 0.5.0
- virtio-queue from 0.5.0 to 0.6.0
- vhost-user-backend from 0.6.0 to 0.7.0
- linux-loader from 0.4.0 to 0.5.0
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Removing the option --tdx to specify that we want to run a TD VM. Rely
on --platform option by adding the "tdx" boolean parameter. This is the
new way for enabling TDX with Cloud Hypervisor.
Along with this change, the way to retrieve the firmware path has been
updated to rely on the recently introduced PayloadConfig structure.
Fixes: #4556
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The PCI buses should not declare the address space related to the MMIO
config space given it's already declared in the MCFG table and through
the motherboard device PNP0C02 in the DSDT table.
The PCI MMIO config region for the segment was being wrongly exposed as
part of the _CRS for the ACPI bus device (using Memory32Fixed). Exposing
it via this object was ineffectual as the equivalent entry in the
PNP0C02 (_SB_.MBRD) marked those ranges as not usable via the kernel.
Either way, with both devices used by the kernel, the kernel will not
try and use those memory ranges for the device BARs. However under
td-shim on TDX the PNP0C02 device is not on the permitted list of
devices so the memory ranges were not marked as unusable resulting
in the kernel attempting to allocate BARs that collided with the PCI
MMIO configuration space.
This is based on the kernel documentation PCI/acpi-info.rst which relies
on ACPI and PCI Firmware specifications. And here are the interesting
quotes from this document:
"""
Prior to the addition of Extended Address Space descriptors, the failure
of Consumer/Producer meant there was no way to describe bridge registers
in the PNP0A03/PNP0A08 device itself. The workaround was to describe the
bridge registers (including ECAM space) in PNP0C02 catch-all devices.
With the exception of ECAM, the bridge register space is device-specific
anyway, so the generic PNP0A03/PNP0A08 driver (pci_root.c) has no need
to know about it.
PNP0C02 “motherboard” devices are basically a catch-all. There’s no
programming model for them other than “don’t use these resources for
anything else.” So a PNP0C02 _CRS should claim any address space that is
(1) not claimed by _CRS under any other device object in the ACPI
namespace and (2) should not be assigned by the OS to something else.
The address range reported in the MCFG table or by _CBA method (see
Section 4.1.3) must be reserved by declaring a motherboard resource. For
most systems, the motherboard resource would appear at the root of the
ACPI namespace (under _SB) in a node with a _HID of EISAID (PNP0C02),
and the resources in this case should not be claimed in the root PCI
bus’s _CRS. The resources can optionally be returned in Int15 E820 or
EFIGetMemoryMap as reserved memory but must always be reported through
ACPI as a motherboard resource.
"""
This change has been manually tested by running a VM with multiple
segments (4 segments), and by hotplugging an additional disk to the
segment number 2 (third segment).
From one shell:
"""
cloud-hypervisor \
--cpus boot=1 \
--memory size=1G \
--kernel vmlinux \
--cmdline "root=/dev/vda1 rw console=hvc0" \
--disk path=jammy-server-cloudimg.raw \
--api-socket /tmp/ch.sock \
--platform num_pci_segments=4
"""
From another shell (after the VM is booted):
"""
ch-remote \
--api-socket=/tmp/ch.sock \
add-disk \
path=test-disk.raw,id=disk2,pci_segment=2
"""
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Use VgicConfig to initialize Vgic.
Use Gic::create_default_config everywhere so we don't always recompute
redist/msi registers.
Add a helper create_test_vgic_config for tests in hypervisor crate.
Signed-off-by: Nuno Das Neves <nudasnev@microsoft.com>
AArch64 can share the same way of loading the payload as x86_64. This
makes the payload loading more consistent between different arches.
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
uefi_flash is used when loading firmware, meaning that loading the
payload depends on the device manager. Moving uefi_flash to the memory
manager can eliminate the dependency.
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
A new firmware item has been added into the payload config, so we need
to extend the ability to load standalone firmware on AArch64.
The "load_kernel" method will be the entry point of the image loading
work, including kernel and firmware.
This change is backward compatible. So, we can either load firmware
through "--kernel" like before or through "--firmware".
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
Later, we will load standalone firmware. So, refactor load_kernel by
abstracting a load_firmware method.
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
Given the virtio-console is now able to buffer its output when no PTY is
connected on the other end, the device manager code is updated to enable
this. Moving the endpoint type from FilePair to PtyPair enables the
proper codepath in the virtio-console implementation, as well as
updating the PTY resize code, and forcing the PTY to always be
non-blocking.
The non-blocking behavior is required to avoid blocking the guest that
would be waiting on the virtio-console driver. When receiving an
EWOULDBLOCK error, the output will simply be redirected to the temporary
buffer so that it can be later flushed.
The PTY resize logic has been slightly modified to ensure the PTY file
descriptors are closed. This avoids the child process keeping a hold on
the PTY device, which would have caused the PTY to believe something was
connected on the other end, preventing the detection of any new
connection on the PTY.
Fixes: #4521
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We want to be able to reuse the SerialBuffer from the virtio-devices
crate, particularly from the virtio-console implementation. That's why
we move the SerialBuffer definition to its own crate so that it can be
accessed from both vmm and virtio-devices crates, without creating any
cyclic dependency.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
If the epoll_wait() call returns EINTR, this only means a signal has
been delivered before any of the file descriptors registered triggered
an event or before the end of the timeout (if timeout isn't -1). For
that reason, we should simply try to listen on the epoll loop again.
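A minimal illustration of that retry pattern (not the actual epoll
handler from the vmm crate):
"""
use std::io;

// Retry the wait whenever it is interrupted by a signal (EINTR); any other
// result (events or a real error) is returned to the caller.
fn wait_retrying<F>(mut wait: F) -> io::Result<usize>
where
    F: FnMut() -> io::Result<usize>,
{
    loop {
        match wait() {
            Err(e) if e.kind() == io::ErrorKind::Interrupted => continue,
            other => return other,
        }
    }
}
"""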
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We must limit how much the buffer can grow, otherwise this could lead
the process to consume all the memory on the machine. This could happen
if the output from the guest was very large and nothing would
connect to the PTY for a long time.
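A minimal sketch of such a cap (the limit and eviction policy used by
the actual SerialBuffer may differ):
"""
use std::collections::VecDeque;

// Hypothetical limit; the real value is an implementation detail.
const MAX_BUFFERED_BYTES: usize = 1 << 20;

fn buffer_output(buf: &mut VecDeque<u8>, data: &[u8]) {
    buf.extend(data.iter().copied());
    // Drop the oldest bytes once the limit is exceeded so the buffer cannot
    // grow without bound while nothing is connected to the PTY.
    while buf.len() > MAX_BUFFERED_BYTES {
        buf.pop_front();
    }
}
"""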
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Set the maximum number of HW breakpoints according to the value returned
from `Hypervisor::get_guest_debug_hw_bps()`.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
On AArch64, the `translate_gva` API is not provided by KVM. We implemented
it in the VMM by walking through the translation tables.
Address translation is a big topic; here we only focus on the scenario
that happens in the VMM while debugging the kernel. This `translate_gva`
implementation is restricted to:
- Exception Level 1
- Translate high address range only (kernel space)
This implementation supports following Arm-v8a features related to
address translation:
- FEAT_LPA
- FEAT_LVA
- FEAT_LPA2
The implementation supports page sizes of 4KiB, 16KiB and 64KiB.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
The goal of this patch is to provide a reliable way to detect when the
other end of the PTY is connected, and therefore be able to identify
when we can write to the PTY device. This is needed because writing to
the PTY device when the other end isn't connected causes the loss of
the written bytes.
The way to detect the connection on the other end of the PTY is by
knowing the other end is disconnected at first with the presence of the
EPOLLHUP event. Later on, when the connection happens, EPOLLHUP is not
triggered anymore, and that's when we can assume it's okay to write to
the PTY main device.
It's important to note we had to ensure the file descriptor for the
other end was closed, otherwise we would have never seen the EPOLLHUP
event. And we did so by removing the "sub" field from the PtyPair
structure as it was keeping the associated File opened.
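The detection itself boils down to something like the check below
(illustrative only; it assumes the sub end of the PTY has been closed on
our side as described above):
"""
// While nothing is connected to the other end of the PTY, epoll keeps
// reporting EPOLLHUP on the main fd; once the event disappears it is safe
// to write to the PTY main device.
fn pty_peer_connected(epoll_events: u32) -> bool {
    epoll_events & (libc::EPOLLHUP as u32) == 0
}
"""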
Fixes: #3170
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Since our firmware files are still designed to be used via PVH, use the
load_kernel() function to load the firmware, falling back to legacy
firmware loading if necessary.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Adding new I/O ports for both the ACPI shutdown and the ACPI PM timer
devices so they can be triggered from both addresses. The reason for
this change is that TDX expects only certain I/O ports to be enabled
based on what QEMU exposes. We follow this to avoid new ports from being
opened exclusively for Cloud Hypervisor.
We have to keep the former I/O ports available given not all firmwares
have been updated yet. Once we reach a point where we know Rust
Hypervisor Firmware, OVMF, TDVF and TDSHIM have all been updated with the
new port values, we'll be able to remove the former ports.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The old API remains usable, and will remain usable for two releases but
we should only advertise the new API.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Introduce a new top level member of VmConfig called PayloadConfig that
(currently) encompasses the kernel, commandline and initramfs for the
guest to use.
In future this can be extended for firmware use. The existing
"--kernel", "--cmdline" and "--initramfs" CLI parameters now fill the
PayloadConfig.
Any config supplied which uses the now deprecated config members has
those members mapped to the new version with a warning.
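A sketch of that mapping (field names are assumptions based on the
description above, not the actual VmConfig definition):
"""
#[derive(Default)]
struct PayloadConfig {
    kernel: Option<String>,
    cmdline: Option<String>,
    initramfs: Option<String>,
}

struct VmConfig {
    payload: Option<PayloadConfig>,
    kernel: Option<String>,    // deprecated
    cmdline: Option<String>,   // deprecated
    initramfs: Option<String>, // deprecated
}

fn migrate_deprecated_members(cfg: &mut VmConfig) {
    if cfg.payload.is_none()
        && (cfg.kernel.is_some() || cfg.cmdline.is_some() || cfg.initramfs.is_some())
    {
        eprintln!("Using deprecated kernel/cmdline/initramfs members: please move to the payload config");
        cfg.payload = Some(PayloadConfig {
            kernel: cfg.kernel.take(),
            cmdline: cfg.cmdline.take(),
            initramfs: cfg.initramfs.take(),
        });
    }
}
"""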
See: #4445
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
By checking in the validation logic we get checking for both devices
specified in the initial config and hotplugged devices too.
Fixes: #4453
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The uuid indicates the unique ID of a virtual machine.
cloud-hypervisor takes the uuid passed by libvirt
and uses it to initialize cloud-init.
Signed-off-by: lizhaoxin1 <Lxiaoyouling@163.com>
The parameter "poll_queue" was useful at the time Cloud Hypervisor was
responsible for spawning vhost-user backends, as it was carrying the
information the vhost-user-block backend should have this option enabled
or not.
It's been quite some time that we walked away from this design, as we
now expect a management layer to be responsible for running vhost-user
backends.
That's the reason why we can remove "poll_queue" from the DiskConfig
structure.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The new virtio-queue version introduced some breaking changes which need
to be addressed so that Cloud Hypervisor can still work with this
version.
The most important change is about removing a handle to the guest memory
from the Queue, meaning the caller has to provide the guest memory
handle for multiple methods from the QueueT trait.
One interesting aspect is that QueueT has been widely extended to
provide every getter and setter we need to access and update the Queue
structure without having direct access to its internal fields.
This patch ports all the virtio and vhost-user devices to this new crate
definition. It also updates both vhost-user-block and vhost-user-net
backends based on the updated vhost-user-backend crate. It also updates
the fuzz directory.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When starting the VM such that it is already on a breakpoint (via
stop_on_boot) when attached to gdb, start the vCPUs in a paused
state rather than starting the vCPUs later (upon resume).
Further, make the resumption/break of the VM more resilient by only
attempting to resume the vCPUs if we are already in a breakpoint and
only attempting to pause/break if we were already running.
Fixes: #4354
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Remove the hardcoded addresses.
Also remove PM_TMR_BLK as a spec compliant implementation will use
X_PM_TMR_BLK over this field.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The original code uses kvm_device_attr directly outside of the
hypervisor crate. That leaks hypervisor details.
No functional change intended.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This requires making get/set_lapic_reg part of the type.
For the moment we cannot provide a default variant for the new type,
because picking one will be wrong for the other hypervisor, so I just
drop the test cases that require LapicState::default().
Signed-off-by: Wei Liu <liuwe@microsoft.com>
CpuId is an alias type for the flexible array structure type over
CpuIdEntry. The type itself and the type of the element in the array
portion are tied to the underlying hypervisor.
Switch to using CpuIdEntry slice or vector directly. The construction of
CpuId type is left to hypervisors.
This allows us to decouple CpuIdEntry from hypervisors more easily.
No functional change intended.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
We only need to do this for x86 since MSHV does not have aarch64 support
yet. This reduces unnecessary code churn.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
VmState was introduced to hold hypervisor specific VM state. KVM does
not need it and MSHV does not really use it yet.
Just drop the code. It can be easily revived once there is a need.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
Previously, we were assuming that every time an eventfd notified us,
there was only a single event waiting for us. This meant that if,
while one API request was being processed, two more arrived, the
second one would not be processed (until the next one arrived, when it
would be processed instead of that event, and so on). To fix this,
make sure we're processing the number of API and debug requests we've
been told have arrived, rather than just one. This is easy to
demonstrate by sending lots of API events and adding some sleeps to
make sure multiple events can arrive while each is being processed.
For other uses of eventfd, like the exit event, this doesn't matter —
even if we've received multiple exit events in quick succession, we
only need to exit once. So I've only made this change where receiving
an event is non-idempotent, i.e. where it matters that we process the
event the right number of times.
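Conceptually, the fix amounts to draining the counter rather than
assuming one pending event, along the lines of the sketch below (the
EventFd type is from the vmm-sys-util crate already used by the project;
the request handling closure is a placeholder):
"""
use vmm_sys_util::eventfd::EventFd;

fn drain_requests(event: &EventFd, mut handle_one: impl FnMut()) -> std::io::Result<()> {
    // read() returns and clears the eventfd counter, i.e. how many times
    // the fd has been signalled since we last looked.
    let pending = event.read()?;
    for _ in 0..pending {
        handle_one();
    }
    Ok(())
}
"""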
Technically, reset requests are also non-idempotent — there's an
observable difference between a VM resetting once, and a VM resetting
once and then immediately resetting again. But I've left that alone
for now because two resets in immediate succession doesn't sound like
something anyone would ever want to me.
Signed-off-by: Alyssa Ross <hi@alyssa.is>
Function `system_registers` took a mutable vector reference and modified
the vector content. Now change the definition to a `get/set` style,
and rename to `get/set_sys_regs` to align with other functions.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
On AArch64, the functions `core_registers` and `set_core_registers` are
the equivalent of `get/set_regs` on x86_64. Now the names are aligned.
This will benefit supporting `gdb`.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
The VM specific signal (currently only SIGWINCH) should only be handled
when the VM is running.
The generic VMM signals (SIGINT and SIGTERM) need handling at all times.
Split the signal handling into two separate threads which have differing
lifetimes.
Tested by:
1.) Boot full VM and check resize handling (SIGWINCH) works & sending
SIGTERM leads to cleanup (tested that API socket is removed.)
2.) Start without a VM and send SIGTERM/SIGINT and observe cleanup (API
socket removed)
3.) Boot full VM, delete VM and observe 2.) holds.
4.) Boot full VM, delete VM, recreate VM and observe 1.) holds.
Fixes: #4269
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
And along with virtio-queue, we must also bump vhost-user-backend from
0.3.0 to 0.5.0 (since it relies on virtio-queue 0.4.0).
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The Linux kernel now checks for this before marking CPUs as
hotpluggable:
commit aa06e20f1be628186f0c2dcec09ea0009eb69778
Author: Mario Limonciello <mario.limonciello@amd.com>
Date: Wed Sep 8 16:41:46 2021 -0500
x86/ACPI: Don't add CPUs that are not online capable
A number of systems are showing "hotplug capable" CPUs when they
are not really hotpluggable. This is because the MADT has extra
CPU entries to support different CPUs that may be inserted into
the socket with different numbers of cores.
Starting with ACPI 6.3 the spec has an Online Capable bit in the
MADT used to determine whether or not a CPU is hotplug capable
when the enabled bit is not set.
Link: https://uefi.org/htmlspecs/ACPI_Spec_6_4_html/05_ACPI_Software_Programming_Model/ACPI_Software_Programming_Model.html?#local-apic-flags
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This check is new in the beta version of clippy and exists to avoid
potential deadlocks by highlighting when the test in an if or for loop
is something that holds a lock. In many cases we would need to make
significant refactorings to be able to pass this check so disable it in
the affected crates.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
warning: you are deriving `PartialEq` and can implement `Eq`
--> vmm/src/serial_manager.rs:59:30
|
59 | #[derive(Debug, Clone, Copy, PartialEq)]
| ^^^^^^^^^ help: consider deriving `Eq` as well: `PartialEq, Eq`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#derive_partial_eq_without_eq
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Tested:
1. SIGTERM based
2. VM shutdown/poweroff
3. Injected VM boot failure after calling Vm::setup_tty()
Fixes: #4248
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The snapshots are stored in a BTree which is ordered; however, as the ids
are strings, lexical ordering places "11" ahead of "2". So encode the
vCPU id with zero padding so it is lexically sorted.
This fixes issues with CPU restore on aarch64.
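A sketch of the idea (the exact id format used by the snapshot code may
differ):
"""
// Pad the numeric vCPU id so that string ordering matches numeric ordering:
// "002" sorts before "011", whereas "11" sorts before "2" lexically.
fn vcpu_snapshot_id(cpu_id: u8) -> String {
    format!("{:03}", cpu_id)
}
"""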
See: #4239
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When restoring a VM, the restore codepath will take care of mapping the
MMIO regions based on the information from the snapshot, rather than
having the mapping being performed during device creation.
When the device is created, information such as which BARs contain the
MSI-X tables is missing, preventing the mapping of the MMIO regions from
being performed.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on recent KVM host patches (merged in Linux 5.16), it's forbidden
to call into KVM_SET_CPUID2 after the first successful KVM_RUN returned.
That means saving CPU states during the pause sequence, and restoring
these states during the resume sequence will not work with the current
design starting with kernel version 5.16.
In order to solve this problem, let's simply move the save/restore logic
to the snapshot/restore sequences rather than the pause/resume ones.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The vCPU is created and set after all the devices on a VM's boot.
There's no reason to follow a different order on the restore codepath as
this could cause some unexpected behaviors.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Combined the `GicDevice` struct in `arch` crate and the `Gic` struct in
`devices` crate.
After moving the KVM specific code for GIC in `arch`, a very thin wrapper
layer `GicDevice` was left in `arch` crate. It is easy to combine it
with the `Gic` in `devices` crate.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
In order to ensure that the virtio device thread is spawned from the vmm
thread we use an asynchronous activation mechanism for the virtio
devices. This change optimises that code so that we do not need to
iterate through all virtio devices on the platform in order to find the
one that requires activation. We solve this by creating a separate short
lived VirtioPciDeviceActivator that holds the required state for the
activation (e.g. the clones of the queues); this can then be stored onto
the device manager ready for asynchronous activation.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Based on the newly added guest_debug feature, this patch adds http
endpoint support.
Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
Co-authored-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The crash tool uses a special note segment named 'QEMU' to
analyze kaslr info and so on. If we don't add the 'QEMU' note
segment, the crash tool can't find the Linux version to move on.
For now, the most convenient way is to add the 'QEMU' note segment to
make the crash tool happy.
Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
Guest memory is needed for analysis in the crash tool, so save it
as part of the coredump.
Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
Co-authored-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
It's useful to dump the guest memory (known as a coredump) so that the
crash tool can be used to analyze it when the guest hangs.
Let's first add the GuestDebuggable trait and Coredumpxxx errors to
support coredump.
Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
Co-authored-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The error message incorrectly said that the user was trying to combine
cache_size without dax whereas it is only usable with dax.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Remove the code from the DeviceManager that prepares the DAX cache since
the functionality has now been removed.
Fixes: #3889
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
`GicDevice` trait was defined for the common part of GicV3 and ITS.
Now that the standalone GicV3 does not exist, `GicDevice` is not needed.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
This reverts commit f160572f9d.
There has been increased flakiness around the live migration tests since
this was merged. Speculatively reverting to see if there is increased
stability.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In order to ensure that the virtio device thread is spawned from the vmm
thread we use an asynchronous activation mechanism for the virtio
devices. This change optimises that code so that we do not need to
iterate through all virtio devices on the platform in order to find the
one that requires activation. We solve this by creating a separate short
lived VirtioPciDeviceActivator that holds the required state for the
activation (e.g. the clones of the queues); this can then be stored onto
the device manager ready for asynchronous activation.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Latest cargo beta version raises warnings about unused macro rules.
Simply remove them to fix the beta build.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
There is no need to include serde_derive separately,
as it can be specified as a serde feature instead.
Signed-off-by: Maksym Pavlenko <pavlenko.maksym@gmail.com>
Explicitly re-export types from the hypervisor specific modules. This
makes it much clearer what the common functionality that is exposed is.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
And thus only export what is necessary through a `pub use`. This is
consistent with some of the other modules and makes it easier to
understand what the external interface of the hypervisor crate is.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
By taking advantage of the fact that IrqRoutingEntry is exported by the
hypervisor crate (that is typedef'ed to the hypervisor specific version)
then the code for handling the MsiInterruptManager can be simplified.
This is particularly useful if in the future it is not a typedef but
rather a wrapper type.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This removes the requirement to leak as many data structures from the
hypervisor crate into the vmm crate.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The trait and functionality is about operations on the VM rather than
the VMM so should be named appropriately. This clashed with the
existing struct for the concrete implementation, which was renamed
appropriately.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Whenever going through the codepath of loading a RAW firmware, we always
add an extra RAM region to the guest memory through the memory manager.
But we must be careful to use the updated guest memory rather than a
previous reference that wasn't containing the new region, as this can
lead to the following error:
VmBoot(FirmwareLoad(InvalidGuestAddress(GuestAddress(4290772992))))
This is fixed by the current patch, getting the latest reference onto
the guest memory from the memory manager right after the new region has
been added.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This is required when hot-removing a vfio-user device. Detailed code path
below:
Thread 6 "vcpu0" received signal SIGSYS, Bad system call.
[Switching to Thread 0x7f8196889700 (LWP 2358305)]
0x00007f8196dae7ab in shutdown () at ../sysdeps/unix/syscall-template.S:78
78 T_PSEUDO (SYSCALL_SYMBOL, SYSCALL_NAME, SYSCALL_NARGS)
(gdb) bt
0x00007f8196dae7ab in shutdown () at ../sysdeps/unix/syscall-template.S:78
0x000056189240737d in std::sys::unix::net::Socket::shutdown ()
at library/std/src/sys/unix/net.rs:383
std::os::unix::net::stream::UnixStream::shutdown () at library/std/src/os/unix/net/stream.rs:479
0x000056189210e23d in vfio_user::Client::shutdown (self=0x7f8190014300)
at vfio_user/src/lib.rs:787
0x00005618920b9d02 in <pci::vfio_user::VfioUserPciDevice as core::ops::drop::Drop>::drop (
self=0x7f819002d7c0) at pci/src/vfio_user.rs:551
0x00005618920b8787 in core::ptr::drop_in_place<pci::vfio_user::VfioUserPciDevice> ()
at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/core/src/ptr/mod.rs:188
0x00005618920b92e3 in core::ptr::drop_in_place<core::cell::UnsafeCell<dyn pci::device::PciDevice>>
() at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/core/src/ptr/mod.rs:188
0x00005618920b9362 in core::ptr::drop_in_place<std::sync::mutex::Mutex<dyn pci::device::PciDevice>> () at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/core/src/ptr/mod.rs:188
0x00005618920d8a3e in alloc::sync::Arc<T>::drop_slow (self=0x7f81968852b8)
at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/alloc/src/sync.rs:1092
0x00005618920ba273 in <alloc::sync::Arc<T> as core::ops::drop::Drop>::drop (self=0x7f81968852b8)
at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/alloc/src/sync.rs:1688
0x00005618920b76fb in core::ptr::drop_in_place<alloc::sync::Arc<std::sync::mutex::Mutex<dyn pci::device::PciDevice>>> ()
at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/core/src/ptr/mod.rs:188
0x0000561891b5e47d in vmm::device_manager::DeviceManager::eject_device (self=0x7f8190009600,
pci_segment_id=0, device_id=3) at vmm/src/device_manager.rs:4000
0x0000561891b674bc in <vmm::device_manager::DeviceManager as vm_device:🚌:BusDevice>::write (
self=0x7f8190009600, base=70368744108032, offset=8, data=&[u8](size=4) = {...})
at vmm/src/device_manager.rs:4625
0x00005618921927d5 in vm_device:🚌:Bus::write (self=0x7f8190006e00, addr=70368744108040,
data=&[u8](size=4) = {...}) at vm-device/src/bus.rs:235
0x0000561891b72e10 in <vmm::vm::VmOps as hypervisor::vm::VmmOps>::mmio_write (
self=0x7f81900097b0, gpa=70368744108040, data=&[u8](size=4) = {...}) at vmm/src/vm.rs:378
0x0000561892133ae2 in <hypervisor::kvm::KvmVcpu as hypervisor::cpu::Vcpu>::run (
self=0x7f8190013c90) at hypervisor/src/kvm/mod.rs:1114
0x0000561891914e85 in vmm::cpu::Vcpu::run (self=0x7f819001b230) at vmm/src/cpu.rs:348
0x000056189189f2cb in vmm::cpu::CpuManager::start_vcpu::{{closure}}::{{closure}} ()
at vmm/src/cpu.rs:953
Signed-off-by: Bo Chen <chen.bo@intel.com>
Since both Net and vhost_user::Net implement the Migratable trait, we
can factorize the common part to simplify the code related to the net
creation.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Since both Block and vhost_user::Blk implement the Migratable trait, we
can factorize the common part to simplify the code related to the disk
creation.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Extend the validate() function for both DiskConfig and NetConfig so that
we return an error if a vhost-user-block or vhost-user-net device is
expected to be placed behind the virtual IOMMU. Since these devices
don't support this feature, we can't allow iommu to be set to true in
these cases.
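A sketch of the validation rule (the helper and its parameters are
illustrative, not the actual DiskConfig/NetConfig definitions):
"""
// Reject iommu=on for a disk/net device backed by vhost-user, since these
// devices cannot be placed behind the virtual IOMMU.
fn validate_iommu(vhost_user: bool, iommu: bool) -> Result<(), String> {
    if vhost_user && iommu {
        return Err("vhost-user devices do not support being placed behind a virtual IOMMU".to_string());
    }
    Ok(())
}
"""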
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This is a cleaner approach to handling the I/O port write to 0x80.
Whilst doing this, also generate the timestamp at the start of the VM
creation. For consistency use the same timestamp for the ARM equivalent.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
We don't use the VmmOps trait directly for manipulating memory in the
core of the VMM as it's really designed for the MSHV crate to handle
instruction decoding. As I plan to make this trait MSHV specific to
allow reduced locking for MMIO and PIO handling when running on KVM this
use should be removed.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
To correctly map MMIO regions to the guest, we will need to wait for valid
MMIO region information which is generated from 'PciDevice::allocate_bars()'
(as a part of 'DeviceManager::add_pci_device()').
Signed-off-by: Bo Chen <chen.bo@intel.com>
For devices that cannot be named by the user, use the "__" prefix to
identify them as internal devices. Check that any identifiers provided
in the config do not clash with those internal names. This prevents the
user from creating a disk such as "__serial" which would then cause a
failure in an unpredictable manner.
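A sketch of such a check (hypothetical helper, not the actual validation
code):
"""
// User supplied identifiers must not use the "__" prefix reserved for
// internal devices.
fn validate_user_id(id: &str) -> Result<(), String> {
    if id.starts_with("__") {
        return Err(format!("identifier '{}' clashes with reserved internal device names", id));
    }
    Ok(())
}
"""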
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Whenever a device (virtio, vfio, vfio-user or vdpa) is hotplugged, we
must verify the provided identifier is unique, otherwise we must return
an error.
Particularly, this will prevent issues with identifiers for serial,
console, IOAPIC, balloon, rng, watchdog, iommu and gpio since all of
these are hardcoded by the VMM.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
All hotpluggable devices were properly removed from the VmConfig when a
remove-device command was issued, except for the "fs" type. Fix this
lack of support as it is causing the integration tests to fail with the
recent addition of verifying that identifiers are unique.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The device identifiers generated from the DeviceManager were not
guaranteed to be unique since they were not taking into account the list
of identifiers provided through the configuration.
By returning the list of unique identifiers from the configuration, and
by providing it to the DeviceManager, the generation of new identifiers
can rely both on the DeviceTree and the list of IDs from the
configuration.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The socket will be safely deleted on shutdown and so it is not necessary
to delete the API socket when starting the HTTP server.
Fixes: #4026
Signed-off-by: LiHui <andrewli@kubesphere.io>
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Start loading the kernel as early as possible in the VM boot, in a
separate thread. Whilst it is loading, other work can be carried out
such as initialising the devices.
The biggest performance improvement is seen with a more complex set of
devices. If using e.g. four virtio-net devices then the time to start the
kernel improves by 20-30ms. With the simplest configuration the
improvement was of the order of 2-3ms.
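Conceptually, the boot sequence overlap looks like the sketch below (the
functions are placeholders, not the actual Vm/DeviceManager API):
"""
use std::thread;

fn load_kernel_stub() -> u64 { 0x100_0000 } // pretend entry point address
fn init_devices_stub() {}
fn start_vcpus_stub(_entry: u64) {}

fn boot() {
    // Kernel loading runs on a worker thread...
    let loader = thread::spawn(load_kernel_stub);
    // ...while device initialisation proceeds in parallel on this thread.
    init_devices_stub();
    // Join before the vCPUs actually need the entry point.
    let entry = loader.join().expect("kernel loading thread panicked");
    start_vcpus_stub(entry);
}
"""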
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows the same code for generating the kernel command line to be
used on both aarch64 and x86_64 when the latter starts loading the
kernel asynchronously.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This is not required for x86_64 and maintains a tight coupling between
kernel loading and the DeviceManager.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This reverts commit 87eed369cd.
The reason we're reverting this is that OpenAPI Specification[0] doesn't
know how to deal with unsigned types. :-/
Right now the best we can do is keep it as it is, as an int64, and try to
fix OpenAPI, or even switch to swagger, as the latter knows how to
properly deal with those. However, switching to swagger is far from being
a 1:1 transition and will require time to experiment, thus reverting this for
now seems the best approach.
[0]: https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.1.0.md#data-types
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
The Token Bucket fields are, on the Cloud Hypervisor side, u64.
However, we expose those as int64 in the OpenAPI YAML file.
With that in mind, let's adjust the yaml file to expose those as uint64.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This means that the automatic enabling of the virtio-iommu will also be
applied to VMs created via the API as well as the CLI.
Fixes: #4016
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
If using the ACPI based hotplug, only memory can be added, so if the
hotplug RAM size is the same as the boot RAM size then do not include
the memory manager DSDT entries.
Also: this change simplifies the code marginally by making the
HotplugMethod enum Copyable.
This was identified from the following perf output:
1.78% 0.00% vmm cloud-hypervisor [.] <vmm::memory_manager::MemorySlots as acpi_tables::aml::Aml>::append_aml_bytes
|
---<vmm::memory_manager::MemorySlots as acpi_tables::aml::Aml>::append_aml_bytes
<vmm::memory_manager::MemorySlot as acpi_tables::aml::Aml>::append_aml_bytes
acpi_tables::aml::Name::new
<acpi_tables::aml::Path as acpi_tables::aml::Aml>::append_aml_bytes
__libc_malloc
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
No further changes are necessary other than adding a #[derive(Error)] as
there is a manual implementation of Display.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Extend VfioCommon structure to own the MSI interrupt manager. This will
be useful for implementing the restore code path.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This carries a string that is exposed via DMI/SMBIOS and is particularly
useful for cloud-init initialisation.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Use a single enum member for representing errors from the internal API.
This avoids the ugly duplication of the API call name in the error
message:
e.g.
$ target/debug/ch-remote --api-socket /tmp/api resize --cpus 2
Error running command: Server responded with an error: InternalServerError: VmResize(VmResize(CpuManager(DesiredVCpuCountExceedsMax)))
Becomes:
$ target/debug/ch-remote --api-socket /tmp/api resize --cpus 2
Error running command: Server responded with an error: InternalServerError: ApiError(VmResize(CpuManager(DesiredVCpuCountExceedsMax)))
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Instead of defining some very generic resources as PioAddressRange or
MmioAddressRange for each PCI BAR, let's move to the new Resource type
PciBar in order to make things clearer. This allows the code to be
more readable, but also removes the need for hard assumptions about the
MMIO and PIO ranges. PioAddressRange and MmioAddressRange types can be
used to describe everything except PCI BARs. BARs are very special as
they can be relocated and have special information we want to carry
along with them.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to make the code more consistent and easier to read, we remove
the former tuple that was used to describe a BAR, replacing it with the
existing structure PciBarConfiguration.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By factorizing the algorithm untangling TDVF sections from guest RAM
into a dedicated function, we can write some unit tests to validate it
properly achieves what we expect.
Adding the "tdx" feature to the unit tests, otherwise it wouldn't get
tested.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By adding a new method id() to the PciDevice trait, we allow the caller
to retrieve a unique identifier. This is used in the context of BAR
relocation to identify the device being relocated, so that we can update
the DeviceTree resources for all PCI devices (and not only
VirtioPciDevice).
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By returning the new PCI resources from add_pci_device(), we allow the
factorization of the code translating the BARs into resources. This
allows VIRTIO, VFIO and vfio-user to add the resources to the DeviceTree
node.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Relying on the function introduced recently to get the PCI resources and
handle the restore case, both VFIO and vfio-user device creation paths
now have access to PCI resources, which can be provided to the function
add_pci_device().
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Create a dedicated function for getting the PCI segment, b/d/f and
optional resources. This is meant for handling the potential case of a
restore.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Updating the way of restoring BAR addresses for virtio-pci by providing
a more generic approach that will be reused for other PciDevice
implementations (i.e. VfioPciDevice and VfioUserPciDevice).
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The calls to these functions are always preceded by a call to
InterruptSourceGroup::update(). By adding a masked boolean to that
function call it is possible to remove 50% of the calls to the
KVM_SET_GSI_ROUTING ioctl as the update will correctly handle the
masked or unmasked case.
This causes the ioctl to disappear from the perf report for a boot of
the VM.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
EDK2 execution requires a flash device at address 0.
The newly added device is not a fully functional flash. It doesn't
implement any spec of a flash device. Instead, a piece of memory is used
to simulate the flash simply.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Rust 2021 edition has a few improvements over the 2018 edition. Migrate
the project to 2021 edition by following recommended migration steps.
Luckily, the code itself doesn't require fixing.
Bump MSRV to 1.56 as it is required by the 2021 edition. Also fix the
clap build dependency to make Cloud Hypervisor build again.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This is a refactoring commit to simplify source code.
Removed some functions that only return a layout const.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Some addresses defined in `layout.rs` were of type `GuestAddress`, and
some were `u64`. Now align the types of all the `*_START` definitions to
`GuestAddress`.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
The reserved space is for devices.
Some devices (like TPM) require arbitrary addresses close to 4GiB.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
`RAM_64BIT_START` was set to 1 GiB, not a real 64-bit address. Now
rename it to `RAM_START` to avoid confusion.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Add a new iommu parameter to VdpaConfig in order to place the vDPA
device behind a virtual IOMMU.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The list of memory resources provided through the HOB wasn't accurate
because of the broken logic. The fix provides correct ranges to the
firmware.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on latest QEMU patches from branch tdx-qemu-2022.03.29-v7.0.0-rc1
we should only report as memory resources the TempMem sections from TDVF
sections.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The introduction of an error if live resizing is not possible is a
regression compared to the original behaviour where the new size would
be stored in the config and reflected in the next boot. This behaviour
was also inconsistent with the effect of resizing with no VM booted.
Instead of generating an error allow the code to go ahead and update the
config so that the new size will be available upon the reboot.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Similarly to the previous commit restricting the cpu resizing error only
to the situations where the vcpu amount has changed, let's do the same
with the memory and be consistent throughout our code base.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
188078467d made clear that resize should
only happen when dealing with a "dynamic" CpuManager. Although this is
very much correct, it causes a regression on Kata Containers (and on any
other consumer of Cloud Hypervisor) in cases where a resize would be
triggered but the vCPUs values wouldn't be changed.
There's no doubt Kata Containers could do better and not call a
resize in such situations, and that's something that should **also** be
solved there. However, we should also work around this on the Cloud
Hypervisor side as it introduces a regression with the current Kata
Containers code.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
By enabling the VIRTIO feature VIRTIO_F_IOMMU_PLATFORM for all
vhost-user devices when needed, we force the guest to use the DMA API,
making these devices compatible with TDX. By using DMA API, the guest
triggers the TDX codepath to share some of the guest memory, in
particular the virtqueues and associated buffers so that the VMM and
vhost-user backends/processes can access this memory.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
If EFI reset fails on the Linux kernel then it will fallthrough to CMOS
reset. Implement this as one of our reset solutions.
Fixes: #3912
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Compile this feature in by default as it's well supported on both
aarch64 and x86_64 and we only officially support using it (no non-acpi
binaries are available.)
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
AMX is an x86 extension adding hardware units for matrix
operations (int and float dot products). The goal of the extension is
to provide performance enhancements for these common operations.
On Linux, AMX requires requesting the permission from the kernel prior
to use. Guests wanting to make use of the feature need to have the
request made prior to starting the vm.
This change then adds the first --cpus features option, amx, that when
passed will enable AMX usage for guests (needs a 5.17+ kernel) or exit
with failure.
The activation is done in the CpuManager of the VMM thread as it
allows migration and snapshot/restore to work fairly painlessly for
AMX enabled workloads.
Signed-off-by: William Douglas <william.douglas@intel.com>
Disable the DAX feature from the virtio-fs implementation as the feature
is still not stable. The feature is deprecated, meaning the 'dax'
parameter will be removed in about 2 release cycles.
In the meantime, the parameter value is ignored and forced to be
disabled.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When running non-dynamic or with virtio-mem for hotplug the ACPI
functionality should not be included in the DSDT nor does the
MemoryManager need to be placed on the MMIO bus.
Fixes: #3883
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This is now consistent with not supplying the _CRS for the device when
CpuManager is not dynamic.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Rather than just printing a message return an error back through the API
if the user attempts to hotplug a device that supports being behind an
IOMMU where that device isn't placed on an IOMMU segment.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Ensure devices that are specified to be on a PCI segment that is behind
the IOMMU are IOMMU enabled if possible or error out for those devices
that do not support it.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Previously it was not possible to enable vIOMMU for a virtio device.
However with the ability to place an entire PCI segment behind the
IOMMU the IOMMU mapping needs to be setup for the virtio device if it is
behind the IOMMU.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This can already be calculated by the summing the tables reported by the
Linux kernel but this is more convenient.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Separate the destruction and cleanup of the original VM and the creation
of the new one. In particular have a clear hand off point for resources
(e.g. reset EventFd) used by the new VM from the original. In the
situation where vm.shutdown() generates an error this also avoids the
Vmm reference to the Vm (self.vm) being maintained.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the newly added Vdpa device along with the new vdpa parameter,
this patch enables the support for vDPA devices.
It's important to note this is the only virtio device for which we provide
an ExternalDmaMapping instance. This will allow for the right DMA ranges
to be mapped/unmapped.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Introduce a new --vdpa parameter associated with a VdpaConfig for the
future creation of a Vdpa device.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This will significantly reduce the size of the DSDT and the effort
required to parse it if there is no requirement to support
hotplug/unplug.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
If the CpuManager is dynamic it means CPUs can be
hotplugged/unplugged.
Since TDX does not support CPU hotplug this is currently the only
determining factor as to whether the CpuManager is dynamic.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
vmm.ping/vm.info will hang for the PUT method, and vm.create/vmm.shutdown
will hang for the GET method, because these four APIs do not write the
response body when the HTTP method does not match.
Signed-off-by: LiHui <andrewli@kubesphere.io>
In case the virtio device which requires DMA mapping is placed behind a
virtual IOMMU, we shouldn't map/unmap any region manually. Instead, we
provide the DMA handler to the virtio-iommu device so that it can
trigger the proper mappings.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
If a virtio device is associated with a DMA handler, the DMA mapping and
unmapping is performed from the device manager through the handler.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Given that some virtio device might need some DMA handling, we provide a
way to store this through the VirtioPciDevice layer, so that it can be
accessed when the PCI device is removed.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In anticipation of handling potential DMA mapping/unmapping operations for a
virtio device, we extend the MetaVirtioDevice with an additional field
that holds an optional DMA handler.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The tuple of information related to each virtio device is too big, and
it's better to factorize it through a dedicated structure.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When masking an MSI irq, we set entry.masked to true, so the kvm
hypervisor will not pass the gsi to the kernel through the
KVM_SET_GSI_ROUTING ioctl which updates kvm->irq_routing. This will
trigger a kernel panic on the AMD platform when the gsi is the largest
one in the kernel's kvm->irqfds.items:
crash> bt
PID: 22218 TASK: ffff951a6ad74980 CPU: 73 COMMAND: "vcpu8"
#0 [ffffb1ba6707fa40] machine_kexec at ffffffff8565b397
#1 [ffffb1ba6707fa90] __crash_kexec at ffffffff85788a6d
#2 [ffffb1ba6707fb58] crash_kexec at ffffffff8578995d
#3 [ffffb1ba6707fb70] oops_end at ffffffff85623c0d
#4 [ffffb1ba6707fb90] no_context at ffffffff856692c9
#5 [ffffb1ba6707fbf8] exc_page_fault at ffffffff85f95b51
#6 [ffffb1ba6707fc50] asm_exc_page_fault at ffffffff86000ace
[exception RIP: svm_update_pi_irte+227]
RIP: ffffffffc0761b53 RSP: ffffb1ba6707fd08 RFLAGS: 00010086
RAX: ffffb1ba6707fd78 RBX: ffffb1ba66d91000 RCX: 0000000000000001
RDX: 00003c803f63f1c0 RSI: 000000000000019a RDI: ffffb1ba66db2ab8
RBP: 000000000000019a R8: 0000000000000040 R9: ffff94ca41b82200
R10: ffffffffffffffcf R11: 0000000000000001 R12: 0000000000000001
R13: 0000000000000001 R14: ffffffffffffffcf R15: 000000000000005f
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
#7 [ffffb1ba6707fdb8] kvm_irq_routing_update at ffffffffc09f19a1 [kvm]
#8 [ffffb1ba6707fde0] kvm_set_irq_routing at ffffffffc09f2133 [kvm]
#9 [ffffb1ba6707fe18] kvm_vm_ioctl at ffffffffc09ef544 [kvm]
RIP: 00007f143c36488b RSP: 00007f143a4e04b8 RFLAGS: 00000246
RAX: ffffffffffffffda RBX: 00007f05780041d0 RCX: 00007f143c36488b
RDX: 00007f05780041d0 RSI: 000000004008ae6a RDI: 0000000000000020
RBP: 00000000000004e8 R8: 0000000000000008 R9: 00007f05780041e0
R10: 00007f0578004560 R11: 0000000000000246 R12: 00000000000004e0
R13: 000000000000001a R14: 00007f1424001c60 R15: 00007f0578003bc0
ORIG_RAX: 0000000000000010 CS: 0033 SS: 002b
To solve this problem, move route.disable() before set_gsi_routes() to
remove the gsi from irqfds.items first.
This problem only exists on the AMD platform, because on the Intel
platform the kernel just returns when updating the irte, while it only
prints a warning on AMD.
Also, this patch adjusts the order of enable() and set_gsi_routes() in
unmask(), which should do no harm.
Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
When masking an MSI irq, we set entry.masked to true, so the kvm
hypervisor will not pass the gsi to the kernel through the
KVM_SET_GSI_ROUTING ioctl which updates kvm->irq_routing. This will
trigger a kernel panic on the AMD platform when the gsi is the largest
one in the kernel's kvm->irqfds.items:
crash> bt
PID: 22218 TASK: ffff951a6ad74980 CPU: 73 COMMAND: "vcpu8"
#0 [ffffb1ba6707fa40] machine_kexec at ffffffff8565b397
#1 [ffffb1ba6707fa90] __crash_kexec at ffffffff85788a6d
#2 [ffffb1ba6707fb58] crash_kexec at ffffffff8578995d
#3 [ffffb1ba6707fb70] oops_end at ffffffff85623c0d
#4 [ffffb1ba6707fb90] no_context at ffffffff856692c9
#5 [ffffb1ba6707fbf8] exc_page_fault at ffffffff85f95b51
#6 [ffffb1ba6707fc50] asm_exc_page_fault at ffffffff86000ace
[exception RIP: svm_update_pi_irte+227]
RIP: ffffffffc0761b53 RSP: ffffb1ba6707fd08 RFLAGS: 00010086
RAX: ffffb1ba6707fd78 RBX: ffffb1ba66d91000 RCX: 0000000000000001
RDX: 00003c803f63f1c0 RSI: 000000000000019a RDI: ffffb1ba66db2ab8
RBP: 000000000000019a R8: 0000000000000040 R9: ffff94ca41b82200
R10: ffffffffffffffcf R11: 0000000000000001 R12: 0000000000000001
R13: 0000000000000001 R14: ffffffffffffffcf R15: 000000000000005f
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
#7 [ffffb1ba6707fdb8] kvm_irq_routing_update at ffffffffc09f19a1 [kvm]
#8 [ffffb1ba6707fde0] kvm_set_irq_routing at ffffffffc09f2133 [kvm]
#9 [ffffb1ba6707fe18] kvm_vm_ioctl at ffffffffc09ef544 [kvm]
RIP: 00007f143c36488b RSP: 00007f143a4e04b8 RFLAGS: 00000246
RAX: ffffffffffffffda RBX: 00007f05780041d0 RCX: 00007f143c36488b
RDX: 00007f05780041d0 RSI: 000000004008ae6a RDI: 0000000000000020
RBP: 00000000000004e8 R8: 0000000000000008 R9: 00007f05780041e0
R10: 00007f0578004560 R11: 0000000000000246 R12: 00000000000004e0
R13: 000000000000001a R14: 00007f1424001c60 R15: 00007f0578003bc0
ORIG_RAX: 0000000000000010 CS: 0033 SS: 002b
To solve this problem, move route.disable() before set_gsi_routes() to
remove the gsi from irqfds.items first.
This problem only exists on the AMD platform, because on the Intel
platform the kernel just returns when updating the irte, while it only
prints a warning on AMD.
Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
Move to release version v0.2.0 for both vm-virtio and vhost-user-backend
crates rather than relying on their main branch, as they might be
subject to breaking changes.
Fixes: #3800
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Add a field for its length and fix up users.
Things currently work only because all the hardcoded values agree with
each other. This is prone to breakage.
No functional change.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This commit adds event fds and the event handler to send/receive
requests and responses from the GDB thread. It also adds the `--gdb`
flag to enable the GDB stub feature.
Signed-off-by: Akira Moroo <retrage01@gmail.com>
This commit adds `stop_on_boot` to `Vm` so that the VM stops before
starting when requested at boot time. This change is required to keep
the target VM stopped until a debugger attaches, as the user expects.
Signed-off-by: Akira Moroo <retrage01@gmail.com>
This commit adds `Vm::debug_request` to handle `GdbRequestPayload`,
which will be sent from the GDB thread.
Signed-off-by: Akira Moroo <retrage01@gmail.com>
This commit adds an initial gdb.rs implementation of the `Debuggable`
trait to describe a debuggable component. Parts of the trait
implementations are based on the crosvm GDB stub code [1].
[1] https://github.com/google/crosvm/blob/main/src/gdb.rs
Signed-off-by: Akira Moroo <retrage01@gmail.com>
This commit adds the `KVM_SET_GUEST_DEBUG` and `KVM_TRANSLATE` ioctls to
the seccomp filter to enable guest debugging without `--seccomp=false`.
Signed-off-by: Akira Moroo <retrage01@gmail.com>
This commit adds `VmState::BreakPoint` to handle hardware breakpoints.
The VM will enter this state when a breakpoint is hit or a debugger
interrupts the execution.
Signed-off-by: Akira Moroo <retrage01@gmail.com>
42b5d4a2f7 has changed how the PciBdf
field of a DeviceNode is represented (from an int32 to its own struct).
To avoid marshalling / unmarshalling issues for the projects relying on
the OpenAPI auto-generated code, let's propagate the change, updating
the yaml file accordingly.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
The `dies per package` setting of the vCPU topology does not apply on
AArch64. Now we only accept a value of `1`. This way we can make the
`dies` field transparent and prevent it from impacting the topology
setting.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
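A hypothetical sketch of such a check, assuming a plain string error
type; the actual validation in Cloud Hypervisor may be structured
differently:

```rust
// Illustrative only: reject any `dies` value other than 1 on AArch64.
fn validate_dies(dies: u8) -> Result<(), String> {
    // AArch64 has no "dies per package" level in the exposed topology,
    // so only the transparent value of 1 is accepted.
    if cfg!(target_arch = "aarch64") && dies != 1 {
        return Err("CPU topology: 'dies' must be 1 on AArch64".to_string());
    }
    Ok(())
}
```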
Based on the helpers from the hypervisor crate, the VMM can identify
what type of hypercall has been issued through the KVM_EXIT_TDX reason.
For now, we only log warnings and set the status to INVALID_OPERAND
since these hypercalls aren't supported. The proper handling will be
implemented later.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Since the object returned from CpuManager.create_vcpu() is never used,
we can avoid cloning this object.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By having the DeviceNode storing a PciBdf, we simplify the internal code
as well as allow for custom Serialize/Deserialize implementation for the
PciBdf structure. These custom implementations let us display the PCI
s/b/d/f in a human-readable format.
Fixes #3711
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
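A rough sketch of what such a custom implementation can look like with
serde; the field layout below is an assumption for illustration, not
the actual PciBdf definition:

```rust
use serde::{Serialize, Serializer};

struct PciBdf {
    segment: u16,
    bus: u8,
    device: u8,
    function: u8,
}

impl Serialize for PciBdf {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        // Render the address as e.g. "0000:00:01.0" instead of a raw
        // integer, which is what makes the output human readable.
        serializer.serialize_str(&format!(
            "{:04x}:{:02x}:{:02x}.{:x}",
            self.segment, self.bus, self.device, self.function
        ))
    }
}
```

A matching Deserialize implementation would parse the same textual form
back into the struct.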
As we've added support for cold adding devices to a VM that has been
created but not yet started, we should propagate the `204` response
generated in those cases to the yaml file, so openapi-generator can
produce the correct client code on the Go side, to handle both `200` and
`204` successful results.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Instead of erroring out when trying to change the configuration of the
VM after it has been created but before it has booted, let's allow
users to make that change without any issue, as long as the VM has
already been created.
Fixes: #3639
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's add very basic unit tests for the vm_add_$device() functions, so
we can easily expand them when changing their behaviour in the coming
commits.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Instead of doing the validation of the configuration change as part of
the Vm, let's do this in the upper layer, in the Vmm.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's move add_to_config to config.rs so it can be used from both inside
and outside of the vm.rs file.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
TDX support has been present in the project for quite some time, but
the TDX configuration was not yet exposed to the ones using CH via the
OpenAPI auto-generated code.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Since the devices behind the IOMMU cannot be changed at runtime, we
offer the ability to place all devices on user-chosen segments behind
the IOMMU. This allows the hotplugging of devices behind the IOMMU,
provided that they are assigned to a segment that is located behind the
IOMMU.
Fixes: #911
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
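A loose sketch of the kind of check this enables, with assumed field
names (num_pci_segments, iommu_segments) that may not match the actual
configuration structures:

```rust
// Illustrative only: a device may be hotplugged behind the IOMMU when its
// target PCI segment is one of the segments placed behind the IOMMU.
struct PlatformConfig {
    num_pci_segments: u16,
    iommu_segments: Option<Vec<u16>>,
}

fn segment_behind_iommu(platform: &PlatformConfig, segment: u16) -> bool {
    platform
        .iommu_segments
        .as_ref()
        .map_or(false, |segments| segments.contains(&segment))
}
```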
Adding a new parameter free_page_reporting=on|off to the balloon device
so that we can enable the corresponding feature from virtio-balloon.
Running a VM with a balloon device where this feature is enabled allows
the guest to report pages that are free from the guest's perspective. This
information is used by the VMM to release the corresponding pages on the
host.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
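A loose sketch of how such a parameter might appear in the balloon
configuration, assuming serde is used for option parsing; the actual
struct in Cloud Hypervisor may differ:

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
struct BalloonConfig {
    size: u64,
    #[serde(default)]
    deflate_on_oom: bool,
    // New knob: let the guest report pages that are free from its
    // perspective so the VMM can release them on the host.
    #[serde(default)]
    free_page_reporting: bool,
}
```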
In order to allow for human-readable output of the VM configuration, we
pull it out of the snapshot, which effectively becomes the list of
states from the VM. The configuration is stored in a dedicated file in
JSON format (not including any binary output).
Having the ability to read and modify the VM configuration manually
between the snapshot and restore phases makes debugging easier, and it
also empowers users to extend the use cases relying on the
snapshot/restore feature.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
As per this kernel documentation:
For KVM_EXIT_IO, KVM_EXIT_MMIO, KVM_EXIT_OSI, KVM_EXIT_PAPR, KVM_EXIT_XEN,
KVM_EXIT_EPR, KVM_EXIT_X86_RDMSR and KVM_EXIT_X86_WRMSR the corresponding
operations are complete (and guest state is consistent) only after userspace
has re-entered the kernel with KVM_RUN. The kernel side will first finish
incomplete operations and then check for pending signals.
The pending state of the operation is not preserved in state which is
visible to userspace, thus userspace should ensure that the operation is
completed before performing a live migration. Userspace can re-enter the
guest with an unmasked signal pending or with the immediate_exit field set
to complete pending operations without allowing any further instructions
to be executed.
Since we capture the state as part of the pause and override it as part
of the resume, we must ensure the state is consistent; otherwise we will
lose the results of the MMIO or PIO operation that caused the exit from
which we paused.
Fixes: #3658
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
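A hedged sketch, assuming the kvm-ioctls crate, of how pending I/O can
be completed before the state is captured; this is not the actual Cloud
Hypervisor pause path:

```rust
use kvm_ioctls::VcpuFd;

// Finish any in-flight MMIO/PIO emulation so the vCPU state that we are
// about to capture is consistent.
fn complete_pending_io(vcpu: &mut VcpuFd) {
    // Ask KVM to return to userspace without executing any further guest
    // instructions.
    vcpu.set_kvm_immediate_exit(1);
    // Re-enter the kernel once; KVM first completes the pending operation
    // and then exits back to us (typically with EINTR).
    let _ = vcpu.run();
    vcpu.set_kvm_immediate_exit(0);
}
```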
If a payload is found in the TDVF section, then after it's been copied
to the guest memory, make sure to create the corresponding TdPayload
structure and insert it through the HOB.
Signed-off-by: Jiaqi Gao <jiaqi.gao@intel.com>
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In case of TDX, if a kernel and/or a command line are provided by the
user, they can't be treated the same way as for the non-TDX case. That
is why this patch ensures the function load_kernel() is only invoked for
the non-TDX case.
For the TDX case, whenever TDVF contains a Payload and/or PayloadParam
sections, the file provided through --kernel and the parameters provided
through --cmdline are copied at the locations specified by each TDVF
section.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The TDVF specification has been updated with the ability to provide a
specific payload, which means we will be able to achieve direct kernel
boot.
For that reason, let's not prevent the user from using the --kernel
parameter when running with TDX.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Make sure Cloud Hypervisor relies on the upstream and actively
maintained vfio-ioctls crate from the rust-vmm/vfio repository instead
of the deprecated version coming from the rust-vmm/vfio-ioctls
repository.
Fixes #3673
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that we have introduced a separate method to indicate when the
migration is started, start_dirty_log() and stop_dirty_log() no longer
have to carry an implicit meaning, as they can focus entirely on the
dirty log being started or stopped.
For that reason, we can now safely move stop_dirty_log() to the code
section performing non-local migration. It only makes sense to stop
logging dirty pages if logging has been started before.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to clearly decouple when the migration is started from when
the dirty logging is started, we introduce a new method on the
Migratable trait. This clarifies the semantics, as we no longer use
start_dirty_log() to identify when the migration has been started.
Similarly, we rely on the already existing complete_migration()
method to know when the migration has ended.
A bug was reported when running a local migration with a vhost-user-net
device in server mode. The reason was that the migration_started
variable was never set to "true", since the start_dirty_log() function
was never invoked.
Signed-off-by: lizhaoxin1 <Lxiaoyouling@163.com>
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
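A loose sketch of the decoupled responsibilities described above; the
real Migratable trait in the vm-migration crate has additional
supertraits and a richer error type than the placeholder used here:

```rust
#[derive(Debug)]
struct MigratableError(String);

trait Migratable {
    // Signals that a migration has begun, whether or not dirty page
    // logging will be used (local migrations skip it).
    fn start_migration(&mut self) -> Result<(), MigratableError> {
        Ok(())
    }

    // These now deal exclusively with dirty page tracking.
    fn start_dirty_log(&mut self) -> Result<(), MigratableError> {
        Ok(())
    }
    fn stop_dirty_log(&mut self) -> Result<(), MigratableError> {
        Ok(())
    }

    // Already existing method used to know when the migration has ended.
    fn complete_migration(&mut self) -> Result<(), MigratableError> {
        Ok(())
    }
}
```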
While cloud-hypervisor does support receiving the file descriptors of a
tuntap device, advertising the fds structure via the OpenAPI can lead to
misinterpretations of what can and what should be done.
An unsuspecting consumer will think that they could just set the file
descriptors there directly, or even pass them as a byte array.
However, the proper way to go in those cases is to actually send those
via send_msg(), together with the request.
As hacking the OpenAPI auto-generated code to properly do this is not
*that* trivial, and as doing so during a `create VM` request is not
supported, we'd better not advertise those.
Please, for more details, also check:
https://github.com/cloud-hypervisor/cloud-hypervisor/pull/3607#issuecomment-1020935523
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Now that all the preliminary work has been merged to make Cloud
Hypervisor work with the upstream crate virtio-queue from
rust-vmm/vm-virtio repository, we can move the whole codebase over to it
and remove the local copy of the virtio-queue crate.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the latest code from the micro-http crate, this patch adds
support for multiple file descriptors to be sent along with the add-net
request. This means we can now hotplug a multiqueue network interface
into the VM.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Moving the whole codebase to rely on the AccessPlatform definition from
vm-virtio so that we can fully remove it from the virtio-queue crate.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
If a PMU is enabled in a VM, we also need to initialize the PMU
when the VM is restored. Otherwise, vCPUs cannot be started after
the VM is restored.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
When enabling the PMU on arm64, the ioctl with group
KVM_HAS_DEVICE_ATTR is blocked by seccomp, so add it to the authorized
list.
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>