Moving the creation of the vCPUs before the DeviceManager gets created
will allow the aarch64 vGIC to be created before the DeviceManager as
well in a follow-up patch. The end goal is to adopt the same creation
sequence for both x86_64 and aarch64, keeping in mind that the vGIC
requires every vCPU to be created first.
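As an illustration, the intended end-state ordering looks roughly like
the sketch below; every name here is a simplified stand-in, not the
actual Cloud Hypervisor API.

    // Illustrative stubs only, not the real Cloud Hypervisor types.
    struct Vm;
    struct Vcpu;
    struct Gic;
    struct DeviceManager;

    fn create_vcpus(_vm: &Vm, count: u8) -> Vec<Vcpu> {
        (0..count).map(|_| Vcpu).collect()
    }

    // On aarch64, the vGIC can only be created once every vCPU exists.
    fn create_gic(_vm: &Vm, _vcpus: &[Vcpu]) -> Gic {
        Gic
    }

    fn create_device_manager(_vm: &Vm) -> DeviceManager {
        DeviceManager
    }

    fn main() {
        let vm = Vm;
        let vcpus = create_vcpus(&vm, 4);     // 1. all vCPUs first
        let _gic = create_gic(&vm, &vcpus);   // 2. then the vGIC (aarch64)
        let _dm = create_device_manager(&vm); // 3. then the DeviceManager
    }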
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Split the vCPU creation into two distinct parts. On the one hand, we
create the actual Vcpu object along with the underlying
hypervisor::Vcpu. On the other hand, we configure the existing Vcpu,
setting its registers to proper values (such as the entry point).
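A minimal sketch of the split, with stub types standing in for the
real ones:

    // Sketch only; the types and register handling are illustrative.
    struct HypervisorVcpu; // stand-in for hypervisor::Vcpu

    struct Vcpu {
        _inner: HypervisorVcpu,
    }

    impl Vcpu {
        // Phase 1: only create the underlying hypervisor::Vcpu.
        fn new(_id: u8) -> Self {
            Vcpu { _inner: HypervisorVcpu }
        }

        // Phase 2: configure the existing vCPU, e.g. set the entry point.
        fn configure(&mut self, _entry_point: u64) {
            // The real code sets registers through the hypervisor::Vcpu.
        }
    }

    fn main() {
        let mut vcpu = Vcpu::new(0); // early, before the DeviceManager exists
        vcpu.configure(0x10_0000);   // later, once the entry point is known
    }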
This will allow further work to move the creation earlier in the boot,
so that the hypervisor::Vcpu will already be created by the time the
DeviceManager gets created.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
The CpuManager is now created before the DeviceManager. This is
preliminary work for creating the vCPUs before the DeviceManager,
which is needed to ensure both x86_64 and aarch64 follow the same
sequence.
It's important to note that the optimization for faster PIO accesses
to the PCI config space had to be removed, given that VmOps is
required by the CpuManager, and by the Vcpu by extension. But since
PciConfigIo is created as part of the DeviceManager, there was no
proper way of moving things around so that PciConfigIo could be
provided early enough.
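The constraint can be summarised with this sketch (the trait and type
names are simplified stand-ins for VmOps and PciConfigIo):

    use std::sync::Arc;

    // Simplified stand-ins; not the real trait or types.
    trait VmOps {
        fn pio_read(&self, _port: u64, _data: &mut [u8]) {}
    }

    struct CpuManager;

    impl CpuManager {
        // The CpuManager (and each Vcpu) needs VmOps at creation time...
        fn new(_ops: Arc<dyn VmOps>) -> Self {
            CpuManager
        }
    }

    // ...but PciConfigIo only exists once the DeviceManager has been
    // created, which is now too late to wire it into a PIO fast path.
    struct PciConfigIo;

    fn main() {
        struct NoopOps;
        impl VmOps for NoopOps {}
        let _cpu_manager = CpuManager::new(Arc::new(NoopOps));
        // No PciConfigIo is available yet at this point in the boot.
    }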
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This optimisation provided some performance improvement when measured
with perf; however, in terms of boot time it has no impact measurable
using our performance-metrics tooling.
Removing this optimisation helps simplify the VMM internals, as it
allows the reordering of the VM creation process, permitting
refactoring of the restore code path.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This panic was triggered by fuzzing the virtio-net device. This commit
handles the error explicitly to avoid the panic, which also makes the
fuzzer happy (as panics are treated as bugs).
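A sketch of the pattern, with made-up names (the real fix lives in the
virtio-net queue handling):

    // Illustrative only: propagate the error instead of panicking.
    #[derive(Debug)]
    enum NetQueueError {
        InvalidDescriptor,
    }

    fn process_descriptor(len: usize) -> Result<(), NetQueueError> {
        // Before: a malformed descriptor hit an unwrap() and panicked.
        // After: the error is returned, so the fuzzer sees no crash.
        if len == 0 {
            return Err(NetQueueError::InvalidDescriptor);
        }
        Ok(())
    }

    fn main() {
        if let Err(e) = process_descriptor(0) {
            eprintln!("dropping malformed descriptor: {:?}", e);
        }
    }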
Signed-off-by: Bo Chen <chen.bo@intel.com>
If the KVM version is too old (pre-Linux 5.7), fetch the CPUID
information from the host and use that in the guest. We prefer the KVM
version over the host version, as it provides the CPUID for the
running CPU rather than the CPU executing this code, which might be
different on a hybrid topology.
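The fallback can be sketched as below; kvm_cpuid_is_recent_enough and
kvm_supported_cpuid are placeholders for the real version check and
the KVM_GET_SUPPORTED_CPUID path.

    // Placeholder: real code would check the KVM/kernel version.
    fn kvm_cpuid_is_recent_enough() -> bool {
        false
    }

    // Placeholder for the KVM_GET_SUPPORTED_CPUID-based path.
    fn kvm_supported_cpuid(_leaf: u32) -> (u32, u32, u32, u32) {
        unimplemented!()
    }

    #[cfg(target_arch = "x86_64")]
    fn cpuid_leaf(leaf: u32) -> (u32, u32, u32, u32) {
        if kvm_cpuid_is_recent_enough() {
            // Preferred: valid for the vCPU wherever it is scheduled.
            kvm_supported_cpuid(leaf)
        } else {
            // Fallback: raw CPUID reflects whichever physical CPU runs
            // this code, which may differ on a hybrid topology.
            let r = unsafe { std::arch::x86_64::__cpuid(leaf) };
            (r.eax, r.ebx, r.ecx, r.edx)
        }
    }

    fn main() {
        #[cfg(target_arch = "x86_64")]
        println!("{:x?}", cpuid_leaf(0));
    }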
Fixes: #4918
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Since Rust 1.59, cargo is able to strip a binary [1]. This can be
enabled in Cargo.toml by adding `strip = true` to the
`[profile.release]` section.
Besides adding binary stripping support to the project's Cargo.toml,
this also changes the stripping process in the release workflow to the
toolchain-based one, so that the AArch64 release binaries can be
stripped as well.
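Concretely, the Cargo.toml addition described above is:

    [profile.release]
    strip = true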
Fixes: https://github.com/cloud-hypervisor/cloud-hypervisor/issues/4916
[1] https://doc.rust-lang.org/beta/cargo/reference/profiles.html#strip
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Fixes: #4678
Currently the release build is done with the kvm feature only, which
makes the live upgrade test fail on MSHV since it does not find
/dev/kvm. As Cloud Hypervisor supports both kvm and mshv in a single
binary, we should make the release build with both the KVM and MSHV
features enabled. That way the live upgrade test does not fail on
MSHV.
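For illustration, assuming the kvm and mshv feature names from this
message (the exact flags depend on the project's default features),
the release build would be invoked along these lines:

    cargo build --release --no-default-features --features "kvm,mshv"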
Signed-off-by: Muminul Islam <muislam@microsoft.com>
The jammy disk image has a new enough kernel to support SGX, and if we
rely on just the CPUID information (which is sufficient), we can use
the regular jammy test image for testing.
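As a sketch, relying on CPUID alone means checking the architectural
SGX feature bit, CPUID.(EAX=07H,ECX=0):EBX[bit 2]:

    // Check the architectural SGX bit: CPUID.(EAX=07H,ECX=0):EBX[2].
    #[cfg(target_arch = "x86_64")]
    fn host_supports_sgx() -> bool {
        let r = unsafe { std::arch::x86_64::__cpuid_count(7, 0) };
        (r.ebx >> 2) & 1 == 1
    }

    fn main() {
        #[cfg(target_arch = "x86_64")]
        println!("SGX supported: {}", host_supports_sgx());
    }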
Signed-off-by: Rob Bradford <robert.bradford@intel.com>