We want to give the nested VM time to be fully ready before we check
that it is correctly set up. Running the CI on Azure involves 3 layers
of virtualization, which, combined with the high load caused by the
parallelization, adds to the startup time.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The virtio configuration structures have been slightly modified between
5.6-rc4 and 5.8-rc4, forcing the virtio-iommu device to be updated
accordingly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Updating the kernel from 5.6-rc4 to 5.8-rc4 allows us to remove the
dependency on both virtio-vsock and virtio-mem patches as they are now
part of the upstream kernel. We're still carrying virtio-iommu and
virtio-fs patches.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Move the definition of RawFile from the virtio-devices crate into the
qcow crate. All the code that consumes RawFile already depends on the
qcow crate for image file type detection, so this change removes the
need for the qcow crate to depend on the very large virtio-devices
crate.
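A rough, hypothetical sketch of a consumer after the move; the
RawFile::new() signature shown here is an assumption for illustration:

    // Hypothetical consumer after the move: RawFile now comes from the qcow
    // crate, so raw image I/O no longer pulls in virtio-devices.
    use qcow::RawFile;
    use std::fs::OpenOptions;

    fn open_raw(path: &str) -> std::io::Result<RawFile> {
        let file = OpenOptions::new().read(true).write(true).open(path)?;
        // Second argument (direct I/O) assumed false for this sketch.
        Ok(RawFile::new(file, false))
    }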
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
As compiling without acpi (implied by mmio) means that the VM will
terminate on an i8042 reset, we cannot test the reboot.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Now that the vhost_user_net crate does not depend on the virtio-devices
crate, it is no longer compiled differently based on the mmio or pci
features.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Move NetQueuePair and the related NetCounters into the net_util crate.
This means that the vhost_user_net crate no longer depends on
virtio-devices, and so does not depend on the pci, qcow or other similar
crates. This significantly simplifies the build chain for this backend.
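A purely hypothetical view of the backend side after the move; the exact
fields of these types are not shown and the thread struct below is made
up for illustration:

    // Hypothetical backend usage after the move: both types now come from
    // net_util, so vhost_user_net no longer needs virtio-devices at all.
    use net_util::{NetCounters, NetQueuePair};

    struct VhostUserNetThread {
        // One RX/TX queue pair handled by this thread, plus its statistics.
        net: NetQueuePair,
        counters: NetCounters,
    }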
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
By moving the code for opening the two RX and TX queues into a shared
location, we are starting to remove the requirement for the
vhost-user-net backend to depend on the virtio-devices crate, which
itself depends on many other crates that are not necessary for the
backend to function.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Rather than embedding a String inside the MultiQueueSupport error
value, use two different error values to differentiate the two cases.
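A minimal sketch of the idea; the variant names below are hypothetical,
not the real error definitions:

    // Before: a single variant carrying a free-form String.
    // enum Error { MultiQueueSupport(String), ... }

    // After: two dedicated variants, so callers can match on the exact cause.
    enum Error {
        // The TAP device does not support multi-queue.
        TapMultiQueueNotSupported,
        // The requested number of queues does not match the TAP configuration.
        TapMultiQueueMismatch,
    }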
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
By moving the code for opening the TAP device into a shared location we
are starting to remove the requirement for the vhost-user-net backend to
depend on the virtio-devices crate, which itself depends on many
other crates that are not necessary for the backend to function.
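A sketch of what a shared helper can look like; the name, signature and
stand-in Tap type below are assumptions, not the exact net_util API:

    use std::io;

    // Stand-in for the real TAP abstraction in net_util.
    pub struct Tap(std::fs::File);

    // Shared TAP-opening helper used by both the virtio-net device and the
    // vhost-user-net backend, so the setup logic lives in one place.
    pub fn open_tap(if_name: Option<&str>, num_queue_pairs: usize) -> io::Result<Vec<Tap>> {
        let _ = (if_name, num_queue_pairs);
        // The real helper opens /dev/net/tun once per queue pair, applies the
        // interface name/MAC configuration, and returns the resulting fds.
        Ok(Vec::new())
    }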
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The kvm feature gets bubbled all the way up from the hypervisor crate to
the top-level Cargo.toml.
Cloud Hypervisor can't function without KVM at this point, so make it
a default feature.
Fix all scripts that use --no-default-features.
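A rough sketch of the resulting top-level feature wiring; the exact
feature lists are illustrative, not the real Cargo.toml contents:

    [features]
    default = ["kvm"]
    kvm = ["hypervisor/kvm", "vmm/kvm"]

Scripts that build with --no-default-features then have to re-enable KVM
explicitly, e.g. by adding --features kvm.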
Signed-off-by: Wei Liu <liuwe@microsoft.com>
Check if increasing the wait time after the VM is spawned helps with
getting more stable numbers for the base PSS.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit stores the balloon size in MemoryConfig.
After a reboot, virtio-balloon can use this size to inflate back to
the size it had before the reboot.
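As a hedged sketch, assuming a simplified MemoryConfig (the real struct
has more fields and the exact field name may differ):

    // Illustrative only: the point is that the current balloon size is part
    // of the persisted memory configuration, so a rebooted VM re-inflates the
    // balloon to the same size it had before the reboot.
    pub struct MemoryConfig {
        pub size: u64,
        pub balloon_size: u64,
        // ... other memory settings ...
    }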
Signed-off-by: Hui Zhu <teawater@antfin.com>
The implementation of the virtio-balloon was slightly wrong as it was
generating the GPA (Guest Physical Address) from the PFN (Page Frame
Number), which was a u32. That means the GPA was created as a u32 and
later cast to extend it to a u64 type. Unfortunately, by doing so, the
GPA was wrong whenever the value needed more than 32 bits.
That's why the PFN is now cast to a u64 before the GPA is generated,
which creates the GPA directly as a 64-bit value.
Additionally, this patch simplifies the process_queue() function by
relying on multiple vm-memory helpers.
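Concretely, the fix amounts to widening the PFN before shifting; a
minimal sketch (the balloon PFN shift is 12, i.e. 4 KiB pages):

    const VIRTIO_BALLOON_PFN_SHIFT: u64 = 12;

    // Buggy: the shift happens on a u32, so the high bits are silently lost
    // before the cast to u64 widens the (already truncated) result.
    fn gpa_wrong(pfn: u32) -> u64 {
        (pfn << VIRTIO_BALLOON_PFN_SHIFT) as u64
    }

    // Fixed: widen the PFN to u64 first, then shift, so GPAs above 4 GiB keep
    // all of their bits.
    fn gpa_fixed(pfn: u32) -> u64 {
        (pfn as u64) << VIRTIO_BALLOON_PFN_SHIFT
    }

For example, a PFN of 0x10_0000 (the page at 4 GiB) yields GPA
0x1_0000_0000 with the fixed version, but truncates to 0 with the old
code.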
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit adds a Jenkins stage for the AArch64 integration tests,
and the tests are carried out using the GNU toolchain.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
The AArch64 Linux kernel compiled by running `make` is in PE format
rather than the vmlinux, vmlinux.pvh or bzImage formats. Therefore we
need to add an integration test for this case.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
This commit adds supporting components and code for enabling the
AArch64 integration tests, including:
1. A Linux kernel config file to build the kernel on AArch64 machines.
2. Refactoring `run_integration_test.sh` into architecture-specific
scripts for readability.
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Remove the vmm dependency from vhost_user_block and vhost_user_net,
where it existed only to use config::OptionParser. By moving the
OptionParser into its own crate at the top level, we can remove the very
heavy dependency that these vhost-user backends had.
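A hypothetical view of a backend after the split; the crate name and the
parser methods shown are assumptions based on the existing OptionParser
usage:

    // Hypothetical: the parser now comes from a small dedicated crate rather
    // than from vmm::config, so the backend no longer links the whole VMM.
    use option_parser::OptionParser;

    fn parse_socket(options: &str) -> Option<String> {
        let mut parser = OptionParser::new();
        parser.add("socket");
        parser.add("num_queues");
        parser.parse(options).ok()?;
        parser.get("socket")
    }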
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Refactored the construction of KVM IOCTL rules for Seccomp.
Separating the rules by architecture can reduce the risk of bugs and
attacks.
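An illustrative sketch of the separation; the constants below are
placeholders (not the real ioctl numbers) and the return type is
simplified compared to the actual seccomp rule code:

    #[cfg(target_arch = "x86_64")]
    fn kvm_arch_ioctl_allowlist() -> Vec<u64> {
        // x86_64-only ioctls (e.g. the CPUID/MSR accessors) live here so they
        // are never allowed on other architectures.
        const KVM_GET_SUPPORTED_CPUID: u64 = 0; // placeholder value
        const KVM_SET_MSRS: u64 = 1; // placeholder value
        vec![KVM_GET_SUPPORTED_CPUID, KVM_SET_MSRS]
    }

    #[cfg(target_arch = "aarch64")]
    fn kvm_arch_ioctl_allowlist() -> Vec<u64> {
        // AArch64 allows a different set, e.g. the vCPU init and one-reg ioctls.
        const KVM_ARM_VCPU_INIT: u64 = 0; // placeholder value
        const KVM_GET_ONE_REG: u64 = 1; // placeholder value
        vec![KVM_ARM_VCPU_INIT, KVM_GET_ONE_REG]
    }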
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
In order to move the hypervisor-specific parts of the VM exit handling
path into the hypervisor crate, we're defining a generic,
hypervisor-agnostic VM exit enum.
This is what the hypervisor's Vcpu run() call should return when the VM
exit cannot be completely handled through the hypervisor-specific bits.
For KVM based hypervisors, this means directly forwarding the I/O related
exits back to the VMM itself. For other hypervisors that e.g. rely on the
VMM to decode and emulate instructions, the decoding itself would happen
exclusively in the hypervisor crate, and the rest of the VM exit handling
would go through the VMM device model implementation.
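A minimal sketch of what such a hypervisor-agnostic exit type can look
like; the variant set below is illustrative rather than the exact
definition:

    // Illustrative only: what the hypervisor crate hands back to the VMM when
    // an exit cannot be fully handled inside the hypervisor-specific code.
    pub enum VmExit<'a> {
        // Port I/O write: port plus the data to emulate.
        IoOut(u16, &'a [u8]),
        // Port I/O read: port plus the buffer to fill in.
        IoIn(u16, &'a mut [u8]),
        // MMIO accesses forwarded to the VMM device model.
        MmioRead(u64, &'a mut [u8]),
        MmioWrite(u64, &'a [u8]),
        // Exit fully handled by the hypervisor crate; nothing left to do.
        Ignore,
        // The guest requested a reset.
        Reset,
    }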
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Fix test_vm unit test by using the new abstraction and dropping some
dead code.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The code is purely for maintaining an internal counter. It is not really
tied to KVM.
No functional change.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
There are several dependencies that need updating so update them
manually rather than relying on dependabot. This will reduce the load on
the CI.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The binary is still built in the same location but the source code and
the dependencies for it come from the vhost_user_block crate itself.
The binary will be built with:
`cargo build --all --bin vhost_user_block` or just `cargo build --all`
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The binary is still built in the same location but the source code and
the dependencies for it come from the vhost_user_net crate itself.
The binary will be built with:
`cargo build --all --bin vhost_user_net` or just `cargo build --all`
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
With the vhost_user_fs binary moved to its own crate, the dependencies
in the top level can be trimmed significantly.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The binary is still built in the same location but the source code and
the dependencies for it come from the vhost_user_fs crate itself.
The binary will be built with:
`cargo build --all --bin vhost_user_fs` or just `cargo build --all`
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In preparation for splitting the binaries into their own crates start
building all the binaries in the workspace as part of the integration
testing suite.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In preparation for splitting the binaries into their own crates start
building all the binaries in the workspace when running the build
command inside the container.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>