AArch64 support is at a very early stage. The steps for building and
running on x86 and AArch64 do not align well yet, so adding AArch64
content to README.md would introduce too much divergence.
Adding a guide in the docs/ folder is a better way to start for now.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Updated the Dockerfile to work with multiple architectures.
Updated dev_cli.sh to:
1. Build the container image locally until an AArch64 container image
   is published.
2. Adjust the default feature set on AArch64.
3. Work around a build problem with musl on AArch64.
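A minimal sketch of the architecture handling this implies (the
variable names and the trimmed feature set are assumptions, not
necessarily what the script ends up doing):

    # Hypothetical sketch: adjust defaults based on the host architecture.
    arch="$(uname -m)"
    if [ "$arch" = "aarch64" ]; then
        # Assumed: build the dev container locally until a public
        # AArch64 image exists, and trim the default cargo features.
        build_container_locally=true
        cargo_features="mmio"
    fi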
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
And use a bumped up container image for that.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The unit tests require some specific Linux capabilities as well as
access to the /dev/kvm device. This commit makes sure we enable only
what's necessary instead of blindly enabling full privileges with the
--privileged option.
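For illustration, the invocation looks something like the following
sketch (the exact capability list and image name are assumptions; the
point is granting only what the tests need):

    # Hypothetical sketch: expose /dev/kvm and add specific capabilities
    # instead of passing --privileged.
    docker run \
        --device=/dev/kvm \
        --cap-add=NET_ADMIN \
        --cap-add=SYS_ADMIN \
        cloud-hypervisor-dev cargo test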
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We need the host IPC namespace for sharing eventfds with KVM, and the
host network namespace for VFIO.
We also enforce the no-seccomp setting on the container, to overcome
any potential filtering set by our container's Ubuntu base.
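In docker run terms that maps to something like the following sketch
(the image name is a placeholder):

    # Share the host IPC namespace (eventfds for KVM), use host
    # networking (VFIO) and disable seccomp filtering.
    docker run \
        --ipc=host \
        --net=host \
        --security-opt seccomp=unconfined \
        cloud-hypervisor-dev ...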
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
All our tests must be run as root, and thus the build directory is
owned by root after we run any of them.
Start another container to fix up all permissions whenever we're done
with our tests.
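Conceptually, the cleanup step looks like this hypothetical sketch
(paths and image name are placeholders):

    # Hypothetical sketch: hand the build tree back to the invoking
    # host user after a privileged test run. $(id -u) and $(id -g)
    # expand on the host, so the chown targets the host user's IDs.
    docker run \
        --volume "$PWD:/cloud-hypervisor" \
        cloud-hypervisor-dev \
        chown -R "$(id -u):$(id -g)" /cloud-hypervisor/build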
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
By default we run as root inside the container, which means all the
build artifacts end up owned by root. That prevents us from properly
cleaning our build as an unprivileged host user.
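One way to avoid that, sketched here under the assumption that the fix
maps the invoking host user into the container:

    # Hypothetical sketch: run as the calling host user so artifacts
    # in the bind-mounted tree stay owned by that user.
    docker run \
        --user "$(id -u):$(id -g)" \
        --volume "$PWD:/cloud-hypervisor" \
        cloud-hypervisor-dev cargo build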
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
When running the docker container no interactivity is needed, so don't
pass "-ti" to "docker run".
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
To mitigate Azure's slow disk I/O, we mount /tmp on tmpfs.
This reproduces our CI environment, as set up by the Jenkinsfile.
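A sketch of the corresponding docker flag (the image name is a
placeholder):

    # Back the container's /tmp with tmpfs so test I/O stays in memory
    # rather than hitting Azure's slow disks.
    docker run --tmpfs /tmp cloud-hypervisor-dev ...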
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
This makes it possible, among other things, to use the development CLI
to run specific integration tests. For example, to run only the
memory_overhead integration test:
./scripts/dev_cli.sh tests --integration -- memory_overhead
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The script is a development tool that runs all commands in a dedicated
container. This allows for containerized, isolated and reproducible
builds and CI runs.
The script supports the following commands:
* build: Build Cloud Hypervisor binaries (debug and release)
* build-container: Build the container used by the script
* tests: Run unit, cargo and integration tests
$ ./scripts/dev_cli.sh help

Cloud Hypervisor dev_cli.sh
Usage: dev_cli.sh <command> [<command args>]

Available commands:

    build [--debug|--release] [-- [<cargo args>]]
        Build the Cloud Hypervisor binaries.
        --debug      Build the debug binaries. This is the default.
        --release    Build the release binaries.

    tests [--unit|--cargo|--integration|--all]
        Run the Cloud Hypervisor tests.
        --unit         Run the unit tests.
        --cargo        Run the cargo tests.
        --integration  Run the integration tests.
        --all          Run all tests.

    build-container [--type]
        Build the Cloud Hypervisor container.
        --dev    Build the dev container. This is the default.

    help
        Display this help message.
Fixes: #682
Fixes: #684
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>