A Virtual Machine Monitor for modern Cloud workloads.

1. What is Cloud Hypervisor?

Cloud Hypervisor is an open source Virtual Machine Monitor (VMM) that runs on top of the KVM hypervisor and the Microsoft Hypervisor (MSHV).

The project focuses on running modern cloud workloads on specific, common hardware architectures. In this context, cloud workloads are those run by customers inside a Cloud Service Provider: modern operating systems with most I/O handled by paravirtualised devices (e.g. virtio), no requirement for legacy devices, and 64-bit CPUs.

Cloud Hypervisor is implemented in Rust and is based on the Rust VMM crates.

Objectives

High Level

  • Runs on KVM or MSHV
  • Minimal emulation
  • Low latency
  • Low memory footprint
  • Low complexity
  • High performance
  • Small attack surface
  • 64-bit support only
  • CPU, memory, PCI hotplug
  • Machine to machine migration

Architectures

Cloud Hypervisor supports the x86-64 and AArch64 architectures. There are minor differences in functionality between the two architectures (see #1125).

Guest OS

Cloud Hypervisor supports 64-bit Linux and Windows 10/Windows Server 2019.

2. Getting Started

The following sections describe how to build and run Cloud Hypervisor.

Prerequisites for AArch64

  • AArch64 servers (recommended) or development boards equipped with the GICv3 interrupt controller.

Host OS

For the required KVM functionality and adequate performance, the recommended host kernel version is 5.13. The majority of the CI currently tests with kernel version 5.15.
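
To check which kernel version your host is running:

$ uname -r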

Use Pre-built Binaries

The recommended approach to getting started with Cloud Hypervisor is by using a pre-built binary. Binaries are available for the latest release. Use cloud-hypervisor-static for x86-64 or cloud-hypervisor-static-aarch64 for the AArch64 platform.
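
As a minimal sketch (assuming the standard GitHub "latest release" asset layout; substitute cloud-hypervisor-static-aarch64 on AArch64), the binary can be fetched and made executable like so:

$ wget https://github.com/cloud-hypervisor/cloud-hypervisor/releases/latest/download/cloud-hypervisor-static
$ chmod +x cloud-hypervisor-static
$ ./cloud-hypervisor-static --version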

Packages

For convenience, packages are also available targeting some popular Linux distributions. This is thanks to the Open Build Service. The OBS README explains how to enable the repository in a supported Linux distribution and install Cloud Hypervisor and accompanying packages. Please report any packaging issues in the obs-packaging repository.

Building from Source

Please see the instructions for building from source if you do not wish to use the pre-built binaries.
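
For reference, a typical from-source build follows the standard Cargo workflow (a sketch; the full build instructions cover features, static musl builds, and cross-compilation):

$ git clone https://github.com/cloud-hypervisor/cloud-hypervisor.git
$ cd cloud-hypervisor
$ cargo build --release
$ ./target/release/cloud-hypervisor --version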

Booting Linux

Cloud Hypervisor supports direct kernel boot (on x86-64 the kernel must be built with PVH support or be a bzImage) or booting via a firmware (either Rust Hypervisor Firmware or an edk2 UEFI firmware called CLOUDHV / CLOUDHV_EFI).

Binary builds of the firmware files are available for the latest release of Rust Hypervisor Firmware and our edk2 repository.

The choice of firmware depends on your guest OS choice; some experimentation may be required.

Firmware Booting

Cloud Hypervisor supports booting disk images containing all needed components to run cloud workloads, a.k.a. cloud images.

The following sample commands download an Ubuntu cloud image, convert it into a format that Cloud Hypervisor can use, and fetch a firmware to boot the image with.

$ wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
$ qemu-img convert -p -f qcow2 -O raw focal-server-cloudimg-amd64.img focal-server-cloudimg-amd64.raw
$ wget https://github.com/cloud-hypervisor/rust-hypervisor-firmware/releases/download/0.4.2/hypervisor-fw

The Ubuntu cloud images do not ship with a default password, so it is necessary to use a cloud-init disk image to customise the image on the first boot. A basic cloud-init image is generated by this script, which seeds the image with a default username/password of cloud/cloud123. It is only necessary to add this disk image on the first boot. The script also assigns a default IP address using the test_data/cloud-init/ubuntu/local/network-config details when the --net "mac=12:34:56:78:90:ab,tap=" option is used; the guest interface matching that MAC address is then configured as per the network-config details.

$ sudo setcap cap_net_admin+ep ./cloud-hypervisor
$ ./create-cloud-init.sh
$ ./cloud-hypervisor \
	--kernel ./hypervisor-fw \
	--disk path=focal-server-cloudimg-amd64.raw path=/tmp/ubuntu-cloudinit.img \
	--cpus boot=4 \
	--memory size=1024M \
	--net "tap=,mac=,ip=,mask="

If access to the firmware messages or interaction with the boot loader (e.g. GRUB) is required then it is necessary to switch to the serial console instead of virtio-console.

$ ./cloud-hypervisor \
	--kernel ./hypervisor-fw \
	--disk path=focal-server-cloudimg-amd64.raw path=/tmp/ubuntu-cloudinit.img \
	--cpus boot=4 \
	--memory size=1024M \
	--net "tap=,mac=,ip=,mask=" \
	--serial tty \
	--console off

Custom Kernel and Disk Image

Building your Kernel

Cloud Hypervisor also supports direct kernel boot. For x86-64, a vmlinux ELF kernel (compiled with PVH support) or a regular bzImage are supported. To support development there is a custom branch; however, provided the required options are enabled, any recent kernel will suffice.

To build the kernel:

# Clone the Cloud Hypervisor Linux branch
$ git clone --depth 1 https://github.com/cloud-hypervisor/linux.git -b ch-6.2 linux-cloud-hypervisor
$ pushd linux-cloud-hypervisor
# Use the x86-64 cloud-hypervisor kernel config to build your kernel for x86-64
$ wget https://raw.githubusercontent.com/cloud-hypervisor/cloud-hypervisor/main/resources/linux-config-x86_64
# Use the AArch64 cloud-hypervisor kernel config to build your kernel for AArch64
$ wget https://raw.githubusercontent.com/cloud-hypervisor/cloud-hypervisor/main/resources/linux-config-aarch64
$ cp linux-config-x86_64 .config  # x86-64
$ cp linux-config-aarch64 .config # AArch64
# Do native build of the x86-64 kernel
$ KCFLAGS="-Wa,-mx86-used-note=no" make bzImage -j `nproc`
# Do native build of the AArch64 kernel
$ make -j `nproc`
$ popd

For x86-64, the vmlinux kernel image will then be located at linux-cloud-hypervisor/arch/x86/boot/compressed/vmlinux.bin. For AArch64, the Image kernel image will then be located at linux-cloud-hypervisor/arch/arm64/boot/Image.

Disk image

For the disk image the same Ubuntu image as before can be used. This contains an ext4 root filesystem.

$ wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img # x86-64
$ wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-arm64.img # AArch64
$ qemu-img convert -p -f qcow2 -O raw focal-server-cloudimg-amd64.img focal-server-cloudimg-amd64.raw # x86-64
$ qemu-img convert -p -f qcow2 -O raw focal-server-cloudimg-arm64.img focal-server-cloudimg-arm64.raw # AArch64

Booting the guest VM

These sample commands boot the disk image using the custom kernel whilst also supplying the desired kernel command line.

  • x86-64
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor
$ ./create-cloud-init.sh
$ ./cloud-hypervisor \
	--kernel ./linux-cloud-hypervisor/arch/x86/boot/compressed/vmlinux.bin \
	--disk path=focal-server-cloudimg-amd64.raw path=/tmp/ubuntu-cloudinit.img \
	--cmdline "console=hvc0 root=/dev/vda1 rw" \
	--cpus boot=4 \
	--memory size=1024M \
	--net "tap=,mac=,ip=,mask="
  • AArch64
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor
$ ./create-cloud-init.sh
$ ./cloud-hypervisor \
	--kernel ./linux-cloud-hypervisor/arch/arm64/boot/Image \
	--disk path=focal-server-cloudimg-arm64.raw path=/tmp/ubuntu-cloudinit.img \
	--cmdline "console=hvc0 root=/dev/vda1 rw" \
	--cpus boot=4 \
	--memory size=1024M \
	--net "tap=,mac=,ip=,mask="

If earlier kernel messages are required the serial console should be used instead of virtio-console.

  • x86-64
$ ./cloud-hypervisor \
	--kernel ./linux-cloud-hypervisor/arch/x86/boot/compressed/vmlinux.bin \
	--console off \
	--serial tty \
	--disk path=focal-server-cloudimg-amd64.raw \
	--cmdline "console=ttyS0 root=/dev/vda1 rw" \
	--cpus boot=4 \
	--memory size=1024M \
	--net "tap=,mac=,ip=,mask="
  • AArch64
$ ./cloud-hypervisor \
	--kernel ./linux-cloud-hypervisor/arch/arm64/boot/Image \
	--console off \
	--serial tty \
	--disk path=focal-server-cloudimg-arm64.raw \
	--cmdline "console=ttyAMA0 root=/dev/vda1 rw" \
	--cpus boot=4 \
	--memory size=1024M \
	--net "tap=,mac=,ip=,mask="

3. Status

Cloud Hypervisor is under active development. The following stability guarantees are currently made:

  • The API (including command line options) will not be removed or changed in a breaking way without a minimum of two major releases' notice. Where possible, warnings will be given about the use of deprecated functionality, and the deprecations will be documented in the release notes.

  • Point releases will be made between individual releases where there are substantial bug fixes or security issues that need to be fixed. These point releases will only include bug fixes.

Currently the following items are not guaranteed across updates:

  • Snapshot/restore is not supported across different versions
  • Live migration is not supported across different versions
  • The following features are considered experimental and may change substantially between releases: TDX, vfio-user, vDPA.

Further details can be found in the release documentation.

As of 2023-01-03, the following cloud images are supported:

Direct kernel boot to userspace should work with a rootfs from most distributions, although you may need to enable exotic filesystem types in the reference kernel configuration (e.g. XFS or btrfs).
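
For example, assuming the kernel source tree from the "Building your Kernel" section above, such filesystem types can be enabled with the kernel's scripts/config helper before rebuilding (a sketch; the option names are standard upstream kernel symbols):

$ cd linux-cloud-hypervisor
$ ./scripts/config --enable CONFIG_XFS_FS
$ ./scripts/config --enable CONFIG_BTRFS_FS
$ make olddefconfig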

Hot Plug

Cloud Hypervisor supports hotplug of CPUs, passthrough devices (VFIO), virtio-{net,block,pmem,fs,vsock} and memory resizing. This document details how to add devices to a running VM.
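
As an illustrative sketch (assuming the VM was started with an API socket, e.g. --api-socket /tmp/cloud-hypervisor.sock), runtime changes such as resizing CPUs or adding a disk are driven through the ch-remote tool:

$ ./ch-remote --api-socket /tmp/cloud-hypervisor.sock resize --cpus 8
$ ./ch-remote --api-socket /tmp/cloud-hypervisor.sock add-disk path=/tmp/extra-disk.raw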

Device Model

Details of the device model can be found in this documentation.

Roadmap

The project roadmap is tracked through a GitHub project.

4. Relationship with Rust VMM Project

In order to satisfy the design goal of having a high-performance, security-focused hypervisor the decision was made to use the Rust programming language. The language's strong focus on memory and thread safety makes it an ideal candidate for implementing VMMs.

Instead of implementing the VMM components from scratch, Cloud Hypervisor imports the Rust VMM crates, sharing code and architecture with other VMMs such as Amazon's Firecracker and Google's crosvm.

Cloud Hypervisor embraces the Rust VMM project's goal, which is to be able to share and re-use as many virtualization crates as possible.

Differences with Firecracker and crosvm

A large part of the Cloud Hypervisor code is based on either the Firecracker or the crosvm project's implementations. Both of these are VMMs written in Rust with a focus on safety and security, like Cloud Hypervisor.

The goal of the Cloud Hypervisor project differs from that of the aforementioned projects: it aims to be a general-purpose VMM for cloud workloads, not one limited to container/serverless or client workloads.

The Cloud Hypervisor community thanks the communities of both the Firecracker and crosvm projects for their excellent work.

5. Community

The Cloud Hypervisor project follows the governance and community guidelines described in the Community repository.

Contribute

The project strongly believes in building a global, diverse, and collaborative community around Cloud Hypervisor. Anyone who is interested in contributing to the project is welcome to participate.

Contributing to an open source project like Cloud Hypervisor covers a lot more than just sending code. Testing, documentation, pull request reviews, bug reports, feature requests, project improvement suggestions, etc., are all equally welcome means of contribution. See the CONTRIBUTING document for more details.

Slack

Get an invite to our Slack channel, join us on Slack, and participate in our community activities.

Mailing list

Please report bugs using the GitHub issue tracker; for broader community discussions you may use our mailing list.

Security issues

Please contact the maintainers listed in the MAINTAINERS.md file with security issues.