
# v0.5.0

This release has been tracked through the 0.5.0 project.

Highlights for cloud-hypervisor version 0.5.0 include:

### Virtual Machine Dynamic Resizing

With 0.4.0 we added support for CPU hot plug, and 0.5.0 adds CPU hot unplug and memory hot plug as well. This allows Cloud Hypervisor guests to be dynamically resized, which is needed for e.g. Kubernetes-related use cases. The memory hot plug implementation is based on the same framework as the CPU hot plug/unplug one, i.e. hardware-reduced ACPI notifications to the guest.
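
For illustration, here is a hedged sketch of resizing a running guest through the VMM's HTTP API over its Unix socket. The socket path is just an example, and the vm.resize endpoint and desired_vcpus field are assumptions based on the project's REST API:

```sh
# Assumes the VMM was started with: cloud-hypervisor --api-socket /tmp/cloud-hypervisor.sock ...
# Ask the VMM to resize the guest to 4 vCPUs.
curl --unix-socket /tmp/cloud-hypervisor.sock -i \
     -X PUT 'http://localhost/api/v1/vm.resize' \
     -H 'Content-Type: application/json' \
     -d '{"desired_vcpus": 4}'
```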

Next on our VM resizing roadmap is PCI device hotplug.

### Multi-Queue, Multi-Threaded Paravirtualization

We enhanced our virtio networking and block support by having both devices use multiple I/O queues handled by multiple threads. This improves the throughput of our default paravirtualized networking and block devices.
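
As a hedged illustration, multiple queues can be requested per device on the command line; the num_queues and queue_size parameters are assumptions based on the option syntax of this era, and the paths are placeholders:

```sh
# Hypothetical invocation: 4 I/O queues for both the disk and the network device.
cloud-hypervisor \
    --kernel vmlinux \
    --cmdline "console=hvc0 root=/dev/vda1" \
    --disk path=focal.img,num_queues=4,queue_size=256 \
    --net tap=,mac=,num_queues=4
```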

### New Interrupt Management Framework

We improved our interrupt management implementation by introducing an Interrupt Manager framework, based on the ongoing rust-vmm vm-device crate discussions. This move made the code significantly cleaner, and allowed us to remove several KVM-related dependencies from crates like the PCI and virtio ones.

### Development Tools

In order to provide a better developer experience, we worked on improving our build, development and testing tools. Somewhat similar to Firecracker's excellent devtool, we now provide a dev_cli script.

With this new tool, our users and contributors will be able to build and test Cloud Hypervisor through a containerized environment.
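
For example, a typical containerized workflow might look like this; the subcommand names are assumptions based on how the script is commonly invoked:

```sh
# Build cloud-hypervisor inside the project's container image.
./scripts/dev_cli.sh build
# Run the unit and integration test suites in the same environment.
./scripts/dev_cli.sh tests --unit
./scripts/dev_cli.sh tests --integration
```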

### Kata Containers Integration

We spent significant time and effort debugging and fixing our integration with the Kata Containers project. Cloud Hypervisor is now a fully supported Kata Containers hypervisor, and is integrated into the project's CI.

### Contributors

Many thanks to everyone who contributed to the 0.5.0 release:

# v0.4.0

This release has been tracked through the 0.4.0 project.

Highlights for cloud-hypervisor version 0.4.0 include:

### Dynamic virtual CPUs addition

As a way to vertically scale Cloud Hypervisor guests, we now support dynamically adding virtual CPUs to the guests, a mechanism also known as CPU hot plug. Through hardware-reduced ACPI notifications, Cloud Hypervisor can now add CPUs to an already running guest, and the high level operations for that process are documented here.
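
Once the guest kernel has processed the ACPI notification, the new CPU still has to be brought online from inside the guest. This is standard Linux sysfs behavior (the CPU number below is just an example):

```sh
# Inside the guest: a hot plugged CPU initially shows up offline.
cat /sys/devices/system/cpu/cpu1/online    # prints 0 until the CPU is onlined
# Bring the newly added CPU online.
echo 1 > /sys/devices/system/cpu/cpu1/online
```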

During the next release cycles we are planning to extend the Cloud Hypervisor hot plug framework to other resources, namely PCI devices and memory.

### Programmatic firmware tables generation

As part of the CPU hot plug feature enablement, and as a requirement for hot plugging other resources like devices or RAM, we added support for programmatically generating the needed ACPI tables. Through a dedicated acpi-tables crate, we now have a flexible and clean way of generating those tables based on the VMM device model and topology.

### Filesystem and block devices vhost-user backends

Our objective of moving all Cloud Hypervisor paravirtualized I/O to a vhost-user based framework is getting closer, as we've added Rust based implementations for the vhost-user-blk and virtio-fs backends. Together with the vhost-user-net backend that came with the 0.3.0 release, this will form the default Cloud Hypervisor I/O architecture.

### Guest pause and resume

As an initial requirement for enabling live migration, we added support for pausing and resuming all VMM components. As an intermediate step towards live migration, the upcoming guest snapshotting feature will be based on these pause and resume capabilities.
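
As a hedged sketch, pausing and resuming a guest can be driven through the HTTP API; the socket path is an example and the endpoint names are assumptions:

```sh
# Pause all VMM components backing the guest, then resume them.
curl --unix-socket /tmp/cloud-hypervisor.sock -i -X PUT 'http://localhost/api/v1/vm.pause'
curl --unix-socket /tmp/cloud-hypervisor.sock -i -X PUT 'http://localhost/api/v1/vm.resume'
```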

### Userspace IOAPIC by default

As a way to simplify our device manager implementation, but also in order to stay away from privileged rings as much as possible, any device that relies on pin-based interrupts now uses the userspace IOAPIC implementation by default.

### PCI BAR reprogramming

In order to allow for a more flexible device model, and also to support guests that may want to move PCI devices, we added support for PCI device BAR reprogramming.

### New cloud-hypervisor organization

As we wanted to be more flexible in how we manage the Cloud Hypervisor project, we decided to move it under a dedicated GitHub organization. Together with the cloud-hypervisor project, this new organization also hosts our kernel and firmware repositories. We may also use it to host any rust-vmm crates that we'd need to temporarily fork. Thanks to GitHub's seamless repository redirections, the move is completely transparent to all Cloud Hypervisor contributors, users and followers.

### Contributors

Many thanks to everyone who contributed to the 0.4.0 release:

# v0.3.0

This release has been tracked through the 0.3.0 project.

Highlights for cloud-hypervisor version 0.3.0 include:

### Block device offloading

We continue to work on offloading paravirtualized I/O to external processes, and we added support for vhost-user-blk backends. This enables cloud-hypervisor users to plug a vhost-user based block device (e.g. SPDK) into the VMM as their paravirtualized storage backend.

### Network device backend

The previous release provided support for vhost-user-net backends. Now we also provide a TAP based vhost-user-net backend, implemented in Rust. Together with the vhost-user-net device implementation, this will eventually become the Cloud Hypervisor default paravirtualized networking architecture.

### Virtual sockets

In order to more efficiently and securely communicate between host and guest, we added a hybrid implementation of the VSOCK socket address family over virtio. Credits go to the Firecracker project, as our implementation is a copy of theirs.
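
In the hybrid model, the guest-facing vsock device is backed by a Unix socket on the host. Here is a hedged sketch of connecting from the host side; the socket path and port are examples, and the CONNECT handshake follows Firecracker's hybrid vsock protocol, which this implementation copies:

```sh
# On the host: reach the guest's vsock port 52 through the backing Unix socket.
socat - UNIX-CONNECT:/tmp/cloud-hypervisor.vsock
# Then type the hybrid handshake on the resulting stream before sending data:
#   CONNECT 52
```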

### HTTP based API

In anticipation of the need to support asynchronous operations on Cloud Hypervisor guests (e.g. resource hotplug and guest migration), we added an HTTP based API to the VMM. The API will be more extensively documented during the next release cycle.
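
A hedged sketch of talking to that API; the socket path is an example and the vm.info endpoint is an assumption:

```sh
# Start the VMM with its API served over a local Unix socket.
cloud-hypervisor --api-socket /tmp/cloud-hypervisor.sock &
# Query the VM state through the HTTP API.
curl --unix-socket /tmp/cloud-hypervisor.sock 'http://localhost/api/v1/vm.info'
```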

### Memory mapped virtio transport

In order to support potential PCI-free use cases, we added support for the virtio MMIO transport layer. This will allow us to support simple, minimal guest configurations that do not require a PCI bus emulation.
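
Assuming the transport is selected at build time through Cargo features (the feature name below is an assumption), a PCI-free build might look like:

```sh
# Hypothetical: build with the virtio MMIO transport instead of the default PCI one.
cargo build --release --no-default-features --features "mmio"
```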

### Paravirtualized IOMMU

As we want to improve our nested guests support, we added support for exposing a paravirtualized IOMMU device through virtio. This allows for safer support of nested virtio and directly assigned devices.

To add the IOMMU support, we had to make some CLI changes so that Cloud Hypervisor users can specify whether devices should be handled through this virtual IOMMU or not. In particular, the --disk option now expects disk paths to be prefixed with a path= string, and supports an optional iommu=[on|off] setting.
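
For example (kernel and image paths are placeholders):

```sh
# New --disk syntax: explicit path= prefix, with the disk placed behind the virtual IOMMU.
cloud-hypervisor \
    --kernel vmlinux \
    --cmdline "console=hvc0 root=/dev/vda1" \
    --disk path=focal.img,iommu=on
```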

### Ubuntu 19.10

With the latest hypervisor firmware, we can now support the latest Ubuntu 19.10 (Eoan Ermine) cloud images.

### Large memory guests

After simplifying and changing our guest address space handling, we can now support guests with a large amount of memory (more than 64GB).

# v0.2.0

This release has been tracked through the 0.2.0 project.

Highlights for cloud-hypervisor version 0.2.0 include:

### Network device offloading

As part of our general effort to offload paravirtualized I/O to external processes, we added support for vhost-user-net backends. This enables cloud-hypervisor users to plug a vhost-user based networking device (e.g. DPDK) into the VMM as their virtio network backend.

### Minimal hardware-reduced ACPI

In order to properly implement guest reset and shutdown, we implemented a minimal version of the hardware-reduced ACPI specification. Together with a tiny I/O port based ACPI device, this allows cloud-hypervisor guests to cleanly reboot and shutdown.

The ACPI implementation is a cloud-hypervisor build time option that is enabled by default.
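
Assuming the option is exposed as a default Cargo feature (the feature name acpi is an assumption on our part), it could be toggled at build time:

```sh
# Default build: the ACPI device is included.
cargo build --release
# Hypothetical: build without the default features, dropping ACPI support.
cargo build --release --no-default-features
```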

### Debug I/O port

Based on the Firecracker idea of using a dedicated I/O port to measure guest boot times, we added support for logging guest events through the 0x80 PC debug port. This allows, among other things, for granular guest boot time measurements. See our debug port documentation for more details.

### Improved direct device assignment

We fixed a major performance issue with our initial VFIO implementation: When enabling VT-d through the KVM and VFIO APIs, our guest memory writes and reads were (in many cases) not cached. After correctly tagging the guest memory from cloud-hypervisor we're now able to reach the expected performance from directly assigned devices.

### Improved shared filesystem

We added a shared memory region with DAX support to our virtio-fs shared filesystem. This provides better shared filesystem I/O performance with a smaller guest memory footprint.

### Ubuntu bionic based CI

Thanks to our simple KVM firmware improvements, we are now able to boot Ubuntu bionic images. We added those to our CI pipeline.

# v0.1.0

This release has been tracked through the 0.1.0 project.

Highlights for cloud-hypervisor version 0.1.0 include:

### Shared filesystem

We added support for the virtio-fs shared file system, allowing for an efficient and reliable way of sharing a filesystem between the host and the cloud-hypervisor guest.

See our filesystem sharing documentation for more details on how to use virtio-fs with cloud-hypervisor.
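
As a hedged illustration of the guest side (the tag name is an example, and the filesystem type string has varied across early virtio-fs revisions):

```sh
# Inside the guest: mount the directory the host exports under the tag "myfs".
mount -t virtiofs myfs /mnt
```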

### Initial direct device assignment support

VFIO (Virtual Function I/O) is a kernel framework that exposes direct device access to userspace. cloud-hypervisor uses VFIO to directly assign host physical devices into its guest.

See our VFIO documentation for more detail on how to directly assign host devices to cloud-hypervisor guests.

### Userspace IOAPIC

cloud-hypervisor supports a so-called split IRQ chip by implementing the IOAPIC in userspace. By moving part of the IRQ chip implementation from kernel space to user space, the IRQ chip emulation no longer always runs in a fully privileged mode.

### Virtual persistent memory

The virtio-pmem implementation emulates a virtual persistent memory device that cloud-hypervisor can e.g. boot from. Booting from a virtio-pmem device bypasses the guest page cache and reduces the guest memory footprint.
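
A hedged sketch of such a boot; the --pmem option syntax and the guest device name are assumptions, and paths are placeholders:

```sh
# Hypothetical: expose a raw image as a virtio-pmem device and use it as the root filesystem.
cloud-hypervisor \
    --kernel vmlinux \
    --pmem file=rootfs.img \
    --cmdline "console=hvc0 root=/dev/pmem0"
```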

### Linux kernel bzImage

The cloud-hypervisor Linux kernel loader now supports direct kernel boot from bzImage kernel images, which is usually the format that Linux distributions use to ship their kernels. For example, this allows for booting from the host distribution kernel image.
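
For instance, booting the host distribution's kernel directly might look like this; paths are placeholders and the flag names are the direct kernel boot options as we understand them:

```sh
# Hypothetical: point --kernel at the distribution bzImage instead of a custom vmlinux.
cloud-hypervisor \
    --kernel /boot/vmlinuz-$(uname -r) \
    --disk path=focal.img \
    --cmdline "console=hvc0 root=/dev/vda1"
```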

### Console over virtio

cloud-hypervisor now exposes a virtio-console device to the guest. Although using this device as a guest console can potentially lose some early boot messages, it reduces the guest boot time and provides a complete console implementation.

The virtio-console device is enabled by default for the guest console. Switching back to the legacy serial port is done by selecting --serial tty --console off from the command line.

### Unit testing

We now run all unit tests from all our crates directly from our CI.

### Integration tests parallelization

The CI cycle run time has been significantly reduced by refactoring our integration tests, allowing them all to run in parallel.