cloud-hypervisor/release-notes.md

# v0.2.0

This release has been tracked through the 0.2.0 project.

Highlights for cloud-hypervisor version 0.2.0 include:

### Network device offloading

As part of our general effort to offload paravirtualized I/O to external processes, we added support for vhost-user-net backends. This enables cloud-hypervisor users to plug a vhost-user based networking device (e.g. DPDK) into the VMM as their virtio network backend.

### Minimal hardware-reduced ACPI

In order to properly implement guest reset and shutdown, we implemented a minimal version of the hardware-reduced ACPI specification. Together with a tiny I/O port based ACPI device, this allows cloud-hypervisor guests to cleanly reboot and shutdown.

The ACPI implementation is a cloud-hypervisor build time option that is enabled by default.
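
The sketch below is only an illustration of the general idea, not cloud-hypervisor's actual device: the struct name, port value encodings and constructor are made up for this example. It shows a VMM-side handler that turns a guest write on the emulated ACPI control port into a reset or power-off event, using an `EventFd` from the vmm-sys-util crate that the VMM's main loop can wait on.

```rust
use vmm_sys_util::eventfd::EventFd;

/// Illustrative port-mapped ACPI shutdown/reset device (names and value
/// encodings are assumptions for the sake of the example).
struct AcpiShutdownDevice {
    exit_evt: EventFd,  // signalled to power the VM off
    reset_evt: EventFd, // signalled to reboot the VM
}

impl AcpiShutdownDevice {
    fn new(exit_evt: EventFd, reset_evt: EventFd) -> Self {
        AcpiShutdownDevice { exit_evt, reset_evt }
    }

    /// Called by the VMM's port I/O dispatch when the guest writes to the
    /// device's I/O port (e.g. from its ACPI _S5 or reset methods).
    fn io_write(&self, data: &[u8]) {
        match data.first() {
            // Illustrative encodings: one value requests a reboot, another
            // a clean power-off.
            Some(&1) => { let _ = self.reset_evt.write(1); }
            Some(&5) => { let _ = self.exit_evt.write(1); }
            _ => {}
        }
    }
}
```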

### Debug I/O port

Based on the Firecracker idea of using a dedicated I/O port to measure guest boot times, we added support for logging guest events through the 0x80 PC debug port. This allows, among other things, for granular guest boot time measurements. See our debug port documentation for more details.
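
As an illustration of the mechanism (this is not code from cloud-hypervisor, and the event codes are arbitrary), a guest can emit a boot milestone from privileged x86-64 code, e.g. early kernel or firmware, with a single `out` instruction:

```rust
use std::arch::asm;

/// Write one byte to the classic PC debug port (0x80).
/// Port I/O is a privileged instruction: this is meant for ring 0 guest code
/// (early kernel or firmware), not for a regular userspace process.
fn debug_port_write(event: u8) {
    unsafe {
        asm!("out 0x80, al", in("al") event, options(nomem, nostack, preserves_flags));
    }
    // e.g. debug_port_write(0x01) when entering the boot stub, and
    //      debug_port_write(0x02) just before jumping to the kernel,
    // letting the VMM timestamp both events on the host side.
}
```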

### Improved direct device assignment

We fixed a major performance issue with our initial VFIO implementation: when enabling VT-d through the KVM and VFIO APIs, our guest memory reads and writes were (in many cases) not cached. After correctly tagging the guest memory from cloud-hypervisor, we are now able to reach the expected performance from directly assigned devices.

### Improved shared filesystem

We added a shared memory region with DAX support to our virtio-fs shared file system. This provides better shared filesystem I/O performance with a smaller guest memory footprint.

### Ubuntu bionic based CI

Thanks to our simple KVM firmware improvements, we are now able to boot Ubuntu bionic images. We added those to our CI pipeline.

# v0.1.0

This release has been tracked through the 0.1.0 project.

Highlights for cloud-hypervisor version 0.1.0 include:

### Shared filesystem

We added support for the virtio-fs shared file system, allowing for an efficient and reliable way of sharing a filesystem between the host and the cloud-hypervisor guest.

See our filesystem sharing documentation for more details on how to use virtio-fs with cloud-hypervisor.

### Initial direct device assignment support

VFIO (Virtual Function I/O) is a kernel framework that exposes direct device access to userspace. cloud-hypervisor uses VFIO to directly assign host physical devices into its guest.

See our VFIO documentation for more detail on how to directly assign host devices to cloud-hypervisor guests.
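
Before any VFIO-based VMM can claim a device, the host device has to be handed over to the vfio-pci driver. The sketch below shows the usual sysfs steps in Rust; it assumes root privileges and a loaded vfio-pci module, and the PCI address and vendor/device IDs are placeholders only:

```rust
use std::fs;
use std::io;

/// Unbind a PCI device from its current host driver and let vfio-pci claim
/// devices with the given vendor/device ID (hex, e.g. "8086 1533").
fn bind_to_vfio(bdf: &str, vendor_device: &str) -> io::Result<()> {
    // Detach the device from whatever host driver currently owns it, if any.
    let unbind = format!("/sys/bus/pci/devices/{}/driver/unbind", bdf);
    if fs::metadata(&unbind).is_ok() {
        fs::write(&unbind, bdf)?;
    }
    // Tell vfio-pci it may bind devices with this vendor/device ID.
    fs::write("/sys/bus/pci/drivers/vfio-pci/new_id", vendor_device)?;
    Ok(())
}

fn main() -> io::Result<()> {
    // Placeholder values for illustration only.
    bind_to_vfio("0000:01:00.0", "8086 1533")
}
```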

### Userspace IOAPIC

cloud-hypervisor supports a so-called split IRQ chip by emulating the IOAPIC in userspace. Moving that part of the IRQ chip implementation from kernel space to user space means the IRQ chip emulation no longer always runs in a fully privileged mode.
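
For context on the KVM side (this is a generic illustration of the capability involved, not cloud-hypervisor's initialization code), a VMM typically requests a split IRQ chip through KVM_CAP_SPLIT_IRQCHIP, here via the kvm-ioctls and kvm-bindings crates:

```rust
use kvm_bindings::{kvm_enable_cap, KVM_CAP_SPLIT_IRQCHIP};
use kvm_ioctls::Kvm;

fn main() {
    let kvm = Kvm::new().expect("failed to open /dev/kvm");
    let vm = kvm.create_vm().expect("failed to create a VM");

    // Ask KVM to keep the local APICs in the kernel but leave IOAPIC (and
    // PIC) emulation to userspace. This must be done before creating vCPUs;
    // args[0] is the number of IOAPIC pins exposed by the userspace IOAPIC,
    // 24 being the conventional value.
    let mut cap = kvm_enable_cap::default();
    cap.cap = KVM_CAP_SPLIT_IRQCHIP;
    cap.args[0] = 24;
    vm.enable_cap(&cap)
        .expect("KVM_CAP_SPLIT_IRQCHIP not available");

    // From this point on, the VMM emulates the IOAPIC itself and programs
    // interrupt routing through KVM's GSI routing table.
}
```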

### Virtual persistent memory

The virtio-pmem implementation emulates a virtual persistent memory device that cloud-hypervisor can, for example, boot from. Booting from a virtio-pmem device bypasses the guest page cache and reduces the guest memory footprint.

### Linux kernel bzImage

The cloud-hypervisor Linux kernel loader now supports direct kernel boot from bzImage kernel images, which is usually the format that Linux distributions use to ship their kernels. This allows, for example, booting directly from the host distribution's kernel image.

### Console over virtio

cloud-hypervisor now exposes a virtio-console device to the guest. Although using this device as a guest console may miss some early boot messages, it can reduce the guest boot time and provides a complete console implementation.

The virtio-console device is enabled by default for the guest console. Switching back to the legacy serial port is done by selecting `--serial tty --console off` from the command line.

### Unit testing

We now run all unit tests from all our crates directly from our CI.

### Integration tests parallelization

The CI cycle run time has been significantly reduced by refactoring our integration tests, allowing them all to be run in parallel.