cloud-hypervisor/devices/src/lib.rs
Yuanchu Xie 5f18ac3bc0 devices: Add pvmemcontrol device
Pvmemcontrol provides a way for the guest to control its physical memory
properties and enables optimizations and security features. For
example, the guest can tell the host which parts of a hugepage may be
unbacked, or which sensitive data must not be swapped out.

Pvmemcontrol allows the guest to manipulate its gPTE entries in the SLAT,
as well as other properties of the host memory mapping that backs the
guest memory.
This is achieved by using the KVM_CAP_SYNC_MMU capability. When this
capability is available, the changes in the backing of the memory region
on the host are automatically reflected into the guest. For example, an
mmap() or madvise() that affects the region will be made visible
immediately.
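
As a rough illustration (the function below is a hypothetical sketch,
not part of this change), a VMM could release the backing of a range
with madvise() and the guest would observe the effect on its next
access:

  // Hypothetical sketch: with KVM_CAP_SYNC_MMU, a change to the host
  // mapping that backs guest RAM becomes visible to the guest without
  // any explicit VMM-to-guest synchronization.
  unsafe fn drop_backing_pages(host_addr: *mut libc::c_void, len: usize) -> std::io::Result<()> {
      // For anonymous memory the guest reads the affected pages back as
      // zero-filled; no vCPU kick or exit orchestration is needed.
      if libc::madvise(host_addr, len, libc::MADV_DONTNEED) != 0 {
          return Err(std::io::Error::last_os_error());
      }
      Ok(())
  }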

The implementation has two components: the guest Linux driver and the
Virtual Machine Monitor (VMM) device. A guest-allocated shared buffer
is negotiated per CPU through a few PCI MMIO registers, and the VMM
device assigns a unique command to each per-cpu buffer. The guest
writes its pvmemcontrol request into its per-cpu buffer, then writes
the corresponding command into the command register, calling into the
VMM device to perform the pvmemcontrol request.
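
A sketch of the guest-side sequence follows; the request layout, field
names, and types are illustrative, not the device's actual ABI:

  // Hypothetical request layout for the per-cpu shared buffer.
  #[repr(C)]
  struct PvmemcontrolReq {
      func_code: u64, // operation to perform (illustrative)
      addr: u64,      // guest-physical start of the affected range
      length: u64,    // size of the range in bytes
  }

  // The guest fills its per-cpu buffer, then writes the command the
  // VMM assigned to that buffer into the command register. The MMIO
  // write traps to the VMM, which services the request and writes the
  // result back into the buffer before the access completes, so the
  // call is synchronous from the vCPU's point of view.
  fn submit(buf: &mut PvmemcontrolReq, cmd_reg: *mut u32, command: u32, req: PvmemcontrolReq) {
      *buf = req;
      // Volatile store so the compiler neither elides nor reorders the
      // MMIO write that triggers the VMM.
      unsafe { std::ptr::write_volatile(cmd_reg, command) };
  }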

The synchronous per-cpu shared buffer approach avoids the kick and busy
waiting that the guest would have to do with a virtio virtqueue
transport.

The Cloud Hypervisor component can be enabled with --pvmemcontrol.

Co-developed-by: Stanko Novakovic <stanko@google.com>
Co-developed-by: Pasha Tatashin <tatashin@google.com>
Signed-off-by: Yuanchu Xie <yuanchu@google.com>
2024-08-05 22:41:56 +00:00

99 lines
3.1 KiB
Rust

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
//
// Portions Copyright 2017 The Chromium OS Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE-BSD-3-Clause file.
//! Emulates virtual and hardware devices.
#[macro_use]
extern crate bitflags;
#[macro_use]
extern crate event_monitor;
#[macro_use]
extern crate log;
pub mod acpi;
#[cfg(target_arch = "x86_64")]
pub mod debug_console;
#[cfg(target_arch = "aarch64")]
pub mod gic;
pub mod interrupt_controller;
#[cfg(target_arch = "x86_64")]
pub mod ioapic;
pub mod legacy;
#[cfg(feature = "pvmemcontrol")]
pub mod pvmemcontrol;
pub mod pvpanic;
pub mod tpm;
pub use self::acpi::{AcpiGedDevice, AcpiPmTimerDevice, AcpiShutdownDevice};
pub use self::pvpanic::{PvPanicDevice, PVPANIC_DEVICE_MMIO_SIZE};
bitflags! {
    pub struct AcpiNotificationFlags: u8 {
        const NO_DEVICES_CHANGED = 0;
        const CPU_DEVICES_CHANGED = 0b1;
        const MEMORY_DEVICES_CHANGED = 0b10;
        const PCI_DEVICES_CHANGED = 0b100;
        const POWER_BUTTON_CHANGED = 0b1000;
    }
}
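// Illustrative usage (an added sketch, not part of the original file):
// flag values compose with `|` and can be tested with `contains`, e.g.
//     let pending = AcpiNotificationFlags::CPU_DEVICES_CHANGED
//         | AcpiNotificationFlags::MEMORY_DEVICES_CHANGED;
//     assert!(pending.contains(AcpiNotificationFlags::CPU_DEVICES_CHANGED));
//     assert!(!pending.contains(AcpiNotificationFlags::PCI_DEVICES_CHANGED));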
// Generates a public helper that decodes a `$data_type` from a byte slice
// using the given endianness conversion (e.g. `from_le_bytes`).
#[cfg(target_arch = "aarch64")]
macro_rules! generate_read_fn {
    ($fn_name: ident, $data_type: ty, $byte_type: ty, $type_size: expr, $endian_type: ident) => {
        pub fn $fn_name(input: &[$byte_type]) -> $data_type {
            assert!($type_size == std::mem::size_of::<$data_type>());
            let mut array = [0u8; $type_size];
            for (byte, read) in array.iter_mut().zip(input.iter().cloned()) {
                *byte = read as u8;
            }
            <$data_type>::$endian_type(array)
        }
    };
}
// Generates a public helper that encodes a `$data_type` into a byte buffer
// using the given endianness conversion (e.g. `to_le_bytes`).
#[cfg(target_arch = "aarch64")]
macro_rules! generate_write_fn {
    ($fn_name: ident, $data_type: ty, $byte_type: ty, $endian_type: ident) => {
        pub fn $fn_name(buf: &mut [$byte_type], n: $data_type) {
            for (byte, read) in buf
                .iter_mut()
                .zip(<$data_type>::$endian_type(n).iter().cloned())
            {
                *byte = read as $byte_type;
            }
        }
    };
}
#[cfg(target_arch = "aarch64")]
generate_read_fn!(read_le_u16, u16, u8, 2, from_le_bytes);
#[cfg(target_arch = "aarch64")]
generate_read_fn!(read_le_u32, u32, u8, 4, from_le_bytes);
#[cfg(target_arch = "aarch64")]
generate_read_fn!(read_le_u64, u64, u8, 8, from_le_bytes);
#[cfg(target_arch = "aarch64")]
generate_read_fn!(read_le_i32, i32, i8, 4, from_le_bytes);
#[cfg(target_arch = "aarch64")]
generate_read_fn!(read_be_u16, u16, u8, 2, from_be_bytes);
#[cfg(target_arch = "aarch64")]
generate_read_fn!(read_be_u32, u32, u8, 4, from_be_bytes);
#[cfg(target_arch = "aarch64")]
generate_write_fn!(write_le_u16, u16, u8, to_le_bytes);
#[cfg(target_arch = "aarch64")]
generate_write_fn!(write_le_u32, u32, u8, to_le_bytes);
#[cfg(target_arch = "aarch64")]
generate_write_fn!(write_le_u64, u64, u8, to_le_bytes);
#[cfg(target_arch = "aarch64")]
generate_write_fn!(write_le_i32, i32, i8, to_le_bytes);
#[cfg(target_arch = "aarch64")]
generate_write_fn!(write_be_u16, u16, u8, to_be_bytes);
#[cfg(target_arch = "aarch64")]
generate_write_fn!(write_be_u32, u32, u8, to_be_bytes);
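// Illustrative check of the generated helpers (an added sketch, not part of
// the original file): exercise a little-endian read and a big-endian write.
#[cfg(all(test, target_arch = "aarch64"))]
mod endian_tests {
    use super::*;

    #[test]
    fn endian_helpers() {
        // Little-endian decode: least significant byte first.
        assert_eq!(read_le_u16(&[0x34, 0x12]), 0x1234);

        // Big-endian encode: most significant byte first.
        let mut out = [0u8; 4];
        write_be_u32(&mut out, 0xDEAD_BEEF);
        assert_eq!(out, [0xDE, 0xAD, 0xBE, 0xEF]);
    }
}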