Compare commits

...

22 Commits

Author SHA1 Message Date
Purna Pavan Chandra 02ffe2c94a tests: add back test_snapshot_restore* tests, but to common_sequential
test_snapshot_restore* tests were earlier removed from common_parallel
due to the flakiness they added to the testsuite. Running them
sequentially eliminates the flakiness. Hence, add the tests back to the
testsuite, but into the common_sequential module.

Signed-off-by: Purna Pavan Chandra <paekkaladevi@linux.microsoft.com>
2024-05-09 17:06:33 +00:00
Purna Pavan Chandra 2416f06d39 tests: remove test_snapshot_restore* tests from common_parallel
test_snapshot_restore_* tests often have transient failures and add to
the overall flakiness of the integration testsuite. Hence, remove them
from common_parallel. However, these tests need to be added back to
common_sequential.

Signed-off-by: Purna Pavan Chandra <paekkaladevi@linux.microsoft.com>
2024-05-09 17:06:33 +00:00
Purna Pavan Chandra e95f71f711 tests: Add test_snapshot_restore_with_fd to integration tests
A VM is created with FDs explicitly passed to CH via the --net
parameter, then snapshotted. New net FDs are passed in turn during
restore. Boilerplate code is reused from _test_snapshot_restore().

Signed-off-by: Purna Pavan Chandra <paekkaladevi@linux.microsoft.com>
2024-05-09 17:06:33 +00:00
Purna Pavan Chandra ab3eac6797 docs: Update snapshot/restore documentation
Add a section about restoring a VM with new net FDs explicitly passed
to ch-remote via the 'net_fds' parameter.

Signed-off-by: Purna Pavan Chandra <paekkaladevi@linux.microsoft.com>
2024-05-09 17:06:33 +00:00
Purna Pavan Chandra 8b6c75b304 ch-remote: allow fds to be sent along with 'restore'
Enable the restore command to send file descriptors along with the
HTTP request. This is useful when restoring a VM with explicit FDs
passed to NetConfig(s).
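
A minimal sketch of the SCM_RIGHTS mechanism this relies on, written
against the libc crate purely for illustration; the actual plumbing in
ch-remote goes through the api_client helpers such as
simple_api_command_with_fds (visible in the diff further down).

```rust
// Illustration only: send a request body plus file descriptors over a
// Unix socket with sendmsg(2) and an SCM_RIGHTS control message.
use std::os::unix::io::RawFd;

fn send_with_fds(sock: RawFd, buf: &[u8], fds: &[RawFd]) -> std::io::Result<()> {
    let fd_bytes = std::mem::size_of_val(fds);
    unsafe {
        let mut iov = libc::iovec {
            iov_base: buf.as_ptr() as *mut libc::c_void,
            iov_len: buf.len(),
        };
        // Control buffer sized to carry the FD payload.
        let cmsg_space = libc::CMSG_SPACE(fd_bytes as u32) as usize;
        let mut cmsg_buf = vec![0u8; cmsg_space];
        let mut msg: libc::msghdr = std::mem::zeroed();
        msg.msg_iov = &mut iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cmsg_buf.as_mut_ptr() as *mut libc::c_void;
        msg.msg_controllen = cmsg_space;

        // Fill in the SCM_RIGHTS control message with the raw FDs.
        let cmsg = libc::CMSG_FIRSTHDR(&msg);
        (*cmsg).cmsg_level = libc::SOL_SOCKET;
        (*cmsg).cmsg_type = libc::SCM_RIGHTS;
        (*cmsg).cmsg_len = libc::CMSG_LEN(fd_bytes as u32) as usize;
        std::ptr::copy_nonoverlapping(
            fds.as_ptr() as *const u8,
            libc::CMSG_DATA(cmsg),
            fd_bytes,
        );

        if libc::sendmsg(sock, &msg, 0) < 0 {
            return Err(std::io::Error::last_os_error());
        }
    }
    Ok(())
}
```

The kernel duplicates the descriptors into the receiving process, which
is why the sender can close its copies once the request has been sent
(the integration test below does exactly that).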

Signed-off-by: Purna Pavan Chandra <paekkaladevi@linux.microsoft.com>
2024-05-09 17:06:33 +00:00
Purna Pavan Chandra 49476d2809 vmm: http_endpoint: Change PutHandler for VmRestore
Consume FDs passed via SCM_RIGHTS to the VmRestore API and assign them
appropriately to RestoredNetConfig's fds field.
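
As a rough illustration of the handler's job, a hypothetical sketch
(the field names `num_fds` and `fds` are assumptions drawn from these
commit messages; the authoritative definitions live in vmm/src/config.rs):

```rust
// Assumed shape of a per-device entry; see vmm/src/config.rs.
struct RestoredNetConfig {
    id: String,            // id of a net device in the snapshotted VmConfig
    num_fds: usize,        // number of FDs this device expects
    fds: Option<Vec<i32>>, // filled in from the FDs received via SCM_RIGHTS
}

// Hypothetical helper, not the actual handler: hand out the received
// descriptors to each entry in declaration order, num_fds at a time.
fn assign_restored_net_fds(
    net_fds: &mut [RestoredNetConfig],
    mut received: Vec<i32>,
) -> Result<(), String> {
    for net in net_fds.iter_mut() {
        if received.len() < net.num_fds {
            return Err(format!("not enough FDs for net device '{}'", net.id));
        }
        net.fds = Some(received.drain(..net.num_fds).collect());
    }
    if !received.is_empty() {
        return Err("more FDs received than declared".to_string());
    }
    Ok(())
}
```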

Signed-off-by: Purna Pavan Chandra <paekkaladevi@linux.microsoft.com>
2024-05-09 17:06:33 +00:00
Purna Pavan Chandra 598ebe81ae vmm: Support passing Net FDs to Restore
'NetConfig' FDs, when explicitly passed via SCM_RIGHTS during VM
creation, are marked as invalid during snapshot. See: #6332.
So, Restore should support input for the new net FDs. This patch adds a
new field 'net_fds' to 'RestoreConfig'. The FDs passed using this new
field are placed into the 'fds' field of the corresponding NetConfig.

The 'validate()' function ensures all net devices from 'VmConfig' backed
by FDs have a corresponding 'RestoredNetConfig' with a matching 'id' and
the expected number of FDs.

The unit tests feed different inputs to the parse and validate functions
to make sure parsing and error handling behave as expected.

Fixes #6286
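
A condensed sketch of the rule described above; the struct shapes here
are assumptions based on this commit message rather than the exact
definitions in vmm/src/config.rs:

```rust
// Assumed shapes. `fds` travels out-of-band via SCM_RIGHTS, so only
// the id and the expected FD count appear in the JSON body.
struct RestoredNetConfig {
    id: String,
    num_fds: usize,
    fds: Option<Vec<i32>>,
}

struct RestoreConfig {
    source_url: String,
    net_fds: Option<Vec<RestoredNetConfig>>,
}

// validate(): every FD-backed net device in the snapshotted VmConfig
// must have exactly one entry with a matching id and FD count.
fn validate_net_fds(
    fd_backed: &[(String, usize)], // (id, FD count) taken from VmConfig
    net_fds: &[RestoredNetConfig],
) -> Result<(), String> {
    for (id, count) in fd_backed {
        match net_fds.iter().find(|n| &n.id == id) {
            Some(n) if n.num_fds == *count => {}
            Some(n) => {
                return Err(format!(
                    "net device '{id}' expects {count} FDs, got {}",
                    n.num_fds
                ));
            }
            None => return Err(format!("missing net_fds entry for '{id}'")),
        }
    }
    Ok(())
}
```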

Signed-off-by: Purna Pavan Chandra <paekkaladevi@linux.microsoft.com>
Co-authored-by: Bo Chen <chen.bo@intel.com>
2024-05-09 17:06:17 +00:00
Yi Wang 4fd5070f5d ch-remote: fix help of remove-device
remove-device can remove not only VFIO devices but also any PCI device.

No functional change.

Signed-off-by: Yi Wang <foxywang@tencent.com>
2024-05-09 14:34:30 +00:00
Wei Liu 241d1d5cdb hypervisor: kvm: add missing capability requirements
The list was gathered by going through various code paths in the code
base.

No functional change intended.

Signed-off-by: Wei Liu <liuwe@microsoft.com>
2024-05-09 06:50:57 +00:00
Wei Liu c07671edb4 hypervisor: kvm: introduce a check_extension macro
That reduces code repetition.

Signed-off-by: Wei Liu <liuwe@microsoft.com>
2024-05-09 06:50:57 +00:00
Wei Liu 8093820965 hypervisor: kvm: sort the required capabilities
No functional change.

Signed-off-by: Wei Liu <liuwe@microsoft.com>
2024-05-09 06:50:57 +00:00
Wei Liu 86cf50565e hypervisor: kvm: drop the check for Cap::SignalMsi
Per the KVM API documentation, that capability is only valid with an
in-kernel irqchip that handles MSIs.

Throughout the code base, there is no call to KVM_SIGNAL_MSI.

Signed-off-by: Wei Liu <liuwe@microsoft.com>
2024-05-09 06:50:57 +00:00
Rob Bradford 95fd684ad7 pci: Remove extra whitespace line from Cargo.toml
This was preventing the Cargo.toml formatter (taplo) from correctly
ordering entries alphabetically.

Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
2024-05-08 21:46:13 +00:00
Rob Bradford ce8e76cf94 build: Add GitHub action to run taplo for Cargo.toml formatting
Check that the Cargo.toml files meet the formatting requirements.

Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
2024-05-08 21:46:13 +00:00
Rob Bradford 3f8cd52ffd build: Format Cargo.toml files using taplo
Run the taplo formatter with the newly added configuration file.

Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
2024-05-08 21:46:13 +00:00
Rob Bradford f9d3c73c15 build: Add taplo configuration file for Cargo.toml files
This configuration enforces the alphabetical ordering of arrays and keys
in the Cargo.toml files.

Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
2024-05-08 21:46:13 +00:00
Rob Bradford 7e25cc2aa0 build: Add "fuzzing" as a valid cfg(..) attribute
The compiler is now able to warn if an unknown cfg attribute (e.g. one
that looks like a feature) is used.

See https://blog.rust-lang.org/2024/05/06/check-cfg.html for more
details.

Add build.rs files in the crates that use #[cfg(fuzzing)] to add
"fuzzing" to the list of valid cfg attributes.

Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
2024-05-08 08:10:28 +00:00
Rob Bradford 8b86c7724b build: Bump MSRV to 1.77.0
The ability to control the rustc flags (required for adding new
attributes to the allowed list of #[cfg(..)]) requires bumping the MSRV
to 1.77.0.

Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
2024-05-08 08:10:28 +00:00
Rob Bradford ea23c16c5a build: Expose and use "sev_snp" feature on virtio-devices
Code in this crate is conditional on this feature, so it is necessary to
expose it as a new feature and use that feature as a dependency when the
feature is enabled on the vmm crate.

Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
2024-05-08 08:10:28 +00:00
Rob Bradford 3def18f502 fuzz: Fix use of "guest_debug" conditional code
Enable the "guest_debug" feature on the vmm crate dependency and make
the fuzzer code that exercises it unconditional on "guest_debug" (as
that feature is not specified in the fuzz workspace itself).

Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
2024-05-08 08:10:28 +00:00
Rob Bradford 2bf6f9300a hypervisor: Remove derivations conditional on non-existant feature
The "with-serde" feature does not exist so these [#derive(..)]
statements are never compiled in.

Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
2024-05-08 08:10:28 +00:00
Rob Bradford fd43b79f96 build: Correctly enable dhat support in vmm crate
The "dhat-heap" feature needs to be enabled inside the vmm crate as a
depenency from the top-level as there is build time check for that
feature inside the vmm crate.

Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
2024-05-08 08:10:28 +00:00
46 changed files with 1229 additions and 507 deletions

View File

@ -15,7 +15,7 @@ jobs:
- stable
- beta
- nightly
- "1.74.1"
- "1.77.0"
target:
- x86_64-unknown-linux-gnu
- x86_64-unknown-linux-musl

View File

@ -41,7 +41,7 @@ jobs:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
# generate Docker tags based on the following events/attributes
tags: |
type=raw,value=20240407-0
type=raw,value=20240507-0
type=sha
- name: Build and push

21
.github/workflows/taplo.yaml vendored Normal file
View File

@ -0,0 +1,21 @@
name: Cargo.toml Formatting (taplo)
on:
pull_request:
paths:
- '**/Cargo.toml'
jobs:
cargo_toml_format:
name: Cargo.toml Formatting
runs-on: ubuntu-latest
steps:
- name: Code checkout
uses: actions/checkout@v4
- name: Install Rust toolchain
uses: dtolnay/rust-toolchain@stable
- name: Install build dependencies
run: sudo apt-get update && sudo apt-get -yqq install build-essential libssl-dev
- name: Install taplo
run: cargo install taplo-cli --locked
- name: Check formatting
run: taplo fmt --check

5
.taplo.toml Normal file
View File

@ -0,0 +1,5 @@
include = ["**/Cargo.toml"]
[formatting]
reorder_arrays = true
reorder_keys = true

1
Cargo.lock generated
View File

@ -2567,6 +2567,7 @@ dependencies = [
"cfg-if",
"clap",
"devices",
"dhat",
"epoll",
"event_monitor",
"flume",

View File

@ -1,13 +1,13 @@
[package]
authors = ["The Cloud Hypervisor Authors"]
build = "build.rs"
default-run = "cloud-hypervisor"
description = "Open source Virtual Machine Monitor (VMM) that runs on top of KVM"
edition = "2021"
homepage = "https://github.com/cloud-hypervisor/cloud-hypervisor"
license = "LICENSE-APACHE & LICENSE-BSD-3-Clause"
name = "cloud-hypervisor"
version = "39.0.0"
authors = ["The Cloud Hypervisor Authors"]
edition = "2021"
default-run = "cloud-hypervisor"
build = "build.rs"
license = "LICENSE-APACHE & LICENSE-BSD-3-Clause"
description = "Open source Virtual Machine Monitor (VMM) that runs on top of KVM"
homepage = "https://github.com/cloud-hypervisor/cloud-hypervisor"
# Minimum buildable version:
# Keep in sync with version in .github/workflows/build.yaml
# Policy on MSRV (see #4318):
@ -15,18 +15,18 @@ homepage = "https://github.com/cloud-hypervisor/cloud-hypervisor"
# a.) A dependency requires it,
# b.) If we want to use a new feature and that MSRV is at least 6 months old,
# c.) There is a security issue that is addressed by the toolchain update.
rust-version = "1.74.1"
rust-version = "1.77.0"
[profile.release]
lto = true
codegen-units = 1
lto = true
opt-level = "s"
strip = true
[profile.profiling]
debug = true
inherits = "release"
strip = false
debug = true
[dependencies]
anyhow = "1.0.81"
@ -43,11 +43,11 @@ seccompiler = "0.4.0"
serde_json = "1.0.115"
signal-hook = "0.3.17"
thiserror = "1.0.60"
tpm = { path = "tpm"}
tpm = { path = "tpm" }
tracer = { path = "tracer" }
vm-memory = "0.14.1"
vmm = { path = "vmm" }
vmm-sys-util = "0.12.1"
vm-memory = "0.14.1"
zbus = { version = "3.15.2", optional = true }
[dev-dependencies]
@ -61,9 +61,9 @@ wait-timeout = "0.2.0"
# Please adjust `vmm::feature_list()` accordingly when changing the
# feature list below
[features]
default = ["kvm", "io_uring"]
dbus_api = ["zbus", "vmm/dbus_api"]
dhat-heap = ["dhat"] # For heap profiling
default = ["kvm", "io_uring"]
dhat-heap = ["dhat", "vmm/dhat-heap"] # For heap profiling
guest_debug = ["vmm/guest_debug"]
igvm = ["vmm/igvm", "mshv"]
io_uring = ["vmm/io_uring"]
@ -75,27 +75,27 @@ tracing = ["vmm/tracing", "tracer/tracing"]
[workspace]
members = [
"api_client",
"arch",
"block",
"devices",
"event_monitor",
"hypervisor",
"net_gen",
"net_util",
"option_parser",
"pci",
"performance-metrics",
"rate_limiter",
"serial_buffer",
"test_infra",
"tracer",
"vhost_user_block",
"vhost_user_net",
"virtio-devices",
"vmm",
"vm-allocator",
"vm-device",
"vm-migration",
"vm-virtio"
"api_client",
"arch",
"block",
"devices",
"event_monitor",
"hypervisor",
"net_gen",
"net_util",
"option_parser",
"pci",
"performance-metrics",
"rate_limiter",
"serial_buffer",
"test_infra",
"tracer",
"vhost_user_block",
"vhost_user_net",
"virtio-devices",
"vmm",
"vm-allocator",
"vm-device",
"vm-migration",
"vm-virtio",
]

View File

@ -1,8 +1,8 @@
[package]
name = "api_client"
version = "0.1.0"
authors = ["The Cloud Hypervisor Authors"]
edition = "2021"
name = "api_client"
version = "0.1.0"
[dependencies]
vmm-sys-util = "0.12.1"

View File

@ -1,8 +1,8 @@
[package]
name = "arch"
version = "0.1.0"
authors = ["The Chromium OS Authors"]
edition = "2021"
name = "arch"
version = "0.1.0"
[features]
default = []
@ -19,7 +19,10 @@ log = "0.4.21"
serde = { version = "1.0.197", features = ["rc", "derive"] }
thiserror = "1.0.60"
uuid = "1.8.0"
vm-memory = { version = "0.14.1", features = ["backend-mmap", "backend-bitmap"] }
vm-memory = { version = "0.14.1", features = [
"backend-mmap",
"backend-bitmap",
] }
vm-migration = { path = "../vm-migration" }
vmm-sys-util = { version = "0.12.1", features = ["with-serde"] }

View File

@ -1,8 +1,8 @@
[package]
authors = ["The Cloud Hypervisor Authors", "The Chromium OS Authors"]
edition = "2021"
name = "block"
version = "0.1.0"
edition = "2021"
authors = ["The Cloud Hypervisor Authors", "The Chromium OS Authors"]
[features]
default = []
@ -21,6 +21,10 @@ thiserror = "1.0.60"
uuid = { version = "1.8.0", features = ["v4"] }
virtio-bindings = { version = "0.2.2", features = ["virtio-v5_0_0"] }
virtio-queue = "0.12.0"
vm-memory = { version = "0.14.1", features = ["backend-mmap", "backend-atomic", "backend-bitmap"] }
vm-memory = { version = "0.14.1", features = [
"backend-mmap",
"backend-atomic",
"backend-bitmap",
] }
vm-virtio = { path = "../vm-virtio" }
vmm-sys-util = "0.12.1"

View File

@ -1,11 +1,11 @@
[package]
name = "devices"
version = "0.1.0"
authors = ["The Chromium OS Authors"]
edition = "2021"
name = "devices"
version = "0.1.0"
[dependencies]
acpi_tables = { git = "https://github.com/rust-vmm/acpi_tables", branch = "main" }
anyhow = "1.0.81"
arch = { path = "../arch" }
bitflags = "2.5.0"

View File

@ -63,7 +63,7 @@ component in the state it was left before the snapshot occurred.
## Restore a Cloud Hypervisor VM
Given that one has access to an existing snapshot in `/home/foo/snapshot`,
it is possible to create a new VM based on this snapshot with the following
command:
```bash
@ -93,6 +93,21 @@ start using it.
At this point, the VM is fully restored and is identical to the VM which was
snapshotted earlier.
## Restore a VM with new Net FDs
For a VM created with FDs explicitly passed to NetConfig, a set of valid FDs
needs to be provided along with the VM restore command, in the following syntax:
```bash
# First terminal
./cloud-hypervisor --api-socket /tmp/cloud-hypervisor.sock
# Second terminal
./ch-remote --api-socket=/tmp/cloud-hypervisor.sock restore source_url=file:///home/foo/snapshot net_fds=[net1@[23,24],net2@[25,26]]
```
In the example above, the net device with id `net1` will be backed by FDs '23'
and '24', and the net device with id `net2` will be backed by FDs '25' and '26'
in the restored VM.
## Limitations
VFIO devices and Intel SGX are out of scope.

View File

@ -1,8 +1,8 @@
[package]
name = "event_monitor"
version = "0.1.0"
authors = ["The Cloud Hypervisor Authors"]
edition = "2021"
name = "event_monitor"
version = "0.1.0"
[dependencies]
flume = "0.11.0"

View File

@ -1,9 +1,9 @@
[package]
name = "cloud-hypervisor-fuzz"
version = "0.0.0"
authors = ["Automatically generated"]
publish = false
edition = "2021"
name = "cloud-hypervisor-fuzz"
publish = false
version = "0.0.0"
[package.metadata]
cargo-fuzz = true
@ -24,12 +24,12 @@ once_cell = "1.19.0"
seccompiler = "0.4.0"
virtio-devices = { path = "../virtio-devices" }
virtio-queue = "0.12.0"
vmm = { path = "../vmm" }
vmm-sys-util = "0.12.1"
vm-device = { path = "../vm-device" }
vm-memory = "0.14.1"
vm-migration = { path = "../vm-migration" }
vm-device = { path = "../vm-device" }
vm-virtio = { path = "../vm-virtio" }
vmm = { path = "../vmm", features = ["guest_debug"] }
vmm-sys-util = "0.12.1"
[dependencies.cloud-hypervisor]
path = ".."
@ -39,97 +39,97 @@ path = ".."
members = ["."]
[[bin]]
doc = false
name = "balloon"
path = "fuzz_targets/balloon.rs"
test = false
doc = false
[[bin]]
doc = false
name = "block"
path = "fuzz_targets/block.rs"
test = false
doc = false
[[bin]]
doc = false
name = "cmos"
path = "fuzz_targets/cmos.rs"
test = false
doc = false
[[bin]]
doc = false
name = "console"
path = "fuzz_targets/console.rs"
test = false
doc = false
[[bin]]
doc = false
name = "http_api"
path = "fuzz_targets/http_api.rs"
test = false
doc = false
[[bin]]
doc = false
name = "iommu"
path = "fuzz_targets/iommu.rs"
test = false
doc = false
[[bin]]
doc = false
name = "linux_loader"
path = "fuzz_targets/linux_loader.rs"
test = false
doc = false
[[bin]]
doc = false
name = "linux_loader_cmdline"
path = "fuzz_targets/linux_loader_cmdline.rs"
test = false
doc = false
[[bin]]
doc = false
name = "mem"
path = "fuzz_targets/mem.rs"
test = false
doc = false
[[bin]]
doc = false
name = "net"
path = "fuzz_targets/net.rs"
test = false
doc = false
[[bin]]
doc = false
name = "pmem"
path = "fuzz_targets/pmem.rs"
test = false
doc = false
[[bin]]
doc = false
name = "qcow"
path = "fuzz_targets/qcow.rs"
test = false
doc = false
[[bin]]
doc = false
name = "rng"
path = "fuzz_targets/rng.rs"
test = false
doc = false
[[bin]]
doc = false
name = "serial"
path = "fuzz_targets/serial.rs"
test = false
doc = false
[[bin]]
doc = false
name = "vhdx"
path = "fuzz_targets/vhdx.rs"
test = false
doc = false
[[bin]]
doc = false
name = "watchdog"
path = "fuzz_targets/watchdog.rs"
test = false
doc = false

View File

@ -105,7 +105,7 @@ impl RequestHandler for StubApiRequestHandler {
Ok(())
}
#[cfg(all(target_arch = "x86_64", feature = "guest_debug"))]
#[cfg(target_arch = "x86_64")]
fn vm_coredump(&mut self, _: &str) -> Result<(), VmError> {
Ok(())
}
@ -185,7 +185,6 @@ impl RequestHandler for StubApiRequestHandler {
sgx_epc: None,
numa: None,
watchdog: false,
#[cfg(feature = "guest_debug")]
gdb: false,
pci_segments: None,
platform: None,

View File

@ -1,9 +1,9 @@
[package]
name = "hypervisor"
version = "0.1.0"
authors = ["Microsoft Authors"]
edition = "2021"
license = "Apache-2.0 OR BSD-3-Clause"
name = "hypervisor"
version = "0.1.0"
[features]
kvm = ["kvm-ioctls", "kvm-bindings", "vfio-ioctls/kvm"]
@ -14,26 +14,34 @@ tdx = []
[dependencies]
anyhow = "1.0.81"
byteorder = "1.5.0"
igvm = { version = "0.2.0", optional = true }
igvm_defs = { version = "0.2.0", optional = true }
libc = "0.2.153"
log = "0.4.21"
igvm = { version = "0.2.0", optional = true }
igvm_defs = { version = "0.2.0", optional = true }
kvm-bindings = { version = "0.8.1", optional = true, features = ["serde"] }
kvm-ioctls = { version = "0.17.0", optional = true }
mshv-bindings = { git = "https://github.com/rust-vmm/mshv", branch = "main", features = ["with-serde", "fam-wrappers"], optional = true }
mshv-ioctls = { git = "https://github.com/rust-vmm/mshv", branch = "main", optional = true}
libc = "0.2.153"
log = "0.4.21"
mshv-bindings = { git = "https://github.com/rust-vmm/mshv", branch = "main", features = [
"with-serde",
"fam-wrappers",
], optional = true }
mshv-ioctls = { git = "https://github.com/rust-vmm/mshv", branch = "main", optional = true }
serde = { version = "1.0.197", features = ["rc", "derive"] }
serde_with = { version = "3.7.0", default-features = false, features = ["macros"] }
vfio-ioctls = { git = "https://github.com/rust-vmm/vfio", branch = "main", default-features = false }
vm-memory = { version = "0.14.1", features = ["backend-mmap", "backend-atomic"] }
vmm-sys-util = { version = "0.12.1", features = ["with-serde"] }
serde_with = { version = "3.7.0", default-features = false, features = [
"macros",
] }
thiserror = "1.0.60"
vfio-ioctls = { git = "https://github.com/rust-vmm/vfio", branch = "main", default-features = false }
vm-memory = { version = "0.14.1", features = [
"backend-mmap",
"backend-atomic",
] }
vmm-sys-util = { version = "0.12.1", features = ["with-serde"] }
[target.'cfg(target_arch = "x86_64")'.dependencies.iced-x86]
optional = true
version = "1.21.0"
default-features = false
features = ["std", "decoder", "op_code_info", "instr_info", "fast_fmt"]
optional = true
version = "1.21.0"
[dev-dependencies]
env_logger = "0.11.3"

View File

@ -53,7 +53,6 @@ pub enum Exception {
pub mod regs;
#[derive(Debug, Default, Copy, Clone, PartialEq, Eq)]
#[cfg_attr(feature = "with-serde", derive(Deserialize, Serialize))]
pub struct SegmentRegister {
pub base: u64,
pub limit: u32,
@ -174,7 +173,6 @@ macro_rules! msr_data {
}
#[derive(Debug, Default, Copy, Clone, PartialEq, Eq)]
#[cfg_attr(feature = "with-serde", derive(Deserialize, Serialize))]
pub struct StandardRegisters {
pub rax: u64,
pub rbx: u64,
@ -197,14 +195,12 @@ pub struct StandardRegisters {
}
#[derive(Debug, Default, Copy, Clone, PartialEq, Eq)]
#[cfg_attr(feature = "with-serde", derive(Deserialize, Serialize))]
pub struct DescriptorTable {
pub base: u64,
pub limit: u16,
}
#[derive(Debug, Default, Copy, Clone, PartialEq, Eq)]
#[cfg_attr(feature = "with-serde", derive(Deserialize, Serialize))]
pub struct SpecialRegisters {
pub cs: SegmentRegister,
pub ds: SegmentRegister,

View File

@ -106,12 +106,23 @@ pub fn is_system_register(regid: u64) -> bool {
}
pub fn check_required_kvm_extensions(kvm: &Kvm) -> KvmResult<()> {
if !kvm.check_extension(Cap::SignalMsi) {
return Err(KvmError::CapabilityMissing(Cap::SignalMsi));
}
if !kvm.check_extension(Cap::OneReg) {
return Err(KvmError::CapabilityMissing(Cap::OneReg));
macro_rules! check_extension {
($cap:expr) => {
if !kvm.check_extension($cap) {
return Err(KvmError::CapabilityMissing($cap));
}
};
}
// SetGuestDebug is required but some kernels have it implemented without the capability flag.
check_extension!(Cap::ImmediateExit);
check_extension!(Cap::Ioeventfd);
check_extension!(Cap::Irqchip);
check_extension!(Cap::Irqfd);
check_extension!(Cap::IrqRouting);
check_extension!(Cap::MpState);
check_extension!(Cap::OneReg);
check_extension!(Cap::UserMemory);
Ok(())
}

View File

@ -32,29 +32,37 @@ pub use {
/// Check KVM extension for Linux
///
pub fn check_required_kvm_extensions(kvm: &Kvm) -> KvmResult<()> {
if !kvm.check_extension(Cap::SignalMsi) {
return Err(KvmError::CapabilityMissing(Cap::SignalMsi));
}
if !kvm.check_extension(Cap::TscDeadlineTimer) {
return Err(KvmError::CapabilityMissing(Cap::TscDeadlineTimer));
}
if !kvm.check_extension(Cap::SplitIrqchip) {
return Err(KvmError::CapabilityMissing(Cap::SplitIrqchip));
}
if !kvm.check_extension(Cap::SetIdentityMapAddr) {
return Err(KvmError::CapabilityMissing(Cap::SetIdentityMapAddr));
}
if !kvm.check_extension(Cap::SetTssAddr) {
return Err(KvmError::CapabilityMissing(Cap::SetTssAddr));
}
if !kvm.check_extension(Cap::ImmediateExit) {
return Err(KvmError::CapabilityMissing(Cap::ImmediateExit));
}
if !kvm.check_extension(Cap::GetTscKhz) {
return Err(KvmError::CapabilityMissing(Cap::GetTscKhz));
macro_rules! check_extension {
($cap:expr) => {
if !kvm.check_extension($cap) {
return Err(KvmError::CapabilityMissing($cap));
}
};
}
// DeviceCtrl, EnableCap, and SetGuestDebug are also required, but some kernels have
// the features implemented without the capability flags.
check_extension!(Cap::AdjustClock);
check_extension!(Cap::ExtCpuid);
check_extension!(Cap::GetTscKhz);
check_extension!(Cap::ImmediateExit);
check_extension!(Cap::Ioeventfd);
check_extension!(Cap::Irqchip);
check_extension!(Cap::Irqfd);
check_extension!(Cap::IrqRouting);
check_extension!(Cap::MpState);
check_extension!(Cap::SetIdentityMapAddr);
check_extension!(Cap::SetTssAddr);
check_extension!(Cap::SplitIrqchip);
check_extension!(Cap::TscDeadlineTimer);
check_extension!(Cap::UserMemory);
check_extension!(Cap::UserNmi);
check_extension!(Cap::VcpuEvents);
check_extension!(Cap::Xcrs);
check_extension!(Cap::Xsave);
Ok(())
}
#[derive(Clone, Serialize, Deserialize)]
pub struct VcpuKvmState {
pub cpuid: Vec<CpuIdEntry>,

View File

@ -1,8 +1,8 @@
[package]
name = "net_gen"
version = "0.1.0"
authors = ["The Chromium OS Authors"]
edition = "2021"
name = "net_gen"
version = "0.1.0"
[dependencies]
vmm-sys-util = "0.12.1"

View File

@ -1,8 +1,8 @@
[package]
name = "net_util"
version = "0.1.0"
authors = ["The Chromium OS Authors"]
edition = "2021"
name = "net_util"
version = "0.1.0"
[dependencies]
epoll = "4.3.3"
@ -11,11 +11,15 @@ libc = "0.2.153"
log = "0.4.21"
net_gen = { path = "../net_gen" }
rate_limiter = { path = "../rate_limiter" }
serde = {version = "1.0.197",features = ["derive"]}
serde = { version = "1.0.197", features = ["derive"] }
thiserror = "1.0.60"
virtio-bindings = "0.2.2"
virtio-queue = "0.12.0"
vm-memory = { version = "0.14.1", features = ["backend-mmap", "backend-atomic", "backend-bitmap"] }
vm-memory = { version = "0.14.1", features = [
"backend-mmap",
"backend-atomic",
"backend-bitmap",
] }
vm-virtio = { path = "../vm-virtio" }
vmm-sys-util = "0.12.1"

8
net_util/build.rs Normal file
View File

@ -0,0 +1,8 @@
// Copyright © 2024 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
//
fn main() {
println!("cargo::rustc-check-cfg=cfg(fuzzing)");
}

View File

@ -1,5 +1,5 @@
[package]
name = "option_parser"
version = "0.1.0"
authors = ["The Cloud Hypervisor Authors"]
edition = "2021"
name = "option_parser"
version = "0.1.0"

View File

@ -1,8 +1,8 @@
[package]
name = "pci"
version = "0.1.0"
authors = ["Samuel Ortiz <sameo@linux.intel.com>"]
edition = "2021"
name = "pci"
version = "0.1.0"
[features]
default = []
@ -13,16 +13,21 @@ mshv = ["vfio-ioctls/mshv"]
anyhow = "1.0.81"
byteorder = "1.5.0"
hypervisor = { path = "../hypervisor" }
vfio-bindings = { git = "https://github.com/rust-vmm/vfio", branch = "main", features = ["fam-wrappers"] }
vfio-ioctls = { git = "https://github.com/rust-vmm/vfio", branch = "main", default-features = false }
vfio_user = { git = "https://github.com/rust-vmm/vfio-user", branch = "main" }
vmm-sys-util = "0.12.1"
libc = "0.2.153"
log = "0.4.21"
serde = { version = "1.0.197", features = ["derive"] }
thiserror = "1.0.60"
vfio-bindings = { git = "https://github.com/rust-vmm/vfio", branch = "main", features = [
"fam-wrappers",
] }
vfio-ioctls = { git = "https://github.com/rust-vmm/vfio", branch = "main", default-features = false }
vfio_user = { git = "https://github.com/rust-vmm/vfio-user", branch = "main" }
vm-allocator = { path = "../vm-allocator" }
vm-device = { path = "../vm-device" }
vm-memory = { version = "0.14.1", features = ["backend-mmap", "backend-atomic", "backend-bitmap"] }
vm-memory = { version = "0.14.1", features = [
"backend-mmap",
"backend-atomic",
"backend-bitmap",
] }
vm-migration = { path = "../vm-migration" }
vmm-sys-util = "0.12.1"

View File

@ -1,9 +1,9 @@
[package]
authors = ["The Cloud Hypervisor Authors"]
build = "../build.rs"
edition = "2021"
name = "performance-metrics"
version = "0.1.0"
authors = ["The Cloud Hypervisor Authors"]
edition = "2021"
build = "../build.rs"
[dependencies]
clap = { version = "4.5.4", features = ["wrap_help"] }

View File

@ -1,7 +1,7 @@
[package]
edition = "2021"
name = "rate_limiter"
version = "0.1.0"
edition = "2021"
[dependencies]
epoll = "4.3.3"

View File

@ -8,7 +8,7 @@
FROM ubuntu:22.04 as dev
ARG TARGETARCH
ARG RUST_TOOLCHAIN="1.74.1"
ARG RUST_TOOLCHAIN="1.77.0"
ARG CLH_SRC_DIR="/cloud-hypervisor"
ARG CLH_BUILD_DIR="$CLH_SRC_DIR/build"
ARG CARGO_REGISTRY_DIR="$CLH_BUILD_DIR/cargo_registry"

View File

@ -9,7 +9,7 @@ CLI_NAME="Cloud Hypervisor"
CTR_IMAGE_TAG="ghcr.io/cloud-hypervisor/cloud-hypervisor"
# Needs to match explicit version in docker-image.yaml workflow
CTR_IMAGE_VERSION="20240407-0"
CTR_IMAGE_VERSION="20240507-0"
: "${CTR_IMAGE:=${CTR_IMAGE_TAG}:${CTR_IMAGE_VERSION}}"
DOCKER_RUNTIME="docker"

View File

@ -1,5 +1,5 @@
[package]
authors = ["The Cloud Hypervisor Authors"]
edition = "2021"
name = "serial_buffer"
version = "0.1.0"
authors = ["The Cloud Hypervisor Authors"]
edition = "2021"

View File

@ -445,14 +445,14 @@ fn rest_api_do_command(matches: &ArgMatches, socket: &mut UnixStream) -> ApiResu
.map_err(Error::HttpApiClient)
}
Some("restore") => {
let restore_config = restore_config(
let (restore_config, fds) = restore_config(
matches
.subcommand_matches("restore")
.unwrap()
.get_one::<String>("restore_config")
.unwrap(),
)?;
simple_api_command(socket, "PUT", "restore", Some(&restore_config))
simple_api_command_with_fds(socket, "PUT", "restore", Some(&restore_config), fds)
.map_err(Error::HttpApiClient)
}
Some("coredump") => {
@ -661,7 +661,7 @@ fn dbus_api_do_command(matches: &ArgMatches, proxy: &DBusApi1ProxyBlocking<'_>)
proxy.api_vm_snapshot(&snapshot_config)
}
Some("restore") => {
let restore_config = restore_config(
let (restore_config, _fds) = restore_config(
matches
.subcommand_matches("restore")
.unwrap()
@ -849,11 +849,20 @@ fn snapshot_config(url: &str) -> String {
serde_json::to_string(&snapshot_config).unwrap()
}
fn restore_config(config: &str) -> Result<String, Error> {
let restore_config = vmm::config::RestoreConfig::parse(config).map_err(Error::Restore)?;
fn restore_config(config: &str) -> Result<(String, Vec<i32>), Error> {
let mut restore_config = vmm::config::RestoreConfig::parse(config).map_err(Error::Restore)?;
// RestoreConfig is modified on purpose to take out the file descriptors.
// These fds are passed to the server side process via SCM_RIGHTS
let fds = match &mut restore_config.net_fds {
Some(net_fds) => net_fds
.iter_mut()
.flat_map(|net| net.fds.take().unwrap_or_default())
.collect(),
None => Vec::new(),
};
let restore_config = serde_json::to_string(&restore_config).unwrap();
Ok(restore_config)
Ok((restore_config, fds))
}
fn coredump_config(destination_url: &str) -> String {
@ -987,7 +996,7 @@ fn main() {
)
.subcommand(
Command::new("remove-device")
.about("Remove VFIO device")
.about("Remove VFIO and PCI device")
.arg(Arg::new("id").index(1).help("<device_id>")),
)
.subcommand(Command::new("info").about("Info on the VM"))

View File

@ -1,8 +1,8 @@
[package]
name = "test_infra"
version = "0.1.0"
authors = ["The Cloud Hypervisor Authors"]
edition = "2021"
name = "test_infra"
version = "0.1.0"
[dependencies]
dirs = "5.0.1"

View File

@ -2344,10 +2344,7 @@ fn make_guest_panic(guest: &Guest) {
}
mod common_parallel {
use std::{
fs::{remove_dir_all, OpenOptions},
io::SeekFrom,
};
use std::{fs::OpenOptions, io::SeekFrom};
use crate::*;
@ -5989,310 +5986,6 @@ mod common_parallel {
});
}
// One thing to note about this test. The virtio-net device is heavily used
// through each ssh command. There's no need to perform a dedicated test to
// verify the migration went well for virtio-net.
#[test]
#[cfg(not(feature = "mshv"))]
fn test_snapshot_restore_hotplug_virtiomem() {
_test_snapshot_restore(true);
}
#[test]
fn test_snapshot_restore_basic() {
_test_snapshot_restore(false);
}
fn _test_snapshot_restore(use_hotplug: bool) {
let focal = UbuntuDiskConfig::new(FOCAL_IMAGE_NAME.to_string());
let guest = Guest::new(Box::new(focal));
let kernel_path = direct_kernel_boot_path();
let api_socket_source = format!("{}.1", temp_api_path(&guest.tmp_dir));
let net_id = "net123";
let net_params = format!(
"id={},tap=,mac={},ip={},mask=255.255.255.0",
net_id, guest.network.guest_mac, guest.network.host_ip
);
let mut mem_params = "size=2G";
if use_hotplug {
mem_params = "size=2G,hotplug_method=virtio-mem,hotplug_size=32G"
}
let cloudinit_params = format!(
"path={},iommu=on",
guest.disk_config.disk(DiskType::CloudInit).unwrap()
);
let socket = temp_vsock_path(&guest.tmp_dir);
let event_path = temp_event_monitor_path(&guest.tmp_dir);
let mut child = GuestCommand::new(&guest)
.args(["--api-socket", &api_socket_source])
.args(["--event-monitor", format!("path={event_path}").as_str()])
.args(["--cpus", "boot=4"])
.args(["--memory", mem_params])
.args(["--balloon", "size=0"])
.args(["--kernel", kernel_path.to_str().unwrap()])
.args([
"--disk",
format!(
"path={}",
guest.disk_config.disk(DiskType::OperatingSystem).unwrap()
)
.as_str(),
cloudinit_params.as_str(),
])
.args(["--net", net_params.as_str()])
.args(["--vsock", format!("cid=3,socket={socket}").as_str()])
.args(["--cmdline", DIRECT_KERNEL_BOOT_CMDLINE])
.capture_output()
.spawn()
.unwrap();
let console_text = String::from("On a branch floating down river a cricket, singing.");
// Create the snapshot directory
let snapshot_dir = temp_snapshot_dir_path(&guest.tmp_dir);
let r = std::panic::catch_unwind(|| {
guest.wait_vm_boot(None).unwrap();
// Check the number of vCPUs
assert_eq!(guest.get_cpu_count().unwrap_or_default(), 4);
// Check the guest RAM
assert!(guest.get_total_memory().unwrap_or_default() > 1_920_000);
if use_hotplug {
// Increase guest RAM with virtio-mem
resize_command(
&api_socket_source,
None,
Some(6 << 30),
None,
Some(&event_path),
);
thread::sleep(std::time::Duration::new(5, 0));
assert!(guest.get_total_memory().unwrap_or_default() > 5_760_000);
// Use balloon to remove RAM from the VM
resize_command(
&api_socket_source,
None,
None,
Some(1 << 30),
Some(&event_path),
);
thread::sleep(std::time::Duration::new(5, 0));
let total_memory = guest.get_total_memory().unwrap_or_default();
assert!(total_memory > 4_800_000);
assert!(total_memory < 5_760_000);
}
// Check the guest virtio-devices, e.g. block, rng, vsock, console, and net
guest.check_devices_common(Some(&socket), Some(&console_text), None);
// x86_64: We check that removing and adding back the virtio-net device
// does not break the snapshot/restore support for virtio-pci.
// This is an important thing to test as the hotplug will
// trigger a PCI BAR reprogramming, which is a good way of
// checking if the stored resources are correctly restored.
// Unplug the virtio-net device
// AArch64: Device hotplug is currently not supported, skipping here.
#[cfg(target_arch = "x86_64")]
{
assert!(remote_command(
&api_socket_source,
"remove-device",
Some(net_id),
));
thread::sleep(std::time::Duration::new(10, 0));
let latest_events = [&MetaEvent {
event: "device-removed".to_string(),
device_id: Some(net_id.to_string()),
}];
// See: #5938
thread::sleep(std::time::Duration::new(1, 0));
assert!(check_latest_events_exact(&latest_events, &event_path));
// Plug the virtio-net device again
assert!(remote_command(
&api_socket_source,
"add-net",
Some(net_params.as_str()),
));
thread::sleep(std::time::Duration::new(10, 0));
}
// Pause the VM
assert!(remote_command(&api_socket_source, "pause", None));
let latest_events = [
&MetaEvent {
event: "pausing".to_string(),
device_id: None,
},
&MetaEvent {
event: "paused".to_string(),
device_id: None,
},
];
// See: #5938
thread::sleep(std::time::Duration::new(1, 0));
assert!(check_latest_events_exact(&latest_events, &event_path));
// Take a snapshot from the VM
assert!(remote_command(
&api_socket_source,
"snapshot",
Some(format!("file://{snapshot_dir}").as_str()),
));
// Wait to make sure the snapshot is completed
thread::sleep(std::time::Duration::new(10, 0));
let latest_events = [
&MetaEvent {
event: "snapshotting".to_string(),
device_id: None,
},
&MetaEvent {
event: "snapshotted".to_string(),
device_id: None,
},
];
// See: #5938
thread::sleep(std::time::Duration::new(1, 0));
assert!(check_latest_events_exact(&latest_events, &event_path));
});
// Shutdown the source VM and check console output
let _ = child.kill();
let output = child.wait_with_output().unwrap();
handle_child_output(r, &output);
let r = std::panic::catch_unwind(|| {
assert!(String::from_utf8_lossy(&output.stdout).contains(&console_text));
});
handle_child_output(r, &output);
// Remove the vsock socket file.
Command::new("rm")
.arg("-f")
.arg(socket.as_str())
.output()
.unwrap();
let api_socket_restored = format!("{}.2", temp_api_path(&guest.tmp_dir));
let event_path_restored = format!("{}.2", temp_event_monitor_path(&guest.tmp_dir));
// Restore the VM from the snapshot
let mut child = GuestCommand::new(&guest)
.args(["--api-socket", &api_socket_restored])
.args([
"--event-monitor",
format!("path={event_path_restored}").as_str(),
])
.args([
"--restore",
format!("source_url=file://{snapshot_dir}").as_str(),
])
.capture_output()
.spawn()
.unwrap();
// Wait for the VM to be restored
thread::sleep(std::time::Duration::new(20, 0));
let expected_events = [
&MetaEvent {
event: "starting".to_string(),
device_id: None,
},
&MetaEvent {
event: "activated".to_string(),
device_id: Some("__console".to_string()),
},
&MetaEvent {
event: "activated".to_string(),
device_id: Some("__rng".to_string()),
},
&MetaEvent {
event: "restoring".to_string(),
device_id: None,
},
];
assert!(check_sequential_events(
&expected_events,
&event_path_restored
));
let latest_events = [&MetaEvent {
event: "restored".to_string(),
device_id: None,
}];
assert!(check_latest_events_exact(
&latest_events,
&event_path_restored
));
// Remove the snapshot dir
let _ = remove_dir_all(snapshot_dir.as_str());
let r = std::panic::catch_unwind(|| {
// Resume the VM
assert!(remote_command(&api_socket_restored, "resume", None));
// There is no way that we can ensure the 'write()' to the
// event file is completed when the 'resume' request is
// returned successfully, because the 'write()' was done
// asynchronously from a different thread of Cloud
// Hypervisor (e.g. the event-monitor thread).
thread::sleep(std::time::Duration::new(1, 0));
let latest_events = [
&MetaEvent {
event: "resuming".to_string(),
device_id: None,
},
&MetaEvent {
event: "resumed".to_string(),
device_id: None,
},
];
assert!(check_latest_events_exact(
&latest_events,
&event_path_restored
));
// Perform same checks to validate VM has been properly restored
assert_eq!(guest.get_cpu_count().unwrap_or_default(), 4);
let total_memory = guest.get_total_memory().unwrap_or_default();
if !use_hotplug {
assert!(total_memory > 1_920_000);
} else {
assert!(total_memory > 4_800_000);
assert!(total_memory < 5_760_000);
// Deflate balloon to restore entire RAM to the VM
resize_command(&api_socket_restored, None, None, Some(0), None);
thread::sleep(std::time::Duration::new(5, 0));
assert!(guest.get_total_memory().unwrap_or_default() > 5_760_000);
// Decrease guest RAM with virtio-mem
resize_command(&api_socket_restored, None, Some(5 << 30), None, None);
thread::sleep(std::time::Duration::new(5, 0));
let total_memory = guest.get_total_memory().unwrap_or_default();
assert!(total_memory > 4_800_000);
assert!(total_memory < 5_760_000);
}
guest.check_devices_common(Some(&socket), Some(&console_text), None);
});
// Shutdown the target VM and check console output
let _ = child.kill();
let output = child.wait_with_output().unwrap();
handle_child_output(r, &output);
let r = std::panic::catch_unwind(|| {
assert!(String::from_utf8_lossy(&output.stdout).contains(&console_text));
});
handle_child_output(r, &output);
}
#[test]
fn test_counters() {
let focal = UbuntuDiskConfig::new(FOCAL_IMAGE_NAME.to_string());
@ -7493,7 +7186,8 @@ mod dbus_api {
}
mod common_sequential {
#[cfg(not(feature = "mshv"))]
use std::fs::remove_dir_all;
use crate::*;
#[test]
@ -7501,6 +7195,532 @@ mod common_sequential {
fn test_memory_mergeable_on() {
test_memory_mergeable(true)
}
fn snapshot_and_check_events(api_socket: &str, snapshot_dir: &str, event_path: &str) {
// Pause the VM
assert!(remote_command(api_socket, "pause", None));
let latest_events: [&MetaEvent; 2] = [
&MetaEvent {
event: "pausing".to_string(),
device_id: None,
},
&MetaEvent {
event: "paused".to_string(),
device_id: None,
},
];
// See: #5938
thread::sleep(std::time::Duration::new(1, 0));
assert!(check_latest_events_exact(&latest_events, event_path));
// Take a snapshot from the VM
assert!(remote_command(
api_socket,
"snapshot",
Some(format!("file://{snapshot_dir}").as_str()),
));
// Wait to make sure the snapshot is completed
thread::sleep(std::time::Duration::new(10, 0));
let latest_events = [
&MetaEvent {
event: "snapshotting".to_string(),
device_id: None,
},
&MetaEvent {
event: "snapshotted".to_string(),
device_id: None,
},
];
// See: #5938
thread::sleep(std::time::Duration::new(1, 0));
assert!(check_latest_events_exact(&latest_events, event_path));
}
// One thing to note about this test. The virtio-net device is heavily used
// through each ssh command. There's no need to perform a dedicated test to
// verify the migration went well for virtio-net.
#[test]
#[cfg(not(feature = "mshv"))]
fn test_snapshot_restore_hotplug_virtiomem() {
_test_snapshot_restore(true);
}
#[test]
fn test_snapshot_restore_basic() {
_test_snapshot_restore(false);
}
fn _test_snapshot_restore(use_hotplug: bool) {
let focal = UbuntuDiskConfig::new(FOCAL_IMAGE_NAME.to_string());
let guest = Guest::new(Box::new(focal));
let kernel_path = direct_kernel_boot_path();
let api_socket_source = format!("{}.1", temp_api_path(&guest.tmp_dir));
let net_id = "net123";
let net_params = format!(
"id={},tap=,mac={},ip={},mask=255.255.255.0",
net_id, guest.network.guest_mac, guest.network.host_ip
);
let mut mem_params = "size=2G";
if use_hotplug {
mem_params = "size=2G,hotplug_method=virtio-mem,hotplug_size=32G"
}
let cloudinit_params = format!(
"path={},iommu=on",
guest.disk_config.disk(DiskType::CloudInit).unwrap()
);
let socket = temp_vsock_path(&guest.tmp_dir);
let event_path = temp_event_monitor_path(&guest.tmp_dir);
let mut child = GuestCommand::new(&guest)
.args(["--api-socket", &api_socket_source])
.args(["--event-monitor", format!("path={event_path}").as_str()])
.args(["--cpus", "boot=4"])
.args(["--memory", mem_params])
.args(["--balloon", "size=0"])
.args(["--kernel", kernel_path.to_str().unwrap()])
.args([
"--disk",
format!(
"path={}",
guest.disk_config.disk(DiskType::OperatingSystem).unwrap()
)
.as_str(),
cloudinit_params.as_str(),
])
.args(["--net", net_params.as_str()])
.args(["--vsock", format!("cid=3,socket={socket}").as_str()])
.args(["--cmdline", DIRECT_KERNEL_BOOT_CMDLINE])
.capture_output()
.spawn()
.unwrap();
let console_text = String::from("On a branch floating down river a cricket, singing.");
// Create the snapshot directory
let snapshot_dir = temp_snapshot_dir_path(&guest.tmp_dir);
let r = std::panic::catch_unwind(|| {
guest.wait_vm_boot(None).unwrap();
// Check the number of vCPUs
assert_eq!(guest.get_cpu_count().unwrap_or_default(), 4);
// Check the guest RAM
assert!(guest.get_total_memory().unwrap_or_default() > 1_920_000);
if use_hotplug {
// Increase guest RAM with virtio-mem
resize_command(
&api_socket_source,
None,
Some(6 << 30),
None,
Some(&event_path),
);
thread::sleep(std::time::Duration::new(5, 0));
assert!(guest.get_total_memory().unwrap_or_default() > 5_760_000);
// Use balloon to remove RAM from the VM
resize_command(
&api_socket_source,
None,
None,
Some(1 << 30),
Some(&event_path),
);
thread::sleep(std::time::Duration::new(5, 0));
let total_memory = guest.get_total_memory().unwrap_or_default();
assert!(total_memory > 4_800_000);
assert!(total_memory < 5_760_000);
}
// Check the guest virtio-devices, e.g. block, rng, vsock, console, and net
guest.check_devices_common(Some(&socket), Some(&console_text), None);
// x86_64: We check that removing and adding back the virtio-net device
// does not break the snapshot/restore support for virtio-pci.
// This is an important thing to test as the hotplug will
// trigger a PCI BAR reprogramming, which is a good way of
// checking if the stored resources are correctly restored.
// Unplug the virtio-net device
// AArch64: Device hotplug is currently not supported, skipping here.
#[cfg(target_arch = "x86_64")]
{
assert!(remote_command(
&api_socket_source,
"remove-device",
Some(net_id),
));
thread::sleep(std::time::Duration::new(10, 0));
let latest_events = [&MetaEvent {
event: "device-removed".to_string(),
device_id: Some(net_id.to_string()),
}];
// See: #5938
thread::sleep(std::time::Duration::new(1, 0));
assert!(check_latest_events_exact(&latest_events, &event_path));
// Plug the virtio-net device again
assert!(remote_command(
&api_socket_source,
"add-net",
Some(net_params.as_str()),
));
thread::sleep(std::time::Duration::new(10, 0));
}
snapshot_and_check_events(&api_socket_source, &snapshot_dir, &event_path);
});
// Shutdown the source VM and check console output
let _ = child.kill();
let output = child.wait_with_output().unwrap();
handle_child_output(r, &output);
let r = std::panic::catch_unwind(|| {
assert!(String::from_utf8_lossy(&output.stdout).contains(&console_text));
});
handle_child_output(r, &output);
// Remove the vsock socket file.
Command::new("rm")
.arg("-f")
.arg(socket.as_str())
.output()
.unwrap();
let api_socket_restored = format!("{}.2", temp_api_path(&guest.tmp_dir));
let event_path_restored = format!("{}.2", temp_event_monitor_path(&guest.tmp_dir));
// Restore the VM from the snapshot
let mut child = GuestCommand::new(&guest)
.args(["--api-socket", &api_socket_restored])
.args([
"--event-monitor",
format!("path={event_path_restored}").as_str(),
])
.args([
"--restore",
format!("source_url=file://{snapshot_dir}").as_str(),
])
.capture_output()
.spawn()
.unwrap();
// Wait for the VM to be restored
thread::sleep(std::time::Duration::new(20, 0));
let expected_events = [
&MetaEvent {
event: "starting".to_string(),
device_id: None,
},
&MetaEvent {
event: "activated".to_string(),
device_id: Some("__console".to_string()),
},
&MetaEvent {
event: "activated".to_string(),
device_id: Some("__rng".to_string()),
},
&MetaEvent {
event: "restoring".to_string(),
device_id: None,
},
];
assert!(check_sequential_events(
&expected_events,
&event_path_restored
));
let latest_events = [&MetaEvent {
event: "restored".to_string(),
device_id: None,
}];
assert!(check_latest_events_exact(
&latest_events,
&event_path_restored
));
// Remove the snapshot dir
let _ = remove_dir_all(snapshot_dir.as_str());
let r = std::panic::catch_unwind(|| {
// Resume the VM
assert!(remote_command(&api_socket_restored, "resume", None));
// There is no way that we can ensure the 'write()' to the
// event file is completed when the 'resume' request is
// returned successfully, because the 'write()' was done
// asynchronously from a different thread of Cloud
// Hypervisor (e.g. the event-monitor thread).
thread::sleep(std::time::Duration::new(1, 0));
let latest_events = [
&MetaEvent {
event: "resuming".to_string(),
device_id: None,
},
&MetaEvent {
event: "resumed".to_string(),
device_id: None,
},
];
assert!(check_latest_events_exact(
&latest_events,
&event_path_restored
));
// Perform same checks to validate VM has been properly restored
assert_eq!(guest.get_cpu_count().unwrap_or_default(), 4);
let total_memory = guest.get_total_memory().unwrap_or_default();
if !use_hotplug {
assert!(total_memory > 1_920_000);
} else {
assert!(total_memory > 4_800_000);
assert!(total_memory < 5_760_000);
// Deflate balloon to restore entire RAM to the VM
resize_command(&api_socket_restored, None, None, Some(0), None);
thread::sleep(std::time::Duration::new(5, 0));
assert!(guest.get_total_memory().unwrap_or_default() > 5_760_000);
// Decrease guest RAM with virtio-mem
resize_command(&api_socket_restored, None, Some(5 << 30), None, None);
thread::sleep(std::time::Duration::new(5, 0));
let total_memory = guest.get_total_memory().unwrap_or_default();
assert!(total_memory > 4_800_000);
assert!(total_memory < 5_760_000);
}
guest.check_devices_common(Some(&socket), Some(&console_text), None);
});
// Shutdown the target VM and check console output
let _ = child.kill();
let output = child.wait_with_output().unwrap();
handle_child_output(r, &output);
let r = std::panic::catch_unwind(|| {
assert!(String::from_utf8_lossy(&output.stdout).contains(&console_text));
});
handle_child_output(r, &output);
}
#[test]
fn test_snapshot_restore_with_fd() {
let focal = UbuntuDiskConfig::new(FOCAL_IMAGE_NAME.to_string());
let guest = Guest::new(Box::new(focal));
let kernel_path = direct_kernel_boot_path();
let api_socket_source = format!("{}.1", temp_api_path(&guest.tmp_dir));
let net_id = "net123";
let num_queue_pairs: usize = 2;
// use a name that does not conflict with tap dev created from other tests
let tap_name = "chtap999";
use std::str::FromStr;
let taps = net_util::open_tap(
Some(tap_name),
Some(std::net::Ipv4Addr::from_str(&guest.network.host_ip).unwrap()),
None,
&mut None,
None,
num_queue_pairs,
Some(libc::O_RDWR | libc::O_NONBLOCK),
)
.unwrap();
let net_params = format!(
"id={},fd=[{},{}],mac={},ip={},mask=255.255.255.0,num_queues={}",
net_id,
taps[0].as_raw_fd(),
taps[1].as_raw_fd(),
guest.network.guest_mac,
guest.network.host_ip,
num_queue_pairs * 2
);
let cloudinit_params = format!(
"path={},iommu=on",
guest.disk_config.disk(DiskType::CloudInit).unwrap()
);
let n_cpu = 2;
let event_path = temp_event_monitor_path(&guest.tmp_dir);
let mut child = GuestCommand::new(&guest)
.args(["--api-socket", &api_socket_source])
.args(["--event-monitor", format!("path={event_path}").as_str()])
.args(["--cpus", format!("boot={}", n_cpu).as_str()])
.args(["--memory", "size=1G"])
.args(["--kernel", kernel_path.to_str().unwrap()])
.args([
"--disk",
format!(
"path={}",
guest.disk_config.disk(DiskType::OperatingSystem).unwrap()
)
.as_str(),
cloudinit_params.as_str(),
])
.args(["--net", net_params.as_str()])
.args(["--cmdline", DIRECT_KERNEL_BOOT_CMDLINE])
.capture_output()
.spawn()
.unwrap();
let console_text = String::from("On a branch floating down river a cricket, singing.");
// Create the snapshot directory
let snapshot_dir = temp_snapshot_dir_path(&guest.tmp_dir);
let r = std::panic::catch_unwind(|| {
guest.wait_vm_boot(None).unwrap();
// close the fds after VM boots, as CH duplicates them before using
for tap in taps.iter() {
unsafe { libc::close(tap.as_raw_fd()) };
}
// Check the number of vCPUs
assert_eq!(guest.get_cpu_count().unwrap_or_default(), n_cpu);
// Check the guest RAM
assert!(guest.get_total_memory().unwrap_or_default() > 960_000);
// Check the guest virtio-devices, e.g. block, rng, vsock, console, and net
guest.check_devices_common(None, Some(&console_text), None);
snapshot_and_check_events(&api_socket_source, &snapshot_dir, &event_path);
});
// Shutdown the source VM and check console output
let _ = child.kill();
let output = child.wait_with_output().unwrap();
handle_child_output(r, &output);
let r = std::panic::catch_unwind(|| {
assert!(String::from_utf8_lossy(&output.stdout).contains(&console_text));
});
handle_child_output(r, &output);
let api_socket_restored = format!("{}.2", temp_api_path(&guest.tmp_dir));
let event_path_restored = format!("{}.2", temp_event_monitor_path(&guest.tmp_dir));
// Restore the VM from the snapshot
let mut child = GuestCommand::new(&guest)
.args(["--api-socket", &api_socket_restored])
.args([
"--event-monitor",
format!("path={event_path_restored}").as_str(),
])
.capture_output()
.spawn()
.unwrap();
thread::sleep(std::time::Duration::new(2, 0));
let taps = net_util::open_tap(
Some(tap_name),
Some(std::net::Ipv4Addr::from_str(&guest.network.host_ip).unwrap()),
None,
&mut None,
None,
num_queue_pairs,
Some(libc::O_RDWR | libc::O_NONBLOCK),
)
.unwrap();
let restore_params = format!(
"source_url=file://{},net_fds=[{}@[{},{}]]",
snapshot_dir,
net_id,
taps[0].as_raw_fd(),
taps[1].as_raw_fd()
);
assert!(remote_command(
&api_socket_restored,
"restore",
Some(restore_params.as_str())
));
// Wait for the VM to be restored
thread::sleep(std::time::Duration::new(20, 0));
// close the fds as CH duplicates them before using
for tap in taps.iter() {
unsafe { libc::close(tap.as_raw_fd()) };
}
let expected_events = [
&MetaEvent {
event: "starting".to_string(),
device_id: None,
},
&MetaEvent {
event: "activated".to_string(),
device_id: Some("__console".to_string()),
},
&MetaEvent {
event: "activated".to_string(),
device_id: Some("__rng".to_string()),
},
&MetaEvent {
event: "restoring".to_string(),
device_id: None,
},
];
assert!(check_sequential_events(
&expected_events,
&event_path_restored
));
let latest_events = [&MetaEvent {
event: "restored".to_string(),
device_id: None,
}];
assert!(check_latest_events_exact(
&latest_events,
&event_path_restored
));
// Remove the snapshot dir
let _ = remove_dir_all(snapshot_dir.as_str());
let r = std::panic::catch_unwind(|| {
// Resume the VM
assert!(remote_command(&api_socket_restored, "resume", None));
// There is no way that we can ensure the 'write()' to the
// event file is completed when the 'resume' request is
// returned successfully, because the 'write()' was done
// asynchronously from a different thread of Cloud
// Hypervisor (e.g. the event-monitor thread).
thread::sleep(std::time::Duration::new(1, 0));
let latest_events = [
&MetaEvent {
event: "resuming".to_string(),
device_id: None,
},
&MetaEvent {
event: "resumed".to_string(),
device_id: None,
},
];
assert!(check_latest_events_exact(
&latest_events,
&event_path_restored
));
// Perform same checks to validate VM has been properly restored
assert_eq!(guest.get_cpu_count().unwrap_or_default(), n_cpu);
assert!(guest.get_total_memory().unwrap_or_default() > 960_000);
guest.check_devices_common(None, Some(&console_text), None);
});
// Shutdown the target VM and check console output
let _ = child.kill();
let output = child.wait_with_output().unwrap();
handle_child_output(r, &output);
let r = std::panic::catch_unwind(|| {
assert!(String::from_utf8_lossy(&output.stdout).contains(&console_text));
});
handle_child_output(r, &output);
}
}
mod windows {

View File

@ -1,8 +1,8 @@
[package]
name = "tpm"
edition = "2021"
authors = ["Microsoft Authors"]
edition = "2021"
license = "Apache-2.0"
name = "tpm"
version = "0.1.0"
[dependencies]

View File

@ -1,8 +1,8 @@
[package]
name = "tracer"
version = "0.1.0"
authors = ["The Cloud Hypervisor Authors"]
edition = "2021"
name = "tracer"
version = "0.1.0"
[dependencies]
libc = "0.2.153"

View File

@ -1,13 +1,13 @@
[package]
authors = ["The Cloud Hypervisor Authors"]
build = "../build.rs"
edition = "2021"
name = "vhost_user_block"
version = "0.1.0"
authors = ["The Cloud Hypervisor Authors"]
edition = "2021"
build = "../build.rs"
[dependencies]
clap = { version = "4.5.4", features = ["wrap_help","cargo"] }
block = { path = "../block" }
clap = { version = "4.5.4", features = ["wrap_help", "cargo"] }
env_logger = "0.11.3"
epoll = "4.3.3"
libc = "0.2.153"

View File

@ -1,12 +1,12 @@
[package]
authors = ["The Cloud Hypervisor Authors"]
build = "../build.rs"
edition = "2021"
name = "vhost_user_net"
version = "0.1.0"
authors = ["The Cloud Hypervisor Authors"]
edition = "2021"
build = "../build.rs"
[dependencies]
clap = { version = "4.5.4", features = ["wrap_help","cargo"] }
clap = { version = "4.5.4", features = ["wrap_help", "cargo"] }
env_logger = "0.11.3"
epoll = "4.3.3"
libc = "0.2.153"

View File

@ -1,11 +1,12 @@
[package]
name = "virtio-devices"
version = "0.1.0"
authors = ["The Cloud Hypervisor Authors"]
edition = "2021"
name = "virtio-devices"
version = "0.1.0"
[features]
default = []
sev_snp = []
[dependencies]
anyhow = "1.0.81"
@ -23,15 +24,26 @@ rate_limiter = { path = "../rate_limiter" }
seccompiler = "0.4.0"
serde = { version = "1.0.197", features = ["derive"] }
serde_json = "1.0.115"
serde_with = { version = "3.7.0", default-features = false, features = ["macros"] }
serde_with = { version = "3.7.0", default-features = false, features = [
"macros",
] }
serial_buffer = { path = "../serial_buffer" }
thiserror = "1.0.60"
vhost = { version = "0.11.0", features = ["vhost-user-frontend", "vhost-user-backend", "vhost-kern", "vhost-vdpa"] }
vhost = { version = "0.11.0", features = [
"vhost-user-frontend",
"vhost-user-backend",
"vhost-kern",
"vhost-vdpa",
] }
virtio-bindings = { version = "0.2.2", features = ["virtio-v5_0_0"] }
virtio-queue = "0.12.0"
vm-allocator = { path = "../vm-allocator" }
vm-device = { path = "../vm-device" }
vm-memory = { version = "0.14.1", features = ["backend-mmap", "backend-atomic", "backend-bitmap"] }
vm-memory = { version = "0.14.1", features = [
"backend-mmap",
"backend-atomic",
"backend-bitmap",
] }
vm-migration = { path = "../vm-migration" }
vm-virtio = { path = "../vm-virtio" }
vmm-sys-util = "0.12.1"

8
virtio-devices/build.rs Normal file
View File

@ -0,0 +1,8 @@
// Copyright © 2024 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
//
fn main() {
println!("cargo::rustc-check-cfg=cfg(fuzzing)");
}

View File

@ -1,8 +1,8 @@
[package]
name = "vm-allocator"
version = "0.1.0"
authors = ["The Chromium OS Authors"]
edition = "2021"
name = "vm-allocator"
version = "0.1.0"
[dependencies]
libc = "0.2.153"

View File

@ -1,8 +1,8 @@
[package]
name = "vm-device"
version = "0.1.0"
authors = ["The Cloud Hypervisor Authors"]
edition = "2021"
name = "vm-device"
version = "0.1.0"
[features]
default = []
@ -12,8 +12,8 @@ mshv = ["vfio-ioctls/mshv"]
[dependencies]
anyhow = "1.0.81"
hypervisor = { path = "../hypervisor" }
thiserror = "1.0.60"
serde = { version = "1.0.197", features = ["rc", "derive"] }
thiserror = "1.0.60"
vfio-ioctls = { git = "https://github.com/rust-vmm/vfio", branch = "main", default-features = false }
vm-memory = { version = "0.14.1", features = ["backend-mmap"] }
vmm-sys-util = "0.12.1"

View File

@ -1,12 +1,15 @@
[package]
name = "vm-migration"
version = "0.1.0"
authors = ["The Cloud Hypervisor Authors"]
edition = "2021"
name = "vm-migration"
version = "0.1.0"
[dependencies]
anyhow = "1.0.81"
thiserror = "1.0.60"
serde = { version = "1.0.197", features = ["rc", "derive"] }
serde_json = "1.0.115"
vm-memory = { version = "0.14.1", features = ["backend-mmap", "backend-atomic"] }
thiserror = "1.0.60"
vm-memory = { version = "0.14.1", features = [
"backend-mmap",
"backend-atomic",
] }

View File

@ -1,8 +1,8 @@
[package]
name = "vm-virtio"
version = "0.1.0"
authors = ["The Cloud Hypervisor Authors"]
edition = "2021"
name = "vm-virtio"
version = "0.1.0"
[features]
default = []
@ -10,4 +10,8 @@ default = []
[dependencies]
log = "0.4.21"
virtio-queue = "0.12.0"
vm-memory = { version = "0.14.1", features = ["backend-mmap", "backend-atomic", "backend-bitmap"] }
vm-memory = { version = "0.14.1", features = [
"backend-mmap",
"backend-atomic",
"backend-bitmap",
] }

vmm/Cargo.toml

@@ -1,23 +1,24 @@
[package]
name = "vmm"
version = "0.1.0"
authors = ["The Cloud Hypervisor Authors"]
edition = "2021"
name = "vmm"
version = "0.1.0"
[features]
default = []
dbus_api = ["blocking", "futures", "zbus"]
default = []
dhat-heap = ["dhat"] # For heap profiling
guest_debug = ["kvm", "gdbstub", "gdbstub_arch"]
igvm = ["hex", "dep:igvm", "igvm_defs", "mshv-bindings", "range_map_vec"]
igvm = ["hex", "dep:igvm", "igvm_defs", "mshv-bindings", "range_map_vec"]
io_uring = ["block/io_uring"]
kvm = ["hypervisor/kvm", "vfio-ioctls/kvm", "vm-device/kvm", "pci/kvm"]
mshv = ["hypervisor/mshv", "vfio-ioctls/mshv", "vm-device/mshv", "pci/mshv"]
sev_snp = ["arch/sev_snp", "hypervisor/sev_snp"]
sev_snp = ["arch/sev_snp", "hypervisor/sev_snp", "virtio-devices/sev_snp"]
tdx = ["arch/tdx", "hypervisor/tdx"]
tracing = ["tracer/tracing"]
[dependencies]
acpi_tables = { git = "https://github.com/rust-vmm/acpi_tables", branch = "main" }
acpi_tables = { git = "https://github.com/rust-vmm/acpi_tables", branch = "main" }
anyhow = "1.0.81"
arc-swap = "1.7.1"
arch = { path = "../arch" }
@@ -27,6 +28,7 @@ blocking = { version = "1.5.1", optional = true }
cfg-if = "1.0.0"
clap = "4.5.4"
devices = { path = "../devices" }
dhat = { version = "0.3.3", optional = true }
epoll = "4.3.3"
event_monitor = { path = "../event_monitor" }
flume = "0.11.0"
@@ -35,13 +37,16 @@ gdbstub = { version = "0.7.1", optional = true }
gdbstub_arch = { version = "0.3.0", optional = true }
hex = { version = "0.4.3", optional = true }
hypervisor = { path = "../hypervisor" }
igvm = { version = "0.2.0", optional = true }
igvm_defs = { version = "0.2.0", optional = true }
igvm = { version = "0.2.0", optional = true }
igvm_defs = { version = "0.2.0", optional = true }
libc = "0.2.153"
linux-loader = { version = "0.11.0", features = ["elf", "bzimage", "pe"] }
log = "0.4.21"
micro_http = { git = "https://github.com/firecracker-microvm/micro-http", branch = "main" }
mshv-bindings = { git = "https://github.com/rust-vmm/mshv", branch = "main", features = ["with-serde", "fam-wrappers"], optional = true }
mshv-bindings = { git = "https://github.com/rust-vmm/mshv", branch = "main", features = [
"with-serde",
"fam-wrappers",
], optional = true }
net_util = { path = "../net_util" }
once_cell = "1.19.0"
option_parser = { path = "../option_parser" }
@@ -62,9 +67,13 @@ virtio-devices = { path = "../virtio-devices" }
virtio-queue = "0.12.0"
vm-allocator = { path = "../vm-allocator" }
vm-device = { path = "../vm-device" }
vm-memory = { version = "0.14.1", features = ["backend-mmap", "backend-atomic", "backend-bitmap"] }
vm-memory = { version = "0.14.1", features = [
"backend-mmap",
"backend-atomic",
"backend-bitmap",
] }
vm-migration = { path = "../vm-migration" }
vm-virtio = { path = "../vm-virtio" }
vmm-sys-util = { version = "0.12.1", features = ["with-serde"] }
zbus = { version = "3.15.2", optional = true }
zerocopy = { version = "0.7.32", features = ["alloc","derive"] }
zerocopy = { version = "0.7.32", features = ["alloc", "derive"] }

vmm/build.rs Normal file

@@ -0,0 +1,8 @@
// Copyright © 2024 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
//
fn main() {
println!("cargo::rustc-check-cfg=cfg(fuzzing)");
}

vmm/src/api/http/http_endpoint.rs

@@ -13,7 +13,7 @@ use crate::api::{
VmReboot, VmReceiveMigration, VmRemoveDevice, VmResize, VmResizeZone, VmRestore, VmResume,
VmSendMigration, VmShutdown, VmSnapshot,
};
use crate::config::NetConfig;
use crate::config::{NetConfig, RestoreConfig};
use micro_http::{Body, Method, Request, Response, StatusCode, Version};
use std::fs::File;
use std::os::unix::io::IntoRawFd;
@@ -184,7 +184,6 @@ vm_action_put_handler_body!(VmAddUserDevice);
vm_action_put_handler_body!(VmRemoveDevice);
vm_action_put_handler_body!(VmResize);
vm_action_put_handler_body!(VmResizeZone);
vm_action_put_handler_body!(VmRestore);
vm_action_put_handler_body!(VmSnapshot);
vm_action_put_handler_body!(VmReceiveMigration);
vm_action_put_handler_body!(VmSendMigration);
@@ -220,6 +219,53 @@ impl PutHandler for VmAddNet {
impl GetHandler for VmAddNet {}
impl PutHandler for VmRestore {
fn handle_request(
&'static self,
api_notifier: EventFd,
api_sender: Sender<ApiRequest>,
body: &Option<Body>,
mut files: Vec<File>,
) -> std::result::Result<Option<Body>, HttpError> {
if let Some(body) = body {
let mut restore_cfg: RestoreConfig = serde_json::from_slice(body.raw())?;
let mut fds = Vec::new();
if !files.is_empty() {
fds = files.drain(..).map(|f| f.into_raw_fd()).collect();
}
let expected_fds = match restore_cfg.net_fds {
Some(ref net_fds) => net_fds.iter().map(|net| net.num_fds).sum(),
None => 0,
};
if fds.len() != expected_fds {
error!(
"Number of FDs expected: {}, but received: {}",
expected_fds,
fds.len()
);
return Err(HttpError::BadRequest);
}
if let Some(ref mut nets) = restore_cfg.net_fds {
warn!("Ignoring FDs sent via the HTTP request body");
let mut start_idx = 0;
for restored_net in nets.iter_mut() {
let end_idx = start_idx + restored_net.num_fds;
restored_net.fds = Some(fds[start_idx..end_idx].to_vec());
start_idx = end_idx;
}
}
self.send(api_notifier, api_sender, restore_cfg)
.map_err(HttpError::ApiError)
} else {
Err(HttpError::BadRequest)
}
}
}
impl GetHandler for VmRestore {}
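To make the FD bookkeeping concrete: if the body declares "net0" with num_fds = 2 and "net1" with num_fds = 4, the handler requires exactly six descriptors via SCM_RIGHTS and slices fds[0..2] to "net0" and fds[2..6] to "net1". A minimal sketch of building the JSON body this handler deserializes (ids and path are illustrative):

// Sketch: the body declares only ids and FD counts; the descriptors
// themselves travel out-of-band over the UNIX socket via SCM_RIGHTS.
let cfg = RestoreConfig {
    source_url: std::path::PathBuf::from("/path/to/snapshot"),
    prefault: false,
    net_fds: Some(vec![
        RestoredNetConfig { id: "net0".to_string(), num_fds: 2, fds: None },
        RestoredNetConfig { id: "net1".to_string(), num_fds: 4, fds: None },
    ]),
};
let body = serde_json::to_string(&cfg).unwrap();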
// Common handler for boot, shutdown and reboot
pub struct VmActionHandler {
action: &'static dyn HttpVmAction,

vmm/src/config.rs

@@ -201,6 +201,10 @@ pub enum ValidationError {
InvalidIoPortHex(String),
#[cfg(feature = "sev_snp")]
InvalidHostData,
/// Restore expects an entry for every net id that is backed by FDs
RestoreMissingRequiredNetId(String),
/// Number of FDs passed during Restore does not match the NetConfig
RestoreNetFdCountMismatch(String, usize, usize),
}
type ValidationResult<T> = std::result::Result<T, ValidationError>;
@@ -343,6 +347,15 @@ impl fmt::Display for ValidationError {
InvalidHostData => {
write!(f, "Invalid host data format")
}
RestoreMissingRequiredNetId(s) => {
write!(f, "Net id {s} is associated with FDs and is required")
}
RestoreNetFdCountMismatch(s, u1, u2) => {
write!(
f,
"Number of Net FDs passed for '{s}' during Restore: {u1}. Expected: {u2}"
)
}
}
}
}
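For instance, a count mismatch for "net0" renders as follows (a small sketch exercising the Display impl above):

// Sketch: Display output for RestoreNetFdCountMismatch.
let e = ValidationError::RestoreNetFdCountMismatch("net0".to_string(), 2, 4);
assert_eq!(
    e.to_string(),
    "Number of Net FDs passed for 'net0' during Restore: 2. Expected: 4"
);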
@@ -2130,22 +2143,71 @@ impl NumaConfig {
}
}
#[derive(Clone, Debug, PartialEq, Eq, Deserialize, Serialize, Default)]
pub struct RestoredNetConfig {
pub id: String,
#[serde(default)]
pub num_fds: usize,
#[serde(
default,
serialize_with = "serialize_restorednetconfig_fds",
deserialize_with = "deserialize_restorednetconfig_fds"
)]
pub fds: Option<Vec<i32>>,
}
fn serialize_restorednetconfig_fds<S>(
x: &Option<Vec<i32>>,
s: S,
) -> std::result::Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
if let Some(x) = x {
warn!("'RestoredNetConfig' contains FDs that can't be serialized correctly. Serializing them as invalid FDs.");
let invalid_fds = vec![-1; x.len()];
s.serialize_some(&invalid_fds)
} else {
s.serialize_none()
}
}
fn deserialize_restorednetconfig_fds<'de, D>(
d: D,
) -> std::result::Result<Option<Vec<i32>>, D::Error>
where
D: serde::Deserializer<'de>,
{
let invalid_fds: Option<Vec<i32>> = Option::deserialize(d)?;
if let Some(invalid_fds) = invalid_fds {
warn!("'RestoredNetConfig' contains FDs that can't be deserialized correctly. Deserializing them as invalid FDs.");
Ok(Some(vec![-1; invalid_fds.len()]))
} else {
Ok(None)
}
}
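Taken together, the two helpers deliberately degrade FD values across a serialize/deserialize round trip, since descriptor numbers are only meaningful inside the process that owns them. A quick sketch of the observable behavior:

// Sketch: the FD count survives the round trip, the process-local
// values do not; they come back as the -1 placeholder.
let net = RestoredNetConfig {
    id: "net0".to_string(),
    num_fds: 2,
    fds: Some(vec![3, 4]),
};
let json = serde_json::to_string(&net).unwrap();
let back: RestoredNetConfig = serde_json::from_str(&json).unwrap();
assert_eq!(back.fds, Some(vec![-1, -1]));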
#[derive(Clone, Debug, PartialEq, Eq, Deserialize, Serialize, Default)]
pub struct RestoreConfig {
pub source_url: PathBuf,
#[serde(default)]
pub prefault: bool,
#[serde(default)]
pub net_fds: Option<Vec<RestoredNetConfig>>,
}
impl RestoreConfig {
pub const SYNTAX: &'static str = "Restore from a VM snapshot. \
\nRestore parameters \"source_url=<source_url>,prefault=on|off\" \
\nRestore parameters \"source_url=<source_url>,prefault=on|off,\
net_fds=<list_of_net_ids_with_their_associated_fds>\" \
\n`source_url` should be a valid URL (e.g file:///foo/bar or tcp://192.168.1.10/foo) \
\n`prefault` brings memory pages in when enabled (disabled by default)";
\n`prefault` brings memory pages in when enabled (disabled by default) \
\n`net_fds` is a list of net ids with new file descriptors. \
Only net devices backed by FDs directly are needed as input.";
pub fn parse(restore: &str) -> Result<Self> {
let mut parser = OptionParser::new();
parser.add("source_url").add("prefault");
parser.add("source_url").add("prefault").add("net_fds");
parser.parse(restore).map_err(Error::ParseRestore)?;
let source_url = parser
@@ -2157,12 +2219,70 @@ impl RestoreConfig {
.map_err(Error::ParseRestore)?
.unwrap_or(Toggle(false))
.0;
let net_fds = parser
.convert::<Tuple<String, Vec<u64>>>("net_fds")
.map_err(Error::ParseRestore)?
.map(|v| {
v.0.iter()
.map(|(id, fds)| RestoredNetConfig {
id: id.clone(),
num_fds: fds.len(),
fds: Some(fds.iter().map(|e| *e as i32).collect()),
})
.collect()
});
Ok(RestoreConfig {
source_url,
prefault,
net_fds,
})
}
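In the option-string form, each net id carries its FD list inline. A minimal sketch mirroring the unit test further down:

// Sketch: "net0" parses to num_fds = 2 and fds = Some(vec![3, 4]).
let cfg = RestoreConfig::parse(
    "source_url=/path/to/snapshot,prefault=on,net_fds=[net0@[3,4]]",
)
.unwrap();
assert_eq!(cfg.net_fds.unwrap()[0].num_fds, 2);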
// Ensure all net devices from 'VmConfig' backed by FDs have a
// corresponding 'RestoredNetConfig' with a matching 'id' and the
// expected number of FDs.
pub fn validate(&self, vm_config: &VmConfig) -> ValidationResult<()> {
let mut restored_net_with_fds = HashMap::new();
for n in self.net_fds.iter().flatten() {
assert_eq!(
n.num_fds,
n.fds.as_ref().map_or(0, |f| f.len()),
"Invalid 'RestoredNetConfig' with conflicted fields."
);
if restored_net_with_fds.insert(n.id.clone(), n).is_some() {
return Err(ValidationError::IdentifierNotUnique(n.id.clone()));
}
}
for net_fds in vm_config.net.iter().flatten() {
if let Some(expected_fds) = &net_fds.fds {
let expected_id = net_fds
.id
.as_ref()
.expect("Invalid 'NetConfig' with empty 'id' for VM restore.");
if let Some(r) = restored_net_with_fds.remove(expected_id) {
if r.num_fds != expected_fds.len() {
return Err(ValidationError::RestoreNetFdCountMismatch(
expected_id.clone(),
r.num_fds,
expected_fds.len(),
));
}
} else {
return Err(ValidationError::RestoreMissingRequiredNetId(
expected_id.clone(),
));
}
}
}
if !restored_net_with_fds.is_empty() {
warn!("Ignoring unused 'net_fds' for VM restore.")
}
Ok(())
}
}
impl TpmConfig {
@@ -3570,6 +3690,183 @@ mod tests {
Ok(())
}
#[test]
fn test_restore_parsing() -> Result<()> {
assert_eq!(
RestoreConfig::parse("source_url=/path/to/snapshot")?,
RestoreConfig {
source_url: PathBuf::from("/path/to/snapshot"),
prefault: false,
net_fds: None,
}
);
assert_eq!(
RestoreConfig::parse(
"source_url=/path/to/snapshot,prefault=off,net_fds=[net0@[3,4],net1@[5,6,7,8]]"
)?,
RestoreConfig {
source_url: PathBuf::from("/path/to/snapshot"),
prefault: false,
net_fds: Some(vec![
RestoredNetConfig {
id: "net0".to_string(),
num_fds: 2,
fds: Some(vec![3, 4]),
},
RestoredNetConfig {
id: "net1".to_string(),
num_fds: 4,
fds: Some(vec![5, 6, 7, 8]),
}
]),
}
);
// Parsing should fail as source_url is a required field
assert!(RestoreConfig::parse("prefault=off").is_err());
Ok(())
}
#[test]
fn test_restore_config_validation() {
// We are only interested in VmConfig.net, so set the rest to default values
let mut snapshot_vm_config = VmConfig {
cpus: CpusConfig::default(),
memory: MemoryConfig::default(),
payload: None,
rate_limit_groups: None,
disks: None,
rng: RngConfig::default(),
balloon: None,
fs: None,
pmem: None,
serial: default_serial(),
console: default_console(),
#[cfg(target_arch = "x86_64")]
debug_console: DebugConsoleConfig::default(),
devices: None,
user_devices: None,
vdpa: None,
vsock: None,
pvpanic: false,
iommu: false,
#[cfg(target_arch = "x86_64")]
sgx_epc: None,
numa: None,
watchdog: false,
#[cfg(feature = "guest_debug")]
gdb: false,
pci_segments: None,
platform: None,
tpm: None,
preserved_fds: None,
net: Some(vec![
NetConfig {
id: Some("net0".to_owned()),
num_queues: 2,
fds: Some(vec![-1, -1, -1, -1]),
..net_fixture()
},
NetConfig {
id: Some("net1".to_owned()),
num_queues: 1,
fds: Some(vec![-1, -1]),
..net_fixture()
},
NetConfig {
id: Some("net2".to_owned()),
fds: None,
..net_fixture()
},
]),
};
let valid_config = RestoreConfig {
source_url: PathBuf::from("/path/to/snapshot"),
prefault: false,
net_fds: Some(vec![
RestoredNetConfig {
id: "net0".to_string(),
num_fds: 4,
fds: Some(vec![3, 4, 5, 6]),
},
RestoredNetConfig {
id: "net1".to_string(),
num_fds: 2,
fds: Some(vec![7, 8]),
},
]),
};
assert!(valid_config.validate(&snapshot_vm_config).is_ok());
let mut invalid_config = valid_config.clone();
invalid_config.net_fds = Some(vec![RestoredNetConfig {
id: "netx".to_string(),
num_fds: 4,
fds: Some(vec![3, 4, 5, 6]),
}]);
assert_eq!(
invalid_config.validate(&snapshot_vm_config),
Err(ValidationError::RestoreMissingRequiredNetId(
"net0".to_string()
))
);
invalid_config.net_fds = Some(vec![
RestoredNetConfig {
id: "net0".to_string(),
num_fds: 4,
fds: Some(vec![3, 4, 5, 6]),
},
RestoredNetConfig {
id: "net0".to_string(),
num_fds: 4,
fds: Some(vec![3, 4, 5, 6]),
},
]);
assert_eq!(
invalid_config.validate(&snapshot_vm_config),
Err(ValidationError::IdentifierNotUnique("net0".to_string()))
);
invalid_config.net_fds = Some(vec![RestoredNetConfig {
id: "net0".to_string(),
num_fds: 4,
fds: Some(vec![3, 4, 5, 6]),
}]);
assert_eq!(
invalid_config.validate(&snapshot_vm_config),
Err(ValidationError::RestoreMissingRequiredNetId(
"net1".to_string()
))
);
invalid_config.net_fds = Some(vec![RestoredNetConfig {
id: "net0".to_string(),
num_fds: 2,
fds: Some(vec![3, 4]),
}]);
assert_eq!(
invalid_config.validate(&snapshot_vm_config),
Err(ValidationError::RestoreNetFdCountMismatch(
"net0".to_string(),
2,
4
))
);
let another_valid_config = RestoreConfig {
source_url: PathBuf::from("/path/to/snapshot"),
prefault: false,
net_fds: None,
};
snapshot_vm_config.net = Some(vec![NetConfig {
id: Some("net2".to_owned()),
fds: None,
..net_fixture()
}]);
assert!(another_valid_config.validate(&snapshot_vm_config).is_ok());
}
fn platform_fixture() -> PlatformConfig {
PlatformConfig {
num_pci_segments: MAX_NUM_PCI_SEGMENTS,

vmm/src/lib.rs

@@ -1321,6 +1321,24 @@ impl RequestHandler for Vmm {
let vm_config = Arc::new(Mutex::new(
recv_vm_config(source_url).map_err(VmError::Restore)?,
));
restore_cfg
.validate(&vm_config.lock().unwrap().clone())
.map_err(VmError::ConfigValidation)?;
// Update the VM's net configurations with the new FDs received for the restore operation
if let (Some(restored_nets), Some(vm_net_configs)) =
(restore_cfg.net_fds, &mut vm_config.lock().unwrap().net)
{
for net in restored_nets.iter() {
for net_config in vm_net_configs.iter_mut() {
// Update only if the net device is backed by FDs
if net_config.id == Some(net.id.clone()) && net_config.fds.is_some() {
net_config.fds.clone_from(&net.fds);
}
}
}
}
let snapshot = recv_vm_state(source_url).map_err(VmError::Restore)?;
#[cfg(all(feature = "kvm", target_arch = "x86_64"))]
let vm_snapshot = get_vm_snapshot(&snapshot).map_err(VmError::Restore)?;