Compare commits

...

12 Commits

Author SHA1 Message Date
Purna Pavan Chandra bda0846e8d
Merge 7a0b3ac9d0 into 241d1d5cdb 2024-05-09 13:07:04 +00:00
Purna Pavan Chandra 7a0b3ac9d0 tests: add back test_snapshot_restore* tests, but to common_sequential
test_snapshot_restore* tests were earlier removed from common_parallel
due to the flakiness they added to the testsuite. Running them
sequentially eliminates that flakiness. Hence, add the tests back to the
testsuite, but into the common_sequential module.

Signed-off-by: Purna Pavan Chandra <paekkaladevi@linux.microsoft.com>
2024-05-09 13:06:34 +00:00
Purna Pavan Chandra 67cfa5323d tests: remove test_snapshot_restore* tests from common_parallel
test_snapshot_restore_* tests often have transient failures and add to the
overall flakiness of the integration testsuite. Hence, remove them from
common_parallel. However, these tests need to be added back to
common_sequential.

Signed-off-by: Purna Pavan Chandra <paekkaladevi@linux.microsoft.com>
2024-05-09 12:46:16 +00:00
Purna Pavan Chandra 42442c1f62 tests: Add test_snapshot_restore_with_fd to integration tests
A VM is created with FDs explicitly passed to CH via the --net parameter
and then snapshotted. New net FDs are passed in turn during restore. The
boilerplate code comes from _test_snapshot_restore().

Signed-off-by: Purna Pavan Chandra <paekkaladevi@linux.microsoft.com>
2024-05-09 09:55:50 +00:00
Purna Pavan Chandra b74c586c37 docs: Update snapshot/restore documentation
Add a section about restoring a VM with new net FDs explicitly passed to
ch-remote via the 'net_fds' parameter.

Signed-off-by: Purna Pavan Chandra <paekkaladevi@linux.microsoft.com>
2024-05-09 09:03:02 +00:00
Wei Liu 241d1d5cdb hypervisor: kvm: add missing capability requirements
The list was gathered by going through various code paths in the code
base.

No functional change intended.

Signed-off-by: Wei Liu <liuwe@microsoft.com>
2024-05-09 06:50:57 +00:00
Wei Liu c07671edb4 hypervisor: kvm: introduce a check_extension macro
That reduces code repetition.

Signed-off-by: Wei Liu <liuwe@microsoft.com>
2024-05-09 06:50:57 +00:00
Wei Liu 8093820965 hypervisor: kvm: sort the required capabilities
No functional change.

Signed-off-by: Wei Liu <liuwe@microsoft.com>
2024-05-09 06:50:57 +00:00
Wei Liu 86cf50565e hypervisor: kvm: drop the check for Cap::SignalMsi
Per the KVM API documentation, that capability is only valid with an
in-kernel irqchip that handles MSIs.

Throughout the code base, there is no call to KVM_IOCTL_SIGNAL_MSI.

Signed-off-by: Wei Liu <liuwe@microsoft.com>
2024-05-09 06:50:57 +00:00
Purna Pavan Chandra e9ac5851bf ch-remote: allow fds to be sent along with 'restore'
Give the restore command the ability to send file descriptors along with
the HTTP request. This is useful when restoring a VM with explicit FDs
passed to NetConfig(s).

Signed-off-by: Purna Pavan Chandra <paekkaladevi@linux.microsoft.com>
2024-05-08 08:29:06 +00:00
Purna Pavan Chandra f7f2b95a8d vmm: http_endpoint: Change PutHandler for VmRestore
Consume FDs passed via SCM_RIGHTS to the VmRestore API and assign them
appropriately to RestoredNetConfig's fds field.

Signed-off-by: Purna Pavan Chandra <paekkaladevi@linux.microsoft.com>
2024-05-08 08:29:06 +00:00
Purna Pavan Chandra a68e3b768a vmm: Support passing Net FDs to Restore
'NetConfig' FDs, when explicitly passed via SCM_RIGHTS during VM
creation, are marked as invalid during snapshot. See: #6332.
Restore therefore needs to accept new net FDs as input. This patch adds a
new field, 'net_fds', to 'RestoreConfig'. The FDs passed via this new
field are placed into the 'fds' field of the matching NetConfig.

The 'validate()' function ensures all net devices from 'VmConfig' backed
by FDs have a corresponding 'RestoredNetConfig' with a matching 'id' and
the expected number of FDs.

The unit tests feed various inputs to the parse and validate functions to
make sure parsing and error handling behave as expected.

Fixes: #6286

Signed-off-by: Purna Pavan Chandra <paekkaladevi@linux.microsoft.com>
Co-authored-by: Bo Chen <chen.bo@intel.com>
2024-05-08 08:29:06 +00:00
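As a quick sketch of the resulting syntax (relying only on the `RestoreConfig::parse` behavior added by this series and exercised in the unit tests below):

```rust
// Sketch: parse a restore string carrying net_fds, as ch-remote does.
// Assumes this tree's vmm crate is available as a dependency.
use vmm::config::RestoreConfig;

fn main() {
    let cfg = RestoreConfig::parse(
        "source_url=file:///home/foo/snapshot,net_fds=[net1@[23,24],net2@[25,26]]",
    )
    .unwrap();
    let nets = cfg.net_fds.as_ref().unwrap();
    // Each entry records the net device id, the FD count, and the FDs themselves.
    assert_eq!(nets[0].id, "net1");
    assert_eq!(nets[0].num_fds, 2);
    assert_eq!(nets[0].fds, Some(vec![23, 24]));
}
```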
8 changed files with 970 additions and 346 deletions

View File

@@ -63,7 +63,7 @@ component in the state it was left before the snapshot occurred.
## Restore a Cloud Hypervisor VM
Given that one has access to an existing snapshot in `/home/foo/snapshot`,
it is possible to create a new VM based on this snapshot with the following
command:
```bash
@@ -93,6 +93,21 @@ start using it.
At this point, the VM is fully restored and is identical to the VM which was
snapshotted earlier.
## Restore a VM with new Net FDs
For a VM created with FDs explicitly passed to NetConfig, a set of valid FDs
needs to be provided along with the VM restore command, using the following syntax:
```bash
# First terminal
./cloud-hypervisor --api-socket /tmp/cloud-hypervisor.sock
# Second terminal
./ch-remote --api-socket=/tmp/cloud-hypervisor.sock restore source_url=file:///home/foo/snapshot net_fds=[net1@[23,24],net2@[25,26]]
```
In the example above, the net device with id `net1` will be backed by FDs '23'
and '24', and the net device with id `net2` will be backed by FDs '25' and '26'
in the restored VM. These FDs must be valid in the process invoking ch-remote,
which transmits them to the Cloud Hypervisor process via SCM_RIGHTS.
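How the FDs are obtained is up to the caller. As a rough sketch (mirroring the integration test added by this series, which uses this tree's `net_util::open_tap` helper; the TAP name and IP below are illustrative), one can open a multi-queue TAP device and hand its raw FDs to `ch-remote`:
```rust
// Sketch: open a TAP device with two queue pairs and print the raw FDs,
// which could then be passed as `net_fds=[net1@[<fd0>,<fd1>]]`.
// Assumes this tree's net_util crate and the libc crate.
use std::os::unix::io::AsRawFd;
use std::str::FromStr;

fn main() {
    let num_queue_pairs: usize = 2;
    let taps = net_util::open_tap(
        Some("chtap0"), // illustrative TAP name
        Some(std::net::Ipv4Addr::from_str("192.168.249.1").unwrap()), // illustrative IP
        None,
        &mut None,
        None,
        num_queue_pairs,
        Some(libc::O_RDWR | libc::O_NONBLOCK),
    )
    .unwrap();
    for tap in &taps {
        println!("tap fd: {}", tap.as_raw_fd());
    }
}
```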
## Limitations
VFIO devices and Intel SGX are out of scope.

View File

@@ -106,12 +106,23 @@ pub fn is_system_register(regid: u64) -> bool {
}
pub fn check_required_kvm_extensions(kvm: &Kvm) -> KvmResult<()> {
if !kvm.check_extension(Cap::SignalMsi) {
return Err(KvmError::CapabilityMissing(Cap::SignalMsi));
}
if !kvm.check_extension(Cap::OneReg) {
return Err(KvmError::CapabilityMissing(Cap::OneReg));
macro_rules! check_extension {
($cap:expr) => {
if !kvm.check_extension($cap) {
return Err(KvmError::CapabilityMissing($cap));
}
};
}
// SetGuestDebug is required but some kernels have it implemented without the capability flag.
check_extension!(Cap::ImmediateExit);
check_extension!(Cap::Ioeventfd);
check_extension!(Cap::Irqchip);
check_extension!(Cap::Irqfd);
check_extension!(Cap::IrqRouting);
check_extension!(Cap::MpState);
check_extension!(Cap::OneReg);
check_extension!(Cap::UserMemory);
Ok(())
}

View File

@@ -32,29 +32,37 @@ pub use {
/// Check KVM extension for Linux
///
pub fn check_required_kvm_extensions(kvm: &Kvm) -> KvmResult<()> {
if !kvm.check_extension(Cap::SignalMsi) {
return Err(KvmError::CapabilityMissing(Cap::SignalMsi));
}
if !kvm.check_extension(Cap::TscDeadlineTimer) {
return Err(KvmError::CapabilityMissing(Cap::TscDeadlineTimer));
}
if !kvm.check_extension(Cap::SplitIrqchip) {
return Err(KvmError::CapabilityMissing(Cap::SplitIrqchip));
}
if !kvm.check_extension(Cap::SetIdentityMapAddr) {
return Err(KvmError::CapabilityMissing(Cap::SetIdentityMapAddr));
}
if !kvm.check_extension(Cap::SetTssAddr) {
return Err(KvmError::CapabilityMissing(Cap::SetTssAddr));
}
if !kvm.check_extension(Cap::ImmediateExit) {
return Err(KvmError::CapabilityMissing(Cap::ImmediateExit));
}
if !kvm.check_extension(Cap::GetTscKhz) {
return Err(KvmError::CapabilityMissing(Cap::GetTscKhz));
macro_rules! check_extension {
($cap:expr) => {
if !kvm.check_extension($cap) {
return Err(KvmError::CapabilityMissing($cap));
}
};
}
// DeviceCtrl, EnableCap, and SetGuestDebug are also required, but some kernels have
// the features implemented without the capability flags.
check_extension!(Cap::AdjustClock);
check_extension!(Cap::ExtCpuid);
check_extension!(Cap::GetTscKhz);
check_extension!(Cap::ImmediateExit);
check_extension!(Cap::Ioeventfd);
check_extension!(Cap::Irqchip);
check_extension!(Cap::Irqfd);
check_extension!(Cap::IrqRouting);
check_extension!(Cap::MpState);
check_extension!(Cap::SetIdentityMapAddr);
check_extension!(Cap::SetTssAddr);
check_extension!(Cap::SplitIrqchip);
check_extension!(Cap::TscDeadlineTimer);
check_extension!(Cap::UserMemory);
check_extension!(Cap::UserNmi);
check_extension!(Cap::VcpuEvents);
check_extension!(Cap::Xcrs);
check_extension!(Cap::Xsave);
Ok(())
}
#[derive(Clone, Serialize, Deserialize)]
pub struct VcpuKvmState {
pub cpuid: Vec<CpuIdEntry>,

View File

@@ -445,14 +445,14 @@ fn rest_api_do_command(matches: &ArgMatches, socket: &mut UnixStream) -> ApiResu
.map_err(Error::HttpApiClient)
}
Some("restore") => {
let restore_config = restore_config(
let (restore_config, fds) = restore_config(
matches
.subcommand_matches("restore")
.unwrap()
.get_one::<String>("restore_config")
.unwrap(),
)?;
simple_api_command(socket, "PUT", "restore", Some(&restore_config))
simple_api_command_with_fds(socket, "PUT", "restore", Some(&restore_config), fds)
.map_err(Error::HttpApiClient)
}
Some("coredump") => {
@@ -661,7 +661,7 @@ fn dbus_api_do_command(matches: &ArgMatches, proxy: &DBusApi1ProxyBlocking<'_>)
proxy.api_vm_snapshot(&snapshot_config)
}
Some("restore") => {
let restore_config = restore_config(
let (restore_config, _fds) = restore_config(
matches
.subcommand_matches("restore")
.unwrap()
@@ -849,11 +849,20 @@ fn snapshot_config(url: &str) -> String {
serde_json::to_string(&snapshot_config).unwrap()
}
fn restore_config(config: &str) -> Result<String, Error> {
let restore_config = vmm::config::RestoreConfig::parse(config).map_err(Error::Restore)?;
fn restore_config(config: &str) -> Result<(String, Vec<i32>), Error> {
let mut restore_config = vmm::config::RestoreConfig::parse(config).map_err(Error::Restore)?;
// RestoreConfig is modified on purpose to take out the file descriptors.
// These FDs are passed to the server-side process via SCM_RIGHTS.
let fds = match &mut restore_config.net_fds {
Some(net_fds) => net_fds
.iter_mut()
.flat_map(|net| net.fds.take().unwrap_or_default())
.collect(),
None => Vec::new(),
};
let restore_config = serde_json::to_string(&restore_config).unwrap();
Ok(restore_config)
Ok((restore_config, fds))
}
fn coredump_config(destination_url: &str) -> String {
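For context, `simple_api_command_with_fds` attaches these FDs to the request as SCM_RIGHTS ancillary data on the UNIX socket. A minimal sketch of that mechanism (not the actual api_client implementation; assumes the `nix` crate with its 0.27-style socket API):

```rust
// Sketch: send a request body plus FDs over a UNIX socket via SCM_RIGHTS.
use std::io::IoSlice;
use std::os::unix::io::{AsRawFd, RawFd};
use std::os::unix::net::UnixStream;

use nix::sys::socket::{sendmsg, ControlMessage, MsgFlags};

fn send_request_with_fds(sock: &UnixStream, request: &[u8], fds: &[RawFd]) -> nix::Result<usize> {
    let iov = [IoSlice::new(request)];
    // The FDs travel as ancillary data; the kernel installs duplicates in the
    // receiving process, so the sender may close its copies afterwards.
    let cmsgs = [ControlMessage::ScmRights(fds)];
    sendmsg::<()>(sock.as_raw_fd(), &iov, &cmsgs, MsgFlags::empty(), None)
}
```

On the server side, micro_http surfaces the received descriptors as `File`s, which the `VmRestore` handler below drains back into raw FDs.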

View File

@@ -2344,10 +2344,7 @@ fn make_guest_panic(guest: &Guest) {
}
mod common_parallel {
use std::{
fs::{remove_dir_all, OpenOptions},
io::SeekFrom,
};
use std::{fs::OpenOptions, io::SeekFrom};
use crate::*;
@@ -5989,310 +5986,6 @@ mod common_parallel {
});
}
// One thing to note about this test. The virtio-net device is heavily used
// through each ssh command. There's no need to perform a dedicated test to
// verify the migration went well for virtio-net.
#[test]
#[cfg(not(feature = "mshv"))]
fn test_snapshot_restore_hotplug_virtiomem() {
_test_snapshot_restore(true);
}
#[test]
fn test_snapshot_restore_basic() {
_test_snapshot_restore(false);
}
fn _test_snapshot_restore(use_hotplug: bool) {
let focal = UbuntuDiskConfig::new(FOCAL_IMAGE_NAME.to_string());
let guest = Guest::new(Box::new(focal));
let kernel_path = direct_kernel_boot_path();
let api_socket_source = format!("{}.1", temp_api_path(&guest.tmp_dir));
let net_id = "net123";
let net_params = format!(
"id={},tap=,mac={},ip={},mask=255.255.255.0",
net_id, guest.network.guest_mac, guest.network.host_ip
);
let mut mem_params = "size=2G";
if use_hotplug {
mem_params = "size=2G,hotplug_method=virtio-mem,hotplug_size=32G"
}
let cloudinit_params = format!(
"path={},iommu=on",
guest.disk_config.disk(DiskType::CloudInit).unwrap()
);
let socket = temp_vsock_path(&guest.tmp_dir);
let event_path = temp_event_monitor_path(&guest.tmp_dir);
let mut child = GuestCommand::new(&guest)
.args(["--api-socket", &api_socket_source])
.args(["--event-monitor", format!("path={event_path}").as_str()])
.args(["--cpus", "boot=4"])
.args(["--memory", mem_params])
.args(["--balloon", "size=0"])
.args(["--kernel", kernel_path.to_str().unwrap()])
.args([
"--disk",
format!(
"path={}",
guest.disk_config.disk(DiskType::OperatingSystem).unwrap()
)
.as_str(),
cloudinit_params.as_str(),
])
.args(["--net", net_params.as_str()])
.args(["--vsock", format!("cid=3,socket={socket}").as_str()])
.args(["--cmdline", DIRECT_KERNEL_BOOT_CMDLINE])
.capture_output()
.spawn()
.unwrap();
let console_text = String::from("On a branch floating down river a cricket, singing.");
// Create the snapshot directory
let snapshot_dir = temp_snapshot_dir_path(&guest.tmp_dir);
let r = std::panic::catch_unwind(|| {
guest.wait_vm_boot(None).unwrap();
// Check the number of vCPUs
assert_eq!(guest.get_cpu_count().unwrap_or_default(), 4);
// Check the guest RAM
assert!(guest.get_total_memory().unwrap_or_default() > 1_920_000);
if use_hotplug {
// Increase guest RAM with virtio-mem
resize_command(
&api_socket_source,
None,
Some(6 << 30),
None,
Some(&event_path),
);
thread::sleep(std::time::Duration::new(5, 0));
assert!(guest.get_total_memory().unwrap_or_default() > 5_760_000);
// Use balloon to remove RAM from the VM
resize_command(
&api_socket_source,
None,
None,
Some(1 << 30),
Some(&event_path),
);
thread::sleep(std::time::Duration::new(5, 0));
let total_memory = guest.get_total_memory().unwrap_or_default();
assert!(total_memory > 4_800_000);
assert!(total_memory < 5_760_000);
}
// Check the guest virtio-devices, e.g. block, rng, vsock, console, and net
guest.check_devices_common(Some(&socket), Some(&console_text), None);
// x86_64: We check that removing and adding back the virtio-net device
// does not break the snapshot/restore support for virtio-pci.
// This is an important thing to test as the hotplug will
// trigger a PCI BAR reprogramming, which is a good way of
// checking if the stored resources are correctly restored.
// Unplug the virtio-net device
// AArch64: Device hotplug is currently not supported, skipping here.
#[cfg(target_arch = "x86_64")]
{
assert!(remote_command(
&api_socket_source,
"remove-device",
Some(net_id),
));
thread::sleep(std::time::Duration::new(10, 0));
let latest_events = [&MetaEvent {
event: "device-removed".to_string(),
device_id: Some(net_id.to_string()),
}];
// See: #5938
thread::sleep(std::time::Duration::new(1, 0));
assert!(check_latest_events_exact(&latest_events, &event_path));
// Plug the virtio-net device again
assert!(remote_command(
&api_socket_source,
"add-net",
Some(net_params.as_str()),
));
thread::sleep(std::time::Duration::new(10, 0));
}
// Pause the VM
assert!(remote_command(&api_socket_source, "pause", None));
let latest_events = [
&MetaEvent {
event: "pausing".to_string(),
device_id: None,
},
&MetaEvent {
event: "paused".to_string(),
device_id: None,
},
];
// See: #5938
thread::sleep(std::time::Duration::new(1, 0));
assert!(check_latest_events_exact(&latest_events, &event_path));
// Take a snapshot from the VM
assert!(remote_command(
&api_socket_source,
"snapshot",
Some(format!("file://{snapshot_dir}").as_str()),
));
// Wait to make sure the snapshot is completed
thread::sleep(std::time::Duration::new(10, 0));
let latest_events = [
&MetaEvent {
event: "snapshotting".to_string(),
device_id: None,
},
&MetaEvent {
event: "snapshotted".to_string(),
device_id: None,
},
];
// See: #5938
thread::sleep(std::time::Duration::new(1, 0));
assert!(check_latest_events_exact(&latest_events, &event_path));
});
// Shutdown the source VM and check console output
let _ = child.kill();
let output = child.wait_with_output().unwrap();
handle_child_output(r, &output);
let r = std::panic::catch_unwind(|| {
assert!(String::from_utf8_lossy(&output.stdout).contains(&console_text));
});
handle_child_output(r, &output);
// Remove the vsock socket file.
Command::new("rm")
.arg("-f")
.arg(socket.as_str())
.output()
.unwrap();
let api_socket_restored = format!("{}.2", temp_api_path(&guest.tmp_dir));
let event_path_restored = format!("{}.2", temp_event_monitor_path(&guest.tmp_dir));
// Restore the VM from the snapshot
let mut child = GuestCommand::new(&guest)
.args(["--api-socket", &api_socket_restored])
.args([
"--event-monitor",
format!("path={event_path_restored}").as_str(),
])
.args([
"--restore",
format!("source_url=file://{snapshot_dir}").as_str(),
])
.capture_output()
.spawn()
.unwrap();
// Wait for the VM to be restored
thread::sleep(std::time::Duration::new(20, 0));
let expected_events = [
&MetaEvent {
event: "starting".to_string(),
device_id: None,
},
&MetaEvent {
event: "activated".to_string(),
device_id: Some("__console".to_string()),
},
&MetaEvent {
event: "activated".to_string(),
device_id: Some("__rng".to_string()),
},
&MetaEvent {
event: "restoring".to_string(),
device_id: None,
},
];
assert!(check_sequential_events(
&expected_events,
&event_path_restored
));
let latest_events = [&MetaEvent {
event: "restored".to_string(),
device_id: None,
}];
assert!(check_latest_events_exact(
&latest_events,
&event_path_restored
));
// Remove the snapshot dir
let _ = remove_dir_all(snapshot_dir.as_str());
let r = std::panic::catch_unwind(|| {
// Resume the VM
assert!(remote_command(&api_socket_restored, "resume", None));
// There is no way that we can ensure the 'write()' to the
// event file is completed when the 'resume' request is
// returned successfully, because the 'write()' was done
// asynchronously from a different thread of Cloud
// Hypervisor (e.g. the event-monitor thread).
thread::sleep(std::time::Duration::new(1, 0));
let latest_events = [
&MetaEvent {
event: "resuming".to_string(),
device_id: None,
},
&MetaEvent {
event: "resumed".to_string(),
device_id: None,
},
];
assert!(check_latest_events_exact(
&latest_events,
&event_path_restored
));
// Perform same checks to validate VM has been properly restored
assert_eq!(guest.get_cpu_count().unwrap_or_default(), 4);
let total_memory = guest.get_total_memory().unwrap_or_default();
if !use_hotplug {
assert!(total_memory > 1_920_000);
} else {
assert!(total_memory > 4_800_000);
assert!(total_memory < 5_760_000);
// Deflate balloon to restore entire RAM to the VM
resize_command(&api_socket_restored, None, None, Some(0), None);
thread::sleep(std::time::Duration::new(5, 0));
assert!(guest.get_total_memory().unwrap_or_default() > 5_760_000);
// Decrease guest RAM with virtio-mem
resize_command(&api_socket_restored, None, Some(5 << 30), None, None);
thread::sleep(std::time::Duration::new(5, 0));
let total_memory = guest.get_total_memory().unwrap_or_default();
assert!(total_memory > 4_800_000);
assert!(total_memory < 5_760_000);
}
guest.check_devices_common(Some(&socket), Some(&console_text), None);
});
// Shutdown the target VM and check console output
let _ = child.kill();
let output = child.wait_with_output().unwrap();
handle_child_output(r, &output);
let r = std::panic::catch_unwind(|| {
assert!(String::from_utf8_lossy(&output.stdout).contains(&console_text));
});
handle_child_output(r, &output);
}
#[test]
fn test_counters() {
let focal = UbuntuDiskConfig::new(FOCAL_IMAGE_NAME.to_string());
@@ -7493,7 +7186,8 @@ mod dbus_api {
}
mod common_sequential {
#[cfg(not(feature = "mshv"))]
use std::fs::remove_dir_all;
use crate::*;
#[test]
@@ -7501,6 +7195,532 @@ mod common_sequential {
fn test_memory_mergeable_on() {
test_memory_mergeable(true)
}
fn snapshot_and_check_events(api_socket: &str, snapshot_dir: &str, event_path: &str) {
// Pause the VM
assert!(remote_command(api_socket, "pause", None));
let latest_events: [&MetaEvent; 2] = [
&MetaEvent {
event: "pausing".to_string(),
device_id: None,
},
&MetaEvent {
event: "paused".to_string(),
device_id: None,
},
];
// See: #5938
thread::sleep(std::time::Duration::new(1, 0));
assert!(check_latest_events_exact(&latest_events, event_path));
// Take a snapshot from the VM
assert!(remote_command(
api_socket,
"snapshot",
Some(format!("file://{snapshot_dir}").as_str()),
));
// Wait to make sure the snapshot is completed
thread::sleep(std::time::Duration::new(10, 0));
let latest_events = [
&MetaEvent {
event: "snapshotting".to_string(),
device_id: None,
},
&MetaEvent {
event: "snapshotted".to_string(),
device_id: None,
},
];
// See: #5938
thread::sleep(std::time::Duration::new(1, 0));
assert!(check_latest_events_exact(&latest_events, event_path));
}
// One thing to note about this test. The virtio-net device is heavily used
// through each ssh command. There's no need to perform a dedicated test to
// verify the migration went well for virtio-net.
#[test]
#[cfg(not(feature = "mshv"))]
fn test_snapshot_restore_hotplug_virtiomem() {
_test_snapshot_restore(true);
}
#[test]
fn test_snapshot_restore_basic() {
_test_snapshot_restore(false);
}
fn _test_snapshot_restore(use_hotplug: bool) {
let focal = UbuntuDiskConfig::new(FOCAL_IMAGE_NAME.to_string());
let guest = Guest::new(Box::new(focal));
let kernel_path = direct_kernel_boot_path();
let api_socket_source = format!("{}.1", temp_api_path(&guest.tmp_dir));
let net_id = "net123";
let net_params = format!(
"id={},tap=,mac={},ip={},mask=255.255.255.0",
net_id, guest.network.guest_mac, guest.network.host_ip
);
let mut mem_params = "size=2G";
if use_hotplug {
mem_params = "size=2G,hotplug_method=virtio-mem,hotplug_size=32G"
}
let cloudinit_params = format!(
"path={},iommu=on",
guest.disk_config.disk(DiskType::CloudInit).unwrap()
);
let socket = temp_vsock_path(&guest.tmp_dir);
let event_path = temp_event_monitor_path(&guest.tmp_dir);
let mut child = GuestCommand::new(&guest)
.args(["--api-socket", &api_socket_source])
.args(["--event-monitor", format!("path={event_path}").as_str()])
.args(["--cpus", "boot=4"])
.args(["--memory", mem_params])
.args(["--balloon", "size=0"])
.args(["--kernel", kernel_path.to_str().unwrap()])
.args([
"--disk",
format!(
"path={}",
guest.disk_config.disk(DiskType::OperatingSystem).unwrap()
)
.as_str(),
cloudinit_params.as_str(),
])
.args(["--net", net_params.as_str()])
.args(["--vsock", format!("cid=3,socket={socket}").as_str()])
.args(["--cmdline", DIRECT_KERNEL_BOOT_CMDLINE])
.capture_output()
.spawn()
.unwrap();
let console_text = String::from("On a branch floating down river a cricket, singing.");
// Create the snapshot directory
let snapshot_dir = temp_snapshot_dir_path(&guest.tmp_dir);
let r = std::panic::catch_unwind(|| {
guest.wait_vm_boot(None).unwrap();
// Check the number of vCPUs
assert_eq!(guest.get_cpu_count().unwrap_or_default(), 4);
// Check the guest RAM
assert!(guest.get_total_memory().unwrap_or_default() > 1_920_000);
if use_hotplug {
// Increase guest RAM with virtio-mem
resize_command(
&api_socket_source,
None,
Some(6 << 30),
None,
Some(&event_path),
);
thread::sleep(std::time::Duration::new(5, 0));
assert!(guest.get_total_memory().unwrap_or_default() > 5_760_000);
// Use balloon to remove RAM from the VM
resize_command(
&api_socket_source,
None,
None,
Some(1 << 30),
Some(&event_path),
);
thread::sleep(std::time::Duration::new(5, 0));
let total_memory = guest.get_total_memory().unwrap_or_default();
assert!(total_memory > 4_800_000);
assert!(total_memory < 5_760_000);
}
// Check the guest virtio-devices, e.g. block, rng, vsock, console, and net
guest.check_devices_common(Some(&socket), Some(&console_text), None);
// x86_64: We check that removing and adding back the virtio-net device
// does not break the snapshot/restore support for virtio-pci.
// This is an important thing to test as the hotplug will
// trigger a PCI BAR reprogramming, which is a good way of
// checking if the stored resources are correctly restored.
// Unplug the virtio-net device
// AArch64: Device hotplug is currently not supported, skipping here.
#[cfg(target_arch = "x86_64")]
{
assert!(remote_command(
&api_socket_source,
"remove-device",
Some(net_id),
));
thread::sleep(std::time::Duration::new(10, 0));
let latest_events = [&MetaEvent {
event: "device-removed".to_string(),
device_id: Some(net_id.to_string()),
}];
// See: #5938
thread::sleep(std::time::Duration::new(1, 0));
assert!(check_latest_events_exact(&latest_events, &event_path));
// Plug the virtio-net device again
assert!(remote_command(
&api_socket_source,
"add-net",
Some(net_params.as_str()),
));
thread::sleep(std::time::Duration::new(10, 0));
}
snapshot_and_check_events(&api_socket_source, &snapshot_dir, &event_path);
});
// Shutdown the source VM and check console output
let _ = child.kill();
let output = child.wait_with_output().unwrap();
handle_child_output(r, &output);
let r = std::panic::catch_unwind(|| {
assert!(String::from_utf8_lossy(&output.stdout).contains(&console_text));
});
handle_child_output(r, &output);
// Remove the vsock socket file.
Command::new("rm")
.arg("-f")
.arg(socket.as_str())
.output()
.unwrap();
let api_socket_restored = format!("{}.2", temp_api_path(&guest.tmp_dir));
let event_path_restored = format!("{}.2", temp_event_monitor_path(&guest.tmp_dir));
// Restore the VM from the snapshot
let mut child = GuestCommand::new(&guest)
.args(["--api-socket", &api_socket_restored])
.args([
"--event-monitor",
format!("path={event_path_restored}").as_str(),
])
.args([
"--restore",
format!("source_url=file://{snapshot_dir}").as_str(),
])
.capture_output()
.spawn()
.unwrap();
// Wait for the VM to be restored
thread::sleep(std::time::Duration::new(20, 0));
let expected_events = [
&MetaEvent {
event: "starting".to_string(),
device_id: None,
},
&MetaEvent {
event: "activated".to_string(),
device_id: Some("__console".to_string()),
},
&MetaEvent {
event: "activated".to_string(),
device_id: Some("__rng".to_string()),
},
&MetaEvent {
event: "restoring".to_string(),
device_id: None,
},
];
assert!(check_sequential_events(
&expected_events,
&event_path_restored
));
let latest_events = [&MetaEvent {
event: "restored".to_string(),
device_id: None,
}];
assert!(check_latest_events_exact(
&latest_events,
&event_path_restored
));
// Remove the snapshot dir
let _ = remove_dir_all(snapshot_dir.as_str());
let r = std::panic::catch_unwind(|| {
// Resume the VM
assert!(remote_command(&api_socket_restored, "resume", None));
// There is no way that we can ensure the 'write()' to the
// event file is completed when the 'resume' request is
// returned successfully, because the 'write()' was done
// asynchronously from a different thread of Cloud
// Hypervisor (e.g. the event-monitor thread).
thread::sleep(std::time::Duration::new(1, 0));
let latest_events = [
&MetaEvent {
event: "resuming".to_string(),
device_id: None,
},
&MetaEvent {
event: "resumed".to_string(),
device_id: None,
},
];
assert!(check_latest_events_exact(
&latest_events,
&event_path_restored
));
// Perform same checks to validate VM has been properly restored
assert_eq!(guest.get_cpu_count().unwrap_or_default(), 4);
let total_memory = guest.get_total_memory().unwrap_or_default();
if !use_hotplug {
assert!(total_memory > 1_920_000);
} else {
assert!(total_memory > 4_800_000);
assert!(total_memory < 5_760_000);
// Deflate balloon to restore entire RAM to the VM
resize_command(&api_socket_restored, None, None, Some(0), None);
thread::sleep(std::time::Duration::new(5, 0));
assert!(guest.get_total_memory().unwrap_or_default() > 5_760_000);
// Decrease guest RAM with virtio-mem
resize_command(&api_socket_restored, None, Some(5 << 30), None, None);
thread::sleep(std::time::Duration::new(5, 0));
let total_memory = guest.get_total_memory().unwrap_or_default();
assert!(total_memory > 4_800_000);
assert!(total_memory < 5_760_000);
}
guest.check_devices_common(Some(&socket), Some(&console_text), None);
});
// Shutdown the target VM and check console output
let _ = child.kill();
let output = child.wait_with_output().unwrap();
handle_child_output(r, &output);
let r = std::panic::catch_unwind(|| {
assert!(String::from_utf8_lossy(&output.stdout).contains(&console_text));
});
handle_child_output(r, &output);
}
#[test]
fn test_snapshot_restore_with_fd() {
let focal = UbuntuDiskConfig::new(FOCAL_IMAGE_NAME.to_string());
let guest = Guest::new(Box::new(focal));
let kernel_path = direct_kernel_boot_path();
let api_socket_source = format!("{}.1", temp_api_path(&guest.tmp_dir));
let net_id = "net123";
let num_queue_pairs: usize = 2;
// use a name that does not conflict with TAP devices created by other tests
let tap_name = "chtap999";
use std::str::FromStr;
let taps = net_util::open_tap(
Some(tap_name),
Some(std::net::Ipv4Addr::from_str(&guest.network.host_ip).unwrap()),
None,
&mut None,
None,
num_queue_pairs,
Some(libc::O_RDWR | libc::O_NONBLOCK),
)
.unwrap();
let net_params = format!(
"id={},fd=[{},{}],mac={},ip={},mask=255.255.255.0,num_queues={}",
net_id,
taps[0].as_raw_fd(),
taps[1].as_raw_fd(),
guest.network.guest_mac,
guest.network.host_ip,
num_queue_pairs * 2
);
let cloudinit_params = format!(
"path={},iommu=on",
guest.disk_config.disk(DiskType::CloudInit).unwrap()
);
let n_cpu = 2;
let event_path = temp_event_monitor_path(&guest.tmp_dir);
let mut child = GuestCommand::new(&guest)
.args(["--api-socket", &api_socket_source])
.args(["--event-monitor", format!("path={event_path}").as_str()])
.args(["--cpus", format!("boot={}", n_cpu).as_str()])
.args(["--memory", "size=1G"])
.args(["--kernel", kernel_path.to_str().unwrap()])
.args([
"--disk",
format!(
"path={}",
guest.disk_config.disk(DiskType::OperatingSystem).unwrap()
)
.as_str(),
cloudinit_params.as_str(),
])
.args(["--net", net_params.as_str()])
.args(["--cmdline", DIRECT_KERNEL_BOOT_CMDLINE])
.capture_output()
.spawn()
.unwrap();
let console_text = String::from("On a branch floating down river a cricket, singing.");
// Create the snapshot directory
let snapshot_dir = temp_snapshot_dir_path(&guest.tmp_dir);
let r = std::panic::catch_unwind(|| {
guest.wait_vm_boot(None).unwrap();
// close the FDs after the VM boots, as CH duplicates them before use
for tap in taps.iter() {
unsafe { libc::close(tap.as_raw_fd()) };
}
// Check the number of vCPUs
assert_eq!(guest.get_cpu_count().unwrap_or_default(), n_cpu);
// Check the guest RAM
assert!(guest.get_total_memory().unwrap_or_default() > 960_000);
// Check the guest virtio-devices, e.g. block, rng, vsock, console, and net
guest.check_devices_common(None, Some(&console_text), None);
snapshot_and_check_events(&api_socket_source, &snapshot_dir, &event_path);
});
// Shutdown the source VM and check console output
let _ = child.kill();
let output = child.wait_with_output().unwrap();
handle_child_output(r, &output);
let r = std::panic::catch_unwind(|| {
assert!(String::from_utf8_lossy(&output.stdout).contains(&console_text));
});
handle_child_output(r, &output);
let api_socket_restored = format!("{}.2", temp_api_path(&guest.tmp_dir));
let event_path_restored = format!("{}.2", temp_event_monitor_path(&guest.tmp_dir));
// Restore the VM from the snapshot
let mut child = GuestCommand::new(&guest)
.args(["--api-socket", &api_socket_restored])
.args([
"--event-monitor",
format!("path={event_path_restored}").as_str(),
])
.capture_output()
.spawn()
.unwrap();
thread::sleep(std::time::Duration::new(2, 0));
let taps = net_util::open_tap(
Some(tap_name),
Some(std::net::Ipv4Addr::from_str(&guest.network.host_ip).unwrap()),
None,
&mut None,
None,
num_queue_pairs,
Some(libc::O_RDWR | libc::O_NONBLOCK),
)
.unwrap();
let restore_params = format!(
"source_url=file://{},net_fds=[{}@[{},{}]]",
snapshot_dir,
net_id,
taps[0].as_raw_fd(),
taps[1].as_raw_fd()
);
assert!(remote_command(
&api_socket_restored,
"restore",
Some(restore_params.as_str())
));
// Wait for the VM to be restored
thread::sleep(std::time::Duration::new(20, 0));
// close the FDs, as CH duplicates them before use
for tap in taps.iter() {
unsafe { libc::close(tap.as_raw_fd()) };
}
let expected_events = [
&MetaEvent {
event: "starting".to_string(),
device_id: None,
},
&MetaEvent {
event: "activated".to_string(),
device_id: Some("__console".to_string()),
},
&MetaEvent {
event: "activated".to_string(),
device_id: Some("__rng".to_string()),
},
&MetaEvent {
event: "restoring".to_string(),
device_id: None,
},
];
assert!(check_sequential_events(
&expected_events,
&event_path_restored
));
let latest_events = [&MetaEvent {
event: "restored".to_string(),
device_id: None,
}];
assert!(check_latest_events_exact(
&latest_events,
&event_path_restored
));
// Remove the snapshot dir
let _ = remove_dir_all(snapshot_dir.as_str());
let r = std::panic::catch_unwind(|| {
// Resume the VM
assert!(remote_command(&api_socket_restored, "resume", None));
// There is no way that we can ensure the 'write()' to the
// event file is completed when the 'resume' request is
// returned successfully, because the 'write()' was done
// asynchronously from a different thread of Cloud
// Hypervisor (e.g. the event-monitor thread).
thread::sleep(std::time::Duration::new(1, 0));
let latest_events = [
&MetaEvent {
event: "resuming".to_string(),
device_id: None,
},
&MetaEvent {
event: "resumed".to_string(),
device_id: None,
},
];
assert!(check_latest_events_exact(
&latest_events,
&event_path_restored
));
// Perform same checks to validate VM has been properly restored
assert_eq!(guest.get_cpu_count().unwrap_or_default(), n_cpu);
assert!(guest.get_total_memory().unwrap_or_default() > 960_000);
guest.check_devices_common(None, Some(&console_text), None);
});
// Shutdown the target VM and check console output
let _ = child.kill();
let output = child.wait_with_output().unwrap();
handle_child_output(r, &output);
let r = std::panic::catch_unwind(|| {
assert!(String::from_utf8_lossy(&output.stdout).contains(&console_text));
});
handle_child_output(r, &output);
}
}
mod windows {

View File

@@ -13,7 +13,7 @@ use crate::api::{
VmReboot, VmReceiveMigration, VmRemoveDevice, VmResize, VmResizeZone, VmRestore, VmResume,
VmSendMigration, VmShutdown, VmSnapshot,
};
use crate::config::NetConfig;
use crate::config::{NetConfig, RestoreConfig};
use micro_http::{Body, Method, Request, Response, StatusCode, Version};
use std::fs::File;
use std::os::unix::io::IntoRawFd;
@@ -184,7 +184,6 @@ vm_action_put_handler_body!(VmAddUserDevice);
vm_action_put_handler_body!(VmRemoveDevice);
vm_action_put_handler_body!(VmResize);
vm_action_put_handler_body!(VmResizeZone);
vm_action_put_handler_body!(VmRestore);
vm_action_put_handler_body!(VmSnapshot);
vm_action_put_handler_body!(VmReceiveMigration);
vm_action_put_handler_body!(VmSendMigration);
@@ -220,6 +219,53 @@ impl PutHandler for VmAddNet {
impl GetHandler for VmAddNet {}
impl PutHandler for VmRestore {
fn handle_request(
&'static self,
api_notifier: EventFd,
api_sender: Sender<ApiRequest>,
body: &Option<Body>,
mut files: Vec<File>,
) -> std::result::Result<Option<Body>, HttpError> {
if let Some(body) = body {
let mut restore_cfg: RestoreConfig = serde_json::from_slice(body.raw())?;
let mut fds = Vec::new();
if !files.is_empty() {
fds = files.drain(..).map(|f| f.into_raw_fd()).collect();
}
let expected_fds = match restore_cfg.net_fds {
Some(ref net_fds) => net_fds.iter().map(|net| net.num_fds).sum(),
None => 0,
};
if fds.len() != expected_fds {
error!(
"Number of FDs expected: {}, but received: {}",
expected_fds,
fds.len()
);
return Err(HttpError::BadRequest);
}
if let Some(ref mut nets) = restore_cfg.net_fds {
warn!("Ignoring FDs sent via the HTTP request body");
let mut start_idx = 0;
for restored_net in nets.iter_mut() {
let end_idx = start_idx + restored_net.num_fds;
restored_net.fds = Some(fds[start_idx..end_idx].to_vec());
start_idx = end_idx;
}
}
self.send(api_notifier, api_sender, restore_cfg)
.map_err(HttpError::ApiError)
} else {
Err(HttpError::BadRequest)
}
}
}
impl GetHandler for VmRestore {}
// Common handler for boot, shutdown and reboot
pub struct VmActionHandler {
action: &'static dyn HttpVmAction,

View File

@@ -201,6 +201,10 @@ pub enum ValidationError {
InvalidIoPortHex(String),
#[cfg(feature = "sev_snp")]
InvalidHostData,
/// Restore expects all net ids that have FDs
RestoreMissingRequiredNetId(String),
/// The number of net FDs passed during restore does not match the NetConfig
RestoreNetFdCountMismatch(String, usize, usize),
}
type ValidationResult<T> = std::result::Result<T, ValidationError>;
@@ -343,6 +347,15 @@ impl fmt::Display for ValidationError {
InvalidHostData => {
write!(f, "Invalid host data format")
}
RestoreMissingRequiredNetId(s) => {
write!(f, "Net id {s} is associated with FDs and is required")
}
RestoreNetFdCountMismatch(s, u1, u2) => {
write!(
f,
"Number of Net FDs passed for '{s}' during Restore: {u1}. Expected: {u2}"
)
}
}
}
}
@@ -2130,22 +2143,71 @@ impl NumaConfig {
}
}
#[derive(Clone, Debug, PartialEq, Eq, Deserialize, Serialize, Default)]
pub struct RestoredNetConfig {
pub id: String,
#[serde(default)]
pub num_fds: usize,
#[serde(
default,
serialize_with = "serialize_restorednetconfig_fds",
deserialize_with = "deserialize_restorednetconfig_fds"
)]
pub fds: Option<Vec<i32>>,
}
fn serialize_restorednetconfig_fds<S>(
x: &Option<Vec<i32>>,
s: S,
) -> std::result::Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
if let Some(x) = x {
warn!("'RestoredNetConfig' contains FDs that can't be serialized correctly. Serializing them as invalid FDs.");
let invalid_fds = vec![-1; x.len()];
s.serialize_some(&invalid_fds)
} else {
s.serialize_none()
}
}
fn deserialize_restorednetconfig_fds<'de, D>(
d: D,
) -> std::result::Result<Option<Vec<i32>>, D::Error>
where
D: serde::Deserializer<'de>,
{
let invalid_fds: Option<Vec<i32>> = Option::deserialize(d)?;
if let Some(invalid_fds) = invalid_fds {
warn!("'RestoredNetConfig' contains FDs that can't be deserialized correctly. Deserializing them as invalid FDs.");
Ok(Some(vec![-1; invalid_fds.len()]))
} else {
Ok(None)
}
}
#[derive(Clone, Debug, PartialEq, Eq, Deserialize, Serialize, Default)]
pub struct RestoreConfig {
pub source_url: PathBuf,
#[serde(default)]
pub prefault: bool,
#[serde(default)]
pub net_fds: Option<Vec<RestoredNetConfig>>,
}
impl RestoreConfig {
pub const SYNTAX: &'static str = "Restore from a VM snapshot. \
\nRestore parameters \"source_url=<source_url>,prefault=on|off\" \
\nRestore parameters \"source_url=<source_url>,prefault=on|off,\
net_fds=<list_of_net_ids_with_their_associated_fds>\" \
\n`source_url` should be a valid URL (e.g file:///foo/bar or tcp://192.168.1.10/foo) \
\n`prefault` brings memory pages in when enabled (disabled by default)";
\n`prefault` brings memory pages in when enabled (disabled by default) \
\n`net_fds` is a list of net ids with new file descriptors. \
Only net devices backed by FDs directly are needed as input.";
pub fn parse(restore: &str) -> Result<Self> {
let mut parser = OptionParser::new();
parser.add("source_url").add("prefault");
parser.add("source_url").add("prefault").add("net_fds");
parser.parse(restore).map_err(Error::ParseRestore)?;
let source_url = parser
@@ -2157,12 +2219,70 @@ impl RestoreConfig {
.map_err(Error::ParseRestore)?
.unwrap_or(Toggle(false))
.0;
let net_fds = parser
.convert::<Tuple<String, Vec<u64>>>("net_fds")
.map_err(Error::ParseRestore)?
.map(|v| {
v.0.iter()
.map(|(id, fds)| RestoredNetConfig {
id: id.clone(),
num_fds: fds.len(),
fds: Some(fds.iter().map(|e| *e as i32).collect()),
})
.collect()
});
Ok(RestoreConfig {
source_url,
prefault,
net_fds,
})
}
// Ensure all net devices from 'VmConfig' backed by FDs have a
// corresponding 'RestoredNetConfig' with a matching 'id' and the
// expected number of FDs.
pub fn validate(&self, vm_config: &VmConfig) -> ValidationResult<()> {
let mut restored_net_with_fds = HashMap::new();
for n in self.net_fds.iter().flatten() {
assert_eq!(
n.num_fds,
n.fds.as_ref().map_or(0, |f| f.len()),
"Invalid 'RestoredNetConfig' with conflicted fields."
);
if restored_net_with_fds.insert(n.id.clone(), n).is_some() {
return Err(ValidationError::IdentifierNotUnique(n.id.clone()));
}
}
for net_fds in vm_config.net.iter().flatten() {
if let Some(expected_fds) = &net_fds.fds {
let expected_id = net_fds
.id
.as_ref()
.expect("Invalid 'NetConfig' with empty 'id' for VM restore.");
if let Some(r) = restored_net_with_fds.remove(expected_id) {
if r.num_fds != expected_fds.len() {
return Err(ValidationError::RestoreNetFdCountMismatch(
expected_id.clone(),
r.num_fds,
expected_fds.len(),
));
}
} else {
return Err(ValidationError::RestoreMissingRequiredNetId(
expected_id.clone(),
));
}
}
}
if !restored_net_with_fds.is_empty() {
warn!("Ignoring unused 'net_fds' for VM restore.")
}
Ok(())
}
}
impl TpmConfig {
@@ -3570,6 +3690,183 @@ mod tests {
Ok(())
}
#[test]
fn test_restore_parsing() -> Result<()> {
assert_eq!(
RestoreConfig::parse("source_url=/path/to/snapshot")?,
RestoreConfig {
source_url: PathBuf::from("/path/to/snapshot"),
prefault: false,
net_fds: None,
}
);
assert_eq!(
RestoreConfig::parse(
"source_url=/path/to/snapshot,prefault=off,net_fds=[net0@[3,4],net1@[5,6,7,8]]"
)?,
RestoreConfig {
source_url: PathBuf::from("/path/to/snapshot"),
prefault: false,
net_fds: Some(vec![
RestoredNetConfig {
id: "net0".to_string(),
num_fds: 2,
fds: Some(vec![3, 4]),
},
RestoredNetConfig {
id: "net1".to_string(),
num_fds: 4,
fds: Some(vec![5, 6, 7, 8]),
}
]),
}
);
// Parsing should fail as source_url is a required field
assert!(RestoreConfig::parse("prefault=off").is_err());
Ok(())
}
#[test]
fn test_restore_config_validation() {
// interested in only VmConfig.net, so set rest to default values
let mut snapshot_vm_config = VmConfig {
cpus: CpusConfig::default(),
memory: MemoryConfig::default(),
payload: None,
rate_limit_groups: None,
disks: None,
rng: RngConfig::default(),
balloon: None,
fs: None,
pmem: None,
serial: default_serial(),
console: default_console(),
#[cfg(target_arch = "x86_64")]
debug_console: DebugConsoleConfig::default(),
devices: None,
user_devices: None,
vdpa: None,
vsock: None,
pvpanic: false,
iommu: false,
#[cfg(target_arch = "x86_64")]
sgx_epc: None,
numa: None,
watchdog: false,
#[cfg(feature = "guest_debug")]
gdb: false,
pci_segments: None,
platform: None,
tpm: None,
preserved_fds: None,
net: Some(vec![
NetConfig {
id: Some("net0".to_owned()),
num_queues: 2,
fds: Some(vec![-1, -1, -1, -1]),
..net_fixture()
},
NetConfig {
id: Some("net1".to_owned()),
num_queues: 1,
fds: Some(vec![-1, -1]),
..net_fixture()
},
NetConfig {
id: Some("net2".to_owned()),
fds: None,
..net_fixture()
},
]),
};
let valid_config = RestoreConfig {
source_url: PathBuf::from("/path/to/snapshot"),
prefault: false,
net_fds: Some(vec![
RestoredNetConfig {
id: "net0".to_string(),
num_fds: 4,
fds: Some(vec![3, 4, 5, 6]),
},
RestoredNetConfig {
id: "net1".to_string(),
num_fds: 2,
fds: Some(vec![7, 8]),
},
]),
};
assert!(valid_config.validate(&snapshot_vm_config).is_ok());
let mut invalid_config = valid_config.clone();
invalid_config.net_fds = Some(vec![RestoredNetConfig {
id: "netx".to_string(),
num_fds: 4,
fds: Some(vec![3, 4, 5, 6]),
}]);
assert_eq!(
invalid_config.validate(&snapshot_vm_config),
Err(ValidationError::RestoreMissingRequiredNetId(
"net0".to_string()
))
);
invalid_config.net_fds = Some(vec![
RestoredNetConfig {
id: "net0".to_string(),
num_fds: 4,
fds: Some(vec![3, 4, 5, 6]),
},
RestoredNetConfig {
id: "net0".to_string(),
num_fds: 4,
fds: Some(vec![3, 4, 5, 6]),
},
]);
assert_eq!(
invalid_config.validate(&snapshot_vm_config),
Err(ValidationError::IdentifierNotUnique("net0".to_string()))
);
invalid_config.net_fds = Some(vec![RestoredNetConfig {
id: "net0".to_string(),
num_fds: 4,
fds: Some(vec![3, 4, 5, 6]),
}]);
assert_eq!(
invalid_config.validate(&snapshot_vm_config),
Err(ValidationError::RestoreMissingRequiredNetId(
"net1".to_string()
))
);
invalid_config.net_fds = Some(vec![RestoredNetConfig {
id: "net0".to_string(),
num_fds: 2,
fds: Some(vec![3, 4]),
}]);
assert_eq!(
invalid_config.validate(&snapshot_vm_config),
Err(ValidationError::RestoreNetFdCountMismatch(
"net0".to_string(),
2,
4
))
);
let another_valid_config = RestoreConfig {
source_url: PathBuf::from("/path/to/snapshot"),
prefault: false,
net_fds: None,
};
snapshot_vm_config.net = Some(vec![NetConfig {
id: Some("net2".to_owned()),
fds: None,
..net_fixture()
}]);
assert!(another_valid_config.validate(&snapshot_vm_config).is_ok());
}
fn platform_fixture() -> PlatformConfig {
PlatformConfig {
num_pci_segments: MAX_NUM_PCI_SEGMENTS,

View File

@@ -1321,6 +1321,24 @@ impl RequestHandler for Vmm {
let vm_config = Arc::new(Mutex::new(
recv_vm_config(source_url).map_err(VmError::Restore)?,
));
restore_cfg
.validate(&vm_config.lock().unwrap().clone())
.map_err(VmError::ConfigValidation)?;
// Update the VM's net configurations with the new FDs received for the restore operation
if let (Some(restored_nets), Some(vm_net_configs)) =
(restore_cfg.net_fds, &mut vm_config.lock().unwrap().net)
{
for net in restored_nets.iter() {
for net_config in vm_net_configs.iter_mut() {
// update only if the net dev is backed by FDs
if net_config.id == Some(net.id.clone()) && net_config.fds.is_some() {
net_config.fds.clone_from(&net.fds);
}
}
}
}
let snapshot = recv_vm_state(source_url).map_err(VmError::Restore)?;
#[cfg(all(feature = "kvm", target_arch = "x86_64"))]
let vm_snapshot = get_vm_snapshot(&snapshot).map_err(VmError::Restore)?;