diff --git a/docs/live_migration.md b/docs/live_migration.md
index 347cbb7c4..d6453e8b6 100644
--- a/docs/live_migration.md
+++ b/docs/live_migration.md
@@ -9,8 +9,10 @@ support in Cloud Hypervisor:
 are running on the same machine.
 
 ## Local Migration (Suitable for Live Upgrade of VMM)
+
 Launch the source VM (on the host machine):
-```bash
+
+```console
 $ target/release/cloud-hypervisor \
     --kernel ~/workloads/vmlinux \
     --disk path=~/workloads/focal.raw \
@@ -20,17 +22,20 @@ $ target/release/cloud-hypervisor \
 ```
 
 Launch the destination VM from the same directory (on the host machine):
-```bash
+
+```console
 $ target/release/cloud-hypervisor --api-socket=/tmp/api2
 ```
 
 Prepare the destination VM to receive the migration (on the host machine):
-```bash
+
+```console
 $ target/release/ch-remote --api-socket=/tmp/api2 receive-migration unix:/tmp/sock
 ```
 
 Start sending the migration from the source VM (on the host machine):
-```bash
+
+```console
 $ target/release/ch-remote --api-socket=/tmp/api1 send-migration --local unix:/tmp/sock
 ```
 
@@ -42,9 +47,11 @@ the source VM is terminated gracefully.
 
 Launch VM 1 (on the host machine) with an extra virtio-blk device for
 exposing a guest image for the nested source VM:
+
 > Note: the example below also attaches an additional virtio-blk device
 > with a dummy image for testing purposes (this extra device is optional).
-```bash
+
+```console
 $ head -c 1M < /dev/urandom > tmp.img # create a dummy image for testing
 $ sudo ./target/release/cloud-hypervisor \
     --serial tty --console off \
@@ -57,7 +64,8 @@ $ sudo ./target/release/cloud-hypervisor \
 
 Launch VM 2 (on the host machine) with an extra virtio-blk device for
 exposing the same guest image for the nested destination VM:
-```bash
+
+```console
 $ sudo ./target/release/cloud-hypervisor \
     --serial tty --console off \
     --cpus boot=1 --memory size=512M \
@@ -68,7 +76,8 @@ $ sudo ./target/release/cloud-hypervisor \
 ```
 
 Launch the nested source VM (inside the guest OS of VM 1):
-```bash
+
+```console
 vm-1:~$ sudo ./cloud-hypervisor \
     --serial tty --console off \
     --memory size=128M \
@@ -82,10 +91,12 @@ vm-1:~$ sudo ip addr add 192.168.101.2/24 dev ens4
 vm-1:~$ sudo ip link set up dev ens4
 vm-1:~$ sudo ip r add default via 192.168.101.1
 ```
+
 Optional: run the guest workload below (in the guest OS of the nested
 source VM), which performs intensive virtio-blk operations. The
 console of the nested source VM should now repeatedly print
 `"equal"`, and our goal is to migrate this VM and its running
 workload without interruption.
+
 ```bash
 #!/bin/bash
@@ -107,7 +118,8 @@ echo "tmp = $tmp"
 ```
 
 Launch the nested destination VM (inside the guest OS of VM 2):
-```bash
+
+```console
 vm-2:~$ sudo ./cloud-hypervisor --api-socket=/tmp/api2
 vm-2:~$ # set up the guest network with the following commands if needed
 vm-2:~$ sudo ip addr add 192.168.102.2/24 dev ens4
@@ -115,20 +127,23 @@ vm-2:~$ sudo ip link set up dev ens4
 vm-2:~$ sudo ip r add default via 192.168.102.1
 vm-2:~$ ping 192.168.101.2 # This should succeed
 ```
+
 > Note: if the above ping fails, check the iptables rules on the
 > host machine, e.g. whether the policy for the `FORWARD` chain is
 > set to `DROP` (the default policy installed by Docker).
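+
+For example, the commands below are one way to inspect the `FORWARD`
+chain and temporarily switch its policy while debugging (standard
+iptables usage, not specific to Cloud Hypervisor). `ACCEPT` is a
+permissive, testing-only setting; prefer a narrowly scoped rule on a
+shared host:
+
+```console
+$ sudo iptables -L FORWARD -n    # show the current policy and rules
+$ sudo iptables -P FORWARD ACCEPT    # testing only: allow forwarded traffic
+```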
 
 Prepare the nested destination VM to receive the migration (inside the
 guest OS of VM 2):
-```bash
+
+```console
 vm-2:~$ sudo ./ch-remote --api-socket=/tmp/api2 receive-migration unix:/tmp/sock2
 vm-2:~$ sudo socat TCP-LISTEN:6000,reuseaddr UNIX-CLIENT:/tmp/sock2
 ```
 
 Start sending the migration from the nested source VM (inside the
 guest OS of VM 1):
-```bash
+
+```console
 vm-1:~$ sudo socat UNIX-LISTEN:/tmp/sock1,reuseaddr TCP:192.168.102.2:6000
 vm-1:~$ sudo ./ch-remote --api-socket=/tmp/api1 send-migration unix:/tmp/sock1
 ```
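+
+As in the local case, the nested source VM should be terminated
+gracefully once the migration completes, and the workload keeps
+running inside the nested destination VM: its console should continue
+printing `"equal"` without interruption. As an additional sanity check
+(a sketch; the exact fields in the output may vary between releases),
+query the destination VMM and confirm the VM is reported as running:
+
+```console
+vm-2:~$ sudo ./ch-remote --api-socket=/tmp/api2 info
+```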