docs: apply style fixes to live migration docs

I've added newlines between paragraphs and code blocks for easier
reading. I've also changed the code blocks to use the correct
syntax highlighting.

Signed-off-by: Julian Stecklina <julian.stecklina@cyberus-technology.de>
Author: Julian Stecklina, 2024-11-28 16:51:32 +01:00 (committed by Rob Bradford)
Parent: ab7b294688
Commit: 5b822191c0


@@ -9,8 +9,10 @@ support in Cloud Hypervisor:
 are running on the same machine.
 ## Local Migration (Suitable for Live Upgrade of VMM)
 Launch the source VM (on the host machine):
-```bash
+```console
 $ target/release/cloud-hypervisor
     --kernel ~/workloads/vmlinux \
     --disk path=~/workloads/focal.raw \
@@ -20,17 +22,20 @@ $ target/release/cloud-hypervisor
 ```
 Launch the destination VM from the same directory (on the host machine):
-```bash
+```console
 $ target/release/cloud-hypervisor --api-socket=/tmp/api2
 ```
 Get ready for receiving migration for the destination VM (on the host machine):
-```bash
+```console
 $ target/release/ch-remote --api-socket=/tmp/api2 receive-migration unix:/tmp/sock
 ```
 Start to send migration for the source VM (on the host machine):
-```bash
+```console
 $ target/release/ch-remote --api-socket=/tmp/api1 send-migration --local unix:/tmp/sock
 ```
@@ -42,9 +47,11 @@ the source VM is terminated gracefully.
 Launch VM 1 (on the host machine) with an extra virtio-blk device for
 exposing a guest image for the nested source VM:
 > Note: the example below also attached an additional virtio-blk device
 > with a dummy image for testing purpose (which is optional).
-```bash
+```console
 $ head -c 1M < /dev/urandom > tmp.img # create a dummy image for testing
 $ sudo /target/release/cloud-hypervisor \
     --serial tty --console off \
@@ -57,7 +64,8 @@ $ sudo /target/release/cloud-hypervisor \
 Launch VM 2 (on the host machine) with an extra virtio-blk device for
 exposing the same guest image for the nested destination VM:
-```bash
+```console
 $ sudo /target/release/cloud-hypervisor \
     --serial tty --console off \
     --cpus boot=1 --memory size=512M \
@@ -68,7 +76,8 @@ $ sudo /target/release/cloud-hypervisor \
 ```
 Launch the nested source VM (inside the guest OS of the VM 1) :
-```bash
+```console
 vm-1:~$ sudo ./cloud-hypervisor \
     --serial tty --console off \
     --memory size=128M \
@@ -82,10 +91,12 @@ vm-1:~$ sudo ip addr add 192.168.101.2/24 dev ens4
 vm-1:~$ sudo ip link set up dev ens4
 vm-1:~$ sudo ip r add default via 192.168.101.1
 ```
 Optional: Run the guest workload below (on the guest OS of the nested source VM),
 which performs intensive virtio-blk operations. Now the console of the nested
 source VM should repeatedly print `"equal"`, and our goal is migrating
 this VM and the running workload without interruption.
 ```bash
 #/bin/bash
@@ -107,7 +118,8 @@ echo "tmp = $tmp"
 ```
 Launch the nested destination VM (inside the guest OS of the VM 2):
-```bash
+```console
 vm-2:~$ sudo ./cloud-hypervisor --api-socket=/tmp/api2
 vm-2:~$ # setup the guest network with the following commands if needed
 vm-2:~$ sudo ip addr add 192.168.102.2/24 dev ens4
@@ -115,20 +127,23 @@ vm-2:~$ sudo ip link set up dev ens4
 vm-2:~$ sudo ip r add default via 192.168.102.1
 vm-2:~$ ping 192.168.101.2 # This should succeed
 ```
 > Note: If the above ping failed, please check the iptables rule on the
 > host machine, e.g. whether the policy for the `FORWARD` chain is set
 > to `DROP` (which is the default setting configured by Docker).
 Get ready for receiving migration for the nested destination VM (inside
 the guest OS of the VM 2):
-```bash
+```console
 vm-2:~$ sudo ./ch-remote --api-socket=/tmp/api2 receive-migration unix:/tmp/sock2
 vm-2:~$ sudo socat TCP-LISTEN:6000,reuseaddr UNIX-CLIENT:/tmp/sock2
 ```
 Start to send migration for the nested source VM (inside the guest OS of
 the VM 1):
-```bash
+```console
 vm-1:~$ sudo socat UNIX-LISTEN:/tmp/sock1,reuseaddr TCP:192.168.102.2:6000
 vm-1:~$ sudo ./ch-remote --api-socket=/tmp/api1 send-migration unix:/tmp/sock1
 ```