docs: update command line options to use clap

Reverts commit a0922930b1
and updates the documentation to the latest changes.

Signed-off-by: Ravi kumar Veeramally <ravikumar.veeramally@intel.com>
Ravi kumar Veeramally 2023-09-11 18:32:41 +03:00 committed by Bo Chen
parent 7bc3452139
commit fa22cb0be5
12 changed files with 53 additions and 53 deletions
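
The syntax changes follow one pattern throughout: clap accepts several values after a single occurrence of a flag, a repeated `-v -v -v` collapses into `-vvv`, and `--api-socket` is written with `=`. Combined into one illustrative sketch (the flag values are lifted from the hunks below, not a complete working command line):

```shell
./cloud-hypervisor \
    --kernel ./hypervisor-fw \
    --disk path=focal-server-cloudimg-amd64.raw path=/tmp/ubuntu-cloudinit.img \
    --api-socket=/tmp/ch-socket \
    -vvv
```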

@@ -150,7 +150,7 @@ $ sudo setcap cap_net_admin+ep ./cloud-hypervisor
 $ ./create-cloud-init.sh
 $ ./cloud-hypervisor \
     --kernel ./hypervisor-fw \
-    --disk path=focal-server-cloudimg-amd64.raw --disk path=/tmp/ubuntu-cloudinit.img \
+    --disk path=focal-server-cloudimg-amd64.raw path=/tmp/ubuntu-cloudinit.img \
     --cpus boot=4 \
     --memory size=1024M \
     --net "tap=,mac=,ip=,mask="
@@ -163,7 +163,7 @@ GRUB) is required then it necessary to switch to the serial console instead of
 ```shell
 $ ./cloud-hypervisor \
     --kernel ./hypervisor-fw \
-    --disk path=focal-server-cloudimg-amd64.raw --disk path=/tmp/ubuntu-cloudinit.img \
+    --disk path=focal-server-cloudimg-amd64.raw path=/tmp/ubuntu-cloudinit.img \
     --cpus boot=4 \
     --memory size=1024M \
     --net "tap=,mac=,ip=,mask=" \
@@ -225,7 +225,7 @@ $ sudo setcap cap_net_admin+ep ./cloud-hypervisor
 $ ./create-cloud-init.sh
 $ ./cloud-hypervisor \
     --kernel ./linux-cloud-hypervisor/arch/x86/boot/compressed/vmlinux.bin \
-    --disk path=focal-server-cloudimg-amd64.raw --disk path=/tmp/ubuntu-cloudinit.img \
+    --disk path=focal-server-cloudimg-amd64.raw path=/tmp/ubuntu-cloudinit.img \
     --cmdline "console=hvc0 root=/dev/vda1 rw" \
     --cpus boot=4 \
     --memory size=1024M \
@@ -239,7 +239,7 @@ $ sudo setcap cap_net_admin+ep ./cloud-hypervisor
 $ ./create-cloud-init.sh
 $ ./cloud-hypervisor \
     --kernel ./linux-cloud-hypervisor/arch/arm64/boot/Image \
-    --disk path=focal-server-cloudimg-arm64.raw --disk path=/tmp/ubuntu-cloudinit.img \
+    --disk path=focal-server-cloudimg-arm64.raw path=/tmp/ubuntu-cloudinit.img \
     --cmdline "console=hvc0 root=/dev/vda1 rw" \
     --cpus boot=4 \
     --memory size=1024M \

@@ -295,7 +295,7 @@ From the CLI, one can:
 The REST API, D-Bus API and the CLI all rely on a common, [internal API](#internal-api).

 The CLI options are parsed by the
-[argh crate](https://docs.rs/argh/latest/argh/) and then translated into
+[clap crate](https://docs.rs/clap/4.3.11/clap/) and then translated into
 [internal API](#internal-api) commands.

 The REST API is processed by an HTTP thread using the
@@ -327,7 +327,7 @@ As a summary, the REST API, the D-Bus API and the CLI are essentially frontends
 |                         | +------------------------+
 |            +----------+ |            VMM
 |    CLI     |          | |
-+----------->+   argh   +--------------+
++----------->+   clap   +--------------+
              |          |
              +----------+

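For illustration, the same VM can be driven through two of these frontends; a minimal sketch, assuming a VM started with an API socket and using the `info` subcommand of `ch-remote`:

```shell
# At boot time, options go through the clap-parsed CLI:
./cloud-hypervisor \
    --api-socket=/tmp/ch-socket \
    --kernel ./hypervisor-fw \
    --disk path=focal-server-cloudimg-amd64.raw

# At runtime, ch-remote wraps the REST API over the same socket and
# lands in the same internal API:
./ch-remote --api-socket=/tmp/ch-socket info
```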
@@ -36,7 +36,7 @@ Assuming parts of the guest software stack have been instrumented to use the
 `cloud-hypervisor` debug I/O port, we may want to gather the related logs.
 To do so we need to start `cloud-hypervisor` with the right debug level
-(`-v -v -v`). It is also recommended to have it log into a dedicated file in order
+(`-vvv`). It is also recommended to have it log into a dedicated file in order
 to easily grep for the tracing logs (e.g.
 `--log-file /tmp/cloud-hypervisor.log`):
@@ -48,7 +48,7 @@ to easily grep for the tracing logs (e.g.
     --memory size=1024M \
     --rng \
     --log-file /tmp/ch-fw.log \
-    -v -v -v
+    -vvv
 ```

 After booting the guest, we then have to grep for the debug I/O port traces in

@@ -8,7 +8,7 @@ To enable debugging with GDB, build with the `guest_debug` feature enabled:
 cargo build --features guest_debug
 ```

-To use the `--gdb` option, specify the Unix Domain Socket with `path` that Cloud Hypervisor will use to communicate with the host's GDB:
+To use the `--gdb` option, specify the Unix Domain Socket with `--path` that Cloud Hypervisor will use to communicate with the host's GDB:

 ```bash
 ./cloud-hypervisor \

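As a usage sketch (the socket path is an arbitrary example, and the `--gdb path=...` form is assumed from the surrounding docs rather than shown in this hunk):

```shell
# Start the VM with the GDB stub listening on a Unix domain socket:
./cloud-hypervisor \
    --kernel ./vmlinux \
    --disk path=focal-server-cloudimg-amd64.raw \
    --gdb path=/tmp/ch-gdb.sock

# From another terminal, attach GDB to that socket (recent GDB versions
# accept a Unix domain socket as a remote target):
gdb ./vmlinux -ex "target remote /tmp/ch-gdb.sock"
```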
@@ -27,16 +27,16 @@ $ ./cloud-hypervisor/target/release/cloud-hypervisor \
     --memory size=1024M \
     --net "tap=,mac=,ip=,mask=" \
     --rng \
-    --api-socket /tmp/ch-socket
+    --api-socket=/tmp/ch-socket
 $ popd
 ```

-Notice the addition of `--api-socket /tmp/ch-socket` and a `max` parameter on `--cpus boot=4,max=8`.
+Notice the addition of `--api-socket=/tmp/ch-socket` and a `max` parameter on `--cpus boot=4,max=8`.

 To ask the VMM to add additional vCPUs then use the resize API:

 ```shell
-./ch-remote --api-socket /tmp/ch-socket resize --cpus 8
+./ch-remote --api-socket=/tmp/ch-socket resize --cpus 8
 ```

 The extra vCPU threads will be created and advertised to the running kernel. The kernel does not bring up the CPUs immediately and instead the user must "online" them from inside the VM:
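
For reference, "onlining" happens through sysfs from inside the guest; a sketch, where the CPU index depends on which vCPU was just added:

```shell
# Inside the guest: bring the hotplugged vCPU online (cpu4 is an example)
echo 1 | sudo tee /sys/devices/system/cpu/cpu4/online
```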
@@ -56,7 +56,7 @@ After a reboot the added CPUs will remain.
 Removing CPUs works similarly by reducing the number in the "desired_vcpus" field of the reisze API. The CPUs will be automatically offlined inside the guest so there is no need to run any commands inside the guest:

 ```shell
-./ch-remote --api-socket /tmp/ch-socket resize --cpus 2
+./ch-remote --api-socket=/tmp/ch-socket resize --cpus 2
 ```

 As per adding CPUs to the guest, after a reboot the VM will be running with the reduced number of vCPUs.
@@ -85,7 +85,7 @@ $ ./cloud-hypervisor/target/release/cloud-hypervisor \
     --memory size=1024M,hotplug_size=8192M \
     --net "tap=,mac=,ip=,mask=" \
     --rng \
-    --api-socket /tmp/ch-socket
+    --api-socket=/tmp/ch-socket
 $ popd
 ```
@@ -98,7 +98,7 @@ root@ch-guest ~ # echo online | sudo tee /sys/devices/system/memory/auto_online_
 To ask the VMM to expand the RAM for the VM:

 ```shell
-./ch-remote --api-socket /tmp/ch-socket resize --memory 3G
+./ch-remote --api-socket=/tmp/ch-socket resize --memory 3G
 ```

 The new memory is now available to use inside the VM:
@@ -134,14 +134,14 @@ $ ./cloud-hypervisor/target/release/cloud-hypervisor \
     --disk path=focal-server-cloudimg-amd64.raw \
     --memory size=1024M,hotplug_size=8192M,hotplug_method=virtio-mem \
     --net "tap=,mac=,ip=,mask=" \
-    --api-socket /tmp/ch-socket
+    --api-socket=/tmp/ch-socket
 $ popd
 ```

 To ask the VMM to expand the RAM for the VM (request is in bytes):

 ```shell
-./ch-remote --api-socket /tmp/ch-socket resize --memory 3G
+./ch-remote --api-socket=/tmp/ch-socket resize --memory 3G
 ```

 The new memory is now available to use inside the VM:
@@ -172,17 +172,17 @@ $ ./cloud-hypervisor/target/release/cloud-hypervisor \
     --cpus boot=4 \
     --memory size=1024M \
     --net "tap=,mac=,ip=,mask=" \
-    --api-socket /tmp/ch-socket
+    --api-socket=/tmp/ch-socket
 ```

-Notice the addition of `--api-socket /tmp/ch-socket`.
+Notice the addition of `--api-socket=/tmp/ch-socket`.

 ### Add VFIO Device

 To ask the VMM to add additional VFIO device then use the `add-device` API.

 ```shell
-./ch-remote --api-socket /tmp/ch-socket add-device path=/sys/bus/pci/devices/0000:01:00.0/
+./ch-remote --api-socket=/tmp/ch-socket add-device path=/sys/bus/pci/devices/0000:01:00.0/
 ```

 ### Add Disk Device
@@ -190,7 +190,7 @@ To ask the VMM to add additional VFIO device then use the `add-device` API.
 To ask the VMM to add additional disk device then use the `add-disk` API.

 ```shell
-./ch-remote --api-socket /tmp/ch-socket add-disk path=/foo/bar/cloud.img
+./ch-remote --api-socket=/tmp/ch-socket add-disk path=/foo/bar/cloud.img
 ```

 ### Add Fs Device
@@ -198,7 +198,7 @@ To ask the VMM to add additional disk device then use the `add-disk` API.
 To ask the VMM to add additional fs device then use the `add-fs` API.

 ```shell
-./ch-remote --api-socket /tmp/ch-socket add-fs tag=myfs,socket=/foo/bar/virtiofs.sock
+./ch-remote --api-socket=/tmp/ch-socket add-fs tag=myfs,socket=/foo/bar/virtiofs.sock
 ```

 ### Add Net Device
@@ -206,7 +206,7 @@ To ask the VMM to add additional fs device then use the `add-fs` API.
 To ask the VMM to add additional network device then use the `add-net` API.

 ```shell
-./ch-remote --api-socket /tmp/ch-socket add-net tap=chtap0
+./ch-remote --api-socket=/tmp/ch-socket add-net tap=chtap0
 ```

 ### Add Pmem Device
@@ -214,7 +214,7 @@ To ask the VMM to add additional network device then use the `add-net` API.
 To ask the VMM to add additional PMEM device then use the `add-pmem` API.

 ```shell
-./ch-remote --api-socket /tmp/ch-socket add-pmem file=/foo/bar.cloud.img
+./ch-remote --api-socket=/tmp/ch-socket add-pmem file=/foo/bar.cloud.img
 ```

 ### Add Vsock Device
@@ -222,7 +222,7 @@ To ask the VMM to add additional PMEM device then use the `add-pmem` API.
 To ask the VMM to add additional vsock device then use the `add-vsock` API.

 ```shell
-./ch-remote --api-socket /tmp/ch-socket add-vsock cid=3,socket=/foo/bar/vsock.sock
+./ch-remote --api-socket=/tmp/ch-socket add-vsock cid=3,socket=/foo/bar/vsock.sock
 ```

 ### Common Across All PCI Devices
@@ -244,7 +244,7 @@ After a reboot the added PCI device will remain.
 Removing a PCI device works the same way for all kind of PCI devices. The unique identifier related to the device must be provided. This identifier can be provided by the user when adding the new device, or by default Cloud Hypervisor will assign one.

 ```shell
-./ch-remote --api-socket /tmp/ch-socket remove-device _disk0
+./ch-remote --api-socket=/tmp/ch-socket remove-device _disk0
 ```

 As per adding a PCI device to the guest, after a reboot the VM will be running without the removed PCI device.

@@ -245,7 +245,7 @@ e.g.
 ```bash
 ./cloud-hypervisor \
-    --api-socket /tmp/api \
+    --api-socket=/tmp/api \
     --cpus boot=1 \
     --memory size=4G,hugepages=on \
     --disk path=focal-server-cloudimg-amd64.raw \
@@ -260,7 +260,7 @@ requiring the IOMMU then may be hotplugged:
 e.g.

 ```bash
-./ch-remote --api-socket /tmp/api add-device path=/sys/bus/pci/devices/0000:00:04.0,iommu=on,pci_segment=1
+./ch-remote --api-socket=/tmp/api add-device path=/sys/bus/pci/devices/0000:00:04.0,iommu=on,pci_segment=1
 ```

 Devices that cannot be placed behind an IOMMU (e.g. lacking an `iommu=` option)

@@ -16,22 +16,22 @@ $ target/release/cloud-hypervisor
     --disk path=~/workloads/focal.raw \
     --cpus boot=1 --memory size=1G,shared=on \
     --cmdline "root=/dev/vda1 console=ttyS0" \
-    --serial tty --console off --api-socket /tmp/api1
+    --serial tty --console off --api-socket=/tmp/api1
 ```

 Launch the destination VM from the same directory (on the host machine):

 ```bash
-$ target/release/cloud-hypervisor --api-socket /tmp/api2
+$ target/release/cloud-hypervisor --api-socket=/tmp/api2
 ```

 Get ready for receiving migration for the destination VM (on the host machine):

 ```bash
-$ target/release/ch-remote --api-socket /tmp/api2 receive-migration unix:/tmp/sock
+$ target/release/ch-remote --api-socket=/tmp/api2 receive-migration unix:/tmp/sock
 ```

 Start to send migration for the source VM (on the host machine):

 ```bash
-$ target/release/ch-remote --api-socket /tmp/api1 send-migration --local unix:/tmp/sock
+$ target/release/ch-remote --api-socket=/tmp/api1 send-migration --local unix:/tmp/sock
 ```

 When the above commands completed, the source VM should be successfully
@@ -51,7 +51,7 @@ $ sudo /target/release/cloud-hypervisor \
     --cpus boot=1 --memory size=512M \
     --kernel vmlinux \
     --cmdline "root=/dev/vda1 console=ttyS0" \
-    --disk path=focal-1.raw path=focal-nested.raw --disk path=tmp.img\
+    --disk path=focal-1.raw path=focal-nested.raw path=tmp.img\
     --net ip=192.168.101.1
 ```
@@ -63,7 +63,7 @@ $ sudo /target/release/cloud-hypervisor \
     --cpus boot=1 --memory size=512M \
     --kernel vmlinux \
     --cmdline "root=/dev/vda1 console=ttyS0" \
-    --disk path=focal-2.raw path=focal-nested.raw --disk path=tmp.img\
+    --disk path=focal-2.raw path=focal-nested.raw path=tmp.img\
     --net ip=192.168.102.1
 ```
@@ -74,8 +74,8 @@ vm-1:~$ sudo ./cloud-hypervisor \
     --memory size=128M \
     --kernel vmlinux \
     --cmdline "console=ttyS0 root=/dev/vda1" \
-    --disk path=/dev/vdb --disk path=/dev/vdc \
-    --api-socket /tmp/api1 \
+    --disk path=/dev/vdb path=/dev/vdc \
+    --api-socket=/tmp/api1 \
     --net ip=192.168.100.1
 vm-1:~$ # setup the guest network if needed
 vm-1:~$ sudo ip addr add 192.168.101.2/24 dev ens4
@@ -108,7 +108,7 @@ echo "tmp = $tmp"
 Launch the nested destination VM (inside the guest OS of the VM 2):

 ```bash
-vm-2:~$ sudo ./cloud-hypervisor --api-socket /tmp/api2
+vm-2:~$ sudo ./cloud-hypervisor --api-socket=/tmp/api2
 vm-2:~$ # setup the guest network with the following commands if needed
 vm-2:~$ sudo ip addr add 192.168.102.2/24 dev ens4
 vm-2:~$ sudo ip link set up dev ens4
@@ -122,7 +122,7 @@ vm-2:~$ ping 192.168.101.2 # This should succeed
 Get ready for receiving migration for the nested destination VM (inside
 the guest OS of the VM 2):

 ```bash
-vm-2:~$ sudo ./ch-remote --api-socket /tmp/api2 receive-migration unix:/tmp/sock2
+vm-2:~$ sudo ./ch-remote --api-socket=/tmp/api2 receive-migration unix:/tmp/sock2
 vm-2:~$ sudo socat TCP-LISTEN:6000,reuseaddr UNIX-CLIENT:/tmp/sock2
 ```
@@ -130,7 +130,7 @@ Start to send migration for the nested source VM (inside the guest OS of
 the VM 1):

 ```bash
 vm-1:~$ sudo socat UNIX-LISTEN:/tmp/sock1,reuseaddr TCP:192.168.102.2:6000
-vm-1:~$ sudo ./ch-remote --api-socket /tmp/api1 send-migration unix:/tmp/sock1
+vm-1:~$ sudo ./ch-remote --api-socket=/tmp/api1 send-migration unix:/tmp/sock1
 ```

 When the above commands completed, the source VM should be successfully

@@ -38,6 +38,6 @@ This level is for the benefit of developers. It should be used for sporadic and
 ### `debug!()`

-Use `-v -v` to enable.
+Use `-vv` to enable.

 For the most verbose of logging messages. It is acceptable to "spam" the log with repeated invocations of the same message. This level of logging would be combined with `--log-file`.

@@ -516,7 +516,7 @@ different distances, it can be described with the following example.
 _Example_

 ```
---numa guest_numa_id=0,distances=[1@15,2@25] --numa guest_numa_id=1,distances=[0@15,2@20] guest_numa_id=2,distances=[0@25,1@20]
+--numa guest_numa_id=0,distances=[1@15,2@25] guest_numa_id=1,distances=[0@15,2@20] guest_numa_id=2,distances=[0@25,1@20]
 ```

 ### `memory_zones`
@@ -540,14 +540,14 @@ demarcate the list.
 Note that a memory zone must belong to a single NUMA node. The following
 configuration is incorrect, therefore not allowed:

-`--numa guest_numa_id=0,memory_zones=mem0 --numa guest_numa_id=1,memory_zones=mem0`
+`--numa guest_numa_id=0,memory_zones=mem0 guest_numa_id=1,memory_zones=mem0`

 _Example_

 ```
 --memory size=0
---memory-zone id=mem0,size=1G id=mem1,size=1G --memory-zone id=mem2,size=1G
---numa guest_numa_id=0,memory_zones=[mem0,mem2] --numa guest_numa_id=1,memory_zones=mem1
+--memory-zone id=mem0,size=1G id=mem1,size=1G id=mem2,size=1G
+--numa guest_numa_id=0,memory_zones=[mem0,mem2] guest_numa_id=1,memory_zones=mem1
 ```

 ### `sgx_epc_sections`
@@ -567,7 +567,7 @@ _Example_
 ```
 --sgx-epc id=epc0,size=32M id=epc1,size=64M id=epc2,size=32M
---numa guest_numa_id=0,sgx_epc_sections=epc1 --numa guest_numa_id=1,sgx_epc_sections=[epc0,epc2]
+--numa guest_numa_id=0,sgx_epc_sections=epc1 guest_numa_id=1,sgx_epc_sections=[epc0,epc2]
 ```

 ### PCI bus

@@ -25,7 +25,7 @@ $ perf record -g target/profiling/cloud-hypervisor \
     --cpus boot=1 --memory size=1G \
     --cmdline "root=/dev/pmem0p1 console=ttyS0" \
     --serial tty --console off \
-    --api-socket /tmp/api1
+    --api-socket=/tmp/api1
 ```

 For analysing the samples:
@@ -52,5 +52,5 @@ $ perf record --call-graph lbr --all-user --user-callchains -g target/release/cl
     --cpus boot=1 --memory size=1G \
     --cmdline "root=/dev/pmem0p1 console=ttyS0" \
     --serial tty --console off \
-    --api-socket /tmp/api1
+    --api-socket=/tmp/api1
 ```

@@ -25,14 +25,14 @@ First thing, we must run a Cloud Hypervisor VM:
 At any point in time when the VM is running, one might choose to pause it:

 ```bash
-./ch-remote --api-socket /tmp/cloud-hypervisor.sock pause
+./ch-remote --api-socket=/tmp/cloud-hypervisor.sock pause
 ```

 Once paused, the VM can be safely snapshot into the specified directory and
 using the following command:

 ```bash
-./ch-remote --api-socket /tmp/cloud-hypervisor.sock snapshot file:///home/foo/snapshot
+./ch-remote --api-socket=/tmp/cloud-hypervisor.sock snapshot file:///home/foo/snapshot
 ```

 Given the directory was present on the system, the snapshot will succeed and
@@ -79,7 +79,7 @@ Or using two different commands from two terminals:
 ./cloud-hypervisor --api-socket /tmp/cloud-hypervisor.sock

 # Second terminal
-./ch-remote --api-socket /tmp/cloud-hypervisor.sock restore source_url=file:///home/foo/snapshot
+./ch-remote --api-socket=/tmp/cloud-hypervisor.sock restore source_url=file:///home/foo/snapshot
 ```

 Remember the VM is restored in a `paused` state, which was the VM's state when
@@ -87,7 +87,7 @@ it was snapshot. For this reason, one must explicitly `resume` the VM before to
 start using it.

 ```bash
-./ch-remote --api-socket /tmp/cloud-hypervisor.sock resume
+./ch-remote --api-socket=/tmp/cloud-hypervisor.sock resume
 ```

 At this point, the VM is fully restored and is identical to the VM which was

@@ -94,7 +94,7 @@ VMs run in client mode. They connect to the socket created by the `dpdkvhostuser
     --memory size=1024M,hugepages=on,shared=true \
     --kernel linux/arch/x86/boot/compressed/vmlinux.bin \
     --cmdline "console=ttyS0 root=/dev/vda1 rw iommu=off" \
-    --disk path=images/focal-server-cloudimg-amd64.raw --disk vhost_user=true,socket=/var/tmp/vhost.1,num_queues=4,queue_size=128 \
+    --disk path=images/focal-server-cloudimg-amd64.raw vhost_user=true,socket=/var/tmp/vhost.1,num_queues=4,queue_size=128 \
     --console off \
     --serial tty \
     --rng