docs: Replace every mention of ClearLinux with Ubuntu

Now that our CI has transitioned exclusively from ClearLinux to Ubuntu
images, let's update the documentation to refer to Ubuntu images
instead of ClearLinux ones.

Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Author: Sebastien Boeuf <sebastien.boeuf@intel.com>
Date:   2020-07-02 18:30:16 +02:00
Parent: b452e8be00
Commit: a3342bdb25
10 changed files with 59 additions and 60 deletions


@@ -127,12 +127,12 @@ Firmware](https://github.com/cloud-hypervisor/rust-hypervisor-firmware) project
an ELF
formatted KVM firmware for `cloud-hypervisor` to directly boot into.
We need to get the latest `rust-hypervisor-firmware` release and also a working cloud image. Here we will use a Clear Linux image:
We need to get the latest `rust-hypervisor-firmware` release and also a working cloud image. Here we will use an Ubuntu image:
```shell
$ pushd $CLOUDH
$ wget https://download.clearlinux.org/releases/31890/clear/clear-31890-kvm.img.xz
$ unxz clear-31890-kvm.img.xz
$ wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
$ qemu-img convert -p -f qcow2 -O raw focal-server-cloudimg-amd64.img focal-server-cloudimg-amd64.raw
$ wget https://github.com/cloud-hypervisor/rust-hypervisor-firmware/releases/download/0.2.8/hypervisor-fw
$ popd
```
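As a quick sanity check (an illustrative addition, assuming the `wget` and `qemu-img` steps above succeeded), you can confirm the conversion produced a raw image before booting it:

```shell
$ qemu-img info focal-server-cloudimg-amd64.raw   # expect "file format: raw"
$ ls -lh hypervisor-fw
```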
@@ -142,7 +142,7 @@ $ pushd $CLOUDH
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor/target/release/cloud-hypervisor
$ ./cloud-hypervisor/target/release/cloud-hypervisor \
--kernel ./hypervisor-fw \
--disk path=clear-31890-kvm.img \
--disk path=focal-server-cloudimg-amd64.raw \
--cpus boot=4 \
--memory size=1024M \
--net "tap=,mac=,ip=,mask=" \
@@ -178,18 +178,18 @@ The `vmlinux` kernel image will then be located at `linux-cloud-hypervisor/arch/
#### Disk image
For the disk image, we will use a Clear Linux cloud image that contains a root partition:
For the disk image, we will use an Ubuntu cloud image that contains a root partition:
```shell
$ pushd $CLOUDH
$ wget https://download.clearlinux.org/releases/31890/clear/clear-31890-kvm.img.xz
$ unxz clear-31890-kvm.img.xz
$ wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
$ qemu-img convert -p -f qcow2 -O raw focal-server-cloudimg-amd64.img focal-server-cloudimg-amd64.raw
$ popd
```
#### Booting the guest VM
Now we can directly boot into our custom kernel and make it use the Clear Linux root partition.
Now we can directly boot into our custom kernel and make it use the Ubuntu root partition.
If we want to have 4 vCPUs and 1024 MBytes of memory:
```shell
@@ -197,8 +197,8 @@ $ pushd $CLOUDH
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor/target/release/cloud-hypervisor
$ ./cloud-hypervisor/target/release/cloud-hypervisor \
--kernel ./linux-cloud-hypervisor/arch/x86/boot/compressed/vmlinux.bin \
--disk path=clear-31890-kvm.img \
--cmdline "console=hvc0 reboot=k panic=1 nomodules i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd root=/dev/vda3" \
--disk path=focal-server-cloudimg-amd64.raw \
--cmdline "console=hvc0 root=/dev/vda1 rw" \
--cpus boot=4 \
--memory size=1024M \
--net "tap=,mac=,ip=,mask=" \
@@ -217,8 +217,8 @@ $ ./cloud-hypervisor/target/release/cloud-hypervisor \
--kernel ./linux-cloud-hypervisor/arch/x86/boot/compressed/vmlinux.bin \
--console off \
--serial tty \
--disk path=clear-31890-kvm.img \
--cmdline "console=ttyS0 reboot=k panic=1 nomodules i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd root=/dev/vda3" \
--disk path=focal-server-cloudimg-amd64.raw \
--cmdline "console=ttyS0 root=/dev/vda1 rw" \
--cpus boot=4 \
--memory size=1024M \
--net "tap=,mac=,ip=,mask=" \
@@ -229,8 +229,7 @@ $ ./cloud-hypervisor/target/release/cloud-hypervisor \
`cloud-hypervisor` is in a very early, pre-alpha stage. Use at your own risk!
As of 2020-04-23, the following cloud images are supported:
* [Clear Linux](https://download.clearlinux.org/current/) (cloudguest and kvm)
As of 2020-07-02, the following cloud images are supported:
* [Ubuntu Bionic](https://cloud-images.ubuntu.com/bionic/current/) (cloudimg)
* [Ubuntu Focal](https://cloud-images.ubuntu.com/focal/current/) (cloudimg)


@@ -122,10 +122,10 @@ We want to create a virtual machine with the following characteristics:
* 4 vCPUs
* 1 GB of RAM
* 1 virtio based networking interface
* Direct kernel boot from a custom 5.5.0 Linux kernel located at
* Direct kernel boot from a custom 5.6.0-rc4 Linux kernel located at
`/opt/clh/kernel/vmlinux-virtio-fs-virtio-iommu`
* Using a Clear Linux image as its root filesystem, located at
`/opt/clh/images/clear-30080-kvm.img`
* Using an Ubuntu image as its root filesystem, located at
`/opt/clh/images/focal-server-cloudimg-amd64.raw`
```shell
#!/bin/bash
@@ -137,8 +137,8 @@ curl --unix-socket /tmp/cloud-hypervisor.sock -i \
-d '{
"cpus":{"boot_vcpus": 4, "max_vcpus": 4},
"kernel":{"path":"/opt/clh/kernel/vmlinux-virtio-fs-virtio-iommu"},
"cmdline":{"args":"console=hvc0 reboot=k panic=1 nomodules i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd root=/dev/vda3"},
"disks":[{"path":"/opt/clh/images/clear-30080-kvm.img"}],
"cmdline":{"args":"console=ttyS0 console=hvc0 root=/dev/vda1 rw"},
"disks":[{"path":"/opt/clh/images/focal-server-cloudimg-amd64.raw"}],
"rng":{"src":"/dev/urandom"},
"net":[{"ip":"192.168.10.10", "mask":"255.255.255.0", "mac":"12:34:56:78:90:01"}]
}'
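# Illustrative follow-up (not part of the original snippet): the VM created
# above is not running yet; it still needs to be started through the vm.boot API.
curl --unix-socket /tmp/cloud-hypervisor.sock -i \
     -X PUT 'http://localhost/api/v1/vm.boot'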
@@ -305,8 +305,8 @@ APIs work together, let's look at a complete VM creation flow, from the
-d '{
"cpus":{"boot_vcpus": 4, "max_vcpus": 4},
"kernel":{"path":"/opt/clh/kernel/vmlinux-virtio-fs-virtio-iommu"},
"cmdline":{"args":"console=hvc0 reboot=k panic=1 nomodules i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd root=/dev/vda3"},
"disks":[{"path":"/opt/clh/images/clear-30080-kvm.img"}],
"cmdline":{"args":"console=ttyS0 console=hvc0 root=/dev/vda1 rw"},
"disks":[{"path":"/opt/clh/images/focal-server-cloudimg-amd64.raw"}],
"rng":{"src":"/dev/urandom"},
"net":[{"ip":"192.168.10.10", "mask":"255.255.255.0", "mac":"12:34:56:78:90:01"}]
}'
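# Illustrative follow-up: the resulting VM state can be inspected at any time
# through the vm.info API (jq is only used here for pretty-printing).
curl --unix-socket /tmp/cloud-hypervisor.sock \
     'http://localhost/api/v1/vm.info' | jq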


@@ -43,7 +43,7 @@ to easily grep for the tracing logs (e.g.
```
./target/debug/cloud-hypervisor \
--kernel ~/rust-hypervisor-firmware/target/target/release/hypervisor-fw \
--disk path=~/hypervisor/images/clear-30080-kvm.img \
--disk path=~/hypervisor/images/focal-server-cloudimg-amd64.raw \
--cpus 4 \
--memory size=1024M \
--rng \


@@ -52,14 +52,14 @@ Direct kernel boot option is preferred since we need to provide the custom kerne
Because _vhost-user_ expects a dedicated process (__virtiofsd__ in this case) to be able to access the guest RAM in order to communicate with the driver running in the guest through the _virtqueues_, the `--memory` option needs to be slightly modified: it must specify a backing file for the memory so that an external process can access it.
Assuming you have `clear-kvm.img` and `custom-vmlinux.bin` on your system, here is the __cloud-hypervisor__ command you need to run:
Assuming you have `focal-server-cloudimg-amd64.raw` and `custom-vmlinux.bin` on your system, here is the __cloud-hypervisor__ command you need to run:
```bash
./cloud-hypervisor \
--cpus 4 \
--memory "size=512,file=/dev/shm" \
--disk path=clear-kvm.img \
--disk path=focal-server-cloudimg-amd64.raw \
--kernel custom-vmlinux.bin \
--cmdline "console=ttyS0 reboot=k panic=1 nomodules root=/dev/vda3" \
--cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw" \
--fs tag=myfs,socket=/tmp/virtiofs,num_queues=1,queue_size=512
```
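Inside the guest, the shared directory can then be mounted through the `virtiofs` filesystem type, using the tag passed with `--fs` (a minimal sketch; `/mnt` is an arbitrary mount point):

```shell
mount -t virtiofs myfs /mnt
```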


@@ -5,9 +5,7 @@ Currently Cloud Hypervisor only supports hot plugging of CPU devices.
## Kernel support
For hotplug on Cloud Hypervisor, ACPI GED support is needed. This can be achieved either by turning on `CONFIG_ACPI_REDUCED_HARDWARE_ONLY`
or by using this kernel patch (available in 5.5rc1 and later): https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/drivers/acpi/Makefile?id=ac36d37e943635fc072e9d4f47e40a48fbcdb3f0
This patch is integrated into the Clear Linux KVM and cloudguest images.
or by using this kernel patch (available in 5.5-rc1 and later): https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/drivers/acpi/Makefile?id=ac36d37e943635fc072e9d4f47e40a48fbcdb3f0
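To check whether a running kernel was built with this option, one hedged approach (note that `/proc/config.gz` is only present when `CONFIG_IKCONFIG_PROC` is enabled):

```shell
zgrep CONFIG_ACPI_REDUCED_HARDWARE_ONLY /proc/config.gz
# or, with the kernel build tree at hand:
grep CONFIG_ACPI_REDUCED_HARDWARE_ONLY .config
```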
## CPU Hot Plug
@@ -22,8 +20,9 @@ To use CPU hotplug start the VM with the number of max vCPUs greater than the nu
$ pushd $CLOUDH
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor/target/release/cloud-hypervisor
$ ./cloud-hypervisor/target/release/cloud-hypervisor \
--kernel ./hypervisor-fw \
--disk path=clear-31890-kvm.img \
--kernel custom-vmlinux.bin \
--cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw" \
--disk path=focal-server-cloudimg-amd64.raw \
--cpus boot=4,max=8 \
--memory size=1024M \
--net "tap=,mac=,ip=,mask=" \
@@ -75,8 +74,9 @@ To use memory hotplug start the VM specifying some size RAM in the "hotplug_size
$ pushd $CLOUDH
$ sudo setcap cap_net_admin+ep ./cloud-hypervisor/target/release/cloud-hypervisor
$ ./cloud-hypervisor/target/release/cloud-hypervisor \
--kernel ./hypervisor-fw \
--disk path=clear-31890-kvm.img \
--kernel custom-vmlinux.bin \
--cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw" \
--disk path=focal-server-cloudimg-amd64.raw \
--cpus boot=4,max=8 \
--memory size=1024M,hotplug_size=8192M \
--net "tap=,mac=,ip=,mask=" \
@@ -110,4 +110,4 @@ Due to guest OS limitations it is necessary to ensure that the amount of memory added
The same API can also be used to reduce the desired RAM for a VM, but the change will not be applied until the VM is rebooted.
Memory and CPU resizing can be combined into the same HTTP API request.
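As an illustration of such a combined request (a sketch, assuming the API socket used in the other examples; `desired_ram` is expressed in bytes):

```shell
curl --unix-socket /tmp/cloud-hypervisor.sock -i \
     -X PUT 'http://localhost/api/v1/vm.resize' \
     -H 'Content-Type: application/json' \
     -d '{"desired_vcpus": 8, "desired_ram": 2147483648}'
```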


@@ -88,9 +88,9 @@ virtual IOMMU:
./cloud-hypervisor \
--cpus 1 \
--memory size=512M \
--disk path=clear-kvm.img,iommu=on \
--disk path=focal-server-cloudimg-amd64.raw,iommu=on \
--kernel custom-bzImage \
--cmdline "console=ttyS0 root=/dev/vda3" \
--cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw" \
```
From a guest perspective, it is easy to verify if the device is protected by the virtual IOMMU.
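One quick way to check (an illustrative sketch, not part of the original document) is to list the IOMMU groups the guest kernel has created:

```shell
# Inside the guest: each device attached to the virtual IOMMU appears in a group
ls /sys/kernel/iommu_groups/
```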
@@ -165,9 +165,9 @@ be consumed.
./cloud-hypervisor \
--cpus 1 \
--memory size=8G,file=/dev/hugepages \
--disk path=clear-kvm.img \
--disk path=focal-server-cloudimg-amd64.raw \
--kernel custom-bzImage \
--cmdline "console=ttyS0 root=/dev/vda3 hugepagesz=2M hugepages=2048" \
--cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw hugepagesz=2M hugepages=2048" \
--net tap=,mac=,iommu=on
```
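Since this setup relies on huge pages, it can be worth confirming that the host has enough of them reserved before starting the VM (an illustrative check; 4096 pages of 2 MiB match the 8G example above):

```shell
echo 4096 | sudo tee /proc/sys/vm/nr_hugepages   # 4096 x 2 MiB = 8 GiB
grep Huge /proc/meminfo
```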
@@ -182,9 +182,9 @@ passing through is `0000:00:01.0`.
./cloud-hypervisor \
--cpus 1 \
--memory size=8G,file=/dev/hugepages \
--disk path=clear-kvm.img \
--disk path=focal-server-cloudimg-amd64.raw \
--kernel custom-bzImage \
--cmdline "console=ttyS0 root=/dev/vda3 kvm-intel.nested=1 vfio_iommu_type1.allow_unsafe_interrupts rw hugepagesz=2M hugepages=2048" \
--cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw kvm-intel.nested=1 vfio_iommu_type1.allow_unsafe_interrupts rw hugepagesz=2M hugepages=2048" \
--device path=/sys/bus/pci/devices/0000:00:01.0,iommu=on
```
@@ -202,8 +202,8 @@ Last thing is to start the L2 guest with the huge pages memory backend.
./cloud-hypervisor \
--cpus 1 \
--memory size=4G,file=/dev/hugepages \
--disk path=clear-kvm.img \
--disk path=focal-server-cloudimg-amd64.raw \
--kernel custom-bzImage \
--cmdline "console=ttyS0 root=/dev/vda3" \
--cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw" \
--device path=/sys/bus/pci/devices/0000:00:04.0
```


@@ -25,9 +25,9 @@ Use one `--net` command-line argument from cloud-hypervisor to specify the emulat
./cloud-hypervisor \
--cpus 4 \
--memory "size=512M" \
--disk path=my-root-disk.img \
--disk path=focal-server-cloudimg-amd64.raw \
--kernel my-vmlinux.bin \
--cmdline "console=ttyS0 reboot=k panic=1 nomodules root=/dev/vda3" \
--cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw" \
--net tap=ich0,mac=a4:a1:c2:00:00:01,ip=192.168.4.2,mask=255.255.255.0,num_queues=2,queue_size=256 \
tap=ich1,mac=a4:a1:c2:00:00:02,ip=10.0.1.2,mask=255.255.255.0,num_queues=2,queue_size=256
```
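Once the guest is up, the multi-queue setup can be verified from inside it (an illustrative check; the interface name `eth0` is an assumption and may differ, e.g. `ens3` on Ubuntu):

```shell
ethtool -l eth0   # "Combined" should report the configured number of queues
```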


@@ -66,10 +66,10 @@ takes the device's sysfs path as an argument. In our example it is
```
./target/debug/cloud-hypervisor \
--kernel ~/vmlinux \
--disk path=~/clear-29160-kvm.img \
--disk path=~/focal-server-cloudimg-amd64.raw \
--console off \
--serial tty \
--cmdline "console=ttyS0 reboot=k panic=1 nomodules i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd root=/dev/vda3" \
--cmdline "console=ttyS0 root=/dev/vda1 rw" \
--cpus 4 \
--memory size=512M \
--device path=/sys/bus/pci/devices/0000:01:00.0/


@@ -73,10 +73,10 @@ VMs run in client mode. They connect to the socket created by the `dpdkvhostuser
# From the test terminal. We need to create one vhost-user-blk device for the --disk.
./cloud-hypervisor \
--cpus boot=4 \
--memory size=1024M,file=/dev/hugepages \
--memory size=1024M,hugepages=on \
--kernel linux/arch/x86/boot/compressed/vmlinux.bin \
--cmdline "console=ttyS0 reboot=k panic=1 nomodules i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd root=/dev/vda3 iommu=off" \
--disk "path=images/clear-kvm.img" "num_queues=4,queue_size=128,vhost_user=true,socket=/var/tmp/vhost.1" \
--cmdline "console=ttyS0 root=/dev/vda1 rw iommu=off" \
--disk path=images/focal-server-cloudimg-amd64.raw vhost_user=true,socket=/var/tmp/vhost.1,num_queues=4,queue_size=128 \
--console off \
--serial tty \
--rng
@@ -88,11 +88,11 @@ login in guest
# Use lsblk command to find out vhost-user-blk device
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 8.5G 0 disk
├─vda1 253:1 0 511M 0 part
├─vda2 253:2 0 32M 0 part [SWAP]
└─vda3 253:3 0 8G 0 part /
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 2.2G 0 disk
├─vda1 252:1 0 2.1G 0 part /
├─vda14 252:14 0 4M 0 part
└─vda15 252:15 0 106M 0 part /boot/efi
vdb 253:16 0 512M 0 disk
The vhost-user-blk device is /dev/vdb
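# Illustrative follow-up (not part of the original snippet): the new disk can
# be used like any other block device.
sudo mkfs.ext4 /dev/vdb
sudo mount /dev/vdb /mnt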


@@ -76,19 +76,19 @@ VMs run in client mode. They connect to the socket created by the `dpdkvhostuser
# From one terminal. We need to give the cloud-hypervisor binary the NET_ADMIN capability for it to set TAP interfaces up on the host.
./cloud-hypervisor \
--cpus boot=2 \
--memory size=512M,file=/dev/hugepages \
--memory size=512M,hugepages=on \
--kernel vmlinux \
--cmdline "reboot=k panic=1 nomodules i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd root=/dev/vda3" \
--disk path=clear-kvm.img \
--net "mac=52:54:00:02:d9:01,vhost_user=true,socket=/var/run/openvswitch/vhost-user1,num_queues=4"
--cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw" \
--disk path=focal-server-cloudimg-amd64.raw \
--net mac=52:54:00:02:d9:01,vhost_user=true,socket=/var/run/openvswitch/vhost-user1,num_queues=4
# From another terminal. We need to give the cloud-hypervisor binary the NET_ADMIN capability for it to set TAP interfaces up on the host.
./cloud-hypervisor \
--cpus boot=2 \
--memory size=512M,file=/dev/hugepages \
--memory size=512M,hugepages=on \
--kernel vmlinux \
--cmdline "reboot=k panic=1 nomodules i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd root=/dev/vda3" \
--disk path=clear-kvm.img \
--cmdline "console=ttyS0 console=hvc0 root=/dev/vda1 rw" \
--disk path=focal-server-cloudimg-amd64.raw \
--net "mac=52:54:20:11:C5:02,vhost_user=true,socket=/var/run/openvswitch/vhost-user2,num_queues=4"
```
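To confirm that the two VMs can reach each other across the OVS/DPDK bridge, assign an address to each guest interface and ping across (a sketch; the addresses and the `eth0` interface name are illustrative, not from the original document):

```shell
# In the first guest:
ip addr add 172.100.0.1/24 dev eth0 && ip link set eth0 up
# In the second guest:
ip addr add 172.100.0.2/24 dev eth0 && ip link set eth0 up
ping -c 3 172.100.0.1
```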