# Cloud Hypervisor VFIO-user HOWTO
VFIO-user is an *experimental* protocol that allows devices to be implemented in another process and communicated with over a socket; i.e. VFIO-user is to VFIO as vhost-user is to virtio.
The protocol is documented here: https://github.com/nutanix/libvfio-user/blob/master/docs/vfio-user.rst
Cloud Hypervisor support for such devices is *experimental*. Not all Cloud Hypervisor functionality is available; in particular, virtio-mem and IOMMU are not supported.
## Usage
The `--user-device socket=<path>` parameter is used to create a vfio-user device when creating the VM, specifying the socket to connect to. The device can also be hotplugged with `ch-remote add-user-device socket=<path>`.
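For example, assuming the VM was started with `--api-socket` pointing at `/tmp/cloud-hypervisor.sock` (both socket paths below are only examples), hotplugging looks like:

```sh
# Hotplug a vfio-user device into a running VM via the HTTP API socket
ch-remote --api-socket /tmp/cloud-hypervisor.sock \
    add-user-device socket=/tmp/vfio-user.sock
```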
## Example (GPIO device)
There is a simple GPIO device included in the libvfio-user repository: https://github.com/nutanix/libvfio-user#gpio
Run the example from the libvfio-user repository:
```sh
rm /tmp/vfio-user.sock
./build/dbg/samples/gpio-pci-idio-16 -v /tmp/vfio-user.sock &
```
Start Cloud Hypervisor:
```sh
target/debug/cloud-hypervisor \
--memory size=1G,shared=on \
--disk path=~/images/focal-server-cloudimg-amd64.raw \
--kernel ~/src/linux/vmlinux \
--cmdline "root=/dev/vda1 console=hvc0" \
--user-device socket=/tmp/vfio-user.sock
```
Inside the VM you can test the device with:
```sh
cat /sys/class/gpio/gpiochip480/base > /sys/class/gpio/export
for ((i=0;i<12;i++)); do cat /sys/class/gpio/OUT0/value; done
```
## Example (NVMe device)
Use SPDK: https://github.com/spdk/spdk
Compile with `./configure --with-vfio-user`
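For reference, a typical SPDK build sequence looks like the following; the dependency and submodule steps are standard SPDK setup and may need adjusting for your distribution:

```sh
git clone https://github.com/spdk/spdk
cd spdk
git submodule update --init
sudo ./scripts/pkgdep.sh        # install build dependencies
./configure --with-vfio-user
make -j$(nproc)
```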
Create an NVMe controller listening on a vfio-user socket with a simple block device:
```sh
sudo scripts/setup.sh
rm ~/images/test-disk.raw
truncate ~/images/test-disk.raw -s 128M
mkfs.ext4 ~/images/test-disk.raw
sudo killall ./build/bin/nvmf_tgt
sudo ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
sleep 2
sudo ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
sudo rm -rf /tmp/nvme-vfio-user
sudo mkdir -p /tmp/nvme-vfio-user
sudo ./scripts/rpc.py bdev_aio_create ~/images/test-disk.raw test 512
sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode -a -s test
sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode test
sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode -t VFIOUSER -a /tmp/nvme-vfio-user -s 0
sudo chown $USER.$USER -R /tmp/nvme-vfio-user
```
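At this point the listener directory should contain the control socket that Cloud Hypervisor connects to in the next step:

```sh
# Expect to see the 'cntrl' socket passed to --user-device below
ls -l /tmp/nvme-vfio-user/
```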
Start Cloud Hypervisor:
```sh
target/debug/cloud-hypervisor \
--memory size=1G,shared=on \
--disk path=~/images/focal-server-cloudimg-amd64.raw \
--kernel ~/src/linux/vmlinux \
--cmdline "root=/dev/vda1 console=hvc0" \
--user-device socket=/tmp/nvme-vfio-user/cntrl
```
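Inside the VM the controller should appear as a regular NVMe device. The exact device name depends on the guest, but it can typically be checked with:

```sh
# The SPDK-backed namespace usually shows up as /dev/nvme0n1 (name may vary)
lsblk
dmesg | grep -i nvme
```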