How to use virtio-fs
In the context of virtualization, it is always convenient to be able to share a directory from the host with the guest.
virtio-fs, also known as vhost-user-fs, is a virtual device defined by the VIRTIO specification which allows any VMM to perform filesystem sharing.
Pre-requisites
The daemon
This virtual device relies on the vhost-user protocol, which assumes the backend (device emulation) is handled by a dedicated process running on the host. This daemon is called virtiofsd and needs to be present on the host.
Install virtiofsd
VIRTIOFSD_URL="$(curl --silent https://api.github.com/repos/cloud-hypervisor/nemu/releases/latest | grep "browser_download_url" | grep "virtiofsd-x86_64" | grep -o 'https://.*[^ "]')"
wget --quiet $VIRTIOFSD_URL -O "virtiofsd"
chmod +x "virtiofsd"
sudo setcap cap_sys_admin+epi "virtiofsd"
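If you want to double check that the capability was applied to the binary, getcap (part of the libcap tools on most distributions) should report cap_sys_admin on it:
getcap virtiofsd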
Create shared directory
mkdir /tmp/shared_dir
Run virtiofsd
./virtiofsd \
-d \
-o vhost_user_socket=/tmp/virtiofs \
-o source=/tmp/shared_dir \
-o cache=always
The cache=always option should be the default when using virtiofsd with the cloud-hypervisor VMM. This allows the daemon to memory map the shared files, which gives better I/O performance.
The cache=none option is another way to run the daemon, but because the file contents are conveyed through the virtqueues in this case, I/O performance is impacted.
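For reference, running the daemon with cache=none only changes that last option; the socket and shared directory stay the same as above:
./virtiofsd \
-d \
-o vhost_user_socket=/tmp/virtiofs \
-o source=/tmp/shared_dir \
-o cache=none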
The kernel
In order to leverage virtio-fs support from within the guest, and because the code has not been merged into the upstream Linux kernel yet, it is required to build a custom kernel embedding the patches.
The branch virtio-pmem_and_virtio-fs on the repository https://github.com/sboeuf/linux.git includes all the patches needed to support virtio-fs.
Make sure to build a kernel out of this branch that can then be used to boot the VM.
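As an illustrative sketch only (the kernel configuration is up to you; starting from the default x86_64 config is just an assumption here), building a kernel from that branch could look like this:
git clone --depth 1 --branch virtio-pmem_and_virtio-fs https://github.com/sboeuf/linux.git
cd linux
make x86_64_defconfig
make -j "$(nproc)"
The resulting kernel binary (for instance vmlinux or arch/x86/boot/bzImage, depending on what your boot flow expects) is what is referred to as custom-vmlinux.bin later in this document.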
How to share directories with cloud-hypervisor
Start the VM
Once the daemon is running, the --fs option from cloud-hypervisor needs to be used.
Direct kernel boot is the preferred option since we need to provide the custom kernel including the virtio-fs patches. We could boot from hypervisor-fw if we had previously edited the image to replace the kernel binary.
Because vhost-user expects a dedicated process (virtiofsd in this case) to be able to access the guest RAM to communicate through the virtqueues with the driver running in the guest, the --memory option needs to be slightly modified. It needs to specify a backing file for the memory so that an external process can access it.
Assuming you have clear-kvm.img and custom-vmlinux.bin on your system, here is the cloud-hypervisor command you need to run:
./cloud-hypervisor \
--cpus 4 \
--memory "size=512,file=/dev/shm" \
--disk path=clear-kvm.img \
--kernel custom-vmlinux.bin \
--cmdline "console=ttyS0 reboot=k panic=1 nomodules root=/dev/vda3" \
--fs tag=virtiofs,sock=/tmp/virtiofs,num_queues=1,queue_size=512
By default, DAX is enabled with a cache window of 8GiB. You can specify a custom size (let's say 4GiB for this example) for the cache by explicitly setting DAX and the cache size:
--fs tag=virtiofs,sock=/tmp/virtiofs,num_queues=1,queue_size=512,dax=on,cache_size=4G
In case you don't want to use a shared cache window to pass the shared files' content, you will have to explicitly disable DAX with dax=off. Note that in this case, the cache_size parameter will be ignored.
--fs tag=virtiofs,sock=/tmp/virtiofs,num_queues=1,queue_size=512,dax=off
Mount the shared directory
The last step is to mount the shared directory inside the guest, using the virtio_fs filesystem type.
mkdir mount_dir
mount \
-t virtio_fs virtiofs mount_dir/ \
-o rootmode=040000,user_id=0,group_id=0,dax
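As a quick sanity check, using the paths from the examples above, you can create a file in the shared directory on the host and read it back from within the guest:
echo "hello from the host" > /tmp/shared_dir/hello
cat mount_dir/hello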
The tag needs to be consistent with what has been provided through the cloud-hypervisor command line, which happens to be virtiofs in this example.
The dax option must be removed in case the shared cache region is not enabled from the VMM.
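For instance, when the VMM was started with dax=off, the same mount command is used without that option:
mount \
-t virtio_fs virtiofs mount_dir/ \
-o rootmode=040000,user_id=0,group_id=0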