============================
Sharing files with Virtio-FS
============================

.. contents::

Virtio-FS
=========

Virtio-FS is a shared file system that lets virtual machines access
a directory tree on the host. Unlike existing approaches, it
is designed to offer local file system semantics and performance.

See https://virtio-fs.gitlab.io/

Host setup
==========

Almost all virtio devices (all that use virtqueues) require access to
at least certain portions of guest RAM (possibly policed by DMA). In
the case of virtiofsd, much like other vhost-user (see
https://www.qemu.org/docs/master/interop/vhost-user.html) virtio
devices that are realized by a userspace process, this in practice
means that QEMU needs to allocate the backing memory for all the guest
RAM as shared memory. As of QEMU 4.2, it is possible to explicitly
specify a memory backend when specifying the NUMA topology. This
method is, however, only viable for machine types that support
NUMA. As of QEMU 5.0.0 and libvirt 6.9.0, it is possible to
specify the memory backend without NUMA (using the so-called
memobject interface).

One of the following:

* Use memfd memory

  No host setup is required when using the Linux memfd memory backend.

* Use file-backed memory

  Configure the directory where the files backing the memory will be stored
  with the ``memory_backing_dir`` option in ``/etc/libvirt/qemu.conf``

  ::

     # This directory is used for memoryBacking source if configured as file.
     # NOTE: big files will be stored here
     memory_backing_dir = "/dev/shm/"

* Use hugepage-backed memory

  Make sure there are enough huge pages allocated for the requested guest memory.
  For example, for one guest with 2 GiB of RAM backed by 2 MiB hugepages:

  ::

     # virsh allocpages 2M 1024

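  The page count passed to ``virsh allocpages`` follows from dividing the
  guest RAM by the page size: 2 GiB / 2 MiB = 1024. A small shell sketch of
  that arithmetic:

  ::

     guest_mem_kib=$((2 * 1024 * 1024))   # 2 GiB expressed in KiB
     page_kib=$((2 * 1024))               # 2 MiB expressed in KiB
     echo $((guest_mem_kib / page_kib))   # prints 1024
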
Guest setup
===========

#. Specify the NUMA topology

   in the domain XML of the guest (this step is only required for the NUMA
   case). For the simplest one-node topology, for a guest with 2 GiB of RAM
   and 8 vCPUs:

   ::

      <domain>
        ...
        <cpu ...>
          <numa>
            <cell id='0' cpus='0-7' memory='2' unit='GiB' memAccess='shared'/>
          </numa>
        </cpu>
        ...
      </domain>

   Note that the ``cpu`` element might already be specified and only one is
   allowed.

#. Specify the memory backend

   One of the following:

   * memfd memory

     ::

        <domain>
          ...
          <memoryBacking>
            <source type='memfd'/>
            <access mode='shared'/>
          </memoryBacking>
          ...
        </domain>

   * File-backed memory

     ::

        <domain>
          ...
          <memoryBacking>
            <access mode='shared'/>
          </memoryBacking>
          ...
        </domain>

     This will create a file in the directory specified in ``qemu.conf``.

   * Hugepage-backed memory

     ::

        <domain>
          ...
          <memoryBacking>
            <hugepages>
              <page size='2' unit='M'/>
            </hugepages>
            <access mode='shared'/>
          </memoryBacking>
          ...
        </domain>

#. Add the ``vhost-user-fs`` QEMU device via the ``filesystem`` element

   ::

      <domain>
        ...
        <devices>
          ...
          <filesystem type='mount' accessmode='passthrough'>
            <driver type='virtiofs'/>
            <source dir='/path'/>
            <target dir='mount_tag'/>
          </filesystem>
          ...
        </devices>
      </domain>

   Note that despite its name, the ``target dir`` is actually a mount tag and
   does not have to correspond to the desired mount point in the guest.

   So far, ``passthrough`` is the only supported access mode and it requires
   running the ``virtiofsd`` daemon as root.

#. Boot the guest and mount the filesystem

   ::

      guest# mount -t virtiofs mount_tag /mnt/mount/path

   Note: this requires virtiofs support in the guest kernel (Linux v5.4 or
   later).

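   To make the mount persistent across guest reboots, an ``/etc/fstab``
   entry can be added inside the guest (a sketch reusing the ``mount_tag``
   and mount point from the example above):

   ::

      mount_tag /mnt/mount/path virtiofs defaults 0 0
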
Optional parameters
===================

More optional elements can be specified

::

   <driver type='virtiofs' queue='1024'/>
   <binary path='/usr/libexec/virtiofsd' xattr='on'>
     <cache mode='always'/>
     <lock posix='on' flock='on'/>
   </binary>

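These elements are children of the ``filesystem`` element described above;
a sketch showing them in context (the path, mount tag and option values are
illustrative, carried over from the earlier examples):

::

   <filesystem type='mount' accessmode='passthrough'>
     <driver type='virtiofs' queue='1024'/>
     <binary path='/usr/libexec/virtiofsd' xattr='on'>
       <cache mode='always'/>
       <lock posix='on' flock='on'/>
     </binary>
     <source dir='/path'/>
     <target dir='mount_tag'/>
   </filesystem>
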
Externally-launched virtiofsd
=============================

Libvirtd can also connect the ``vhost-user-fs`` device to a ``virtiofsd``
daemon launched outside of libvirtd. In that case socket permissions,
the mount tag and all the virtiofsd options are out of libvirtd's
control and need to be set by the application running virtiofsd.

::

   <filesystem type='mount'>
     <driver type='virtiofs' queue='1024'/>
     <source socket='/var/virtiofsd.sock'/>
     <target dir='tag'/>
   </filesystem>
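
For reference, launching the daemon manually might look like the following
sketch. The exact option names depend on the virtiofsd implementation in
use (the options shown are those of the Rust virtiofsd; verify against
``virtiofsd --help`` on your system):

::

   host# /usr/libexec/virtiofsd --socket-path=/var/virtiofsd.sock --shared-dir /path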