===========================
Sharing files with Virtiofs
===========================

.. contents::

Virtiofs
========

Virtiofs is a shared file system that lets virtual machines access
a directory tree on the host. Unlike existing approaches, it
is designed to offer local file system semantics and performance.

See https://virtio-fs.gitlab.io/

*Note:* virtiofs currently does not support migration, so operations such as
migration, save/managed-save, or snapshots with memory are not supported if
a VM has a virtiofs filesystem connected.

Sharing a host directory with a guest
=====================================

#. Add the following domain XML elements to share the host directory `/path`
   with the guest

   ::

     <domain>
       ...
       <memoryBacking>
         <source type='memfd'/>
         <access mode='shared'/>
       </memoryBacking>
       ...
       <devices>
         ...
         <filesystem type='mount' accessmode='passthrough'>
           <driver type='virtiofs' queue='1024'/>
           <source dir='/path'/>
           <target dir='mount_tag'/>
         </filesystem>
         ...
       </devices>
     </domain>

   Don't forget the ``<memoryBacking>`` elements. They are necessary for the
   vhost-user connection with the ``virtiofsd`` daemon.

   Note that despite its name, the ``target dir`` is an arbitrary string called
   a mount tag that is used inside the guest to identify the shared file system
   to be mounted. It does not have to correspond to the desired mount point in
   the guest.

#. Boot the guest and mount the filesystem

   ::

     guest# mount -t virtiofs mount_tag /mnt/mount/path

   Note: this requires virtiofs support in the guest kernel (Linux v5.4 or
   later)
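
The mount can also be made persistent inside the guest via ``/etc/fstab``. A
minimal sketch, assuming the mount tag and mount point from the example above:

::

  # guest /etc/fstab entry
  mount_tag /mnt/mount/path virtiofs defaults 0 0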

Running unprivileged
====================

In unprivileged mode (``qemu:///session``), mapping user/group IDs is available
(since libvirt version 10.0.0). The root user (ID 0) in the guest will be mapped
to the current user on the host.

The rest of the IDs will be mapped to the subordinate user and group IDs
specified in `/etc/subuid` and `/etc/subgid`:

::

  $ cat /etc/subuid
  jtomko:100000:65536
  $ cat /etc/subgid
  jtomko:100000:65536

To manually tweak the user ID mapping, the `idmap` element can be used.
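
For example, a hypothetical ``idmap`` mapping the first 65536 guest IDs to the
subordinate ID range shown above (the exact ranges must match the entries in
`/etc/subuid` and `/etc/subgid` for the user running the daemon):

::

  <filesystem type='mount' accessmode='passthrough'>
    ...
    <idmap>
      <uid start='0' target='100000' count='65536'/>
      <gid start='0' target='100000' count='65536'/>
    </idmap>
  </filesystem>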

Optional parameters
===================

More optional elements can be specified

::

  <filesystem type='mount' accessmode='passthrough'>
    <driver type='virtiofs' queue='1024'/>
    ...
    <binary path='/usr/libexec/virtiofsd' xattr='on'>
      <cache mode='always'/>
      <lock posix='on' flock='on'/>
    </binary>
  </filesystem>

Externally-launched virtiofsd
=============================

Libvirtd can also connect the ``vhost-user-fs`` device to a ``virtiofsd``
daemon launched outside of libvirtd. In that case socket permissions,
the mount tag and all the virtiofsd options are out of libvirtd's
control and need to be set by the application running virtiofsd.

::

  <filesystem type='mount'>
    <driver type='virtiofs' queue='1024'/>
    <source socket='/var/virtiofsd.sock'/>
    <target dir='tag'/>
  </filesystem>
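
For illustration, such a daemon might be started before the guest with options
matching the XML above (exact option names vary between ``virtiofsd``
implementations and versions, so consult your virtiofsd's documentation):

::

  host# /usr/libexec/virtiofsd --socket-path=/var/virtiofsd.sock --shared-dir=/path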

Other options for vhost-user memory setup
=========================================

The following information is necessary if you are using older versions of QEMU
and libvirt or have special memory backend requirements.

Almost all virtio devices (all that use virtqueues) require access to
at least certain portions of guest RAM (possibly policed by DMA). In the
case of virtiofsd, much like other vhost-user virtio devices that are
realized by a userspace process (see
https://www.qemu.org/docs/master/interop/vhost-user.html), this in practice
means that QEMU needs to allocate the backing memory for all the guest
RAM as shared memory. As of QEMU 4.2, it is possible to explicitly
specify a memory backend when specifying the NUMA topology. This
method is however only viable for machine types that do support
NUMA. As of QEMU 5.0.0 and libvirt 6.9.0, it is possible to
specify the memory backend without NUMA (using the so-called
memobject interface).

#. Set up the memory backend

   * Use memfd memory

     No host setup is required when using the Linux memfd memory backend.

   * Use file-backed memory

     Configure the directory where the files backing the memory will be stored
     with the ``memory_backing_dir`` option in ``/etc/libvirt/qemu.conf``

     ::

       # This directory is used for memoryBacking source if configured as file.
       # NOTE: big files will be stored here
       memory_backing_dir = "/dev/shm/"

   * Use hugepage-backed memory

     Make sure there are enough huge pages allocated for the requested guest
     memory. For example, for one guest with 2 GiB of RAM backed by 2 MiB
     hugepages:

     ::

       # virsh allocpages 2M 1024

#. Specify the NUMA topology (this step is only required for the NUMA case)
   in the domain XML of the guest.
   For the simplest one-node topology for a guest with 2 GiB of RAM and 8
   vCPUs:

   ::

     <domain>
       ...
       <cpu ...>
         <numa>
           <cell id='0' cpus='0-7' memory='2' unit='GiB' memAccess='shared'/>
         </numa>
       </cpu>
       ...
     </domain>

   Note that the CPU element might already be specified and only one is
   allowed.

#. Specify the memory backend

   One of the following:

   * memfd memory

     ::

       <domain>
         ...
         <memoryBacking>
           <source type='memfd'/>
           <access mode='shared'/>
         </memoryBacking>
         ...
       </domain>

   * File-backed memory

     ::

       <domain>
         ...
         <memoryBacking>
           <access mode='shared'/>
         </memoryBacking>
         ...
       </domain>

     This will create a file in the directory specified in ``qemu.conf``

   * Hugepage-backed memory

     ::

       <domain>
         ...
         <memoryBacking>
           <hugepages>
             <page size='2' unit='M'/>
           </hugepages>
           <access mode='shared'/>
         </memoryBacking>
         ...
       </domain>