virtio-devices: Add Vdpa device
vDPA is a kernel framework introduced fairly recently to handle devices
whose datapath complies with the virtio specification, while their
control path is vendor specific. For the datapath, this means the
virtqueues are handled through DMA directly between the hardware and the
guest, while the control path goes through the vDPA framework,
eventually exposed through a vhost-vdpa device.
vDPA, like VFIO, aims at achieving bare-metal performance for devices
that are passed into a VM. Unlike VFIO, however, it provides a simpler
framework for supporting migration. Because the DMA accesses between the
device and the guest go through virtio queues, migration can be
achieved much more easily, and doesn't require each device driver to
implement migration support. In the VFIO case, each vendor is
expected to provide an implementation of the VFIO migration framework,
which makes things harder as it must be done for each and every device.
In summary, the point is to support migration for hardware devices
while still achieving bare-metal performance.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-03-07 14:34:44 +00:00
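For orientation, here is a minimal sketch (not part of this patch) of how the
VMM side could construct the device through the Vdpa::new() constructor added
below. The `guest_memory` handle and the device node path are illustrative
placeholders; the kernel typically exposes bound vDPA devices as
/dev/vhost-vdpa-N.

    // Sketch only: assumes a GuestMemoryAtomic<GuestMemoryMmap> handle named
    // `guest_memory` and a vDPA device already bound on the host.
    let vdpa = Vdpa::new(
        "_vdpa0".to_string(),  // device id, used for snapshot/restore
        "/dev/vhost-vdpa-0",   // vhost-vdpa character device (illustrative)
        guest_memory.clone(),  // guest memory for DMA mappings
        1,                     // number of virtqueues to expose
        None,                  // no saved state: cold boot
    )?;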
// Copyright © 2022 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
//

use crate::{
    ActivateError, ActivateResult, GuestMemoryMmap, VirtioCommon, VirtioDevice, VirtioInterrupt,
    VirtioInterruptType, DEVICE_ACKNOWLEDGE, DEVICE_DRIVER, DEVICE_DRIVER_OK, DEVICE_FEATURES_OK,
    VIRTIO_F_IOMMU_PLATFORM,
};
use anyhow::anyhow;
use std::{
    collections::BTreeMap,
    io, result,
    sync::{
        atomic::{AtomicBool, Ordering},
        Arc, Mutex,
    },
};
use thiserror::Error;
use versionize::{VersionMap, Versionize, VersionizeResult};
use versionize_derive::Versionize;
use vhost::{
    vdpa::{VhostVdpa, VhostVdpaIovaRange},
    vhost_kern::VhostKernFeatures,
    vhost_kern::{vdpa::VhostKernVdpa, vhost_binding::VHOST_BACKEND_F_SUSPEND},
    VhostBackend, VringConfigData,
};
use virtio_queue::{Descriptor, Queue, QueueT};
use vm_device::dma_mapping::ExternalDmaMapping;
use vm_memory::{GuestAddress, GuestAddressSpace, GuestMemory, GuestMemoryAtomic};
use vm_migration::{
    Migratable, MigratableError, Pausable, Snapshot, Snapshottable, Transportable, VersionMapped,
};
use vm_virtio::{AccessPlatform, Translatable};
use vmm_sys_util::eventfd::EventFd;

#[derive(Error, Debug)]
pub enum Error {
    #[error("Failed to create vhost-vdpa: {0}")]
    CreateVhostVdpa(vhost::Error),
    #[error("Failed to map DMA range: {0}")]
    DmaMap(vhost::Error),
    #[error("Failed to unmap DMA range: {0}")]
    DmaUnmap(vhost::Error),
    #[error("Failed to get address range")]
    GetAddressRange,
    #[error("Failed to get the available index from the virtio queue: {0}")]
    GetAvailableIndex(virtio_queue::Error),
    #[error("Failed to get virtio configuration size: {0}")]
    GetConfigSize(vhost::Error),
    #[error("Failed to get virtio device identifier: {0}")]
    GetDeviceId(vhost::Error),
    #[error("Failed to get backend specific features: {0}")]
    GetBackendFeatures(vhost::Error),
    #[error("Failed to get virtio features: {0}")]
    GetFeatures(vhost::Error),
    #[error("Failed to get the IOVA range: {0}")]
    GetIovaRange(vhost::Error),
    #[error("Failed to get queue size: {0}")]
    GetVringNum(vhost::Error),
    #[error("Invalid IOVA range: {0}-{1}")]
    InvalidIovaRange(u64, u64),
    #[error("Missing VIRTIO_F_ACCESS_PLATFORM feature")]
    MissingAccessPlatformVirtioFeature,
    #[error("Failed to reset owner: {0}")]
    ResetOwner(vhost::Error),
    #[error("Failed to set backend specific features: {0}")]
    SetBackendFeatures(vhost::Error),
    #[error("Failed to set backend configuration: {0}")]
    SetConfig(vhost::Error),
    #[error("Failed to set eventfd notifying about a configuration change: {0}")]
    SetConfigCall(vhost::Error),
    #[error("Failed to set virtio features: {0}")]
    SetFeatures(vhost::Error),
    #[error("Failed to set memory table: {0}")]
    SetMemTable(vhost::Error),
    #[error("Failed to set owner: {0}")]
    SetOwner(vhost::Error),
    #[error("Failed to set virtio status: {0}")]
    SetStatus(vhost::Error),
    #[error("Failed to set vring address: {0}")]
    SetVringAddr(vhost::Error),
    #[error("Failed to set vring base: {0}")]
    SetVringBase(vhost::Error),
    #[error("Failed to set vring eventfd when buffers are used: {0}")]
    SetVringCall(vhost::Error),
    #[error("Failed to enable/disable vring: {0}")]
    SetVringEnable(vhost::Error),
    #[error("Failed to set vring eventfd when new descriptors are available: {0}")]
    SetVringKick(vhost::Error),
    #[error("Failed to set vring size: {0}")]
    SetVringNum(vhost::Error),
}

pub type Result<T> = std::result::Result<T, Error>;

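// Snapshot of the device state captured when the VM is paused/snapshotted and
// replayed on restore; keeping this generic is what lets vDPA devices migrate
// without device-specific migration support.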
#[derive(Versionize)]
pub struct VdpaState {
    pub avail_features: u64,
    pub acked_features: u64,
    pub device_type: u32,
    pub iova_range_first: u64,
    pub iova_range_last: u64,
    pub config: Vec<u8>,
    pub queue_sizes: Vec<u16>,
    pub backend_features: u64,
}

impl VersionMapped for VdpaState {}

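// Virtio device backed by a kernel vhost-vdpa device. The hardware handles
// the datapath (virtqueues accessed through DMA), while this structure drives
// the control path over the vhost-vdpa character device.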
pub struct Vdpa {
    common: VirtioCommon,
    id: String,
    vhost: Option<VhostKernVdpa<GuestMemoryAtomic<GuestMemoryMmap>>>,
    iova_range: VhostVdpaIovaRange,
    enabled_queues: BTreeMap<usize, bool>,
    backend_features: u64,
    migrating: bool,
}

impl Vdpa {
    pub fn new(
        id: String,
        device_path: &str,
        mem: GuestMemoryAtomic<GuestMemoryMmap>,
        num_queues: u16,
        state: Option<VdpaState>,
    ) -> Result<Self> {
        let mut vhost = VhostKernVdpa::new(device_path, mem).map_err(Error::CreateVhostVdpa)?;
        vhost.set_owner().map_err(Error::SetOwner)?;

        let (
            device_type,
            avail_features,
            acked_features,
            queue_sizes,
            iova_range,
            backend_features,
            paused,
        ) = if let Some(state) = state {
            // Restore path: replay the features and device configuration
            // captured in the snapshot instead of renegotiating them.
            info!("Restoring vDPA {}", id);

            vhost.set_backend_features_acked(state.backend_features);
            vhost
                .set_config(0, state.config.as_slice())
                .map_err(Error::SetConfig)?;

            (
                state.device_type,
                state.avail_features,
                state.acked_features,
                state.queue_sizes,
                VhostVdpaIovaRange {
                    first: state.iova_range_first,
                    last: state.iova_range_last,
                },
                state.backend_features,
                true,
            )
        } else {
            // Cold boot: query the vhost-vdpa backend for its identity,
            // features and IOVA range.
            let device_type = vhost.get_device_id().map_err(Error::GetDeviceId)?;
            let queue_size = vhost.get_vring_num().map_err(Error::GetVringNum)?;
            let avail_features = vhost.get_features().map_err(Error::GetFeatures)?;
            let backend_features = vhost
                .get_backend_features()
                .map_err(Error::GetBackendFeatures)?;
            vhost.set_backend_features_acked(backend_features);

            let iova_range = vhost.get_iova_range().map_err(Error::GetIovaRange)?;

            if avail_features & (1u64 << VIRTIO_F_IOMMU_PLATFORM) == 0 {
                return Err(Error::MissingAccessPlatformVirtioFeature);
            }

            (
                device_type,
                avail_features,
                0,
                vec![queue_size; num_queues as usize],
                iova_range,
                backend_features,
                false,
            )
        };

        Ok(Vdpa {
            common: VirtioCommon {
                device_type,
                queue_sizes,
                avail_features,
                acked_features,
                min_queues: num_queues,
                paused: Arc::new(AtomicBool::new(paused)),
                ..Default::default()
            },
            id,
            vhost: Some(vhost),
            iova_range,
            enabled_queues: BTreeMap::new(),
            backend_features,
            migrating: false,
        })
    }
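
    // Flip every known vring on or off through the backend, tracking the last
    // state applied so redundant set_vring_enable() calls are skipped.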
    fn enable_vrings(&mut self, enable: bool) -> Result<()> {
        assert!(self.vhost.is_some());

        for (queue_index, enabled) in self.enabled_queues.iter_mut() {
            if *enabled != enable {
                self.vhost
                    .as_ref()
                    .unwrap()
                    .set_vring_enable(*queue_index, enable)
                    .map_err(Error::SetVringEnable)?;
                *enabled = enable;
            }
        }

        Ok(())
    }
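
    // Program the negotiated features and the guest's virtqueue layout into
    // the vhost-vdpa backend, translating guest addresses through the
    // AccessPlatform when a vIOMMU sits in front of the device.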
|
|
|
|
|
|
|
|
fn activate_vdpa(
|
|
|
|
&mut self,
|
2022-07-06 14:08:08 +00:00
|
|
|
mem: &GuestMemoryMmap,
|
virtio-devices: Add Vdpa device
vDPA is a kernel framework introduced fairly recently in order to handle
devices complying with virtio specification on their datapath, while the
control path is vendor specific. For the datapath, that means the
virtqueues are handled through DMA directly between the hardware and the
guest, while the control path goes through the vDPA framework,
eventually exposed through a vhost-vdpa device.
vDPA, like VFIO, aims at achieving baremetal performance for devices
that are passed into a VM. But unlike VFIO, it provides a simpler/better
framework for achieving migration. Because the DMA accesses between the
device and the guest are going through virtio queues, migration can be
achieved way more easily, and doesn't require each device driver to
implement the migration support. In the VFIO case, each vendor is
expected to provide an implementation of the VFIO migration framework,
which makes things harder as it must be done for each and every device.
So to summarize the point is to support migration for hardware devices
through which we can achieve baremetal performances.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-03-07 14:34:44 +00:00
|
|
|
virtio_interrupt: &Arc<dyn VirtioInterrupt>,
|
2022-07-06 14:08:08 +00:00
|
|
|
queues: Vec<(usize, Queue, EventFd)>,
|
virtio-devices: Add Vdpa device
vDPA is a kernel framework introduced fairly recently in order to handle
devices complying with virtio specification on their datapath, while the
control path is vendor specific. For the datapath, that means the
virtqueues are handled through DMA directly between the hardware and the
guest, while the control path goes through the vDPA framework,
eventually exposed through a vhost-vdpa device.
vDPA, like VFIO, aims at achieving baremetal performance for devices
that are passed into a VM. But unlike VFIO, it provides a simpler/better
framework for achieving migration. Because the DMA accesses between the
device and the guest are going through virtio queues, migration can be
achieved way more easily, and doesn't require each device driver to
implement the migration support. In the VFIO case, each vendor is
expected to provide an implementation of the VFIO migration framework,
which makes things harder as it must be done for each and every device.
So to summarize the point is to support migration for hardware devices
through which we can achieve baremetal performances.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-03-07 14:34:44 +00:00
|
|
|
) -> Result<()> {
|
2022-10-12 09:23:16 +00:00
|
|
|
assert!(self.vhost.is_some());
|
virtio-devices: Add Vdpa device
vDPA is a kernel framework introduced fairly recently in order to handle
devices complying with virtio specification on their datapath, while the
control path is vendor specific. For the datapath, that means the
virtqueues are handled through DMA directly between the hardware and the
guest, while the control path goes through the vDPA framework,
eventually exposed through a vhost-vdpa device.
vDPA, like VFIO, aims at achieving baremetal performance for devices
that are passed into a VM. But unlike VFIO, it provides a simpler/better
framework for achieving migration. Because the DMA accesses between the
device and the guest are going through virtio queues, migration can be
achieved way more easily, and doesn't require each device driver to
implement the migration support. In the VFIO case, each vendor is
expected to provide an implementation of the VFIO migration framework,
which makes things harder as it must be done for each and every device.
So to summarize the point is to support migration for hardware devices
through which we can achieve baremetal performances.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-03-07 14:34:44 +00:00
|
|
|
self.vhost
|
2022-10-12 09:13:39 +00:00
|
|
|
.as_ref()
|
|
|
|
.unwrap()
|
virtio-devices: Add Vdpa device
vDPA is a kernel framework introduced fairly recently in order to handle
devices complying with virtio specification on their datapath, while the
control path is vendor specific. For the datapath, that means the
virtqueues are handled through DMA directly between the hardware and the
guest, while the control path goes through the vDPA framework,
eventually exposed through a vhost-vdpa device.
vDPA, like VFIO, aims at achieving baremetal performance for devices
that are passed into a VM. But unlike VFIO, it provides a simpler/better
framework for achieving migration. Because the DMA accesses between the
device and the guest are going through virtio queues, migration can be
achieved way more easily, and doesn't require each device driver to
implement the migration support. In the VFIO case, each vendor is
expected to provide an implementation of the VFIO migration framework,
which makes things harder as it must be done for each and every device.
So to summarize the point is to support migration for hardware devices
through which we can achieve baremetal performances.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-03-07 14:34:44 +00:00
|
|
|
.set_features(self.common.acked_features)
|
|
|
|
.map_err(Error::SetFeatures)?;
|
|
|
|
self.vhost
|
2022-10-12 09:13:39 +00:00
|
|
|
.as_mut()
|
|
|
|
.unwrap()
|
virtio-devices: Add Vdpa device
vDPA is a kernel framework introduced fairly recently in order to handle
devices complying with virtio specification on their datapath, while the
control path is vendor specific. For the datapath, that means the
virtqueues are handled through DMA directly between the hardware and the
guest, while the control path goes through the vDPA framework,
eventually exposed through a vhost-vdpa device.
vDPA, like VFIO, aims at achieving baremetal performance for devices
that are passed into a VM. But unlike VFIO, it provides a simpler/better
framework for achieving migration. Because the DMA accesses between the
device and the guest are going through virtio queues, migration can be
achieved way more easily, and doesn't require each device driver to
implement the migration support. In the VFIO case, each vendor is
expected to provide an implementation of the VFIO migration framework,
which makes things harder as it must be done for each and every device.
So to summarize the point is to support migration for hardware devices
through which we can achieve baremetal performances.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-03-07 14:34:44 +00:00
|
|
|
.set_backend_features(self.backend_features)
|
|
|
|
.map_err(Error::SetBackendFeatures)?;
|
|
|
|
|
2022-07-20 14:45:49 +00:00
|
|
|
for (queue_index, queue, queue_evt) in queues.iter() {
|
virtio-devices: Add Vdpa device
vDPA is a kernel framework introduced fairly recently in order to handle
devices complying with virtio specification on their datapath, while the
control path is vendor specific. For the datapath, that means the
virtqueues are handled through DMA directly between the hardware and the
guest, while the control path goes through the vDPA framework,
eventually exposed through a vhost-vdpa device.
vDPA, like VFIO, aims at achieving baremetal performance for devices
that are passed into a VM. But unlike VFIO, it provides a simpler/better
framework for achieving migration. Because the DMA accesses between the
device and the guest are going through virtio queues, migration can be
achieved way more easily, and doesn't require each device driver to
implement the migration support. In the VFIO case, each vendor is
expected to provide an implementation of the VFIO migration framework,
which makes things harder as it must be done for each and every device.
So to summarize the point is to support migration for hardware devices
through which we can achieve baremetal performances.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-03-07 14:34:44 +00:00
|
|
|
let queue_max_size = queue.max_size();
|
2022-07-06 14:08:08 +00:00
|
|
|
let queue_size = queue.size();
|
virtio-devices: Add Vdpa device
vDPA is a kernel framework introduced fairly recently in order to handle
devices complying with virtio specification on their datapath, while the
control path is vendor specific. For the datapath, that means the
virtqueues are handled through DMA directly between the hardware and the
guest, while the control path goes through the vDPA framework,
eventually exposed through a vhost-vdpa device.
vDPA, like VFIO, aims at achieving baremetal performance for devices
that are passed into a VM. But unlike VFIO, it provides a simpler/better
framework for achieving migration. Because the DMA accesses between the
device and the guest are going through virtio queues, migration can be
achieved way more easily, and doesn't require each device driver to
implement the migration support. In the VFIO case, each vendor is
expected to provide an implementation of the VFIO migration framework,
which makes things harder as it must be done for each and every device.
So to summarize the point is to support migration for hardware devices
through which we can achieve baremetal performances.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-03-07 14:34:44 +00:00
|
|
|
self.vhost
|
2022-10-12 09:13:39 +00:00
|
|
|
.as_ref()
|
|
|
|
.unwrap()
|
2022-07-20 14:45:49 +00:00
|
|
|
.set_vring_num(*queue_index, queue_size)
|
virtio-devices: Add Vdpa device
vDPA is a kernel framework introduced fairly recently in order to handle
devices complying with virtio specification on their datapath, while the
control path is vendor specific. For the datapath, that means the
virtqueues are handled through DMA directly between the hardware and the
guest, while the control path goes through the vDPA framework,
eventually exposed through a vhost-vdpa device.
vDPA, like VFIO, aims at achieving baremetal performance for devices
that are passed into a VM. But unlike VFIO, it provides a simpler/better
framework for achieving migration. Because the DMA accesses between the
device and the guest are going through virtio queues, migration can be
achieved way more easily, and doesn't require each device driver to
implement the migration support. In the VFIO case, each vendor is
expected to provide an implementation of the VFIO migration framework,
which makes things harder as it must be done for each and every device.
So to summarize the point is to support migration for hardware devices
through which we can achieve baremetal performances.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-03-07 14:34:44 +00:00
|
|
|
.map_err(Error::SetVringNum)?;
|
|
|
|
|
|
|
|
let config_data = VringConfigData {
|
|
|
|
queue_max_size,
|
|
|
|
queue_size,
|
|
|
|
flags: 0u32,
|
2022-07-06 14:08:08 +00:00
|
|
|
desc_table_addr: queue.desc_table().translate_gpa(
|
|
|
|
self.common.access_platform.as_ref(),
|
|
|
|
queue_size as usize * std::mem::size_of::<Descriptor>(),
|
|
|
|
),
|
|
|
|
used_ring_addr: queue.used_ring().translate_gpa(
|
|
|
|
self.common.access_platform.as_ref(),
|
|
|
|
4 + queue_size as usize * 8,
|
|
|
|
),
|
|
|
|
avail_ring_addr: queue.avail_ring().translate_gpa(
|
|
|
|
self.common.access_platform.as_ref(),
|
|
|
|
4 + queue_size as usize * 2,
|
|
|
|
),
|
virtio-devices: Add Vdpa device
vDPA is a kernel framework introduced fairly recently in order to handle
devices complying with virtio specification on their datapath, while the
control path is vendor specific. For the datapath, that means the
virtqueues are handled through DMA directly between the hardware and the
guest, while the control path goes through the vDPA framework,
eventually exposed through a vhost-vdpa device.
vDPA, like VFIO, aims at achieving baremetal performance for devices
that are passed into a VM. But unlike VFIO, it provides a simpler/better
framework for achieving migration. Because the DMA accesses between the
device and the guest are going through virtio queues, migration can be
achieved way more easily, and doesn't require each device driver to
implement the migration support. In the VFIO case, each vendor is
expected to provide an implementation of the VFIO migration framework,
which makes things harder as it must be done for each and every device.
So to summarize the point is to support migration for hardware devices
through which we can achieve baremetal performances.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-03-07 14:34:44 +00:00
|
|
|
log_addr: None,
|
|
|
|
};
|
|
|
|
|
|
|
|
self.vhost
|
2022-10-12 09:13:39 +00:00
|
|
|
.as_ref()
|
|
|
|
.unwrap()
|
2022-07-20 14:45:49 +00:00
|
|
|
.set_vring_addr(*queue_index, &config_data)
|
virtio-devices: Add Vdpa device
vDPA is a kernel framework introduced fairly recently in order to handle
devices complying with virtio specification on their datapath, while the
control path is vendor specific. For the datapath, that means the
virtqueues are handled through DMA directly between the hardware and the
guest, while the control path goes through the vDPA framework,
eventually exposed through a vhost-vdpa device.
vDPA, like VFIO, aims at achieving baremetal performance for devices
that are passed into a VM. But unlike VFIO, it provides a simpler/better
framework for achieving migration. Because the DMA accesses between the
device and the guest are going through virtio queues, migration can be
achieved way more easily, and doesn't require each device driver to
implement the migration support. In the VFIO case, each vendor is
expected to provide an implementation of the VFIO migration framework,
which makes things harder as it must be done for each and every device.
So to summarize the point is to support migration for hardware devices
through which we can achieve baremetal performances.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-03-07 14:34:44 +00:00
|
|
|
.map_err(Error::SetVringAddr)?;
|
|
|
|
self.vhost
|
2022-10-12 09:13:39 +00:00
|
|
|
.as_ref()
|
|
|
|
.unwrap()
|
virtio-devices: Add Vdpa device
vDPA is a kernel framework introduced fairly recently in order to handle
devices complying with virtio specification on their datapath, while the
control path is vendor specific. For the datapath, that means the
virtqueues are handled through DMA directly between the hardware and the
guest, while the control path goes through the vDPA framework,
eventually exposed through a vhost-vdpa device.
vDPA, like VFIO, aims at achieving baremetal performance for devices
that are passed into a VM. But unlike VFIO, it provides a simpler/better
framework for achieving migration. Because the DMA accesses between the
device and the guest are going through virtio queues, migration can be
achieved way more easily, and doesn't require each device driver to
implement the migration support. In the VFIO case, each vendor is
expected to provide an implementation of the VFIO migration framework,
which makes things harder as it must be done for each and every device.
So to summarize the point is to support migration for hardware devices
through which we can achieve baremetal performances.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-03-07 14:34:44 +00:00
|
|
|
.set_vring_base(
|
2022-07-20 14:45:49 +00:00
|
|
|
*queue_index,
|
virtio-devices: Add Vdpa device
vDPA is a kernel framework introduced fairly recently in order to handle
devices complying with virtio specification on their datapath, while the
control path is vendor specific. For the datapath, that means the
virtqueues are handled through DMA directly between the hardware and the
guest, while the control path goes through the vDPA framework,
eventually exposed through a vhost-vdpa device.
vDPA, like VFIO, aims at achieving baremetal performance for devices
that are passed into a VM. But unlike VFIO, it provides a simpler/better
framework for achieving migration. Because the DMA accesses between the
device and the guest are going through virtio queues, migration can be
achieved way more easily, and doesn't require each device driver to
implement the migration support. In the VFIO case, each vendor is
expected to provide an implementation of the VFIO migration framework,
which makes things harder as it must be done for each and every device.
So to summarize the point is to support migration for hardware devices
through which we can achieve baremetal performances.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-03-07 14:34:44 +00:00
|
|
|
queue
|
2022-07-06 14:08:08 +00:00
|
|
|
.avail_idx(mem, Ordering::Acquire)
|
virtio-devices: Add Vdpa device
vDPA is a kernel framework introduced fairly recently in order to handle
devices complying with virtio specification on their datapath, while the
control path is vendor specific. For the datapath, that means the
virtqueues are handled through DMA directly between the hardware and the
guest, while the control path goes through the vDPA framework,
eventually exposed through a vhost-vdpa device.
vDPA, like VFIO, aims at achieving baremetal performance for devices
that are passed into a VM. But unlike VFIO, it provides a simpler/better
framework for achieving migration. Because the DMA accesses between the
device and the guest are going through virtio queues, migration can be
achieved way more easily, and doesn't require each device driver to
implement the migration support. In the VFIO case, each vendor is
expected to provide an implementation of the VFIO migration framework,
which makes things harder as it must be done for each and every device.
So to summarize the point is to support migration for hardware devices
through which we can achieve baremetal performances.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
2022-03-07 14:34:44 +00:00
|
|
|
                        .map_err(Error::GetAvailableIndex)?
                        .0,
                )
                .map_err(Error::SetVringBase)?;

            if let Some(eventfd) =
                virtio_interrupt.notifier(VirtioInterruptType::Queue(*queue_index as u16))
            {
                self.vhost
                    .as_ref()
                    .unwrap()
                    .set_vring_call(*queue_index, &eventfd)
                    .map_err(Error::SetVringCall)?;
            }

            self.vhost
                .as_ref()
                .unwrap()
                .set_vring_kick(*queue_index, queue_evt)
                .map_err(Error::SetVringKick)?;

            self.enabled_queues.insert(*queue_index, false);
        }

        // Set up the config eventfd if there is one
        if let Some(eventfd) = virtio_interrupt.notifier(VirtioInterruptType::Config) {
            self.vhost
                .as_ref()
                .unwrap()
                .set_config_call(&eventfd)
                .map_err(Error::SetConfigCall)?;
        }

        self.enable_vrings(true)?;

        // Tell the backend the driver handshake is complete so it can start
        // processing the virtqueues.
        self.vhost
            .as_ref()
            .unwrap()
            .set_status(
                (DEVICE_ACKNOWLEDGE | DEVICE_DRIVER | DEVICE_DRIVER_OK | DEVICE_FEATURES_OK) as u8,
            )
            .map_err(Error::SetStatus)
    }
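
    /// Disable every vring and clear the device status, bringing the
    /// vhost-vdpa backend back to its initial, non-running state.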
    fn reset_vdpa(&mut self) -> Result<()> {
        self.enable_vrings(false)?;

        assert!(self.vhost.is_some());
        self.vhost
            .as_ref()
            .unwrap()
            .set_status(0)
            .map_err(Error::SetStatus)
    }
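
    /// Map the guest IOVA range [iova, iova + size - 1] to `host_vaddr`
    /// through the vhost-vdpa backend, after checking it fits within the
    /// IOVA range advertised by the device.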
    fn dma_map(
        &mut self,
        iova: u64,
        size: u64,
        host_vaddr: *const u8,
        readonly: bool,
    ) -> Result<()> {
        let iova_last = iova + size - 1;
        if iova < self.iova_range.first || iova_last > self.iova_range.last {
            return Err(Error::InvalidIovaRange(iova, iova_last));
        }

        assert!(self.vhost.is_some());
        self.vhost
            .as_ref()
            .unwrap()
            .dma_map(iova, size, host_vaddr, readonly)
            .map_err(Error::DmaMap)
    }
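
    /// Remove the mapping for the guest IOVA range [iova, iova + size - 1],
    /// applying the same bounds check as `dma_map()`.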
    fn dma_unmap(&self, iova: u64, size: u64) -> Result<()> {
        let iova_last = iova + size - 1;
        if iova < self.iova_range.first || iova_last > self.iova_range.last {
            return Err(Error::InvalidIovaRange(iova, iova_last));
        }

        assert!(self.vhost.is_some());
        self.vhost
            .as_ref()
            .unwrap()
            .dma_unmap(iova, size)
            .map_err(Error::DmaUnmap)
    }
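
    /// Collect the migratable device state: negotiated features, queue
    /// layout, IOVA range and a fresh copy of the virtio config space.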
    fn state(&self) -> Result<VdpaState> {
        assert!(self.vhost.is_some());
        let config_size = self
            .vhost
            .as_ref()
            .unwrap()
            .get_config_size()
            .map_err(Error::GetConfigSize)?;
        let mut config = vec![0; config_size as usize];
        self.read_config(0, config.as_mut_slice());

        Ok(VdpaState {
            avail_features: self.common.avail_features,
            acked_features: self.common.acked_features,
            device_type: self.common.device_type,
            queue_sizes: self.common.queue_sizes.clone(),
            iova_range_first: self.iova_range.first,
            iova_range_last: self.iova_range.last,
            config,
            backend_features: self.backend_features,
        })
    }
}
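
// `VirtioDevice` is the trait the virtio transport layer drives the device
// through; for vDPA most callbacks are thin forwards to either the shared
// `VirtioCommon` state or the vhost-vdpa backend.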
impl VirtioDevice for Vdpa {
    fn device_type(&self) -> u32 {
        self.common.device_type
    }

    fn queue_max_sizes(&self) -> &[u16] {
        &self.common.queue_sizes
    }

    fn features(&self) -> u64 {
        self.common.avail_features
    }

    fn ack_features(&mut self, value: u64) {
        self.common.ack_features(value)
    }

    // Config space accesses are not emulated here: they are forwarded
    // verbatim to the vhost-vdpa backend, which owns the device config.
    fn read_config(&self, offset: u64, data: &mut [u8]) {
        assert!(self.vhost.is_some());
        if let Err(e) = self.vhost.as_ref().unwrap().get_config(offset as u32, data) {
            error!("Failed reading virtio config: {}", e);
        }
    }

    fn write_config(&mut self, offset: u64, data: &[u8]) {
        assert!(self.vhost.is_some());
        if let Err(e) = self.vhost.as_ref().unwrap().set_config(offset as u32, data) {
            error!("Failed writing virtio config: {}", e);
        }
    }

    fn activate(
        &mut self,
        mem: GuestMemoryAtomic<GuestMemoryMmap>,
        virtio_interrupt: Arc<dyn VirtioInterrupt>,
        queues: Vec<(usize, Queue, EventFd)>,
    ) -> ActivateResult {
        self.activate_vdpa(&mem.memory(), &virtio_interrupt, queues)
            .map_err(ActivateError::ActivateVdpa)?;

        // Store the virtio interrupt handler as we need to return it on reset
        self.common.interrupt_cb = Some(virtio_interrupt);

        event!("vdpa", "activated", "id", &self.id);
        Ok(())
    }
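
    /// Reset the device and give the stored interrupt handler back to the
    /// transport layer, which reclaims it on reset.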
    fn reset(&mut self) -> Option<Arc<dyn VirtioInterrupt>> {
        if let Err(e) = self.reset_vdpa() {
            error!("Failed to reset vhost-vdpa: {:?}", e);
            return None;
        }

        event!("vdpa", "reset", "id", &self.id);

        // Return the virtio interrupt handler
        self.common.interrupt_cb.take()
    }

    fn set_access_platform(&mut self, access_platform: Arc<dyn AccessPlatform>) {
        self.common.set_access_platform(access_platform)
    }
}
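
// The datapath of a vDPA device runs in hardware, so there is nothing to
// quiesce from software: pause and resume are only accepted while a live
// migration (which suspends the device) is in progress.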
impl Pausable for Vdpa {
    fn pause(&mut self) -> std::result::Result<(), MigratableError> {
        if !self.migrating {
            Err(MigratableError::Pause(anyhow!(
                "Can't pause a vDPA device outside live migration"
            )))
        } else {
            Ok(())
        }
    }

    fn resume(&mut self) -> std::result::Result<(), MigratableError> {
        if !self.migrating {
            Err(MigratableError::Resume(anyhow!(
                "Can't resume a vDPA device outside live migration"
            )))
        } else {
            Ok(())
        }
    }
}
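
// Snapshotting is only permitted during live migration, and the vhost handle
// is deliberately dropped once the state has been captured so the underlying
// vDPA file gets closed and can be reopened by a destination VM running on
// the same host.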
impl Snapshottable for Vdpa {
    fn id(&self) -> String {
        self.id.clone()
    }

    fn snapshot(&mut self) -> std::result::Result<Snapshot, MigratableError> {
        if !self.migrating {
            return Err(MigratableError::Snapshot(anyhow!(
                "Can't snapshot a vDPA device outside live migration"
            )));
        }

        let snapshot = Snapshot::new_from_versioned_state(&self.state().map_err(|e| {
            MigratableError::Snapshot(anyhow!("Error snapshotting vDPA device: {:?}", e))
        })?)?;

        // Force the vhost handler to be dropped in order to close the vDPA
        // file. This will ensure the device can be accessed if the VM is
        // migrated on the same host machine.
        self.vhost.take();

        Ok(snapshot)
    }
}

impl Transportable for Vdpa {}

impl Migratable for Vdpa {
    fn start_migration(&mut self) -> std::result::Result<(), MigratableError> {
        self.migrating = true;
        // Given there's no way to track dirty pages, we must suspend the
        // device as soon as the migration process starts.
        if self.backend_features & (1 << VHOST_BACKEND_F_SUSPEND) != 0 {
            assert!(self.vhost.is_some());
            self.vhost.as_ref().unwrap().suspend().map_err(|e| {
                MigratableError::StartMigration(anyhow!("Error suspending vDPA device: {:?}", e))
            })
        } else {
            Err(MigratableError::StartMigration(anyhow!(
                "vDPA device can't be suspended"
            )))
        }
    }

    fn complete_migration(&mut self) -> std::result::Result<(), MigratableError> {
        self.migrating = false;
        Ok(())
    }
}
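
/// Adapter exposing the vDPA device's DMA (un)mapping operations through the
/// `ExternalDmaMapping` trait, translating guest physical addresses into host
/// virtual addresses using the provided guest address space.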
pub struct VdpaDmaMapping<M: GuestAddressSpace> {
    device: Arc<Mutex<Vdpa>>,
    memory: Arc<M>,
}

impl<M: GuestAddressSpace> VdpaDmaMapping<M> {
    pub fn new(device: Arc<Mutex<Vdpa>>, memory: Arc<M>) -> Self {
        Self { device, memory }
    }
}
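
// A minimal usage sketch, with hypothetical names: `vdpa_device` and
// `guest_memory` stand in for the device and guest address space the VMM
// creates elsewhere. The resulting trait object can then be handed to the
// IOMMU emulation, which invokes `map()`/`unmap()` as the guest programs
// its DMA mappings.
//
//     let mapping: Arc<dyn ExternalDmaMapping> =
//         Arc::new(VdpaDmaMapping::new(Arc::clone(&vdpa_device), guest_memory));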

impl<M: GuestAddressSpace + Sync + Send> ExternalDmaMapping for VdpaDmaMapping<M> {
    fn map(&self, iova: u64, gpa: u64, size: u64) -> result::Result<(), io::Error> {
        let mem = self.memory.memory();
        let guest_addr = GuestAddress(gpa);
        let user_addr = if mem.check_range(guest_addr, size as usize) {
            mem.get_host_address(guest_addr).unwrap() as *const u8
        } else {
            return Err(io::Error::new(
                io::ErrorKind::Other,
                format!(
                    "failed to convert guest address 0x{gpa:x} into \
                     host user virtual address"
                ),
            ));
        };

        debug!(
            "DMA map iova 0x{:x}, gpa 0x{:x}, size 0x{:x}, host_addr 0x{:x}",
            iova, gpa, size, user_addr as u64
        );
        self.device
            .lock()
            .unwrap()
            .dma_map(iova, size, user_addr, false)
            .map_err(|e| {
                io::Error::new(
                    io::ErrorKind::Other,
                    format!(
                        "failed to map memory for vDPA device, \
                         iova 0x{iova:x}, gpa 0x{gpa:x}, size 0x{size:x}: {e:?}"
                    ),
                )
            })
    }

    fn unmap(&self, iova: u64, size: u64) -> std::result::Result<(), std::io::Error> {
        debug!("DMA unmap iova 0x{:x} size 0x{:x}", iova, size);
        self.device
            .lock()
            .unwrap()
            .dma_unmap(iova, size)
            .map_err(|e| {
                io::Error::new(
                    io::ErrorKind::Other,
                    format!(
                        "failed to unmap memory for vDPA device, \
                         iova 0x{iova:x}, size 0x{size:x}: {e:?}"
                    ),
                )
            })
    }
}