virtio-devices: fix broken vsock doc comments

These need to be //! comments, because they apply to the module as a
whole, not to whatever directly follows the comment.  Using ///
comments here resulted in documentation being attached to the wrong
thing, or not rendered at all.
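For reference, the difference boils down to this (the function name `f` below is only illustrative):

```rust
//! An inner doc comment: it documents the enclosing module (or crate),
//! which is what these vsock module headers need.

/// An outer doc comment: it documents the item that follows it,
/// here the illustrative function `f`.
pub fn f() {}
```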

I've also checked the Markdown formatting of these comments as
rendered by rustdoc, and fixed it where appropriate.

Signed-off-by: Alyssa Ross <hi@alyssa.is>
Author: Alyssa Ross, 2023-04-04 16:19:39 +00:00 (committed by Bo Chen)
commit f6236087d8, parent 95f83320b1
7 changed files with 118 additions and 110 deletions


@@ -1,28 +1,28 @@
// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
//
/// The main job of `VsockConnection` is to forward data traffic, back and forth, between a
/// guest-side AF_VSOCK socket and a host-side generic `Read + Write + AsRawFd` stream, while
/// also managing its internal state.
/// To that end, `VsockConnection` implements:
/// - `VsockChannel` for:
/// - moving data from the host stream to a guest-provided RX buffer, via `recv_pkt()`; and
/// - moving data from a guest-provided TX buffer to the host stream, via `send_pkt()`; and
/// - updating its internal state, by absorbing control packets (anything other than
/// VSOCK_OP_RW).
/// - `VsockEpollListener` for getting notified about the availability of data or free buffer
/// space at the host stream.
///
/// Note: there is a certain asymmetry to the RX and TX data flows:
/// - RX transfers do not need any data buffering, since data is read straight from the
/// host stream and into the guest-provided RX buffer;
/// - TX transfers may require some data to be buffered by `VsockConnection`, if the host
/// peer can't keep up with reading the data that we're writing. This is because, once
/// the guest driver provides some data in a virtio TX buffer, the vsock device must
/// consume it. If that data can't be forwarded straight to the host stream, we'll
/// have to store it in a buffer (and flush it at a later time). Vsock flow control
/// ensures that our TX buffer doesn't overflow.
///
//! The main job of `VsockConnection` is to forward data traffic, back and forth, between a
//! guest-side AF_VSOCK socket and a host-side generic `Read + Write + AsRawFd` stream, while
//! also managing its internal state.
//! To that end, `VsockConnection` implements:
//! - `VsockChannel` for:
//! - moving data from the host stream to a guest-provided RX buffer, via `recv_pkt()`; and
//! - moving data from a guest-provided TX buffer to the host stream, via `send_pkt()`; and
//! - updating its internal state, by absorbing control packets (anything other than
//! VSOCK_OP_RW).
//! - `VsockEpollListener` for getting notified about the availability of data or free buffer
//! space at the host stream.
//!
//! Note: there is a certain asymmetry to the RX and TX data flows:
//! - RX transfers do not need any data buffering, since data is read straight from the
//! host stream and into the guest-provided RX buffer;
//! - TX transfers may require some data to be buffered by `VsockConnection`, if the host
//! peer can't keep up with reading the data that we're writing. This is because, once
//! the guest driver provides some data in a virtio TX buffer, the vsock device must
//! consume it. If that data can't be forwarded straight to the host stream, we'll
//! have to store it in a buffer (and flush it at a later time). Vsock flow control
//! ensures that our TX buffer doesn't overflow.
//
// The code in this file is best read with a fresh memory of the vsock protocol inner-workings.
// To help with that, here is a
//
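To make the RX/TX asymmetry described above more concrete, here is a minimal, hypothetical sketch; it is not the crate's actual `VsockConnection` or `VsockChannel` API, and the names and signatures are simplified:

```rust
// Hypothetical illustration only; the real `VsockConnection` works on vsock
// packets and implements the crate's `VsockChannel`/`VsockEpollListener` traits.
use std::collections::VecDeque;
use std::io::{self, Read, Write};

struct ToyConnection<S: Read + Write> {
    /// Host-side stream (e.g. a Unix socket).
    stream: S,
    /// TX data the host peer could not yet accept; flushed later.
    tx_buf: VecDeque<u8>,
}

impl<S: Read + Write> ToyConnection<S> {
    /// RX path: data goes straight from the host stream into the
    /// guest-provided buffer, so no intermediate buffering is needed.
    fn recv_pkt(&mut self, guest_rx_buf: &mut [u8]) -> io::Result<usize> {
        self.stream.read(guest_rx_buf)
    }

    /// TX path: the guest data must be consumed now; anything the host
    /// stream cannot take is buffered for a later flush.
    fn send_pkt(&mut self, guest_tx_buf: &[u8]) -> io::Result<()> {
        match self.stream.write(guest_tx_buf) {
            Ok(n) => self.tx_buf.extend(&guest_tx_buf[n..]),
            Err(e) if e.kind() == io::ErrorKind::WouldBlock => {
                self.tx_buf.extend(guest_tx_buf)
            }
            Err(e) => return Err(e),
        }
        Ok(())
    }
}
```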


@@ -1,9 +1,9 @@
// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
//
/// This module implements our vsock connection state machine. The heavy lifting is done by
/// `connection::VsockConnection`, while this file only defines some constants and helper structs.
///
//! This module implements our vsock connection state machine. The heavy lifting is done by
//! `connection::VsockConnection`, while this file only defines some constants and helper structs.
mod connection;
mod txbuf;


@@ -2,19 +2,19 @@
// SPDX-License-Identifier: Apache-2.0
//
/// `VsockPacket` provides a thin wrapper over the buffers exchanged via virtio queues.
/// There are two components to a vsock packet, each using its own descriptor in a
/// virtio queue:
/// - the packet header; and
/// - the packet data/buffer.
/// There is a 1:1 relation between descriptor chains and packets: the first (chain head) holds
/// the header, and an optional second descriptor holds the data. The second descriptor is only
/// present for data packets (VSOCK_OP_RW).
///
/// `VsockPacket` wraps these two buffers and provides direct access to the data stored
/// in guest memory. This is done to avoid unnecessarily copying data from guest memory
/// to temporary buffers, before passing it on to the vsock backend.
///
//! `VsockPacket` provides a thin wrapper over the buffers exchanged via virtio queues.
//! There are two components to a vsock packet, each using its own descriptor in a
//! virtio queue:
//! - the packet header; and
//! - the packet data/buffer.
//! There is a 1:1 relation between descriptor chains and packets: the first (chain head) holds
//! the header, and an optional second descriptor holds the data. The second descriptor is only
//! present for data packets (VSOCK_OP_RW).
//!
//! `VsockPacket` wraps these two buffers and provides direct access to the data stored
//! in guest memory. This is done to avoid unnecessarily copying data from guest memory
//! to temporary buffers, before passing it on to the vsock backend.
use byteorder::{ByteOrder, LittleEndian};
use std::ops::Deref;
use std::sync::Arc;
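As a rough illustration of the header/data split described above, this hypothetical snippet reads a few header fields at their virtio-vsock offsets; the real `VsockPacket` maps these buffers straight out of guest memory via the descriptor chain instead of borrowing plain slices:

```rust
// Hypothetical illustration only; offsets follow struct virtio_vsock_hdr from
// the virtio specification, but the wrapper itself is not the crate's API.
use byteorder::{ByteOrder, LittleEndian};

const HDROFF_SRC_PORT: usize = 16;
const HDROFF_DST_PORT: usize = 20;
const HDROFF_LEN: usize = 24;
const HDROFF_OP: usize = 30;

struct ToyPacket<'a> {
    /// Buffer described by the chain head descriptor: the packet header.
    hdr: &'a [u8],
    /// Buffer described by the optional second descriptor (data packets only).
    data: Option<&'a [u8]>,
}

impl<'a> ToyPacket<'a> {
    fn src_port(&self) -> u32 {
        LittleEndian::read_u32(&self.hdr[HDROFF_SRC_PORT..])
    }
    fn dst_port(&self) -> u32 {
        LittleEndian::read_u32(&self.hdr[HDROFF_DST_PORT..])
    }
    fn len(&self) -> u32 {
        LittleEndian::read_u32(&self.hdr[HDROFF_LEN..])
    }
    fn op(&self) -> u16 {
        LittleEndian::read_u16(&self.hdr[HDROFF_OP..])
    }
}
```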


@@ -2,12 +2,13 @@
// SPDX-License-Identifier: Apache-2.0
//
/// This module implements the Unix Domain Sockets backend for vsock - a mediator between
/// guest-side AF_VSOCK sockets and host-side AF_UNIX sockets. The heavy lifting is performed by
/// `muxer::VsockMuxer`, a connection multiplexer that uses `super::csm::VsockConnection` for
/// handling vsock connection states.
/// Check out `muxer.rs` for a more detailed explanation of the inner workings of this backend.
///
//! This module implements the Unix Domain Sockets backend for vsock - a mediator between
//! guest-side AF_VSOCK sockets and host-side AF_UNIX sockets. The heavy lifting is performed by
//! `muxer::VsockMuxer`, a connection multiplexer that uses `super::csm::VsockConnection` for
//! handling vsock connection states.
//!
//! Check out `muxer.rs` for a more detailed explanation of the inner workings of this backend.
mod muxer;
mod muxer_killq;
mod muxer_rxq;


@@ -2,35 +2,42 @@
// SPDX-License-Identifier: Apache-2.0
//
/// `VsockMuxer` is the device-facing component of the Unix domain sockets vsock backend. I.e.
/// by implementing the `VsockBackend` trait, it abstracts away the gory details of translating
/// between AF_VSOCK and AF_UNIX, and presents a clean interface to the rest of the vsock
/// device model.
///
/// The vsock muxer has two main roles:
/// 1. Vsock connection multiplexer:
/// It's the muxer's job to create, manage, and terminate `VsockConnection` objects. The
/// muxer also routes packets to their owning connections. It does so via a connection
/// `HashMap`, keyed by what is basically a (host_port, guest_port) tuple.
/// Vsock packet traffic needs to be inspected, in order to detect connection request
/// packets (leading to the creation of a new connection), and connection reset packets
/// (leading to the termination of an existing connection). All other packets, though, must
/// belong to an existing connection and, as such, the muxer simply forwards them.
/// 2. Event dispatcher
/// There are three event categories that the vsock backend is interested it:
/// 1. A new host-initiated connection is ready to be accepted from the listening host Unix
/// socket;
/// 2. Data is available for reading from a newly-accepted host-initiated connection (i.e.
/// the host is ready to issue a vsock connection request, informing us of the
/// destination port to which it wants to connect);
/// 3. Some event was triggered for a connected Unix socket, that belongs to a
/// `VsockConnection`.
/// The muxer gets notified about all of these events, because, as a `VsockEpollListener`
/// implementor, it gets to register a nested epoll FD into the main VMM epolling loop. All
/// other pollable FDs are then registered under this nested epoll FD.
/// To route all these events to their handlers, the muxer uses another `HashMap` object,
/// mapping `RawFd`s to `EpollListener`s.
///
//! `VsockMuxer` is the device-facing component of the Unix domain sockets vsock backend. I.e.
//! by implementing the `VsockBackend` trait, it abstracts away the gory details of translating
//! between AF_VSOCK and AF_UNIX, and presents a clean interface to the rest of the vsock
//! device model.
//!
//! The vsock muxer has two main roles:
//!
//! ## Vsock connection multiplexer
//!
//! It's the muxer's job to create, manage, and terminate `VsockConnection` objects. The
//! muxer also routes packets to their owning connections. It does so via a connection
//! `HashMap`, keyed by what is basically a (host_port, guest_port) tuple.
//!
//! Vsock packet traffic needs to be inspected, in order to detect connection request
//! packets (leading to the creation of a new connection), and connection reset packets
//! (leading to the termination of an existing connection). All other packets, though, must
//! belong to an existing connection and, as such, the muxer simply forwards them.
//!
//! ## Event dispatcher
//!
//! There are three event categories that the vsock backend is interested it:
//! 1. A new host-initiated connection is ready to be accepted from the listening host Unix
//! socket;
//! 2. Data is available for reading from a newly-accepted host-initiated connection (i.e.
//! the host is ready to issue a vsock connection request, informing us of the
//! destination port to which it wants to connect);
//! 3. Some event was triggered for a connected Unix socket, that belongs to a
//! `VsockConnection`.
//!
//! The muxer gets notified about all of these events, because, as a `VsockEpollListener`
//! implementor, it gets to register a nested epoll FD into the main VMM epolling loop. All
//! other pollable FDs are then registered under this nested epoll FD.
//!
//! To route all these events to their handlers, the muxer uses another `HashMap` object,
//! mapping `RawFd`s to `EpollListener`s.
use std::collections::{HashMap, HashSet};
use std::fs::File;
use std::io::{self, Read};
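The two routing tables described above can be sketched as follows; this is a hypothetical simplification, not the real `VsockMuxer` (which stores `VsockConnection`s and `EpollListener`s under its own key type):

```rust
// Hypothetical illustration of the muxer's two HashMaps.
use std::collections::HashMap;
use std::os::unix::io::RawFd;

/// Connection key: essentially a (host_port, guest_port) tuple.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct ConnKey {
    local_port: u32,
    peer_port: u32,
}

struct ToyMuxer<Conn, Listener> {
    /// Packet routing: every packet that is not a connection request or a
    /// reset must belong to a connection already present here.
    conn_map: HashMap<ConnKey, Conn>,
    /// Event routing: pollable FDs registered under the muxer's nested
    /// epoll FD, mapped to whatever handles their events.
    listener_map: HashMap<RawFd, Listener>,
}

impl<Conn, Listener> ToyMuxer<Conn, Listener> {
    /// Forward a packet to its owning connection, if one exists.
    fn route(&mut self, local_port: u32, peer_port: u32) -> Option<&mut Conn> {
        self.conn_map.get_mut(&ConnKey { local_port, peer_port })
    }
}
```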


@@ -2,29 +2,29 @@
// SPDX-License-Identifier: Apache-2.0
//
/// `MuxerKillQ` implements a helper object that `VsockMuxer` can use for scheduling forced
/// connection termination. I.e. after one peer issues a clean shutdown request
/// (VSOCK_OP_SHUTDOWN), the concerned connection is queued for termination (VSOCK_OP_RST) in
/// the near future (herein implemented via an expiring timer).
///
/// Whenever the muxer needs to schedule a connection for termination, it pushes it (or rather
/// an identifier - the connection key) to this queue. A subsequent pop() operation will
/// succeed if and only if the first connection in the queue is ready to be terminated (i.e.
/// its kill timer expired).
///
/// Without using this queue, the muxer would have to walk its entire connection pool
/// (hashmap), whenever it needs to check for expired kill timers. With this queue, both
/// scheduling and termination are performed in constant time. However, since we don't want to
/// waste space on a kill queue that's as big as the connection hashmap itself, it is possible
/// that this queue may become full at times. We call this kill queue "synchronized" if we are
/// certain that all connections that are awaiting termination are present in the queue. This
/// means a simple constant-time pop() operation is enough to check whether any connections
/// need to be terminated. When the kill queue becomes full, though, pushing fails, so
/// connections that should be terminated are left out. The queue is not synchronized anymore.
/// When that happens, the muxer will first drain the queue, and then replace it with a new
/// queue, created by walking the connection pool, looking for connections that will be
/// expiring in the future.
///
//! `MuxerKillQ` implements a helper object that `VsockMuxer` can use for scheduling forced
//! connection termination. I.e. after one peer issues a clean shutdown request
//! (VSOCK_OP_SHUTDOWN), the concerned connection is queued for termination (VSOCK_OP_RST) in
//! the near future (herein implemented via an expiring timer).
//!
//! Whenever the muxer needs to schedule a connection for termination, it pushes it (or rather
//! an identifier - the connection key) to this queue. A subsequent pop() operation will
//! succeed if and only if the first connection in the queue is ready to be terminated (i.e.
//! its kill timer expired).
//!
//! Without using this queue, the muxer would have to walk its entire connection pool
//! (hashmap), whenever it needs to check for expired kill timers. With this queue, both
//! scheduling and termination are performed in constant time. However, since we don't want to
//! waste space on a kill queue that's as big as the connection hashmap itself, it is possible
//! that this queue may become full at times. We call this kill queue "synchronized" if we are
//! certain that all connections that are awaiting termination are present in the queue. This
//! means a simple constant-time pop() operation is enough to check whether any connections
//! need to be terminated. When the kill queue becomes full, though, pushing fails, so
//! connections that should be terminated are left out. The queue is not synchronized anymore.
//! When that happens, the muxer will first drain the queue, and then replace it with a new
//! queue, created by walking the connection pool, looking for connections that will be
//! expiring in the future.
use std::collections::{HashMap, VecDeque};
use std::time::Instant;
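The push/pop discipline described above can be sketched like this; the capacity and timeout below are arbitrary illustrations, not the values used by the real `MuxerKillQ`:

```rust
// Hypothetical illustration of an expiring kill queue.
use std::collections::VecDeque;
use std::time::{Duration, Instant};

const KILL_TIMEOUT: Duration = Duration::from_secs(1);
const CAPACITY: usize = 16;

struct ToyKillQ<K> {
    /// Connection keys, each paired with the time its kill timer expires.
    q: VecDeque<(K, Instant)>,
    /// True while every connection awaiting termination is known to be here.
    synced: bool,
}

impl<K> ToyKillQ<K> {
    fn new() -> Self {
        Self { q: VecDeque::with_capacity(CAPACITY), synced: true }
    }

    /// Schedule a connection for forced termination. If the queue is full,
    /// the entry is dropped and the queue is no longer synchronized.
    fn push(&mut self, key: K) {
        if self.q.len() >= CAPACITY {
            self.synced = false;
            return;
        }
        self.q.push_back((key, Instant::now() + KILL_TIMEOUT));
    }

    /// Constant-time check: pop only if the head's kill timer has expired.
    fn pop(&mut self) -> Option<K> {
        if self
            .q
            .front()
            .map_or(false, |(_, kill_at)| *kill_at <= Instant::now())
        {
            return self.q.pop_front().map(|(key, _)| key);
        }
        None
    }
}
```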


@@ -2,20 +2,20 @@
// SPDX-License-Identifier: Apache-2.0
//
/// `MuxerRxQ` implements a helper object that `VsockMuxer` can use for queuing RX (host -> guest)
/// packets (or rather instructions on how to build said packets).
///
/// Under ideal operation, every connection, that has pending RX data, will be present in the muxer
/// RX queue. However, since the RX queue is smaller than the connection pool, it may, under some
/// conditions, become full, meaning that it can no longer account for all the connections that can
/// yield RX data. When that happens, we say that it is no longer "synchronized" (i.e. with the
/// connection pool). A desynchronized RX queue still holds valid data, and the muxer will
/// continue to pop packets from it. However, when a desynchronized queue is drained, additional
/// data may still be available, so the muxer will have to perform a more costly walk of the entire
/// connection pool to find it. This walk is performed here, as part of building an RX queue from
/// the connection pool. When an out-of-sync is drained, the muxer will discard it, and attempt to
/// rebuild a synced one.
///
//! `MuxerRxQ` implements a helper object that `VsockMuxer` can use for queuing RX (host -> guest)
//! packets (or rather instructions on how to build said packets).
//!
//! Under ideal operation, every connection, that has pending RX data, will be present in the muxer
//! RX queue. However, since the RX queue is smaller than the connection pool, it may, under some
//! conditions, become full, meaning that it can no longer account for all the connections that can
//! yield RX data. When that happens, we say that it is no longer "synchronized" (i.e. with the
//! connection pool). A desynchronized RX queue still holds valid data, and the muxer will
//! continue to pop packets from it. However, when a desynchronized queue is drained, additional
//! data may still be available, so the muxer will have to perform a more costly walk of the entire
//! connection pool to find it. This walk is performed here, as part of building an RX queue from
//! the connection pool. When an out-of-sync is drained, the muxer will discard it, and attempt to
//! rebuild a synced one.
use std::collections::{HashMap, VecDeque};
use super::super::VsockChannel;
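The rebuild-from-pool behavior described above might look roughly like this; the capacity and helper names are illustrative, not the real `MuxerRxQ` API:

```rust
// Hypothetical illustration of a bounded RX queue rebuilt from the pool.
use std::collections::{HashMap, VecDeque};
use std::hash::Hash;

const CAPACITY: usize = 16;

struct ToyRxQ<K> {
    q: VecDeque<K>,
    /// False once the queue could no longer account for every connection
    /// with pending RX data (i.e. it is out of sync with the pool).
    synced: bool,
}

impl<K: Copy + Eq + Hash> ToyRxQ<K> {
    /// Build a synchronized queue by walking the whole connection pool and
    /// keeping only the connections that currently have RX data pending.
    fn from_conn_pool<C>(pool: &HashMap<K, C>, has_rx: impl Fn(&C) -> bool) -> Self {
        let mut q = VecDeque::with_capacity(CAPACITY);
        let mut synced = true;
        for (key, conn) in pool {
            if !has_rx(conn) {
                continue;
            }
            if q.len() >= CAPACITY {
                // We can no longer track every RX-pending connection.
                synced = false;
                break;
            }
            q.push_back(*key);
        }
        Self { q, synced }
    }

    /// Pop the next connection that has RX data to deliver.
    fn pop(&mut self) -> Option<K> {
        self.q.pop_front()
    }
}
```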